
Everything I Learned Managing 82 Employees (And How I Applied It to AI Agents)

I ran a marketing agency with 82 people for years. 7-figure revenue. Then I shut it down and rebuilt with 8 AI agents. Every management lesson transferred directly.


I ran a marketing agency with 82 people for years. 7-figure revenue. Clients in multiple countries. I learned every lesson the hard way.

Then I shut it all down.

Three years later, I started building with AI agents. Today I run a team of 8 agents that handles my entire operation. The crazy part? Every single management lesson from the agency transferred directly.

This is the complete breakdown. What I learned managing humans, what broke when I applied it to AI agents, and what actually works now.

Lesson 1: Vague Roles Create Chaos

At the agency, my biggest early mistake was hiring generalists and telling them to "help out." Nobody knew who owned what. Tasks fell through the cracks. Two people would work on the same thing while something critical got ignored.

The fix was obvious: every person gets a specific title, a specific set of responsibilities, and clear boundaries on what they don't touch.

With AI agents, same problem on day one.

I set up my first agent and gave it a system prompt that basically said "you're my assistant, help me with everything." It tried to do everything. Did nothing well. Couldn't prioritize. Couldn't say no to tasks outside its scope.

So I split it into 8 agents with specific roles:

[Image: 8 specialized AI agents, each in their own lane]

Q is the COO. She doesn't do the actual work. She reviews other agents' output, trains them when they mess up, and upgrades the SOPs. At the agency, this person cost me $120K a year. With AI, she runs 24/7.

Bolt is IT and infrastructure. Docker, deployments, server issues. He doesn't touch content. He doesn't touch project management. He fixes things that break.

Wave is the project manager for my wellness brand. Timelines, content calendars, deliverable tracking. She knows her project inside out and doesn't get distracted by anything else.

Vibe writes code. Landing pages, scripts, tools, automations. When I need something built, I talk to Vibe. Nobody else.

Atlas handles content and research. Market research, competitor analysis, drafts, reports. If it involves writing or researching, it's Atlas.

Sage is strategy. Big picture thinking. Frameworks. Helping me pressure-test decisions before I make them.

Pixel is the dedicated platform developer. He owns the codebase for our SaaS product, codes features, visually verifies every change in a browser, and debugs issues in real time.

Aria is a specialist. She handles medical content and research for a separate project. Domain expertise that none of the other agents have.

Eight roles. Zero overlap. Everyone knows their lane.

This is basic management. But most people setting up AI agents skip it completely. They set up one agent and dump everything on it. That's like hiring one person and making them your marketer, developer, designer, accountant, and project manager.

It didn't work with humans. It doesn't work with AI agents.

Lesson 2: Written Processes Beat Tribal Knowledge

The agency ran on SOPs. Standard Operating Procedures. Every task had a documented step-by-step process.

Why? Because when someone quit, the process stayed. When we onboarded a new hire, they could follow the SOP and produce decent work on day one. The knowledge lived in the document, not in someone's head.

Most people give their agents instructions in conversation and hope they remember. That's tribal knowledge. It's what kills agencies when key employees leave, and it's what kills agent performance when sessions reset.

I write SOPs for every repeatable task. The structure I use:

  • Role — Who this SOP is for and what they own
  • Trigger — What kicks off the task
  • Steps — Numbered. Specific. No room for interpretation
  • Output — Exactly what the deliverable looks like when it's done
  • Escalation — When to stop and flag me instead of guessing

That last one is critical. At the agency, the worst damage always came from employees who guessed instead of asking. Same with AI agents. An agent that confidently does the wrong thing is worse than one that stops and says "I'm not sure about this."
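
That structure is concrete enough to store as data instead of loose prose. A minimal sketch in Python; the `SOP` type and all the example values are illustrative, not my actual tooling:

```python
from dataclasses import dataclass

@dataclass
class SOP:
    """One Standard Operating Procedure an agent can follow verbatim."""
    role: str          # who this SOP is for and what they own
    trigger: str       # what kicks off the task
    steps: list[str]   # numbered, specific, no room for interpretation
    output: str        # exactly what the deliverable looks like when done
    escalation: str    # when to stop and flag me instead of guessing

weekly_report = SOP(
    role="Atlas: content and research",
    trigger="Every Friday at 09:00, or on request",
    steps=[
        "1. Pull this week's published posts",
        "2. Summarize engagement numbers per post",
        "3. Flag any post below the 30-day average",
    ],
    output="A one-page markdown report in the shared workspace",
    escalation="If any data source is unreachable, stop and notify me",
)
```

Keeping SOPs structured like this also makes the review step later mechanical: a deliverable either matches the documented output and steps, or it doesn't.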

Lesson 3: The Feedback Loop Is Everything

Here's what nobody tells you about SOPs: the first version always produces garbage.

At the agency, I'd write a process, hand it to a new hire, and the output would be... not great. Not because the person was bad. Because my process wasn't clear enough. I'd assumed knowledge they didn't have. I'd skipped a step that seemed obvious to me.

The fix was never "hire a better person." It was always "write a better SOP."

[Image: The daily coaching feedback loop]

Build. Execute. Look at the output. Find where it went wrong. Fix the process. Run it again.

With AI agents, this loop is identical. My first SOPs for Atlas produced content that was generic and vague. Instead of blaming the model, I looked at the SOP. Too many assumptions. Not enough specifics. Didn't include examples of what good output looks like.

Three iterations later, same agent, same model, dramatically better output.

The difference between people who give up on AI agents after a week and people who build real systems with them? The second group iterates on their processes. The first group blames the AI.

At the agency, getting a new hire to full productivity took 2-3 months of SOP refinement. With AI agents, it takes 3-4 cycles. Days, not months. And once the SOP is fixed, it stays fixed forever.

Lesson 4: Give Them Real Tools

You wouldn't hire a graphic designer and not give them Photoshop. You wouldn't hire a developer and not give them access to the codebase.

But that's exactly what most people do with AI agents. They set up an agent and expect it to do real work with no email, no browser, no credentials, no access to the actual systems it needs.

Every one of my 8 agents has:

  • Their own email address. Not a shared inbox. Their own account.
  • Their own browser. With their own session, cookies, and logged-in state.
  • Their own password manager. Credentials stored properly.
  • Their own workspace. Files, notes, memory, context. All separate.
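
The per-agent setup is repetitive enough to script. A sketch of the provisioning step, assuming nothing about the actual providers; the agent names match the team above, but the `example.com` domain and all naming conventions are placeholders:

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass(frozen=True)
class AgentWorkspace:
    """Per-agent accounts and storage. Nothing is shared between agents."""
    name: str
    email: str            # dedicated address, not a shared inbox
    browser_profile: str  # separate session, cookies, logged-in state
    vault: str            # dedicated credential store
    workspace: Path       # files, notes, memory, context

def provision(name: str, domain: str = "example.com") -> AgentWorkspace:
    """One-time setup done for each agent, like IT onboarding for a hire."""
    slug = name.lower()
    return AgentWorkspace(
        name=name,
        email=f"{slug}@{domain}",
        browser_profile=f"profile-{slug}",
        vault=f"vault-{slug}",
        workspace=Path("agents") / slug,
    )

team = [provision(n) for n in
        ("Q", "Bolt", "Wave", "Vibe", "Atlas", "Sage", "Pixel", "Aria")]
```

The point of the separation is the same as at the agency: when one agent's credentials or state get into a bad state, the other seven keep working.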

This is expensive to set up. Not in money, but in time. Creating 8 email accounts, 8 password vaults, configuring 8 browser profiles. It took a full day.

But at the agency, onboarding a new employee also took a full day. IT setup, account creation, tool access, training materials. Nobody complains about that because everyone knows employees need tools to work.

AI agents are the same. If you want real output, give them real access.

Lesson 5: Someone Has to Check the Work

The most expensive lesson I learned at the agency: unsupervised work degrades over time.

In the beginning, a new hire is careful. Follows the process. Asks questions. Produces good work. Three months in, they start cutting corners. Taking shortcuts. Drifting from the SOP. Not out of laziness, just human nature.

We solved this with weekly reviews. A manager checks the output against the SOP. Catches drift early. Gives feedback.

With AI agents, the same thing happens. An agent follows the SOP perfectly for a while, then starts hallucinating. Makes assumptions. Fills in gaps with made-up information instead of flagging uncertainty.

My solution: Q.

Q is my COO agent. Her only job is quality control. She reviews the other agents' output. She checks it against the SOPs. When something is off, she either fixes it or flags it for me.

This is the part most AI setups are missing. Everyone builds agents that DO work. Almost nobody builds an agent that CHECKS work.

At the agency, the ratio was roughly 1 manager per 8-10 people. With AI agents, I have 1 reviewer for 7 workers. Similar ratio. Similar results.
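
Q's review pass is mechanically simple: run every deliverable through the checks its SOP defines and collect the misses. A sketch; the two checks here are stand-ins for real SOP criteria:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Deliverable:
    agent: str
    content: str

# A check returns a description of the issue, or None if the work passes.
Check = Callable[[Deliverable], Optional[str]]

def review(item: Deliverable, checks: list[Check]) -> list[str]:
    """Q's quality pass: run every SOP check and collect the issues."""
    return [issue for check in checks if (issue := check(item)) is not None]

# Illustrative checks. Real ones would come from each agent's SOP.
checks: list[Check] = [
    lambda d: "empty deliverable" if not d.content.strip() else None,
    lambda d: "missing source links" if "http" not in d.content else None,
]

draft = Deliverable("Atlas", "Q3 competitor summary, sources: http://example.com")
issues = review(draft, checks)
# No issues: the work goes out. Otherwise Q fixes it or flags it for me.
```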

Lesson 6: Trust Is Earned, Not Granted

At the agency, I never gave a new hire full authority on day one. They'd start with small tasks under supervision. Over weeks, they'd prove themselves and earn more autonomy.

I do the same with my agents.

New agents start supervised. They do the work but nothing goes out without Q reviewing it first. When Atlas writes a draft, Q checks it before it reaches me. When Vibe builds something, Q tests it before it's deployed.

As an agent proves reliable, I loosen the guardrails. Bolt now handles certain infrastructure fixes autonomously because he's earned that trust through consistent output. Q still gets a notification, but she doesn't block the work anymore.

If quality drops, the autonomy shrinks back. Same as the agency. Trust takes weeks to build and one bad delivery to lose.
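
The trust gradient is easy to make explicit. A sketch of the promote-and-reset logic; the three levels mirror what I described above, but the ten-delivery threshold is illustrative, not a fixed rule:

```python
from enum import IntEnum

class Autonomy(IntEnum):
    SUPERVISED = 0   # nothing ships without Q reviewing it first
    NOTIFY = 1       # ships immediately; Q still gets a notification
    AUTONOMOUS = 2   # fully trusted for this task type

def adjust(level: Autonomy, streak: int,
           passed_review: bool) -> tuple[Autonomy, int]:
    """One review cycle: trust builds slowly, collapses on one bad delivery."""
    if not passed_review:
        return Autonomy.SUPERVISED, 0       # any failure resets the guardrails
    streak += 1
    if streak >= 10 and level < Autonomy.AUTONOMOUS:
        return Autonomy(level + 1), 0       # promoted after a consistent run
    return level, streak

# Bolt after a long run of clean infrastructure fixes:
level, streak = adjust(Autonomy.NOTIFY, 9, passed_review=True)
```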

Lesson 7: Build a Network, Not Solo Agents

This is the one most people miss completely.

At the agency, employees didn't work in isolation. The designer would ask the strategist for direction. The project manager would ping the copywriter for content. When something technical broke, everyone knew to call IT.

My AI setup works the same way. It's not 8 independent agents. It's a network.

[Image: Parallel execution across departments]

Q is the hub. She trains everyone, reviews everything, upgrades SOPs. But the agents also work with each other directly.

When any agent hits a technical problem, they know to contact Bolt. They don't try to fix servers themselves. They don't come to me with error logs. They ping Bolt, he handles it, and they keep working.

Wave manages the wellness brand and needs content. She doesn't wait for me to assign Atlas. She reaches out to Atlas directly: "I need 3 social posts for the product launch by Thursday." Atlas delivers. Wave reviews. I find out about it in the morning briefing.

This isn't automation. This is organizational design.

The moment your agents start coordinating with each other without you in the middle, that's when the system starts to compound.
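
Under the hood, the coordination is just a routing table: each request type has exactly one owner, and Q is the fallback. A sketch; the agent names come from the roster above, and the message format is illustrative:

```python
# Which agent owns which kind of request. One owner per lane, no overlap.
ROUTES = {
    "infrastructure": "Bolt",
    "content": "Atlas",
    "code": "Vibe",
    "strategy": "Sage",
}

def route(kind: str, request: str) -> str:
    """Agents contact each other directly; Q is only the fallback hub."""
    owner = ROUTES.get(kind, "Q")
    return f"to={owner}: {request}"

# Wave needs content: she pings Atlas directly, not me.
msg = route("content", "3 social posts for the product launch by Thursday")
```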

Lesson 8: Give Them a Brain

Every lesson so far is useless if your agents forget everything overnight.

This was my biggest frustration. I'd spend an hour working through a strategy with Q. The next session, gone. Atlas would write content that contradicted a decision we made two days ago because he had no idea it happened. Bolt would fix the same problem twice because he didn't remember fixing it the first time.

SOPs help. Shared files help. But the core problem is that AI agents are stateless. Every session starts from zero. The institutional knowledge that makes a real organization run, all the decisions, beliefs, and context that everyone "just knows," doesn't exist.

At my agency, when someone new joined, they'd absorb the culture through osmosis. Months of overhearing conversations, sitting in meetings, reading old documents. Over time they'd just know how we think and why we make the decisions we make.

AI agents can't do that. Unless you build them a brain.

So I built one. Instead of dumping raw text into files and hoping agents find what they need, I crystallize knowledge into structured snapshots. Not summaries. Not transcripts. Actual structured knowledge with types, confidence scores, connections to related knowledge, and semantic search.

When Atlas needs to write content, he doesn't start from zero anymore. He searches the brain and finds my actual beliefs about the topic. The real stories. The real numbers. The decisions I've made and why.

Now every agent in the network has access to the same brain. When one learns something, all of them know it. When I make a decision, it gets crystallized and becomes part of the institutional knowledge.

This is what was missing. Not smarter models. Not better prompts. Persistent, structured, searchable knowledge that makes the whole team smarter over time.
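
A minimal sketch of that brain in Python. Real semantic search would use embeddings; plain keyword overlap weighted by confidence stands in for it here, and every snapshot shown is illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    """One crystallized piece of institutional knowledge."""
    kind: str                  # "decision", "belief", "fact", ...
    text: str
    confidence: float          # how sure we are this still holds
    related: list[str] = field(default_factory=list)

class Brain:
    """Shared store every agent searches before starting from zero."""

    def __init__(self) -> None:
        self.snapshots: list[Snapshot] = []

    def crystallize(self, snap: Snapshot) -> None:
        self.snapshots.append(snap)

    def search(self, query: str, k: int = 3) -> list[Snapshot]:
        # Keyword overlap scaled by confidence; embeddings in a real version.
        words = set(query.lower().split())
        scored = [(len(words & set(s.text.lower().split())) * s.confidence, s)
                  for s in self.snapshots]
        ranked = sorted(scored, key=lambda pair: -pair[0])
        return [s for score, s in ranked if score > 0][:k]

brain = Brain()
brain.crystallize(Snapshot("decision", "launch the wellness brand in Q3", 0.9))
brain.crystallize(Snapshot("belief", "long posts outperform short posts", 0.7))
hits = brain.search("when do we launch the wellness brand")
```

When Atlas queries the brain before drafting, the Q3 launch decision comes back first, so his content can't contradict it the way it used to.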

Lesson 9: The Problems Change Shape, Not Size

When I went from 82 employees to 8 AI agents, I expected the problems to disappear.

They didn't. They changed shape.

People problems became compute problems. Instead of managing sick days and vacation schedules, I'm managing rate limits. Had to stagger agent work schedules like shift rotations at a factory.

Communication problems became context problems. Instead of "Sarah didn't tell Mike about the design change," it's "Atlas doesn't know about the decision Q and I made yesterday."

Management overhead became SOP overhead. Instead of spending time in 1-on-1s and performance reviews, I spend time rewriting SOPs and refining processes. Different work. Similar time commitment, at least in the beginning.

The total cost dropped from 82 salaries to $550 a month. But the management thinking? Nearly identical.

Anyone who tells you AI agents are "set it and forget it" is lying. It's a team. You manage it like a team. It just costs dramatically less and improves dramatically faster.

The Part Nobody Else Can Tell You

I built this entire network while moving to a new house.

My wife is a doctor. She was on shift. I had a 4-year-old and a 1-year-old. I was packing boxes, managing movers, painting walls, closing out the old apartment.

And I was building the entire infrastructure from my phone.

Voice messages on Telegram. Each agent has their own voice. I'd send Q a message while driving. Assign Bolt a fix while painting. Ask Atlas for a research summary while carrying my kid with one hand.

The whole system was built in about a week. From my phone. From voice. While my house was literally in boxes around me.

That's not a flex. That's the point.

If you've managed people, you already know how to manage AI agents. The thinking is the same. Roles. Processes. Tools. Accountability. Trust gradients. Network design. A shared brain. Feedback loops.

The skill that makes this work isn't technical. It's operational.

And if you've ever run a team, you already have it.
