Build Like a Team: Leveraging Sub-Agents for Software Development

I've recently worked on a few small side projects, some of them quite simple, like a calculator app. Nothing complicated, just a few phases: basic arithmetic, then scientific functions (just like the iPhone app), and eventually a graphing calculator (oof, it's been a while since I've seen my TI-86). Standard stuff... but I built it all with AI sub-agents, each assigned a specific role.

What happened next taught me something I didn't expect: good software paradigms don't disappear when you bring in AI. They matter even more.

What are Sub-Agents, Exactly? 

If you've been following the AI space, you've seen the term "agentic" thrown around a lot. I wrote about this when covering MCPs, but the short version is this: an agent is an AI that can reason, take actions, and work toward a goal with some autonomy.

Sub-agents take this a step further. Instead of one AI trying to do everything, the work is broken down and distributed. Just like a real engineering team, AI can take a large task, break it down, and distribute the work. Each sub-agent can even have a defined responsibility, scope, and set of tools! 
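In code, the pattern looks something like this: a minimal sketch in plain Python, where `call_model` is a hypothetical stand-in for a real LLM API call, just to make the structure concrete.

```python
# Sketch of the delegation pattern. `call_model` is a hypothetical
# stand-in for a real LLM call; in practice each role would get its
# own system prompt, scope, and tool set.

def call_model(role: str, prompt: str) -> str:
    # Hypothetical stub so the orchestration structure is visible.
    return f"[{role}] handled: {prompt}"

ROLES = {
    "frontend": "Owns the UI and user experience.",
    "backend": "Designs the APIs and business logic.",
    "qa": "Writes tests and checks coverage.",
}

def team_lead(task: str) -> list[str]:
    """Break a task into subtasks and delegate each to a role.

    The lead never writes code itself; it only plans and routes.
    """
    subtasks = [
        ("frontend", f"Build the UI for: {task}"),
        ("backend", f"Implement the logic for: {task}"),
        ("qa", f"Write tests for: {task}"),
    ]
    return [call_model(role, prompt) for role, prompt in subtasks]

results = team_lead("basic arithmetic phase")
```

The point of the sketch is the shape, not the stub: the lead decomposes, the specialists execute, and each specialist only ever sees its own slice of the work.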

For my calculator project, I defined a relatively simple team: 

  • Team Lead - Plans the work, breaks it into subtasks, delegates. Never writes code directly.
  • Frontend Developer - Owns the UI and user experience.
  • Backend Developer - Designs the APIs and business logic.
  • QA Engineer - Writes tests and ensures good code coverage before anything ships.

Sound familiar? We have all the right makings of a basic engineering team.
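In Claude Code, each of these roles can be defined as a sub-agent file under `.claude/agents/`: a markdown file with a short YAML frontmatter giving the agent a name, a description, and a tool allowlist. A sketch of what the QA role might look like (check the current Claude Code docs for the exact fields, as they may evolve):

```markdown
---
name: qa-engineer
description: Writes tests and checks coverage before anything ships.
tools: Read, Write, Bash
---

You are the QA engineer on a small team. For every change you review,
write tests first, flag missing coverage, and report gaps back to the
team lead rather than silently fixing them yourself.
```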

Why Specialization Matters

One of the things that's tempting when using AI is to just ask it to "build the whole thing". And sure, you might get something. But you'll also get the AI equivalent of a developer who's been up for 30 hours. It will cut corners, skip tests, and sometimes make architectural decisions that don't scale.

Assigning sub-agents to specific roles forces a modularity that mirrors real software design. The backend engineer can focus on API design and system stability; you could even hand it an entire software design textbook as context if you wanted (though that's probably too much). And the QA agent isn't just rubber-stamping work: it's finding gaps and feeding them back to the other "team members".

This maps directly to one of the key principles I learned at Google: define the interface first. Clear interfaces are a big part of what drives velocity at strong engineering organizations.

The Team Lead Problem

Here's something I didn't anticipate: the team lead role is surprisingly hard to get right.

A good tech lead doesn't just assign tickets: they understand the whole system, anticipate dependencies, and make judgment calls about what to build next. Getting the most out of an agentic setup starts with a solid brief. In Claude Code, that lives in the CLAUDE.md file (think of it as a project brief). If you have a PRD, that's a great starting point to build from, but it also helps to include a bit of solutioning (which languages to use, any specific frameworks, and so on). The more context you give upfront, the less your agents have to guess. Claude can even help you iterate on and build out the brief itself.
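For the calculator project, the brief might start out something like this (an illustrative sketch; the stack choices here are assumptions, not a recommendation):

```markdown
# Calculator — Project Brief

## Goal
A calculator app built in three phases: basic arithmetic,
scientific functions, then graphing.

## Stack
- TypeScript + React for the UI
- Vitest for unit tests

## Working agreements
- Define each module's interface before implementing it.
- The QA agent signs off on test coverage before a phase is done.
```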

The lesson? Orchestration matters as much as your individual agents. This mirrors the real world: a team with great engineers and a weak vision still ships slowly. Spending time to iterate on the plan, refine it, and then build still matters... Claude just helps you do it faster!

What This Means for How We Build

I think we're at an inflection point. AI coding tools aren't replacing software engineers, but they are changing what it means to lead a software project. The skills that matter are shifting towards:

  • Defining clear interfaces and roles
  • Knowing when to delegate or step in
  • Writing good requirements, not just code

I find it powerful that engineers today can collaborate with AI tools and, with the right structure, still ensure the quality of the output.

The calculator project reinforced something I believe deeply: the paradigms we developed as an industry exist for good reasons. API design, test coverage, and modularity aren't relics of a pre-AI world. They're the vocabulary that makes delegation possible, whether you're delegating to a human or an agent.

The tools are changing, but the principles aren't.

Have questions about multi-agent setups or want to share how you're structuring AI-assisted projects? Drop a comment or reach out — I'd love to hear what you're building.