1 Human Directing 5 AIs Debating Bugs: Claude Agent Teams Explained

Reposted from 向阳乔木 (@vista8)’s Twitter.


Have you ever imagined AI working like a real team?

Not the “you ask, it answers” mode, but one where you state a requirement and a group of AIs discuss it among themselves, divide up the work, challenge each other’s viewpoints, and finally deliver a thoroughly reasoned result.

This sounds like science fiction, but Anthropic just made it real.


One Person Working vs A Team Working

Let me paint a picture.

Say your code has a bug—users report “the program exits immediately after starting.”

The traditional way: you ask AI to investigate, and it goes down one path—guess a cause, try it, if it doesn’t work, try another.

But what if you have an AI team?

You can simultaneously send 5 AIs to investigate 5 different hypotheses:

  • One checks memory issues
  • One checks config files
  • One checks dependency versions
  • One checks network connections
  • One checks the logging system

Even more interesting: these 5 AIs will debate each other.

One agent says, “I think it’s a memory issue.” Another counters, “No, your evidence is insufficient; I found an obvious config error in the logs.”

Just like a group of engineers discussing at a whiteboard, questioning each other, validating, and eventually converging on the real answer.
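The fan-out-then-converge pattern is easy to picture in code. This is only an illustrative sketch, not the actual Agent Teams API: the “teammates” here are plain Python functions that grep a log string for evidence, and the “debate” is reduced to discarding hypotheses with no support.

```python
from concurrent.futures import ThreadPoolExecutor

def investigate(hypothesis, logs):
    # Each hypothetical "teammate" checks the logs for evidence of its theory.
    keywords = {
        "memory": "OutOfMemory",
        "config": "missing config key",
        "dependencies": "version mismatch",
        "network": "connection refused",
        "logging": "log handler error",
    }
    return hypothesis, keywords[hypothesis] in logs

logs = "startup: missing config key 'db_url'; exiting"
hypotheses = ["memory", "config", "dependencies", "network", "logging"]

# Fan out: all five investigations run in parallel, one per hypothesis.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(lambda h: investigate(h, logs), hypotheses))

# The "debate", reduced to its simplest form: drop unsupported hypotheses.
supported = [h for h, found in results if found]
print(supported)  # ['config']
```

The real feature does the interesting part this sketch cannot: the teammates are full Claude instances that message each other and argue about whose evidence is stronger, rather than a one-shot filter.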

This is Claude Code’s new Agent Teams feature.


How Is This Different from Subagents?

You might ask: didn’t Claude Code already have subagents, which let the AI spawn multiple subtasks in parallel?

Yes, but that’s completely different.

Subagents are like scouts you send out

You give them a task, they finish and report back, end of mission. Scouts don’t chat with each other or question each other’s intel.

Agent Teams are like a real project group

Each member is an independent entity with their own context memory. They can message each other directly, hold meetings, refute each other’s points.

Comparison      Subagents               Agent Teams
Interaction     One-way reporting       Mutual communication
Context         Shared with parent      Independent
Lifecycle       Destroyed after task    Persistent
Collaboration   Parallel but isolated   Can challenge each other
Cost            Lower                   Higher

When to use which?

  • Simple tasks: Subagent—cheap and efficient
  • Complex tasks needing deep discussion and mutual challenging: Agent Teams

How to Use It: 3-Minute Quickstart

First, how to enable it. The feature is experimental and must be switched on manually in your settings file (~/.claude/settings.json):

{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
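If you’d rather script that edit than hand-edit JSON, a minimal sketch like this merges the flag in without clobbering existing keys. Note the `settings.json` path below is a placeholder for demonstration; for real use, point it at `~/.claude/settings.json`.

```python
import json
from pathlib import Path

# Placeholder path; in practice: Path.home() / ".claude" / "settings.json"
SETTINGS = Path("settings.json")

# Load existing settings if the file exists, otherwise start fresh.
settings = json.loads(SETTINGS.read_text()) if SETTINGS.exists() else {}

# Merge the experimental flag under the "env" key, preserving other entries.
settings.setdefault("env", {})["CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS"] = "1"

SETTINGS.write_text(json.dumps(settings, indent=2))
print(settings["env"]["CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS"])  # 1
```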

Then, just use natural language to create a team:

Create a team to review this PR. I want three reviewers:

  • One specializing in security vulnerabilities
  • One focused on performance impact
  • One checking test coverage

That’s it.

After creation, you’ll see a Team Lead and three Teammates. They share a task list, can message each other, and work on their own parts.

There’s a cool feature called delegate mode, toggled with Shift+Tab. When it’s on, the Team Lead only coordinates and does no hands-on work; everything is delegated to teammates. Like a real project manager.


My Favorite Use: Scientific Debate for Debugging

There’s a case in the docs that caught my attention.

User reports the app exits immediately after startup. You’re not sure why.

Traditional method: guess one, try one—might take half a day.

The Agent Teams approach:

Users report the app exits after sending one message. Create 5 teammates, each investigating a different hypothesis. Have them communicate, try to refute each other’s theories—like a scientific debate.

Then you can grab a coffee and watch 5 AIs “argue” with each other.

Imagined conversation:

  • A says: “I think the exit code is wrong.”
  • B says: “No, I checked the code, exit logic is fine, but I found message queue anomalies.”
  • C says: “Wait, my logs show network timeout triggered this…”

Eventually, they converge on the most convincing conclusion.

Way more efficient than staring at the screen guessing alone.


Of Course, There Are Gotchas

This is experimental. Official docs clearly state several limitations:

Limitation            Description
No session recovery   Close the terminal and the teammates are gone
One team at a time    Can’t run multiple project groups at once
Higher cost           Each teammate is a separate Claude instance
File conflicts        Two teammates editing the same file causes issues
Requires patience     May need multiple attempts to get it right

Official advice: Beginners should start with “no-code” tasks—like having the team do code reviews or research analysis. Get familiar before attempting multi-agent collaborative development.


My Thoughts

This feature shows me the next stage of AI-assisted programming.

Before, using AI was essentially “you ask, it answers.” AI is a super assistant, but passive.

Agent Teams gives AI the ability to collaborate actively. The agents can divide up the work, discuss among themselves, and challenge each other. You’re more like a director or product manager: state the requirements, then watch the AI team deliver.

This makes me wonder: Will future software development become “1 human + N AIs” as standard?

Humans define problems, make key decisions, set direction; AI teams research, implement, test, and review each other.

Sounds sci-fi, but Agent Teams is already taking the first step in that direction.


Quick Start

  1. Add "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1" under the "env" key in ~/.claude/settings.json
  2. Open Claude Code, type “create a 3-person code review team”
  3. Avoid having multiple teammates edit the same file
  4. Give teammates enough context (they don’t inherit your conversation history)

Official docs: Claude Agent Teams


Original author: 向阳乔木 (@vista8)

If you found this helpful, consider buying me a coffee to support more content like this.
