A single-agent prompt tells one AI what to do. A multi-agent prompt defines a system: roles, responsibilities, coordination rules, and a shared understanding of what the final output should look like. Getting this right in Claude Code is the difference between a pile of disconnected paragraphs and a coherent deliverable that reads like it came from a well-run team.
This guide covers the specific techniques for writing multi-agent prompts in Claude Code. If you are familiar with basic prompt engineering but have struggled to get consistent results from agent teams, the patterns here will close that gap.
Every agent in a multi-agent team needs a prompt that answers four questions: Who are you? What do you do? What do you not do? What does your output look like?
Start each agent's prompt with a tight identity statement. This is not just a name -- it is a scope definition.
Weak identity:
You are a research agent.
Strong identity:
You are a B2B SaaS Competitive Intelligence Analyst. You specialize in identifying direct competitors, mapping their feature sets, analyzing their pricing models, and estimating their market position. Your analysis covers companies with $1M-$100M ARR in the enterprise software space.
The strong version constrains the agent's focus. When paired with other agents in a team, these constraints prevent overlap and ensure each agent stays in its lane.
Explicit exclusions are just as important as inclusions. Without them, agents will expand into neighboring territory and produce redundant work.
You do NOT cover: market sizing (handled by the Market Sizing Analyst), customer sentiment analysis (handled by the Voice of Customer Analyst), or strategic recommendations (handled by the Strategy Synthesizer). If your research surfaces information relevant to those domains, note it briefly but do not elaborate.
This pattern works because Claude respects explicit boundaries. When an agent encounters information outside its scope, the negative scope instruction tells it to flag the finding briefly rather than chase it.
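The identity-plus-exclusions pattern can be sketched as a small prompt builder. This is a hypothetical helper, not part of Claude Code; `build_agent_prompt` and its parameters are illustrative names:

```python
def build_agent_prompt(identity: str, covers: list[str],
                       excludes: dict[str, str]) -> str:
    """Assemble an agent prompt from a tight identity statement,
    an explicit positive scope, and a named negative scope."""
    lines = [identity, "", "You cover: " + "; ".join(covers) + "."]
    if excludes:
        # Name the owner of each excluded domain so the agent knows
        # the work is handled elsewhere, not simply forbidden.
        handled = ", ".join(f"{topic} (handled by the {owner})"
                            for topic, owner in excludes.items())
        lines += ["", f"You do NOT cover: {handled}. If your research "
                      "surfaces information relevant to those domains, "
                      "note it briefly but do not elaborate."]
    return "\n".join(lines)

prompt = build_agent_prompt(
    identity="You are a B2B SaaS Competitive Intelligence Analyst.",
    covers=["direct competitors", "feature sets", "pricing models"],
    excludes={"market sizing": "Market Sizing Analyst",
              "customer sentiment": "Voice of Customer Analyst"},
)
```

Keeping the exclusions in data rather than hand-written prose makes it easy to keep every agent's negative scope in sync with its teammates' positive scopes.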
Each agent needs to know exactly what its deliverable looks like. Specify structure, length, format, and required sections.
Your output is a Competitive Landscape Report in markdown format. Required sections:
- Executive Summary (150-200 words)
- Competitor Profiles (one subsection per competitor, minimum 5 competitors)
- Feature Comparison Matrix (markdown table, minimum 8 feature dimensions)
- Pricing Analysis (by tier, with annual and monthly pricing where available)
- Market Positioning Map (describe each competitor's positioning in 2-3 sentences)
- Sources (minimum 8 citations with URLs)
Without this level of detail, agent output varies wildly between runs. With it, you get consistent, comparable deliverables every time.
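A structured output spec also makes the deliverable mechanically checkable. A minimal sketch, assuming the agent's report is markdown with `##`/`###` headers (the function name and section list mirror the example above but are otherwise hypothetical):

```python
import re

REQUIRED_SECTIONS = [
    "Executive Summary", "Competitor Profiles", "Feature Comparison Matrix",
    "Pricing Analysis", "Market Positioning Map", "Sources",
]

def missing_sections(report_md: str) -> list[str]:
    """Return the required section names absent from a markdown report."""
    # Collect every ## or ### header line in the draft.
    headers = re.findall(r"^#{2,3}\s+(.+)$", report_md, flags=re.MULTILINE)
    return [s for s in REQUIRED_SECTIONS
            if not any(s in h for h in headers)]

draft = "## Executive Summary\n...\n## Pricing Analysis\n..."
gaps = missing_sections(draft)  # sections the agent still needs to produce
```

Running a check like this between agent runs catches incomplete deliverables before they propagate downstream.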
The coordination layer is what transforms independent agents into a functioning team. In Claude Code, you express coordination through the prompt of a supervisor agent or through explicit instructions in each agent's prompt about how to handle inputs and outputs.
When agents work in a pipeline, each one needs to know what it receives and what it passes forward.
For the upstream agent (producer):
After completing your analysis, format your output as a structured document with clearly labeled sections. The next agent in the pipeline (Strategy Analyst) will use your output as their primary input. Ensure all data points include sources so the downstream agent can verify claims.
For the downstream agent (consumer):
You will receive a Competitive Landscape Report from the Competitive Intelligence Analyst. Begin by reviewing the report for completeness. If any section is missing or contains fewer entries than specified, note this gap in your analysis. Base your strategic recommendations on the data provided -- do not invent competitor data or market statistics.
This two-sided contract prevents the most common multi-agent failure: downstream agents hallucinating data because the upstream agent did not provide it.
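The consumer side of the contract can be expressed as a completeness check the downstream agent (or your orchestration script) runs before analysis begins. A sketch with hypothetical names; the section minimums come from the output spec above:

```python
def check_handoff(report: dict, contract: dict) -> list[str]:
    """Verify upstream output against the downstream agent's
    input contract; return a list of gaps to note, not invent around."""
    problems = []
    for section, minimum in contract.items():
        entries = report.get(section)
        if entries is None:
            problems.append(f"missing section: {section}")
        elif len(entries) < minimum:
            problems.append(
                f"{section}: {len(entries)} entries, expected >= {minimum}")
    return problems

contract = {"competitor_profiles": 5, "sources": 8}
upstream = {"competitor_profiles": ["A", "B", "C"], "sources": ["u1"] * 8}
issues = check_handoff(upstream, contract)
```

Feeding `issues` into the downstream agent's prompt ("note these gaps in your analysis") keeps it from silently filling holes with invented data.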
When multiple agents work simultaneously on independent tasks, the coordination challenge shifts from handoffs to synthesis. The key prompt is for the agent that merges the parallel outputs.
You will receive outputs from three analysts: Competitive Intelligence, Market Sizing, and Customer Sentiment. Your job is to synthesize these into a unified Strategic Assessment. Where analyst findings agree, state the consensus clearly. Where they conflict, present both perspectives and explain the likely reason for the discrepancy. Do not simply concatenate the reports -- produce a new document that tells a coherent story.
The instruction to identify conflicts rather than ignore them is critical. Without it, synthesis agents tend to cherry-pick from each input, producing an output that sounds confident but papers over genuine disagreements in the data.
When using a supervisor agent to coordinate workers, the supervisor's prompt needs a roster of available agents and guidelines for when to use each one.
You are a Project Coordinator managing a research team. Your available team members are:
- Competitive Analyst: Identifies and profiles competitors. Use when the task involves understanding the competitive landscape.
- Market Researcher: Sizes markets and identifies trends. Use when the task requires TAM/SAM/SOM calculations or growth projections.
- Financial Analyst: Builds financial models and projections. Use when the task requires revenue modeling, unit economics, or investment analysis.
- Report Writer: Produces executive-ready documents. Always use as the final step to ensure consistent formatting and tone.
For each subtask, specify: which team member handles it, what their input is, and what output you expect. Review each team member's output before passing it to the next phase. If output quality is insufficient, request a revision with specific feedback.
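If you generate the supervisor's prompt programmatically, the roster is naturally a data structure rendered into the prompt. A hypothetical sketch (none of these names are Claude Code APIs):

```python
from dataclasses import dataclass

@dataclass
class TeamMember:
    name: str
    role: str
    use_when: str

ROSTER = [
    TeamMember("Competitive Analyst", "Identifies and profiles competitors",
               "understanding the competitive landscape"),
    TeamMember("Market Researcher", "Sizes markets and identifies trends",
               "TAM/SAM/SOM calculations or growth projections"),
    TeamMember("Financial Analyst", "Builds financial models and projections",
               "revenue modeling, unit economics, or investment analysis"),
    TeamMember("Report Writer", "Produces executive-ready documents",
               "the final formatting and tone pass"),
]

def roster_block() -> str:
    """Render the roster as the supervisor prompt's team listing."""
    return "\n".join(
        f"- {m.name}: {m.role}. Use when the task involves {m.use_when}."
        for m in ROSTER)
```

Keeping the roster in one place means adding or rescoping an agent updates the supervisor's delegation guidelines automatically.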
Give the agent team a "project brief" at the top of the prompt that describes the final deliverable the same way a client would describe it to a consulting team.
PROJECT BRIEF: Produce a Market Entry Assessment for the Nordic fintech market. The audience is a Series B startup CEO deciding whether to expand internationally. The document should be 8-12 pages, cover market opportunity, competitive landscape, regulatory requirements, and go-to-market options. It should end with a clear go/no-go recommendation supported by the analysis.
This brief sits above the individual agent prompts and ensures every agent understands the end goal, even if they are only responsible for one section.
Specify exact acceptance criteria for the output. This is particularly useful when you need to evaluate whether an agent team's output meets a quality bar.
ACCEPTANCE CRITERIA:
- Market size data includes both top-down and bottom-up estimates
- Minimum 5 competitors profiled with pricing data
- Regulatory section covers at least 3 Nordic countries
- Financial projections include best-case, base-case, and worst-case scenarios
- All factual claims include a source citation
- Executive summary is readable in under 2 minutes (approximately 400 words)
When these criteria are explicit in the prompt, agents self-check their output against the list before finalizing. This reduces the need for revision cycles.
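The same list doubles as a programmatic gate if you represent each criterion as a predicate over a structured summary of the draft. A minimal sketch under the assumption that your orchestration script extracts these fields from the output; all names are illustrative:

```python
# Each criterion maps to a predicate over a structured draft summary.
CRITERIA = {
    "min_competitors": lambda doc: doc["competitors"] >= 5,
    "scenario_projections":
        lambda doc: {"best", "base", "worst"} <= set(doc["scenarios"]),
    "summary_length": lambda doc: doc["summary_words"] <= 400,
}

def failed_criteria(doc: dict) -> list[str]:
    """Return the names of acceptance criteria the draft does not meet."""
    return [name for name, check in CRITERIA.items() if not check(doc)]

draft = {"competitors": 6, "scenarios": ["best", "base"],
         "summary_words": 380}
fails = failed_criteria(draft)
```

A nonempty `fails` list becomes the specific feedback in a revision request, which is far more effective than "improve this."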
Multi-agent output often suffers from inconsistent formatting. One agent uses bullet points, another uses numbered lists. One writes in first person, another in third. Solve this with a shared style guide embedded in each agent's prompt.
STYLE REQUIREMENTS:
- Write in third person ("The analysis shows..." not "I found...")
- Use markdown headers (## for sections, ### for subsections)
- Use bullet points for lists of 3 or fewer items, numbered lists for 4 or more
- Bold key findings on first mention
- Tables use standard markdown format with header row
- All monetary values in USD unless otherwise specified
- Percentages to one decimal place (e.g., 14.3%, not 14%)
This level of detail might feel excessive, but it eliminates the formatting inconsistencies that make multi-agent output feel disjointed.
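Parts of the style guide are mechanically checkable, which lets a review step lint agent output before synthesis. A partial, hypothetical linter covering two of the rules above:

```python
import re

def style_violations(text: str) -> list[str]:
    """Flag departures from the shared style guide (a partial check)."""
    violations = []
    # Percentages must carry exactly one decimal place, e.g. 14.3%.
    for match in re.finditer(r"\b\d+(?:\.\d+)?%", text):
        token = match.group()
        if not re.fullmatch(r"\d+\.\d%", token):
            violations.append(
                f"percentage not to one decimal place: {token}")
    # The guide requires third person; flag obvious first-person phrasing.
    if re.search(r"\bI (found|believe|think)\b", text):
        violations.append("first-person phrasing detected")
    return violations

sample = "Growth was 14% last year. I found three competitors."
problems = style_violations(sample)
```

A check like this will not catch every inconsistency, but it removes the mechanical ones cheaply so human (or supervisor-agent) review can focus on substance.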
Each agent starts fresh, with no memory of other agents' conversations. Do not write prompts that say "as discussed earlier" or "building on the previous analysis" without actually providing that previous analysis as input. Every piece of context an agent needs must be explicitly included in its prompt or provided as input at runtime.
If your team has a "Research Agent" and an "Analysis Agent," who handles competitive pricing research -- is that research or analysis? Ambiguous boundaries lead to either duplication or gaps. The fix is to define ownership at the subtask level, not the category level. Instead of "research" and "analysis," define "data gathering" and "interpretation of gathered data."
Running a pipeline of agents without any quality review step is like publishing a document without editing it. Always include a review step, whether it is a supervisor agent that checks worker output or a final synthesis agent that reads the combined output and flags inconsistencies.
If one agent's prompt is three times longer than the others, that agent is doing too much. Split it. A focused agent with a clear, manageable scope produces better output than an overloaded agent juggling multiple responsibilities. The overhead of coordination between two focused agents is almost always less than the quality cost of one overwhelmed agent.
Without length constraints, agents tend toward either extreme brevity or excessive verbosity. Specify word counts or section lengths to keep output balanced across the team. When one agent produces 2,000 words and another produces 200, the final document will feel lopsided regardless of quality.
The best multi-agent prompts follow a predictable structure: identity, scope, boundaries, input contract, output specification, and style requirements. When every agent in a team has these six elements clearly defined, the coordination almost takes care of itself.
Start by writing the project brief. Then define each agent's role and negative scope. Specify the handoff contracts between sequential agents. Add the shared style guide. Finally, define the synthesis step that produces the final deliverable. This sequence ensures nothing falls through the cracks.
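The six-element structure can be enforced by assembling each agent's prompt from named parts and failing loudly when one is missing. A hypothetical sketch:

```python
# The six elements, in the order the structure above recommends.
SECTIONS = ["identity", "scope", "boundaries", "input_contract",
            "output_spec", "style"]

def assemble_prompt(parts: dict[str, str]) -> str:
    """Concatenate the six prompt elements; raise if any is absent
    so an incomplete agent definition never reaches production."""
    missing = [s for s in SECTIONS if s not in parts]
    if missing:
        raise ValueError(f"prompt incomplete, missing: {missing}")
    return "\n\n".join(parts[s] for s in SECTIONS)
```

Treating the prompt as six required fields rather than one free-form block is what makes gaps visible before a run, not after.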
If you want to skip the manual prompt engineering and generate complete multi-agent team configurations with roles, coordination patterns, and ready-to-use prompts, try the generator.