How to Build a Research Team with Claude Agents

What You'll Build

By the end of this guide, you'll have a 4-agent research team that takes a business question, breaks it into parallel research tracks, investigates each with a specialist agent, and synthesizes everything into a structured report. The whole process runs in minutes, not days.

The team uses the Supervisor-Worker pattern with four agents: a Research Director who coordinates the workflow, a Data Gatherer who collects raw evidence, a Domain Analyst who interprets findings, and a Synthesis Writer who produces the final deliverable. This configuration works for market entry assessments, technology evaluations, policy analysis, and any scenario where you need structured answers to open-ended questions.

Prerequisites

You'll need Claude Code (Anthropic's CLI tool) or the Claude Agent SDK (for programmatic control and recurring workflows). You'll also need a clear research question. "Tell me about the market" is too vague. "What is the current state of enterprise adoption of large language models in the financial services sector, and what are the primary barriers to deployment?" gives your agents something specific to work with.

Step 1: Define Your Agent Roles

Every effective research team starts with clear role boundaries. Overlap between agents wastes compute and produces redundant output. Gaps between agents mean blind spots in your research.

Here are the four roles:

Agent 1: Research Director (Supervisor)

Mission: Decompose the research question into discrete sub-tasks, assign each to the appropriate specialist, review intermediate output, and ensure the final synthesis addresses the original question.

The Research Director never does primary research itself. Its job is coordination — reading the initial question, deciding what needs investigation, monitoring quality, and sending work back for revision when it falls short.

Prompt guidance: Give the Director an explicit list of available workers and their capabilities. Specify the output format you expect and the success criteria for "done."
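As a sketch, the Director's prompt can enumerate its workers programmatically so the roster and the prompt never drift apart. The worker names, descriptions, and the `build_director_prompt` helper below are illustrative assumptions, not a requirement of any SDK:

```python
# Illustrative roster for the Research Director. Names and descriptions
# are assumptions for this sketch; adapt them to your own team.
WORKERS = {
    "data_gatherer": "Collects statistics, named sources, and factual claims.",
    "domain_analyst": "Interprets gathered data in industry context.",
    "synthesis_writer": "Produces the final structured report.",
}

def build_director_prompt(workers: dict[str, str]) -> str:
    """Render the Director's system prompt from the worker roster."""
    roster = "\n".join(f"- {name}: {desc}" for name, desc in workers.items())
    return (
        "You are the Research Director. You never do primary research yourself.\n"
        "Decompose the research question into 3-5 sub-questions, assign each to\n"
        "a worker, review intermediate output, and send weak work back for\n"
        "revision.\n\n"
        f"Available workers:\n{roster}\n\n"
        "Done means: the final report directly answers the original question,\n"
        "every claim traces to gathered evidence, and limitations are stated."
    )
```

Keeping the roster in one dictionary means adding a fifth agent later (see Tips and Variations) is a one-line change.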

Agent 2: Data Gatherer

Mission: Collect raw evidence, statistics, data points, named sources, and factual claims relevant to each assigned sub-question.

This agent prioritizes breadth over depth. It gathers quantitative data (market sizes, growth rates, adoption percentages), identifies key players, and surfaces studies or reports that support or contradict hypotheses. It does not interpret the data — that's the Domain Analyst's job.

Prompt guidance: Instruct this agent to cite source types (industry report, academic paper, company announcement). Ask it to flag low-confidence data and conflicting numbers. Require structured output — tables and lists, not prose.
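One way to enforce "structured output, not prose" is to define the record shape up front and validate against it. The field names below are assumptions for this sketch; the 340% figure is the example used later in this guide:

```python
# One illustrative evidence record the Data Gatherer could be asked to emit.
# Field names are assumptions; the point is structure over prose.
evidence_record = {
    "claim": "Enterprise LLM adoption grew 340% year-over-year",
    "source_type": "industry report",  # industry report | academic paper | company announcement
    "confidence": "low",               # flag shaky numbers explicitly
    "conflicts_with": [],              # cross-reference contradictory records
}

REQUIRED_FIELDS = {"claim", "source_type", "confidence", "conflicts_with"}

def is_valid_record(record: dict) -> bool:
    """Reject free-form prose: every record must carry the required fields."""
    return REQUIRED_FIELDS <= record.keys()
```

Rejecting malformed records at the boundary gives the Director something concrete to send back for revision.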

Agent 3: Domain Analyst

Mission: Interpret the gathered data within the specific industry or domain context, identify patterns, surface non-obvious connections, and flag gaps in the evidence.

The Domain Analyst takes raw data and turns it into meaning. Where the Data Gatherer reports "Enterprise LLM adoption grew 340% year-over-year," the Domain Analyst explains why that happened, what it means for the specific sector in question, and whether that trajectory is likely to continue based on structural factors.

Prompt guidance: Give this agent your industry context — what sector you operate in, what you already know, what your working hypotheses are. Ask it to explicitly state assumptions, challenge conventional wisdom where warranted, and flag where the available evidence is thin.

Agent 4: Synthesis Writer

Mission: Combine all research outputs into a single, coherent deliverable that directly answers the original question with clear reasoning and evidence.

The Synthesis Writer is not a summarizer. It makes an argument, taking data from the Gatherer, interpretation from the Analyst, and framing from the Director to produce a document a reader could pick up cold and come away with a clear understanding of the question, the evidence, and the implications.

Prompt guidance: Specify the exact format — executive summary, key findings, supporting evidence, limitations, and next steps. Set a tone appropriate for your audience.

Step 2: Set Up the Coordination Pattern

This team uses Supervisor-Worker because research is inherently dynamic. The Director might discover after the Data Gatherer's first pass that a line of inquiry is a dead end and pivot toward a more productive angle. That mid-stream adjustment separates good research from mechanical data collection.

Define the workflow order:

  1. The Director receives the research question and decomposes it into 3-5 sub-questions.
  2. The Data Gatherer investigates the sub-questions in parallel, producing structured evidence for each.
  3. The Domain Analyst reviews the gathered data and produces interpretive analysis, calling out patterns and gaps.
  4. The Director reviews intermediate outputs and decides whether any sub-question needs deeper investigation. If so, it sends the Data Gatherer back for a targeted second pass.
  5. The Synthesis Writer receives all outputs and produces the final deliverable.
  6. The Director performs a final quality check against the original question and success criteria.

Step 3: Write Your Agent Prompts

Each agent prompt needs three components:

Role definition — Who this agent is and what it specializes in. Be specific: "You are a quantitative data researcher. You collect numbers, statistics, percentages, and named sources. You do not interpret data or draw conclusions."

Task instructions — What to do with the input it receives. Include the format of expected input (a sub-question from the Director) and the format of expected output (a structured evidence table, an analytical brief, a final report section).

Constraints — What not to do. Tell the Data Gatherer not to editorialize. Tell the Domain Analyst not to fabricate statistics. Tell the Synthesis Writer not to introduce new claims unsupported by earlier agents' outputs.
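A small helper can assemble all three components into one prompt, so every agent follows the same template. The role, task, and constraint text below comes from this section's own examples; the helper itself is a sketch:

```python
# Assemble an agent prompt from role definition, task instructions,
# and constraints. The template shape is an assumption for this sketch.
def build_agent_prompt(role: str, task: str, constraints: list[str]) -> str:
    rules = "\n".join(f"- Do not {c}" for c in constraints)
    return f"{role}\n\n{task}\n\nConstraints:\n{rules}"

gatherer_prompt = build_agent_prompt(
    role=("You are a quantitative data researcher. You collect numbers, "
          "statistics, percentages, and named sources."),
    task=("Input: one sub-question from the Director. "
          "Output: a structured evidence table, no prose."),
    constraints=["editorialize", "interpret data or draw conclusions"],
)
```

Using one builder for all four agents keeps the prompts structurally parallel, which makes Step 5's iteration easier: you tweak one component at a time.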

Step 4: Run the Team

With your agents defined, run the team against your research question. Start with a moderately scoped question — something that should produce a 3-5 page report, not a 50-page thesis. Watch the intermediate outputs as they come through. The first run is a calibration exercise as much as a research exercise.

Step 5: Review and Iterate

After the first run, evaluate the output against your expectations:

Completeness — Did the final report address every aspect of the original question? If entire dimensions are missing, your Director's decomposition step needs refinement.

Evidence quality — Are the Data Gatherer's claims specific and sourced, or vague and generic? If the output reads like it could apply to any industry, tighten the instructions to demand specificity.

Analytical depth — Did the Domain Analyst surface insights you didn't already know? If the analysis is obvious, give the Analyst more context about what you already understand so it can push beyond baseline knowledge.

Synthesis coherence — Does the final document tell a story, or is it a collection of disconnected sections? If the latter, give the Synthesis Writer explicit instructions to thread a narrative through the evidence.

Refine your prompts based on what you observe and run again. Two to three iterations typically produce a research workflow you can rely on repeatedly.

Expected Output

A well-tuned 4-agent research team produces a structured report with an executive summary, key findings, supporting evidence, limitations, and next steps, matching the format specified in the Synthesis Writer's prompt.

Total execution time is typically 5-15 minutes depending on the complexity of the question and the number of revision loops the Director initiates.

Tips and Variations

Narrower teams for focused questions. If your research question is tightly scoped, drop to three agents: Data Gatherer, Analyst, and Writer. The Director overhead is only worth it when the question requires dynamic task allocation.

Add a Fact Checker. For high-stakes research (investor presentations, regulatory filings), add a fifth agent whose sole job is to verify claims in the Synthesis Writer's output against the Data Gatherer's raw evidence. This catches fabricated or exaggerated claims before they reach your audience.
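A naive version of that verification pass checks that every claim quoted in the draft appears in the Gatherer's raw evidence. Exact substring matching is a simplifying assumption here; a real Fact Checker agent would match claims semantically:

```python
# Flag draft claims with no support in the raw evidence.
# Substring matching is an assumption to keep the sketch simple.
def unverified_claims(draft_claims: list[str], raw_evidence: str) -> list[str]:
    return [c for c in draft_claims if c not in raw_evidence]

flagged = unverified_claims(
    ["adoption grew 340% year-over-year", "adoption grew 500% year-over-year"],
    raw_evidence="Industry report: enterprise adoption grew 340% year-over-year.",
)
# `flagged` now holds only the 500% claim, which no evidence supports.
```

Anything flagged goes back to the Synthesis Writer for correction before the report reaches your audience.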

Recurring research cadences. If you're running the same research monthly, save your agent configuration and feed the previous month's output as context. This lets the team track changes over time rather than producing a fresh snapshot each cycle.

Domain-specific customization. The generic roles described here work across industries, but you'll get better results by tuning the Domain Analyst's prompt to your sector. A healthcare team needs an Analyst that understands regulatory pathways and reimbursement models. A fintech team needs one that understands compliance frameworks and capital requirements.
