Pattern: Sequential Pipeline | Team size: 6 agents
This team runs an explicit two-iteration loop: analyze the site, propose growth challenges, select the 5 most useful repository-backed skills, implement them, and verify; it then repeats the sequence end-to-end once more. A sequential pipeline fits because each step depends on artifacts from the prior step (audit → plan → skill selection → code/config changes → verification).
build me a team of agents which will analyze the site, challenge a way to grow it using skills in repository, invoke 5 skills that are most helpful solving that issue, implement them all, verify again that all is done, then analyze the site, challenge a way to grow it using skills in repository, invoke 5 skills that are most helpful solving that issue, implement them all, verify that all is done.
Create an agent team to run a Two-Loop Growth Sprintline on a website repository: analyze the site, challenge a way to grow it, select and invoke 5 repository-backed skills that best address the bottleneck, implement them, verify completion; then repeat the entire sequence once more end-to-end. You must use a sequential pipeline with strict dependencies and explicit handoffs. Produce the deliverables exactly as specified below and write them to the exact file paths.

PROJECT NAME (use for all paths): two_loop_growth_sprintline

GENERAL RULES
- Work inside the current repo the user has opened in Claude Code.
- If the repo has multiple apps/sites, pick the primary web surface (highest traffic or root deployment). If unclear, ask a single clarifying question ONLY if absolutely blocking; otherwise proceed with best inference and document assumptions.
- “Skills in repository” means existing or addable repo-level capabilities (tooling, scripts, CI, testing, linting, analytics, SEO tooling, performance tooling, release automation, observability). Each loop must select exactly 5 skills and implement them (or improve/configure them) concretely.
- Every claim must be backed by a cited artifact: file path + snippet, command output summary, or a measured metric (Lighthouse score, bundle size, page speed, conversion event count, etc.).
- Collaboration mechanics are mandatory: each agent must (1) read prior artifacts, (2) challenge at least 2 assumptions from prior agents, and (3) add at least 2 “risks/unknowns” with mitigations.
- Implementation must be done via commits. Use conventional commits. Provide commit SHAs in the implementation report.
- Verification must include automated checks plus at least one real measurement (e.g., Lighthouse run, link checker, unit tests, or analytics event validation).
- Two full loops are required. Do not stop after the first.
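Since the rules above require conventional commit messages, the Implementation Engineer could pre-validate its commit headers before committing. A minimal sketch — the allowed type list and the regex are assumptions based on the common Conventional Commits convention, not something this spec prescribes:

```python
import re

# Commonly used Conventional Commits types; the exact allowed set
# is an assumption, not mandated by the spec above.
TYPES = "build|chore|ci|docs|feat|fix|perf|refactor|style|test"

# Header shape: type(optional-scope)!?: description
CONVENTIONAL_RE = re.compile(rf"^({TYPES})(\([\w\-\./]+\))?(!)?: .+")

def is_conventional(message: str) -> bool:
    """Return True if the first line of a commit message follows
    the Conventional Commits header format."""
    header = message.splitlines()[0] if message else ""
    return bool(CONVENTIONAL_RE.match(header))
```

An agent could run this check on each planned message and rewrite any header that fails before invoking `git commit`.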
REQUIRED OUTPUT FILES (create all)
1) outputs/agent_teams_demo/two_loop_growth_sprintline/00_orchestrator_plan.md
2) outputs/agent_teams_demo/two_loop_growth_sprintline/01_loop1_site_audit.md
3) outputs/agent_teams_demo/two_loop_growth_sprintline/02_loop1_growth_challenge.md
4) outputs/agent_teams_demo/two_loop_growth_sprintline/03_loop1_skill_selection.md
5) outputs/agent_teams_demo/two_loop_growth_sprintline/04_loop1_implementation_report.md
6) outputs/agent_teams_demo/two_loop_growth_sprintline/05_loop1_verification_report.md
7) outputs/agent_teams_demo/two_loop_growth_sprintline/06_loop2_site_audit.md
8) outputs/agent_teams_demo/two_loop_growth_sprintline/07_loop2_growth_challenge.md
9) outputs/agent_teams_demo/two_loop_growth_sprintline/08_loop2_skill_selection.md
10) outputs/agent_teams_demo/two_loop_growth_sprintline/09_loop2_implementation_report.md
11) outputs/agent_teams_demo/two_loop_growth_sprintline/10_loop2_verification_report.md
12) outputs/agent_teams_demo/two_loop_growth_sprintline/11_final_synthesis_review.md

TEAM ROLES AND SEQUENTIAL DEPENDENCIES
Iteration Orchestrator (Agent 6) owns the pipeline and gates each phase.
Loop 1 order (must complete in order):
A) Site Auditor (Agent 1) → B) Growth Challenger (Agent 2) → C) Repo Skill Curator (Agent 3) → D) Implementation Engineer (Agent 4) → E) QA & Verification Agent (Agent 5)
Loop 2 repeats the same order and must incorporate the outcomes and metrics of Loop 1.

HARD DEPENDENCIES (do not violate)
- Growth Challenger cannot start until Site Auditor finishes and writes 01_loop1_site_audit.md (or 06_loop2_site_audit.md in loop 2).
- Repo Skill Curator cannot start until Growth Challenger finishes and writes the loop’s growth challenge file.
- Implementation Engineer cannot start until Repo Skill Curator finishes and writes the loop’s skill selection file (must list exactly 5 skills).
- QA & Verification cannot start until Implementation Engineer finishes and writes the loop’s implementation report including commit SHAs.
- Loop 2 Site Audit cannot start until Loop 1 Verification report is complete and signed off.
- Final synthesis cannot start until Loop 2 verification is complete.

COLLABORATION MECHANICS (mandatory in every phase)
- Each agent must read all prior loop artifacts and explicitly reference them with “(see: filename.md §SectionName)”.
- Each agent must include a “Challenge & Counterpoints” section with at least 2 challenges to prior assumptions/decisions, and either accept with justification or propose adjustments (if adjustments, coordinate with Orchestrator and document).
- Each agent must include a “Risks / Unknowns” section with at least 2 items and mitigations.
- Orchestrator must coordinate conflicts: if an agent proposes a change that would invalidate earlier deliverables, Orchestrator must either (a) approve and record a “Decision Record” with rationale, or (b) reject and explain why.

SITE AUDIT SCOPE (for both loops)
Assess, at minimum:
- Technical SEO: indexability, robots/sitemaps, canonical tags, structured data, meta tags, internal linking, 404s/redirects.
- Performance: Core Web Vitals proxies, Lighthouse metrics, image/font optimization, JS/CSS size, caching.
- UX & conversion: navigation clarity, CTAs, forms, friction points, messaging, trust signals.
- Analytics/measurement: presence/quality of instrumentation, event taxonomy, funnel visibility.
- Accessibility basics: contrast, labels, keyboard navigation (at least quick checks).
If the repo is a static site, focus accordingly; if a web app, include route-level checks.

LOOP OUTCOME REQUIREMENTS
- Each loop must produce a prioritized backlog of opportunities, then select ONE “north-star bottleneck” to address in that loop (e.g., slow LCP on landing, missing signup event tracking, weak conversion CTA).
- Each loop must choose exactly 5 “repository-backed skills” to invoke (add/configure tools, scripts, CI steps, code patterns) that address the bottleneck.
- Each loop must implement all 5 skills (not just propose them) and show verification evidence.

DETAILED DELIVERABLE SPECIFICATIONS

FILE 1: 00_orchestrator_plan.md (Iteration Orchestrator)
Length: 700–1200 words. Sections (use these exact headings):
1. Repository Context & Assumptions
   - Identify the site entry points, framework, build system, deployment hints.
   - List 5–8 assumptions; mark each as “High/Med/Low confidence”.
2. Two-Loop Pipeline & Gates
   - Describe each step, its inputs/outputs, and explicit gate criteria to proceed.
3. Collaboration Protocol
   - How agents will challenge assumptions and resolve conflicts.
4. Measurement Plan
   - Define baseline metrics to capture (Lighthouse, build size, conversion events, SEO checks).
   - Define “Done” for each loop.
5. Risk Register (Initial)
   - At least 5 risks with mitigations.

FILE 2: 01_loop1_site_audit.md (Site Auditor)
Length: 1200–1800 words. Must include:
1. Audit Method
   - Commands run and tools used (even if minimal); include any Lighthouse approach.
2. Findings (Prioritized Table)
   - Provide 10–15 findings. Table columns: Priority (P0/P1/P2), Category, Finding, Evidence (file path or observed behavior), Impact, Suggested Fix.
3. North-Star Bottleneck for Loop 1
   - Pick 1 bottleneck with rationale.
4. Baseline Metrics
   - Provide at least 5 baseline metrics (or best-effort approximations) with how measured.
5. Challenge & Counterpoints
   - At least 2 challenges to likely “default” growth assumptions (e.g., “just add more content”, “buy ads”).
6. Risks / Unknowns

FILE 3: 02_loop1_growth_challenge.md (Growth Challenger)
Length: 900–1400 words. Must include:
1. Growth Objective (Loop 1)
   - Choose one primary objective tied to the bottleneck (e.g., improve landing-to-signup).
2. Hypotheses (3–5)
   - Each hypothesis must be testable; include: expected effect, segment, success metric, and timeframe.
3. Execution Plan (1–2 weeks)
   - Concrete steps; identify what changes must happen in repo.
4. Instrumentation / Measurement Requirements
   - Define event names/properties if analytics involved; define dashboards/queries if possible.
5. Challenge & Counterpoints
   - Challenge at least 2 items from the audit and refine.
6. Risks / Unknowns

FILE 4: 03_loop1_skill_selection.md (Repo Skill Curator)
Length: 900–1400 words. Hard requirement: select exactly 5 skills. Sections:
1. Bottleneck-to-Skill Mapping
   - Map the loop bottleneck to skill needs.
2. Selected 5 Repository-Backed Skills (Exactly 5)
   For each skill include:
   - Skill name
   - Why it helps THIS bottleneck
   - Repo implementation approach (exact files likely to change)
   - Acceptance criteria (how we know it’s implemented)
   - Tradeoffs
3. Alternatives Considered (at least 3)
4. Challenge & Counterpoints
   - Challenge at least 2 elements from Growth Challenger plan; reconcile.
5. Risks / Unknowns

FILE 5: 04_loop1_implementation_report.md (Implementation Engineer)
Length: 800–1400 words. Must include:
1. Summary of Implemented Changes
   - Bullet list aligned to the 5 selected skills (1:1 mapping).
2. Commits
   - List commit SHAs, conventional commit messages, and brief descriptions.
3. File-Level Change Log
   - Key files changed with 1–2 line descriptions each.
4. How to Run / Validate Locally
   - Exact commands.
5. Notes for Verification
   - What QA should focus on.
6. Challenge & Counterpoints
   - Challenge at least 2 assumptions from Skill Curator and/or Growth plan; document any deviations.
7. Risks / Unknowns

FILE 6: 05_loop1_verification_report.md (QA & Verification Agent)
Length: 800–1400 words. Must include:
1. Verification Checklist (Table)
   - Rows for: build, tests, lint/format, Lighthouse/perf check, SEO check (basic), analytics/event check (if applicable), link check (if applicable).
   - Columns: Item, Command/Method, Result (Pass/Fail), Evidence.
2. Acceptance Criteria Validation
   - Validate each of the 5 skills’ acceptance criteria; list as 5 subsections.
3. Metrics After Loop 1
   - Report same baseline metrics where possible; show deltas.
4. Issues Found & Fixes
   - If issues found, either fix (with commit) or file them with exact path and steps.
5. Go/No-Go Decision for Loop 2
   - Explicit sign-off statement.
6. Challenge & Counterpoints
7. Risks / Unknowns

LOOP 2 FILES (06–10)
Repeat the same structure as loop 1, but with these additional rules:
- Loop 2 Site Audit (06_loop2_site_audit.md) must start by summarizing Loop 1 results and what changed, referencing loop 1 verification metrics and commits.
- Loop 2 must select a NEW north-star bottleneck (or a deeper second-order bottleneck) justified by post-loop-1 evidence.
- Loop 2 must select exactly 5 skills again. At least 2 of the 5 must be different from loop 1 (can be extensions/upgrades, but must be meaningfully new).
- Loop 2 implementation must not regress loop 1 improvements; if tradeoffs are required, document them and get Orchestrator approval in a “Decision Record” subsection.

FINAL FILE: 11_final_synthesis_review.md (Iteration Orchestrator leads, all agents contribute)
Length: 1200–2000 words. Sections (exact headings):
1. Executive Summary (Two Loops)
   - 8–12 bullet points of what changed and why.
2. Before/After Metrics
   - Table with baseline, after loop 1, after loop 2 for key metrics.
3. Implemented Skills Inventory
   - List 10 skills total (5 per loop), noting overlaps and new additions.
4. What Worked / What Didn’t
   - Evidence-based.
5. Growth Roadmap (Next 30 Days)
   - 8–12 prioritized items with expected impact and effort.
6. Open Questions
   - 5–8 items.
7. Final Review & Sign-Off
   - Confirm all required artifacts exist, all steps completed, and verification passed.
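The Final Review & Sign-Off step above hinges on confirming that every required artifact exists. A minimal sketch of that presence check — the file names come from the REQUIRED OUTPUT FILES list, while the function name is illustrative:

```python
from pathlib import Path

# Base path and file names as specified in REQUIRED OUTPUT FILES.
BASE = Path("outputs/agent_teams_demo/two_loop_growth_sprintline")

ARTIFACTS = [
    "00_orchestrator_plan.md",
    "01_loop1_site_audit.md",
    "02_loop1_growth_challenge.md",
    "03_loop1_skill_selection.md",
    "04_loop1_implementation_report.md",
    "05_loop1_verification_report.md",
    "06_loop2_site_audit.md",
    "07_loop2_growth_challenge.md",
    "08_loop2_skill_selection.md",
    "09_loop2_implementation_report.md",
    "10_loop2_verification_report.md",
    "11_final_synthesis_review.md",
]

def missing_artifacts(base: Path = BASE) -> list[str]:
    """Return the required deliverables that do not yet exist under base."""
    return [name for name in ARTIFACTS if not (base / name).is_file()]
```

The Orchestrator would refuse sign-off while `missing_artifacts()` returns a non-empty list.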
EXECUTION INSTRUCTIONS (what to do step-by-step)
1) Iteration Orchestrator: create outputs folder and write 00_orchestrator_plan.md. Define gate criteria and assign responsibilities clearly.
2) Loop 1:
   a) Site Auditor: perform audit and write 01_loop1_site_audit.md.
   b) Growth Challenger: read 01 and write 02_loop1_growth_challenge.md.
   c) Repo Skill Curator: read 01–02 and write 03_loop1_skill_selection.md with exactly 5 skills.
   d) Implementation Engineer: implement all 5 skills, commit changes, and write 04_loop1_implementation_report.md.
   e) QA & Verification Agent: verify, fix if needed, and write 05_loop1_verification_report.md with go/no-go.
3) Loop 2 (only after loop 1 verification go):
   a) Site Auditor: re-audit focusing on outcomes; write 06_loop2_site_audit.md.
   b) Growth Challenger: write 07_loop2_growth_challenge.md.
   c) Repo Skill Curator: write 08_loop2_skill_selection.md with exactly 5 skills (>=2 different from loop 1).
   d) Implementation Engineer: implement, commit, and write 09_loop2_implementation_report.md.
   e) QA & Verification Agent: verify and write 10_loop2_verification_report.md.
4) Synthesis/Review:
   - Iteration Orchestrator: collect inputs from all agents, ensure every file is present and compliant, then write 11_final_synthesis_review.md.
   - Perform a final cross-check: confirm loop 1 and loop 2 each had (audit → challenge → 5-skill selection → implementation → verification), that all 10 skills were implemented, and that verification includes evidence and metrics deltas.
   - If any required item is missing, stop and remediate until complete before final sign-off.
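The final cross-check combines two numeric rules: exactly 5 skills per loop, and at least 2 loop-2 skills not reused from loop 1. A minimal sketch of that validation — the skill names in the usage note are illustrative, not prescribed by this spec:

```python
def check_skill_selection(loop1: list[str], loop2: list[str]) -> list[str]:
    """Return a list of rule violations for the two loops' skill selections;
    an empty list means the cross-check passes."""
    problems = []
    if len(loop1) != 5:
        problems.append(f"loop 1 selected {len(loop1)} skills, expected exactly 5")
    if len(loop2) != 5:
        problems.append(f"loop 2 selected {len(loop2)} skills, expected exactly 5")
    # Rule: at least 2 of loop 2's skills must be meaningfully new.
    new_in_loop2 = set(loop2) - set(loop1)
    if len(new_in_loop2) < 2:
        problems.append("loop 2 must include at least 2 skills not used in loop 1")
    return problems
```

For example, `check_skill_selection(["lighthouse-ci", "sitemap-gen", "image-optim", "link-check", "analytics-events"], ["lighthouse-ci", "sitemap-gen", "image-optim", "a11y-lint", "bundle-budget"])` returns an empty list, while passing the loop-1 list twice flags the reuse rule.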