Repo-Guided Skill Council

Pattern: Advisory Debate | Team size: 4 agents

This team uses structured critique to ensure the skill list is both high-quality and aligned to Claude skill best practices, while remaining original relative to any root-folder examples. An advisory debate pattern works well when tradeoffs exist (coverage vs. specificity, safety vs. utility) and when you need robust review before finalizing.

Business Challenge

Build me a team of agents that will produce a list of Claude skills for a category, niche, or business problem, all written according to the best practices of Claude skill creation. If there is a `skills/` folder in the root of the project, the agents should study those skills but not replicate them; use them only as examples.

Generated Prompt

Create an agent team to produce an original, best-practices list of Claude skills for a user-provided category/niche/business problem, using any existing `skills/` folder in the repo only as non-copyable reference examples.

PROJECT NAME (use exactly): repo_guided_skill_council

PRIMARY OBJECTIVE
- Generate a curated list of Claude skills (skill specifications) tailored to the user’s chosen category/niche/business problem.
- Each skill must be written according to Claude skill creation best practices: clear purpose, scope, constraints, interaction design, inputs/outputs, safety considerations, and evaluation criteria.
- If a `skills/` folder exists at the project root, study it for patterns and the quality bar, but DO NOT replicate its language, structure, or unique concepts. Use it only to infer best practices and expected formatting.

TEAM DESIGN: Advisory Debate (Repo-Guided Skill Council)
Agents:
1) Best-Practices Advocate
2) User Value Advocate
3) Repo Similarity Watchdog
4) Moderator & Final Editor

GLOBAL RULES (ALL AGENTS)
- Do not copy text from any existing repo `skills/` files, and do not paraphrase so closely that the source is recognizable. Treat them as “style inspiration,” not content.
- Keep skills original, tailored to the user’s specified category/niche/problem.
- Each skill must include measurable success criteria and explicit constraints.
- Prefer concrete, testable behaviors over vague promises.
- Use consistent terminology and headings across skills.
- If user input is missing (category/niche/problem), first produce a short question set and a “default assumptions” section. Do not block progress: draft a placeholder skill set based on reasonable assumptions, clearly labeled as such.

REPO INSPECTION (MUST HAPPEN FIRST; DEPENDENCY FOR EVERYTHING ELSE)
- Agent: Moderator & Final Editor coordinates; Repo Similarity Watchdog participates.
- Task: Check whether `skills/` exists at repo root. If it exists:
  - Read all files inside `skills/` (and subfolders if any).
  - Extract only generalizable best practices: formatting patterns, section headings, level of specificity, safety tone, etc.
  - Create a “Do-Not-Replicate” list: distinctive phrases, unique frameworks, and specific skill ideas present in the repo.
- Output file (required, even if folder doesn’t exist):
  - outputs/agent_teams_demo/repo_guided_skill_council/repo_audit.md
  - Length: 500–900 words.
  - Sections (use exactly these headings):
    1. Repo Scan Summary
    2. Observed Best-Practice Patterns (Non-Copyable)
    3. Do-Not-Replicate List (Concepts + Phrases)
    4. Implications for Our New Skill Set
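
The first step of the repo inspection, checking for a root-level `skills/` folder and enumerating its files (including subfolders), can be sketched as a small script. The function name and return shape are illustrative; the pattern extraction and Do-Not-Replicate list remain the agents' job:

```python
from pathlib import Path

def scan_skills(repo_root: str = ".") -> dict:
    """Check for a root-level skills/ folder and collect its files.

    Returns a summary dict; extracting generalizable patterns and the
    Do-Not-Replicate list is left to the agents, not this script.
    """
    skills_dir = Path(repo_root) / "skills"
    if not skills_dir.is_dir():
        return {"exists": False, "files": []}
    # Recurse into subfolders, as the spec requires.
    files = sorted(str(p) for p in skills_dir.rglob("*") if p.is_file())
    return {"exists": True, "files": files}

summary = scan_skills(".")
print(f"skills/ present: {summary['exists']}, files found: {len(summary['files'])}")
```

Note that `repo_audit.md` must be written even when the folder is absent, so the `exists: False` case is still a valid audit input.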

USER CONTEXT INTAKE (MUST HAPPEN AFTER REPO INSPECTION; DEPENDENCY FOR SKILL IDEATION)
- Agent: User Value Advocate leads; Best-Practices Advocate supports.
- Task: Determine what category/niche/business problem the skill list is for.
  - If the user provided it already, restate it precisely.
  - If not provided, generate 6–10 targeted clarifying questions that influence skill design (audience, maturity, tools, constraints, compliance, tone, output formats, etc.).
  - Define default assumptions to proceed if answers are not available.
- Output file:
  - outputs/agent_teams_demo/repo_guided_skill_council/context_intake.md
  - Length: 300–600 words.
  - Sections:
    1. Interpreted Target Category/Niche/Problem
    2. Clarifying Questions (6–10)
    3. Default Assumptions (if unanswered)
    4. Skill Design Implications

SKILL LIST TARGETS (APPLY TO FINAL OUTPUT)
- Total skills: 10–14 skills.
- Each skill must be distinct, non-overlapping, and mapped to real workflows.
- Coverage requirements:
  - At least 3 “core workflow” skills (daily/weekly repeated tasks).
  - At least 2 “decision support” skills (tradeoff analysis, prioritization, strategy).
  - At least 2 “quality assurance / review” skills (audit, critique, validation).
  - At least 1 “risk/safety/compliance” skill appropriate to the niche.
  - At least 1 “automation/templating” skill (repeatable format, reusable template output).
- Each skill must include:
  - Name (unique, action-oriented)
  - One-sentence Purpose
  - Ideal User + When to Use
  - Inputs (explicit fields)
  - Outputs (explicit artifacts)
  - Interaction Flow (step-by-step, 4–8 steps)
  - Guardrails & Constraints (what it will/won’t do)
  - Quality Checklist (5–10 bullet criteria)
  - Example Prompt (1) + Example Output Outline (brief, structured)
  - Evaluation Rubric (3–6 scored dimensions, with what “good” looks like)
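
The coverage requirements above are mechanical enough to verify automatically. A minimal sketch, assuming each skill is represented as a dict with `name` and `cluster` fields (the field names are assumptions; the minimum counts come directly from the requirements):

```python
from collections import Counter

# Minimum counts taken from the coverage requirements above.
COVERAGE_MINIMUMS = {
    "core workflow": 3,
    "decision support": 2,
    "quality assurance": 2,
    "risk/safety/compliance": 1,
    "automation/templating": 1,
}

def check_coverage(skills: list) -> list:
    """Return human-readable violations (an empty list means compliant)."""
    problems = []
    if not 10 <= len(skills) <= 14:
        problems.append(f"expected 10-14 skills, got {len(skills)}")
    counts = Counter(s["cluster"] for s in skills)
    for cluster, minimum in COVERAGE_MINIMUMS.items():
        if counts[cluster] < minimum:
            problems.append(
                f"need >= {minimum} '{cluster}' skills, found {counts[cluster]}"
            )
    names = [s["name"] for s in skills]
    if len(set(names)) != len(names):
        problems.append("skill names are not unique")
    return problems
```

The Moderator could run a check like this in the final synthesis phase rather than confirming the counts by hand.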

ADVISORY DEBATE WORKFLOW (STRICT ORDER WITH DEPENDENCIES)
Phase 1: Skill Ideation Draft (MUST COMPLETE BEFORE DEBATE)
- Agent: User Value Advocate drafts the initial candidate list of 14–18 skill ideas (just titles + 1–2 sentence descriptions each), ensuring workflow relevance.
- Agent: Best-Practices Advocate appends notes on required constraints/sections and flags common best-practice gaps.
- Output file:
  - outputs/agent_teams_demo/repo_guided_skill_council/skill_ideation.md
  - Length: 400–800 words.
  - Sections:
    1. Candidate Skill Titles (14–18)
    2. Rationale by Cluster (group into 3–5 clusters)
    3. Risks of Overlap + How We’ll Differentiate

Phase 2: Advisory Debate Round 1 (MUST COMPLETE BEFORE ANY FULL SKILL WRITING)
- Agent: Best-Practices Advocate argues for structure, constraints, evaluability; proposes which candidates to cut/merge for best-practice compliance.
- Agent: User Value Advocate argues for utility, adoption likelihood, and real-world outputs; proposes which candidates to keep.
- Agent: Repo Similarity Watchdog checks ideation against repo examples (from repo_audit.md) and flags anything too close; demands replacements.
- Moderator resolves conflicts into a final shortlist of 10–14 skills.
- Output file:
  - outputs/agent_teams_demo/repo_guided_skill_council/debate_round_1.md
  - Length: 700–1200 words.
  - Sections:
    1. Best-Practices Advocate: Keep/Cut/Merge Recommendations
    2. User Value Advocate: Keep/Cut/Merge Recommendations
    3. Repo Similarity Watchdog: Similarity Flags + Required Changes
    4. Moderator Decision: Final Shortlist (10–14) + Rationale
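
The Watchdog's closeness check can be roughly approximated with a lexical similarity ratio against the Do-Not-Replicate list. This is only a crude proxy (difflib measures character overlap, not conceptual overlap, which the Watchdog must still judge), and the 0.8 threshold is an assumption, not part of the spec:

```python
from difflib import SequenceMatcher

def flag_similar(candidate: str, do_not_replicate: list,
                 threshold: float = 0.8) -> list:
    """Return (repo phrase, similarity) pairs at or above the threshold.

    Lexical similarity only; conceptual overlap (frameworks, skill
    ideas) still requires the Watchdog's judgment.
    """
    flags = []
    for phrase in do_not_replicate:
        ratio = SequenceMatcher(None, candidate.lower(), phrase.lower()).ratio()
        if ratio >= threshold:
            flags.append((phrase, round(ratio, 2)))
    return sorted(flags, key=lambda f: -f[1])
```

Any candidate title returned by a check like this would be a "Similarity Flag + Required Change" entry in §3 of debate_round_1.md.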

Phase 3: Full Skill Specification Drafting (MUST USE THE FINAL SHORTLIST; DEPENDENCY: debate_round_1.md)
- Agent: Best-Practices Advocate writes 5–7 skills.
- Agent: User Value Advocate writes 5–7 skills.
- Repo Similarity Watchdog reviews each drafted skill for closeness to repo examples and forces rewrites where needed (must provide explicit rewrite directives).
- Moderator ensures consistent formatting and naming across all skills.
- Output file (single consolidated draft):
  - outputs/agent_teams_demo/repo_guided_skill_council/skills_draft.md
- Length: 3000–6000 words total (depending on number of skills).
- Formatting requirements:
  - Start with a table of contents listing all skills with anchor links.
  - Each skill must be a level-2 heading: “## Skill X: [Name]”
  - Inside each skill, use level-3 headings exactly matching:
    - ### Purpose
    - ### Ideal User + When to Use
    - ### Inputs
    - ### Outputs
    - ### Interaction Flow
    - ### Guardrails & Constraints
    - ### Quality Checklist
    - ### Example Prompt
    - ### Example Output Outline
    - ### Evaluation Rubric
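
The Moderator's formatting check could be mechanized as well. A sketch that maps each skill heading to the required level-3 sections it is missing; the heading list is copied from the spec above, while the regex-split approach is one illustrative option:

```python
import re

# Level-3 headings required inside every skill, per the spec above.
REQUIRED_H3 = [
    "Purpose", "Ideal User + When to Use", "Inputs", "Outputs",
    "Interaction Flow", "Guardrails & Constraints", "Quality Checklist",
    "Example Prompt", "Example Output Outline", "Evaluation Rubric",
]

def check_skill_sections(markdown: str) -> dict:
    """Map each '## Skill X: ...' heading to the required H3s it lacks."""
    # Split the draft into per-skill chunks on level-2 skill headings.
    chunks = re.split(r"(?m)^## (Skill \d+: .+)$", markdown)
    missing = {}
    # chunks[0] is the preamble; then (heading, body) pairs alternate.
    for heading, body in zip(chunks[1::2], chunks[2::2]):
        absent = [h for h in REQUIRED_H3 if f"### {h}" not in body]
        if absent:
            missing[heading] = absent
    return missing
```

An empty dict means every skill in skills_draft.md carries all ten required sections.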

Phase 4: Advisory Debate Round 2 (QUALITY + ORIGINALITY REVIEW; DEPENDENCY: skills_draft.md)
- Best-Practices Advocate: Identify missing best-practice elements, ambiguity, or non-testable outputs; require edits.
- User Value Advocate: Identify low-value/low-adoption skills; require improvements to workflow fit and artifacts.
- Repo Similarity Watchdog: Perform a second pass for similarity; must cite the “Do-Not-Replicate” list entries implicated and require rewrites.
- Moderator: Produce an edit plan and apply it.
- Outputs:
  1) outputs/agent_teams_demo/repo_guided_skill_council/debate_round_2.md
     - Length: 700–1400 words.
     - Sections:
       1. Best-Practices Advocate: Issues + Required Fixes
       2. User Value Advocate: Issues + Required Fixes
       3. Repo Similarity Watchdog: Similarity Findings + Rewrite Orders
       4. Moderator: Final Edit Plan (bullet list, ordered)
  2) outputs/agent_teams_demo/repo_guided_skill_council/skills_final.md
     - Length: 3000–6500 words total.
     - Must incorporate all Round 2 fixes.
     - Must preserve the exact heading structure defined in Phase 3.

FINAL SYNTHESIS / REVIEW (MUST BE LAST; DEPENDENCY: skills_final.md)
- Agent: Moderator & Final Editor leads, with brief sign-off notes from other agents.
- Task:
  - Verify coverage requirements and counts.
  - Verify each skill has all required sections and measurable rubrics.
  - Verify originality relative to repo examples and compliance with Do-Not-Replicate list.
  - Provide a concise “How to use this skill set” guide.
- Output file:
  - outputs/agent_teams_demo/repo_guided_skill_council/final_review.md
  - Length: 600–1000 words.
  - Sections:
    1. Coverage Checklist (explicitly confirm each coverage requirement)
    2. Best-Practices Compliance Checklist (pass/fail bullets)
    3. Originality Attestation (how we avoided replication)
    4. Recommended Next Steps (how a user should adopt/test these skills)
  - Include “Agent Sign-offs” at the end with 2–4 bullets from each agent (what they verified and remaining risks).

COLLABORATION MECHANICS (ENFORCE THROUGHOUT)
- Share findings by referencing specific sections of the produced files (e.g., “See repo_audit.md §3”).
- Challenge assumptions explicitly: each debate round must include at least 3 assumption challenges (clearly labeled “Assumption Challenge #1…”).
- Maintain a “Change Log” within debate_round_2.md Moderator section: list skill names changed, what changed, and why.
- If any skill is flagged by Repo Similarity Watchdog, it must be rewritten before moving to final outputs; do not defer.

EXECUTION NOTE
- If the user did not specify the target category/niche/problem, proceed using default assumptions but keep the skills written in a way that is easy to retarget (parameterized inputs).

Build Your Own

Create your own AI agent team at Build Agents Store. Describe your business problem and get specialized agent teams with ready-to-use prompts for Claude Code.