From bb7506328daaee47ba384e6812a71274e0dd90f6 Mon Sep 17 00:00:00 2001 From: bmaucote Date: Wed, 17 Dec 2025 09:50:29 +0100 Subject: [PATCH 1/9] V1 --- instructions/agents.instructions.md | 828 ++++++++++++++++++++++++++++ 1 file changed, 828 insertions(+) create mode 100644 instructions/agents.instructions.md diff --git a/instructions/agents.instructions.md b/instructions/agents.instructions.md new file mode 100644 index 000000000..9e5f1215c --- /dev/null +++ b/instructions/agents.instructions.md @@ -0,0 +1,828 @@ +--- +description: 'Guidelines for creating high-quality custom agent files for GitHub Copilot' +applyTo: '**/*.agent.md' +--- + +# Custom Agent File Guidelines + +Instructions for creating effective and maintainable custom agent files that provide specialized expertise for specific development tasks in GitHub Copilot. + +## Project Context + +- Target audience: Developers creating custom agents for GitHub Copilot +- File format: Markdown with YAML frontmatter +- File naming convention: lowercase with hyphens (e.g., `test-specialist.agent.md`) +- Location: `.github/agents/` directory (repository-level) or `agents/` directory (organization/enterprise-level) +- Purpose: Define specialized agents with tailored expertise, tools, and instructions for specific tasks +- Official documentation: https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/create-custom-agents + +## Required Frontmatter + +Every agent file must include YAML frontmatter with the following fields: + +```yaml +--- +description: 'Brief description of the agent purpose and capabilities' +name: 'Agent Display Name' +tools: ['read', 'edit', 'search'] +model: 'claude-4.5-sonnet' +target: 'vscode' +infer: true +--- +``` + +### Core Frontmatter Properties + +#### **description** (REQUIRED) +- Single-quoted string, clearly stating the agent's purpose and domain expertise +- Should be concise (50-150 characters) and actionable +- Example: `'Focuses on test coverage, quality, and 
testing best practices'` + +#### **name** (OPTIONAL) +- Display name for the agent in the UI +- If omitted, defaults to filename (without `.md` or `.agent.md`) +- Use title case and be descriptive +- Example: `'Testing Specialist'` + +#### **tools** (OPTIONAL) +- List of tool names or aliases the agent can use +- Supports comma-separated string or YAML array format +- If omitted, agent has access to all available tools +- See "Tool Configuration" section below for details + +#### **model** (STRONGLY RECOMMENDED) +- Specifies which AI model the agent should use +- Supported in VS Code, JetBrains IDEs, Eclipse, and Xcode +- Example: `'claude-4.5-sonnet'`, `'gpt-4'`, `'gpt-4o'` +- Choose based on agent complexity and required capabilities + +#### **target** (OPTIONAL) +- Specifies target environment: `'vscode'` or `'github-copilot'` +- If omitted, agent is available in both environments +- Use when agent has environment-specific features + +#### **infer** (OPTIONAL) +- Boolean controlling whether Copilot can automatically use this agent based on context +- Default: `true` if omitted +- Set to `false` to require manual agent selection + +#### **metadata** (OPTIONAL, GitHub.com only) +- Object with name-value pairs for agent annotation +- Example: `metadata: { category: 'testing', version: '1.0' }` +- Not supported in VS Code + +#### **mcp-servers** (OPTIONAL, Organization/Enterprise only) +- Configure MCP servers available only to this agent +- Only supported for organization/enterprise level agents +- See "MCP Server Configuration" section below + +## Tool Configuration + +### Tool Specification Strategies + +**Enable all tools** (default): +```yaml +# Omit tools property entirely, or use: +tools: ['*'] +``` + +**Enable specific tools**: +```yaml +tools: ['read', 'edit', 'search', 'execute'] +``` + +**Enable MCP server tools**: +```yaml +tools: ['read', 'edit', 'github/*', 'playwright/navigate'] +``` + +**Disable all tools**: +```yaml +tools: [] +``` + +### Standard 
Tool Aliases + +All aliases are case-insensitive: + +| Alias | Alternative Names | Category | Description | +|-------|------------------|----------|-------------| +| `execute` | shell, Bash, powershell | Shell execution | Execute commands in appropriate shell | +| `read` | Read, NotebookRead, view | File reading | Read file contents | +| `edit` | Edit, MultiEdit, Write, NotebookEdit | File editing | Edit and modify files | +| `search` | Grep, Glob, search | Code search | Search for files or text in files | +| `agent` | custom-agent, Task | Agent invocation | Invoke other custom agents | +| `web` | WebSearch, WebFetch | Web access | Fetch web content and search | +| `todo` | TodoWrite | Task management | Create and manage task lists (VS Code only) | + +### Built-in MCP Server Tools + +**GitHub MCP Server**: +```yaml +tools: ['github/*'] # All GitHub tools +tools: ['github/get_file_contents', 'github/search_repositories'] # Specific tools +``` +- All read-only tools available by default +- Token scoped to source repository + +**Playwright MCP Server**: +```yaml +tools: ['playwright/*'] # All Playwright tools +tools: ['playwright/navigate', 'playwright/screenshot'] # Specific tools +``` +- Configured to access localhost only +- Useful for browser automation and testing + +### Tool Selection Best Practices + +- **Principle of Least Privilege**: Only enable tools necessary for the agent's purpose +- **Security**: Limit `execute` access unless explicitly required +- **Focus**: Fewer tools = clearer agent purpose and better performance +- **Documentation**: Comment why specific tools are required for complex configurations + +## Sub-Agent Invocation (Agent Orchestration) + +Agents can invoke other agents using the `runSubagent` function. This enables complex workflows where a coordinator agent orchestrates multiple specialized agents to accomplish multi-step tasks. 
+ +### When to Use Sub-Agents + +**Use sub-agents when**: +- Breaking complex tasks into specialized subtasks +- Creating orchestration/workflow agents +- Delegating specific responsibilities to expert agents +- Enabling agent composition and reusability +- Implementing pipeline or sequential workflows + +**Examples**: +- An orchestrator agent managing a multi-step question generation pipeline +- A CI/CD coordinator delegating tasks to specialized agents +- A project manager agent breaking down work across technical specialists +- A code review system with different agents for security, performance, style + +### Sub-Agent Configuration + +To enable sub-agent invocation, include `agent` in the tools list: + +```yaml +--- +description: 'Orchestrates specialized agents for multi-step workflows' +name: 'Orchestrator Agent' +tools: ['read', 'edit', 'search', 'agent'] +--- +``` + +### Sub-Agent Invocation Syntax + +Use `runSubagent` to invoke another agent with a detailed prompt: + +```javascript +const result = await runSubagent({ + description: 'Brief description of what this sub-task accomplishes', + prompt: `You are the [Agent Name] specialist. + +Your task: +1. [Step 1 description] +2. [Step 2 description] +3. [Step 3 description] + +Input parameters: +- Parameter 1: [value] +- Parameter 2: [value] + +Expected output: +[Specify format and location of results] + +Quality standards: +- [Standard 1] +- [Standard 2] +- [Standard 3]` +}); +``` + +### Best Practices for Sub-Agent Orchestration + +#### 1. 
**Clear Task Definition** +- Provide explicit, unambiguous instructions +- Define inputs and outputs clearly +- Specify expected file locations and formats +- Include quality standards and validation criteria + +```javascript +// Good: Clear and specific +const result = await runSubagent({ + description: 'Generate course materials from transcripts', + prompt: `Process certification: ${certName} + +Read all .transcript files from: certifications/${certName}/formation/transcripts/ +Create organized course document at: certifications/${certName}/formation/course.md + +Structure requirements: +- Use ## for sections, ### for subsections +- Include code blocks for formulas and examples +- Preserve all technical terminology +- No additional information beyond transcripts + +Return: Summary of sections created and files processed` +}); +``` + +#### 2. **Parameter Passing** +- Pass all necessary context to sub-agents +- Use consistent naming conventions +- Include file paths, configuration details, and constraints +- Specify output format and location explicitly + +```javascript +// Pattern: Context-aware parameters +const step1Result = await runSubagent({ + description: 'Step 1: Data preparation', + prompt: `Parameters: +- Project: ${projectName} +- Base path: ${basePath} +- Input files: ${inputDir} +- Output location: ${outputDir} + +Task: [detailed instructions]` +}); +``` + +#### 3. 
**Error Handling and Recovery**
+- Capture and log results from each sub-agent call
+- Handle failures gracefully (continue if non-critical)
+- Log timestamps and durations for troubleshooting
+- Provide fallback strategies
+
+```javascript
+// Pattern: Error handling with logging
+try {
+  const result = await runSubagent({
+    description: 'Process data',
+    prompt: buildPromptForSubAgent(params)
+  });
+
+  const duration = calculateDuration(startTime, new Date());
+  logStepCompletion(stepNumber, 'SUCCESS', duration, result);
+  return { status: 'success', result };
+
+} catch (error) {
+  const duration = calculateDuration(startTime, new Date());
+  logStepCompletion(stepNumber, 'FAILED', duration, error.message);
+
+  if (isCriticalStep) throw error; // critical steps abort the pipeline
+  return { status: 'failed', error: error.message };
+}
+```
+
+#### 4. **Sequential vs Parallel Execution**
+- Execute critical steps sequentially
+- Ensure each step receives correct input from previous step
+- Log completion before proceeding to next step
+- Wait for results before using them as input to next agent
+- Run independent steps in parallel (e.g., with `Promise.all`) only when neither consumes the other's output
+
+```javascript
+// Sequential execution pattern
+async function executeWorkflow(params) {
+  // Step 1: Requires no prior output
+  const step1 = await runSubagent({
+    description: 'Step 1: Data ingestion',
+    prompt: buildStep1Prompt(params)
+  });
+  logStep(1, step1);
+
+  // Step 2: Depends on Step 1
+  const step2 = await runSubagent({
+    description: 'Step 2: Data processing',
+    prompt: buildStep2Prompt(params, step1.result)
+  });
+  logStep(2, step2);
+
+  // Step 3: Depends on Step 2
+  const step3 = await runSubagent({
+    description: 'Step 3: Validation',
+    prompt: buildStep3Prompt(params, step2.result)
+  });
+  logStep(3, step3);
+
+  return { step1, step2, step3 };
+}
+```
+
+#### 5. 
**Result Logging and Tracking** +- Create detailed logs for pipeline execution +- Track timestamps (start, complete, duration) +- Document agent responses and outputs +- Include statistics and summaries + +```javascript +// Pattern: Comprehensive logging +function logStepCompletion(stepNum, stepName, status, startTime, endTime, + result, error = null) { + const duration = calculateDuration(startTime, endTime); + + const logEntry = ` +## Step ${stepNum}: ${stepName} +**Status:** ${status} +**Started:** ${startTime.toISOString()} +**Completed:** ${endTime.toISOString()} +**Duration:** ${duration} +**Output:** ${result ? result.summary : 'N/A'} +${error ? `**Error:** ${error}` : ''} +`; + + appendToLog(logEntry); +} +``` + +#### 6. **Conditional Sub-Agent Invocation** +- Check prerequisites before invoking sub-agents +- Skip optional steps if conditions not met +- Verify input files/folders exist +- Log skip reasons + +```javascript +// Pattern: Conditional execution +async function conditionalPipelineStep(stepNumber, params) { + const inputPath = `${params.basePath}/${params.inputFolder}`; + + // Check if input exists + if (!await fileExists(inputPath)) { + logStep(stepNumber, 'SKIPPED', 'Input folder does not exist'); + return { status: 'skipped', reason: 'Input not found' }; + } + + // Proceed with sub-agent invocation + return await runSubagent({ + description: `Execute Step ${stepNumber}`, + prompt: buildPromptForStep(stepNumber, params) + }); +} +``` + +### Sub-Agent Communication Pattern + +**Orchestrator Agent Structure**: + +```markdown +# Orchestrator Agent + +You are a workflow coordinator that manages specialized agents. 
+ +## Responsibilities +- Break complex tasks into focused subtasks +- Invoke specialized agents via runSubagent +- Log and track each step +- Handle errors and recovery +- Generate summary reports + +## Sub-agents Managed +- @specialized-agent-1: Handles task X +- @specialized-agent-2: Handles task Y +- @specialized-agent-3: Handles task Z + +## Workflow Pattern + +For each task: +1. Validate prerequisites +2. Invoke appropriate sub-agent with detailed prompt +3. Log results with timestamps +4. Proceed to next step or handle failure +5. Generate final summary +``` + +### Real-World Example: Question Generation Pipeline + +Based on the Orchestrator agent example: + +```javascript +// Orchestrator invokes multiple specialized agents +async function questionGenerationPipeline(certificationName) { + const basePath = `certifications/${certificationName}`; + + // Step 1: Resume Transcripts + const transcriptResult = await runSubagent({ + description: 'Generate course from transcripts', + prompt: `You are the Resume Transcript specialist. + +Process: ${certificationName} +Input: ${basePath}/formation/transcripts/ +Output: ${basePath}/formation/course.md + +Task: +1. Read all .transcript files +2. Organize by topic (##) and subtopic (###) +3. Create structured course document +4. Preserve all technical accuracy + +Return: Summary of sections and files processed` + }); + + // Step 2: Create Question Sets + const setResult = await runSubagent({ + description: 'Generate CSV question sets', + prompt: `You are the Create Set specialist. + +Process: ${certificationName} +Input: ${basePath}/dumps/ and ${basePath}/formation/ +Output: ${basePath}/imports/ (CSV files) + +Task: +1. Convert dumps to CSV format (max 30 questions per file) +2. Generate questions from course material +3. Apply proper formatting +4. 
Save all to imports folder + +Return: List of CSV files created with counts` + }); + + // Step 3: Add Explanations + const explResult = await runSubagent({ + description: 'Add explanations to all questions', + prompt: `You are the Add Explanation specialist. + +Process: ${certificationName} +Input: ${basePath}/imports/ (CSV files) +Output: ${basePath}/imports/ (updated CSV files) + +Task: +1. Read all CSV files +2. For each question, use MCP Context7 to find official documentation +3. Write detailed explanations with references +4. Update CSV files + +Return: Summary of explanations added` + }); + + // Step 4: Verify Questions + const verifyResult = await runSubagent({ + description: 'Validate all questions and generate reports', + prompt: `You are the Verify Question specialist. + +Process: ${certificationName} +Input: ${basePath}/imports/ (CSV files with explanations) +Output: ${basePath}/imports/report/ (validation reports) + +Task: +1. Validate each question against official documentation +2. Check answer accuracy +3. Verify explanation quality +4. Generate detailed validation reports + +Return: Summary of issues found and reports generated` + }); + + // Generate final summary + generatePipelineSummary(certificationName, { + transcriptResult, + setResult, + explResult, + verifyResult + }); +} +``` + +### Common Sub-Agent Patterns + +**Pattern 1: Sequential Data Processing** +``` +Input → Agent1 (transform) → Agent2 (enrich) → Agent3 (validate) → Output +``` + +**Pattern 2: Parallel Task Distribution** +``` +Input → Agent1 (task A) → Agent3 (combine) + → Agent2 (task B) → Output +``` + +**Pattern 3: Conditional Branching** +``` +Input → Check condition → Agent1 or Agent2 → Output +``` + +**Pattern 4: Error Recovery** +``` +Input → Agent1 (primary) → Success? → Output + ↓ Failure + Agent2 (fallback) → Output +``` + +## Agent Prompt Structure + +The markdown content below the frontmatter defines the agent's behavior, expertise, and instructions. 
Maximum length: 30,000 characters. + +### Recommended Sections + +#### 1. Agent Identity and Role +```markdown +# Agent Name + +Brief introduction explaining who the agent is and its primary role. +``` + +#### 2. Core Responsibilities +```markdown +## Core Responsibilities + +Clear list of what the agent does: +- Primary task 1 +- Primary task 2 +- Primary task 3 +``` + +#### 3. Approach and Methodology +```markdown +## Approach + +Step-by-step methodology: +1. First step +2. Second step +3. Third step +``` + +#### 4. Guidelines and Constraints +```markdown +## Guidelines + +- What the agent should always do +- What the agent should avoid +- Quality standards to maintain +``` + +#### 5. Output Expectations +```markdown +## Output Format + +Specify expected output structure, format, and quality criteria. +``` + +### Prompt Writing Best Practices + +**Be Specific and Direct**: +- Use imperative mood ("Analyze", "Generate", "Focus on") +- Avoid ambiguous terms ("should", "might", "possibly") +- Provide concrete examples when appropriate + +**Define Boundaries**: +- Clearly state what the agent should and shouldn't do +- Define scope limits explicitly +- Specify when to ask for clarification + +**Include Context**: +- Explain the agent's domain expertise +- Reference relevant frameworks, standards, or methodologies +- Provide technical context when necessary + +**Focus on Behavior**: +- Describe how the agent should think and work +- Include decision-making criteria +- Specify quality standards and validation steps + +**Use Structured Format**: +- Break content into clear sections with headers +- Use bullet points and numbered lists +- Make instructions scannable and hierarchical + +## MCP Server Configuration (Organization/Enterprise Only) + +MCP servers extend agent capabilities with additional tools. Only supported for organization and enterprise-level agents. 
+ +### Configuration Format + +```yaml +--- +name: my-custom-agent +description: 'Agent with MCP integration' +tools: ['read', 'edit', 'custom-mcp/tool-1'] +mcp-servers: + custom-mcp: + type: 'local' + command: 'some-command' + args: ['--arg1', '--arg2'] + tools: ["*"] + env: + ENV_VAR_NAME: ${{ secrets.API_KEY }} +--- +``` + +### MCP Server Properties + +- **type**: Server type (`'local'` or `'stdio'`) +- **command**: Command to start the MCP server +- **args**: Array of command arguments +- **tools**: Tools to enable from this server (`["*"]` for all) +- **env**: Environment variables (supports secrets) + +### Environment Variables and Secrets + +Secrets must be configured in repository settings under "copilot" environment. + +**Supported syntax**: +```yaml +env: + # Environment variable only + VAR_NAME: COPILOT_MCP_ENV_VAR_VALUE + + # Variable with header + VAR_NAME: $COPILOT_MCP_ENV_VAR_VALUE + VAR_NAME: ${COPILOT_MCP_ENV_VAR_VALUE} + + # GitHub Actions-style (YAML only) + VAR_NAME: ${{ secrets.COPILOT_MCP_ENV_VAR_VALUE }} + VAR_NAME: ${{ var.COPILOT_MCP_ENV_VAR_VALUE }} +``` + +## File Organization and Naming + +### Repository-Level Agents +- Location: `.github/agents/` +- Scope: Available only in the specific repository +- Access: Uses repository-configured MCP servers + +### Organization/Enterprise-Level Agents +- Location: `.github-private/agents/` (then move to `agents/` root) +- Scope: Available across all repositories in org/enterprise +- Access: Can configure dedicated MCP servers + +### Naming Conventions +- Use lowercase with hyphens: `test-specialist.agent.md` +- Name should reflect agent purpose +- Filename becomes default agent name (if `name` not specified) +- Allowed characters: `.`, `-`, `_`, `a-z`, `A-Z`, `0-9` + +## Agent Processing and Behavior + +### Versioning +- Based on Git commit SHAs for the agent file +- Create branches/tags for different agent versions +- Instantiated using latest version for repository/branch +- PR interactions use 
same agent version for consistency
+
+### Name Conflicts
+Priority (highest to lowest):
+1. Repository-level agent
+2. Organization-level agent
+3. Enterprise-level agent
+
+When names collide, the most specific configuration wins: a repository-level agent overrides organization- and enterprise-level agents with the same name.
+
+### Tool Processing
+- `tools` list filters available tools (built-in and MCP)
+- No tools specified = all tools enabled
+- Empty list (`[]`) = all tools disabled
+- Specific list = only those tools enabled
+- Unrecognized tool names are ignored (allows environment-specific tools)
+
+### MCP Server Processing Order
+1. Out-of-the-box MCP servers (e.g., GitHub MCP)
+2. Custom agent MCP configuration (org/enterprise only)
+3. Repository-level MCP configurations
+
+Each level can override settings from previous levels.
+
+## Agent Creation Checklist
+
+### Frontmatter
+- [ ] `description` field present and descriptive (50-150 chars)
+- [ ] `description` wrapped in single quotes
+- [ ] `name` specified (optional but recommended)
+- [ ] `tools` configured appropriately (or intentionally omitted)
+- [ ] `model` specified for optimal performance
+- [ ] `target` set if environment-specific
+- [ ] `infer` set to `false` if manual selection required
+
+### Prompt Content
+- [ ] Clear agent identity and role defined
+- [ ] Core responsibilities listed explicitly
+- [ ] Approach and methodology explained
+- [ ] Guidelines and constraints specified
+- [ ] Output expectations documented
+- [ ] Examples provided where helpful
+- [ ] Instructions are specific and actionable
+- [ ] Scope and boundaries clearly defined
+- [ ] Total content under 30,000 characters
+
+### File Structure
+- [ ] Filename follows lowercase-with-hyphens convention
+- [ ] File placed in correct directory (`.github/agents/` or `agents/`)
+- [ ] Filename uses only allowed characters
+- [ ] File extension is `.agent.md`
+
+### Quality Assurance
+- [ ] Agent purpose is unique and not duplicative
+- [ ] Tools are minimal and necessary
+- [ ] Instructions are clear 
and unambiguous +- [ ] Agent has been tested with representative tasks +- [ ] Documentation references are current +- [ ] Security considerations addressed (if applicable) + +## Common Agent Patterns + +### Testing Specialist +**Purpose**: Focus on test coverage and quality +**Tools**: All tools (for comprehensive test creation) +**Approach**: Analyze, identify gaps, write tests, avoid production code changes + +### Implementation Planner +**Purpose**: Create detailed technical plans and specifications +**Tools**: Limited to `['read', 'search', 'edit']` +**Approach**: Analyze requirements, create documentation, avoid implementation + +### Code Reviewer +**Purpose**: Review code quality and provide feedback +**Tools**: `['read', 'search']` only +**Approach**: Analyze, suggest improvements, no direct modifications + +### Refactoring Specialist +**Purpose**: Improve code structure and maintainability +**Tools**: `['read', 'search', 'edit']` +**Approach**: Analyze patterns, propose refactorings, implement safely + +### Security Auditor +**Purpose**: Identify security issues and vulnerabilities +**Tools**: `['read', 'search', 'web']` +**Approach**: Scan code, check against OWASP, report findings + +## Common Mistakes to Avoid + +### Frontmatter Errors +- ❌ Missing `description` field +- ❌ Description not wrapped in quotes +- ❌ Invalid tool names without checking documentation +- ❌ Incorrect YAML syntax (indentation, quotes) + +### Tool Configuration Issues +- ❌ Granting excessive tool access unnecessarily +- ❌ Missing required tools for agent's purpose +- ❌ Not using tool aliases consistently +- ❌ Forgetting MCP server namespace (`server-name/tool`) + +### Prompt Content Problems +- ❌ Vague, ambiguous instructions +- ❌ Conflicting or contradictory guidelines +- ❌ Lack of clear scope definition +- ❌ Missing output expectations +- ❌ Overly verbose instructions (exceeding character limits) +- ❌ No examples or context for complex tasks + +### Organizational Issues +- ❌ 
Filename doesn't reflect agent purpose +- ❌ Wrong directory (confusing repo vs org level) +- ❌ Using spaces or special characters in filename +- ❌ Duplicate agent names causing conflicts + +## Testing and Validation + +### Manual Testing +1. Create the agent file with proper frontmatter +2. Reload VS Code or refresh GitHub.com +3. Select the agent from the dropdown in Copilot Chat +4. Test with representative user queries +5. Verify tool access works as expected +6. Confirm output meets expectations + +### Integration Testing +- Test agent with different file types in scope +- Verify MCP server connectivity (if configured) +- Check agent behavior with missing context +- Test error handling and edge cases +- Validate agent switching and handoffs + +### Quality Checks +- Run through agent creation checklist +- Review against common mistakes list +- Compare with example agents in repository +- Get peer review for complex agents +- Document any special configuration needs + +## Additional Resources + +### Official Documentation +- [Creating Custom Agents](https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/create-custom-agents) +- [Custom Agents Configuration](https://docs.github.com/en/copilot/reference/custom-agents-configuration) +- [Custom Agents in VS Code](https://code.visualstudio.com/docs/copilot/customization/custom-agents) +- [MCP Integration](https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/extend-coding-agent-with-mcp) + +### Community Resources +- [Awesome Copilot Agents Collection](https://github.com/github/awesome-copilot/tree/main/agents) +- [Customization Library Examples](https://docs.github.com/en/copilot/tutorials/customization-library/custom-agents) +- [Your First Custom Agent Tutorial](https://docs.github.com/en/copilot/tutorials/customization-library/custom-agents/your-first-custom-agent) + +### Related Files +- [Prompt Files Guidelines](./prompt.instructions.md) - For creating prompt files +- 
[Instructions Guidelines](./instructions.instructions.md) - For creating instruction files
+
+## Version Compatibility Notes
+
+### GitHub.com (Coding Agent)
+- ✅ Fully supports all standard frontmatter properties
+- ✅ Repository and org/enterprise level agents
+- ✅ MCP server configuration (org/enterprise)
+- ❌ Does not support `model`, `argument-hint`, `handoffs` properties
+
+### VS Code / JetBrains / Eclipse / Xcode
+- ✅ Supports `model` property for AI model selection
+- ✅ Supports `argument-hint` and `handoffs` properties
+- ✅ User profile and workspace-level agents
+- ❌ Cannot configure MCP servers at repository level
+- ⚠️ Some properties may behave differently
+
+When creating agents for multiple environments, focus on common properties and test in all target environments. Use `target` property to create environment-specific agents when necessary.

From 16a2477ad49d480aa4504aac2a80cba6ad3fa29f Mon Sep 17 00:00:00 2001
From: bmaucote
Date: Wed, 17 Dec 2025 09:58:59 +0100
Subject: [PATCH 2/9] Add variables settings

---
 instructions/agents.instructions.md | 288 ++++++++++++++++++++++++++++
 1 file changed, 288 insertions(+)

diff --git a/instructions/agents.instructions.md b/instructions/agents.instructions.md
index 9e5f1215c..76479fb16 100644
--- a/instructions/agents.instructions.md
+++ b/instructions/agents.instructions.md
@@ -505,6 +505,294 @@ Input → Agent1 (primary) → Success? → Output
          Agent2 (fallback) → Output
 ```
 
+## Variable Definition and Extraction
+
+Agents can define dynamic parameters to extract values from user input and use them throughout the agent's behavior and sub-agent communications. This enables flexible, context-aware agents that adapt to user-provided data.
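To make "extract values from user input" concrete, here is a minimal sketch of one extraction step. The helper name, regex, and phrasings are illustrative assumptions only — they are not part of any Copilot or agent API, and in a real agent file this logic is usually described in prose and performed by the model:

```javascript
// Hypothetical helper: pulls a certification name out of a free-form prompt.
// The regex is an assumption for illustration, not part of any Copilot API.
function extractCertificationName(userInput) {
  // Matches phrasings like "Process <Name>" or "run the pipeline for <Name>"
  const match = userInput.match(/\b(?:process|for)\s+(.+?)\s*$/i);
  // null signals that the orchestrator should ask the user for the name
  return match ? match[1] : null;
}

console.log(extractCertificationName("Process Platform Sharing Architect"));
// → "Platform Sharing Architect"
console.log(extractCertificationName("Please summarize this repo"));
// → null (fall back to asking the user)
```

When the value cannot be recovered from the prompt, the agent should fall back to asking the user rather than guessing.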
### When to Use Variables
+
+**Use variables when**:
+- Agent behavior depends on user input
+- Need to pass dynamic values to sub-agents
+- Want to make agents reusable across different contexts
+- Require parameterized workflows
+- Need to track or reference user-provided context
+
+**Examples**:
+- Extract project name from user prompt
+- Capture certification name for pipeline processing
+- Identify file paths or directories
+- Extract configuration options
+- Parse feature names or module identifiers
+
+### Variable Declaration Pattern
+
+Define a variables section early in the agent prompt to document expected parameters:
+
+```markdown
+# Agent Name
+
+## Dynamic Parameters
+
+- **Parameter Name**: Description and usage
+- **Another Parameter**: How it's extracted and used
+
+## Your Mission
+
+Process [PARAMETER_NAME] to accomplish [task].
+```
+
+### Variable Extraction Methods
+
+#### 1. **Explicit User Input**
+Ask the user to provide the variable if not detected in the prompt:
+
+```markdown
+## Your Mission
+
+Process the project by analyzing your codebase.
+
+### Step 1: Identify Project
+If no project name is provided, **ASK THE USER** for:
+- Project name or identifier
+- Base path or directory location
+- Configuration type (if applicable)
+
+Use this information to contextualize all subsequent tasks.
+```
+
+#### 2. **Implicit Extraction from Prompt**
+Automatically extract variables from the user's natural language input:
+
+```javascript
+// Example: Extract certification name from user input
+const userInput = "Process My Certification";
+
+// Extract key information
+const certificationName = extractCertificationName(userInput);
+// Result: "My Certification"
+
+const basePath = `certifications/${certificationName}`;
+// Result: "certifications/My Certification"
+```
+
+#### 3. **Contextual Variable Resolution**
+Use file context or workspace information to derive variables:
+
+```markdown
+## Variable Resolution Strategy
+
+1. 
**From User Prompt**: First, look for explicit mentions in user input +2. **From File Context**: Check current file name or path +3. **From Workspace**: Use workspace folder or active project +4. **From Settings**: Reference configuration files +5. **Ask User**: If all else fails, request missing information +``` + +### Using Variables in Agent Prompts + +#### Variable Substitution in Instructions + +Use template variables in agent prompts to make them dynamic: + +```markdown +# Agent Name + +## Dynamic Parameters +- **Project Name**: ${projectName} +- **Base Path**: ${basePath} +- **Output Directory**: ${outputDir} + +## Your Mission + +Process the **${projectName}** project located at `${basePath}`. + +## Process Steps + +1. Read input from: `${basePath}/input/` +2. Process files according to project configuration +3. Write results to: `${outputDir}/` +4. Generate summary report + +## Quality Standards + +- Maintain project-specific coding standards for **${projectName}** +- Follow directory structure: `${basePath}/[structure]` +``` + +#### Passing Variables to Sub-Agents + +Use extracted variables when invoking sub-agents: + +```javascript +// Example: Pass projectName and basePath to sub-agent +const basePath = `projects/${projectName}`; + +// Pass to sub-agent with variable +const result = await runSubagent({ + description: 'Process project files', + prompt: `You are the Project Processor specialist. + +Process: ${projectName} +Location: ${basePath} + +Task: +1. Read all files from ${basePath}/src/ +2. Analyze project structure +3. Generate documentation +4. 
Save to ${basePath}/docs/ + +Return: Summary of analysis` +}); +``` + +### Real-World Example: Parameterized Orchestrator Agent + +```markdown +# Orchestrator Agent - Question Generation Pipeline + +## Dynamic Parameters + +- **Certification Name**: Extracted from user prompt (e.g., "Platform Sharing Architect") +- **Base Path**: Derived as `certifications/[Certification Name]/` +- **Log File**: Set to `certifications/[Certification Name]/.pipeline-log.md` + +## Your Mission + +Execute the complete question generation pipeline for the **${certificationName}** certification by invoking specialized agents without human validation between steps. + +### Initial Setup + +If no certification name is provided in your request, **ASK THE USER** which certification to process. + +## Pre-flight Checks + +Verify the certification structure at `${basePath}`: +- ✅ presentation.md exists +- ✅ formation/transcripts/ exists (checking for .transcript files) +- ℹ️ formation/course.md will be created +- ✅ dumps/ exists +- ℹ️ imports/ directory will be created + +## Pipeline Execution + +### Step 1: Resume Transcript +Invoke `@resume-transcript` agent: + +Parameters to pass: +- Certification: ${certificationName} +- Input: ${basePath}/formation/transcripts/ +- Output: ${basePath}/formation/course.md + +### Step 2: Dump Transform +Invoke `@dump-transform` agent: + +Parameters to pass: +- Certification: ${certificationName} +- Input: ${basePath}/dumps/original/ +- Output: ${basePath}/dumps/transform/ + +### Step 3: Create Question Sets +Invoke `@create-set` agent: + +Parameters to pass: +- Certification: ${certificationName} +- Inputs: ${basePath}/dumps/transform/ and ${basePath}/formation/ +- Output: ${basePath}/imports/ + +## Logging + +All operations logged to: `${logFile}` + +Track for each step: +- Status (✅ SUCCESS / ⚠️ SKIPPED / ❌ FAILED) +- Timestamps for start and completion +- Duration of execution +- Summary of results +``` + +### Variable Best Practices + +#### 1. 
**Clear Documentation** +Always document what variables are expected: + +```markdown +## Required Variables +- **projectName**: The name of the project (string, required) +- **basePath**: Root directory for project files (path, required) + +## Optional Variables +- **mode**: Processing mode - quick/standard/detailed (enum, default: standard) +- **outputFormat**: Output format - markdown/json/html (enum, default: markdown) + +## Derived Variables +- **outputDir**: Automatically set to ${basePath}/output +- **logFile**: Automatically set to ${basePath}/.log.md +``` + +#### 2. **Consistent Naming** +Use consistent variable naming conventions: + +```javascript +// Good: Clear, descriptive naming +const variables = { + projectName, // What project to work on + basePath, // Where project files are located + outputDirectory, // Where to save results + processingMode, // How to process (detail level) + configurationPath // Where config files are +}; + +// Avoid: Ambiguous or inconsistent +const bad_variables = { + name, // Too generic + path, // Unclear which path + mode, // Too short + config // Too vague +}; +``` + +#### 3. **Validation and Constraints** +Document valid values and constraints: + +```markdown +## Variable Constraints + +**projectName**: +- Type: string (alphanumeric, hyphens, underscores allowed) +- Length: 1-100 characters +- Required: yes +- Pattern: `/^[a-zA-Z0-9_-]+$/` + +**processingMode**: +- Type: enum +- Valid values: "quick" (< 5min), "standard" (5-15min), "detailed" (15+ min) +- Default: "standard" +- Required: no +``` + +#### 4. 
**Variable Scope** +Be clear about variable scope and lifetime: + +```markdown +## Variable Scope + +### Global Variables (used throughout agent execution) +- ${projectName}: Available in all prompts and sub-agents +- ${basePath}: Used for all file operations +- ${timestamp}: Available for logging + +### Local Variables (used in specific sections) +- ${currentStep}: Only in step-specific prompts +- ${stepResult}: Only after step completion +- ${errorMessage}: Only in error handling + +### Sub-Agent Variables (passed to child agents) +- ${projectName}: Always pass to maintain context +- ${basePath}: Critical for file operations +- ${mode}: Inherit from parent agent +``` + ## Agent Prompt Structure The markdown content below the frontmatter defines the agent's behavior, expertise, and instructions. Maximum length: 30,000 characters. From 074a76d0f4ef49e241a1edf53da8b029f7d98e5a Mon Sep 17 00:00:00 2001 From: bmaucote Date: Wed, 17 Dec 2025 10:03:58 +0100 Subject: [PATCH 3/9] Edit condition using --- instructions/agents.instructions.md | 124 ++++++++++++++++++++++++---- 1 file changed, 108 insertions(+), 16 deletions(-) diff --git a/instructions/agents.instructions.md b/instructions/agents.instructions.md index 76479fb16..e550a31d7 100644 --- a/instructions/agents.instructions.md +++ b/instructions/agents.instructions.md @@ -339,23 +339,115 @@ ${error ? 
`**Error:** ${error}` : ''} - Verify input files/folders exist - Log skip reasons +Define conditions in markdown format before invoking sub-agents: + +```markdown +#### Step X: Agent Name + +**Condition:** Check if specific files or folders exist +- Only if `formation/transcripts/` folder exists and contains `.transcript` files +- Only if `formation/course.md` file already exists +- At least ONE of these exists: `dumps/transform/` or `formation/course.md` + +**Action:** Invoke `@agent-name` agent via `runSubagent` (if condition met) +**Priority:** Specify if critical, optional, or has dependencies + +**Execution:** ```javascript -// Pattern: Conditional execution -async function conditionalPipelineStep(stepNumber, params) { - const inputPath = `${params.basePath}/${params.inputFolder}`; - - // Check if input exists - if (!await fileExists(inputPath)) { - logStep(stepNumber, 'SKIPPED', 'Input folder does not exist'); - return { status: 'skipped', reason: 'Input not found' }; - } - - // Proceed with sub-agent invocation - return await runSubagent({ - description: `Execute Step ${stepNumber}`, - prompt: buildPromptForStep(stepNumber, params) - }); -} +const stepResult = await runSubagent({ + description: 'Step description', + prompt: `You are the Agent Name specialist...` +}); + +logStepCompletion(stepNumber, 'Agent Name', stepResult); +``` + +**Log Entry:** +```markdown +## Step X: Agent Name +**Status:** ✅ SUCCESS / ⚠️ SKIPPED / ❌ FAILED +**Reason:** [Why skipped if applicable] +**Started:** [timestamp] +**Completed:** [timestamp] +**Duration:** [HH:mm:ss] +**Output:** [Summary of results] +``` + +**Real-world example from pipeline:** + +```markdown +#### Step 1: Resume Transcript +**Condition:** Only if `formation/transcripts/` folder exists and contains `.transcript` files +**Action:** Invoke `@resume-transcript` agent via `runSubagent` + +#### Step 2: Course Add +**Condition:** Only if `formation/` folder exists AND `formation/course.md` exists +**Action:** 
Invoke `@course-add` agent via `runSubagent` (OPTIONAL) + +#### Step 3: Create Set Questions +**Condition:** At least ONE of these exists: +- `dumps/transform/` with `.dump.txt` files +- `formation/course.md` file + +**Action:** Invoke `@create-set` agent via `runSubagent` +**Priority:** Process dumps first, then course + +#### Step 4: Add Explanation +**Condition:** `imports/` folder contains `.csv` files +**Action:** Invoke `@add-explanation` agent via `runSubagent` + +#### Step 5: Verify Question +**Condition:** `imports/` folder contains `.csv` files with explanations +**Action:** Invoke `@verify-question` agent via `runSubagent` +``` + +**Orchestration flow pattern:** + +```markdown +## Pipeline Execution + +Execute steps sequentially, checking conditions before each invocation: + +For each step: +1. **Check Condition**: Verify if prerequisites are met +2. **Log Decision**: Document if executing or skipping +3. **Execute or Skip**: Invoke sub-agent if condition true +4. **Log Result**: Track success/failure with timestamp and duration +5. 
**Continue**: Proceed to next step
+
+### Critical vs Non-Critical Steps
+
+**Critical steps** (must succeed):
+- Step 3 (Create Set Questions) - at least 1 CSV file generated
+- Step 4 (Add Explanation) - explanations added to questions
+- Step 5 (Verify Question) - validation completed
+
+**Non-critical steps** (can be skipped):
+- Step 1 (Resume Transcript) - if course.md already exists
+- Step 2 (Course Add) - optional enrichment
+- Dump Transform (when used) - if no original dumps available
+
+### Status Tracking
+
+For each step, log the outcome:
+
+```markdown
+## Step 1: Resume Transcript
+**Status:** ✅ SUCCESS / ⚠️ SKIPPED / ❌ FAILED
+**Condition Check:** ✅ Input folder exists (20 files)
+**Started:** 2025-12-17T10:30:15Z
+**Completed:** 2025-12-17T10:32:48Z
+**Duration:** 2m 33s
+**Output:** formation/course.md created with 3 sections, 24 subsections
+**Issues:** None
+
+## Step 2: Course Add
+**Status:** ⚠️ SKIPPED
+**Condition Check:** Course file exists but enrichment optional
+**Reason:** User chose standard processing mode
+**Started:** N/A
+**Completed:** N/A
+**Duration:** 0s
 ```
 
 ### Sub-Agent Communication Pattern

From 1a68f1d35cefeec87bcba5af3bcaa313b732cf13 Mon Sep 17 00:00:00 2001
From: bmaucote
Date: Wed, 17 Dec 2025 10:08:39 +0100
Subject: [PATCH 4/9] Edit example

---
 instructions/agents.instructions.md | 262 ++++++++++++++++++----
 1 file changed, 170 insertions(+), 92 deletions(-)

diff --git a/instructions/agents.instructions.md b/instructions/agents.instructions.md
index e550a31d7..f1195bc36 100644
--- a/instructions/agents.instructions.md
+++ b/instructions/agents.instructions.md
@@ -481,97 +481,125 @@ For each task:
 5.
Generate final summary ``` -### Real-World Example: Question Generation Pipeline +### Real-World Example: API Documentation Pipeline -Based on the Orchestrator agent example: +A practical example of an orchestrator managing multiple specialized agents to auto-generate comprehensive API documentation: ```javascript -// Orchestrator invokes multiple specialized agents -async function questionGenerationPipeline(certificationName) { - const basePath = `certifications/${certificationName}`; +// Orchestrator invokes multiple specialized agents for API documentation +async function apiDocumentationPipeline(projectName) { + const basePath = `projects/${projectName}`; - // Step 1: Resume Transcripts - const transcriptResult = await runSubagent({ - description: 'Generate course from transcripts', - prompt: `You are the Resume Transcript specialist. + // Step 1: Analyze API Endpoints + const analysisResult = await runSubagent({ + description: 'Scan and catalog all API endpoints', + prompt: `You are the API Analyzer specialist. -Process: ${certificationName} -Input: ${basePath}/formation/transcripts/ -Output: ${basePath}/formation/course.md +Project: ${projectName} +Input: ${basePath}/src/api/ +Output: ${basePath}/docs/endpoints.json Task: -1. Read all .transcript files -2. Organize by topic (##) and subtopic (###) -3. Create structured course document -4. Preserve all technical accuracy +1. Scan all route files and controllers +2. Extract endpoints: methods, paths, parameters +3. Identify request/response schemas +4. Catalog authentication requirements +5. Generate structured endpoint catalog -Return: Summary of sections and files processed` +Return: Number of endpoints found and schemas identified` }); - // Step 2: Create Question Sets - const setResult = await runSubagent({ - description: 'Generate CSV question sets', - prompt: `You are the Create Set specialist. 
+ // Step 2: Extract Code Documentation + const docResult = await runSubagent({ + description: 'Extract JSDoc and inline code comments', + prompt: `You are the Documentation Extractor specialist. -Process: ${certificationName} -Input: ${basePath}/dumps/ and ${basePath}/formation/ -Output: ${basePath}/imports/ (CSV files) +Project: ${projectName} +Input: ${basePath}/src/api/ +Output: ${basePath}/docs/raw-comments.md Task: -1. Convert dumps to CSV format (max 30 questions per file) -2. Generate questions from course material -3. Apply proper formatting -4. Save all to imports folder +1. Parse all JSDoc comments from route handlers +2. Extract parameter descriptions and types +3. Collect error scenarios and status codes +4. Gather usage examples if present +5. Compile into structured format -Return: List of CSV files created with counts` +Return: Total endpoints with documentation coverage` }); - // Step 3: Add Explanations - const explResult = await runSubagent({ - description: 'Add explanations to all questions', - prompt: `You are the Add Explanation specialist. + // Step 3: Generate Markdown Documentation + const mdResult = await runSubagent({ + description: 'Create formatted API documentation', + prompt: `You are the Documentation Generator specialist. -Process: ${certificationName} -Input: ${basePath}/imports/ (CSV files) -Output: ${basePath}/imports/ (updated CSV files) +Project: ${projectName} +Input: ${basePath}/docs/endpoints.json and ${basePath}/docs/raw-comments.md +Output: ${basePath}/docs/api-reference.md Task: -1. Read all CSV files -2. For each question, use MCP Context7 to find official documentation -3. Write detailed explanations with references -4. Update CSV files - -Return: Summary of explanations added` +1. Create markdown documentation structure +2. Organize endpoints by resource/module +3. Include request/response examples +4. Add authentication sections +5. Create table of contents and search index +6. 
Apply consistent formatting and style + +Return: Documentation file created with page count` }); - // Step 4: Verify Questions - const verifyResult = await runSubagent({ - description: 'Validate all questions and generate reports', - prompt: `You are the Verify Question specialist. + // Step 4: Generate OpenAPI/Swagger Spec + const specResult = await runSubagent({ + description: 'Create machine-readable API spec', + prompt: `You are the Spec Generator specialist. -Process: ${certificationName} -Input: ${basePath}/imports/ (CSV files with explanations) -Output: ${basePath}/imports/report/ (validation reports) +Project: ${projectName} +Input: ${basePath}/docs/endpoints.json +Output: ${basePath}/docs/openapi.yaml Task: -1. Validate each question against official documentation -2. Check answer accuracy -3. Verify explanation quality -4. Generate detailed validation reports +1. Generate OpenAPI 3.0 specification +2. Map all endpoints with full details +3. Include request/response schemas +4. Add security definitions +5. Create example requests/responses +6. Validate spec for compliance + +Return: Spec file generated and validated` + }); + + // Step 5: Validate Documentation Completeness + const validateResult = await runSubagent({ + description: 'Check documentation quality and coverage', + prompt: `You are the Documentation Validator specialist. -Return: Summary of issues found and reports generated` +Project: ${projectName} +Input: ${basePath}/docs/ (all generated files) +Output: ${basePath}/docs/validation-report.md + +Task: +1. Compare documented endpoints vs actual code +2. Check for missing parameter descriptions +3. Verify all status codes are documented +4. Validate example request formats +5. Check for outdated or broken links +6. 
Generate quality report with scores + +Return: Summary of issues found and coverage percentage` }); // Generate final summary - generatePipelineSummary(certificationName, { - transcriptResult, - setResult, - explResult, - verifyResult + generateDocumentationSummary(projectName, { + analysisResult, + docResult, + mdResult, + specResult, + validateResult }); } ``` + ### Common Sub-Agent Patterns **Pattern 1: Sequential Data Processing** @@ -740,69 +768,119 @@ Return: Summary of analysis` ### Real-World Example: Parameterized Orchestrator Agent +Example of a code review orchestrator that validates pull requests across multiple dimensions: + ```markdown -# Orchestrator Agent - Question Generation Pipeline +# Code Review Orchestrator Agent ## Dynamic Parameters -- **Certification Name**: Extracted from user prompt (e.g., "Platform Sharing Architect") -- **Base Path**: Derived as `certifications/[Certification Name]/` -- **Log File**: Set to `certifications/[Certification Name]/.pipeline-log.md` +- **Repository Name**: Extracted from user prompt (e.g., "my-awesome-app") +- **Pull Request ID**: Provided in user request (e.g., "PR #42") +- **Base Path**: Derived as `projects/${repositoryName}/pr-${prNumber}/` +- **Review Report**: Set to `projects/${repositoryName}/pr-${prNumber}/review-report.md` ## Your Mission -Execute the complete question generation pipeline for the **${certificationName}** certification by invoking specialized agents without human validation between steps. +Execute a comprehensive multi-aspect code review for **PR #${prNumber}** on **${repositoryName}** by invoking specialized agents without requiring manual coordination between steps. ### Initial Setup -If no certification name is provided in your request, **ASK THE USER** which certification to process. 
+If repository or PR details are not provided, **ASK THE USER** for: +- Repository name or identifier +- Pull request number +- Review scope (all aspects or specific areas) ## Pre-flight Checks -Verify the certification structure at `${basePath}`: -- ✅ presentation.md exists -- ✅ formation/transcripts/ exists (checking for .transcript files) -- ℹ️ formation/course.md will be created -- ✅ dumps/ exists -- ℹ️ imports/ directory will be created +Verify the pull request structure at `${basePath}`: +- ✅ PR metadata exists +- ✅ Changed files are accessible +- ℹ️ Review reports will be generated +- ✅ Code quality tools configured -## Pipeline Execution +## Review Pipeline Execution + +### Step 1: Security Analysis +Invoke `@security-reviewer` agent: + +**Condition:** Always execute for code reviews + +Parameters to pass: +- Repository: ${repositoryName} +- PR: ${prNumber} +- Input: ${basePath}/changes/ +- Output: ${basePath}/security-review.md -### Step 1: Resume Transcript -Invoke `@resume-transcript` agent: +### Step 2: Performance Audit +Invoke `@performance-analyzer` agent: + +**Condition:** Only if code contains computational components or database queries + +Parameters to pass: +- Repository: ${repositoryName} +- PR: ${prNumber} +- Input: ${basePath}/changes/ +- Output: ${basePath}/performance-report.md + +### Step 3: Test Coverage Analysis +Invoke `@test-coverage-checker` agent: + +**Condition:** Always execute + +Parameters to pass: +- Repository: ${repositoryName} +- PR: ${prNumber} +- Input: ${basePath}/changes/ and test configuration +- Output: ${basePath}/coverage-analysis.md + +### Step 4: Code Style Validation +Invoke `@style-validator` agent: + +**Condition:** Only if style rules are configured for repository Parameters to pass: -- Certification: ${certificationName} -- Input: ${basePath}/formation/transcripts/ -- Output: ${basePath}/formation/course.md +- Repository: ${repositoryName} +- PR: ${prNumber} +- Input: ${basePath}/changes/ +- Output: 
${basePath}/style-report.md -### Step 2: Dump Transform -Invoke `@dump-transform` agent: +### Step 5: Documentation Validation +Invoke `@documentation-reviewer` agent: + +**Condition:** Only if API/public methods were modified Parameters to pass: -- Certification: ${certificationName} -- Input: ${basePath}/dumps/original/ -- Output: ${basePath}/dumps/transform/ +- Repository: ${repositoryName} +- PR: ${prNumber} +- Input: ${basePath}/changes/ +- Output: ${basePath}/documentation-review.md + +### Step 6: Compile Final Review +Invoke `@review-aggregator` agent: -### Step 3: Create Question Sets -Invoke `@create-set` agent: +**Condition:** Always execute after individual reviews Parameters to pass: -- Certification: ${certificationName} -- Inputs: ${basePath}/dumps/transform/ and ${basePath}/formation/ -- Output: ${basePath}/imports/ +- Repository: ${repositoryName} +- PR: ${prNumber} +- Input: ${basePath}/ (all review reports) +- Output: ${basePath}/review-report.md ## Logging -All operations logged to: `${logFile}` +All review operations logged to: `${reviewReport}` -Track for each step: -- Status (✅ SUCCESS / ⚠️ SKIPPED / ❌ FAILED) +Track for each review aspect: +- Status (✅ APPROVED / ⚠️ NEEDS FIXES / ❌ BLOCKED) +- Critical issues count +- Warnings count +- Recommendations count - Timestamps for start and completion -- Duration of execution -- Summary of results +- Duration of review ``` + ### Variable Best Practices #### 1. 
**Clear Documentation** From 6c97aa8789f085915cdb892cb72b67bed770d487 Mon Sep 17 00:00:00 2001 From: bmaucote Date: Wed, 17 Dec 2025 15:21:23 +0100 Subject: [PATCH 5/9] Fix agent names --- instructions/agents.instructions.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/instructions/agents.instructions.md b/instructions/agents.instructions.md index f1195bc36..5e67bce21 100644 --- a/instructions/agents.instructions.md +++ b/instructions/agents.instructions.md @@ -25,7 +25,7 @@ Every agent file must include YAML frontmatter with the following fields: description: 'Brief description of the agent purpose and capabilities' name: 'Agent Display Name' tools: ['read', 'edit', 'search'] -model: 'claude-4.5-sonnet' +model: 'Claude Sonnet 4.5' target: 'vscode' infer: true --- @@ -53,7 +53,7 @@ infer: true #### **model** (STRONGLY RECOMMENDED) - Specifies which AI model the agent should use - Supported in VS Code, JetBrains IDEs, Eclipse, and Xcode -- Example: `'claude-4.5-sonnet'`, `'gpt-4'`, `'gpt-4o'` +- Example: `'Claude Sonnet 4.5'`, `'gpt-4'`, `'gpt-4o'` - Choose based on agent complexity and required capabilities #### **target** (OPTIONAL) From dcda729e72d268450291e6fec6ec759f977a1dae Mon Sep 17 00:00:00 2001 From: bmaucote Date: Wed, 17 Dec 2025 16:24:08 +0100 Subject: [PATCH 6/9] Update readme --- docs/README.instructions.md | 1 + instructions/agents.instructions.md | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/docs/README.instructions.md b/docs/README.instructions.md index 4101f66b3..da7b4a7f6 100644 --- a/docs/README.instructions.md +++ b/docs/README.instructions.md @@ -52,6 +52,7 @@ Team and project-specific instructions to enhance GitHub Copilot's behavior for | [Convert Spring JPA project to Spring Data Cosmos](../instructions/convert-jpa-to-spring-data-cosmos.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fconvert-jpa-to-spring-data-cosmos.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fconvert-jpa-to-spring-data-cosmos.instructions.md) | Step-by-step guide for converting Spring Boot JPA applications to use Azure Cosmos DB with Spring Data Cosmos | | [Copilot Process tracking Instructions](../instructions/copilot-thought-logging.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcopilot-thought-logging.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcopilot-thought-logging.instructions.md) | See process Copilot is following where you can edit this to reshape the interaction or save when follow up may be needed | | [Copilot Prompt Files Guidelines](../instructions/prompt.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fprompt.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fprompt.instructions.md) | Guidelines for creating high-quality prompt files for GitHub Copilot | +| [Custom Agent File Guidelines](../instructions/agents.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fagents.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fagents.instructions.md) | Guidelines for creating custom agent files for GitHub Copilot | | [Custom Instructions File Guidelines](../instructions/instructions.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Finstructions.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Finstructions.instructions.md) | Guidelines for creating high-quality custom instruction files for GitHub Copilot | | [Dart and Flutter](../instructions/dart-n-flutter.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fdart-n-flutter.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fdart-n-flutter.instructions.md) | Instructions for writing Dart and Flutter code following the official recommendations. | | [Dataverse SDK for Python - Advanced Features Guide](../instructions/dataverse-python-advanced-features.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fdataverse-python-advanced-features.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fdataverse-python-advanced-features.instructions.md) | Guide specific coding standards and best practices | diff --git a/instructions/agents.instructions.md b/instructions/agents.instructions.md index 5e67bce21..21e19374f 100644 --- a/instructions/agents.instructions.md +++ b/instructions/agents.instructions.md @@ -1,5 +1,5 @@ --- -description: 'Guidelines for creating high-quality custom agent files for GitHub Copilot' +description: 'Guidelines for creating custom agent files for GitHub Copilot' applyTo: '**/*.agent.md' --- From 24f0c9bde0b61b316321729ba21719affbe42771 Mon Sep 17 00:00:00 2001 From: bmaucote Date: Fri, 19 Dec 2025 08:56:11 +0100 Subject: [PATCH 7/9] Clean sub agent parts --- instructions/agents.instructions.md | 512 +++------------------------- 1 file changed, 50 insertions(+), 462 deletions(-) diff --git a/instructions/agents.instructions.md b/instructions/agents.instructions.md index 21e19374f..98f125446 100644 --- a/instructions/agents.instructions.md +++ b/instructions/agents.instructions.md @@ -142,488 +142,76 @@ tools: ['playwright/navigate', 'playwright/screenshot'] # Specific tools ## Sub-Agent Invocation (Agent Orchestration) -Agents can invoke other agents using the `runSubagent` function. This enables complex workflows where a coordinator agent orchestrates multiple specialized agents to accomplish multi-step tasks. +Agents can invoke other agents using `runSubagent` to orchestrate multi-step workflows. 
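The sentence above can be sketched as a minimal orchestration loop. This is an illustrative sketch only: `runSubagent` is stubbed here because the real function is supplied by the Copilot runtime when `agent` appears in the tools list, and the step names and project paths are hypothetical.

```javascript
// Minimal sketch of sequential sub-agent orchestration with per-step logging.
// NOTE: `runSubagent` is stubbed for illustration only; in a real custom agent
// the runtime provides it when 'agent' is listed in the frontmatter tools.
async function runSubagent({ description, prompt }) {
  // Stub: a real implementation delegates `prompt` to the named specialist.
  return { summary: `Completed: ${description}` };
}

async function runPipeline(projectName) {
  const basePath = `projects/${projectName}`; // hypothetical layout
  const log = [];

  // Hypothetical steps; a real agent defines these in its prompt body.
  const steps = [
    { name: 'Analyze', prompt: `Analyze files under ${basePath}/src/` },
    { name: 'Document', prompt: `Write docs to ${basePath}/docs/` },
  ];

  for (const step of steps) {
    const started = Date.now();
    try {
      const result = await runSubagent({
        description: step.name,
        prompt: step.prompt,
      });
      log.push({
        step: step.name,
        status: 'SUCCESS',
        durationMs: Date.now() - started,
        summary: result.summary,
      });
    } catch (err) {
      // Log and continue: non-critical steps should not abort the pipeline.
      log.push({
        step: step.name,
        status: 'FAILED',
        durationMs: Date.now() - started,
        error: String(err),
      });
    }
  }

  return log;
}
```

Each entry in the returned log mirrors the status, timestamp, and duration fields used in the logging examples elsewhere in this document.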
-### When to Use Sub-Agents +### How It Works -**Use sub-agents when**: -- Breaking complex tasks into specialized subtasks -- Creating orchestration/workflow agents -- Delegating specific responsibilities to expert agents -- Enabling agent composition and reusability -- Implementing pipeline or sequential workflows - -**Examples**: -- An orchestrator agent managing a multi-step question generation pipeline -- A CI/CD coordinator delegating tasks to specialized agents -- A project manager agent breaking down work across technical specialists -- A code review system with different agents for security, performance, style - -### Sub-Agent Configuration - -To enable sub-agent invocation, include `agent` in the tools list: +Include `agent` in tools list, then invoke other agents with detailed prompts: ```yaml ---- -description: 'Orchestrates specialized agents for multi-step workflows' -name: 'Orchestrator Agent' tools: ['read', 'edit', 'search', 'agent'] ---- -``` - -### Sub-Agent Invocation Syntax - -Use `runSubagent` to invoke another agent with a detailed prompt: - -```javascript -const result = await runSubagent({ - description: 'Brief description of what this sub-task accomplishes', - prompt: `You are the [Agent Name] specialist. - -Your task: -1. [Step 1 description] -2. [Step 2 description] -3. [Step 3 description] - -Input parameters: -- Parameter 1: [value] -- Parameter 2: [value] - -Expected output: -[Specify format and location of results] - -Quality standards: -- [Standard 1] -- [Standard 2] -- [Standard 3]` -}); ``` -### Best Practices for Sub-Agent Orchestration - -#### 1. 
**Clear Task Definition**
-- Provide explicit, unambiguous instructions
-- Define inputs and outputs clearly
-- Specify expected file locations and formats
-- Include quality standards and validation criteria
+### Example: Code Review Pipeline
 
 ```javascript
-// Good: Clear and specific
-const result = await runSubagent({
-  description: 'Generate course materials from transcripts',
-  prompt: `Process certification: ${certName}
-
-Read all .transcript files from: certifications/${certName}/formation/transcripts/
-Create organized course document at: certifications/${certName}/formation/course.md
-
-Structure requirements:
-- Use ## for sections, ### for subsections
-- Include code blocks for formulas and examples
-- Preserve all technical terminology
-- No additional information beyond transcripts
-
-Return: Summary of sections created and files processed`
-});
-```
-
-#### 2. **Parameter Passing**
-- Pass all necessary context to sub-agents
-- Use consistent naming conventions
-- Include file paths, configuration details, and constraints
-- Specify output format and location explicitly
-
-```javascript
-// Pattern: Context-aware parameters
-const step1Result = await runSubagent({
-  description: 'Step 1: Data preparation',
-  prompt: `Parameters:
-- Project: ${projectName}
-- Base path: ${basePath}
-- Input files: ${inputDir}
-- Output location: ${outputDir}
-
-Task: [detailed instructions]`
-});
-```
-
-#### 3. **Error Handling and Recovery**
-- Capture and log results from each sub-agent call
-- Handle failures gracefully (continue if non-critical)
-- Log timestamps and durations for troubleshooting
-- Provide fallback strategies
-
-```javascript
-// Pattern: Error handling with logging
-try {
-  const result = await runSubagent({
-    description: 'Process data',
-    prompt: buildPromptForSubAgent(params)
+async function codeReviewPipeline(repositoryName, prNumber) {
+  const basePath = `projects/${repositoryName}/pr-${prNumber}`;
+
+  // Step 1: Security Review
+  const security = await runSubagent({
+    description: 'Analyze security issues in pull request',
+    prompt: `You are the Security Reviewer specialist.
+Repository: ${repositoryName}
+PR: ${prNumber}
+Files: ${basePath}/changes/
+
+Scan for OWASP Top 10, injection attacks, auth flaws, and report findings in ${basePath}/security-review.md`
   });
-  const duration = calculateDuration(startTime, new Date());
-  logStepCompletion(stepNumber, 'SUCCESS', duration, result);
-  return { status: 'success', result };
-
-} catch (error) {
-  const duration = calculateDuration(startTime, new Date());
-  logStepCompletion(stepNumber, 'FAILED', duration, error.message);
-
-  if (isComposingStep) throw error; // Critical step
-  return { status: 'failed', error: error.message };
-}
-```
-
-#### 4. **Sequential vs Parallel Execution**
-- Execute critical steps sequentially
-- Ensure each step receives correct input from previous step
-- Log completion before proceeding to next step
-- Wait for results before using them as input to next agent
-
-```javascript
-// Sequential execution pattern
-async function executeWorkflow(params) {
-  // Step 1: Requires no prior output
-  const step1 = await runSubagent({
-    description: 'Step 1: Data ingestion',
-    prompt: buildStep1Prompt(params)
+
+  // Step 2: Performance Analysis (if applicable)
+  if (hasComputationalCode(basePath)) {
+    const performance = await runSubagent({
+      description: 'Check performance and optimization opportunities',
+      prompt: `You are the Performance Analyzer specialist.
+Repository: ${repositoryName}
+Files: ${basePath}/changes/
+
+Analyze database queries, algorithms, and memory usage. Report in ${basePath}/performance-report.md`
+    });
+  }
+
+  // Step 3: Test Coverage
+  const coverage = await runSubagent({
+    description: 'Verify test coverage for changed code',
+    prompt: `You are the Test Coverage Checker specialist.
+Repository: ${repositoryName}
+PR: ${prNumber}
+Changes: ${basePath}/changes/
+
+Check test coverage and suggest missing tests. Report in ${basePath}/coverage-analysis.md`
   });
-  logStep(1, step1);
-  // Step 2: Depends on Step 1
-  const step2 = await runSubagent({
-    description: 'Step 2: Data processing',
-    prompt: buildStep2Prompt(params, step1.result)
-  });
-  logStep(2, step2);
+
+  // Step 4: Compile Results
+  const finalReport = await runSubagent({
+    description: 'Aggregate all review findings',
+    prompt: `You are the Review Aggregator specialist.
+Repository: ${repositoryName}
+Reviews: ${basePath}/*.md
-  // Step 3: Depends on Step 2
-  const step3 = await runSubagent({
-    description: 'Step 3: Validation',
-    prompt: buildStep3Prompt(params, step2.result)
+
+Combine all reviews into final report at ${basePath}/final-review.md with verdict (APPROVED/NEEDS_FIXES/BLOCKED)`
   });
-  logStep(3, step3);
-  return { step1, step2, step3 };
+
+  return finalReport;
 }
 ```
 
-#### 5. **Result Logging and Tracking**
-- Create detailed logs for pipeline execution
-- Track timestamps (start, complete, duration)
-- Document agent responses and outputs
-- Include statistics and summaries
-
-```javascript
-// Pattern: Comprehensive logging
-function logStepCompletion(stepNum, stepName, status, startTime, endTime,
-                           result, error = null) {
-  const duration = calculateDuration(startTime, endTime);
-
-  const logEntry = `
-## Step ${stepNum}: ${stepName}
-**Status:** ${status}
-**Started:** ${startTime.toISOString()}
-**Completed:** ${endTime.toISOString()}
-**Duration:** ${duration}
-**Output:** ${result ? result.summary : 'N/A'}
-${error ? `**Error:** ${error}` : ''}
-`;
-
-  appendToLog(logEntry);
-}
-```
-
-#### 6. **Conditional Sub-Agent Invocation**
-- Check prerequisites before invoking sub-agents
-- Skip optional steps if conditions not met
-- Verify input files/folders exist
-- Log skip reasons
-
-Define conditions in markdown format before invoking sub-agents:
-
-```markdown
-#### Step X: Agent Name
-
-**Condition:** Check if specific files or folders exist
-- Only if `formation/transcripts/` folder exists and contains `.transcript` files
-- Only if `formation/course.md` file already exists
-- At least ONE of these exists: `dumps/transform/` or `formation/course.md`
-
-**Action:** Invoke `@agent-name` agent via `runSubagent` (if condition met)
-**Priority:** Specify if critical, optional, or has dependencies
-
-**Execution:**
-```javascript
-const stepResult = await runSubagent({
-  description: 'Step description',
-  prompt: `You are the Agent Name specialist...`
-});
-
-logStepCompletion(stepNumber, 'Agent Name', stepResult);
-```
-
-**Log Entry:**
-```markdown
-## Step X: Agent Name
-**Status:** ✅ SUCCESS / ⚠️ SKIPPED / ❌ FAILED
-**Reason:** [Why skipped if applicable]
-**Started:** [timestamp]
-**Completed:** [timestamp]
-**Duration:** [HH:mm:ss]
-**Output:** [Summary of results]
-```
-
-**Real-world example from pipeline:**
-
-```markdown
-#### Step 1: Resume Transcript
-**Condition:** Only if `formation/transcripts/` folder exists and contains `.transcript` files
-**Action:** Invoke `@resume-transcript` agent via `runSubagent`
-
-#### Step 2: Course Add
-**Condition:** Only if `formation/` folder exists AND `formation/course.md` exists
-**Action:** Invoke `@course-add` agent via `runSubagent` (OPTIONAL)
-
-#### Step 3: Create Set Questions
-**Condition:** At least ONE of these exists:
-- `dumps/transform/` with `.dump.txt` files
-- `formation/course.md` file
-
-**Action:** Invoke `@create-set` agent via `runSubagent`
-**Priority:** Process dumps first, then course
-
-#### Step 4: Add Explanation
-**Condition:** `imports/` folder contains `.csv` files
-**Action:** Invoke `@add-explanation` agent via `runSubagent`
-
-#### Step 5: Verify Question
-**Condition:** `imports/` folder contains `.csv` files with explanations
-**Action:** Invoke `@verify-question` agent via `runSubagent`
-```
+### Key Points
-
-**Orchestration flow pattern:**
-
-```markdown
-## Pipeline Execution
-
-Execute steps sequentially, checking conditions before each invocation:
-
-For each step:
-1. **Check Condition**: Verify if prerequisites are met
-2. **Log Decision**: Document if executing or skipping
-3. **Execute or Skip**: Invoke sub-agent if condition true
-4. **Log Result**: Track success/failure with timestamp and duration
-5. **Continue**: Proceed to next step
-
-### Critical vs Non-Critical Steps
-
-**Critical steps** (must succeed):
-- Step 4 (Create Set Questions) - at least 1 CSV file generated
-- Step 5 (Add Explanation) - explanations added to questions
-- Step 6 (Verify Question) - validation completed
-
-**Non-critical steps** (can be skipped):
-- Step 1 (Resume Transcript) - if course.md already exists
-- Step 2 (Course Add) - optional enrichment
-- Step 3 (Dump Transform) - if no original dumps available
-
-### Status Tracking
-
-For each step, log the outcome:
-
-```markdown
-## Step 1: Resume Transcript
-**Status:** ✅ SUCCESS / ⚠️ SKIPPED / ❌ FAILED
-**Condition Check:** ✅ Input folder exists (20 files)
-**Started:** 2025-12-17T10:30:15Z
-**Completed:** 2025-12-17T10:32:48Z
-**Duration:** 2m 33s
-**Output:** formation/course.md created with 3 sections, 24 subsections
-**Issues:** None
-
-## Step 2: Course Add
-**Status:** ⚠️ SKIPPED
-**Condition Check:** Course file exists but enrichment optional
-**Reason:** User chose standard processing mode
-**Started:** N/A
-**Completed:** N/A
-**Duration:** 0s
-```
-
-### Sub-Agent Communication Pattern
-
-**Orchestrator Agent Structure**:
-
-```markdown
-# Orchestrator Agent
-
-You are a workflow coordinator that manages specialized agents.
-
-## Responsibilities
-- Break complex tasks into focused subtasks
-- Invoke specialized agents via runSubagent
-- Log and track each step
-- Handle errors and recovery
-- Generate summary reports
-
-## Sub-agents Managed
-- @specialized-agent-1: Handles task X
-- @specialized-agent-2: Handles task Y
-- @specialized-agent-3: Handles task Z
-
-## Workflow Pattern
-
-For each task:
-1. Validate prerequisites
-2. Invoke appropriate sub-agent with detailed prompt
-3. Log results with timestamps
-4. Proceed to next step or handle failure
-5. Generate final summary
-```
-
-### Real-World Example: API Documentation Pipeline
-
-A practical example of an orchestrator managing multiple specialized agents to auto-generate comprehensive API documentation:
-
-```javascript
-// Orchestrator invokes multiple specialized agents for API documentation
-async function apiDocumentationPipeline(projectName) {
-  const basePath = `projects/${projectName}`;
-
-  // Step 1: Analyze API Endpoints
-  const analysisResult = await runSubagent({
-    description: 'Scan and catalog all API endpoints',
-    prompt: `You are the API Analyzer specialist.
-
-Project: ${projectName}
-Input: ${basePath}/src/api/
-Output: ${basePath}/docs/endpoints.json
-
-Task:
-1. Scan all route files and controllers
-2. Extract endpoints: methods, paths, parameters
-3. Identify request/response schemas
-4. Catalog authentication requirements
-5. Generate structured endpoint catalog
-
-Return: Number of endpoints found and schemas identified`
-  });
-
-  // Step 2: Extract Code Documentation
-  const docResult = await runSubagent({
-    description: 'Extract JSDoc and inline code comments',
-    prompt: `You are the Documentation Extractor specialist.
-
-Project: ${projectName}
-Input: ${basePath}/src/api/
-Output: ${basePath}/docs/raw-comments.md
-
-Task:
-1. Parse all JSDoc comments from route handlers
-2. Extract parameter descriptions and types
-3. Collect error scenarios and status codes
-4. Gather usage examples if present
-5. Compile into structured format
-
-Return: Total endpoints with documentation coverage`
-  });
-
-  // Step 3: Generate Markdown Documentation
-  const mdResult = await runSubagent({
-    description: 'Create formatted API documentation',
-    prompt: `You are the Documentation Generator specialist.
-
-Project: ${projectName}
-Input: ${basePath}/docs/endpoints.json and ${basePath}/docs/raw-comments.md
-Output: ${basePath}/docs/api-reference.md
-
-Task:
-1. Create markdown documentation structure
-2. Organize endpoints by resource/module
-3. Include request/response examples
-4. Add authentication sections
-5. Create table of contents and search index
-6. Apply consistent formatting and style
-
-Return: Documentation file created with page count`
-  });
-
-  // Step 4: Generate OpenAPI/Swagger Spec
-  const specResult = await runSubagent({
-    description: 'Create machine-readable API spec',
-    prompt: `You are the Spec Generator specialist.
-
-Project: ${projectName}
-Input: ${basePath}/docs/endpoints.json
-Output: ${basePath}/docs/openapi.yaml
-
-Task:
-1. Generate OpenAPI 3.0 specification
-2. Map all endpoints with full details
-3. Include request/response schemas
-4. Add security definitions
-5. Create example requests/responses
-6. Validate spec for compliance
-
-Return: Spec file generated and validated`
-  });
-
-  // Step 5: Validate Documentation Completeness
-  const validateResult = await runSubagent({
-    description: 'Check documentation quality and coverage',
-    prompt: `You are the Documentation Validator specialist.
-
-Project: ${projectName}
-Input: ${basePath}/docs/ (all generated files)
-Output: ${basePath}/docs/validation-report.md
-
-Task:
-1. Compare documented endpoints vs actual code
-2. Check for missing parameter descriptions
-3. Verify all status codes are documented
-4. Validate example request formats
-5. Check for outdated or broken links
-6. Generate quality report with scores
-
-Return: Summary of issues found and coverage percentage`
-  });
-
-  // Generate final summary
-  generateDocumentationSummary(projectName, {
-    analysisResult,
-    docResult,
-    mdResult,
-    specResult,
-    validateResult
-  });
-}
-```
-
-
-### Common Sub-Agent Patterns
-
-**Pattern 1: Sequential Data Processing**
-```
-Input → Agent1 (transform) → Agent2 (enrich) → Agent3 (validate) → Output
-```
-
-**Pattern 2: Parallel Task Distribution**
-```
-Input → Agent1 (task A) → Agent3 (combine)
-      → Agent2 (task B)    → Output
-```
-
-**Pattern 3: Conditional Branching**
-```
-Input → Check condition → Agent1 or Agent2 → Output
-```
-
-**Pattern 4: Error Recovery**
-```
-Input → Agent1 (primary) → Success? → Output
-           ↓ Failure
-        Agent2 (fallback) → Output
-```
+
+- Pass all context via `${variables}` in the prompt
+- Use `try/catch` for error handling
+- Sequential execution (await each step) when results depend on prior steps
+- Log results with timestamps for troubleshooting
 
 ## Variable Definition and Extraction

From 195f1558eb26bdb6b34d987388f9ae3be6e63151 Mon Sep 17 00:00:00 2001
From: bmaucote
Date: Sun, 21 Dec 2025 14:16:01 +0100
Subject: [PATCH 8/9] Clean + reorg parts

---
 instructions/agents.instructions.md | 479 +++++++++++++---------------
 1 file changed, 227 insertions(+), 252 deletions(-)

diff --git a/instructions/agents.instructions.md b/instructions/agents.instructions.md
index 98f125446..ca76a4e68 100644
--- a/instructions/agents.instructions.md
+++ b/instructions/agents.instructions.md
@@ -140,78 +140,195 @@ tools: ['playwright/navigate', 'playwright/screenshot'] # Specific tools
 - **Focus**: Fewer tools = clearer agent purpose and better performance
 - **Documentation**: Comment why specific tools are required for complex configurations
 
+## Agent Prompt Structure
+
+The markdown content below the frontmatter defines the agent's behavior, expertise, and instructions.
+
+### Recommended Sections
+
+#### 1. Agent Identity and Role
+```markdown
+# Agent Name
+
+Brief introduction explaining who the agent is and its primary role.
+```
+
+#### 2. Core Responsibilities
+```markdown
+## Core Responsibilities
+
+Clear list of what the agent does:
+- Primary task 1
+- Primary task 2
+- Primary task 3
+```
+
+#### 3. Approach and Methodology
+```markdown
+## Approach
+
+Step-by-step methodology:
+1. First step
+2. Second step
+3. Third step
+```
+
+#### 4. Guidelines and Constraints
+```markdown
+## Guidelines
+
+- What the agent should always do
+- What the agent should avoid
+- Quality standards to maintain
+```
+
+#### 5. Output Expectations
+```markdown
+## Output Format
+
+Specify expected output structure, format, and quality criteria.
+```
+
+### Prompt Writing Best Practices
+
+**Be Specific and Direct**:
+- Use imperative mood ("Analyze", "Generate", "Focus on")
+- Avoid ambiguous terms ("should", "might", "possibly")
+- Provide concrete examples when appropriate
+
+**Define Boundaries**:
+- Clearly state what the agent should and shouldn't do
+- Define scope limits explicitly
+- Specify when to ask for clarification
+
+**Include Context**:
+- Explain the agent's domain expertise
+- Reference relevant frameworks, standards, or methodologies
+- Provide technical context when necessary
+
+**Focus on Behavior**:
+- Describe how the agent should think and work
+- Include decision-making criteria
+- Specify quality standards and validation steps
+
+**Use Structured Format**:
+- Break content into clear sections with headers
+- Use bullet points and numbered lists
+- Make instructions scannable and hierarchical
+
 ## Sub-Agent Invocation (Agent Orchestration)
 
 Agents can invoke other agents using `runSubagent` to orchestrate multi-step workflows.
 
 ### How It Works
 
-Include `agent` in tools list, then invoke other agents with detailed prompts:
+Include `agent` in tools list to enable sub-agent invocation:
 
 ```yaml
 tools: ['read', 'edit', 'search', 'agent']
 ```
 
-### Example: Code Review Pipeline
+Then invoke other agents with `runSubagent`:
 
 ```javascript
-async function codeReviewPipeline(repositoryName, prNumber) {
-  const basePath = `projects/${repositoryName}/pr-${prNumber}`;
+const result = await runSubagent({
+  description: 'What this step does',
+  prompt: `You are the [Specialist] specialist.
 
-  // Step 1: Security Review
-  const security = await runSubagent({
-    description: 'Analyze security issues in pull request',
-    prompt: `You are the Security Reviewer specialist.
-Repository: ${repositoryName}
-PR: ${prNumber}
-Files: ${basePath}/changes/
+Context:
+- Parameter: ${parameterValue}
+- Input: ${inputPath}
+- Output: ${outputPath}
 
-Scan for OWASP Top 10, injection attacks, auth flaws, and report findings in ${basePath}/security-review.md`
-  });
+Task:
+1. Do the specific work
+2. Write results to output location
+3. Return summary of completion`
+});
+```
 
-  // Step 2: Performance Analysis (if applicable)
-  if (hasComputationalCode(basePath)) {
-    const performance = await runSubagent({
-      description: 'Check performance and optimization opportunities',
-      prompt: `You are the Performance Analyzer specialist.
-Repository: ${repositoryName}
-Files: ${basePath}/changes/
+### Basic Pattern
 
-Analyze database queries, algorithms, and memory usage. Report in ${basePath}/performance-report.md`
-    });
-  }
+Structure each sub-agent call with:
 
-  // Step 3: Test Coverage
-  const coverage = await runSubagent({
-    description: 'Verify test coverage for changed code',
-    prompt: `You are the Test Coverage Checker specialist.
-Repository: ${repositoryName}
-PR: ${prNumber}
-Changes: ${basePath}/changes/
+1. **description**: Clear one-line purpose of the sub-agent invocation
+2. **prompt**: Detailed instructions with substituted variables
 
-Check test coverage and suggest missing tests. Report in ${basePath}/coverage-analysis.md`
-  });
+The prompt should include:
+- Who the sub-agent is (specialist role)
+- What context it needs (parameters, paths)
+- What to do (concrete tasks)
+- Where to write output
+- What to return (summary)
 
-  // Step 4: Compile Results
-  const finalReport = await runSubagent({
-    description: 'Aggregate all review findings',
-    prompt: `You are the Review Aggregator specialist.
-Repository: ${repositoryName}
-Reviews: ${basePath}/*.md
+### Example: Multi-Step Processing
 
-Combine all reviews into final report at ${basePath}/final-review.md with verdict (APPROVED/NEEDS_FIXES/BLOCKED)`
-  });
+```javascript
+// Step 1: Process data
+const processing = await runSubagent({
+  description: 'Transform raw input data',
+  prompt: `You are the Data Processor specialist.
 
-  return finalReport;
-}
+Project: ${projectName}
+Input: ${basePath}/raw/
+Output: ${basePath}/processed/
+
+Task:
+1. Read all files from input directory
+2. Apply transformations
+3. Write processed files to output
+4. Create summary: ${basePath}/processed/summary.md
+
+Return: Number of files processed and any issues found`
+});
+
+// Step 2: Analyze (depends on Step 1)
+const analysis = await runSubagent({
+  description: 'Analyze processed data',
+  prompt: `You are the Data Analyst specialist.
+
+Project: ${projectName}
+Input: ${basePath}/processed/
+Output: ${basePath}/analysis/
+
+Task:
+1. Read processed files from input
+2. Generate analysis report
+3. Write to: ${basePath}/analysis/report.md
+
+Return: Key findings and identified patterns`
+});
 ```
 
 ### Key Points
 
-- Pass all context via `${variables}` in the prompt
-- Use `try/catch` for error handling
-- Sequential execution (await each step) when results depend on prior steps
-- Log results with timestamps for troubleshooting
+- **Pass variables in prompts**: Use `${variableName}` for all dynamic values
+- **Keep prompts focused**: Clear, specific tasks for each sub-agent
+- **Return summaries**: Each sub-agent should report what it accomplished
+- **Sequential execution**: Use `await` to maintain order when steps depend on each other
+- **Error handling**: Check results before proceeding to dependent steps
+
+### ⚠️ Tool Availability Requirement
+
+**Critical**: If a sub-agent requires specific tools (e.g., `edit`, `execute`, `search`), the orchestrator must include those tools in its own `tools` list. Sub-agents cannot access tools that aren't available to their parent orchestrator.
+
+**Example**:
+```yaml
+# If your sub-agents need to edit files, execute commands, or search code
+tools: ['read', 'edit', 'search', 'execute', 'agent']
+```
+
+The orchestrator's tool permissions act as a ceiling for all invoked sub-agents. Plan your tool list carefully to ensure all sub-agents have the tools they need.
+
+### ⚠️ Important Limitation
+
+**Sub-agent orchestration is NOT suitable for large-scale data processing.** Avoid using `runSubagent` when:
+- Processing hundreds or thousands of files
+- Handling large datasets
+- Performing bulk transformations on big codebases
+- Orchestrating more than 5-10 sequential steps
+
+Each sub-agent call adds latency and context overhead. For high-volume processing, implement logic directly in a single agent instead. Use orchestration only for coordinating specialized tasks on focused, manageable datasets.
 
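The error-handling and sequencing points above can be exercised outside the Copilot runtime with a small wrapper. `runSubagent` is stubbed here, since the real function is provided by the platform; the stub, its failure convention, and the `critical` flag are illustrative assumptions, not part of the agent API:

```javascript
// Illustrative stub: the real runSubagent is provided by the Copilot runtime.
// This stand-in exists only so the wrapper below can run anywhere; it treats
// any prompt containing "FAIL" as a failing sub-agent.
async function runSubagent({ description, prompt }) {
  if (prompt.includes('FAIL')) {
    throw new Error(`Sub-agent failed: ${description}`);
  }
  return { summary: `Completed: ${description}` };
}

// Sketch of a step wrapper: log the outcome with a timestamp, stop the
// pipeline on critical failures, and let non-critical steps fail softly.
async function runStep(stepName, prompt, { critical = false } = {}) {
  const startedAt = new Date().toISOString();
  try {
    const result = await runSubagent({ description: stepName, prompt });
    console.log(`[${startedAt}] ${stepName}: SUCCESS`);
    return { status: 'success', result };
  } catch (error) {
    console.log(`[${startedAt}] ${stepName}: FAILED (${error.message})`);
    if (critical) throw error; // critical steps abort the pipeline
    return { status: 'failed', error: error.message };
  }
}
```

A non-critical step that fails returns `{ status: 'failed' }` so the orchestrator can log it and continue to the next step, while a critical failure propagates and stops the pipeline.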
 ## Variable Definition and Extraction
 
@@ -330,144 +447,100 @@ Process the **${projectName}** project located at `${basePath}`.
 
 #### Passing Variables to Sub-Agents
 
-Use extracted variables when invoking sub-agents:
+When invoking a sub-agent, pass all context through template variables in the prompt:
 
 ```javascript
-// Example: Pass projectName and basePath to sub-agent
+// Extract and prepare variables
 const basePath = `projects/${projectName}`;
+const inputPath = `${basePath}/src/`;
+const outputPath = `${basePath}/docs/`;
 
-// Pass to sub-agent with variable
+// Pass to sub-agent with all variables substituted
 const result = await runSubagent({
-  description: 'Process project files',
-  prompt: `You are the Project Processor specialist.
+  description: 'Generate project documentation',
+  prompt: `You are the Documentation specialist.
 
-Process: ${projectName}
-Location: ${basePath}
+Project: ${projectName}
+Input: ${inputPath}
+Output: ${outputPath}
 
 Task:
-1. Read all files from ${basePath}/src/
-2. Analyze project structure
-3. Generate documentation
-4. Save to ${basePath}/docs/
+1. Read source files from ${inputPath}
+2. Generate comprehensive documentation
+3. Write to ${outputPath}/index.md
+4. Include code examples and usage guides
 
-Return: Summary of analysis`
+Return: Summary of documentation generated (file count, word count)`
 });
 ```
 
-### Real-World Example: Parameterized Orchestrator Agent
+The sub-agent receives all necessary context embedded in the prompt. Variables are resolved before sending the prompt, so the sub-agent works with concrete paths and values, not variable placeholders.
 
-Example of a code review orchestrator that validates pull requests across multiple dimensions:
+### Real-World Example: Code Review Orchestrator
 
-```markdown
-# Code Review Orchestrator Agent
+Example of a simple orchestrator that validates code through multiple specialized agents:
 
-## Dynamic Parameters
-
-- **Repository Name**: Extracted from user prompt (e.g., "my-awesome-app")
-- **Pull Request ID**: Provided in user request (e.g., "PR #42")
-- **Base Path**: Derived as `projects/${repositoryName}/pr-${prNumber}/`
-- **Review Report**: Set to `projects/${repositoryName}/pr-${prNumber}/review-report.md`
-
-## Your Mission
-
-Execute a comprehensive multi-aspect code review for **PR #${prNumber}** on **${repositoryName}** by invoking specialized agents without requiring manual coordination between steps.
-
-### Initial Setup
-
-If repository or PR details are not provided, **ASK THE USER** for:
-- Repository name or identifier
-- Pull request number
-- Review scope (all aspects or specific areas)
-
-## Pre-flight Checks
-
-Verify the pull request structure at `${basePath}`:
-- ✅ PR metadata exists
-- ✅ Changed files are accessible
-- ℹ️ Review reports will be generated
-- ✅ Code quality tools configured
-
-## Review Pipeline Execution
-
-### Step 1: Security Analysis
-Invoke `@security-reviewer` agent:
-
-**Condition:** Always execute for code reviews
-
-Parameters to pass:
-- Repository: ${repositoryName}
-- PR: ${prNumber}
-- Input: ${basePath}/changes/
-- Output: ${basePath}/security-review.md
-
-### Step 2: Performance Audit
-Invoke `@performance-analyzer` agent:
-
-**Condition:** Only if code contains computational components or database queries
-
-Parameters to pass:
-- Repository: ${repositoryName}
-- PR: ${prNumber}
-- Input: ${basePath}/changes/
-- Output: ${basePath}/performance-report.md
-
-### Step 3: Test Coverage Analysis
-Invoke `@test-coverage-checker` agent:
-
-**Condition:** Always execute
-
-Parameters to pass:
-- Repository: ${repositoryName}
-- PR: ${prNumber}
-- Input: ${basePath}/changes/ and test configuration
-- Output: ${basePath}/coverage-analysis.md
-
-### Step 4: Code Style Validation
-Invoke `@style-validator` agent:
-
-**Condition:** Only if style rules are configured for repository
-
-Parameters to pass:
-- Repository: ${repositoryName}
-- PR: ${prNumber}
-- Input: ${basePath}/changes/
-- Output: ${basePath}/style-report.md
-
-### Step 5: Documentation Validation
-Invoke `@documentation-reviewer` agent:
-
-**Condition:** Only if API/public methods were modified
-
-Parameters to pass:
-- Repository: ${repositoryName}
-- PR: ${prNumber}
-- Input: ${basePath}/changes/
-- Output: ${basePath}/documentation-review.md
-
-### Step 6: Compile Final Review
-Invoke `@review-aggregator` agent:
-
-**Condition:** Always execute after individual reviews
-
-Parameters to pass:
-- Repository: ${repositoryName}
-- PR: ${prNumber}
-- Input: ${basePath}/ (all review reports)
-- Output: ${basePath}/review-report.md
-
-## Logging
-
-All review operations logged to: `${reviewReport}`
-
-Track for each review aspect:
-- Status (✅ APPROVED / ⚠️ NEEDS FIXES / ❌ BLOCKED)
-- Critical issues count
-- Warnings count
-- Recommendations count
-- Timestamps for start and completion
-- Duration of review
-```
 
+```javascript
+async function reviewCodePipeline(repositoryName, prNumber) {
+  const basePath = `projects/${repositoryName}/pr-${prNumber}`;
+
+  // Step 1: Security Review
+  const security = await runSubagent({
+    description: 'Scan for security vulnerabilities',
+    prompt: `You are the Security Reviewer specialist.
+Repository: ${repositoryName}
+PR: ${prNumber}
+Code: ${basePath}/changes/
+
+Task:
+1. Scan code for OWASP Top 10 vulnerabilities
+2. Check for injection attacks, auth flaws
+3. Write findings to ${basePath}/security-review.md
+
+Return: List of critical, high, and medium issues found`
+  });
+
+  // Step 2: Test Coverage Check
+  const coverage = await runSubagent({
+    description: 'Verify test coverage for changes',
+    prompt: `You are the Test Coverage specialist.
+Repository: ${repositoryName}
+PR: ${prNumber}
+Changes: ${basePath}/changes/
+
+Task:
+1. Analyze code coverage for modified files
+2. Identify untested critical paths
+3. Write report to ${basePath}/coverage-report.md
+
+Return: Current coverage percentage and gaps`
+  });
+
+  // Step 3: Aggregate Results
+  const finalReport = await runSubagent({
+    description: 'Compile all review findings',
+    prompt: `You are the Review Aggregator specialist.
+Repository: ${repositoryName}
+Reports: ${basePath}/*.md
+
+Task:
+1. Read all review reports from ${basePath}/
+2. Synthesize findings into single report
+3. Determine overall verdict (APPROVE/NEEDS_FIXES/BLOCK)
+4. Write to ${basePath}/final-review.md
+
+Return: Final verdict and executive summary`
+  });
+
+  return finalReport;
+}
+```
+
+This pattern applies to any orchestration scenario: extract variables, call sub-agents with clear context, await results.
 
 ### Variable Best Practices
 
@@ -529,104 +602,6 @@ Document valid values and constraints:
 
 - Required: no
 ```
 
-#### 4. **Variable Scope**
-Be clear about variable scope and lifetime:
-
-```markdown
-## Variable Scope
-
-### Global Variables (used throughout agent execution)
-- ${projectName}: Available in all prompts and sub-agents
-- ${basePath}: Used for all file operations
-- ${timestamp}: Available for logging
-
-### Local Variables (used in specific sections)
-- ${currentStep}: Only in step-specific prompts
-- ${stepResult}: Only after step completion
-- ${errorMessage}: Only in error handling
-
-### Sub-Agent Variables (passed to child agents)
-- ${projectName}: Always pass to maintain context
-- ${basePath}: Critical for file operations
-- ${mode}: Inherit from parent agent
-```
-
-## Agent Prompt Structure
-
-The markdown content below the frontmatter defines the agent's behavior, expertise, and instructions. Maximum length: 30,000 characters.
-
-### Recommended Sections
-
-#### 1. Agent Identity and Role
-```markdown
-# Agent Name
-
-Brief introduction explaining who the agent is and its primary role.
-```
-
-#### 2. Core Responsibilities
-```markdown
-## Core Responsibilities
-
-Clear list of what the agent does:
-- Primary task 1
-- Primary task 2
-- Primary task 3
-```
-
-#### 3. Approach and Methodology
-```markdown
-## Approach
-
-Step-by-step methodology:
-1. First step
-2. Second step
-3. Third step
-```
-
-#### 4. Guidelines and Constraints
-```markdown
-## Guidelines
-
-- What the agent should always do
-- What the agent should avoid
-- Quality standards to maintain
-```
-
-#### 5. Output Expectations
-```markdown
-## Output Format
-
-Specify expected output structure, format, and quality criteria.
-```
-
-### Prompt Writing Best Practices
-
-**Be Specific and Direct**:
-- Use imperative mood ("Analyze", "Generate", "Focus on")
-- Avoid ambiguous terms ("should", "might", "possibly")
-- Provide concrete examples when appropriate
-
-**Define Boundaries**:
-- Clearly state what the agent should and shouldn't do
-- Define scope limits explicitly
-- Specify when to ask for clarification
-
-**Include Context**:
-- Explain the agent's domain expertise
-- Reference relevant frameworks, standards, or methodologies
-- Provide technical context when necessary
-
-**Focus on Behavior**:
-- Describe how the agent should think and work
-- Include decision-making criteria
-- Specify quality standards and validation steps
-
-**Use Structured Format**:
-- Break content into clear sections with headers
-- Use bullet points and numbered lists
-- Make instructions scannable and hierarchical
-
 ## MCP Server Configuration (Organization/Enterprise Only)
 
 MCP servers extend agent capabilities with additional tools. Only supported for organization and enterprise-level agents.

From 7d635ca093551abe10e2546365d5d3ce9c99a047 Mon Sep 17 00:00:00 2001
From: bmaucote
Date: Sun, 21 Dec 2025 14:20:56 +0100
Subject: [PATCH 9/9] Clean parts

---
 instructions/agents.instructions.md | 94 ++++++----------------------
 1 file changed, 18 insertions(+), 76 deletions(-)

diff --git a/instructions/agents.instructions.md b/instructions/agents.instructions.md
index ca76a4e68..8d602c881 100644
--- a/instructions/agents.instructions.md
+++ b/instructions/agents.instructions.md
@@ -140,82 +140,6 @@ tools: ['playwright/navigate', 'playwright/screenshot'] # Specific tools
 - **Focus**: Fewer tools = clearer agent purpose and better performance
 - **Documentation**: Comment why specific tools are required for complex configurations
 
-## Agent Prompt Structure
-
-The markdown content below the frontmatter defines the agent's behavior, expertise, and instructions.
-
-### Recommended Sections
-
-#### 1. Agent Identity and Role
-```markdown
-# Agent Name
-
-Brief introduction explaining who the agent is and its primary role.
-```
-
-#### 2. Core Responsibilities
-```markdown
-## Core Responsibilities
-
-Clear list of what the agent does:
-- Primary task 1
-- Primary task 2
-- Primary task 3
-```
-
-#### 3. Approach and Methodology
-```markdown
-## Approach
-
-Step-by-step methodology:
-1. First step
-2. Second step
-3. Third step
-```
-
-#### 4. Guidelines and Constraints
-```markdown
-## Guidelines
-
-- What the agent should always do
-- What the agent should avoid
-- Quality standards to maintain
-```
-
-#### 5. Output Expectations
-```markdown
-## Output Format
-
-Specify expected output structure, format, and quality criteria.
-```
-
-### Prompt Writing Best Practices
-
-**Be Specific and Direct**:
-- Use imperative mood ("Analyze", "Generate", "Focus on")
-- Avoid ambiguous terms ("should", "might", "possibly")
-- Provide concrete examples when appropriate
-
-**Define Boundaries**:
-- Clearly state what the agent should and shouldn't do
-- Define scope limits explicitly
-- Specify when to ask for clarification
-
-**Include Context**:
-- Explain the agent's domain expertise
-- Reference relevant frameworks, standards, or methodologies
-- Provide technical context when necessary
-
-**Focus on Behavior**:
-- Describe how the agent should think and work
-- Include decision-making criteria
-- Specify quality standards and validation steps
-
-**Use Structured Format**:
-- Break content into clear sections with headers
-- Use bullet points and numbered lists
-- Make instructions scannable and hierarchical
-
 ## Sub-Agent Invocation (Agent Orchestration)
 
 Agents can invoke other agents using `runSubagent` to orchestrate multi-step workflows.
@@ -254,6 +254,24 @@ The orchestrator's tool permissions act as a ceiling for all invoked sub-agents.
 
 Each sub-agent call adds latency and context overhead. For high-volume processing, implement logic directly in a single agent instead. Use orchestration only for coordinating specialized tasks on focused, manageable datasets.
 
+## Agent Prompt Structure
+
+The markdown content below the frontmatter defines the agent's behavior, expertise, and instructions. Well-structured prompts typically include:
+
+1. **Agent Identity and Role**: Who the agent is and its primary role
+2. **Core Responsibilities**: What specific tasks the agent performs
+3. **Approach and Methodology**: How the agent works to accomplish tasks
+4. **Guidelines and Constraints**: What to do/avoid and quality standards
+5. **Output Expectations**: Expected output format and quality
+
+### Prompt Writing Best Practices
+
+- **Be Specific and Direct**: Use imperative mood ("Analyze", "Generate"); avoid vague terms
+- **Define Boundaries**: Clearly state scope limits and constraints
+- **Include Context**: Explain domain expertise and reference relevant frameworks
+- **Focus on Behavior**: Describe how the agent should think and work
+- **Use Structured Format**: Headers, bullets, and lists make prompts scannable
+
 ## Variable Definition and Extraction
 
 Agents can define dynamic parameters to extract values from user input and use them throughout the agent's behavior and sub-agent communications. This enables flexible, context-aware agents that adapt to user-provided data.
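The dynamic-parameter extraction described above can be sketched as a plain helper. The `project "name"` quoting convention and the regex are illustrative assumptions for this example, not part of the agent file format:

```javascript
// Illustrative sketch: derive ${projectName} and ${basePath} from a user
// prompt such as: Process project "my-app"
function extractVariables(userPrompt) {
  const match = userPrompt.match(/project\s+"([^"]+)"/i);
  const projectName = match ? match[1] : null;
  return {
    projectName,
    basePath: projectName ? `projects/${projectName}` : null, // used for all file operations
    timestamp: new Date().toISOString(), // available for logging
  };
}
```

Calling `extractVariables('Process project "my-app"')` yields `projectName: 'my-app'` and `basePath: 'projects/my-app'`; when no project is named, both stay `null` so the agent can ask the user for the missing value instead of guessing.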