
Before You Dive In

This guide is intentionally dense. If you need a custom workflow built, a skill developed, or just a second set of eyes, reach out: Amlan Das, Founder, DAS Audience Development (amlan@madebydas.com).

What Are Claude Cowork Plugins?

Overview

Claude Cowork plugins are modular extensions that transform Claude from a conversational AI into a specialized automation platform tailored to specific roles, teams, and workflows. Launched on January 30, 2026, the plugin system enables users to bundle multiple customization types into single installable packages. Plugins bring the same agentic architecture from Claude Code (the developer-focused tool) to Claude Cowork (the knowledge work productivity tool), without requiring terminal access or coding expertise.

Core Concept

Instead of configuring Claude from scratch for each task, plugins provide ready-made bundles containing:
  • Slash Commands: Quick shortcuts for common workflows (e.g., /rfp:extract, /legal:discover)
  • Sub-agents: Specialized AI workers with isolated contexts for parallel task execution
  • MCP Servers: Connections to external tools and data sources (Slack, Salesforce, Google Workspace)
  • Hooks: Automation triggers at lifecycle events (e.g., auto-format after file edits)
  • Skills: Instructions Claude reads automatically based on task context
Real-World Analogy: Think of plugins as “app stores” for Claude — instead of using a generic assistant, you install plugins that turn Claude into a specialized RFP analyst, financial auditor, or legal document reviewer.

Key Capabilities

| Capability | Description | Example |
| --- | --- | --- |
| Direct File Access | Read/write local files without manual uploads | Process 200 resumes in a folder into a ranked Excel spreadsheet |
| Sub-agent Coordination | Break complex work into parallel workstreams | 5-year financial analysis: one sub-agent per fiscal year |
| Professional Outputs | Generate formatted deliverables | Excel with formulas, PowerPoint decks, formatted Word docs |
| Long-Running Tasks | Extended execution without timeouts | Analyze 50 contracts for risk clauses (30+ minute task) |
| Browser Integration | Control web browsers via Chrome extension | Scrape competitor pricing from websites |
Evidence: NASA used custom Claude plugins to generate Mars rover driving instructions, achieving 50% time savings on a previously manual process.

System Requirements and Setup

Requirements

| Requirement | Specification | Notes |
| --- | --- | --- |
| Operating System | macOS only | Windows support coming (timeline TBD) |
| Application | Claude Desktop app | Not available on web or mobile |
| Subscription | Paid plan required | Pro ($20/month), Max, Team, or Enterprise |
| Internet | Active connection | Required throughout session |
| Permissions | Folder access grants | You control which folders Claude can access |

Installation Process

Step 1: Install Claude Desktop

1. Download Claude Desktop for macOS from https://claude.com/download
2. Install the application
3. Sign in with your Claude account (Pro, Max, Team, or Enterprise)

Step 2: Access Cowork

1. Open Claude Desktop
2. Look for the mode selector showing "Chat" and "Cowork" tabs
3. Click the "Cowork" tab to switch to Tasks mode

Step 3: Grant File Access (First Time)

1. Cowork will prompt you to grant folder access
2. Best practice: Create a dedicated work folder first
   mkdir ~/cowork-workspace
3. Click "Choose Folder" and select ~/cowork-workspace
4. Grant permissions when prompted
Security Note: Claude can only access files in folders you explicitly grant. Nothing is accessible by default.

Step 4: Verify Setup

Test with a simple task:
Organize the files in this folder by type and create a summary report.
If Claude responds with a plan and begins working, you’re set up correctly.

Plugin Architecture Deep Dive

The Five Plugin Components

Plugins bundle up to five distinct extension types, each serving a specific automation purpose:
my-plugin/
├── .claude-plugin/
│   └── plugin.json          # Required: Plugin manifest
├── commands/                 # Slash commands (Markdown files)
│   ├── extract-rfp.md
│   └── analyze-contracts.md
├── agents/                   # Sub-agent definitions
│   ├── requirement-extractor.md
│   ├── risk-analyzer.md
│   └── financial-parser.md
├── skills/                   # Auto-invoked instructions
│   ├── legal-taxonomy/
│   │   └── SKILL.md
│   └── financial-ratios/
│       └── SKILL.md
├── hooks/
│   └── hooks.json           # Lifecycle automation
├── .mcp.json                # External tool integrations
└── scripts/                 # Helper utilities
    └── format-output.sh

Component 1 — Slash Commands

Purpose: Reusable text-based shortcuts for common workflows
File Format: Markdown files in the commands/ directory
Invocation: Manual (/plugin-name:command-name)

Example: /rfp:extract

---
description: Extract all requirements from RFP documents
arguments: <folder-path>
---
# RFP Requirements Extraction
Analyze all RFP documents in the specified folder and extract:
1. **Functional Requirements**: Features, capabilities, specifications
2. **Technical Requirements**: Platforms, integrations, security standards
3. **Compliance Requirements**: Certifications, regulatory mandates
4. **Evaluation Criteria**: Scoring rubrics, decision factors, weighting
For each requirement:
- Assign unique ID (REQ-001, REQ-002...)
- Categorize as Mandatory or Optional
- Note RFP page number for citation
Output: Excel workbook with requirements matrix, coverage tracking
formulas, and risk flags.
Arguments provided by user: $ARGUMENTS
How It Works:
  • User types /rfp:extract /path/to/rfp-folder/
  • $ARGUMENTS placeholder is replaced with /path/to/rfp-folder/
  • Claude executes the workflow described in the command
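Under the hood this is plain template substitution. A minimal Python sketch of the placeholder mechanic (Cowork's actual implementation is internal, so treat this as illustrative only):

```python
# Sketch of slash-command argument substitution (illustrative only;
# Cowork's real implementation is not public).
def expand_command(template: str, user_args: str) -> str:
    """Replace the $ARGUMENTS placeholder with what the user typed."""
    return template.replace("$ARGUMENTS", user_args)

template = "Analyze all RFP documents in: $ARGUMENTS"
prompt = expand_command(template, "/path/to/rfp-folder/")
print(prompt)  # Analyze all RFP documents in: /path/to/rfp-folder/
```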

Component 2 — Sub-agents

Purpose: Specialized AI workers with isolated contexts for parallel execution
Architecture: Each sub-agent has a separate context window, custom system prompt, and scoped tool permissions
File Format: Markdown with YAML frontmatter in the agents/ directory

Example: Requirement Extractor Sub-agent

---
name: requirement-extractor
description: Extract structured requirements from RFP sections
model: sonnet-4.5
tools: [Read, Grep]
permissions:
  file_access: read_only
---
You are a requirements extraction specialist. Your role is to:
1. Read RFP sections assigned to you
2. Identify all requirements (functional, technical, compliance)
3. Extract exact requirement text with page citations
4. Classify each as mandatory vs. optional
5. Return structured JSON to orchestrator
DO NOT:
- Make assumptions about unstated requirements
- Editorialize or paraphrase requirement language
- Access files outside your assigned scope
Return findings in this format:
{
  "requirements": [
    {
      "id": "REQ-001",
      "category": "Functional",
      "text": "System must support SSO via SAML 2.0",
      "mandatory": true,
      "page": 23
    }
  ]
}
Key Innovation: Sub-agents report only summary findings back to main orchestrator, preventing context pollution when processing large documents.
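On the orchestrator side, the merge step can be sketched as a small validation pass over each sub-agent's report, so one malformed reply cannot corrupt the combined matrix. The report shape and required keys below are assumptions based on the JSON format shown above:

```python
import json

# Hypothetical orchestrator-side check: validate each sub-agent's JSON
# report before merging findings into one requirements list.
REQUIRED_KEYS = {"id", "category", "text", "mandatory", "page"}

def merge_findings(reports: list[str]) -> list[dict]:
    merged = []
    for raw in reports:
        data = json.loads(raw)
        for req in data.get("requirements", []):
            missing = REQUIRED_KEYS - req.keys()
            if missing:
                raise ValueError(f"{req.get('id', '?')} missing {missing}")
            merged.append(req)
    return merged

report = json.dumps({"requirements": [{
    "id": "REQ-001", "category": "Functional",
    "text": "System must support SSO via SAML 2.0",
    "mandatory": True, "page": 23}]})
print(len(merge_findings([report])))  # 1
```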

Component 3 — MCP Servers (Model Context Protocol)

Purpose: Connect Claude to external tools and data sources
File Format: .mcp.json at the plugin root

Example: CRM Integration

{
  "mcpServers": {
    "salesforce": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-salesforce"],
      "env": {
        "SALESFORCE_USERNAME": "${SALESFORCE_USERNAME}",
        "SALESFORCE_TOKEN": "${SALESFORCE_TOKEN}"
      }
    },
    "google-sheets": {
      "command": "${CLAUDE_PLUGIN_ROOT}/servers/sheets-server.js",
      "args": ["--credentials", "${GOOGLE_CREDENTIALS_PATH}"],
      "cwd": "${CLAUDE_PLUGIN_ROOT}"
    }
  }
}

Available Integrations (via MCP ecosystem)

  • Productivity: Slack, Asana, Linear, Jira, Notion
  • Cloud Storage: Google Drive, Dropbox, Box
  • CRM: Salesforce, HubSpot
  • Development: GitHub, GitLab, Sentry
  • Data: PostgreSQL, MongoDB, Redis
  • Analytics: Amplitude, Mixpanel, Google Analytics
How It Works:
  • MCP servers expose tools (functions), resources (data), and prompts (templates) through standardized JSON-RPC interface
  • Claude invokes these tools seamlessly alongside built-in capabilities
  • Servers start automatically when plugin is enabled
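Concretely, a tool invocation travels as a JSON-RPC 2.0 message. A sketch of what a `tools/call` request might look like; the method and param names follow the public MCP specification, while the tool name `salesforce_query` and its argument are hypothetical:

```python
import json

# Illustrative MCP tool invocation as a JSON-RPC 2.0 message.
# Transport details (stdio framing, request ids) are handled by the client.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "salesforce_query",  # hypothetical tool name
        "arguments": {"soql": "SELECT Name FROM Account LIMIT 5"},
    },
}
print(json.dumps(request, indent=2))
```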

Component 4 — Hooks

Purpose: Lifecycle event automation (validation, enrichment, blocking)
File Format: hooks/hooks.json

Available Events

  • PreToolUse: Before Claude uses any tool
  • PostToolUse: After successful tool execution
  • SessionStart: At beginning of session
  • UserPromptSubmit: When user submits prompt
  • SubagentStart: When sub-agent launches

Example: Auto-Format Hook

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/scripts/format-excel.sh",
            "description": "Auto-format Excel files after creation"
          }
        ]
      }
    ]
  }
}

Hook Decision Flow

Hooks execute shell scripts that return JSON:
{
  "decision": "approve",
  "reason": "File formatted successfully",
  "systemMessage": "Added conditional formatting and formulas"
}
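A hook command can be any executable, not just a shell script. A Python sketch of such a script; the shape of the event payload (a `file_path` field) is an assumption for illustration:

```python
import json

# Sketch of a hook executable: inspect the tool event, decide, and
# print a decision JSON for the orchestrator. The event payload shape
# (a "file_path" key) is an assumption for illustration.
def decide(event: dict) -> dict:
    path = event.get("file_path", "")
    if path.endswith(".xlsx"):
        return {"decision": "approve",
                "reason": "File formatted successfully",
                "systemMessage": "Added conditional formatting and formulas"}
    return {"decision": "approve", "reason": "No formatting needed"}

sample = {"file_path": "report.xlsx"}
print(json.dumps(decide(sample)))
```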

Component 5 — Skills

Purpose: Instructions Claude reads automatically when relevant to the task
File Format: SKILL.md files in subdirectories of skills/
---
name: legal-clause-taxonomy
description: Classification system for contract clauses.
  Use when analyzing legal documents.
---
# Legal Clause Taxonomy
When analyzing contracts, identify these clause categories:
## Risk Clauses
- **Indemnification**: Who compensates whom for damages
  (look for "indemnify", "hold harmless")
- **Limitation of Liability**: Caps on damages
  (look for "aggregate liability shall not exceed")
- **Warranty Disclaimers**: Limitation of guarantees
  (look for "AS IS", "WITHOUT WARRANTY")
## Operational Clauses
- **Termination Rights**: How contract can be ended
  (look for "terminate for convenience", notice periods)
- **Auto-Renewal**: Automatic contract extension
  (look for "evergreen", "automatically renew")
- **Governing Law**: Which jurisdiction applies
  (look for "governed by laws of")
## Intellectual Property
- **IP Ownership**: Who owns work product
  (look for "work for hire", "assigns all rights")
- **License Grants**: Permission to use IP
  (look for "non-exclusive license", scope, territory)
Flag as **HIGH RISK** if you find:
- Unlimited liability
- Perpetual licenses with broad scope
- One-sided indemnification (we indemnify them, they don't indemnify us)

Difference from Commands

  • Commands: User manually invokes (/legal:discover)
  • Skills: Claude automatically uses when task context matches (e.g., user uploads contract and Legal Taxonomy skill activates)

Plugin Manifest Schema

The plugin.json file defines plugin metadata. Complete schema:
{
  "name": "rfp-assassin",
  "version": "1.2.0",
  "description": "Extract 100% of requirements from RFP documents",
  "author": {
    "name": "Your Team",
    "email": "team@company.com",
    "url": "https://github.com/yourteam"
  },
  "homepage": "https://docs.yourcompany.com/rfp-assassin",
  "repository": "https://github.com/yourteam/rfp-assassin",
  "license": "MIT",
  "keywords": ["rfp", "sales", "requirements", "proposals"],
  "commands": ["./custom/commands/special.md"],
  "agents": "./custom/agents/",
  "skills": "./custom/skills/",
  "hooks": "./config/hooks.json",
  "mcpServers": "./mcp-config.json",
  "lspServers": "./.lsp.json",
  "dependencies": {
    "plugins": ["base-analytics"],
    "mcpServers": ["salesforce"]
  },
  "configuration": {
    "env": {
      "DEFAULT_OUTPUT_FORMAT": "excel",
      "MAX_RFP_PAGES": "200"
    }
  }
}

Critical Rules

  • Manifest location: .claude-plugin/plugin.json (MUST be in this directory)
  • Component directories: commands/, agents/, skills/, hooks/ MUST be at plugin root (NOT inside .claude-plugin/)
  • Path references: Use the ${CLAUDE_PLUGIN_ROOT} variable for portability
  • Naming convention: kebab-case for all files and directories
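A local pre-flight script can catch layout mistakes before upload. A sketch under the assumption that these rules are checked literally (Cowork runs its own validation when you install):

```python
import json
import re
import tempfile
from pathlib import Path

def check_plugin(root: str) -> list[str]:
    """Illustrative pre-flight check for the layout rules above."""
    problems = []
    root_path = Path(root)
    manifest = root_path / ".claude-plugin" / "plugin.json"
    if not manifest.exists():
        problems.append("missing .claude-plugin/plugin.json")
    else:
        name = json.loads(manifest.read_text()).get("name", "")
        if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name):
            problems.append(f"plugin name not kebab-case: {name!r}")
    # Component dirs must sit at the plugin root, not inside .claude-plugin/
    for d in ("commands", "agents", "skills", "hooks"):
        if (root_path / ".claude-plugin" / d).exists():
            problems.append(f"{d}/ belongs at the plugin root")
    return problems

# Demo on a throwaway plugin skeleton
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / ".claude-plugin").mkdir()
    (Path(tmp) / ".claude-plugin" / "plugin.json").write_text(
        json.dumps({"name": "resume-screener"}))
    print(check_plugin(tmp))  # []
```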

Step-by-Step Setup Guide

Installing Pre-Built Plugins

Anthropic provides 10+ official plugins for common functions.

Official Plugin Library:
  • Productivity: Task management, calendar, workflows
  • Enterprise Search: Find info across company tools
  • Sales: Prospect research, deal prep
  • Finance: Financial analysis, modeling, metrics
  • Data: Query, visualize, interpret datasets
  • Legal: Document review, risk flagging, compliance
  • Marketing: Content drafting, campaign planning
  • Customer Support: Triage issues, draft responses
  • Product Management: Specs, roadmaps, prioritization
  • Biology Research: Literature search, result analysis

Installation Process

1. Open Claude Desktop → Navigate to Cowork tab
2. Click "Plugins" in left sidebar
3. Browse library → Find plugin you need
4. Click plugin → Review description and components
5. Click "Install" button
6. Plugin downloads and activates automatically

Alternative: Upload Custom Plugin

1. Click "Upload plugin" button
2. Select plugin folder or ZIP file from your computer
3. Claude validates manifest and installs

Using Installed Plugins

  • Type / or click + button to see available commands
  • Commands appear as /plugin-name:command-name
  • Example: If you installed “sales” plugin, you will see /sales:research-prospect, /sales:prep-discovery, etc.

Customizing Plugins

After installing, tailor plugins to your workflow:
1. View installed plugin in Plugins sidebar
2. Click "Customize" button (upper right corner)
3. Claude automatically prompts: "How would you like to customize this plugin?"
4. Describe changes:
   - "Add integration with our internal CRM at https://crm.company.com"
   - "Change output format to Google Sheets instead of Excel"
   - "Include our brand voice guidelines when drafting content"
5. Click "Let's go" → Claude modifies plugin configuration
6. Test customized plugin with sample task

Example Customization

Default Sales Plugin:
  • Uses generic prospect research (LinkedIn, Crunchbase)
  • Outputs to Excel
After Customization:
  • Connects to your Salesforce instance (via MCP)
  • Pulls existing customer data to identify upsell opportunities
  • Outputs to Google Sheets shared with sales team
  • Includes your company’s sales methodology (MEDDIC, SPIN, etc.) in Skills

Creating a Simple Plugin from Scratch

Use Case: Create a “Resume Screener” plugin for HR workflows

Step 1: Create Plugin Directory

mkdir -p resume-screener/.claude-plugin
cd resume-screener

Step 2: Create Manifest

Create .claude-plugin/plugin.json:
{
  "name": "resume-screener",
  "version": "1.0.0",
  "description": "Screen and rank job candidates from resume folders",
  "author": {
    "name": "Your HR Team"
  },
  "keywords": ["hr", "recruiting", "resumes", "ats"]
}

Step 3: Create Slash Command

Create commands/screen-resumes.md:
---
description: Screen resumes against job requirements and rank candidates
arguments: <job-description-file> <resume-folder>
---
# Resume Screening Workflow
You are an expert ATS (Applicant Tracking System) performing
initial resume screening.
## Inputs
- **Job Description**: $1 (first argument - path to the JD file)
- **Resume Folder**: $2 (second argument - folder of resumes)
## Process
### Step 1: Extract Job Requirements
Read the job description and extract:
- **Required Skills**: Must-have technical and soft skills
- **Required Experience**: Minimum years in relevant domain
- **Required Education**: Degree level and fields
- **Nice-to-Have**: Preferred qualifications
- **Location**: Remote/hybrid/onsite requirements
### Step 2: Parse Resumes (Parallel Sub-agents)
For each resume in the folder:
1. Extract candidate information:
   - Name, contact info, location
   - Work experience (titles, companies, dates, achievements)
   - Skills (technical and soft)
   - Education (degrees, institutions, years)
2. Calculate match score:
   - Required skills match: % of must-haves present
   - Experience: Does candidate meet minimum years?
   - Education: Does candidate have required degree?
   - Nice-to-have bonus: Extra points for preferred qualifications
3. Flag red flags:
   - Employment gaps >12 months
   - Job hopping (<1 year tenures)
   - Skill mismatches
### Step 3: Rank and Output
Generate Excel spreadsheet with:
- **Columns**: Rank, Name, Match %, Years Experience, Key Skills,
  Education, Location, Interview Recommendation
- **Sorting**: Ranked by match score (highest first)
- **Conditional Formatting**:
  - Green: Match >80% (Strong Yes)
  - Yellow: Match 60-80% (Yes)
  - Orange: Match 40-60% (Maybe)
  - Red: Match <40% (No)
- **Formulas**: Auto-calculate match percentage based on skills present
Save output as: candidate-rankings.xlsx
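The match-score arithmetic from Step 2 can be sketched as follows. The 5-point nice-to-have bonus is an illustrative assumption; the banding thresholds mirror the conditional-formatting bands above:

```python
# Sketch of the match-score arithmetic; weights are illustrative
# assumptions, not fixed plugin behavior.
def match_score(required: set[str], nice_to_have: set[str],
                candidate_skills: set[str]) -> float:
    """Percent of must-haves present, plus a small nice-to-have bonus."""
    if not required:
        return 0.0
    base = 100 * len(required & candidate_skills) / len(required)
    bonus = 5 * len(nice_to_have & candidate_skills)  # assumed bonus weight
    return min(100.0, base + bonus)

def bucket(score: float) -> str:
    if score > 80:
        return "Strong Yes"
    if score >= 60:
        return "Yes"
    if score >= 40:
        return "Maybe"
    return "No"

s = match_score({"SQL", "Python", "Figma"}, {"Tableau"},
                {"SQL", "Python", "Tableau"})
print(round(s, 1), bucket(s))  # 71.7 Yes
```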

Step 4: Create Sub-agent for Resume Parsing

Create agents/resume-parser.md:
---
name: resume-parser
description: Extract structured data from individual resumes
model: sonnet-4.5
tools: [Read]
permissions:
  file_access: read_only
---
You are a resume parser. Your task is to extract structured
information from a single resume and return it in JSON format.
Extract:
1. **Contact**: name, email, phone, LinkedIn, location
2. **Experience**: List of jobs with title, company, start_date,
   end_date, key_achievements
3. **Skills**: Array of technical and soft skills mentioned
4. **Education**: Degrees with institution, field, graduation_year
Return only valid JSON, no commentary:
{
  "contact": {
    "name": "Jane Smith",
    "email": "jane@email.com",
    "phone": "555-1234",
    "linkedin": "linkedin.com/in/janesmith",
    "location": "San Francisco, CA"
  },
  "experience": [
    {
      "title": "Senior Product Manager",
      "company": "Tech Corp",
      "start_date": "2020-01",
      "end_date": "2025-12",
      "years": 5,
      "achievements": ["Launched 3 products", "Grew ARR 200%"]
    }
  ],
  "skills": ["Product Strategy", "SQL", "Figma", "Stakeholder Management"],
  "education": [
    {
      "degree": "MBA",
      "institution": "Stanford GSB",
      "field": "Business Administration",
      "graduation_year": 2019
    }
  ]
}

Step 5: Create Skills (Optional)

Create skills/ats-criteria/SKILL.md:
---
name: ats-criteria
description: ATS screening best practices and bias mitigation.
  Use when screening candidates.
---
# ATS Screening Criteria
## Best Practices
### Skills Matching
- Exact match: "Python" in JD → look for "Python" in resume
- Synonym awareness: "JavaScript" = "JS" = "ECMAScript"
- Avoid keyword stuffing detection: Flag resumes with >50 skill mentions
### Experience Calculation
- Count only relevant experience (not total career length)
- Startup experience: Weight 1.5x (broader responsibility)
- Internships: Count as 0.5x full-time experience
### Education Evaluation
- Bootcamp graduates: Acceptable for technical roles if strong portfolio
- Self-taught: Evaluate based on project work, not credentials
- Degree field mismatch: Don't auto-reject (career changers can be excellent)
## Bias Mitigation
**Remove from consideration** (to avoid discrimination):
- Age (graduation dates, years of experience can indicate age)
- Gender (inferred from name)
- Race/ethnicity
- Parental status (resume gaps often indicate caregiving)
**Focus on**:
- Demonstrated skills (projects, achievements, results)
- Growth trajectory (promotions, increasing responsibility)
- Cultural adds (diverse perspectives) not "cultural fit"
**Red Flag False Positives**:
- Employment gaps: May indicate caregiving, health, education → ask, don't auto-reject
- Job hopping: Tech industry norm is 2-3 years → recalibrate expectations
- Non-traditional paths: Bootcamp grads, self-taught often outperform
## Legal Compliance
- EEOC guidelines: No discrimination based on protected classes
- Blind screening: Remove names/photos before ranking if possible
- Document decisions: Explain why candidates were rejected (defensible criteria)

Step 6: Test Locally

# Navigate to parent directory containing your plugin
cd ..
# Test plugin before installing
claude --plugin-dir ./resume-screener
# In Claude session, try your command
/resume-screener:screen-resumes job-description.txt /path/to/resumes/

Step 7: Install Plugin

Once tested, install permanently.

Option A: Install to User Scope (available across all projects)
claude plugin install ./resume-screener --scope user

Option B: Install to Project Scope (shared with team via git)
claude plugin install ./resume-screener --scope project
Option C: Upload via Cowork UI
1. Open Claude Desktop → Cowork tab
2. Click "Plugins" → "Upload plugin"
3. Select resume-screener folder
4. Confirm installation

Use Case 1 — Sales: The “RFP Assassin”

Mission: Extract 100% of requirements from RFP documents and map to your capabilities

The Problem

Sales teams receive 50-200 page RFPs with requirements scattered across sections. Manual extraction takes 4-8 hours and misses 15-20% of requirements, leading to incomplete proposals and lost deals.

Plugin Architecture

Plugin Name: rfp-assassin

Components:
- Slash Command: /rfp:extract
- Sub-agents:
    - requirement-extractor.md (parallel per section)
    - criteria-parser.md (evaluation rubrics)
    - timeline-mapper.md (milestones, deadlines)
- Skills:
    - rfp-taxonomy/ (SOW structures, compliance matrices)
    - requirement-classification/ (mandatory vs optional logic)
- Hook: PostToolUse → auto-format Excel with conditional formatting
- MCP: Salesforce (pull existing opportunity data)

Workflow Execution

Step 1: User Invokes Command

/rfp:extract /RFPs/Acme-Corp-RFP-2026/

Step 2: Orchestrator Analyzes RFP Structure

Claude reads the RFP folder, identifies document sections:
  • 01-Introduction.pdf (10 pages)
  • 02-Technical-Requirements.pdf (45 pages)
  • 03-Compliance-Security.pdf (30 pages)
  • 04-Pricing-Template.xlsx
  • 05-Evaluation-Criteria.pdf (8 pages)

Step 3: Parallel Sub-agent Deployment

Orchestrator spawns sub-agents:
| Sub-agent | Assigned Document | Task |
| --- | --- | --- |
| requirement-extractor-1 | Technical Requirements | Extract all functional/technical requirements |
| requirement-extractor-2 | Compliance-Security | Extract security, privacy, regulatory requirements |
| criteria-parser | Evaluation Criteria | Parse scoring rubric, weightings, decision factors |
| timeline-mapper | Introduction | Extract deadlines, milestones, key dates |

Step 4: Sub-agents Return Findings

Each sub-agent returns structured JSON:
{
  "section": "Technical Requirements",
  "requirements": [
    {
      "id": "REQ-001",
      "category": "Integration",
      "text": "System must integrate with SAP ERP via REST API",
      "mandatory": true,
      "page": 23,
      "keywords": ["SAP", "REST API", "integration"]
    },
    {
      "id": "REQ-002",
      "category": "Security",
      "text": "All data must be encrypted at rest using AES-256",
      "mandatory": true,
      "page": 24,
      "keywords": ["encryption", "AES-256", "data at rest"]
    }
  ]
}

Step 5: Orchestrator Synthesizes

Main Claude agent:
  1. Merges all requirement lists
  2. De-duplicates (same requirement mentioned in multiple sections)
  3. Cross-references with your capability database (if MCP connected)
  4. Identifies gaps (requirements you can’t meet)
  5. Flags risks (tight timelines, unusual terms)
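The de-duplication in step 2 might be sketched as text normalization plus a seen-set; purely illustrative:

```python
# Sketch of the de-duplication step: requirements repeated across
# sections are collapsed by normalized text. Illustrative only.
def dedupe(requirements: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for req in requirements:
        key = " ".join(req["text"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(req)
    return unique

reqs = [
    {"id": "REQ-001", "text": "System must integrate with SAP ERP via REST API"},
    {"id": "REQ-107", "text": "System  must integrate with SAP ERP via REST API"},
]
print(len(dedupe(reqs)))  # 1
```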

Step 6: Output Generation

Excel Workbook: Acme-Corp-RFP-Analysis.xlsx

Tab 1: Requirements Matrix

| Req ID | Category | Requirement | Mandatory | Page | Our Coverage | Gap/Risk | Owner |
| --- | --- | --- | --- | --- | --- | --- | --- |
| REQ-001 | Integration | SAP ERP via REST API | Yes | 23 | Implemented | None | Engineering |
| REQ-002 | Security | AES-256 encryption at rest | Yes | 24 | Implemented | None | Security |
| REQ-018 | Compliance | HIPAA BAA required | Yes | 67 | In progress | Confirm timeline | Legal |
| REQ-042 | Functional | Mobile app with offline sync | No | 89 | Not roadmapped | Competitive risk | Product |
Tab 2: Evaluation Criteria

| Criterion | Weight | Our Strength | Recommended Emphasis |
| --- | --- | --- | --- |
| Technical Fit | 40% | Strong | Lead with integration capabilities |
| Price | 25% | Moderate | Bundle services to increase value |
| Experience | 20% | Strong | Highlight similar customer wins |
| Support | 15% | Strong | Emphasize 24/7 coverage |
Tab 3: Timeline

| Milestone | Date | Days Until | Status |
| --- | --- | --- | --- |
| Questions Due | Feb 15 | 5 days | Pending |
| Proposal Due | Mar 1 | 19 days | In Progress |
| Presentations | Mar 15 | 33 days | Not Started |
| Decision | Apr 1 | 50 days | Not Started |
Tab 4: Risk Summary

| Risk | Severity | Mitigation |
| --- | --- | --- |
| HIPAA compliance timing | High | Accelerate BAA process, have legal confirm |
| Mobile offline not roadmapped | Medium | Offer custom development or partner solution |
| Aggressive timeline | Medium | Pre-stage resources, parallel workstreams |

Evidence

Time Savings: Manual RFP analysis takes 4-8 hours; this plugin completes in 15-30 minutes.
Accuracy: Sub-agents catch requirements buried in appendices that humans miss. One customer found 23 requirements in “Exhibit B” that were not in the main document.

Use Case 2 — Finance: The “10-K Analyst”

Mission: Audit 5 years of public company financials from SEC filings

The Problem

Financial analysts spend days manually extracting data from 10-K filings (often 200+ pages each). Data is scattered across narrative sections, footnotes, and exhibits. Year-over-year comparisons require tedious spreadsheet work.

Plugin Architecture

Plugin Name: 10k-analyst

Components:
- Slash Command: /finance:analyze-10k
- Sub-agents (one per fiscal year):
    - 10k-extractor-FY2021.md
    - 10k-extractor-FY2022.md
    - 10k-extractor-FY2023.md
    - 10k-extractor-FY2024.md
    - 10k-extractor-FY2025.md
- Skills:
    - gaap-standards/ (accounting rule interpretations)
    - financial-ratios/ (calculation formulas)
    - sec-filing-structure/ (where to find specific data)
- MCP: SEC EDGAR API (fetch filings automatically)

Workflow Execution

Step 1: User Invokes Command

/finance:analyze-10k TSLA 2021-2025

Step 2: Fetch 10-K Filings

Claude (via SEC EDGAR MCP):
  1. Queries SEC for Tesla 10-K filings (2021-2025)
  2. Downloads all 5 annual reports
  3. Identifies key sections (Item 7: MD&A, Item 8: Financial Statements)

Step 3: Parallel Sub-agent Analysis

5 sub-agents process simultaneously, each handling one fiscal year:
Sub-agent: 10k-extractor-FY2023

Task: Extract all financial data from Tesla FY2023 10-K

Extract:
1. Income Statement (Revenue, COGS, Operating Expenses, Net Income)
2. Balance Sheet (Assets, Liabilities, Equity)
3. Cash Flow (Operating, Investing, Financing)
4. Key Metrics (mentioned in MD&A)
5. Risk Factors (new risks added this year)
6. Segment Data (Automotive, Energy, Services)

Return structured JSON with exact figures and page citations.

Step 4: Orchestrator Synthesis

Main Claude agent:
  1. Validates data consistency (e.g., ending cash FY2022 = starting cash FY2023)
  2. Calculates derived metrics (margins, ratios, growth rates)
  3. Identifies trends and anomalies
  4. Extracts narrative insights from MD&A sections
  5. Compares against industry benchmarks (if available)
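Steps 1 and 2 can be sketched with two small helpers, a cash tie-out between adjacent fiscal years and the CAGR formula behind the 5-year columns (figures illustrative):

```python
# Sketch of two synthesis checks: cash tie-out between adjacent fiscal
# years, and the CAGR formula used for multi-year growth columns.
def cagr(begin: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / begin) ** (1 / years) - 1

def cash_ties_out(ending_prior: float, starting_next: float,
                  tolerance: float = 0.05) -> bool:
    """Ending cash of one year should equal starting cash of the next."""
    return abs(ending_prior - starting_next) <= tolerance

revenue_cagr = cagr(53.8, 128.7, 4)  # FY2021 -> FY2025, in $B
print(f"{revenue_cagr:.1%}")         # about 24.4%
print(cash_ties_out(22.2, 22.2))     # True
```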

Step 5: Output Generation

Excel Workbook: TSLA-5-Year-Analysis.xlsx

Tab 1: Income Statement (with YoY formulas)

| Line Item | FY2021 | FY2022 | FY2023 | FY2024 | FY2025 | 5Y CAGR |
| --- | --- | --- | --- | --- | --- | --- |
| Revenue | $53.8B | $81.5B | $96.8B | $112.4B | $128.7B | =CAGR |
| Gross Profit | $13.6B | $20.9B | $17.7B | $22.1B | $26.8B | =CAGR |
| Operating Income | $6.5B | $13.7B | $8.9B | $12.3B | $15.1B | =CAGR |
| Net Income | $5.5B | $12.6B | $15.0B | $14.8B | $17.2B | =CAGR |
Tab 2: Balance Sheet Evolution

| Category | FY2021 | FY2022 | FY2023 | FY2024 | FY2025 | Trend |
| --- | --- | --- | --- | --- | --- | --- |
| Cash and Equivalents | $17.6B | $22.2B | $29.1B | $33.6B | $36.5B | Up |
| Total Assets | $62.1B | $82.3B | $106.6B | $128.4B | $145.2B | Up |
| Total Debt | $6.8B | $5.7B | $5.2B | $7.1B | $8.3B | Flat |
| Shareholders’ Equity | $30.2B | $44.7B | $62.6B | $78.3B | $91.5B | Up |
Tab 3: Cash Flow Analysis

| Category | FY2021 | FY2022 | FY2023 | FY2024 | FY2025 |
| --- | --- | --- | --- | --- | --- |
| Operating Cash Flow | $11.5B | $14.7B | $13.3B | $15.8B | $18.2B |
| CapEx | ($6.5B) | ($7.2B) | ($8.9B) | ($10.1B) | ($11.3B) |
| Free Cash Flow | $5.0B | $7.5B | $4.4B | $5.7B | $6.9B |
| FCF Margin | 9.3% | 9.2% | 4.5% | 5.1% | 5.4% |
Tab 4: Ratio Dashboard (with conditional formatting)

| Ratio | FY2021 | FY2022 | FY2023 | FY2024 | FY2025 | Industry Avg |
| --- | --- | --- | --- | --- | --- | --- |
| Gross Margin | 25.3% | 25.6% | 18.3% | 19.7% | 20.8% | 18.5% |
| Operating Margin | 12.1% | 16.8% | 9.2% | 10.9% | 11.7% | 8.2% |
| ROE | 18.2% | 28.2% | 24.0% | 18.9% | 18.8% | 12.5% |
| Current Ratio | 1.4x | 1.5x | 1.7x | 1.8x | 1.9x | 1.2x |
| Debt/Equity | 0.23x | 0.13x | 0.08x | 0.09x | 0.09x | 0.45x |
Tab 5: Narrative Insights (from MD&A)
KEY THEMES FROM MANAGEMENT DISCUSSION:

FY2023 Margin Compression:
- Management cited "aggressive pricing strategy" to maintain market share
- Raw material costs (lithium, nickel) increased 15% YoY
- New factory ramp costs in Austin and Berlin

FY2024-2025 Recovery:
- Pricing stabilized; focus shifted to cost reduction
- Manufacturing efficiency improved (vehicles per employee up 12%)
- Energy Storage segment growing faster than Automotive (45% vs 15%)

Risk Factors Evolution:
- FY2023: Added "increased competition from legacy automakers"
- FY2024: Added "regulatory uncertainty on autonomous driving"
- FY2025: Added "supply chain concentration in China"

Use Case 3 — Legal: The “Discovery Drone”

Mission: Find risk clauses across 50+ contracts in minutes

The Problem

Legal teams inherit contracts from acquisitions, vendor changes, or simply poor organization. Finding specific clause types (indemnification, liability caps, auto-renewal) across dozens of documents takes weeks of manual review.

Plugin Architecture

Plugin Name: discovery-drone

Components:
- Slash Command: /legal:discover
- Sub-agents (specialized by contract type):
    - contract-analyzer-msa.md (Master Service Agreements)
    - contract-analyzer-nda.md (Non-Disclosure Agreements)
    - contract-analyzer-sow.md (Statements of Work)
    - contract-analyzer-saas.md (SaaS/Subscription Agreements)
    - contract-analyzer-license.md (Software Licenses)
- Skills:
    - legal-clause-taxonomy/ (clause identification patterns)
    - risk-scoring/ (severity classification logic)
    - jurisdiction-rules/ (state/country-specific considerations)

Workflow Execution

Step 1: User Invokes Command

/legal:discover indemnification-liability /contracts/vendor-agreements/

Step 2: Contract Inventory

Claude scans the folder:
  • 12 MSAs (Master Service Agreements)
  • 8 SOWs (Statements of Work)
  • 15 NDAs (Non-Disclosure Agreements)
  • 10 SaaS Agreements
  • 5 License Agreements
  • Total: 50 contracts

Step 3: Parallel Sub-agent Deployment

Sub-agents process by contract type (leveraging specialized prompts):
| Sub-agent | Contracts | Focus |
| --- | --- | --- |
| MSA Analyzer | 12 MSAs | Indemnification, liability caps, termination |
| SOW Analyzer | 8 SOWs | Scope creep risks, change order terms |
| NDA Analyzer | 15 NDAs | Definition of confidential info, term duration |
| SaaS Analyzer | 10 SaaS | Data ownership, SLA penalties, auto-renewal |
| License Analyzer | 5 Licenses | IP scope, audit rights, termination triggers |

Step 4: Clause Extraction

Each sub-agent extracts:
{
  "contract": "Acme-Corp-MSA-2023.pdf",
  "clauses": [
    {
      "type": "indemnification",
      "party_indemnifying": "Us",
      "party_indemnified": "Acme Corp",
      "scope": "IP infringement, third-party claims, data breaches",
      "cap": "Unlimited",
      "page": 15,
      "exact_text": "Vendor shall indemnify, defend, and hold harmless Client from any and all claims...",
      "risk_score": "HIGH",
      "risk_reason": "Unlimited indemnification, one-sided (we indemnify them, they don't indemnify us)"
    },
    {
      "type": "liability_limitation",
      "our_cap": "12 months fees",
      "their_cap": "None specified",
      "consequential_damages": "Excluded for us, not for them",
      "page": 16,
      "risk_score": "HIGH",
      "risk_reason": "Asymmetric liability caps favor counterparty"
    }
  ]
}

Step 5: Risk Aggregation

Orchestrator consolidates findings: Risk Scoring Logic:
  • HIGH: Unlimited liability, one-sided indemnification, perpetual IP grants
  • MEDIUM: Short notice periods (under 30 days), unfavorable auto-renewal, broad audit rights
  • LOW: Standard market terms, balanced risk allocation
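This scoring logic can be sketched as set membership against trigger lists (simplified; real clause analysis is contextual, not keyword-driven):

```python
# Sketch of the risk-scoring rules as trigger sets; simplified for
# illustration -- actual clause analysis weighs context, not flags.
HIGH_TRIGGERS = {"unlimited_liability", "one_sided_indemnification",
                 "perpetual_ip_grant"}
MEDIUM_TRIGGERS = {"short_notice_period", "unfavorable_auto_renewal",
                   "broad_audit_rights"}

def risk_score(flags: set[str]) -> str:
    if flags & HIGH_TRIGGERS:
        return "HIGH"
    if flags & MEDIUM_TRIGGERS:
        return "MEDIUM"
    return "LOW"

print(risk_score({"one_sided_indemnification", "unlimited_liability"}))  # HIGH
print(risk_score({"short_notice_period"}))                               # MEDIUM
print(risk_score(set()))                                                 # LOW
```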

Step 6: Output Generation

Excel Workbook: Contract-Risk-Analysis.xlsx

Tab 1: Executive Summary

| Risk Level | Count | % of Portfolio | Action Required |
| --- | --- | --- | --- |
| HIGH | 8 | 16% | Immediate renegotiation |
| MEDIUM | 15 | 30% | Review at renewal |
| LOW | 27 | 54% | Standard monitoring |
Tab 2: High-Risk Contracts
| Contract | Counterparty | Risk Type | Issue | Page | Recommendation |
| --- | --- | --- | --- | --- | --- |
| MSA-2023-001 | Acme Corp | Indemnification | Unlimited, one-sided | 15 | Negotiate cap, mutual terms |
| MSA-2023-001 | Acme Corp | Liability | Asymmetric caps | 16 | Add our liability cap |
| SaaS-2022-007 | DataCo | Auto-renewal | 90-day notice, 3-year term | 22 | Calendar reminder, renegotiate |
| License-2021-003 | SoftwareInc | IP Rights | Perpetual, broad scope | 8 | Limit scope to specific use |
Tab 3: Indemnification Analysis

| Contract | We Indemnify Them | They Indemnify Us | Mutual | Our Cap | Their Cap | Risk |
|---|---|---|---|---|---|---|
| Acme MSA | Yes (broad) | No | No | Unlimited | N/A | HIGH |
| Beta SOW | Yes (IP only) | Yes (IP only) | Yes | 12 mo fees | 12 mo fees | LOW |
| Gamma SaaS | Yes (data breach) | Yes (service failure) | Yes | $1M | $1M | LOW |
Tab 4: Auto-Renewal Tracker

| Contract | Current Term End | Auto-Renews? | Renewal Term | Notice Required | Notice Deadline | Status |
|---|---|---|---|---|---|---|
| DataCo SaaS | Dec 31, 2026 | Yes | 3 years | 90 days | Oct 2, 2026 | Action needed |
| CloudHost | Mar 15, 2026 | Yes | 1 year | 30 days | Feb 13, 2026 | Past deadline |
| ToolVendor | Jun 30, 2026 | No | - | - | - | No action |

Use Case 4 — Product: The “Voice of Customer” Engine

Mission: Cluster 1,000s of support tickets into actionable product themes

The Problem

Product managers drown in unstructured customer feedback. Support tickets, NPS comments, and feature requests pile up. Manually reading thousands of tickets is impossible; sampling misses patterns.

Plugin Architecture

Plugin Name: voc-engine

Components:
- Slash Command: /product:voc-cluster
- Sub-agents:
    - ticket-themer.md (batch extraction via API)
    - cluster-engine.md (pattern synthesis)
    - impact-scorer.md (prioritization by revenue/churn risk)
- Skills:
    - product-taxonomy/ (feature categories, user personas)
    - sentiment-analysis/ (frustration indicators)
- MCP: Zendesk, Intercom, or Jira (live ticket access)

Two-Stage Pipeline

Why Two Stages? Processing 5,000 tickets individually in Cowork would be slow and expensive. Instead:
  1. Stage 1: Use Anthropic’s Batch API for high-volume extraction (50% cheaper, async)
  2. Stage 2: Use Cowork to synthesize, cluster, and prioritize findings

Stage 1: Batch API Extraction

import anthropic
import json
client = anthropic.Anthropic()
# Load tickets from export
with open('zendesk-export.json') as f:
    tickets = json.load(f)
# Create batch request
batch = client.messages.batches.create(
    requests=[
        {
            "custom_id": f"ticket-{ticket['id']}",
            "params": {
                "model": "claude-sonnet-4-5-20250929",
                "max_tokens": 512,
                "messages": [{
                    "role": "user",
                    "content": f"""Analyze this support ticket.
Ticket #{ticket['id']}:
Subject: {ticket['subject']}
Body: {ticket['description']}
Return JSON only:
{{
  "theme": "one of: mobile, web_ux, integrations, billing,
    performance, onboarding, reporting, api, other",
  "sentiment": "satisfied | neutral | frustrated | angry",
  "pain_point": "1-sentence description of core issue",
  "feature_request": "specific feature if mentioned, else null",
  "churn_risk": "low | medium | high"
}}
"""
                }]
            }
        }
        for ticket in tickets
    ]
)
print(f"Batch ID: {batch.id}")
print(f"Status: {batch.processing_status}")
# Poll for completion, then retrieve results
Batch Processing Stats:
  • 5,000 tickets processed
  • Approximately 2 hours async processing
  • 50% cost reduction vs. synchronous API
  • Results saved to ticket-themes.json
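The script above ends at "poll for completion." A sketch of that retrieval step follows, using the anthropic SDK's message-batches methods (`batches.retrieve`, `batches.results`); pass in the `client` created earlier. The `parse_theme_json` helper and the output shape are assumptions, so verify method names against the current SDK:

```python
import json
import time

def parse_theme_json(text):
    """Parse the model's JSON-only reply, tolerating stray code fences."""
    cleaned = text.strip().removeprefix("```json").removeprefix("```").removesuffix("```")
    return json.loads(cleaned)

def collect_batch_results(client, batch_id, out_path="ticket-themes.json"):
    """Poll until the batch ends, then save one record per succeeded ticket."""
    while client.messages.batches.retrieve(batch_id).processing_status != "ended":
        time.sleep(60)  # batches run async; poll once a minute
    records = []
    for entry in client.messages.batches.results(batch_id):
        if entry.result.type == "succeeded":
            data = parse_theme_json(entry.result.message.content[0].text)
            data["ticket_id"] = entry.custom_id
            records.append(data)
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)
    return records
```

The resulting ticket-themes.json is exactly what Stage 2 consumes below.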

Stage 2: Cowork Clustering

/product:voc-cluster ticket-themes.json
Claude in Cowork:
  1. Loads 5,000 pre-extracted ticket summaries
  2. Groups by theme (mobile: 800, integrations: 1,200, etc.)
  3. Within each theme, clusters by specific pain point
  4. Ranks clusters by frequency x sentiment severity x churn risk
  5. Generates actionable recommendations
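The frequency x sentiment severity x churn risk ranking in step 4 could look like the sketch below. The weight tables are illustrative choices, not values from the plugin:

```python
# Hypothetical priority formula: ticket frequency weighted by sentiment
# severity and churn risk. Weights are illustrative, not from the plugin.
SENTIMENT_WEIGHT = {"satisfied": 0.5, "neutral": 1.0, "frustrated": 2.0, "angry": 3.0}
CHURN_WEIGHT = {"low": 1.0, "medium": 1.5, "high": 2.5}

def cluster_priority(ticket_count, sentiment, churn_risk):
    """Higher score = more tickets, angrier customers, higher churn risk."""
    return ticket_count * SENTIMENT_WEIGHT[sentiment] * CHURN_WEIGHT[churn_risk]
```

A multiplicative formula like this keeps a small cluster of angry, high-churn-risk customers competitive with a large cluster of mildly annoyed ones.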

Output Generation

Excel Workbook: VOC-Analysis-Q4-2025.xlsx

Tab 1: Theme Distribution

| Theme | Ticket Count | % of Total | Avg Sentiment | High Churn Risk | Priority Score |
|---|---|---|---|---|---|
| Integrations | 1,247 | 25% | Frustrated | 312 (25%) | 94 |
| Mobile | 892 | 18% | Frustrated | 267 (30%) | 89 |
| Performance | 756 | 15% | Angry | 189 (25%) | 85 |
| Billing | 623 | 12% | Neutral | 62 (10%) | 45 |
| Onboarding | 512 | 10% | Neutral | 51 (10%) | 40 |
Tab 2: Top Pain Points (Ranked)

| Rank | Theme | Pain Point | Tickets | Recommendation |
|---|---|---|---|---|
| 1 | Integrations | Salesforce sync breaks daily | 423 | Rebuild sync with retry logic |
| 2 | Mobile | Can’t approve requests on mobile | 356 | Add approval workflow to mobile app |
| 3 | Performance | Dashboard takes 30+ seconds | 289 | Optimize database queries, add caching |
| 4 | Integrations | No Slack notifications | 267 | Build Slack integration |
| 5 | Mobile | App crashes on Android | 234 | Investigate Android-specific bugs |
Tab 3: Feature Requests (Extracted)

| Feature | Request Count | Theme | Competitive Pressure | Revenue at Risk |
|---|---|---|---|---|
| Slack integration | 267 | Integrations | High (competitors have it) | $450K ARR |
| Mobile approvals | 356 | Mobile | Medium | $380K ARR |
| Custom dashboards | 198 | Reporting | Low | $220K ARR |
| API rate limit increase | 145 | API | Medium | $180K ARR |
Tab 4: Churn Risk Cohort

| Customer | Plan | MRR | Tickets (90d) | Avg Sentiment | Primary Complaint | Risk Level |
|---|---|---|---|---|---|---|
| Acme Corp | Enterprise | $15K | 23 | Angry | Salesforce sync | Critical |
| Beta Inc | Pro | $2.5K | 18 | Frustrated | Mobile crashes | Critical |
| Gamma LLC | Enterprise | $12K | 15 | Frustrated | Dashboard performance | High |

Use Case 5 — Marketing: The “Voice DNA” Extractor

Mission: Extract brand voice patterns from your content corpus to create a style guide

The Problem

Brands struggle to maintain consistent voice across teams, agencies, and AI tools. Existing style guides are vague (“be professional but friendly”). New writers and AI assistants produce off-brand content.

Plugin Architecture

Plugin Name: voice-dna

Components:
- Slash Command: /marketing:extract-voice
- Sub-agents:
    - tone-analyzer.md (emotional register, formality level)
    - vocabulary-mapper.md (word choices, phrases to use/avoid)
    - pattern-extractor.md (sentence structures, paragraph patterns)
    - example-curator.md (best examples per category)
- Skills:
    - linguistics-frameworks/ (tone dimensions, readability metrics)
    - brand-voice-templates/ (B2B, B2C, technical, casual)

Workflow Execution

Step 1: User Invokes Command

/marketing:extract-voice /content/approved-2024/

Step 2: Content Inventory

Claude scans the folder:
  • 45 blog posts
  • 12 case studies
  • 8 white papers
  • 200 social media posts
  • 30 email campaigns
  • 15 product pages
  • Total: 310 content pieces

Step 3: Parallel Analysis

| Sub-agent | Focus | Extracts |
|---|---|---|
| Tone Analyzer | Emotional register | Confidence level, warmth, formality, humor usage |
| Vocabulary Mapper | Word choices | Preferred terms, avoided terms, jargon handling |
| Pattern Extractor | Structure | Sentence length, paragraph structure, CTA patterns |
| Example Curator | Best samples | Top examples of each voice element |

Step 4: Output Generation

Document: Brand-Voice-DNA-Guide.docx

Executive Summary

Based on analysis of 310 approved content pieces, your brand voice can be characterized as:

“Approachable Authority” — Expert knowledge delivered with warmth and clarity, avoiding both stiff formality and excessive casualness.

Core Voice Pillars

Pillar 1: Confident but Not Arrogant

| DO | DON’T |
|---|---|
| “Our research shows…” | “Obviously, anyone would know…” |
| “We’ve found that customers see 3x improvement” | “We’re the best in the industry” |
| “Here’s what we recommend” | “You must do this” |

Pillar 2: Clear Over Clever

| DO | DON’T |
|---|---|
| “This feature helps you save time” | “Revolutionize your temporal efficiency” |
| “Connect your CRM in 3 clicks” | “Seamlessly integrate your customer relationship ecosystem” |
| Explain jargon on first use | Assume reader knows acronyms |

Pillar 3: Empathetic Problem-Solver

| DO | DON’T |
|---|---|
| “We know onboarding can be overwhelming” | “Onboarding is simple” |
| “Many teams struggle with…” | “If you’re having trouble…” |
| Acknowledge pain before presenting solution | Jump straight to features |
Vocabulary Signatures

Preferred Terms:

| Instead of | Use |
|---|---|
| Utilize | Use |
| Leverage | Use, apply |
| Synergy | Collaboration, teamwork |
| Disrupt | Change, improve, transform |
| Revolutionary | Significant, meaningful |
| Cutting-edge | Modern, current, new |

Technical Jargon Handling

Rule: Define on first use, then use freely.

Emotional Vocabulary

| Emotion | Our Words | Avoid |
|---|---|---|
| Excitement | Excited, thrilled, can’t wait | Stoked, pumped, psyched |
| Confidence | Confident, certain, proven | Guaranteed, foolproof, unbeatable |
| Empathy | Understand, recognize, appreciate | Sorry (overused), unfortunately |
Sentence and Paragraph Patterns

Sentence Length:
  • Target: 15-20 words average
  • Mix: Alternate short (8-12) and medium (18-25) sentences
  • Avoid: Sentences over 30 words
Paragraph Structure:
  • Blog posts: 2-4 sentences per paragraph
  • Product pages: 1-2 sentences per paragraph
  • White papers: 3-5 sentences per paragraph
AI Prompt Snippet

Use this when prompting AI tools to write in your voice:
Write in [Company]'s brand voice:
- Tone: Confident but approachable, expert but not arrogant
- Avoid: Jargon without explanation, superlatives (revolutionary,
  game-changing), passive voice
- Prefer: Short sentences (15-20 words), concrete examples,
  "you" over "users"
- Structure: Lead with benefit, explain how, end with CTA
- Sample: "Our research shows that teams using automated workflows
  save 10 hours per week. Here's how to set one up in 5 minutes."

Use Case 6 — HR: The “Resume Radar”

Mission: Screen hundreds of resumes and rank candidates against job requirements

The Problem

HR teams receive 200-500 resumes per open role. Manual screening takes 30+ hours and introduces inconsistency. Good candidates get buried; bias affects decisions.

Plugin Architecture

Plugin Name: resume-radar

Components:
  - Slash Command: /hr:screen-resumes <job-desc> <resume-folder>
  - Sub-agents:
      - resume-parser.md (parallel, one per resume)
      - skills-matcher.md (compare to requirements)
      - red-flag-detector.md (gaps, inconsistencies)
  - Skills:
      - ats-criteria/ (matching algorithms, scoring weights)
      - bias-mitigation/ (blind screening guidance)
      - skill-taxonomy/ (synonym matching, equivalent skills)

Workflow Execution

Step 1: User Invokes Command

/hr:screen-resumes senior-pm-job-description.pdf /candidates/PM-role-2026/

Step 2: Requirements Extraction

Claude parses job description:
Extracted Requirements:

REQUIRED SKILLS (Must-Have):
- Product management: 5+ years
- B2B SaaS experience
- SQL proficiency
- Roadmap ownership
- Stakeholder management
- Agile/Scrum certification or experience

REQUIRED EDUCATION:
- Bachelor's degree (any field)
- MBA preferred but not required

NICE-TO-HAVE:
- Fintech industry experience
- Technical background (engineering degree or coding experience)
- Public speaking / thought leadership
- Experience with AI/ML products

LOCATION:
- Remote OK (US timezone overlap required)

Step 3: Parallel Resume Processing

200 sub-agents process simultaneously:
{
  "name": "Jane Smith",
  "email": "jane@email.com",
  "location": "San Francisco, CA",
  "linkedin": "linkedin.com/in/janesmith",
  "experience": [
    {
      "title": "Senior Product Manager",
      "company": "Stripe",
      "duration": "2021-2025 (4 years)",
      "achievements": [
        "Launched 3 payment products generating $50M ARR",
        "Led team of 8 engineers, 2 designers",
        "Reduced checkout abandonment 23%"
      ]
    },
    {
      "title": "Product Manager",
      "company": "Plaid",
      "duration": "2018-2021 (3 years)",
      "achievements": [
        "Built bank connection API used by 2000+ apps",
        "Managed $10M product budget"
      ]
    }
  ],
  "skills": ["Product Strategy", "SQL", "Figma", "Jira", "Amplitude", "Python"],
  "education": [
    {"degree": "MBA", "school": "Stanford GSB", "year": 2018},
    {"degree": "BS Computer Science", "school": "MIT", "year": 2014}
  ],
  "certifications": ["Certified Scrum Product Owner (CSPO)"]
}

Step 4: Scoring Algorithm

Match Score = (
  Required Skills Match    x 40%  +
  Experience Threshold     x 30%  +
  Education Fit            x 15%  +
  Nice-to-Have Bonus       x 15%
)

Jane Smith Calculation:
- Required Skills: 6/6 = 100% x 0.40 = 40 points
- Experience: 7 years (exceeds 5+) = 100% x 0.30 = 30 points
- Education: MBA + BS CS = 100% x 0.15 = 15 points
- Nice-to-Have: Fintech (yes), Technical (yes), AI (no) = 67% x 0.15 = 10 points

TOTAL: 95 points → Strong Yes
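Jane's 95-point result follows directly from the formula above. Here it is as a small function, with each component score expressed as a fraction of 1.0:

```python
# The weighted scoring formula above, as code. Component scores are fractions
# in [0, 1]; the 40/30/15/15 weights come straight from the scoring algorithm.
def match_score(required_skills, experience, education, nice_to_have):
    return round(100 * (0.40 * required_skills + 0.30 * experience
                        + 0.15 * education + 0.15 * nice_to_have))

# Jane Smith: 6/6 required skills, experience met, full education fit,
# 2 of 3 nice-to-haves (~0.67)
jane = match_score(1.0, 1.0, 1.0, 0.67)   # 95
```

Keeping the weights in one place makes it easy to rebalance them (for example, shifting weight from education to experience) without touching the rest of the pipeline.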

Step 5: Output Generation

Excel Workbook: PM-Candidates-Ranked.xlsx

Tab 1: Candidate Rankings

| Rank | Name | Match % | Years Exp | Required Skills | Nice-to-Have | Red Flags | Recommendation |
|---|---|---|---|---|---|---|---|
| 1 | Jane Smith | 95% | 7 | 6/6 | Fintech, Technical | None | Strong Yes |
| 2 | John Chen | 92% | 8 | 6/6 | Technical, AI/ML | None | Strong Yes |
| 3 | Sarah Johnson | 88% | 6 | 5/6 | Fintech | Missing SQL | Yes |
| 4 | Mike Williams | 82% | 5 | 5/6 | None | None | Yes |
| 5 | Emily Brown | 78% | 4 | 5/6 | Technical | Below exp threshold | Phone Screen |
| 45 | Alex Turner | 45% | 2 | 2/6 | None | Low experience | No |
Tab 2: Skills Gap Analysis

| Skill | Candidates With | Candidates Without | Gap Severity |
|---|---|---|---|
| SQL | 156 (78%) | 44 (22%) | Low |
| Agile/Scrum | 134 (67%) | 66 (33%) | Medium |
| B2B SaaS | 112 (56%) | 88 (44%) | High |
| Stakeholder Mgmt | 178 (89%) | 22 (11%) | Low |
Tab 3: Diversity Metrics (Anonymized)

| Category | Pool Composition | Advancing (Top 20) | Notes |
|---|---|---|---|
| Technical Background | 45% of pool | 60% advancing | Positive signal |
| Non-Traditional Path | 22% of pool | 25% advancing | Inclusive screening |
| Career Changers | 15% of pool | 10% advancing | Review borderline cases |

Use Case 7 — Strategy: The “Board Whisperer”

Mission: Turn scattered notes into a board-ready presentation

The Problem

Executives accumulate strategy notes across documents, emails, Slack threads, and meeting notes. Synthesizing into a coherent board deck takes days. The result often lacks narrative flow.

Plugin Architecture

Plugin Name: board-whisperer

Components:
  - Slash Command: /strategy:board-deck <quarter> <source-folder>
  - Sub-agents:
      - metrics-compiler.md (pull KPIs from multiple sources)
      - narrative-builder.md (synthesize storyline)
      - risk-identifier.md (extract concerns, mitigations)
      - comparison-engine.md (vs. plan, vs. prior period)
  - Skills:
      - board-deck-templates/ (SaaS metrics, growth-stage, enterprise)
      - executive-communication/ (pyramid principle, MECE)
      - financial-modeling/ (ARR calculations, cohort analysis)
  - MCP: Google Slides (direct deck creation)

Workflow Execution

Step 1: User Invokes Command

/strategy:board-deck Q4-2025 /board-prep/Q4-2025/

Step 2: Source Material Inventory

Claude scans the folder:
  • Q4-financials-draft.xlsx (CFO’s numbers)
  • sales-pipeline-update.docx (CRO’s commentary)
  • product-roadmap-Q4.pdf (CPO’s update)
  • customer-health-scores.csv (CS team data)
  • competitive-intel-notes.txt (Strategy team)
  • CEO-board-notes.md (bullet points from CEO)
  • slack-export-exec-channel.json (relevant threads)

Step 3: Parallel Extraction

| Sub-agent | Sources | Extracts |
|---|---|---|
| Metrics Compiler | Financials, Pipeline, Health Scores | ARR, NRR, Pipeline, Churn |
| Narrative Builder | CEO Notes, Slack | Key messages, storyline |
| Risk Identifier | All sources | Concerns mentioned, mitigations |
| Comparison Engine | Financials | Actual vs. Plan, QoQ, YoY |

Step 4: Output Generation

PowerPoint Deck: Q4-2025-Board-Deck.pptx

Slide 1: Executive Summary

Q4 2025: Revenue Growth Accelerating, Unit Economics Improving

3 Key Messages for the Board:
  1. ARR hit $42M (+35% YoY), enterprise segment now 40% of new bookings
  2. Unit economics at all-time best: LTV:CAC ratio improved to 3.6x (from 3.2x)
  3. Recommendation: Double down on enterprise GTM; request $5M incremental investment
Slide 2: Financial Performance

| Metric | Q4 Actual | Q4 Plan | Variance | QoQ | YoY |
|---|---|---|---|---|---|
| ARR | $42.0M | $45.0M | -7% | +9% | +35% |
| Net New ARR | $3.5M | $4.0M | -13% | +5% | +28% |
| Gross Margin | 78% | 75% | +3pp | +1pp | +4pp |
| Operating Margin | -12% | -15% | +3pp | +2pp | +8pp |
Commentary: Slight miss on ARR driven by 2 enterprise deals slipping to Q1 (signed Jan 5). Margin improvement ahead of plan due to infrastructure optimization.

Slide 3: ARR Bridge
Q3 Ending ARR:     $38.5M
+ New Business:    +$2.8M
+ Expansion:       +$1.4M
- Churn:           -$0.5M
- Contraction:     -$0.2M
= Q4 Ending ARR:   $42.0M

Net Revenue Retention: 118%
Gross Revenue Retention: 94%
Logo Retention: 92%
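The bridge arithmetic above checks out, and a one-liner makes that verifiable. (NRR and GRR are typically trailing-12-month cohort metrics, so they are reported alongside the bridge rather than derived from a single quarter's movements.)

```python
# Sanity check of the ARR bridge: ending ARR equals starting ARR plus
# inflows (new business, expansion) minus outflows (churn, contraction),
# all figures in $M.
def arr_bridge(start, new_business, expansion, churn, contraction):
    return start + new_business + expansion - churn - contraction

q4_ending = arr_bridge(38.5, 2.8, 1.4, 0.5, 0.2)
assert round(q4_ending, 1) == 42.0  # matches the slide
```

A check like this is worth running on any Claude-generated bridge before it reaches a board deck, since a single transposed number breaks the whole slide.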
Slide 4: Sales Pipeline

| Stage | Deals | Weighted Value | Expected Close |
|---|---|---|---|
| Discovery | 45 | $2.3M | Q2 2026 |
| Evaluation | 28 | $4.1M | Q1 2026 |
| Negotiation | 12 | $2.8M | Q1 2026 |
| Verbal Commit | 5 | $1.2M | Jan 2026 |
| Total Pipeline | 90 | $10.4M | - |
Coverage Ratio: 2.6x (Target: 3.0x) — need more top-of-funnel.

Slide 5: Product and Roadmap

Shipped in Q4:
  • Enterprise SSO (unblocked 3 deals worth $800K)
  • Advanced Analytics Dashboard (top feature request)
  • Salesforce Integration v2 (resolved sync reliability issues)
Q1 2026 Roadmap:
  • Mobile App v2 (approval workflows)
  • AI-powered insights (beta)
  • SOC 2 Type II certification (enterprise blocker)
Slide 6: Risks and Mitigations

| Risk | Severity | Mitigation | Owner | Status |
|---|---|---|---|---|
| Enterprise concentration: Top 10 = 35% ARR | Medium | Accelerate mid-market motion | CRO | In progress |
| Key engineer attrition (2 departures) | High | Counter-offers, backfill recruiting | CTO | Mitigated |
| Competitor launched similar feature | Medium | Accelerate AI roadmap, differentiate on UX | CPO | Monitoring |
| Sales cycle lengthening (90 to 120 days) | Medium | Add sales engineers, improve demo | CRO | In progress |
Slide 7: The Ask

Board Approval Requested:
  1. $5M incremental investment in enterprise sales (4 AEs, 2 SEs, 1 SA)
    • Expected ROI: $8M incremental ARR by end of 2026
    • Payback: 18 months
  2. Stock option refresh pool (500K shares) for retention
    • Competitive pressure from well-funded startups
    • 3 key engineers received outside offers in Q4
  3. M&A exploration authorization for complementary analytics startup
    • Target identified, early conversations
    • Potential acqui-hire of 8-person team

Use Case 8 — Operations: The “Process Surgeon”

Mission: Find bottlenecks across SOPs and recommend optimizations

The Problem

Operations teams maintain dozens of SOPs (Standard Operating Procedures) that evolve independently. Bottlenecks hide in handoffs, approval gates, and manual steps. Process mining tools are expensive and complex.

Plugin Architecture

Plugin Name: process-surgeon

Components:
  - Slash Command: /ops:audit-process <sop-folder>
  - Sub-agents:
      - process-mapper.md (extract steps, owners, durations)
      - bottleneck-detector.md (identify delays, waste)
      - dependency-analyzer.md (find blocking relationships)
      - benchmark-comparator.md (vs. industry standards)
  - Skills:
      - lean-six-sigma/ (waste categories, improvement frameworks)
      - process-mining/ (cycle time analysis, wait time identification)
      - automation-patterns/ (RPA opportunities, integration points)

Workflow Execution

Step 1: User Invokes Command

/ops:audit-process /sops/customer-onboarding/

Step 2: SOP Inventory

Claude scans the folder:
  • 1-sales-handoff.docx
  • 2-legal-review.docx
  • 3-security-assessment.docx
  • 4-technical-setup.docx
  • 5-training-scheduling.docx
  • 6-go-live-checklist.docx
  • metrics-dashboard.xlsx

Step 3: Process Mapping

Claude extracts and visualizes:
CURRENT STATE PROCESS MAP:

Step 1: Sales Handoff (Day 1-2)
  → Sales sends intro email to CS
  → CS reviews deal notes in Salesforce
  → CS schedules kickoff call
  Duration: 2 days | Owner: Sales → CS

Step 2: Kickoff Call (Day 3)
  → Intro meeting with customer
  → Gather requirements
  → Set expectations
  Duration: 1 day | Owner: CS

Step 3: Legal Review (Day 4-10)
  → Customer sends DPA/security questionnaire
  → Legal reviews and responds
  → Back-and-forth negotiations
  Duration: 7 days | Owner: Legal
  BOTTLENECK IDENTIFIED

Step 4: Security Assessment (Day 11-17)
  → Customer IT sends security checklist
  → Our security team responds
  → Remediation if needed
  Duration: 7 days | Owner: Security
  BOTTLENECK IDENTIFIED

Step 5: Technical Setup (Day 18-22)
  → Provision tenant
  → Configure SSO
  → Data migration
  Duration: 5 days | Owner: Implementation

Step 6: Training (Day 23-25)
  → Schedule training sessions
  → Conduct admin training
  → Conduct end-user training
  Duration: 3 days | Owner: CS

Step 7: Go-Live (Day 26)
  → Cutover to production
  → Monitor for issues
  Duration: 1 day | Owner: CS + Support

TOTAL CYCLE TIME: 26 days

Step 4: Bottleneck Analysis

Excel Workbook: Onboarding-Process-Audit.xlsx

Tab 1: Bottleneck Summary

| Step | Description | Duration | % of Cycle | Type | Recommendation | Savings |
|---|---|---|---|---|---|---|
| 3 | Legal Review | 7 days | 27% | Manual gate | Pre-approved DPA template, self-serve portal | 5 days |
| 4 | Security Assessment | 7 days | 27% | Manual gate | Published security whitepaper, pre-filled questionnaire | 4 days |
| 1 | Sales Handoff | 2 days | 8% | Coordination | Automated handoff trigger in Salesforce | 1 day |
| 5 | Technical Setup | 5 days | 19% | Manual task | Self-serve provisioning, automated SSO config | 2 days |
Tab 2: Waste Analysis (Lean Categories)

| Waste Type | Examples Found | Impact | Recommendation |
|---|---|---|---|
| Waiting | Legal queue (avg 3 days before review starts) | 3 days | Add legal headcount or pre-approve templates |
| Defects | 40% of security questionnaires have errors | 2 days rework | Pre-fill with known answers |
| Over-processing | Custom DPA negotiation for standard deals | 2 days | Tier customers: standard DPA for deals under $50K |
| Motion | CS manually copies data between systems | 1 day | Integrate Salesforce with onboarding tool |
Tab 3: Optimized Process

| Step | Current | Optimized | Savings | How |
|---|---|---|---|---|
| Sales Handoff | 2 days | 1 day | 1 day | Auto-trigger on deal close |
| Kickoff Call | 1 day | 1 day | 0 days | Keep as-is (value-add) |
| Legal Review | 7 days | 2 days | 5 days | Pre-approved DPA, self-serve portal |
| Security Assessment | 7 days | 3 days | 4 days | Published security pack, pre-filled responses |
| Technical Setup | 5 days | 3 days | 2 days | Self-serve provisioning |
| Training | 3 days | 3 days | 0 days | Keep as-is (value-add) |
| Go-Live | 1 day | 1 day | 0 days | Keep as-is |
| TOTAL | 26 days | 14 days | 12 days (46%) | - |

Use Case 9 — Support: The “Escalation Tamer”

Mission: Summarize angry customer threads and draft de-escalation responses

The Problem

Escalated tickets are emotionally charged and context-heavy. Agents spend 20-30 minutes reading thread history before responding. Poor responses make things worse.

Plugin Architecture

Plugin Name: escalation-tamer

Components:
  - Slash Command: /support:deescalate <ticket-id>
  - Sub-agents:
      - thread-summarizer.md (extract key events, timeline)
      - sentiment-tracker.md (emotion trajectory)
      - issue-extractor.md (root cause, customer ask)
      - response-drafter.md (empathetic reply)
  - Skills:
      - crisis-communication/ (de-escalation frameworks)
      - empathy-frameworks/ (acknowledgment patterns)
      - compensation-guidelines/ (when/what to offer)
  - MCP: Zendesk, Intercom, Freshdesk (fetch ticket data)

Workflow Execution

Step 1: User Invokes Command

/support:deescalate TICKET-5678

Step 2: Thread Retrieval

Claude (via Zendesk MCP) fetches:
  • 23 messages over 12 days
  • 5 different agents involved
  • Customer escalated to VP Support on day 10

Step 3: Analysis Output

Document: TICKET-5678-Analysis.docx

Escalation Summary

Customer: Acme Corp (Enterprise, $180K ARR)
Contact: John Smith, VP Operations
Severity: Critical (executive escalation)
Duration: 12 days unresolved
Sentiment Trajectory: Frustrated → Angry → Threatening churn

Timeline of Events

| Day | Event | Owner | Customer Sentiment |
|---|---|---|---|
| 1 | Customer reports data sync failures | Agent A | Frustrated |
| 2 | Agent A requests logs, customer provides | Agent A | Cooperative |
| 3 | Agent A escalates to Tier 2 | Agent B | Neutral |
| 5 | Agent B says “known issue, no ETA” | Agent B | Angry |
| 7 | Customer asks for workaround, no response | - | Very Angry |
| 9 | Customer emails support manager | Agent C | Threatening |
| 10 | Customer emails VP Support | VP Support | Churn Risk |
| 12 | Current state: awaiting response | - | - |
Root Cause Analysis

Technical Issue: Data sync fails when records exceed 10K rows (undocumented limit)

Service Failures:
  1. 2-day gap with no response (Day 5-7)
  2. “No ETA” response without workaround or escalation path
  3. No proactive communication after identifying known issue
  4. Customer had to escalate twice to get attention
What Customer Wants
  1. Immediate: Workaround to unblock their team
  2. Short-term: Fix for the sync limit issue
  3. Long-term: Confidence this won’t happen again
  4. Unstated: Acknowledgment that we dropped the ball
Draft Response

Subject: Personal Apology and Resolution Path for Sync Issues

Dear John,

I’m Sarah Chen, VP of Customer Success at [Company]. I’ve personally reviewed your case and want to apologize for the experience you’ve had over the past 12 days.

You’re right to be frustrated. When you reported the sync failures on [date], you deserved a faster, more transparent response. Instead, you received delays and a “no ETA” that left your team stuck. That’s not the service level you signed up for, and I take full responsibility.

Here’s what we’re doing:

Immediate (Today): Our engineering team has identified the root cause (a 10K row limit that wasn’t documented). We’ve deployed a hotfix that increases the limit to 100K rows. Your syncs should now complete successfully. Can you confirm on your end?

This Week:
  • We’re assigning you a dedicated Technical Account Manager (TAM) who will be your single point of contact going forward
  • Your TAM will conduct a health check on your implementation to identify any other potential issues before they impact you
Service Credit: To acknowledge the disruption to your operations, we’re applying a 2-month service credit ($30K) to your account. This will appear on your next invoice.

Ongoing:
  • We’re updating our documentation to clearly state system limits
  • We’re implementing automated alerts when customers approach limits
  • I’m personally reviewing our escalation process to prevent similar gaps
I’d welcome a call to discuss this directly. I’m available [times] this week — please let me know what works for you.

Again, I’m sorry we let you down. We’ll earn back your trust.

Sincerely,
Sarah Chen
VP Customer Success
[Direct phone]

De-escalation Framework Used
  1. Acknowledge emotion: “You’re right to be frustrated”
  2. Take ownership: “I take full responsibility”
  3. Explain root cause (without excuses): Technical limit + service gaps
  4. Present solution with timeline and accountability
  5. Rebuild trust: Credit + TAM + process improvements
  6. Open door: Direct contact, call offer

Use Case 10 — Data: The “Dashboard Translator”

Mission: Turn data visualizations into executive-ready narratives

The Problem

Data teams create beautiful dashboards, but executives want narratives, not charts. Translating “what the data shows” into “what it means and what to do” requires business context that analysts often lack.

Plugin Architecture

Plugin Name: dashboard-translator

Components:
  - Slash Command: /data:translate-dashboard <image or data file>
  - Sub-agents:
      - chart-reader.md (extract data from visualizations)
      - trend-detector.md (identify patterns, anomalies)
      - insight-synthesizer.md (so-what, now-what)
      - benchmark-comparator.md (vs. targets, vs. industry)
  - Skills:
      - executive-communication/ (pyramid principle, BLUF)
      - industry-benchmarks/ (SaaS metrics, e-commerce, fintech)
      - statistical-significance/ (when changes matter)

Workflow Execution

Step 1: User Invokes Command

/data:translate-dashboard Q4-metrics-dashboard.png
(Or with raw data: /data:translate-dashboard Q4-metrics.csv)

Step 2: Data Extraction

Claude analyzes the dashboard image:
  • 4 line charts (ARR, MRR, NRR, Churn)
  • 2 bar charts (New vs. Expansion revenue, Sales by segment)
  • 3 KPI cards (ARR, Growth Rate, LTV:CAC)
  • 1 cohort table (retention by signup month)

Step 3: Pattern Detection

TRENDS IDENTIFIED:

1. ARR Growth: Accelerating
   - Q1: +8% QoQ
   - Q2: +10% QoQ
   - Q3: +11% QoQ
   - Q4: +12% QoQ
   Pattern: Consistent acceleration over 4 quarters

2. Segment Mix Shift:
   - Enterprise: 25% → 40% of new bookings
   - Mid-market: 50% → 45%
   - SMB: 25% → 15%
   Pattern: Successful upmarket motion

3. Cohort Retention:
   - 2023 cohorts: 85% 12-month retention
   - 2024 cohorts: 91% 12-month retention
   Pattern: Product improvements driving stickiness

4. ANOMALY: Q4 churn spike
   - October: 1.2% monthly churn (normal)
   - November: 2.8% monthly churn (elevated)
   - December: 1.4% monthly churn (normal)
   Investigation needed: November outlier

Step 4: Output Generation

Document: Q4-Dashboard-Insights.docx

Executive Summary

Bottom Line: Q4 demonstrates strong momentum with accelerating growth and improving unit economics. One anomaly (November churn spike) requires investigation but doesn’t change the overall positive trajectory.

Headline: ARR growth accelerating; enterprise motion working; product stickiness improving.

Key Insights

1. Revenue Growth Accelerating

What the data shows: ARR grew 12% QoQ in Q4, up from 8% in Q1. This is the fourth consecutive quarter of acceleration.

What it means: Growth is compounding, not plateauing. The business is finding new levers (enterprise segment, expansion revenue) that supplement the core SMB motion.

What to do: Continue investing in enterprise GTM. Consider raising growth targets for 2026 planning.

Supporting data:
| Quarter | QoQ Growth | YoY Growth |
|---|---|---|
| Q1 2025 | 8% | 28% |
| Q2 2025 | 10% | 31% |
| Q3 2025 | 11% | 33% |
| Q4 2025 | 12% | 35% |
2. Enterprise Motion Working

What the data shows: Enterprise deals now represent 40% of new bookings, up from 25% a year ago. Average deal size increased from $45K to $78K.

What it means: The upmarket push is succeeding. Enterprise customers have longer sales cycles but higher LTV and lower churn.

What to do: Validate with cohort analysis that enterprise retention is indeed higher. If confirmed, consider reallocating more resources from SMB to enterprise.

3. Product Stickiness Improving

What the data shows: 2024 signup cohorts show 91% 12-month retention vs. 85% for 2023 cohorts.

What it means: Product improvements (likely the new analytics dashboard and integrations shipped in 2024) are creating more value for customers.

What to do: Identify which features correlate with retention. Double down on those in the 2026 roadmap.

4. November Churn Anomaly

What the data shows: November monthly churn spiked to 2.8% vs. typical 1.2-1.5%.

What it means: Something unusual happened in November. Could be:
  • Specific large customer(s) churned
  • Billing issue causing involuntary churn
  • Seasonal budget cuts
  • Competitive displacement
What to do:
  1. Pull list of November churned customers
  2. Segment by reason (voluntary vs. involuntary)
  3. If concentrated in specific accounts, conduct exit interviews
  4. If billing-related, audit payment failure recovery process
Risks to Monitor

| Risk | Indicator | Current | Threshold | Status |
|---|---|---|---|---|
| Enterprise concentration | Top 10 customer % of ARR | 35% | Under 40% | Watch |
| Sales cycle lengthening | Days to close (enterprise) | 120 days | Under 100 days | Watch |
| Pipeline coverage | Pipeline / Target | 2.6x | Over 3.0x | Watch |
Recommended Actions
  1. Immediate: Investigate November churn spike; report findings to leadership within 1 week
  2. Q1 Priority: Build enterprise-specific cohort analysis to validate retention hypothesis
  3. 2026 Planning: Model scenarios with accelerated enterprise investment
  4. Dashboard Enhancement: Add leading indicators (NPS trend, support ticket volume, feature adoption)

Creating Custom Plugins — Full Walkthrough

Example: The “Contract Cliff-Detector”

Mission: Alert when contracts are approaching auto-renewal deadlines

Step 1: Plan Your Plugin

Use Case: Legal/Finance teams need to know which contracts
are approaching auto-renewal deadlines so they can:
- Renegotiate unfavorable terms
- Cancel unwanted subscriptions
- Budget for renewals

Components Needed:
- Slash command: /contracts:find-renewals <folder> <days-ahead>
- Sub-agent: Contract parser (extract renewal terms)
- Skill: Auto-renewal clause patterns
- Hook: Weekly scheduled scan (optional)
- MCP: Google Calendar (create reminder events)

Step 2: Create Directory Structure

mkdir -p contract-cliff-detector/.claude-plugin
mkdir -p contract-cliff-detector/commands
mkdir -p contract-cliff-detector/agents
mkdir -p contract-cliff-detector/skills/renewal-patterns
cd contract-cliff-detector

Step 3: Create Manifest

Create .claude-plugin/plugin.json:
{
  "name": "contract-cliff-detector",
  "version": "1.0.0",
  "description": "Scan contracts for auto-renewal deadlines and alert before it's too late",
  "author": {
    "name": "Your Legal Team",
    "email": "legal@company.com"
  },
  "keywords": ["legal", "contracts", "renewals", "compliance", "procurement"],
  "repository": "https://github.com/yourcompany/contract-cliff-detector"
}

Step 4: Create Slash Command

Create commands/find-renewals.md:
---
description: Scan contracts for upcoming auto-renewal deadlines
arguments: <contract-folder> [days-ahead]
---
# Contract Renewal Scanner
Scan all contracts in the specified folder and identify those with
auto-renewal clauses approaching their notice deadlines.
## Arguments
- **Contract Folder**: Path to folder containing contracts (PDF, DOCX)
- **Days Ahead** (optional): How far ahead to look (default: 90 days)
User provided: $ARGUMENTS
## Process
### Step 1: Inventory Contracts
List all PDF and DOCX files in the specified folder.
Report total count to user.
### Step 2: Parallel Extraction
For each contract, spawn a sub-agent (renewal-extractor) to find:
- Contract parties (us and counterparty)
- Contract type (SaaS, MSA, NDA, etc.)
- Initial term end date
- Auto-renewal clause (yes/no)
- Renewal term length (e.g., 1 year, 3 years)
- Notice period required (e.g., 30 days, 90 days)
- Notice deadline (calculated: term end minus notice period)
### Step 3: Filter and Alert
Filter contracts where:
- Auto-renewal = Yes
- Notice deadline <= today + [days-ahead]
### Step 4: Generate Output
Create Excel workbook with:
**Tab 1: Urgent Renewals** (notice deadline within 30 days)
Columns: Contract, Counterparty, Annual Value, Notice Deadline,
         Days Remaining, Action Required
Conditional formatting: Red if <14 days, Yellow if <30 days
**Tab 2: Upcoming Renewals** (notice deadline within [days-ahead])
Same columns, sorted by deadline
**Tab 3: All Contracts**
Full inventory with renewal terms extracted
**Tab 4: No Renewal Clause**
Contracts without auto-renewal (for reference)
Save as: contract-renewal-alerts.xlsx
### Step 5: Calendar Integration (if MCP available)
For each urgent renewal, create calendar event:
- Title: "CONTRACT RENEWAL DEADLINE: [Counterparty]"
- Date: Notice deadline
- Description: Contract details, action required
- Reminder: 7 days before, 1 day before
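The Step 3 filter boils down to date arithmetic. A minimal sketch, assuming each sub-agent's results have been collected into dicts (the field names here are illustrative, not the plugin's actual schema; note that past-due deadlines are deliberately kept, so already-missed notices surface too):

```python
from datetime import date, timedelta

def renewals_in_window(contracts, today, days_ahead=90):
    """Keep auto-renewing contracts whose notice deadline is on or before
    today + days_ahead, matching the filter described in Step 3."""
    cutoff = today + timedelta(days=days_ahead)
    hits = [c for c in contracts
            if c["auto_renewal"] and c["notice_deadline"] <= cutoff]
    return sorted(hits, key=lambda c: c["notice_deadline"])

contracts = [
    {"name": "Vendor A", "auto_renewal": True,  "notice_deadline": date(2026, 3, 1)},
    {"name": "Vendor B", "auto_renewal": True,  "notice_deadline": date(2026, 9, 1)},
    {"name": "Vendor C", "auto_renewal": False, "notice_deadline": date(2026, 2, 1)},
]
print([c["name"] for c in renewals_in_window(contracts, today=date(2026, 1, 15))])
# ['Vendor A']
```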

Step 5: Create Sub-agent

Create agents/renewal-extractor.md:
---
name: renewal-extractor
description: Extract renewal terms from a single contract
model: sonnet-4.5
tools: [Read]
permissions:
  file_access: read_only
---
You are a contract analyst specializing in renewal terms. Your task
is to extract auto-renewal information from a single contract.
## Instructions
1. Read the entire contract
2. Find the "Term" or "Duration" section
3. Look for auto-renewal language (see patterns below)
4. Extract specific values
5. Calculate notice deadline
## Common Auto-Renewal Patterns
Look for phrases like:
- "automatically renew for successive [period] terms"
- "shall renew unless [party] provides written notice"
- "evergreen clause"
- "continuous renewal"
- "unless terminated with [X] days notice"
## Return Format
Return ONLY valid JSON:
{
  "contract_file": "filename.pdf",
  "parties": {
    "us": "Our Company Name",
    "counterparty": "Vendor/Customer Name"
  },
  "contract_type": "SaaS Agreement | MSA | NDA | SOW | License | Other",
  "effective_date": "2024-01-15",
  "initial_term": {
    "end_date": "2025-01-14",
    "length": "1 year"
  },
  "auto_renewal": {
    "has_auto_renewal": true,
    "renewal_term": "1 year",
    "notice_period_days": 30,
    "notice_deadline": "2024-12-15",
    "exact_clause": "This Agreement shall automatically renew..."
  },
  "financial": {
    "annual_value": 50000,
    "currency": "USD"
  },
  "termination": {
    "can_terminate_for_convenience": true,
    "termination_notice_days": 30
  }
}
If no auto-renewal clause exists:
{
  "contract_file": "filename.pdf",
  "parties": {},
  "auto_renewal": {
    "has_auto_renewal": false,
    "notes": "Contract expires on end date with no auto-renewal"
  }
}
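Because sub-agents return free-form text that should be JSON, the orchestrating command benefits from validating each reply before aggregating. A sketch of such a check; the derived-deadline verification is an addition for illustration, not part of the plugin spec:

```python
import json
from datetime import date, timedelta

# A trimmed copy of the sample payload above, embedded for a self-contained check.
SAMPLE = """
{
  "contract_file": "filename.pdf",
  "initial_term": {"end_date": "2025-01-14", "length": "1 year"},
  "auto_renewal": {
    "has_auto_renewal": true,
    "renewal_term": "1 year",
    "notice_period_days": 30,
    "notice_deadline": "2024-12-15"
  }
}
"""

def check_payload(raw):
    """Parse one extractor reply and verify the deadline arithmetic holds."""
    data = json.loads(raw)
    renewal = data["auto_renewal"]
    if renewal.get("has_auto_renewal"):
        end = date.fromisoformat(data["initial_term"]["end_date"])
        deadline = date.fromisoformat(renewal["notice_deadline"])
        expected = end - timedelta(days=renewal["notice_period_days"])
        if deadline != expected:
            raise ValueError(f"deadline {deadline} does not match {expected}")
    return data

parsed = check_payload(SAMPLE)
print(parsed["auto_renewal"]["notice_deadline"])  # 2024-12-15
```

The sample payload is internally consistent: January 14, 2025 minus 30 days is December 15, 2024.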

Step 6: Create Skill

Create skills/renewal-patterns/SKILL.md:
---
name: renewal-patterns
description: Common auto-renewal clause patterns and terminology.
  Use when analyzing contracts for renewal terms.
---
# Auto-Renewal Clause Patterns
## Standard Language Variations
### Pattern 1: Evergreen with Notice
"This Agreement shall automatically renew for successive one (1) year
terms unless either party provides written notice of non-renewal at
least thirty (30) days prior to the end of the then-current term."
Extract:
- Renewal term: 1 year
- Notice period: 30 days
- Notice type: Written
### Pattern 2: Silent Renewal
"Unless terminated in accordance with Section [X], this Agreement
shall continue in effect indefinitely."
Extract:
- This is NOT auto-renewal; it's an evergreen term
- Look in the termination section for notice requirements
### Pattern 3: Price Escalation on Renewal
"Upon renewal, fees shall increase by the greater of (i) 5% or
(ii) the Consumer Price Index change for the prior 12 months."
Extract:
- Renewal terms: Standard
- Flag: Price escalation clause (risk)
### Pattern 4: Multi-Year Commitment
"Initial term of three (3) years, automatically renewing for
additional three (3) year terms."
Extract:
- Renewal term: 3 years (long commitment!)
- Flag: Multi-year auto-renewal (high risk)
## Notice Deadline Calculation
Formula:
Notice Deadline = Term End Date - Notice Period Days
Example:
- Term ends: December 31, 2025
- Notice period: 90 days
- Notice deadline: October 2, 2025
## Risk Flags
Mark as HIGH RISK if:
- Notice period > 60 days (easy to miss)
- Renewal term > 1 year (long commitment)
- Price escalation on renewal
- No termination for convenience
Mark as MEDIUM RISK if:
- Notice period 30-60 days
- Renewal term = 1 year
- Notice must be "written" (vs. email OK)
Mark as LOW RISK if:
- Notice period < 30 days
- Month-to-month after initial term
- Email notice accepted
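The risk lists above overlap (a contract can match criteria from more than one tier), so any implementation needs a precedence rule. A sketch that checks highest risk first, together with the notice-deadline formula; the function and argument names are illustrative:

```python
from datetime import date, timedelta

# The notice-deadline formula from above: term end minus notice period.
deadline = date(2025, 12, 31) - timedelta(days=90)
print(deadline)  # 2025-10-02

def renewal_risk(notice_period_days, renewal_term_years,
                 price_escalation=False, convenience_termination=True,
                 email_notice_ok=True):
    """Classify renewal risk per the flag lists above, highest tier first."""
    if (notice_period_days > 60 or renewal_term_years > 1
            or price_escalation or not convenience_termination):
        return "HIGH"
    if (30 <= notice_period_days <= 60 or renewal_term_years == 1
            or not email_notice_ok):
        return "MEDIUM"
    return "LOW"  # short notice, month-to-month style term, email notice OK

print(renewal_risk(90, 3))  # HIGH: long notice period and multi-year term
print(renewal_risk(45, 1))  # MEDIUM: 30-60 day notice, 1-year renewal
print(renewal_risk(15, 0))  # LOW: short notice, month-to-month after initial term
```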

Step 7: Test Plugin

# Test locally before installing
claude --plugin-dir ./contract-cliff-detector
# In Claude session:
/contract-cliff-detector:find-renewals /contracts/vendor-agreements/ 90

Step 8: Install Plugin

# Install to user scope (personal use)
claude plugin install ./contract-cliff-detector --scope user
# Or install to project scope (share with team via git)
claude plugin install ./contract-cliff-detector --scope project

Step 9: Distribute (Optional)

Option A: Share via Git
git init
git add .
git commit -m "Initial contract-cliff-detector plugin"
git remote add origin https://github.com/yourcompany/contract-cliff-detector
git push -u origin main
# Team members install with:
claude plugin install https://github.com/yourcompany/contract-cliff-detector
Option B: Submit to Plugin Marketplace
Coming soon: Anthropic plugin marketplace submission
Follow https://docs.anthropic.com/plugins for updates

Real-World Plugin Examples

Example 1: Sales “Prospect Research”

Workflow:
User: /sales:research-prospect "Acme Corp"

Claude orchestrates:

1. MCP: Crunchbase
   → Funding: $15M Series A (2024)
   → Investors: Sequoia, a16z
   → Employee count: 120
   → Growth: +50% headcount YoY

2. MCP: LinkedIn Sales Navigator
   → Decision makers: CTO, VP Engineering, VP Product
   → Recent hires: 5 engineers, 2 PMs in last quarter
   → Org chart: Engineering reports to CTO

3. Web scraping (via Browser MCP)
   → Product: B2B SaaS for inventory management
   → Pricing: Not public (request demo)
   → Tech stack: React, Python, AWS (from job postings)

4. MCP: Salesforce
   → Existing record? No
   → Past interactions? No
   → Similar customers? Yes - 3 inventory management companies
Output: Prospect-Acme-Corp.docx

Company Profile

| Attribute | Value |
| --- | --- |
| Company | Acme Corp |
| Industry | B2B SaaS - Inventory Management |
| Stage | Series A ($15M) |
| Size | 120 employees |
| Tech Stack | React, Python, AWS |
| Growth Signal | +50% headcount YoY |

Key Contacts

| Name | Title | LinkedIn | Notes |
| --- | --- | --- | --- |
| Jane Smith | CTO | /in/janesmith | Decision maker for dev tools |
| John Doe | VP Engineering | /in/johndoe | Reports to CTO, 15 direct reports |
| Sarah Lee | VP Product | /in/sarahlee | Owns roadmap, evaluation criteria |
Pain Points (Inferred from Job Postings)
  1. “Scaling challenges” mentioned in 3 engineering roles
  2. “Data pipeline” issues — hiring 2 data engineers
  3. “Manual processes” — looking for ops automation
Discovery Questions (MEDDIC Framework)
  1. Metrics: “How are you measuring deployment velocity today?”
  2. Economic Buyer: “Who owns the budget for developer tools?”
  3. Decision Criteria: “What’s most important: speed, cost, or reliability?”
  4. Decision Process: “Walk me through how you evaluated your last tool purchase”
  5. Identify Pain: “You mentioned scaling challenges — tell me more”
  6. Champion: “Who on your team would benefit most from solving this?”

Example 2: Finance “Expense Categorization”

Workflow:
User: /finance:categorize-expenses /receipts/Q4-2025/

Claude processes:

Folder contains: 127 receipt images (photos, PDFs, screenshots)

Parallel processing (127 sub-agents):

Each sub-agent:
1. OCR extracts text from receipt image
2. Identifies: Merchant, date, amount, line items
3. Categorizes per company expense policy
4. Flags policy violations
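Steps 3-4 (categorize and flag) might look like the sketch below. The policy limits and field names are hypothetical, chosen to mirror the example violations in this report:

```python
def flag_violations(receipt, preapproved=False):
    """Return policy flags for one parsed receipt; limits are illustrative."""
    flags = []
    if receipt["category"] == "Meals" and receipt["amount"] > 100:
        flags.append("Meal exceeds $100 limit")
    if (receipt["category"] == "Equipment" and receipt["amount"] > 500
            and not preapproved):
        flags.append("Equipment over $500 not pre-approved")
    if not receipt.get("line_items"):
        flags.append("Missing receipt detail")
    return flags

dinner = {"merchant": "Capital Grille", "category": "Meals",
          "amount": 312, "line_items": ["dinner for 4"]}
print(flag_violations(dinner))  # ['Meal exceeds $100 limit']
```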
Results aggregated — Output: Q4-Expense-Report.xlsx

Tab 1: Categorized Expenses

| Date | Merchant | Category | Amount | Employee | Status | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| 12/15 | Delta Airlines | Travel | $450 | J. Smith | OK | SFO-NYC flight |
| 12/15 | Marriott | Travel | $289 | J. Smith | OK | NYC hotel |
| 12/16 | Capital Grille | Meals | $312 | J. Smith | Flag | Amount over $100 |
| 12/16 | Uber | Transport | $45 | J. Smith | OK | Airport transfer |
| 12/17 | Best Buy | Equipment | $1,299 | M. Jones | Flag | Needs pre-approval |

Tab 2: Policy Violations

| Receipt | Violation | Policy | Action Required |
| --- | --- | --- | --- |
| Capital Grille $312 | Meal exceeds $100 limit | Meals Policy 3.2 | Manager approval + guest names |
| Best Buy $1,299 | Equipment over $500 not pre-approved | Procurement Policy 2.1 | VP approval required |
| Amazon $89 | Missing receipt detail | Documentation Policy 1.4 | Itemized receipt needed |

Tab 3: Summary by Category

| Category | Count | Total | % of Total | Budget | Variance |
| --- | --- | --- | --- | --- | --- |
| Travel | 34 | $12,450 | 42% | $15,000 | -17% |
| Meals | 45 | $4,230 | 14% | $5,000 | -15% |
| Software | 23 | $8,900 | 30% | $8,000 | +11% |
| Equipment | 12 | $3,200 | 11% | $3,500 | -9% |
| Other | 13 | $890 | 3% | $1,000 | -11% |
| Total | 127 | $29,670 | 100% | $32,500 | -9% |

Tab 4: Employee Summary

| Employee | Receipts | Total | Violations | Status |
| --- | --- | --- | --- | --- |
| J. Smith | 45 | $8,900 | 2 | Review needed |
| M. Jones | 38 | $7,200 | 1 | Review needed |
| A. Lee | 28 | $6,800 | 0 | Approved |
| B. Chen | 16 | $6,770 | 0 | Approved |

Example 3: Product “Interview Synthesizer”

Workflow:
User: /product:synthesize-interviews /research/Q4-interviews/

Claude processes:

Folder contains: 20 interview transcripts (60-90 min each)

Parallel processing (20 sub-agents):

Each sub-agent reads one transcript and extracts:
- Jobs-to-be-done (what customer is trying to accomplish)
- Pain points (frustrations, workarounds)
- Feature requests (explicit asks)
- Quotes (verbatim compelling statements)
- Sentiment (overall satisfaction level)

Synthesis phase:
- Cluster pain points by theme
- Rank by frequency x intensity
- Identify patterns across segments
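The "frequency x intensity" ranking can be sketched as below. The intensity weights are made up for illustration, which is also why the resulting order need not match raw frequency:

```python
def rank_themes(themes, n_customers):
    """Score each theme as (mention rate) x (intensity weight), highest first."""
    scored = [(t["name"], t["mentions"] / n_customers * t["intensity"])
              for t in themes]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

themes = [
    {"name": "Reporting limitations",   "mentions": 15, "intensity": 3.0},
    {"name": "Mobile experience gaps",  "mentions": 12, "intensity": 2.0},
    {"name": "Integration reliability", "mentions": 10, "intensity": 3.0},
]
print([name for name, score in rank_themes(themes, n_customers=20)])
# ['Reporting limitations', 'Integration reliability', 'Mobile experience gaps']
```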
Output: Customer-Interview-Synthesis-Q4.docx

Executive Summary

20 customer interviews conducted in Q4 2025. Customer segments: Enterprise (8), Mid-market (7), SMB (5).

Top 3 Themes:
  1. Reporting limitations (15/20 customers, 75%)
  2. Mobile experience gaps (12/20 customers, 60%)
  3. Integration reliability (10/20 customers, 50%)
Theme 1: Reporting Limitations

Frequency: 15/20 customers (75%)
Intensity: High (8 mentioned as “major pain point”)
Segment: Strongest in Enterprise (7/8)

Specific Requests:
  • Custom report builder (12 requests)
  • Scheduled report delivery via email (8 requests)
  • Export to Google Sheets, not just Excel (6 requests)
  • Role-based report access (4 requests)
Recommendation: High priority for the 2026 roadmap. Consider an MVP with scheduled exports via email as a quick win, then a V1 basic custom report builder, then a V2 advanced builder with calculated fields and sharing.

Theme 2: Mobile Experience Gaps

Frequency: 12/20 customers (60%)
Intensity: Medium-High (5 mentioned considering alternatives)
Segment: Equal across all segments

Specific Requests:
  • Mobile approval workflows (10 requests)
  • Android stability fixes (5 requests)
  • Offline mode (4 requests)
  • Mobile notifications that actually work (3 requests)
Recommendation: Medium-high priority. Approval workflows are table stakes for enterprise; Android stability is a reputation risk.

Theme 3: Integration Reliability

Frequency: 10/20 customers (50%)
Intensity: High (7 described as “daily frustration”)
Segment: Strongest in Mid-market (6/7)

Specific Requests:
  • Sync reliability improvements (8 requests)
  • Better error notifications (6 requests)
  • Self-service sync troubleshooting (4 requests)
  • Sync status dashboard (3 requests)
Recommendation: Critical for retention. Integration issues directly correlate with churn risk. Prioritize reliability over new integrations.

Appendix: All Feature Requests (Ranked)
| Rank | Feature | Requests | Segment | Priority |
| --- | --- | --- | --- | --- |
| 1 | Custom report builder | 12 | Enterprise | High |
| 2 | Mobile approval workflows | 10 | All | High |
| 3 | Integration reliability | 8 | Mid-market | Critical |
| 4 | Scheduled report delivery | 8 | Enterprise | Medium |
| 5 | Android stability | 5 | All | High |
| 6 | Google Sheets export | 6 | Mid-market | Medium |
| 7 | Error notifications | 6 | Mid-market | High |
| 8 | Offline mobile mode | 4 | SMB | Low |
| 9 | Role-based reports | 4 | Enterprise | Medium |
| 10 | Self-serve sync tools | 4 | Mid-market | Medium |

Key Takeaways

Getting Started

  1. Start Small: Install pre-built plugins before building custom ones. The official library covers most common use cases.
  2. Test Before Scaling: Use claude --plugin-dir ./your-plugin to test locally before installing permanently.
  3. Leverage Community: 9,000+ community plugins already exist. Search before building from scratch.
  4. Focus on High-ROI: Target repetitive, time-consuming tasks that you do weekly or daily. Even a 30-minute time saving compounds.
  5. Iterate: Plugins can be customized after installation. Start with defaults, then refine based on your workflow.

Architecture Decisions

| Scenario | Recommended Approach |
| --- | --- |
| Process under 50 documents | Single Cowork session |
| Process 50-500 documents | Cowork with parallel sub-agents |
| Process 500+ documents | Batch API (Stage 1) + Cowork synthesis (Stage 2) |
| Need external data | Add MCP server connections |
| Need automation triggers | Add hooks for lifecycle events |
| Need consistent behavior | Add skills for domain knowledge |
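The document-count routing in the first three rows reduces to a tiny helper; the thresholds are the table's own, and the function itself is only an illustration:

```python
def processing_approach(doc_count):
    """Route a job per the decision table's document-count thresholds."""
    if doc_count < 50:
        return "Single Cowork session"
    if doc_count <= 500:
        return "Cowork with parallel sub-agents"
    return "Batch API (Stage 1) + Cowork synthesis (Stage 2)"

print(processing_approach(127))  # Cowork with parallel sub-agents
```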

Common Pitfalls

| Pitfall | Solution |
| --- | --- |
| Plugin doesn’t load | Check manifest location: .claude-plugin/plugin.json |
| Commands not appearing | Verify commands/ directory is at the plugin root, not inside .claude-plugin/ |
| Sub-agents failing | Ensure the model specified in frontmatter is available (e.g., sonnet-4.5) |
| MCP connection errors | Verify environment variables are set and credentials are valid |
| Slow performance | Use parallel sub-agents; consider Batch API for high-volume jobs |

Next Steps

  1. Download Claude Desktop: https://claude.com/download
  2. Enable Cowork: Click “Cowork” tab, grant folder access
  3. Install Your First Plugin: Cowork tab, then Plugins, then Browse, then Install
  4. Try a Built-in Command: Type / to see available commands
  5. Join the Community:

Conclusion

Claude Cowork plugins represent a fundamental shift in how knowledge workers interact with AI. Instead of generic conversations, you now have access to specialized, repeatable workflows that understand your domain.

The key insight: Plugins aren’t just about saving time (though they do — often 50-80% reduction in manual work). They’re about consistency and scalability. A plugin that extracts RFP requirements does it the same way every time, catches the same edge cases, and produces the same structured output — whether it’s your 1st RFP or your 100th.

What we covered:
  • How plugins work (5 components: commands, sub-agents, MCP, hooks, skills)
  • How to install and customize pre-built plugins
  • How to build your own plugins from scratch
  • 10 real-world use cases with full implementation details
  • Best practices for architecture and distribution
What’s next for you:
  1. Identify your “Monday morning dread” tasks — the repetitive work you procrastinate on or would benefit from delegating
  2. Find or build a plugin that automates 80% of it
  3. Reclaim those hours for work that actually requires human judgment
The future of knowledge work isn’t human vs. AI. It’s human with AI, and plugins are the bridge.