Co-Pilot · Updated 2 months ago

writing-skills
obra/superpowers/skills/writing-skills (obra · 28.1k)
Agent Score: 82
💡 Summary

A framework for applying Test-Driven Development principles to create, test, and refine AI agent skills through pressure scenarios and iterative refinement.

🎯 Target Audience

  • AI skill developers
  • Technical writers for AI systems
  • AI agent trainers
  • Product managers overseeing AI capabilities
  • QA engineers testing agent behavior

🤖 AI Roast: This skill is so meta it needs a skill to understand itself, creating a documentation ouroboros that could consume its own tail in recursive confusion.

Security Analysis: Medium Risk

The skill involves creating and executing subagent scenarios, which could lead to unsafe code generation if pressure tests include malicious prompts. Mitigation: run scenarios in sandboxed testing environments and validate all scenario definitions to prevent prompt injection through test cases.


---
name: writing-skills
description: Use when creating new skills, editing existing skills, or verifying skills work before deployment
---

Writing Skills

Overview

Writing skills IS Test-Driven Development applied to process documentation.

Personal skills live in agent-specific directories (~/.claude/skills for Claude Code, ~/.codex/skills for Codex).

You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).

Core principle: If you didn't watch an agent fail without the skill, you don't know if the skill teaches the right thing.

REQUIRED BACKGROUND: You MUST understand superpowers:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill adapts TDD to documentation.

Official guidance: For Anthropic's official skill authoring best practices, see anthropic-best-practices.md. This document provides additional patterns and guidelines that complement the TDD-focused approach in this skill.

What is a Skill?

A skill is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.

Skills are: Reusable techniques, patterns, tools, reference guides

Skills are NOT: Narratives about how you solved a problem once

TDD Mapping for Skills

| TDD Concept | Skill Creation |
|-------------|----------------|
| Test case | Pressure scenario with subagent |
| Production code | Skill document (SKILL.md) |
| Test fails (RED) | Agent violates rule without skill (baseline) |
| Test passes (GREEN) | Agent complies with skill present |
| Refactor | Close loopholes while maintaining compliance |
| Write test first | Run baseline scenario BEFORE writing skill |
| Watch it fail | Document exact rationalizations agent uses |
| Minimal code | Write skill addressing those specific violations |
| Watch it pass | Verify agent now complies |
| Refactor cycle | Find new rationalizations → plug → re-verify |

The entire skill creation process follows RED-GREEN-REFACTOR.

When to Create a Skill

Create when:

  • Technique wasn't intuitively obvious to you
  • You'd reference this again across projects
  • Pattern applies broadly (not project-specific)
  • Others would benefit

Don't create for:

  • One-off solutions
  • Standard practices well-documented elsewhere
  • Project-specific conventions (put in CLAUDE.md)
  • Mechanical constraints (if it's enforceable with regex/validation, automate it—save documentation for judgment calls)

Skill Types

Technique

Concrete method with steps to follow (condition-based-waiting, root-cause-tracing)

Pattern

Way of thinking about problems (flatten-with-flags, test-invariants)

Reference

API docs, syntax guides, tool documentation (office docs)

Directory Structure

skills/
  skill-name/
    SKILL.md              # Main reference (required)
    supporting-file.*     # Only if needed

Flat namespace - all skills in one searchable namespace

Separate files for:

  1. Heavy reference (100+ lines) - API docs, comprehensive syntax
  2. Reusable tools - Scripts, utilities, templates

Keep inline:

  • Principles and concepts
  • Code patterns (< 50 lines)
  • Everything else
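The layout rule above (every skill directory must contain a SKILL.md) is mechanical, so it can be checked with a short script. A minimal sketch, assuming a local skills/ root; the function name and message format are illustrative:

```python
import os

def check_skills_layout(root="skills"):
    """Report skill directories that are missing the required SKILL.md."""
    problems = []
    for name in sorted(os.listdir(root)):
        path = os.path.join(root, name)
        if not os.path.isdir(path):
            continue  # flat namespace: only directories are skills
        if not os.path.isfile(os.path.join(path, "SKILL.md")):
            problems.append(f"{name}: missing SKILL.md")
    return problems
```

Running this in CI keeps the flat namespace honest as skills accumulate.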

SKILL.md Structure

Frontmatter (YAML):

  • Only two fields supported: name and description
  • Max 1024 characters total
  • name: Use letters, numbers, and hyphens only (no parentheses, special chars)
  • description: Third-person, describes ONLY when to use (NOT what it does)
    • Start with "Use when..." to focus on triggering conditions
    • Include specific symptoms, situations, and contexts
    • NEVER summarize the skill's process or workflow (see CSO section for why)
    • Keep under 500 characters if possible
  ---
  name: Skill-Name-With-Hyphens
  description: Use when [specific triggering conditions and symptoms]
  ---

  # Skill Name

  ## Overview
  What is this? Core principle in 1-2 sentences.

  ## When to Use
  [Small inline flowchart IF decision non-obvious]
  Bullet list with SYMPTOMS and use cases
  When NOT to use

  ## Core Pattern (for techniques/patterns)
  Before/after code comparison

  ## Quick Reference
  Table or bullets for scanning common operations

  ## Implementation
  Inline code for simple patterns
  Link to file for heavy reference or reusable tools

  ## Common Mistakes
  What goes wrong + fixes

  ## Real-World Impact (optional)
  Concrete results
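The frontmatter constraints above are mechanical, and the skill itself recommends automating mechanical constraints rather than documenting them. A validation sketch in Python; the function name and error strings are illustrative, not part of the skill:

```python
import re

MAX_FRONTMATTER_CHARS = 1024            # limit stated above
NAME_RE = re.compile(r"^[A-Za-z0-9-]+$")  # letters, numbers, hyphens only

def validate_frontmatter(text):
    """Return a list of violations for a SKILL.md frontmatter body."""
    errors = []
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    extra = set(fields) - {"name", "description"}
    if extra:
        errors.append(f"unsupported fields: {sorted(extra)}")
    missing = {"name", "description"} - set(fields)
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if len(text) > MAX_FRONTMATTER_CHARS:
        errors.append("frontmatter exceeds 1024 characters")
    if not NAME_RE.match(fields.get("name", "")):
        errors.append("name must use letters, numbers, and hyphens only")
    return errors
```

An empty return means the structural rules pass; the judgment calls (does the description describe triggers, not workflow?) still need a human or agent review.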

Claude Search Optimization (CSO)

Critical for discovery: Future Claude needs to FIND your skill

1. Rich Description Field

Purpose: Claude reads description to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"

Format: Start with "Use when..." to focus on triggering conditions

CRITICAL: Description = When to Use, NOT What the Skill Does

The description should ONLY describe triggering conditions. Do NOT summarize the skill's process or workflow in the description.

Why this matters: Testing revealed that when a description summarizes the skill's workflow, Claude may follow the description instead of reading the full skill content. A description saying "code review between tasks" caused Claude to do ONE review, even though the skill's flowchart clearly showed TWO reviews (spec compliance then code quality).

When the description was changed to just "Use when executing implementation plans with independent tasks" (no workflow summary), Claude correctly read the flowchart and followed the two-stage review process.

The trap: Descriptions that summarize workflow create a shortcut Claude will take. The skill body becomes documentation Claude skips.

  # ❌ BAD: Summarizes workflow - Claude may follow this instead of reading skill
  description: Use when executing plans - dispatches subagent per task with code review between tasks

  # ❌ BAD: Too much process detail
  description: Use for TDD - write test first, watch it fail, write minimal code, refactor

  # ✅ GOOD: Just triggering conditions, no workflow summary
  description: Use when executing implementation plans with independent tasks in the current session

  # ✅ GOOD: Triggering conditions only
  description: Use when implementing any feature or bugfix, before writing implementation code

Content:

  • Use concrete triggers, symptoms, and situations that signal this skill applies
  • Describe the problem (race conditions, inconsistent behavior) not language-specific symptoms (setTimeout, sleep)
  • Keep triggers technology-agnostic unless the skill itself is technology-specific
  • If skill is technology-specific, make that explicit in the trigger
  • Write in third person (injected into system prompt)
  • NEVER summarize the skill's process or workflow
  # ❌ BAD: Too abstract, vague, doesn't include when to use
  description: For async testing

  # ❌ BAD: First person
  description: I can help you with async tests when they're flaky

  # ❌ BAD: Mentions technology but skill isn't specific to it
  description: Use when tests use setTimeout/sleep and are flaky

  # ✅ GOOD: Starts with "Use when", describes problem, no workflow
  description: Use when tests have race conditions, timing dependencies, or pass/fail inconsistently

  # ✅ GOOD: Technology-specific skill with explicit trigger
  description: Use when using React Router and handling authentication redirects
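Several of these description rules are amenable to a simple lint pass. A heuristic sketch; the checks mirror the guidance above, but the function itself is hypothetical and deliberately rough:

```python
import re

def lint_description(desc):
    """Heuristic checks for skill description fields (not exhaustive)."""
    warnings = []
    if not desc.startswith("Use when"):
        warnings.append('should start with "Use when..."')
    if re.search(r"\b(I|me|my)\b", desc):
        warnings.append("should be third person, not first person")
    if len(desc) > 500:
        warnings.append("should stay under 500 characters if possible")
    return warnings
```

No lint can catch a workflow summary disguised as a trigger; that still takes a human read of whether the description answers "when?" rather than "how?".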

2. Keyword Coverage

Use words Claude would search for:

  • Error messages: "Hook timed out", "ENOTEMPTY", "race condition"
  • Symptoms: "flaky", "hanging", "zombie", "pollution"
  • Synonyms: "timeout/hang/freeze", "cleanup/teardown/afterEach"
  • Tools: Actual commands, library names, file types
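To see why keyword coverage matters, here is a toy model of discovery: rank skills by keyword overlap between a query and each description. This is purely illustrative, not how Claude actually selects skills:

```python
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9-]+", text.lower()))

def match_skills(descriptions, query):
    """Rank skill names by keyword overlap with the query."""
    q = tokens(query)
    scored = sorted(
        ((len(q & tokens(desc)), name) for name, desc in descriptions.items()),
        reverse=True,
    )
    return [name for score, name in scored if score > 0]
```

A description containing "flaky", "race conditions", and "timing" surfaces for far more symptom phrasings than one that just says "for async testing".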

3. Descriptive Naming

Use active voice, verb-first:

  • creating-skills not skill-creation
  • condition-based-waiting not async-test-helpers

4. Token Efficiency (Critical)

Problem: getting-started and frequently-referenced skills load into EVERY conversation. Every token counts.

Target word counts:

  • getting-started workflows: <150 words each
  • Frequently-loaded skills: <200 words total
  • Other skills: <500 words (still be concise)

Techniques:

Move details to tool help:

  # ❌ BAD: Document all flags in SKILL.md
  search-conversations supports --text, --both, --after DATE, --before DATE, --limit N

  # ✅ GOOD: Reference --help
  search-conversations supports multiple modes and filters. Run --help for details.

Use cross-references:

  # ❌ BAD: Repeat workflow details
  When searching, dispatch subagent with template...
  [20 lines of repeated instructions]

  # ✅ GOOD: Reference other skill
  Always use subagents (50-100x context savings).
  REQUIRED: Use [other-skill-name] for workflow.

Compress examples:

  # ❌ BAD: Verbose example (42 words)
  your human partner: "How did we handle authentication errors in React Router before?"
  You: I'll search past conversations for React Router authentication patterns.
  [Dispatch subagent with search query: "React Router authentication error handling 401"]

  # ✅ GOOD: Minimal example (20 words)
  Partner: "How did we handle auth errors in React Router?"
  You: Searching... [Dispatch subagent → synthesis]

Eliminate redundancy:

  • Don't repeat what's in cross-referenced skills
  • Don't explain what's obvious from command
  • Don't include multiple examples of same pattern

Verification:

  wc -w skills/path/SKILL.md
  # getting-started workflows: aim for <150 each
  # Other frequently-loaded: aim for <200 total
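The same budget check can be scripted across many skills at once. A sketch complementing the wc command above; the default budget and report format are illustrative:

```python
def word_count_report(paths, budget=500):
    """Return (path, word_count) pairs for SKILL.md files over budget."""
    over = []
    for path in paths:
        with open(path) as f:
            words = len(f.read().split())
        if words > budget:
            over.append((path, words))
    return over
```

Run it with a tighter budget (150 or 200) for getting-started and frequently-loaded skills, per the targets above.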

Name by what you DO or core insight:

  • condition-based-waiting > async-test-helpers
  • using-skills not skill-usage
  • flatten-with-flags > data-structure-refactoring
  • root-cause-tracing > debugging-techniques

Gerunds (-ing) work well for processes:

  • `creating-skills`
5-Dim Analysis

  • Clarity: 8/10
  • Novelty: 8/10
  • Utility: 9/10
  • Completeness: 9/10
  • Maintainability: 7/10
Pros & Cons

Pros

  • Provides rigorous methodology for skill validation
  • Emphasizes empirical testing over theoretical documentation
  • Includes concrete structure and templates
  • Focuses on discoverability through search optimization

Cons

  • High cognitive overhead for beginners
  • Meta-nature can be confusing
  • Requires understanding of TDD concepts first
  • Process-heavy for simple skills

Related Skills

skill-creator

Co-Pilot tool · Grade B · 76/100

“A skill about making skills is the ultimate meta, but it's like reading the manual on how to read manuals.”

pytorch

Code Lib tool · Grade S · 92/100

“It's the Swiss Army knife of deep learning, but good luck figuring out which of the 47 installation methods is the one that won't break your system.”

agno

Code Lib tool · Grade S · 90/100

“It promises to be the Kubernetes for agents, but let's see if developers have the patience to learn yet another orchestration layer.”

Disclaimer: This content is sourced from GitHub open source projects for display and rating purposes only.

Copyright belongs to the original author obra.