💡 Summary
Ralph Wiggum enables autonomous AI-assisted software development through spec-driven, iterative coding.
🎯 Who It's For
🤖 AI Take: "Looks like a strong performer, but don't let the setup scare people off."
Risk: Medium. Before use, check whether it executes shell/command-line instructions; whether it makes outbound network requests (SSRF/data exfiltration); and how API keys/tokens are obtained, stored, and potentially leaked. Run with least privilege, and audit the code and dependencies before enabling it in production.
🧠 Ralph Wiggum
Autonomous AI coding with spec-driven development
⚠️ Use at your own risk. This tool grants AI agents significant autonomy over your codebase and system. Review all changes and use in isolated environments when possible.
Quick Start
Install as Agent Skill (New!)
# Using Vercel's add-skill
npx add-skill fstandhartinger/ralph-wiggum

# Using OpenSkills
openskills install fstandhartinger/ralph-wiggum
Full Setup with Interview
AI Agent? Start Here:
Read and execute INSTALLATION.md for a guided setup with interactive interview.
Human Developer? Start Here:
Read INSTALL.md for manual setup instructions.
What is Ralph Wiggum?
Ralph Wiggum (in this flavour) combines Geoffrey Huntley's original iterative bash loop with SpecKit-style specifications for fully autonomous AI-assisted software development.
Key Features
- 🔄 Iterative Self-Correction — Each loop picks ONE task, implements it, verifies, and commits
- 📋 Spec-Driven Development — Professional specifications with clear acceptance criteria
- 🎯 Completion Verification — Agent only outputs <promise>DONE</promise> when criteria are 100% met
- 🧠 Fresh Context Each Loop — Every iteration starts with a clean context window
- 📝 Shared State on Disk — IMPLEMENTATION_PLAN.md persists between loops
How It Works
Based on Geoffrey Huntley's methodology:
┌─────────────────────────────────────────────────────────────┐
│ RALPH LOOP │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Orient │───▶│ Pick Task │───▶│ Implement │ │
│ │ Read specs │ │ from Plan │ │ & Test │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │ │
│ ┌────────────────────────────────────────┘ │
│ ▼ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Verify │───▶│ Commit │───▶│ Output DONE │ │
│ │ Criteria │ │ & Push │ │ (if passed) │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │ │
│ ┌────────────────────────────────────────┘ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Bash loop checks for <promise>DONE</promise> │ │
│ │ If found: next iteration | If not: retry │ │
│ └──────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
The Magic Phrase
The agent outputs <promise>DONE</promise> ONLY when:
- All acceptance criteria are verified
- Tests pass
- Changes are committed and pushed
The bash loop checks for this phrase. If not found, it retries.
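The retry mechanism can be sketched in a few lines of bash. This is a minimal illustration of the idea, not the real ralph-loop.sh; `run_agent` is a hypothetical stand-in for whatever CLI invokes your coding agent.

```shell
#!/usr/bin/env bash
# Minimal sketch of the Ralph loop idea.
run_agent() {
  # Hypothetical stand-in: replace with your agent CLI invocation.
  echo "<promise>DONE</promise>"
}

max_iters=${1:-0}   # 0 = unlimited
i=0
while :; do
  i=$((i + 1))
  output=$(run_agent)
  if grep -q '<promise>DONE</promise>' <<<"$output"; then
    # Criteria verified: this task is done; a real loop would move on.
    echo "Iteration $i complete"
    break
  fi
  # Phrase absent: retry with a fresh context on the next iteration.
  if [ "$max_iters" -gt 0 ] && [ "$i" -ge "$max_iters" ]; then
    break
  fi
done
```

The key design point is that the loop inspects only the agent's output text, so any agent CLI that can print the magic phrase can be dropped in.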
Two Modes
| Mode | Purpose | Command |
|------|---------|---------|
| build (default) | Pick spec/task, implement, test, commit | ./scripts/ralph-loop.sh |
| plan (optional) | Create detailed task breakdown from specs | ./scripts/ralph-loop.sh plan |
Planning is OPTIONAL
Most projects work fine directly from specs. The agent simply:
- Looks at the specs/ folder
- Picks the highest priority incomplete spec
- Implements it completely
Only use plan mode when you want a detailed breakdown of specs into smaller tasks.
Tip: Delete IMPLEMENTATION_PLAN.md to return to working directly from specs.
Installation
For AI Agents (Recommended)
Point your AI agent to this repo and say:
"Set up Ralph Wiggum in my project using https://github.com/fstandhartinger/ralph-wiggum"
The agent will read INSTALLATION.md and guide you through a lightweight, pleasant setup:
- Quick Setup (~1 min) — Create directories, download scripts
- Project Interview (~3-5 min) — Focus on your vision and goals, not technical minutiae
- Constitution — Create a guiding document for all future sessions
- Next Steps — Clear guidance on creating specs and starting Ralph
The interview prioritizes understanding what you're building and why over interrogating you about tech stack details. For existing projects, the agent can detect your stack automatically.
Manual Setup
See INSTALL.md for step-by-step manual instructions.
Usage
1. Create Specifications
Tell your AI what you want to build, or use /speckit.specify in Cursor:
/speckit.specify Add user authentication with OAuth
This creates specs/001-user-auth/spec.md with:
- Feature requirements
- Clear, testable acceptance criteria (critical!)
- Completion signal section
The key to good specs: Each spec needs acceptance criteria that are specific and testable. Not "works correctly" but "user can log in with Google and session persists across page reloads."
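As a sketch, a spec file following this advice might look like the heredoc below. The path matches the pattern described above; the wording and section names are illustrative, not the exact output of /speckit.specify.

```shell
# Illustrative spec file (contents are an example, not generated output).
mkdir -p specs/001-user-auth
cat > specs/001-user-auth/spec.md <<'EOF'
# 001: User Authentication (OAuth)

## Requirements
- Users can sign in with Google OAuth.

## Acceptance Criteria
- [ ] User can log in with Google and is redirected back to the app.
- [ ] Session persists across page reloads.
- [ ] Logout clears the session.

## Completion Signal
Output <promise>DONE</promise> only when every criterion above is verified.
EOF
```

Note that each criterion is independently checkable; "works correctly" would give the agent nothing concrete to verify.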
2. (Optional) Run Planning Mode
./scripts/ralph-loop.sh plan
Creates IMPLEMENTATION_PLAN.md with detailed task breakdown. This step is optional — most projects work fine directly from specs.
3. Run Build Mode
./scripts/ralph-loop.sh      # Unlimited iterations
./scripts/ralph-loop.sh 20   # Max 20 iterations
Each iteration:
- Picks the highest priority task
- Implements it completely
- Verifies acceptance criteria
- Outputs <promise>DONE</promise> only if criteria pass
- Bash loop checks for the phrase
- Context cleared, next iteration starts
Logging (All Output Captured)
Every loop run writes all output to log files in logs/:
- Session log: logs/ralph_*_session_YYYYMMDD_HHMMSS.log (entire run, including CLI output)
- Iteration logs: logs/ralph_*_iter_N_YYYYMMDD_HHMMSS.log (per-iteration CLI output)
- Codex last message: logs/ralph_codex_output_iter_N_*.txt
If something gets stuck, these logs contain the full verbose trace.
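A typical debugging session against this log layout might look like the sketch below. The file it creates exists only so the commands have something to match; with a real run you would skip that step and point the same commands at your actual logs/ directory.

```shell
# Sketch: inspecting logs after a run, using the naming pattern above.
mkdir -p logs
# Example file standing in for real output (remove for a real run).
printf 'starting iteration 1\n<promise>DONE</promise>\n' \
  > logs/ralph_claude_iter_1_20240101_120000.log

# Most recently written iteration log (where a hang would show up).
latest=$(ls -t logs/ralph_*_iter_*.log | head -1)
tail -n 20 "$latest"

# Which iterations claimed completion.
grep -l '<promise>DONE</promise>' logs/ralph_*_iter_*.log
```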
RLM Mode (Experimental)
For huge inputs, you can run in RLM-style mode by providing a large context file. The agent will treat the file as external environment and only load slices on demand. This is optional and experimental — it does not implement the full recursive runtime from the paper, but it does keep all loop outputs on disk and provides tooling guidance to query them.
./scripts/ralph-loop.sh --rlm-context ./rlm/context.txt
./scripts/ralph-loop-codex.sh --rlm-context ./rlm/context.txt
RLM workspace (when enabled):
- rlm/trace/ — Prompt snapshots per iteration
- rlm/index.tsv — Index of all iterations
- logs/ — Full CLI output per iteration
Optional recursive subcalls:
./scripts/rlm-subcall.sh --query rlm/queries/q1.md
This mirrors the idea from Recursive Language Models (RLMs), which treat long prompts as external environment rather than stuffing them into the context window.
Using Codex Instead
./scripts/ralph-loop-codex.sh plan
./scripts/ralph-loop-codex.sh
File Structure
project/
├── .specify/
│ └── memory/
│ └── constitution.md # Project principles & config
├── specs/
│ └── NNN-feature-name/
│ └── spec.md # Feature specification
├── scripts/
│ ├── ralph-loop.sh # Claude Code loop
│ └── ralph-loop-codex.sh # OpenAI Codex loop
├── PROMPT_build.md # Build mode instructions
├── PROMPT_plan.md # Planning mode instructions
├── IMPLEMENTATION_PLAN.md # (OPTIONAL) Detailed task list
├── AGENTS.md # Points to constitution
└── CLAUDE.md # Points to constitution
Note: IMPLEMENTATION_PLAN.md is optional. If it doesn't exist, the agent works directly from specs.
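For a rough sense of the manual path, the skeleton above can be created in a couple of commands. The directory names come from the structure shown; the touched files are empty placeholders here, whereas a real setup would fill them with the repo's prompts and your constitution.

```shell
# Sketch of the manual skeleton (placeholder files, not real contents).
mkdir -p .specify/memory specs scripts
touch .specify/memory/constitution.md \
      PROMPT_build.md PROMPT_plan.md AGENTS.md CLAUDE.md
```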
Core Principles
1. Fresh Context Each Loop
Each iteration gets a clean context window. The agent reads files from disk each time.
2. Shared State on Disk
IMPLEMENTATION_PLAN.md persists between loops. Agent reads it to pick tasks, updates it with progress.
3. Backpressure via Tests
Tests, lints, and builds reject invalid work. Agent must fix issues before the magic phrase.
4. Completion Verification
Agent only outputs <promise>DONE</promise> when acceptance criteria are 100% verified. The bash loop enforces this.
5. Let Ralph Ralph
Trust the AI to self-identify, self-correct, and self-improve. Observe patterns and adjust prompts.
Alternative Spec Sources
During installation, you can choose:
- SpecKit Specs (default) — Markdown files in specs/
- GitHub Issues — Fetch from a repository
- Custom Source — Your own mechanism
The constitution and prompts adapt accordingly.
Agent Skills Compatibility
Ralph Wiggum follows the Agent Skills specification and is compatible with:
| Installer | Command |
|-----------|---------|
| Vercel add-skill | npx add-skill fstandhartinger/ralph-wiggum |
| OpenSkills | openskills install fstandhartinger/ralph-wiggum |
| Skillset | skillset add fstandhartinger/ralph-wiggum |
Works with: Claude Code, Cursor, Codex, Windsurf, Amp, OpenCode, and more.
Credits
This approach builds upon:
- Geoffrey Huntley's how-to-ralph-wiggum — The original methodology
Pros
- Supports iterative self-correction
- Clear acceptance criteria for tasks
- Logs all output for debugging
- Integrates with multiple AI agents
Cons
- Requires careful setup and understanding
- May need manual intervention in complex situations
- Depends on external specifications
- Experimental features may be unstable
Related Skills
Disclaimer: This content is sourced from a GitHub open-source project and is shown here for display and rating analysis only.
Copyright belongs to the original author, fstandhartinger.
