# Ralph Wiggum

**Autonomous AI coding with spec-driven development**

> ⚠️ Use at your own risk. This tool grants AI agents significant autonomy over your codebase and system. Review all changes and use in isolated environments when possible.
## Quick Start
### Install as Agent Skill (New!)

```bash
# Using Vercel's add-skill
npx add-skill fstandhartinger/ralph-wiggum

# Using OpenSkills
openskills install fstandhartinger/ralph-wiggum
```
### Full Setup with Interview

**AI agent? Start here:** read and execute `INSTALLATION.md` for a guided setup with an interactive interview.

**Human developer? Start here:** read `INSTALL.md` for manual setup instructions.
## What is Ralph Wiggum?
Ralph Wiggum (in this flavour) combines Geoffrey Huntley's original iterative bash loop with SpecKit-style specifications for fully autonomous AI-assisted software development.
### Key Features

- **Iterative Self-Correction**: each loop picks ONE task, implements it, verifies, and commits
- **Spec-Driven Development**: professional specifications with clear acceptance criteria
- **Completion Verification**: the agent only outputs `<promise>DONE</promise>` when criteria are 100% met
- **Fresh Context Each Loop**: every iteration starts with a clean context window
- **Shared State on Disk**: `IMPLEMENTATION_PLAN.md` persists between loops
## How It Works

Based on Geoffrey Huntley's methodology:
```
+---------------------------------------------------------------+
|                          RALPH LOOP                           |
+---------------------------------------------------------------+
|                                                               |
|   +------------+      +------------+      +------------+      |
|   |   Orient   | ---> | Pick Task  | ---> | Implement  |      |
|   | Read specs |      | from Plan  |      |  & Test    |      |
|   +------------+      +------------+      +------------+      |
|                                                 |             |
|         +---------------------------------------+             |
|         v                                                     |
|   +------------+      +------------+      +------------+      |
|   |   Verify   | ---> |   Commit   | ---> | Output DONE|      |
|   |  Criteria  |      |   & Push   |      | (if passed)|      |
|   +------------+      +------------+      +------------+      |
|                                                 |             |
|         +---------------------------------------+             |
|         v                                                     |
|   +-------------------------------------------------------+   |
|   |      Bash loop checks for <promise>DONE</promise>     |   |
|   |      If found: next iteration | If not: retry         |   |
|   +-------------------------------------------------------+   |
|                                                               |
+---------------------------------------------------------------+
```
### The Magic Phrase

The agent outputs `<promise>DONE</promise>` ONLY when:
- All acceptance criteria are verified
- Tests pass
- Changes are committed and pushed
The bash loop checks for this phrase. If not found, it retries.
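The check itself can be sketched in a few lines of bash. This is not the real `ralph-loop.sh`, just a minimal illustration of how a driver script can gate progression on the magic phrase; `iteration_done` is a name invented here.

```shell
#!/usr/bin/env bash
# Minimal sketch of the completion check (illustrative, not the real
# ralph-loop.sh). The driver greps each iteration's output for the
# magic phrase and retries when it is absent.
set -euo pipefail

# Returns success only if the agent's output contains the promise.
iteration_done() {
  grep -q '<promise>DONE</promise>' <<< "$1"
}

if iteration_done "All criteria verified. <promise>DONE</promise>"; then
  echo "phrase found: move on to the next iteration"
fi

if ! iteration_done "tests still failing"; then
  echo "phrase missing: retry this task"
fi
```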
## Two Modes

| Mode | Purpose | Command |
|------|---------|---------|
| `build` (default) | Pick spec/task, implement, test, commit | `./scripts/ralph-loop.sh` |
| `plan` (optional) | Create detailed task breakdown from specs | `./scripts/ralph-loop.sh plan` |
### Planning is OPTIONAL

Most projects work fine directly from specs. The agent simply:

1. Looks at the `specs/` folder
2. Picks the highest-priority incomplete spec
3. Implements it completely

Only use plan mode when you want a detailed breakdown of specs into smaller tasks.

Tip: Delete `IMPLEMENTATION_PLAN.md` to return to working directly from specs.
## Installation

### For AI Agents (Recommended)
Point your AI agent to this repo and say:
"Set up Ralph Wiggum in my project using https://github.com/fstandhartinger/ralph-wiggum"
The agent will read `INSTALLATION.md` and guide you through a lightweight, pleasant setup:

1. **Quick Setup (~1 min)**: create directories, download scripts
2. **Project Interview (~3-5 min)**: focus on your vision and goals, not technical minutiae
3. **Constitution**: create a guiding document for all future sessions
4. **Next Steps**: clear guidance on creating specs and starting Ralph
The interview prioritizes understanding what you're building and why over interrogating you about tech stack details. For existing projects, the agent can detect your stack automatically.
### Manual Setup

See `INSTALL.md` for step-by-step manual instructions.
## Usage

### 1. Create Specifications
Tell your AI what you want to build, or use `/speckit.specify` in Cursor:

```
/speckit.specify Add user authentication with OAuth
```

This creates `specs/001-user-auth/spec.md` with:
- Feature requirements
- Clear, testable acceptance criteria (critical!)
- Completion signal section
The key to good specs: Each spec needs acceptance criteria that are specific and testable. Not "works correctly" but "user can log in with Google and session persists across page reloads."
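As a concrete illustration, a spec with testable criteria can also be scaffolded by hand. The path and section names below follow the layout described in this README, but the spec content itself is just an example:

```shell
#!/usr/bin/env bash
# Scaffold an example spec with specific, testable acceptance criteria.
set -euo pipefail
mkdir -p specs/001-user-auth

cat > specs/001-user-auth/spec.md <<'EOF'
# 001: User Authentication (OAuth)

## Requirements
- Users can sign in with Google OAuth.

## Acceptance Criteria
- [ ] User can log in with Google and is redirected back to the app
- [ ] Session persists across page reloads
- [ ] Logging out clears the session

## Completion Signal
Output <promise>DONE</promise> only when every criterion is verified.
EOF

echo "wrote specs/001-user-auth/spec.md"
```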
### 2. (Optional) Run Planning Mode

```bash
./scripts/ralph-loop.sh plan
```

Creates `IMPLEMENTATION_PLAN.md` with a detailed task breakdown. This step is optional; most projects work fine directly from specs.
### 3. Run Build Mode

```bash
./scripts/ralph-loop.sh      # Unlimited iterations
./scripts/ralph-loop.sh 20   # Max 20 iterations
```
Each iteration:

- Picks the highest-priority task
- Implements it completely
- Verifies acceptance criteria
- Outputs `<promise>DONE</promise>` only if criteria pass
- Bash loop checks for the phrase
- Context cleared, next iteration starts
### Logging (All Output Captured)

Every loop run writes all output to log files in `logs/`:

- Session log: `logs/ralph_*_session_YYYYMMDD_HHMMSS.log` (entire run, including CLI output)
- Iteration logs: `logs/ralph_*_iter_N_YYYYMMDD_HHMMSS.log` (per-iteration CLI output)
- Codex last message: `logs/ralph_codex_output_iter_N_*.txt`
If something gets stuck, these logs contain the full verbose trace.
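When a run stalls, the newest session log is usually the place to start. A small triage helper, assuming only the `logs/ralph_*_session_*.log` naming pattern listed above:

```shell
#!/usr/bin/env bash
# Show the tail of the most recent session log and count error mentions.
# Assumes only the logs/ralph_*_session_*.log naming shown above.
set -uo pipefail
mkdir -p logs

latest=$(ls -t logs/ralph_*_session_*.log 2>/dev/null | head -n 1)
if [ -n "${latest:-}" ]; then
  echo "Latest session log: $latest"
  tail -n 40 "$latest"            # last 40 lines of the run
  grep -ci 'error' "$latest" || true   # count of error mentions
else
  echo "No session logs found yet."
fi
```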
## RLM Mode (Experimental)

For huge inputs, you can run in RLM-style mode by providing a large context file. The agent treats the file as an external environment and only loads slices on demand. This is optional and experimental: it does not implement the full recursive runtime from the paper, but it does keep all loop outputs on disk and provides tooling guidance to query them.

```bash
./scripts/ralph-loop.sh --rlm-context ./rlm/context.txt
./scripts/ralph-loop-codex.sh --rlm-context ./rlm/context.txt
```
RLM workspace (when enabled):

- `rlm/trace/`: prompt snapshots per iteration
- `rlm/index.tsv`: index of all iterations
- `logs/`: full CLI output per iteration

Optional recursive subcalls:

```bash
./scripts/rlm-subcall.sh --query rlm/queries/q1.md
```
This mirrors the idea from Recursive Language Models (RLMs), which treat long prompts as external environment rather than stuffing them into the context window.
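The index makes it possible to pull single iterations back into context without re-reading the whole history. The two-column TSV layout below (iteration number, then a log path) is an assumption for illustration, not a documented format:

```shell
#!/usr/bin/env bash
# Query a slice of the RLM workspace by iteration number.
# The iteration<TAB>path layout of index.tsv is assumed, not documented.
set -euo pipefail
mkdir -p rlm

# Demo rows standing in for a real index.
printf '1\tlogs/ralph_iter_1.log\n2\tlogs/ralph_iter_2.log\n' > rlm/index.tsv

# Load only the entry for iteration 2 instead of the whole history.
awk -F'\t' '$1 == "2" { print $2 }' rlm/index.tsv
```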
## Using Codex Instead

```bash
./scripts/ralph-loop-codex.sh plan
./scripts/ralph-loop-codex.sh
```
## File Structure

```
project/
├── .specify/
│   └── memory/
│       └── constitution.md      # Project principles & config
├── specs/
│   └── NNN-feature-name/
│       └── spec.md              # Feature specification
├── scripts/
│   ├── ralph-loop.sh            # Claude Code loop
│   └── ralph-loop-codex.sh      # OpenAI Codex loop
├── PROMPT_build.md              # Build mode instructions
├── PROMPT_plan.md               # Planning mode instructions
├── IMPLEMENTATION_PLAN.md       # (OPTIONAL) Detailed task list
├── AGENTS.md                    # Points to constitution
└── CLAUDE.md                    # Points to constitution
```

Note: `IMPLEMENTATION_PLAN.md` is optional. If it doesn't exist, the agent works directly from specs.
## Core Principles

### 1. Fresh Context Each Loop

Each iteration gets a clean context window. The agent reads files from disk each time.

### 2. Shared State on Disk

`IMPLEMENTATION_PLAN.md` persists between loops. The agent reads it to pick tasks and updates it with progress.

### 3. Backpressure via Tests

Tests, lints, and builds reject invalid work. The agent must fix issues before emitting the magic phrase.

### 4. Completion Verification

The agent only outputs `<promise>DONE</promise>` when acceptance criteria are 100% verified. The bash loop enforces this.

### 5. Let Ralph Ralph

Trust the AI to self-identify, self-correct, and self-improve. Observe patterns and adjust prompts.
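Backpressure and completion verification combine into a simple gate: the magic phrase is only printed after every check succeeds. A sketch, where `run_tests` and `run_lint` are placeholders for your project's real commands (`npm test`, `pytest`, and so on):

```shell
#!/usr/bin/env bash
# Backpressure sketch: emit the magic phrase only when all checks pass.
set -uo pipefail

run_tests() { true; }   # placeholder for your real test command
run_lint()  { true; }   # placeholder for your real linter

if run_tests && run_lint; then
  echo '<promise>DONE</promise>'
else
  echo 'Checks failed: keep iterating before claiming completion.' >&2
fi
```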
## Alternative Spec Sources

During installation, you can choose:

- **SpecKit Specs** (default): Markdown files in `specs/`
- **GitHub Issues**: fetch from a repository
- **Custom Source**: your own mechanism

The constitution and prompts adapt accordingly.
## Agent Skills Compatibility

Ralph Wiggum follows the Agent Skills specification and is compatible with:

| Installer | Command |
|-----------|---------|
| Vercel add-skill | `npx add-skill fstandhartinger/ralph-wiggum` |
| OpenSkills | `openskills install fstandhartinger/ralph-wiggum` |
| Skillset | `skillset add fstandhartinger/ralph-wiggum` |
Works with: Claude Code, Cursor, Codex, Windsurf, Amp, OpenCode, and more.
## Credits

This approach builds upon:

- Geoffrey Huntley's how-to-ralph-wiggum: the original methodology
## Pros
- Supports iterative self-correction
- Clear acceptance criteria for tasks
- Logs all outputs for debugging
- Integrates with various AI agents
## Cons
- Requires careful setup and understanding
- May need manual intervention in complex cases
- Dependency on external specifications
- Experimental features may be unstable
