Co-Pilot
Updated 24 days ago

ralphie

skylarbarrera/ralphie
Agent Score: 84

💡 Summary

Ralphie automates coding tasks through structured AI-driven iterations and git integration.

🎯 Target Audience

  • Software developers looking for automated coding assistance
  • Project managers overseeing development workflows
  • Tech teams aiming for improved code quality and efficiency
  • AI enthusiasts exploring autonomous coding solutions
  • Startups needing rapid prototyping of applications

🤖 AI Roast: Powerful, but the setup might scare off the impatient.

Security Analysis: Medium Risk

Risk: Medium. Review: shell/CLI command execution; outbound network access (SSRF, data egress); filesystem read/write scope and path traversal; dependency pinning and supply-chain risk. Run with least privilege and audit before enabling in production.

Ralphie

Autonomous AI coding loops.

Based on the Ralph Wiggum technique: describe what you want → AI builds it task by task → each task gets committed → come back to working code.

```shell
ralphie spec "Todo app with auth"   # Creates spec
ralphie run --all                   # Builds until done
```

Quick Start

1. Install Ralphie

```shell
npm install -g ralphie
```

2. Set up your AI provider

```shell
# Claude (default)
curl -fsSL https://anthropic.com/install-claude.sh | sh

# Or Codex
npm install -g @openai/codex && export OPENAI_API_KEY=sk-...

# Or OpenCode
npm install -g opencode-ai && opencode auth login
```

3. Build something

```shell
# Create a spec
ralphie spec "REST API with JWT auth"

# Run the loop
ralphie run --all

git log --oneline  # See what was built
```

What happens next? Ralphie generates a structured spec with research and analysis, then executes task by task with fresh context each iteration. Progress lives in git commits; the AI can fail, and the loop restarts clean.

How It Works

Each iteration:

  1. Fresh context (no accumulated confusion)
  2. Reads spec → picks next pending task
  3. Implements, tests, commits
  4. Exits → loop restarts clean
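The steps above can be sketched as a small shell loop. This is a conceptual sketch only: ralphie's real loop is internal to the tool, and `run_until_done` / `iterate` are hypothetical names standing in for one fresh-context agent run.

```shell
#!/bin/sh
# Conceptual sketch of the iterate-until-done loop (not ralphie's real code).
# "iterate" stands in for one fresh-context run: read the spec, do one task,
# commit, then print "done" if no pending tasks remain, else "pending".
run_until_done() {
  iterate="$1"
  while :; do
    out=$("$iterate") || true     # a failed iteration is fine: progress lives in git
    [ "$out" = "done" ] && break  # stop once the spec reports complete
  done
}
```

Because each iteration exits and the loop restarts, a crash never corrupts in-memory state; the next run sees only committed work.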

The insight: Progress lives in git, not the LLM's context. The AI can fail—next iteration starts fresh and sees only committed work.

What makes Ralphie different: Structured specs with task IDs, status tracking, size budgeting, and verify commands. The AI knows exactly what to build, how to check it worked, and when it's done. No ambiguity, no drift.
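One way to see why per-task verify commands remove ambiguity: the verify command can gate the commit itself, so unverified work never lands in git. A minimal sketch, assuming a hypothetical `verify_and_commit` helper (ralphie's actual gating logic may differ):

```shell
#!/bin/sh
# verify_and_commit: run a task's Verify command and commit only if it passes.
# Hypothetical sketch, not ralphie's real implementation.
verify_and_commit() {
  task_id="$1"
  verify_cmd="$2"
  if sh -c "$verify_cmd"; then
    git add -A && git commit -m "$task_id: verified"   # progress lands in git
  else
    echo "$task_id failed verification; nothing committed" >&2
    return 1
  fi
}
```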

Key Features

Compound Engineering - Each failure makes the system better:

  • Research phase: Fetches framework-specific best practices from skills.sh (React, Next.js, Expo, etc.) and web research
  • Dynamic tool selection: Discovers best-in-class libraries for your stack (not hardcoded recommendations)
  • Multi-agent review: Security, performance, architecture checks before implementation
  • Learnings system: Captures failure→fix transitions as reusable knowledge
  • Quality enforcement: >80% test coverage mandatory, typed interfaces required, security by default
  • Debug logs: Full audit trail in .ralphie/logs/ viewable with ralphie logs

Senior Engineer Output - Code quality built-in:

  • Research recommends current best tools (Zod, bcrypt, expo-auth-session)
  • Specs include explicit quality requirements (tests, security, architecture)
  • Test validator blocks task completion without >80% coverage
  • Clean, maintainable code with proper separation of concerns
  • See Code Quality Standards for details

Inspired by EveryInc/compound-engineering-plugin. See Architecture docs for details.

Commands

| Command | Description |
|---------|-------------|
| `ralphie spec "desc"` | Generate spec autonomously with research + analysis |
| `ralphie spec --skip-research` | Skip deep research phase |
| `ralphie spec --skip-analyze` | Skip SpecFlow analysis phase |
| `ralphie run` | Run one iteration |
| `ralphie run -n 5` | Run 5 iterations |
| `ralphie run --all` | Run until spec complete |
| `ralphie run --review` | Run multi-agent review before iteration |
| `ralphie run --force` | Override P1 blocking (use with `--review`) |
| `ralphie run --greedy` | Multiple tasks per iteration |
| `ralphie run --headless` | JSON output for CI/CD |
| `ralphie init` | Add to existing project |
| `ralphie validate` | Check spec format |
| `ralphie status` | Show progress of active spec |
| `ralphie spec-list` | List active and completed specs |
| `ralphie logs` | View iteration logs (with `--tail`, `--filter`) |
| `ralphie archive` | Move completed spec to archive |

Spec Format

Ralphie works from structured specs in .ralphie/specs/active/:

```markdown
# My Project

Goal: Build a REST API with authentication

## Tasks

### T001: Set up Express with TypeScript
- Status: pending
- Size: M

**Deliverables:**
- Initialize npm project with TypeScript
- Configure Express server
- Add basic health check endpoint

**Verify:** `npm run build && npm test`

---

### T002: Create User model
- Status: pending
- Size: S

**Deliverables:**
- Define User interface
- Add bcrypt password hashing

**Verify:** `npm test`
```

Tasks transition from `pending` → `in_progress` → `passed`/`failed`. See Spec Guide for best practices.

Project Structure

After ralphie init, you'll have:

  • .ralphie/specs/active/ - Generated specs with task tracking
  • .ralphie/logs/ - Timestamped logs (research, spec generation, iterations)
  • .ralphie/learnings/ - Captured failure→fix knowledge
  • .ralphie/state.txt - Iteration progress log

See Architecture docs for complete structure and file formats.

Troubleshooting

| Problem | Solution |
|---------|----------|
| `command not found: ralphie` | `npm install -g ralphie` |
| `command not found: claude` | `export PATH="$HOME/.local/bin:$PATH"` |
| Missing `ANTHROPIC_API_KEY` | `export ANTHROPIC_API_KEY=sk-ant-...` (add to `.zshrc`) |
| Missing `OPENAI_API_KEY` | `export OPENAI_API_KEY=sk-...` (add to `.zshrc`) |
| Stuck on same task | Check task status; run `ralphie validate` |
| No spec found | `ralphie spec "description"` to create one |
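The missing-CLI and missing-key rows in the table can be caught up front with a small preflight check. A sketch, assuming a hypothetical `check_provider` helper; the CLI names and environment variables are the ones listed above:

```shell
#!/bin/sh
# check_provider: report which AI provider looks configured, or "none".
# Hypothetical helper; checks the CLI/env-var combinations from the table.
check_provider() {
  if command -v claude >/dev/null 2>&1 || [ -n "${ANTHROPIC_API_KEY:-}" ]; then
    echo claude
  elif command -v codex >/dev/null 2>&1 || [ -n "${OPENAI_API_KEY:-}" ]; then
    echo codex
  elif command -v opencode >/dev/null 2>&1; then
    echo opencode
  else
    echo none
  fi
}
```

Running this before `ralphie run` tells you immediately which provider (if any) a loop would use.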

Documentation

Requirements

  • Node.js 18+
  • Claude Code CLI, OpenAI Codex CLI, or OpenCode CLI
  • Git

License

MIT

5-Dim Analysis
  • Clarity: 9/10
  • Novelty: 8/10
  • Utility: 8/10
  • Completeness: 9/10
  • Maintainability: 8/10
Pros & Cons

Pros

  • Automates coding tasks effectively
  • Integrates well with git for version control
  • Ensures high code quality with testing requirements
  • Supports multiple AI providers for flexibility

Cons

  • Dependency on external AI providers
  • Potential for confusion in complex tasks
  • Requires initial setup and configuration
  • Learning curve for new users

Related Skills

pytorch (Code Lib, 92/100)

“It's the Swiss Army knife of deep learning, but good luck figuring out which of the 47 installation methods is the one that won't break your system.”

agno (Code Lib, 90/100)

“It promises to be the Kubernetes for agents, but let's see if developers have the patience to learn yet another orchestration layer.”

nuxt-skills (Co-Pilot, 90/100)

“It's essentially a well-organized cheat sheet that turns your AI assistant into a Nuxt framework parrot.”

Disclaimer: This content is sourced from GitHub open source projects for display and rating purposes only.

Copyright belongs to the original author skylarbarrera.