helloagents (hellowind777/helloagents)
Co-Pilot · by hellowind777 · 0.4k · Agent score: 80 · Updated 25 days ago
💡 Summary

HelloAGENTS is a structured workflow system that ensures coding tasks are evaluated, implemented, and verified.

🎯 Who It's For

  • Software developers seeking reliable output
  • Project managers who need consistent documentation
  • Development teams aiming for traceable changes
  • Quality assurance professionals
  • Technical writers involved in project documentation

🤖 AI Quip: It's like GPS for coding, just don't expect it to drive the car!

Security Analysis: Serious Risk

The README implies potential risks such as executing shell commands and managing filesystem access. To mitigate them, ensure strict input validation and restrict command-execution capabilities.

HelloAGENTS

An intelligent workflow system that keeps going: evaluate → implement → verify.





🎯 Why HelloAGENTS?

You know the pattern: the assistant gives a good analysis… then stops. Or it edits code but forgets the docs. Or it “finishes” without running anything.

HelloAGENTS is a structured workflow system (routing + stages + acceptance gates) that pushes the work through to a verifiable end.

| Challenge | Without HelloAGENTS | With HelloAGENTS |
|---|---|---|
| Inconsistent outputs | Depends on prompt quality | Unified output shell + deterministic stages |
| Stops too early | “Here’s what you should do…” | Keeps going: implement → test → validate |
| No quality gates | Manual review required | Stage / Gate / Flow acceptance |
| Context drift | Decisions get lost | State variables + solution packages |
| Risky commands | Easy to do damage | EHRB detection + workflow escalation |

💡 Best For

  • Coders who want “done” to mean “verified”
  • Teams that need consistent format and traceable changes
  • Projects where docs are part of the deliverable

⚠️ Not For

  • ❌ One-off snippets (a normal prompt is faster)
  • ❌ Projects where you can’t keep outputs in Git
  • ❌ Tasks that require hard guarantees (still review before production)

📊 Data That Speaks

No made-up “50% faster” claims here—just things you can verify in this repo:

| Item | Value | Where to verify |
|---|---:|---|
| Routing layers | 3 | AGENTS.md / CLAUDE.md (Context → Tools → Intent) |
| Workflow stages | 4 | Evaluate → Analyze → Design → Develop |
| Execution modes | 3 | Tweak / Lite / Standard |
| Commands | 12 | {BUNDLE_DIR}/skills/helloagents/SKILL.md |
| Reference modules | 23 | {BUNDLE_DIR}/skills/helloagents/references/ |
| Automation scripts | 7 | {BUNDLE_DIR}/skills/helloagents/scripts/ |
| Bundles in this repo | 5 | Codex CLI/, Claude Code/, Gemini CLI/, Grok CLI/, Qwen CLI/ |

🔁 Before & After

Sometimes the difference is easier to feel than to explain. Here’s a concrete “before vs after” snapshot:

| | Without HelloAGENTS | With HelloAGENTS |
|---|---|---|
| Start | You jump into implementation quickly | You start by scoring requirements and filling gaps |
| Delivery | You assemble the steps manually | The workflow keeps pushing to “verified done” |
| Docs | Often forgotten | Treated as a first-class deliverable |
| Safety | Risky ops can slip through | EHRB detection escalates risky actions |
| Repeatability | Depends on the prompt | Same stages + gates, every time |

Now let’s make it tangible. Below is a real “before/after” demo snapshot (Snake game generated with/without a structured workflow):

Without HelloAGENTS: it works, but you’re still manually driving the process.

With HelloAGENTS: more complete delivery, clearer controls, and verification steps baked in.

And here’s what the Evaluate stage looks like in practice: it asks the “boring but necessary” questions (platform, delivery form, controls, acceptance criteria) before writing code.

In plain words, you’ll typically be asked to clarify:

  • runtime target (browser / desktop / CLI)
  • delivery form (single file / repo / packaged build)
  • control scheme
  • rules and difficulty preferences
  • acceptance criteria (screen size, scoring, audio, obstacles, etc.)

✨ Features

Let’s be practical—here’s what you get.

🧭 3-layer intelligent routing

  • Continues the same task across turns
  • Detects tool calls (SKILL/MCP/plugins) vs internal workflow
  • Chooses tweak / lite / standard execution based on complexity

Benefit: less “prompt babysitting”

📚 4-stage workflow engine

  • Evaluate → Analyze → Design → Develop
  • Clear entry/exit gates
  • Keeps artifacts as solution packages

Benefit: repeatable delivery, not lucky outputs
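A solution package is a directory of stage artifacts kept alongside the code. As a purely hypothetical example (the file names below are illustrative; the actual layout is defined by the skill itself), a package might look like:

```
plan/
└── 202501011200_snake_game/   # hypothetical timestamped package
    ├── analysis.md            # output of the Analyze stage
    ├── design.md              # output of the Design stage
    └── acceptance.md          # flow-level acceptance summary
```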

⚡ 3-layer acceptance

  • Stage-level checks
  • Inter-stage gates (e.g., validate solution package)
  • Flow-level acceptance summary

Benefit: you can trust the result more
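As a mental model, the three layers behave like short-circuited checks: a failure at any layer blocks the flow. A toy bash sketch (the function names are illustrative; the real gates live inside the workflow, not in shell):

```shell
#!/bin/sh
# Toy model of the three acceptance layers. Any failing layer
# short-circuits the chain and blocks progression.
stage_check() { echo "stage: checks passed"; }
gate_check()  { echo "gate: solution package validated"; }
flow_check()  { echo "flow: acceptance summary emitted"; }

stage_check && gate_check && flow_check || echo "blocked: fix before proceeding"
```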

🛡️ EHRB safety detection

  • Keyword scan + semantic analysis
  • Escalates to confirmation when risky
  • Flags destructive ops (e.g., rm -rf, force push)

Benefit: fewer “oops” moments
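The README doesn’t publish the EHRB internals here, but the keyword-scan half can be pictured as a simple pattern screen over the command text. A minimal bash sketch (the `is_risky` function and pattern list are illustrative assumptions, not the shipped implementation in `{BUNDLE_DIR}/skills/helloagents/scripts/`):

```shell
#!/bin/sh
# Illustrative keyword screen, NOT the shipped EHRB implementation.
# Exit status 0 means the command matched a destructive pattern.
is_risky() {
  printf '%s' "$1" | grep -Eq 'rm -rf|push --force|mkfs|dd if='
}

for cmd in "rm -rf build/" "git push --force origin main" "ls -la"; do
  if is_risky "$cmd"; then
    echo "ESCALATE: $cmd"   # would trigger a confirmation step
  else
    echo "ok: $cmd"
  fi
done
```

The real system additionally applies semantic analysis, so a keyword list like this should be read as the first line of defense only.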

🚀 Quick Start

This repo ships multiple ready-to-copy bundles (one per AI CLI):

Codex CLI, Claude Code, Gemini CLI, Grok CLI, Qwen CLI.

1) Clone the repo

```bash
git clone https://github.com/hellowind777/helloagents.git
cd helloagents
```

2) Install (placeholder-based)

Because every CLI stores its config in a different place, the README uses placeholders.

First, pick your bundle parameters:

| Your CLI | BUNDLE_DIR | CONFIG_FILE |
|---|---|---|
| Codex CLI | Codex CLI | AGENTS.md |
| Claude Code | Claude Code | CLAUDE.md |
| Gemini CLI | Gemini CLI | GEMINI.md |
| Grok CLI | Grok CLI | GROK.md |
| Qwen CLI | Qwen CLI | QWEN.md |

Then copy both the config file and the skills/helloagents/ folder into your CLI config root.

macOS / Linux (bash)

```bash
CLI_CONFIG_ROOT="..."
BUNDLE_DIR="Codex CLI"
CONFIG_FILE="AGENTS.md"

mkdir -p "$CLI_CONFIG_ROOT/skills"
cp -f "$BUNDLE_DIR/$CONFIG_FILE" "$CLI_CONFIG_ROOT/$CONFIG_FILE"
cp -R "$BUNDLE_DIR/skills/helloagents" "$CLI_CONFIG_ROOT/skills/helloagents"
```

Windows (PowerShell)

```powershell
$CLI_CONFIG_ROOT = "..."
$BUNDLE_DIR = "Codex CLI"
$CONFIG_FILE = "AGENTS.md"

New-Item -ItemType Directory -Force "$CLI_CONFIG_ROOT\skills" | Out-Null
Copy-Item -Force "$BUNDLE_DIR\$CONFIG_FILE" "$CLI_CONFIG_ROOT\$CONFIG_FILE"
Copy-Item -Recurse -Force "$BUNDLE_DIR\skills\helloagents" "$CLI_CONFIG_ROOT\skills\helloagents"
```
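After copying, a quick filesystem check catches a wrong destination early. A small bash sketch, demonstrated against a temporary directory so it is self-contained; in practice you would point `CLI_CONFIG_ROOT` and `CONFIG_FILE` at the real values from the table above:

```shell
#!/bin/sh
# Demo against a temp dir; substitute your real CLI_CONFIG_ROOT
# and CONFIG_FILE when checking an actual install.
CLI_CONFIG_ROOT="$(mktemp -d)"
CONFIG_FILE="AGENTS.md"

# Simulate the copy step from the install instructions.
mkdir -p "$CLI_CONFIG_ROOT/skills/helloagents"
touch "$CLI_CONFIG_ROOT/$CONFIG_FILE" "$CLI_CONFIG_ROOT/skills/helloagents/SKILL.md"

# The actual check: both artifacts must exist in the config root.
for f in "$CONFIG_FILE" "skills/helloagents/SKILL.md"; do
  if [ -f "$CLI_CONFIG_ROOT/$f" ]; then echo "OK: $f"; else echo "MISSING: $f"; fi
done
```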

3) Verify it works

In your CLI, run:

  • /helloagents or $helloagents

Expected: a welcome message that starts with something like:

💡【HelloAGENTS】- 技能已激活

4) Start using it

  • Try ~help to see all commands
  • Or just describe what you want; the router will pick the workflow

🔧 How It Works

```mermaid
flowchart TD
    Start([User input / 用户输入]) --> L1{Layer 1: Context / 上下文}
    L1 -->|Continue / 继续| Continue[Continue task / 继续任务]
    L1 -->|New request / 新请求| L2{Layer 2: Tools / 工具}
    L2 -->|External tool / 外部工具| Tool[Run tool + shell wrap / 执行工具+Shell包装]
    L2 -->|No tool / 无工具| L3{Layer 3: Intent / 意图}
    L3 -->|Q&A / 问答| Answer[Direct answer / 直接回答]
    L3 -->|Change / 改动| Eval[Evaluate / 需求评估]
    Eval -->|Score >= 7| Complexity{Complexity / 复杂度}
    Eval -->|Score < 7| Clarify[Clarify / 追问补充]
    Complexity -->|Tweak / 微调| Tweak[Tweak mode / 微调模式]
    Complexity -->|Lite / 轻量| Analyze[Analyze / 项目分析]
    Complexity -->|Standard / 标准| Analyze
    Analyze --> Design["Design / 方案设计(方案包)"]
    Design --> Develop["Develop / 开发实施(实现+测试)"]
    Develop --> Done[✅ Done / 完成 + acceptance / 验收摘要]
    style Eval fill:#e3f2fd
    style Analyze fill:#fff3e0
    style Design fill:#ede9fe
    style Develop fill:#dcfce7
    style Done fill:#16a34a,color:#fff
```

Key artifacts you’ll see in real projects:

  • plan/YYYYMMDDHHMM_<feature>/ solution package
Five-Dimension Analysis

  • Clarity: 8/10
  • Innovation: 8/10
  • Practicality: 9/10
  • Completeness: 8/10
  • Maintainability: 7/10

Pros & Cons

Pros

  • Structured workflow enhances reliability
  • Automatic verification reduces manual errors
  • Clear documentation throughout the process

Cons

  • May not suit one-off coding tasks
  • Requires Git for output management
  • Human review is still needed before production


Disclaimer: this content comes from a GitHub open-source project and is shown here for display and rating analysis only.

Copyright belongs to the original author, hellowind777.