💡 Summary
LangGraphGo is a Go library for building AI workflows, with advanced state management and parallel execution capabilities.
🎯 Who It's For
🤖 AI Snark: "Looks like a strong contender, but don't let the configuration scare people away."
Risk: Medium. Suggested checks: whether it executes shell/command-line instructions; whether it makes outbound network requests (SSRF/data exfiltration); how API keys/tokens are obtained and stored, and their risk of leaking; the scope of file reads/writes and path-traversal exposure. Run with least privilege, and audit the code and its dependencies before enabling it in production.
LangGraphGo

Abbreviated as lango (Chinese nickname: 懒狗, "lazy dog")
Website: http://lango.rpcx.io
🔀 Forked from paulnegz/langgraphgo - Enhanced with streaming, visualization, observability, and production-ready features.
This fork aims for feature parity with the Python LangGraph library, adding support for parallel execution, persistence, advanced state management, pre-built agents, and human-in-the-loop workflows.
🌐 Websites Built with LangGraphGo
Real-world applications built with LangGraphGo:
- Insight - An AI-powered knowledge management and insight generation platform that uses LangGraphGo to build intelligent analysis workflows, helping users extract key insights from massive amounts of information.
- NoteX - An intelligent note-taking and knowledge organization tool that leverages AI for automatic categorization, tag extraction, and content association, making knowledge management more efficient.
📦 Installation
```shell
go get github.com/smallnest/langgraphgo
```
Note: This repository uses Git submodules for the showcases directory. When cloning, use one of the following methods:
```shell
# Method 1: Clone with submodules
git clone --recurse-submodules https://github.com/smallnest/langgraphgo

# Method 2: Clone first, then initialize submodules
git clone https://github.com/smallnest/langgraphgo
cd langgraphgo
git submodule update --init --recursive
```
🚀 Features
- Core Runtime:
  - Parallel Execution: Concurrent node execution (fan-out) with thread-safe state merging.
  - Runtime Configuration: Propagate callbacks, tags, and metadata via `RunnableConfig`.
  - Generic Types: Type-safe state management with generic `StateGraph` implementations.
  - LangChain Compatible: Works seamlessly with `langchaingo`.
- Persistence & Reliability:
  - Checkpointers: Redis, Postgres, SQLite, and File implementations for durable state.
  - File Checkpointing: Lightweight file-based checkpointing without external dependencies.
  - State Recovery: Pause and resume execution from checkpoints.
- Advanced Capabilities:
  - State Schema: Granular state updates with custom reducers (e.g., `AppendReducer`).
  - Smart Messages: Intelligent message merging with ID-based upserts (`AddMessages`).
  - Command API: Dynamic control flow and state updates directly from nodes.
  - Ephemeral Channels: Temporary state values that clear automatically after each step.
  - Subgraphs: Compose complex agents by nesting graphs within graphs.
  - Enhanced Streaming: Real-time event streaming with multiple modes (`updates`, `values`, `messages`).
  - Pre-built Agents: Ready-to-use `ReAct`, `CreateAgent`, and `Supervisor` agent factories.
  - Programmatic Tool Calling (PTC): The LLM generates code that calls tools programmatically, reducing latency and token usage by 10x.
- Developer Experience:
  - Visualization: Export graphs to Mermaid, DOT, and ASCII with conditional edge support.
  - Human-in-the-loop (HITL): Interrupt execution, inspect state, edit history (`UpdateState`), and resume.
  - Observability: Built-in tracing and metrics support.
  - Tools: Integrated `Tavily` and `Exa` search tools.
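The custom-reducer idea above (e.g. `AppendReducer`) boils down to a function that merges an update into a state key's current value instead of overwriting it. Here is a minimal standalone sketch of that concept; the types and names below are illustrative, not the library's code:

```go
package main

import "fmt"

// Reducer merges an incoming update into the current value of one state key.
type Reducer[T any] func(current, update []T) []T

// AppendReducer accumulates updates rather than replacing the existing value.
func AppendReducer[T any](current, update []T) []T {
	return append(current, update...)
}

func main() {
	var messages []string
	messages = AppendReducer(messages, []string{"hello"})
	messages = AppendReducer(messages, []string{"world", "!"})
	fmt.Println(messages) // [hello world !]
}
```

With a plain overwrite, the second update would erase the first; a reducer lets parallel branches and successive steps each contribute to the same key.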
🎯 Quick Start
```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/smallnest/langgraphgo/graph"
	"github.com/tmc/langchaingo/llms/openai"
)

func main() {
	ctx := context.Background()

	model, err := openai.New()
	if err != nil {
		log.Fatal(err)
	}

	// 1. Create Graph
	g := graph.NewStateGraph[map[string]any]()

	// 2. Add Nodes
	g.AddNode("generate", "generate", func(ctx context.Context, state map[string]any) (map[string]any, error) {
		input, ok := state["input"].(string)
		if !ok {
			return nil, fmt.Errorf("invalid input")
		}
		response, err := model.Call(ctx, input)
		if err != nil {
			return nil, err
		}
		state["output"] = response
		return state, nil
	})

	// 3. Define Edges
	g.AddEdge("generate", graph.END)
	g.SetEntryPoint("generate")

	// 4. Compile
	runnable, err := g.Compile()
	if err != nil {
		log.Fatal(err)
	}

	// 5. Invoke
	initialState := map[string]any{
		"input": "Hello, LangGraphGo!",
	}
	result, err := runnable.Invoke(ctx, initialState)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
}
```
📚 Examples
This project includes 90+ comprehensive examples organized into categories:
Featured Examples
- ReAct Agent - Reason and Action agent using tools
- RAG Pipeline - Complete retrieval-augmented generation
- Chat Agent - Multi-turn conversation with session management
- Supervisor - Multi-agent orchestration
- Tree of Thoughts - Search-based reasoning with multiple solution paths
- Planning Agent - Dynamic workflow plan creation
- PEV Agent - Plan-Execute-Verify with self-correction
- Reflection Agent - Iterative improvement through self-reflection
- Mental Loop - Simulator-in-the-loop for safe action testing
- Reflexive Metacognitive Agent - Self-aware agent with explicit capabilities model
Example Categories
- Basic Concepts - Simple LLM integration, LangChain compatibility
- State Management - State schema, custom reducers, smart messages
- Graph Structure - Conditional routing, subgraphs, generics
- Parallel Execution - Fan-out/fan-in with state merging
- Streaming & Events - Real-time updates, listeners, logging
- Persistence - Checkpointing with file, memory, databases
- Human-in-the-Loop - Interrupts, approval, time travel
- Pre-built Agents - ReAct, Supervisor, Chat, Planning agents
- Programmatic Tool Calling - PTC for 10x latency reduction
- Memory - Buffer, sliding window, summarization strategies
- RAG - Vector stores, GraphRAG with FalkorDB
- Tools & Integrations - Search tools, GoSkills, MCP
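The sliding-window memory strategy listed above keeps only the most recent messages when building the prompt, bounding token usage in long conversations. A standalone sketch of the idea (the function name is illustrative, not the library's API):

```go
package main

import "fmt"

// slidingWindow keeps only the most recent n messages of the history.
func slidingWindow(messages []string, n int) []string {
	if len(messages) <= n {
		return messages
	}
	return messages[len(messages)-n:]
}

func main() {
	history := []string{"m1", "m2", "m3", "m4", "m5"}
	fmt.Println(slidingWindow(history, 3)) // [m3 m4 m5]
}
```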
🔧 Key Concepts
Parallel Execution
LangGraphGo automatically executes nodes in parallel when they share the same starting node. Results are merged using the graph's state merger or schema.
```go
g.AddEdge("start", "branch_a")
g.AddEdge("start", "branch_b")
// branch_a and branch_b run concurrently
```
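Conceptually, a fan-out step runs each branch against the same input state and merges the partial results under a lock. The following is a standalone sketch of that mechanic, not the library's implementation; in the real runtime, merging goes through the graph's state merger or schema reducers rather than a plain last-writer-wins map write:

```go
package main

import (
	"fmt"
	"sync"
)

// State is the shared graph state passed between nodes.
type State = map[string]any

// runParallel executes every branch concurrently against the same input
// state and merges the partial updates under a mutex.
func runParallel(state State, branches map[string]func(State) State) State {
	var (
		mu sync.Mutex
		wg sync.WaitGroup
	)
	merged := State{}
	for k, v := range state {
		merged[k] = v
	}
	for name, branch := range branches {
		wg.Add(1)
		go func(name string, branch func(State) State) {
			defer wg.Done()
			update := branch(state) // every branch sees the same input state
			mu.Lock()
			defer mu.Unlock()
			for k, v := range update {
				merged[k] = v // last-writer-wins; a schema reducer could customize this
			}
		}(name, branch)
	}
	wg.Wait()
	return merged
}

func main() {
	out := runParallel(State{"input": "x"}, map[string]func(State) State{
		"branch_a": func(s State) State { return State{"a": 1} },
		"branch_b": func(s State) State { return State{"b": 2} },
	})
	fmt.Println(out["a"], out["b"]) // 1 2
}
```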
Human-in-the-loop (HITL)
Pause execution to allow for human approval or input.
```go
config := &graph.Config{
	InterruptBefore: []string{"human_review"},
}

// Execution stops before the "human_review" node
state, err := runnable.InvokeWithConfig(ctx, input, config)

// Resume execution
resumeConfig := &graph.Config{
	ResumeFrom: []string{"human_review"},
}
runnable.InvokeWithConfig(ctx, state, resumeConfig)
```
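The interrupt/resume mechanics can be pictured as running nodes up to the interrupt point, saving where execution stopped, and continuing from there after a human has inspected or edited the state. A standalone sketch with illustrative names (not the library's API):

```go
package main

import "fmt"

// Node is one step of a linear pipeline.
type Node struct {
	Name string
	Run  func(map[string]any) map[string]any
}

// runUntil executes nodes in order and stops *before* the named node,
// returning the state so far and the index to resume from.
func runUntil(nodes []Node, state map[string]any, interruptBefore string) (map[string]any, int) {
	for i, n := range nodes {
		if n.Name == interruptBefore {
			return state, i
		}
		state = n.Run(state)
	}
	return state, len(nodes)
}

// resume continues execution from the saved index.
func resume(nodes []Node, state map[string]any, from int) map[string]any {
	for _, n := range nodes[from:] {
		state = n.Run(state)
	}
	return state
}

func main() {
	nodes := []Node{
		{"draft", func(s map[string]any) map[string]any { s["draft"] = "v1"; return s }},
		{"human_review", func(s map[string]any) map[string]any { return s }},
		{"publish", func(s map[string]any) map[string]any { s["published"] = true; return s }},
	}
	state, idx := runUntil(nodes, map[string]any{}, "human_review")
	state["approved"] = true // the human edits the paused state
	state = resume(nodes, state, idx)
	fmt.Println(state["published"]) // true
}
```

In the library, the saved position and state live in a checkpointer instead of local variables, which is what allows resuming across process restarts.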
Pre-built Agents
Quickly create complex agents using factory functions.
```go
// Create a ReAct agent
reactAgent, err := prebuilt.CreateReactAgent(model, tools)

// Create an agent with options
agent, err := prebuilt.CreateAgent(model, tools,
	prebuilt.WithSystemMessage("System prompt"))

// Create a Supervisor agent
supervisor, err := prebuilt.CreateSupervisor(model, agents)
```
Programmatic Tool Calling (PTC)
Generate code that calls tools directly, reducing API round-trips and token usage.
```go
// Create a PTC agent
agent, err := ptc.CreatePTCAgent(ptc.PTCAgentConfig{
	Model:         model,
	Tools:         toolList,
	Language:      ptc.LanguagePython, // or ptc.LanguageGo
	ExecutionMode: ptc.ModeDirect,     // subprocess (default) or ptc.ModeServer
	MaxIterations: 10,
})

// The LLM generates code that calls tools programmatically
result, err := agent.Invoke(ctx, initialState)
```
See the PTC README for detailed documentation.
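The round-trip saving behind PTC can be made concrete by counting model calls: with classic tool calling, every tool invocation costs one model round-trip, while with PTC the model emits a single program that invokes all the tools locally. The counting sketch below is purely illustrative; the function names are not part of the library:

```go
package main

import "fmt"

// classicToolCalling: the model is consulted before each tool call,
// plus once more for the final answer.
func classicToolCalling(numToolCalls int) (modelCalls int) {
	for i := 0; i < numToolCalls; i++ {
		modelCalls++ // model picks the next tool
		// ... the tool runs locally ...
	}
	modelCalls++ // model produces the final answer
	return modelCalls
}

// programmaticToolCalling: the model emits one program that calls
// every tool locally, so only one round-trip is needed.
func programmaticToolCalling(numToolCalls int) (modelCalls int) {
	modelCalls++ // model emits one program
	// ... the program calls all numToolCalls tools locally ...
	return modelCalls
}

func main() {
	fmt.Println(classicToolCalling(10), programmaticToolCalling(10)) // 11 1
}
```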
🎨 Graph Visualization
```go
exporter := runnable.GetGraph()
fmt.Println(exporter.DrawMermaid()) // generates a Mermaid flowchart
```
📈 Performance
- Graph Operations: ~14-94μs depending on format
- Tracing Overhead: ~4μs per execution
- Event Processing: 1000+ events/second
- Streaming Latency: <100ms
🧪 Testing
```shell
go test ./... -v
```
Contributors
This project is open to contributions from the community.
Pros
- Supports parallel execution, improving efficiency.
- Provides advanced state-management features.
- Integrates well with existing AI tooling.
Cons
- The learning curve may be steep for beginners.
- Documentation could be more detailed.
- The dependency on Go may limit the user base.
Disclaimer: This content is sourced from an open-source GitHub project and is provided for showcase and scoring purposes only.
Copyright belongs to the original author, smallnest.
