Co-Pilot / Assistive
Updated 25 days ago

langfuse (langfuse/langfuse) · 21.4k stars
Agent score: 80

💡 Summary

Langfuse is an open-source platform for collaboratively developing and monitoring AI applications.

🎯 Who It's For

AI developers, data scientists, technical product managers, DevOps engineers, research teams

🤖 AI Quip: It looks like a strong contender, but don't let the configuration put people off.

Security Analysis: Medium Risk

Risk: Medium. Recommended checks: whether it executes shell/command-line instructions; whether it makes outbound network requests (SSRF / data exfiltration); dependency pinning and supply-chain risk. Run with least privilege, and audit the code and dependencies before enabling it in production.

Langfuse is an open source LLM engineering platform. It helps teams collaboratively develop, monitor, evaluate, and debug AI applications. Langfuse can be self-hosted in minutes and is battle-tested.

Langfuse Overview Video

✨ Core Features

  • LLM Application Observability: Instrument your app and start ingesting traces to Langfuse, thereby tracking LLM calls and other relevant logic in your app such as retrieval, embedding, or agent actions. Inspect and debug complex logs and user sessions. Try the interactive demo to see this in action.

  • Prompt Management helps you centrally manage, version control, and collaboratively iterate on your prompts. Thanks to strong caching on server and client side, you can iterate on prompts without adding latency to your application.

  • Evaluations are key to the LLM application development workflow, and Langfuse adapts to your needs. It supports LLM-as-a-judge, user feedback collection, manual labeling, and custom evaluation pipelines via APIs/SDKs.

  • Datasets enable test sets and benchmarks for evaluating your LLM application. They support continuous improvement, pre-deployment testing, structured experiments, flexible evaluation, and seamless integration with frameworks like LangChain and LlamaIndex.

  • LLM Playground is a tool for testing and iterating on your prompts and model configurations, shortening the feedback loop and accelerating development. When you see a bad result in tracing, you can directly jump to the playground to iterate on it.

  • Comprehensive API: The API is frequently used to power bespoke LLMOps workflows built on the building blocks Langfuse provides. An OpenAPI spec, a Postman collection, and typed SDKs for Python and JS/TS are available.
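To make the observability feature above concrete, here is a minimal, hypothetical sketch of the trace/span data model such a backend ingests: one trace per request, one span per step (retrieval, LLM call, agent action). This is illustrative pseudostructure, not the Langfuse SDK API; all class and field names are assumptions.

```python
import time
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Span:
    """One timed step inside a trace (hypothetical structure)."""
    name: str
    trace_id: str
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    start: float = field(default_factory=time.monotonic)
    end: Optional[float] = None
    metadata: dict = field(default_factory=dict)

class Trace:
    """One trace per user request; spans record each sub-step."""

    def __init__(self, name: str):
        self.id = uuid.uuid4().hex
        self.name = name
        self.spans: list = []

    def span(self, name: str, **metadata) -> Span:
        s = Span(name=name, trace_id=self.id, metadata=metadata)
        self.spans.append(s)
        return s

    def finish(self, span: Span) -> None:
        span.end = time.monotonic()

# Usage: instrument retrieval and generation steps of one request.
trace = Trace("answer-question")
retrieval = trace.span("retrieval", query="docs about pricing")
trace.finish(retrieval)
generation = trace.span("llm-call", model="example-model")
trace.finish(generation)
```

The real SDKs automate this instrumentation and ship the collected spans to the Langfuse backend for inspection and debugging.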

📦 Deploy Langfuse

Langfuse Cloud

Managed deployment by the Langfuse team, with a generous free tier; no credit card required.

Self-Host Langfuse

Run Langfuse on your own infrastructure:

  • Local (docker compose): Run Langfuse on your own machine in 5 minutes using Docker Compose.

    # Get a copy of the latest Langfuse repository
    git clone https://github.com/langfuse/langfuse.git
    cd langfuse

    # Run the langfuse docker compose
    docker compose up
  • VM: Run Langfuse on a single Virtual Machine using Docker Compose.

  • Kubernetes (Helm): Run Langfuse on a Kubernetes cluster using Helm. This is the preferred production deployment.

  • Terraform Templates: AWS, Azure, GCP

See self-hosting documentation to learn more about architecture and configuration options.
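After bringing a self-hosted instance up, a quick liveness probe can confirm the service is reachable. Below is a small sketch; the `/api/public/health` path and the default port are assumptions about a typical self-hosted setup, so adjust both to your deployment.

```python
import urllib.request

def check_langfuse_health(base_url: str, opener=urllib.request.urlopen) -> bool:
    """Return True if the instance answers its (assumed) health endpoint.

    `opener` is injectable so the check can be exercised without a
    running server.
    """
    try:
        with opener(f"{base_url}/api/public/health", timeout=5) as resp:
            return 200 <= resp.status < 300
    except OSError:
        # Connection refused, DNS failure, timeout, HTTP error, etc.
        return False

# Usage (assuming the default docker compose port):
# check_langfuse_health("http://localhost:3000")
```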

🔌 Integrations

Main Integrations:

| Integration | Supports      | Description                                                                     |
| ----------- | ------------- | ------------------------------------------------------------------------------- |
| SDK         | Python, JS/TS | Manual instrumentation using the SDKs for full flexibility.                      |
| OpenAI      | Python, JS/TS | Automated instrumentation using drop-in replacement of OpenAI SDK.               |
| Langchain   | Python, JS/TS | Automated instrumentation by passing callback handler to Langchain application.  |
| LlamaIndex  | Python        | Automated instrumentation via LlamaIndex callback system.                        |
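The Langchain and LlamaIndex rows in the table rely on a callback-handler pattern: the framework invokes hooks around each LLM call, and the handler forwards them as trace events. The sketch below illustrates that pattern generically; `TraceCollector`, `run_llm`, and the hook names are hypothetical, not the actual integration API.

```python
from dataclasses import dataclass, field

@dataclass
class TraceCollector:
    """Hypothetical callback handler: records LLM lifecycle events
    the way an observability integration would forward them."""
    events: list = field(default_factory=list)

    def on_llm_start(self, prompt: str) -> None:
        self.events.append(("start", prompt))

    def on_llm_end(self, output: str) -> None:
        self.events.append(("end", output))

def run_llm(prompt: str, handler: TraceCollector) -> str:
    """Stand-in for a framework-managed LLM call that fires the hooks."""
    handler.on_llm_start(prompt)
    output = f"echo: {prompt}"   # placeholder for a real model response
    handler.on_llm_end(output)
    return output

# Usage: the handler is passed once and captures every call transparently.
handler = TraceCollector()
result = run_llm("hello", handler)
```

In the real integrations, the application code stays unchanged; only the handler (or a drop-in SDK wrapper, in the OpenAI case) is added.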

Five-Dimension Analysis

  • Clarity: 8/10
  • Innovation: 7/10
  • Practicality: 9/10
  • Completeness: 8/10
  • Maintainability: 8/10

Pros and Cons

Pros

  • Open source and community-driven
  • Supports multiple deployment options
  • Comprehensive API for integrations
  • Facilitates collaborative development

Cons

  • Setup may require technical expertise
  • Limited documentation for advanced features
  • Self-hosted setups may have performance issues
  • Learning curve for new users

Related Skills

useful-ai-prompts

A · tool · Co-Pilot / Assistive
88/100

“A treasure trove of prompts, but don't expect them to write your novel for you.”

mcpspy

A · tool · Co-Pilot / Assistive
86/100

“MCPSpy: because who wouldn't want to snoop on their AI's secrets?”

fastmcp

A · tool · Co-Pilot / Assistive
86/100

“FastMCP: because who doesn't love adding a little complexity to their AI?”

Disclaimer: this content is sourced from a GitHub open-source project and is provided for display and rating analysis only.

Copyright belongs to the original author, langfuse.