llama-farm/llamafarm
Co-Pilot · Updated a month ago · 0.8k stars
💡 Summary

LlamaFarm is an open-source AI platform that runs AI applications locally, ensuring complete privacy with no dependence on cloud services.

🎯 Who It's For

  • Data scientists looking for local AI solutions
  • Developers who want to build AI applications without cloud costs
  • Enterprises that need to protect sensitive data
  • Researchers who need customizable AI tools for document analysis
  • AI enthusiasts interested in hands-on machine learning projects

🤖 AI Hot Take: Looks like a strong contender, but don't let the setup scare people off.

Security Analysis: Medium Risk

Risk: Medium. Recommended checks: whether it executes shell/command-line instructions; whether it makes outbound network requests (SSRF/data exfiltration); how API keys/tokens are obtained and stored, and whether they can leak; dependency pinning and supply-chain risk. Run with least privilege, and audit the code and dependencies before enabling it in production.

LlamaFarm - Edge AI for Everyone

Enterprise AI capabilities on your own hardware. No cloud required.

License: Apache 2.0 · Python 3.10+ · Go 1.24+ · Docs · Discord

LlamaFarm is an open-source AI platform that runs entirely on your hardware. Build RAG applications, train custom classifiers, detect anomalies, and run document processing—all locally with complete privacy.

  • 🔒 Complete Privacy — Your data never leaves your device
  • 💰 No API Costs — Use open-source models without per-token fees
  • 🌐 Offline Capable — Works without internet once models are downloaded
  • Hardware Optimized — Automatic GPU/NPU acceleration on Apple Silicon, NVIDIA, and AMD

Desktop App Downloads

Get started instantly — no command line required:

| Platform | Download |
|----------|----------|
| Mac (Universal) | Download |
| Windows | Download |
| Linux (x86_64) | Download |
| Linux (ARM64) | Download |


What Can You Build?

| Capability | Description |
|-----------|-------------|
| RAG (Retrieval-Augmented Generation) | Ingest PDFs, docs, CSVs and query them with AI |
| Custom Classifiers | Train text classifiers with 8-16 examples using SetFit |
| Anomaly Detection | Detect outliers in logs, metrics, or transactions |
| OCR & Document Extraction | Extract text and structured data from images and PDFs |
| Named Entity Recognition | Find people, organizations, and locations |
| Multi-Model Runtime | Switch between Ollama, OpenAI, vLLM, or local GGUF models |

Video demo (90 seconds): https://youtu.be/W7MHGyN0MdQ


Quickstart

Option 1: Desktop App

Download the desktop app above and run it. No additional setup required.

Option 2: CLI + Development Mode

  1. Install the CLI

    macOS / Linux:

    curl -fsSL https://raw.githubusercontent.com/llama-farm/llamafarm/main/install.sh | bash

    Windows (PowerShell):

    irm https://raw.githubusercontent.com/llama-farm/llamafarm/main/install.ps1 | iex

    Or download directly from releases.

  2. Create and run a project

    lf init my-project   # Generates llamafarm.yaml
    lf start             # Starts services and opens Designer UI
  3. Chat with your AI

    lf chat                       # Interactive chat
    lf chat "Hello, LlamaFarm!"   # One-off message

The Designer web interface is available at http://localhost:8000.

Option 3: Development from Source

git clone https://github.com/llama-farm/llamafarm.git
cd llamafarm

# Install Nx globally and initialize the workspace
npm install -g nx
nx init --useDotNxInstallation --interactive=false   # Required on first clone

# Start all services (run each in a separate terminal)
nx start server              # FastAPI server (port 8000)
nx start rag                 # RAG worker for document processing
nx start universal-runtime   # ML models, OCR, embeddings (port 11540)

Architecture

LlamaFarm consists of three main services:

| Service | Port | Purpose |
|---------|------|---------|
| Server | 8000 | FastAPI REST API, Designer web UI, project management |
| RAG Worker | - | Celery worker for async document processing |
| Universal Runtime | 11540 | ML model inference, embeddings, OCR, anomaly detection |

All configuration lives in llamafarm.yaml—no scattered settings or hidden defaults.


Runtime Options

Universal Runtime (Recommended)

The Universal Runtime provides access to HuggingFace models plus specialized ML capabilities:

  • Text Generation - Any HuggingFace text model
  • Embeddings - sentence-transformers and other embedding models
  • OCR - Text extraction from images/PDFs (Surya, EasyOCR, PaddleOCR, Tesseract)
  • Document Extraction - Forms, invoices, receipts via vision models
  • Text Classification - Pre-trained or custom models via SetFit
  • Named Entity Recognition - Extract people, organizations, locations
  • Reranking - Cross-encoder models for improved RAG quality
  • Anomaly Detection - Isolation Forest, One-Class SVM, Local Outlier Factor, Autoencoders
runtime: models: default: provider: universal model: Qwen/Qwen2.5-1.5B-Instruct base_url: http://127.0.0.1:11540/v1

Ollama

Simple setup for GGUF models with CPU/GPU acceleration:

runtime:
  models:
    default:
      provider: ollama
      model: qwen3:8b
      base_url: http://localhost:11434/v1

OpenAI-Compatible

Works with vLLM, Together, Mistral API, or any OpenAI-compatible endpoint:

runtime:
  models:
    default:
      provider: openai
      model: gpt-4o
      base_url: https://api.openai.com/v1
      api_key: ${OPENAI_API_KEY}

Core Workflows

CLI Commands

| Task | Command |
|------|---------|
| Initialize project | lf init my-project |
| Start services | lf start |
| Interactive chat | lf chat |
| One-off message | lf chat "Your question" |
| List models | lf models list |
| Use specific model | lf chat --model powerful "Question" |
| Create dataset | lf datasets create -s pdf_ingest -b main_db research |
| Upload files (auto-process by default) | lf datasets upload research ./docs/*.pdf |
| Process dataset (if you skipped auto-process) | lf datasets process research |
| Query RAG | lf rag query --database main_db "Your query" |
| Check RAG health | lf rag health |

RAG Pipeline

  1. Create a dataset linked to a processing strategy and database
  2. Upload files (PDF, DOCX, Markdown, TXT) — processing runs automatically unless you pass --no-process
  3. Process manually only when you intentionally skipped auto-processing (e.g., large batches)
  4. Query using semantic search with optional metadata filtering
lf datasets create -s default -b main_db research
lf datasets upload research ./papers/*.pdf   # auto-processes by default
# For large batches:
# lf datasets upload research ./papers/*.pdf --no-process
# lf datasets process research
lf rag query --database main_db "What are the key findings?"

Designer Web UI

The Designer at http://localhost:8000 provides:

  • Visual dataset management with drag-and-drop uploads
  • Interactive configuration editor with live validation
  • Integrated chat with RAG context
  • Switch between visual and YAML editing modes

Configuration

llamafarm.yaml is the source of truth for each project:

version: v1
name: my-assistant
namespace: default

# Multi-model configuration
runtime:
  default_model: fast
  models:
    fast:
      description: "Fast local model"
      provider: universal
      model: Qwen/Qwen2.5-1.5B-Instruct
      base_url: http://127.0.0.1:11540/v1
    powerful:
      description: "More capable model"
      provider: universal
      model: Qwen/Qwen2.5-7B-Instruct
      base_url: http://127.0.0.1:11540/v1

# System prompts
prompts:
  - name: default
    messages:
      - role: system
        content: You are a helpful assistant.

# RAG configuration
rag:
  databases:
    - name: main_db
      type: ChromaStore
      default_embedding_strategy: default_embeddings
      default_retrieval_strategy: semantic_search
  embedding_strategies:
    - name: default_embeddings
      type: UniversalEmbedder
      config:
        model: sentence-transformers/all-MiniLM-L6-v2
        base_url: http://127.0.0.1:11540/v1
  retrieval_strategies:
    - name: semantic_search
      type: BasicSimilarityStrategy
      config:
        top_k: 5
  data_processing_strategies:
    - name: default
      parsers:
        - type: PDFParser_LlamaIndex
          config:
            chunk_size: 1000
            chunk_overlap: 100
        - type: MarkdownParser_Python
          config:
            chunk_size: 1000
      extractors: []

# Dataset definitions
datasets:
  - name: research
    data_processing_strategy: default
    database: main_db
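The chunk_size and chunk_overlap settings control how parsers split documents before embedding. As a rough illustration of the general technique (a sketch, not LlamaFarm's actual parser), a character-based sliding-window chunker looks like this:

```python
def chunk_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 100):
    """Split text into fixed-size chunks where each chunk
    repeats the last `chunk_overlap` characters of its predecessor."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap  # how far the window advances
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # final chunk reached the end of the text
    return chunks

doc = "x" * 2500
parts = chunk_text(doc, chunk_size=1000, chunk_overlap=100)
print([len(p) for p in parts])  # [1000, 1000, 700]
```

The overlap keeps sentences that straddle a chunk boundary retrievable from both neighboring chunks, at the cost of some duplicated storage.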

Environment Variable Substitution

Use ${VAR} syntax to inject secrets from .env files:

runtime:
  models:
    openai:
      api_key: ${OPENAI_API_KEY}
      # With default: ${OPENAI_API_KEY:-sk-default}
      # From specific file: ${file:.env.production:API_KEY}
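The ${VAR} and ${VAR:-default} forms follow shell-style substitution. A minimal sketch of how such expansion can be implemented (the ${file:...} form is omitted; this is an illustration, not LlamaFarm's loader):

```python
import os
import re

# Matches ${NAME} and ${NAME:-default}
_PATTERN = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)(?::-([^}]*))?\}")

def expand(value: str, env=os.environ) -> str:
    """Expand ${VAR} and ${VAR:-default} using the given environment."""
    def repl(match):
        name, default = match.group(1), match.group(2)
        if name in env:
            return env[name]
        if default is not None:
            return default
        raise KeyError(f"missing variable: {name}")
    return _PATTERN.sub(repl, value)

env = {"OPENAI_API_KEY": "sk-live-123"}
print(expand("api_key: ${OPENAI_API_KEY}", env))       # api_key: sk-live-123
print(expand("api_key: ${MISSING:-sk-default}", env))  # api_key: sk-default
```

Raising on a missing variable with no default surfaces configuration mistakes at startup instead of sending an empty key to a provider.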

See the Configuration Guide for complete reference.


REST API

LlamaFarm provides an OpenAI-compatible REST API:

Chat Completions

curl -X POST http://localhost:8000/v1/projects/default/my-project/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": false,
    "rag_enabled": true
  }'
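The same request can be made from Python. A sketch that builds the payload from the curl call above, assuming a server running at the default URL (the POST itself is left commented so the snippet stands alone):

```python
import json

# Payload mirroring the curl example; rag_enabled is LlamaFarm's
# extension to the OpenAI-compatible chat completions schema.
payload = {
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
    "rag_enabled": True,
}
url = "http://localhost:8000/v1/projects/default/my-project/chat/completions"
body = json.dumps(payload)

# Uncomment to send (requires a running LlamaFarm server):
# import requests
# resp = requests.post(url, data=body,
#                      headers={"Content-Type": "application/json"})
# print(resp.json())
print(body)
```

Because the endpoint is OpenAI-compatible, standard OpenAI client libraries pointed at this base URL should also work for plain chat calls.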

RAG Query

curl -X POST http://localhost:8000/v1/projects/default/my-project/rag/query \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What are the requirements?",
    "database": "main_db",
    "top_k": 5
  }'

See the API Reference for all endpoints.



Five-Dimension Analysis

Clarity: 8/10
Innovation: 7/10
Practicality: 9/10
Completeness: 9/10
Maintainability: 8/10
Pros and Cons

Pros

  • Local data processing ensures complete privacy
  • No per-use API fees
  • Works offline once models are downloaded
  • Supports multiple AI models and frameworks

Cons

  • Requires local hardware capable of running AI models
  • Initial setup may be complex for non-technical users
  • Limited community support compared to larger platforms
  • Performance depends on local hardware specs


Disclaimer: this content is sourced from a GitHub open-source project and is shown for display and rating analysis only.

Copyright belongs to the original author, llama-farm.