MCPSpy - MCP Monitoring with eBPF 🕵️✨
Overview
MCPSpy is a powerful command-line tool that leverages eBPF (Extended Berkeley Packet Filter) technology to monitor Model Context Protocol (MCP) communication at the kernel level. It provides real-time visibility into JSON-RPC 2.0 messages exchanged between MCP clients and servers by hooking into low-level system calls.
The Model Context Protocol supports three transport protocols for communication:
- Stdio: Communication over standard input/output streams
- Streamable HTTP: Direct HTTP request/response communication with server-sent events
- SSE (Server-Sent Events): HTTP-based streaming communication (Deprecated)
MCPSpy supports monitoring of both Stdio and HTTP/HTTPS transports (including Server-Sent Events), providing comprehensive coverage of MCP communication channels.
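On the stdio transport, each MCP message is a single newline-delimited JSON-RPC 2.0 object written to the peer process's stdin or read from its stdout. As a minimal sketch, the snippet below builds one such request (mirroring the `tools/call` example used later in this document) and pretty-prints it with `python3`:

```shell
# A minimal JSON-RPC 2.0 request as it would appear on an MCP stdio transport.
# One message per line; MCPSpy reconstructs these from low-level read/write calls.
req='{"jsonrpc":"2.0","id":7,"method":"tools/call","params":{"name":"get_weather","arguments":{"city":"New York"}}}'
printf '%s\n' "$req" | python3 -m json.tool
```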

Why MCPSpy?
The Model Context Protocol is becoming the standard for AI tool integration, but understanding what's happening under the hood can be challenging. MCPSpy addresses this by providing:
- 🔒 Security Analysis: Monitor what data is being transmitted, detect PII leakage, and audit tool executions
- 🛡️ Prompt Injection Detection: Real-time detection of prompt injection and jailbreak attempts using ML models
- 🐛 Debugging: Troubleshoot MCP integrations by seeing the actual message flow
- 📊 Performance Monitoring: Track message patterns and identify bottlenecks
- 🔍 Compliance: Ensure MCP communications meet regulatory requirements
- 🎓 Learning: Understand how MCP works by observing real communications
Installation
Prerequisites
- Linux kernel version 5.15 or later
- Root privileges (required for eBPF)
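A quick way to check the kernel prerequisite before installing (a sketch that relies on `sort -V` from GNU coreutils, present on essentially all modern Linux distributions):

```shell
# Compare the running kernel version against the 5.15 minimum.
required="5.15"
current=$(uname -r | cut -d- -f1)   # strip distro suffix, e.g. "6.8.0-45-generic" -> "6.8.0"
oldest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$oldest" = "$required" ]; then
  echo "kernel ${current}: OK (>= ${required})"
else
  echo "kernel ${current}: too old (need >= ${required})"
fi
```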
Download Pre-built Binary (Auto-detect OS + Arch)
Download the latest release from the release page:
```bash
# Set platform-aware binary name
BIN="mcpspy-$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m | sed -e 's/x86_64/amd64/' -e 's/aarch64/arm64/')"

# Download the correct binary
wget "https://github.com/alex-ilgayev/mcpspy/releases/latest/download/${BIN}"

# Make it executable and move to a directory in your PATH
chmod +x "${BIN}"
sudo mv "${BIN}" /usr/local/bin/mcpspy
```
✅ Note: Currently supported platforms: linux-amd64, linux-arm64
Build from Source
Install Dependencies
First, install the required system dependencies:
```bash
sudo apt-get update

# Install build essentials, eBPF dependencies
sudo apt-get install -y clang clang-format llvm make libbpf-dev build-essential

# Install Python 3 and pip (for e2e tests)
sudo apt-get install -y python3 python3-pip python3-venv

# Install docker and buildx (if not already installed)
sudo apt-get install -y docker.io docker-buildx
```
Install Go
MCPSpy requires Go 1.24 or later. Install Go using one of these methods:
Option 1: Install from the official Go website (Recommended)
```bash
# Download and install Go 1.24.1 (adjust version as needed)
wget https://go.dev/dl/go1.24.1.linux-amd64.tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.24.1.linux-amd64.tar.gz

# Add Go to PATH (add this to your ~/.bashrc or ~/.profile for persistence)
export PATH=$PATH:/usr/local/go/bin
```
Option 2: Install via snap
```bash
sudo snap install go --classic
```
Build MCPSpy
Clone the repository and build MCPSpy:
```bash
# Clone the repository
git clone https://github.com/alex-ilgayev/mcpspy.git
cd mcpspy

# Build the project
make all
```
Docker
```bash
# Build Docker image
make image

# Or pull the latest image
docker pull ghcr.io/alex-ilgayev/mcpspy:latest

# Or pull a specific image release
docker pull ghcr.io/alex-ilgayev/mcpspy:v0.1.0

# Run the container
docker run --rm -it --privileged ghcr.io/alex-ilgayev/mcpspy:latest
```
Kubernetes
MCPSpy can be deployed in Kubernetes clusters to monitor MCP traffic from AI/LLM services like LangFlow, LangGraph, and other applications that use the Model Context Protocol.
```bash
# Deploy MCPSpy as a DaemonSet
kubectl apply -f https://raw.githubusercontent.com/alex-ilgayev/mcpspy/main/deploy/kubernetes/mcpspy.yaml
```
Real-World Use Cases in Kubernetes
- Monitoring LangFlow/LangGraph Deployments
  - Observe MCP traffic between LangFlow/LangGraph and AI services
  - Debug integration issues in complex AI workflows
  - Audit AI interactions for security and compliance
- AI Service Monitoring
  - Track interactions with both remote and local MCP servers
  - Identify performance bottlenecks in AI service calls
  - Detect potential data leakage in AI communications
- Development and Testing
  - Test MCP implementations in containerized environments
  - Validate AI service integrations before production deployment
  - Ensure consistent behavior across different environments
For detailed instructions and real-world examples of monitoring AI services in Kubernetes, see the Kubernetes Usage Guide.
Usage
Basic Usage
```bash
# Start monitoring MCP communication (TUI mode is default)
sudo mcpspy

# Start monitoring with static console output (disable TUI)
sudo mcpspy --tui=false

# Start monitoring and save output to JSONL file
sudo mcpspy -o output.jsonl

# Stop monitoring with Ctrl+C (or 'q' in TUI mode)
```
Prompt Injection Detection
MCPSpy includes optional real-time prompt injection detection using HuggingFace's Inference API. When enabled, it analyzes MCP tool calls for potential injection attacks and jailbreak attempts.
Detection coverage:
- Request-based injection: Detects malicious prompts in tool call arguments
- Response-based injection: Detects malicious content in tool responses that could manipulate the agent
```bash
# Enable security scanning with HuggingFace token
sudo mcpspy --security --hf-token=hf_xxxxx

# Use a custom detection model
sudo mcpspy --security --hf-token=hf_xxxxx --security-model=protectai/deberta-v3-base-prompt-injection-v2

# Adjust detection threshold (default: 0.5)
sudo mcpspy --security --hf-token=hf_xxxxx --security-threshold=0.7

# Run analysis synchronously (blocks until analysis completes)
sudo mcpspy --security --hf-token=hf_xxxxx --security-async=false
```
Security CLI Flags:
| Flag | Description | Default |
| ---------------------- | --------------------------------------------------------- | ------------------------------------- |
| --security | Enable prompt injection detection | false |
| --hf-token | HuggingFace API token (required when security is enabled) | - |
| --security-model | HuggingFace model for detection | protectai/deberta-v3-base-prompt-injection-v2 |
| --security-threshold | Detection threshold (0.0-1.0) | 0.5 |
| --security-async | Run analysis asynchronously | true |
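To illustrate how `--security-threshold` gates an alert: the detection model returns a confidence score in the range 0.0-1.0, and a score at or above the threshold triggers an alert. The snippet below is an illustrative sketch only, with made-up values; MCPSpy performs this comparison internally.

```shell
# Illustrative values: a model score and the detection threshold.
score=0.73
threshold=0.5

# Plain shell cannot compare floats, so delegate the comparison to awk.
if awk -v s="$score" -v t="$threshold" 'BEGIN { exit !(s >= t) }'; then
  echo "ALERT: possible prompt injection (score ${score} >= threshold ${threshold})"
else
  echo "OK (score ${score} < threshold ${threshold})"
fi
```

Raising the threshold (e.g. `--security-threshold=0.7`) trades fewer false positives for potentially missed borderline detections.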
Supported Models:
- protectai/deberta-v3-base-prompt-injection-v2 (default, publicly accessible)
- meta-llama/Llama-Prompt-Guard-2-86M (deprecated on HF Inference API)
When a potential injection is detected, MCPSpy displays a security alert with risk level (low/medium/high/critical), category, and the analyzed content.
Output Format
TUI Mode (Default)
MCPSpy runs in interactive Terminal UI mode by default. The TUI provides:
- Interactive table view with scrolling
- Detailed message inspection (press Enter)
- Filtering by transport, type, and actor
- Multiple density modes for different screen sizes
- Real-time statistics
Static Console Output
When running with --tui=false:
```
12:34:56.789 python[12345] → python[12346] REQ  tools/call (get_weather) Execute a tool
12:34:56.890 python[12346] → python[12345] RESP OK
```
JSONL Output
Stdio Transport - Request:
```json
{
  "timestamp": "2024-01-15T12:34:56.789Z",
  "transport_type": "stdio",
  "stdio_transport": {
    "from_pid": 12345,
    "from_comm": "python",
    "to_pid": 12346,
    "to_comm": "python"
  },
  "type": "request",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "New York" }
  },
  "error": {},
  "raw": "{...}"
}
```
Stdio Transport - Response:
```json
{
  "timestamp": "2024-01-15T12:34:56.890Z",
  "transport_type": "stdio",
  "stdio_transport": {
    "from_pid": 12346,
    "from_comm": "python",
    "to_pid": 12345,
    "to_comm": "python"
  },
  "type": "response",
  "id": 7,
  "result": {
    "content": [
      { "type": "text", "text": "Weather in New York: 20°C" }
    ],
    "isError": false
  },
  "error": {},
  "request": {
    "type": "request",
    "id": 7,
    "method": "tools/call",
    "params": {
      "name": "get_weather",
      "arguments": { "city": "New York" }
    },
    "error": {}
  },
  "raw": "{...}"
}
```
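A benefit of JSONL is that captures can be post-processed with standard tools. As a minimal sketch, the snippet below writes two hand-made sample records that mimic the schema above (a real file comes from `sudo mcpspy -o output.jsonl`) and summarizes them with `python3`:

```shell
# Write a tiny sample capture, then count messages by type.
cat > output.jsonl <<'EOF'
{"timestamp":"2024-01-15T12:34:56.789Z","transport_type":"stdio","type":"request","id":7,"method":"tools/call"}
{"timestamp":"2024-01-15T12:34:56.890Z","transport_type":"stdio","type":"response","id":7}
EOF

python3 - <<'EOF'
import collections
import json

counts = collections.Counter()
with open("output.jsonl") as f:
    for line in f:
        msg = json.loads(line)       # one JSON object per line
        counts[msg["type"]] += 1

for msg_type, n in sorted(counts.items()):
    print(f"{msg_type}: {n}")
EOF
```

The same approach extends to filtering by `method`, `transport_type`, or process name when auditing a longer capture.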
Pros
- Real-time monitoring of MCP communications
- Supports multiple transport protocols
- Detects prompt injection attempts
- Comprehensive security analysis
Cons
- Requires root privileges for eBPF
- Limited to Linux environments
- Complex setup for beginners
- Dependency on external tools like Docker
Copyright belongs to the original author alex-ilgayev.
