Langfuse is the leading open-source platform for LLM observability and engineering. It provides tracing, evaluation, prompt management, and debugging tools for AI applications with integrations for every major framework.
Langfuse gives you X-ray vision into your AI agents. Every LLM call, tool use, and agent step is traced, timed, and costed, so you can debug failures, spot performance bottlenecks, and keep spend under control.
from langfuse import Langfuse

# Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST
# from the environment
langfuse = Langfuse()

# One trace per agent run, with a nested span for each step
trace = langfuse.trace(name="research-agent")
span = trace.span(name="web-search")
# ... your logic
span.end()  # close the span so its duration is recorded
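One caveat with the snippet above: if your logic raises, the line calling `span.end()` never runs and the span never reports its duration. A small context-manager wrapper guarantees cleanup. This is a sketch, not part of the Langfuse API: `traced_span` is a hypothetical helper, and the stub classes stand in for the SDK client so the example is self-contained.

```python
from contextlib import contextmanager

@contextmanager
def traced_span(trace, name):
    """Open a span and guarantee span.end() runs, even if the body raises."""
    span = trace.span(name=name)
    try:
        yield span
    finally:
        span.end()

# Stubs standing in for the Langfuse SDK, just to exercise the wrapper.
class StubSpan:
    def __init__(self):
        self.ended = False
    def end(self):
        self.ended = True

class StubTrace:
    def span(self, name):
        self.last = StubSpan()
        return self.last

trace = StubTrace()
try:
    with traced_span(trace, "web-search") as span:
        raise RuntimeError("search failed")
except RuntimeError:
    pass
print(trace.last.ended)  # True — the span was closed despite the error
```

With a real Langfuse trace in place of the stub, the wrapper works the same way: the `finally` block closes the span on both the success and error paths.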
Dashboard:
├── Research Agent (4.2s, $0.08)
│   ├── Query Planning (0.3s, $0.01)
│   ├── Web Search (1.2s)
│   ├── Synthesis (1.5s, $0.05)
│   └── Report Gen (0.4s, $0.02)
⚠️ Web search returning 0 results 15% of the time
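The parent row in the tree above rolls up its children: the child costs sum to the trace's $0.08, while the child durations sum to 3.4s against the parent's 4.2s, the gap being agent overhead between steps. A quick sanity check of those roll-ups, using the (hypothetical) numbers from the dashboard:

```python
# Child steps from the trace above: (name, seconds, dollars)
steps = [
    ("Query Planning", 0.3, 0.01),
    ("Web Search",     1.2, 0.00),  # search API call, no LLM cost
    ("Synthesis",      1.5, 0.05),
    ("Report Gen",     0.4, 0.02),
]

total_cost = round(sum(cost for _, _, cost in steps), 2)
total_time = round(sum(secs for _, secs, _ in steps), 1)
print(total_cost, total_time)  # 0.08 3.4 — parent's 4.2s includes overhead
```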
AI agents and tools that work well with Langfuse:
Google's MCP Toolbox for Databases: connect AI agents to PostgreSQL, MySQL, BigQuery, Spanner, and more.
AI-driven ETL pipelines that transform, clean, and load data with intelligent schema mapping and error handling.
Agent monitoring, cost tracking, and evaluation: observability built specifically for AI agents.
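The "intelligent schema mapping" mentioned for the ETL item above can be approximated with fuzzy column-name matching. A minimal sketch, with `map_columns` as a hypothetical helper and difflib chosen purely for illustration, not how any particular pipeline implements it:

```python
import difflib

def map_columns(source_cols, target_cols, cutoff=0.6):
    """Map each source column to the closest-named target column, if any."""
    mapping = {}
    for col in source_cols:
        # Normalize before matching: lowercase, underscores for spaces
        norm = col.strip().lower().replace(" ", "_")
        match = difflib.get_close_matches(norm, target_cols, n=1, cutoff=cutoff)
        if match:
            mapping[col] = match[0]
    return mapping

mapping = map_columns(["Customer ID", "E-mail"], ["customer_id", "email"])
print(mapping)  # {'Customer ID': 'customer_id', 'E-mail': 'email'}
```

Columns below the similarity cutoff are simply left unmapped, which is where an LLM-backed mapper would take over in a real pipeline.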