TwoTail is an AI analyst that clusters your agent traces, diagnoses root causes, and runs optimization experiments.
Works with your stack
How It Works
See how TwoTail turns raw agent telemetry into actionable improvements.
Features
Everything you need to understand, debug, and improve your agents.
Send OpenTelemetry traces from any framework. Works with LangChain, LlamaIndex, or your custom stack.
Teach TwoTail your terminology, user segments, and what success looks like.
Ask questions in plain English. Get charts and insights in seconds.
Pre-built analyses for common agent patterns: loops, failures, latency spikes.
A/B test prompts, models, and configurations. Measure what actually improves outcomes.
TwoTail continuously monitors your traces and surfaces issues before you ask. Get alerts on regressions, anomalies, and emerging failure patterns — no queries needed.
TwoTail accepts traces via OpenTelemetry (OTLP). If your agent framework already emits OTel spans — LangChain, LlamaIndex, CrewAI, or custom setups — just point the exporter at your TwoTail endpoint. No SDK to install.
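Wiring this up looks something like the sketch below, using the standard OpenTelemetry Python SDK. The endpoint URL and the `x-api-key` header name are placeholders, not TwoTail's documented values; substitute whatever your TwoTail dashboard gives you.

```python
# Minimal sketch: point an existing OpenTelemetry setup at TwoTail.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://ingest.twotail.example/v1/traces",  # placeholder endpoint
    headers={"x-api-key": "YOUR_API_KEY"},                # placeholder auth header
)

# Batch spans and export them over OTLP/HTTP; any framework that emits
# OTel spans (LangChain, LlamaIndex, CrewAI, custom) picks this up.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```

If your framework already configures a `TracerProvider`, you only need to swap in the exporter; nothing else in your instrumentation changes.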
Those tools show you traces. TwoTail analyzes them. Ask questions in plain English to get charts and insights, cluster similar failures, and surface patterns across thousands of runs — without writing queries or building dashboards.
No. If you already have OpenTelemetry instrumentation, TwoTail works with your existing setup. If you don't, adding a few lines of OTel config is all it takes.
Your data is stored in isolated Supabase-backed Postgres databases with row-level security. Each account's data is fully segregated, and all connections are encrypted in transit.
TwoTail has a free Starter tier for small projects. Check our pricing page for full details on plans and limits.
TwoTail turns raw telemetry into the insights you need to ship better agents, faster. No more grepping through JSON.