Laminar automatically traces calls to supported LLM providers, agent frameworks, and vector databases. You don’t need to change your code—just initialize Laminar, and calls to OpenAI, Anthropic, LangChain, and others are captured with full details: prompts, responses, tokens, latency, and cost.
Browse the integrations below to see setup instructions and what’s captured for each.
Don’t see your provider? You can trace any LLM call with the @observe decorator or by creating manual spans. See Tracing Structure or jump straight to observe usage.
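As a minimal sketch of that manual fallback, assuming the `lmnr` Python SDK is installed and `call_custom_llm` stands in for a hypothetical client of an unsupported provider, wrapping a function with `@observe` records each call as a span:

```python
from lmnr import Laminar, observe

# Initialize once at startup. If no key is passed, the SDK reads the
# project API key from the environment (assumption: env-based config).
Laminar.initialize()

# @observe creates a span for every call to the decorated function,
# capturing its arguments and return value in the trace.
@observe()
def query_model(prompt: str) -> str:
    # call_custom_llm is a hypothetical helper for a provider
    # Laminar does not instrument automatically.
    return call_custom_llm(prompt)
```

Spans created this way nest under any enclosing `@observe`-decorated function, so manual and automatic traces appear in the same tree.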