Evaluations
Evaluations score traces so you can quantify improvements and catch regressions as models, prompts, and code change.
Introduction: What evals are and when to use them.
Quickstart: Run your first evaluation.
Using a Dataset: Evaluate against curated data.
Online Evaluators: Score production traffic continuously.
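For a sense of the shape of an evaluation, here is a minimal sketch using the Python SDK's `evaluate` entrypoint. The `capital_agent` executor and `exact_match` evaluator are hypothetical stand-ins; the Quickstart covers the exact signature, datapoint format, and project setup.

```python
from lmnr import evaluate

# Hypothetical executor: the function under test.
# It receives a datapoint's `data` and returns an output.
def capital_agent(data: dict) -> str:
    # ... call your model or pipeline here ...
    return "Ottawa" if data["country"] == "Canada" else "unknown"

# Hypothetical evaluator: scores the executor's output
# against the datapoint's `target`.
def exact_match(output: str, target: dict) -> int:
    return 1 if output == target["capital"] else 0

evaluate(
    data=[
        {"data": {"country": "Canada"}, "target": {"capital": "Ottawa"}},
        {"data": {"country": "France"}, "target": {"capital": "Paris"}},
    ],
    executor=capital_agent,
    evaluators={"exact_match": exact_match},
)
```

Each run scores every datapoint and records the results as traces, so you can compare scores across runs as your models, prompts, or code change.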
Next: turn failures into reusable test data with Datasets and Queues.