What You Can Query
| Table | Contains |
|---|---|
| spans | Individual spans (LLM/tool/custom/eval spans) with events and tags |
| traces | Trace-level aggregates derived from spans |
| signal_events | Signal-triggered events and payloads |
| signal_runs | Signal execution runs and statuses |
| logs | Streaming logs (for example, sandbox command output) |
| dataset_datapoints | Dataset datapoints (latest version per datapoint) |
| dataset_datapoint_versions | Dataset datapoints (all versions/history) |
| evaluation_datapoints | Evaluation datapoints with scores, executor output, dataset links, and associated trace data |
Only SELECT queries are allowed.
Getting Started
Open the SQL Editor from the sidebar and write a query. You can also query programmatically via the /v1/sql/query endpoint: authenticate with your project API key and pass { "query": "..." } in the request body.
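A minimal first query to try in the editor (a sketch; assumes your project has recorded spans in the last day):

```sql
-- Most recent spans from the last 24 hours
SELECT name, span_type, duration, start_time
FROM spans
WHERE start_time >= now() - INTERVAL 1 DAY
ORDER BY start_time DESC
LIMIT 10
```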
Writing Queries
Laminar uses ClickHouse, a columnar analytics database. The basics work like standard SQL (SELECT, FROM, WHERE, GROUP BY, ORDER BY, LIMIT), with a few differences.
Always filter by time
Spans are ordered by start_time. Adding a time filter dramatically speeds up queries and prevents memory issues:
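For example (a sketch; the 7-day window is arbitrary):

```sql
-- Restrict the scan to the last 7 days
SELECT name, duration
FROM spans
WHERE start_time >= now() - INTERVAL 7 DAY
ORDER BY start_time DESC
LIMIT 100
```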
Avoid joins
ClickHouse isn't optimized for joins. Instead, run two queries and combine results in your application.
Working with dates
Truncate timestamps for grouping with toStartOfInterval:
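For example, bucketing spans by hour (a sketch using the spans columns documented below):

```sql
-- Hourly span counts and average duration over the last day
SELECT
    toStartOfInterval(start_time, INTERVAL 1 HOUR) AS hour,
    count() AS span_count,
    avg(duration) AS avg_duration
FROM spans
WHERE start_time >= now() - INTERVAL 1 DAY
GROUP BY hour
ORDER BY hour
```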
The second argument is the bucket size: INTERVAL 15 MINUTE, INTERVAL 1 HOUR, etc.
Shortcuts exist for common intervals: toStartOfDay(value), toStartOfHour(value), toStartOfWeek(value).
Working with JSON
Many columns (like attributes) store JSON as strings. Use simpleJSONExtract* functions (simpleJSONExtractString, simpleJSONExtractInt, and so on) for fast extraction, and simpleJSONHas to check whether a key exists. The JSONExtract* functions are more flexible but slower.
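For example (a sketch; the gen_ai.system attribute key is taken from the example value in the spans schema below):

```sql
-- Fast extraction from the JSON attributes string
SELECT
    simpleJSONExtractString(attributes, 'gen_ai.system') AS system,
    count() AS spans_count
FROM spans
WHERE start_time >= now() - INTERVAL 1 DAY
  AND simpleJSONHas(attributes, 'gen_ai.system')
GROUP BY system
```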
Working with tuples
Some columns store data as arrays of named tuples. The most common examples are:
- spans.events: Array(Tuple(timestamp Int64, name String, attributes String))
- evaluation_datapoints.trace_spans: Array(Tuple(name String, duration Float64, type String))
Use tupleElement(tuple, 'fieldName') to extract fields from tuples:
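For example, reading the first event on each span (a sketch; ClickHouse arrays are 1-indexed):

```sql
-- Name of the first event on each span
SELECT
    name,
    tupleElement(events[1], 'name') AS first_event
FROM spans
WHERE start_time >= now() - INTERVAL 1 DAY
  AND length(events) > 0
LIMIT 10
```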
Use arrayExists to filter arrays by tuple field values:
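For example (a sketch; the 'cache_hit' event name comes from the example value in the spans schema below):

```sql
-- Spans that recorded a 'cache_hit' event
SELECT name, start_time
FROM spans
WHERE start_time >= now() - INTERVAL 1 DAY
  AND arrayExists(e -> tupleElement(e, 'name') = 'cache_hit', events)
```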
Use ARRAY JOIN to unnest tuple arrays into separate rows:
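For example (a sketch producing one row per span/event pair):

```sql
-- One row per (span, event) pair
SELECT
    name AS span_name,
    tupleElement(event, 'name') AS event_name,
    tupleElement(event, 'timestamp') AS event_ts
FROM spans
ARRAY JOIN events AS event
WHERE start_time >= now() - INTERVAL 1 DAY
```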
Use arrayMap and tupleElement for transformations:
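For example, projecting each span's event tuples down to just their names (a sketch):

```sql
-- Collect just the event names for each span
SELECT
    name,
    arrayMap(e -> tupleElement(e, 'name'), events) AS event_names
FROM spans
WHERE start_time >= now() - INTERVAL 1 DAY
  AND length(events) > 0
```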
Data types
| Type | Used for |
|---|---|
UUID | Most ID columns (trace_id, span_id, id) |
DateTime64(9, 'UTC') | Timestamps (always UTC) |
String | Text, JSON stored as strings, and enum-like columns (span_type, trace_type, status) |
LowCardinality(String) | Low-cardinality enums |
Float64 | Floating point numbers |
Int64 | Counts, token numbers |
UInt8 | Small numeric enums (for example, logs.severity_number) |
UInt32 | Flags and bitmasks (for example, logs.flags) |
UInt64 | Indexes and counters (evaluation_datapoints.index) |
Bool | Flags like has_browser_session |
Array(String) | Tag lists and other string arrays |
Array(Tuple(...)) | Complex nested arrays (for example, events) |
Cast values explicitly with CAST(value AS Type) or with helpers like toDateTime64('2025-01-01 00:00:00', 9, 'UTC').
Table Schemas
These are the logical tables exposed in the SQL Editor. The schemas below reflect the columns available for queries.
spans
| Column | Type | Example value |
|---|---|---|
span_id | UUID | "00000000-0000-0000-1234-426614174000" |
name | String | "openai.chat" |
span_type | String | "LLM" |
start_time | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
end_time | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
duration | Float64 | 1.23 |
input_cost | Float64 | 0.5667 |
output_cost | Float64 | 0.123 |
total_cost | Float64 | 0.6897 |
input_tokens | Int64 | 150 |
output_tokens | Int64 | 100 |
total_tokens | Int64 | 250 |
request_model | String | "gpt-4.1-mini" |
response_model | String | "gpt-4.1-mini-2025-04-14" |
model | String | "gpt-4.1-mini-2025-04-14" |
trace_id | UUID | "12345678-90ab-4def-8234-426614174000" |
provider | String | "openai" |
path | String | "workflow.process.step1.openai.chat" |
input | String | "[{\"role\": \"user\", \"content\": \"Hello\"}]" |
output | String | "[{\"role\": \"assistant\", \"content\": \"Hi\"}]" |
status | String | "success" |
parent_span_id | UUID | "00000000-0000-0000-a456-abcd5667ef09" |
attributes | String | "{\"gen_ai.system\": \"openai\", \"gen_ai.model\": \"gpt-4o\"}" |
tags | Array(String) | ["needs-review", "tool-call"] |
events | Array(Tuple(timestamp Int64, name String, attributes String)) | [Tuple(1735689600000000000, 'cache_hit', '{}'), ...] |
Path
The Laminar span path is stored as an array of span names in span attributes. In SQL queries, however, it is exposed as a string with items joined by dots. For example, if the span path is ["outer", "inner"], the path column will be "outer.inner".
If needed, you can still access the array value by reading attributes with simpleJSONExtractRaw(attributes, 'lmnr.span.path').
Parent span ID
If the current span is the top span of the trace, the parent_span_id will be the zero UUID, i.e. "00000000-0000-0000-0000-000000000000".
Span type
Here are the values of the span_type column and their meanings:
Status
Status is normalized to "success" or "error".
Input and output
The input and output columns are stored as either raw strings or stringified JSON. The best way to parse them is to attempt JSON parsing and fall back to the raw string on failure. You can also use the isValidJSON function right in the query to test for this.
The input and output columns are also indexed on content, so you can use them in WHERE conditions. Use ILIKE instead of LIKE, because the index is case-insensitive.
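For example, combining a content filter with a JSON-validity check (a sketch; the '%hello%' search term is arbitrary):

```sql
-- Content search uses the case-insensitive index
SELECT name, output
FROM spans
WHERE start_time >= now() - INTERVAL 1 DAY
  AND input ILIKE '%hello%'
  AND isValidJSON(output)
LIMIT 10
```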
Attributes
The attributes column is stored as a string in JSON format, so you can safely JSON.parse / json.loads it. You can also use JSON* and simpleJSON* functions on it right in queries. Attributes are guaranteed to be a valid JSON object.
Model
The model column is set to the response model if present; otherwise it falls back to the request model.
Total tokens and total cost
Usually, total_tokens = input_tokens + output_tokens and total_cost = input_cost + output_cost.
However, you can manually report these values using the relevant attributes. In this case, totals may not be equal to the sum of the input and output tokens and costs.
Tags
The tags column contains an array of string tags attached to spans. You can filter by tags using array functions:
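For example (a sketch; tag names come from the example values in the schema above):

```sql
-- Spans tagged 'needs-review'
SELECT name, start_time, tags
FROM spans
WHERE start_time >= now() - INTERVAL 7 DAY
  AND has(tags, 'needs-review')
-- hasAny(tags, ['needs-review', 'tool-call']) matches any of several tags
```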
Events
The events column contains an array of event tuples. Each event has:
- timestamp (Int64): Unix nanoseconds
- name (String): Event name
- attributes (String): JSON attributes
Use arrayJoin (or the ARRAY JOIN clause) to unnest events into one row per event:
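For example, counting events by name (a sketch):

```sql
-- Count events by name over the last day
SELECT
    tupleElement(e, 'name') AS event_name,
    count() AS occurrences
FROM spans
ARRAY JOIN events AS e
WHERE start_time >= now() - INTERVAL 1 DAY
GROUP BY event_name
ORDER BY occurrences DESC
```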
traces
| Column | Type | Example value |
|---|---|---|
id | UUID | "01234567-1234-4def-8234-426614174000" |
start_time | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
end_time | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
input_tokens | Int64 | 150 |
output_tokens | Int64 | 100 |
total_tokens | Int64 | 250 |
input_cost | Float64 | 0.5667 |
output_cost | Float64 | 0.123 |
total_cost | Float64 | 0.6897 |
duration | Float64 | 1.23 |
metadata | String | "{\"key\": \"value\"}" |
session_id | String | "session_123" |
user_id | String | "user_123" |
status | String | "success" |
top_span_id | UUID | "00000000-0000-0000-1234-426614174000" |
top_span_name | String | "run" |
top_span_type | String | "DEFAULT" |
trace_type | String | "DEFAULT" |
tags | Array(String) | ["needs-review", "production"] |
has_browser_session | Bool | true |
id is the trace ID; join to spans with spans.trace_id = traces.id.
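Since joins are best avoided, a subquery is a cheap way to link the two tables (a sketch; filters both sides by time):

```sql
-- Spans belonging to failed traces, via a subquery instead of a join
SELECT name, duration
FROM spans
WHERE start_time >= now() - INTERVAL 1 DAY
  AND trace_id IN (
      SELECT id FROM traces
      WHERE start_time >= now() - INTERVAL 1 DAY AND status = 'error'
  )
```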
Trace type
Here are the values of the trace_type column and their meanings:
Duration
The duration is in seconds and is calculated as end_time - start_time.
Status
Status is set to error if any span in the trace has status error; otherwise it is success.
Metadata
Metadata is stored as a string in JSON format, so you can safely JSON.parse / json.loads it. You can also use JSON* and simpleJSON* functions on it right in queries. Metadata is guaranteed to be a valid JSON object.
signal_events
| Column | Type | Example value |
|---|---|---|
id | UUID | "01234567-89ab-4def-8234-426614174000" |
signal_id | UUID | "11111111-2222-4333-9444-555555555555" |
trace_id | UUID | "01234567-1234-4def-a234-426614174000" |
run_id | UUID | "22222222-3333-4444-b555-666666666666" |
name | String | "error.clustered" |
payload | String | "{\"error_type\": \"timeout\", \"count\": 3}" |
timestamp | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
Payload
The payload column is stored as a string in JSON format, so you can safely JSON.parse / json.loads it. You can also use JSON* and simpleJSON* functions on it right in queries. Payload is guaranteed to be a valid JSON object.
signal_runs
| Column | Type | Example value |
|---|---|---|
signal_id | UUID | "11111111-2222-4333-8444-555555555555" |
job_id | UUID | "33333333-4444-4555-9666-777777777777" |
trigger_id | UUID | "44444444-5555-4666-a777-888888888888" |
run_id | UUID | "22222222-3333-4444-b555-666666666666" |
trace_id | UUID | "01234567-1234-4def-8234-426614174000" |
status | String | "COMPLETED" |
event_id | UUID | "55555555-6666-4777-8888-999999999999" |
updated_at | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
Status
status is one of "PENDING", "COMPLETED", "FAILED", or "UNKNOWN".
logs
| Column | Type | Example value |
|---|---|---|
log_id | UUID | "01234567-89ab-4def-8234-426614174000" |
time | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
observed_time | DateTime64(9, 'UTC') | "2021-01-01 00:00:01" |
severity_number | UInt8 | 9 |
severity_text | String | "INFO" |
body | String | "Processing iteration 1 of 5" |
attributes | String | "{\"stream\": \"stdout\", \"sandbox_id\": \"sbx_123\"}" |
trace_id | UUID | "01234567-1234-4def-9234-426614174000" |
span_id | UUID | "00000000-0000-0000-1234-426614174000" |
flags | UInt32 | 0 |
event_name | String | "process.stdout" |
Attributes
The attributes column is stored as a string in JSON format, so you can safely JSON.parse / json.loads it. You can also use JSON* and simpleJSON* functions on it right in queries. Attributes are guaranteed to be a valid JSON object.
evaluation_datapoints
| Column | Type | Example value |
|---|---|---|
id | UUID | "01234567-89ab-4def-8234-426614174000" |
evaluation_id | UUID | "98765432-1098-4654-9210-987654321098" |
data | String | "{\"key\": \"value\"}" |
target | String | "{\"key\": \"value\"}" |
metadata | String | "{\"key\": \"value\"}" |
executor_output | String | "{\"key\": \"value\"}" |
index | UInt64 | 0 |
trace_id | UUID | "01234567-1234-4def-a234-426614174000" |
group_id | String | "group_a" |
scores | String | "{\"score1\": 0.85}" |
created_at | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
dataset_id | UUID | "00000000-0000-0000-0000-000000000000" |
dataset_datapoint_id | UUID | "00000000-0000-0000-0000-000000000000" |
dataset_datapoint_created_at | DateTime64(9, 'UTC') | "1970-01-01 00:00:00" |
duration | Float64 | 1.23 |
input_cost | Float64 | 0.5667 |
output_cost | Float64 | 0.123 |
total_cost | Float64 | 0.6897 |
start_time | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
end_time | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
input_tokens | Int64 | 150 |
output_tokens | Int64 | 100 |
total_tokens | Int64 | 250 |
trace_status | String | "success" |
trace_metadata | String | "{\"key\": \"value\"}" |
trace_tags | Array(String) | ["production", "experiment-a"] |
trace_spans | Array(Tuple(name String, duration Float64, type String)) | [Tuple('openai.chat', 1.23, 'LLM'), ...] |
data, target, metadata, executor_output, scores, and trace_metadata are JSON stored as strings. scores is a JSON object mapping string keys to numeric values.
When the datapoint is not sourced from a dataset, dataset_id and dataset_datapoint_id are a nil UUID (all zeroes) and dataset_datapoint_created_at is the Unix epoch.
The trace-related columns (duration, input_cost, output_cost, total_cost, start_time, end_time, input_tokens, output_tokens, total_tokens, trace_status, trace_metadata, trace_tags, trace_spans) are joined from the associated trace. If no trace exists for the datapoint, these columns will be nil/empty/zero.
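For example, extracting a numeric score from the JSON scores column (a sketch; the 'score1' key comes from the example value in the schema above):

```sql
-- Average of the 'score1' score per evaluation
SELECT
    evaluation_id,
    avg(JSONExtractFloat(scores, 'score1')) AS avg_score1
FROM evaluation_datapoints
GROUP BY evaluation_id
```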
dataset_datapoints
| Column | Type | Example value |
|---|---|---|
id | UUID | "019c6634-0cb4-7f9d-8192-f36604488483" |
created_at | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
dataset_id | UUID | "11111111-2222-4333-a444-555555555555" |
data | String | "{\"query\": \"What is 2+2?\"}" |
target | String | "{\"answer\": \"4\"}" |
metadata | String | "{\"source\": \"prod\"}" |
data, target, and metadata are JSON stored as strings.
dataset_datapoint_versions
Same schema as dataset_datapoints, but includes all versions and history for each datapoint.
Example Queries
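A cost breakdown by model (a sketch; uses the spans columns documented above and assumes LLM spans populate model and total_cost):

```sql
-- Total cost and tokens per model over the last 7 days
SELECT
    model,
    sum(total_cost) AS cost,
    sum(total_tokens) AS tokens
FROM spans
WHERE start_time >= now() - INTERVAL 7 DAY
  AND span_type = 'LLM'
  AND model != ''
GROUP BY model
ORDER BY cost DESC
```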
You can aggregate across any of the tables above to build cost, latency, and error breakdowns.
Exporting Results
Select results and click “Export to Dataset.” Map columns to dataset fields (data, target, metadata). Use this to build evaluation datasets from query results.
