## When You Need This
- Self-hosted or fine-tuned models
- Providers Laminar doesn’t instrument yet
- Direct HTTP calls to LLM APIs
- Custom inference servers
## Supported Provider Names (for Cost Calculation)
When you manually instrument an LLM span, set the provider name to one of the supported values below so Laminar can match pricing.

| Provider | Provider name | Example model |
|---|---|---|
| OpenAI | openai | gpt-4o, gpt-4o-2024-11-20 |
| Anthropic | anthropic | claude-3-5-sonnet, claude-3-5-sonnet-20241022 |
| Google Gemini | gemini, google-genai | models/gemini-1.5-pro |
| Azure OpenAI | azure-openai | gpt-4o-mini-2024-07-18 |
| AWS Bedrock (Anthropic) | bedrock-anthropic | claude-3-5-sonnet-20241022-v2:0 |
| Mistral | mistral | mistral-large-2407 |
| Groq | groq | llama-3.1-70b-versatile |
## Required Attributes
For Laminar to calculate costs and display LLM-specific UI, set these attributes:

| Attribute | Description |
|---|---|
| Provider | Provider name (e.g., openai, anthropic, custom) |
| Request model | Model name you requested |
| Response model | Model name returned by the API |
| Input token count | Number of input tokens |
| Output token count | Number of output tokens |
You must also set `spanType: 'LLM'` when creating the span. Without this, the span appears as a generic operation.
## Example: Manually Instrument an LLM Call
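A minimal TypeScript sketch, assuming the `@lmnr-ai/lmnr` SDK's `Laminar.startSpan` and `LaminarAttributes` APIs alongside the official `openai` client; the exact attribute key names and `startSpan` signature may differ across SDK versions, so verify against your installed `LaminarAttributes`:

```typescript
import OpenAI from 'openai';
import { Laminar, LaminarAttributes } from '@lmnr-ai/lmnr';

// Assumes Laminar.initialize(...) was called at app startup.
const openai = new OpenAI();

// Create the span manually; spanType: 'LLM' makes Laminar render it as
// an LLM call and enables cost calculation.
const span = Laminar.startSpan({ name: 'my_llm_call', spanType: 'LLM' });

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is Laminar?' }],
});

span.setAttributes({
  // Provider + models: required for the pricing lookup.
  [LaminarAttributes.PROVIDER]: 'openai',
  [LaminarAttributes.REQUEST_MODEL]: 'gpt-4o',
  // Use the exact model string the API returned.
  [LaminarAttributes.RESPONSE_MODEL]: response.model,
  // Token counts from the provider's usage block.
  [LaminarAttributes.INPUT_TOKEN_COUNT]: response.usage?.prompt_tokens ?? 0,
  [LaminarAttributes.OUTPUT_TOKEN_COUNT]: response.usage?.completion_tokens ?? 0,
});

// For a custom provider with no pricing data, you could instead attach
// explicit costs, which take precedence over calculated costs. The
// attribute keys below are assumptions -- verify them against
// LaminarAttributes in your SDK version:
// span.setAttributes({
//   'gen_ai.usage.input_cost': 0.003,
//   'gen_ai.usage.output_cost': 0.0035,
// });

span.end();
```

The commented-out block illustrates the explicit-cost path for custom providers; the key names there are assumptions, so confirm them before relying on the precedence behavior.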
## Model Name Formats
Use the exact model string as returned by the provider API.

- OpenAI: gpt-4o, gpt-4o-mini-2024-07-18
- Anthropic: claude-3-5-sonnet-20241022, claude-3-5-sonnet-20241022-v2:0
- Gemini: models/gemini-1.5-pro, models/gemini-1.5-flash
## Custom Providers and Explicit Costs
If Laminar can’t look up pricing for your provider/model, you can still attach explicit cost attributes (see the example above); when present, they take precedence over calculated costs.

## How Laminar Calculates Cost
Laminar computes cost from:

- Provider name
- Model name
- Token counts (input/output)
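For illustration, the computation reduces to a per-token linear formula. A sketch, assuming per-million-token prices for gpt-4o of $2.50 input and $10.00 output (illustrative figures; Laminar's own pricing table is authoritative):

```typescript
// Illustrative cost math -- the prices are assumptions, not Laminar's
// actual pricing data.
const inputPricePerMillion = 2.5;   // USD per 1M input tokens (assumed)
const outputPricePerMillion = 10.0; // USD per 1M output tokens (assumed)

const inputTokens = 1_200;
const outputTokens = 350;

const cost =
  (inputTokens / 1_000_000) * inputPricePerMillion +
  (outputTokens / 1_000_000) * outputPricePerMillion;
// 1,200 * $2.50/1M + 350 * $10.00/1M = $0.003 + $0.0035 = $0.0065
console.log(cost.toFixed(4)); // "0.0065"
```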
## Viewing Costs
Costs show up in the Laminar UI on:

- Trace details (sum of all LLM calls in the trace)
- Individual LLM spans (per-call cost)
- Analytics dashboards (aggregated by provider/model)
## Pricing Data
Laminar maintains current pricing for supported providers. A snapshot of pricing seed data is available in the open-source repo at `frontend/lib/db/initial-data.json` (table `llm_prices`).
## Best Practices
- Always set provider + response model if you want cost calculation.
- Use exact model names from the API response (don’t “simplify” them).
- Handle missing usage data gracefully (set token counts only if present; see the sketch below).
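A minimal sketch of that last point, assuming Laminar spans are OpenTelemetry `Span` objects and the same `LaminarAttributes` keys as in the example above:

```typescript
import { LaminarAttributes } from '@lmnr-ai/lmnr';
import type { Span } from '@opentelemetry/api';

// Providers sometimes omit usage (e.g., some streaming responses).
// Guard before writing token counts rather than recording zeros.
function setTokenCounts(
  span: Span,
  usage?: { prompt_tokens: number; completion_tokens: number },
): void {
  if (!usage) return; // no usage data: skip the attributes entirely
  span.setAttributes({
    [LaminarAttributes.INPUT_TOKEN_COUNT]: usage.prompt_tokens,
    [LaminarAttributes.OUTPUT_TOKEN_COUNT]: usage.completion_tokens,
  });
}
```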
See also: `LaminarAttributes` and `span.setAttributes`.