- Trace your own functions — Your agent has logic beyond LLM calls. Wrap functions with `@observe` to see them in traces.
- Add context — Attach user IDs, session IDs, metadata, and tags so you can filter and debug effectively.
- Control what’s captured — Skip recording inputs/outputs for sensitive operations.
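The three ideas above can be sketched in plain Python. This is a hypothetical, minimal `@observe`-style decorator (not the real SDK): it records each wrapped call as a span, and a `capture_io` flag lets sensitive functions skip recording their inputs/outputs.

```python
import functools

TRACE: list[dict] = []  # collected spans, in call-completion order


def observe(capture_io: bool = True):
    """Record each call of the wrapped function as a span in TRACE."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {"name": fn.__name__}
            if capture_io:
                span["input"] = {"args": args, "kwargs": kwargs}
            result = fn(*args, **kwargs)
            if capture_io:
                span["output"] = result
            TRACE.append(span)
            return result
        return wrapper
    return decorator


@observe()
def route(query: str) -> str:
    return "tools" if "weather" in query else "llm"


@observe(capture_io=False)  # sensitive: inputs/outputs are not recorded
def lookup_credentials(user: str) -> str:
    return "sk-secret"


route("what is the weather?")
lookup_credentials("alice")
print([(s["name"], "input" in s) for s in TRACE])
# → [('route', True), ('lookup_credentials', False)]
```

A real SDK decorator additionally nests spans under the current trace and exports them; the capture toggle is the part that matters for sensitive operations.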
## Why Structure Matters
Without structure, each LLM call becomes an isolated trace, making it hard to:

- Understand relationships between calls (routing → tool use → final answer)
- Follow multi-step workflows end-to-end
- Track conversations across turns
- Find the right trace quickly (by user/session/metadata)
## Quickstart
Start by creating a parent span for your request/turn, then set the user ID, session ID, and metadata inside that span so everything downstream inherits it.
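A minimal sketch of the pattern in plain Python (hypothetical `start_span` API, not a specific SDK), using `contextvars` so that spans opened inside the parent automatically inherit its user ID, session ID, and metadata:

```python
import contextlib
import contextvars

_current = contextvars.ContextVar("current_span", default=None)


@contextlib.contextmanager
def start_span(name, *, user_id=None, session_id=None, metadata=None):
    """Open a span; unset fields are inherited from the enclosing span."""
    parent = _current.get() or {}
    span = {
        "name": name,
        "user_id": user_id or parent.get("user_id"),
        "session_id": session_id or parent.get("session_id"),
        "metadata": {**(parent.get("metadata") or {}), **(metadata or {})},
        "children": [],
    }
    if parent:
        parent["children"].append(span)
    token = _current.set(span)
    try:
        yield span
    finally:
        _current.reset(token)


# One parent span per request/turn; nested spans inherit its context.
with start_span("handle-turn", user_id="user-123",
                session_id="sess-42", metadata={"plan": "pro"}) as root:
    with start_span("llm-call") as child:
        pass

print(child["user_id"], child["session_id"], child["metadata"])
# → user-123 sess-42 {'plan': 'pro'}
```

Because inheritance happens at span creation, everything downstream of the parent (routing, tool calls, the final LLM call) carries the same user, session, and metadata without re-passing them at every call site.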
## Quick Reference
| I want to… | Use |
|---|---|
| Trace a function I wrote | Trace Functions |
| Trace code that isn’t a function | Trace Parts of Your Code |
| Group traces by conversation/workflow | Sessions |
| Associate traces with users | User ID |
| Add key-value context to traces | Metadata |
| Add categorical labels to spans | Tags |
| Trace images sent to LLMs | Tracing Images |
| Continue traces across services | Continuing Traces |
| Track costs for custom LLM spans | LLM Cost Tracking |
| Ensure spans send in serverless/CLI | Flushing & Shutting Down |
