Overview
Laminar automatically instruments the official Google Gemini package with a single line of code, letting you trace and monitor all of your Gemini API calls without modifying your existing code. This gives you full visibility into your AI application’s performance, cost, and behavior.
Getting Started
1. Install Laminar and Google Gemini
You may remove the [all] extra; there is no Gemini-specific extra.
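A typical installation looks like the following (package names are assumptions: lmnr for Laminar and google-genai for the official Gemini SDK — check the package you actually use):

```shell
# Install Laminar (with all instrumentation extras) and the Gemini SDK
pip install 'lmnr[all]' google-genai
```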
Gemini instrumentation currently ships with the default lmnr package.
2. Set up your environment variables
Store your API keys in a .env file:
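For example, a .env file might look like this (the variable names are assumptions — confirm the Laminar project key name in your project settings and the key name your Gemini SDK reads):

```shell
# Laminar project API key (from your Laminar project settings)
LMNR_PROJECT_API_KEY=your-laminar-project-api-key
# Gemini API key (from Google AI Studio)
GEMINI_API_KEY=your-gemini-api-key
```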
To see an example of how to integrate Laminar within a FastAPI application, check out our FastAPI integration guide.
3. Initialize Laminar
Just add a single line at the start of your application or file to instrument Gemini with Laminar.
4. Use Gemini as usual
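A minimal sketch of steps 3 and 4 together, assuming the google-genai client, the environment variables named above, and a placeholder model name (gemini-2.0-flash):

```python
import os

from lmnr import Laminar
from google import genai

# Step 3: initialize Laminar once at startup; this single line
# instruments the Gemini client so subsequent calls are traced.
Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])

# Step 4: use Gemini exactly as before — no other code changes needed.
client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
response = client.models.generate_content(
    model="gemini-2.0-flash",  # assumed model name; use the one you need
    contents="Say hello in one word.",
)
print(response.text)
```

Each generate_content call then appears as an LLM span in your Laminar traces, with request, response, and token counts attached.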
Monitoring Your Gemini Usage
After instrumenting your Gemini calls with Laminar, you’ll be able to:
- View detailed traces of each Gemini API call, including request and response
- Track token usage and cost across different models
- Monitor latency and performance metrics
- Open an LLM span in the Playground for prompt engineering
- Debug issues with failed API calls or unexpected model outputs
Advanced Features
- Enrich traces with sessions, user IDs, metadata, and tags via the SDK reference.
- Wrap custom functions with observe to capture business logic alongside model calls (see the SDK reference).
