Debugger lets you rerun long-running agents from the exact point you care about, without leaving Laminar.

Why the debugger

When you are debugging long-running agents, three things slow you down:
  • You constantly context-switch between windows to edit code, run it, and inspect execution.
  • You wait a long time for the agent to reach the event you care about.
  • The agent can take a different path and never hit that event at all.
Debugger solves this by letting you set a checkpoint on a span, tweak configuration (like the system prompt) in the UI, rerun from there, and inspect the new trace in the same page. Once the CLI is connected, you never have to leave the page.

Setup

1) Build a looped agent entrypoint

Debugger sessions work best when your entrypoint makes repeated LLM/tool calls, so Laminar can cache earlier steps and rerun from a later point.
  • For AI SDK, this is often a single generateText / streamText call with stopWhen: stepCountIs(N).
  • For browser-use, use a browser-use Agent loop as your entrypoint. For a full walkthrough, see the Browser Use debugger guide.
  • For general Python, write your own loop that calls the LLM and tools repeatedly.
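A hand-rolled loop of this shape can be sketched as follows. This is a minimal illustration, not Laminar's API: llmCall and runTool are hypothetical stubs standing in for your real model client and tools.

```typescript
type Message = { role: 'user' | 'assistant' | 'tool'; content: string };

// Hypothetical stub: a real implementation would call your model provider.
async function llmCall(messages: Message[]): Promise<{ text: string; toolName?: string }> {
  // Pretend the model requests one tool call, then finishes.
  const usedTool = messages.some((m) => m.role === 'tool');
  return usedTool ? { text: 'done' } : { text: '', toolName: 'search' };
}

// Hypothetical stub: a real implementation would execute the named tool.
async function runTool(name: string): Promise<string> {
  return `result of ${name}`;
}

// The loop itself: repeated LLM + tool calls until the model stops
// requesting tools or the step budget runs out. Each iteration is a
// step Laminar can cache and rerun from.
async function agentLoop(task: string, maxSteps = 6): Promise<string> {
  const messages: Message[] = [{ role: 'user', content: task }];
  for (let step = 0; step < maxSteps; step++) {
    const reply = await llmCall(messages);
    if (!reply.toolName) return reply.text; // model is finished
    const toolResult = await runTool(reply.toolName);
    messages.push({ role: 'tool', content: toolResult });
  }
  return 'step limit reached';
}
```

An entrypoint structured like agentLoop is what you would wrap with observe in the next step; each LLM/tool iteration becomes a span that a checkpoint can rerun from.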

2) Mark the entrypoint for debugger sessions

Wrap your entrypoint with observe and mark it as a debugger entrypoint.
import { observe } from '@lmnr-ai/lmnr';

export const callAgent = observe(
  { name: 'callAgent', rolloutEntrypoint: true },
  async (messages, model, modelId, reasoningEffort) => {
    // Your agent code here.
  }
);
Export the observed function from the entry file so the CLI can discover it.

3) AI SDK setup (if you use Vercel AI SDK)

Install the latest @lmnr-ai/lmnr in the same workspace where you run debugger sessions, and configure general Laminar observability first (see the AI SDK integration). Then wrap the model you pass to generateText / streamText so debugger sessions can cache and rerun steps.
import { observe, Laminar, wrapLanguageModel } from '@lmnr-ai/lmnr';
import { generateText, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';

Laminar.initialize({ projectApiKey: process.env.LMNR_PROJECT_API_KEY });

export const callAgent = observe({ rolloutEntrypoint: true }, async () => {
  const result = await generateText({
    model: wrapLanguageModel(openai('gpt-4.1-nano')),
    prompt: 'Find pricing tiers on example.com.',
    stopWhen: stepCountIs(6),
  });
  return result.text;
});
If you pass a provider/model string instead of a model object:
import { wrapLanguageModel } from '@lmnr-ai/lmnr';
import { gateway } from 'ai';

const model = wrapLanguageModel(gateway('openai/gpt-4.1-nano'));

4) Start a debugger session

Run the CLI from the root of your project.
npx lmnr-cli@latest dev path/to/entrypoint.ts

5) Open the session

Open the session from Debugger in your project, or click the link printed by the CLI.
If the file exposes multiple debugger entrypoints, pass --function to select one. Set LMNR_PROJECT_API_KEY or use --project-api-key to connect.
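Combining those options, a full invocation might look like this (callAgent is the entrypoint name from step 2; the key value is illustrative):

```shell
npx lmnr-cli@latest dev path/to/entrypoint.ts \
  --function callAgent \
  --project-api-key "$LMNR_PROJECT_API_KEY"
```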

Use in Laminar

  1. Open Debugger and select your session.
  2. In the Run tab, enter Input Arguments as JSON: either an array of positional arguments or an object keyed by parameter name.
  3. Click Run to generate the first trace.
  4. In Reader or Tree view, click the checkpoint icon on a span (Run from here) to reuse earlier outputs.
  5. Optionally override System Prompts in the Run tab.
  6. Click Run again to get the updated, faster trace.
  7. Use Tree or Reader view, search, and the condensed timeline to navigate.
  8. Click Stop to cancel a run.
Keep the CLI running while you iterate. Reruns will use the latest code on disk; restart the CLI if you change dependencies or entrypoint discovery.
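For the callAgent(messages, model, modelId, reasoningEffort) entrypoint from the setup section, the object form of Input Arguments could look like this (all values are illustrative):

```
{
  "messages": [{ "role": "user", "content": "Find pricing tiers on example.com." }],
  "model": "openai",
  "modelId": "gpt-4.1-nano",
  "reasoningEffort": "low"
}
```

The equivalent array form lists the same values in parameter order, e.g. [[{...}], "openai", "gpt-4.1-nano", "low"].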