Stagehand provides comprehensive logging capabilities to help you debug automation workflows, track execution, and diagnose issues. Configure logging levels, structured output, and debugging tools for both development and production environments.

Quick Start

Choose your logging setup based on your environment:
import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({
  env: "LOCAL",
  verbose: 2,  // Full debug output
  // restOfYourConfiguration...
});

Operational Logging

Real-time event logging during automation execution.

Verbosity Level

Control how much detail you see in logs:
  • Level 0: Errors only
  • Level 1: Info (default)
  • Level 2: Debug. Use for development and debugging specific issues.
const stagehand = new Stagehand({
  verbose: 2,  // Maximum detail
  // restOfYourConfiguration...
});
[12:34:56] DEBUG: Capturing DOM snapshot
[12:34:57] DEBUG: DOM contains 847 elements
[12:34:58] DEBUG: LLM inference started
[12:34:59] DEBUG: LLM response: {"selector": "#btn-submit", "method": "click"}
[12:35:00] INFO: act completed successfully

Log Destinations

Logs can be sent to different destinations, including your console and external observability platforms:
  • Pino (Default)
  • Console Fallback
  • Custom Logger
  • External Logger (Production)
Pino (the default) is a fast, structured, colorized JSON logger with console output. When to use: development, staging, or production without an external observability platform; it can manage multiple Stagehand instances.
// Enabled by default - Pino handles console output automatically
const stagehand = new Stagehand({
  verbose: 1,
  // restOfYourConfiguration...
});
Pino is automatically disabled when any of the following test or CI environments is detected:
  • process.env.NODE_ENV === "test"
  • process.env.JEST_WORKER_ID !== undefined (Jest tests)
  • process.env.PLAYWRIGHT_TEST_BASE_DIR !== undefined (Playwright tests)
  • process.env.CI === "true" (CI/CD environments)
Why auto-disable? Pino uses worker threads for pretty-printing, which can cause issues in test runners.
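The detection above can be sketched as a simple environment check (an illustrative approximation, not Stagehand's internal code):

```typescript
// Illustrative approximation of the auto-disable checks listed above
function isTestEnvironment(env: Record<string, string | undefined>): boolean {
  return (
    env.NODE_ENV === "test" ||
    env.JEST_WORKER_ID !== undefined ||           // Jest tests
    env.PLAYWRIGHT_TEST_BASE_DIR !== undefined || // Playwright tests
    env.CI === "true"                             // CI/CD environments
  );
}
```

If any check matches, logging falls back to plain console output instead of Pino.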

LLM Inference Debugging

Development only: this creates large files that contain page content. Do not use in production.
Save complete LLM request/response dumps to disk for offline analysis, so you can see exactly which DOM was sent to the LLM and why it chose a particular element.
const stagehand = new Stagehand({
  env: "LOCAL",
  verbose: 2,
  logInferenceToFile: true,  // Writes files to ./inference_summary/
});
Creates timestamped files for each LLM call:
./inference_summary/
├── act_summary/
│   ├── act_summary.json                      # Aggregate metrics
│   ├── 20250127_123456_act_call.txt          # LLM request
│   ├── 20250127_123456_act_response.txt      # LLM response
│   ├── 20250127_123501_act_call.txt
│   └── 20250127_123501_act_response.txt
├── extract_summary/
│   ├── extract_summary.json
│   ├── 20250127_123510_extract_call.txt
│   ├── 20250127_123510_extract_response.txt
│   ├── 20250127_123511_metadata_call.txt
│   └── 20250127_123511_metadata_response.txt
└── observe_summary/
    ├── observe_summary.json
    └── ...
File Types:
The *_call.txt files contain the complete LLM request:
{
  "modelCall": "act",
  "messages": [
    {
      "role": "system",
      "content": "You are a browser automation assistant. You have access to these actions:\n- click\n- type\n- scroll\n..."
    },
    {
      "role": "user",
      "content": "Click the sign in button\n\nDOM:\n<html>\n  <body>\n    <button id=\"btn-1\">Sign In</button>\n    <button id=\"btn-2\">Sign Up</button>\n  </body>\n</html>"
    }
  ]
}
The *_response.txt files contain the LLM output:
{
  "modelResponse": "act",
  "rawResponse": {
    "selector": "#btn-1",
    "method": "click",
    "reasoning": "Found sign in button with ID btn-1"
  }
}
The *_summary.json file aggregates all calls with metrics:
{
  "act_summary": [
    {
      "act_inference_type": "act",
      "timestamp": "20250127_123456",
      "LLM_input_file": "20250127_123456_act_call.txt",
      "LLM_output_file": "20250127_123456_act_response.txt",
      "prompt_tokens": 3451,
      "completion_tokens": 45,
      "inference_time_ms": 951
    },
    {
      "act_inference_type": "act",
      "timestamp": "20250127_123501",
      "LLM_input_file": "20250127_123501_act_call.txt",
      "LLM_output_file": "20250127_123501_act_response.txt",
      "prompt_tokens": 2890,
      "completion_tokens": 38,
      "inference_time_ms": 823
    }
  ]
}
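Because the summary file is plain JSON, it is easy to post-process. A small sketch that totals token usage and averages inference time, using the field names from the example above:

```typescript
// Field names match the act_summary.json entries shown above
interface ActSummaryEntry {
  prompt_tokens: number;
  completion_tokens: number;
  inference_time_ms: number;
}

// Aggregate token usage and latency from a parsed act_summary.json payload
function summarize(entries: ActSummaryEntry[]) {
  const totalTokens = entries.reduce(
    (sum, e) => sum + e.prompt_tokens + e.completion_tokens,
    0
  );
  const avgLatencyMs = entries.length
    ? entries.reduce((sum, e) => sum + e.inference_time_ms, 0) / entries.length
    : 0;
  return { totalTokens, avgLatencyMs };
}
```

Running it over the two entries in the example yields 6424 total tokens and an average latency of 887 ms.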

Reference

Logging Configuration

All logging options are passed to the Stagehand constructor:
const stagehand = new Stagehand({
  // ... your other configurations (env, model, etc.)

  // Logging options:
  verbose?: 0 | 1 | 2;                   // Log level (default: 1)
  logger?: (line: LogLine) => void;      // External logger function
  disablePino?: boolean;                 // Disable Pino backend (default: false)
  logInferenceToFile?: boolean;          // Save LLM requests to disk (default: false)
});
Options:
  • verbose (default: 1): Log level; 0 = errors only, 1 = info, 2 = debug
  • logger (default: undefined): Custom logger function for external platforms
  • disablePino (default: false): Disable the Pino backend (automatically true in test environments)
  • logInferenceToFile (default: false): Save LLM requests and responses to disk
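The `logger` option accepts any function that takes a log line. As a sketch, the following hypothetical helper filters by level and forwards the rest to any sink (console, a file, or an HTTP client for an external platform); the full log-line structure is documented in the next section:

```typescript
// Subset of the Stagehand LogLine structure, enough for filtering
interface LogLine {
  message: string;
  level?: 0 | 1 | 2; // 0 = error, 1 = info, 2 = debug
  category?: string;
}

// Hypothetical helper: builds a logger that drops lines noisier than
// maxLevel and forwards the rest to a caller-supplied sink
function makeFilteredLogger(
  maxLevel: 0 | 1 | 2,
  sink: (formatted: string) => void
): (line: LogLine) => void {
  return (line) => {
    if ((line.level ?? 1) > maxLevel) return; // skip lines above the threshold
    sink(`[${line.category ?? "general"}] ${line.message}`);
  };
}

// Usage: pass the result as the `logger` option, e.g.
// const stagehand = new Stagehand({ logger: makeFilteredLogger(1, console.log) });
```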

Log Structure

Each log entry follows a structured format:
interface LogLine {
  message: string;              // "act completed successfully"
  level?: 0 | 1 | 2;            // error | info | debug
  category?: string;            // "action", "llm", "browser", "cache"
  timestamp?: string;           // ISO 8601 timestamp
  auxiliary?: {                 // Additional structured metadata
    [key: string]: {
      value: string;             // Serialized value
      type: "object" | "string" | "integer" | "float" | "boolean";
    };
  };
}
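As an illustration of consuming this structure, a small formatter (a hypothetical helper, not part of Stagehand's API) renders a log line as one human-readable string:

```typescript
// Mirrors the LogLine interface shown above
interface LogLine {
  message: string;
  level?: 0 | 1 | 2;
  category?: string;
  timestamp?: string;
  auxiliary?: Record<string, { value: string; type: string }>;
}

const LEVEL_NAMES = ["ERROR", "INFO", "DEBUG"] as const;

// Render a LogLine as a single line, appending auxiliary key=value pairs
function formatLogLine(line: LogLine): string {
  const level = LEVEL_NAMES[line.level ?? 1];
  const aux = line.auxiliary
    ? " " +
      Object.entries(line.auxiliary)
        .map(([key, v]) => `${key}=${v.value}`)
        .join(" ")
    : "";
  return `${level} [${line.category ?? "general"}] ${line.message}${aux}`;
}
```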
Example entries fall into three categories: successful action, LLM inference, and error. A successful action entry looks like this:
{
  "category": "action",
  "message": "act completed successfully",
  "level": 1,
  "timestamp": "2025-01-27T12:35:00.123Z",
  "auxiliary": {
    "selector": {
      "value": "#btn-submit",
      "type": "string"
    },
    "executionTime": {
      "value": "1250",
      "type": "integer"
    }
  }
}
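If you construct log lines yourself (for example inside a custom `logger`), the auxiliary map can be derived from plain values. A hedged sketch (hypothetical helper), assuming simple scalar and object values:

```typescript
// Hypothetical helper: serialize arbitrary values into the auxiliary
// format shown above ({ value: string, type: ... })
function toAuxiliary(values: Record<string, unknown>) {
  const result: Record<string, { value: string; type: string }> = {};
  for (const [key, raw] of Object.entries(values)) {
    let type: string;
    if (typeof raw === "boolean") type = "boolean";
    else if (typeof raw === "number")
      type = Number.isInteger(raw) ? "integer" : "float";
    else if (typeof raw === "string") type = "string";
    else type = "object";
    result[key] = {
      value: type === "object" ? JSON.stringify(raw) : String(raw),
      type,
    };
  }
  return result;
}
```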

Next Steps

Now that logging is configured, explore additional debugging and monitoring tools in the Observability guide.