Quick Start
Choose your logging setup based on your environment.

Operational Logging
Real-time event logging during automation execution.

Verbosity Level
Control how much detail you see in logs:

- Level 2: Debug
- Level 1: Info (Default)
- Level 0: Errors Only
Use for: Development, debugging specific issues
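To pick a level, set the `verbose` option on the Stagehand constructor. A minimal sketch, assuming a local environment:

```typescript
import { Stagehand } from "@browserbasehq/stagehand";

// verbose: 0 = errors only, 1 = info (default), 2 = debug
const stagehand = new Stagehand({
  env: "LOCAL",
  verbose: 2,
});
```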
Example Output
Log Destinations
Logs can be sent to different destinations, including your console and external observability platforms:

- Pino (Default)
- Console Fallback
- Custom Logger
- External Logger (Production)
Fast, structured, colorized JSON logger with console output.

When to use: Development, staging, or production without external observability; can manage multiple Stagehand instances.
Auto-disabled when:

- `process.env.NODE_ENV === "test"`
- `process.env.JEST_WORKER_ID !== undefined` (Jest tests)
- `process.env.PLAYWRIGHT_TEST_BASE_DIR !== undefined` (Playwright tests)
- `process.env.CI === "true"` (CI/CD environments)
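For the Custom Logger destination, a minimal sketch that routes entries through the `logger` constructor option; the handler shape shown assumes Stagehand's exported `LogLine` type:

```typescript
import { Stagehand, type LogLine } from "@browserbasehq/stagehand";

// Forward every Stagehand log entry to your own handler,
// e.g. an external observability platform instead of the console.
const stagehand = new Stagehand({
  env: "LOCAL",
  logger: (line: LogLine) => {
    console.log(`[${line.category}] ${line.message}`);
  },
});
```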
LLM Inference Debugging
Development only - Creates large files and contains page content. Do not use in production.
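A minimal sketch of turning it on via the `logInferenceToFile` option (see the configuration table below):

```typescript
import { Stagehand } from "@browserbasehq/stagehand";

// Development only: writes full LLM request/response dumps to disk.
const stagehand = new Stagehand({
  env: "LOCAL",
  logInferenceToFile: true,
});
```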
Call File
Contains the complete LLM request.
Response File
Contains the LLM output.
Summary File
Aggregates all calls with metrics.
Reference
Logging Configuration
All logging options are passed to the Stagehand constructor:

| Option | Default | Description |
|---|---|---|
| verbose | 1 | Log level: 0 = errors only, 1 = info, 2 = debug |
| logger | undefined | Custom logger function for external platforms |
| disablePino | false | Disable Pino (automatically true in tests) |
| logInferenceToFile | false | Save LLM requests to disk |
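Putting the table together, a sketch of a constructor call that sets every option explicitly; the values shown are the documented defaults except `verbose`:

```typescript
import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({
  env: "LOCAL",
  verbose: 2,                 // 0 = errors only, 1 = info (default), 2 = debug
  logger: undefined,          // supply a function to route logs yourself
  disablePino: false,         // auto-set to true in test environments
  logInferenceToFile: false,  // enable only during development
});
```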
Log Structure
Each log entry follows a structured format:
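As an illustrative sketch, the entry shape below mirrors Stagehand's LogLine type; treat the exact fields as an assumption and verify against the type exported by the package:

```typescript
// Assumed shape of a Stagehand log entry; verify against the
// LogLine type exported by @browserbasehq/stagehand.
type LogLine = {
  category?: string;  // subsystem that emitted the entry, e.g. "action"
  message: string;    // human-readable description of the event
  level?: 0 | 1 | 2;  // matches the verbosity levels above
  timestamp?: string; // when the entry was emitted
  auxiliary?: Record<string, { value: string; type: string }>; // extra structured data
};
```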
Log Examples
- Successful Action
- LLM Inference
- Error
Next Steps
Now that you have logging configured, explore additional debugging and monitoring tools in the Observability guide:

History API
Track all LLM operations (act, extract, observe, agent) with parameters, results, and timestamps. Perfect for debugging sequences and replaying workflows.
Metrics API
Monitor token usage and performance in real-time. Track costs per operation, identify expensive calls, and optimize resource usage.
LLM Inference Debugging
Save complete LLM request/response dumps to disk. See exactly what DOM was sent to the LLM and why it made specific decisions.
Browserbase Session Monitoring
Watch your automation visually with session recordings, network monitoring, and real-time browser inspection (Browserbase only).

