Telemetry and Evaluations
How to view LLM usage and run evals on your Stagehand workflows.
View LLM usage and token counts
Token usage telemetry is only available in Stagehand 2.0.
You can view your token usage at any point with `stagehand.metrics`. This will return an object with the following shape:
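The exact field names can vary slightly between Stagehand versions; an illustrative example:

```typescript
// Example output of `stagehand.metrics` — numbers are illustrative.
console.log(stagehand.metrics);
// {
//   actPromptTokens: 4011,
//   actCompletionTokens: 51,
//   actInferenceTimeMs: 1688,
//   extractPromptTokens: 4200,
//   extractCompletionTokens: 243,
//   extractInferenceTimeMs: 4297,
//   observePromptTokens: 347,
//   observeCompletionTokens: 43,
//   observeInferenceTimeMs: 903,
//   totalPromptTokens: 8558,
//   totalCompletionTokens: 337,
//   totalInferenceTimeMs: 6888
// }
```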
View granular LLM usage
You can set `logInferenceToFile: true` in the Stagehand constructor. This will dump all `act`, `extract`, and `observe` calls to a directory called `inference_summary`.
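For example, a minimal sketch (the other constructor options shown are the usual ones and not specific to inference logging):

```typescript
import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({
  env: "LOCAL", // or "BROWSERBASE"
  logInferenceToFile: true, // write act/extract/observe inference to inference_summary/
});
await stagehand.init();
```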
`inference_summary` will have the following structure:
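A typical run produces a layout along these lines (the per-call file names are timestamped, so exact names will differ):

```
inference_summary/
├── act_summary/
│   ├── {timestamp}.json
│   └── act_summary.json
├── extract_summary/
│   ├── {timestamp}.json
│   └── extract_summary.json
└── observe_summary/
    ├── {timestamp}.json
    └── observe_summary.json
```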
Each of these files will have the following shape:
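Roughly like the sketch below for an `act` summary (field names and values are illustrative; the `extract` and `observe` summaries are analogous):

```json
{
  "act_summary": [
    {
      "act_inference_type": "act",
      "timestamp": "20250329_080446068",
      "LLM_input_file": "20250329_080446068_act_call.txt",
      "LLM_output_file": "20250329_080447019_act_response.txt",
      "prompt_tokens": 3451,
      "completion_tokens": 45,
      "inference_time_ms": 951
    }
  ]
}
```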
Run Evaluations (Evals)
Stagehand evaluations are how we, the Stagehand team, test the validity of Stagehand itself.
To run evals, you’ll need to clone the Stagehand repo and run `npm install` to install the dependencies.
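Assuming the public repo (browserbase/stagehand on GitHub):

```bash
git clone https://github.com/browserbase/stagehand.git
cd stagehand
npm install
```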
We have two types of evals:
- Deterministic Evals - These are evals that are deterministic and can be run without any LLM inference.
- LLM-based Evals - These are evals that test the underlying functionality of Stagehand’s AI primitives.
Deterministic Evals
To run deterministic evals, you can just run `npm run e2e` from within the Stagehand repo. This will test the functionality of Playwright within Stagehand to make sure it’s working as expected.
These tests are in `evals/deterministic` and test on both Browserbase browsers and local headless Chromium browsers.
LLM-based Evals
To run LLM evals, you’ll need a Braintrust account.
To run LLM-based evals, you can run `npm run evals` from within the Stagehand repo. This will test the functionality of the LLM primitives within Stagehand to make sure they’re working as expected.
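Braintrust and the model providers you evaluate against are configured through environment variables. Assuming the standard variable names for each service, a minimal setup might look like:

```bash
# Required for reporting eval results to Braintrust
BRAINTRUST_API_KEY="..."

# Keys for whichever model providers your task config uses
OPENAI_API_KEY="..."
ANTHROPIC_API_KEY="..."

# Only needed if you run evals against Browserbase browsers
BROWSERBASE_API_KEY="..."
BROWSERBASE_PROJECT_ID="..."
```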
Evals are grouped into four categories:
- Act Evals - These test the functionality of the `act` method.
- Extract Evals - These test the functionality of the `extract` method.
- Observe Evals - These test the functionality of the `observe` method.
- Combination Evals - These test the `act`, `extract`, and `observe` methods together.
Configuring and Running Evals
You can view the specific evals in `evals/tasks`. Each eval is grouped into categories based on `evals/evals.config.json`. You can specify models to run and other general task config in `evals/taskConfig.ts`.
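As a rough sketch of how tasks map to categories in `evals/evals.config.json` (the task names and exact schema here are illustrative; check the file in the repo for the real entries):

```json
{
  "tasks": [
    { "name": "my_extract_eval", "categories": ["extract"] },
    { "name": "my_checkout_flow_eval", "categories": ["act", "combination"] }
  ]
}
```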
To run a specific eval, you can run `npm run evals <eval>`, or run all evals in a category with `npm run evals category <category>`.
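For example (the eval name below is a placeholder for any task file in `evals/tasks`):

```bash
# Run a single eval by name
npm run evals my_extract_eval

# Run every eval in the act category
npm run evals category act
```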
Viewing eval results
Eval results are viewable on Braintrust. You can view the results of a specific eval by going to the Braintrust URL specified in the terminal when you run `npm run evals`.
By default, each eval will run five times per model. The “Exact Match” column shows the percentage of times the eval was correct. The “Error Rate” column shows the percentage of times the eval errored out.
You can use the Braintrust UI to filter by model/eval and aggregate results across all evals.
Adding new evals
To add a new eval, you can create a new file in `evals/tasks` and add it to the appropriate category in `evals/evals.config.json`.
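The existing files in `evals/tasks` are the best reference for the expected export and return shape; as a hypothetical sketch only (copy the exact `EvalFunction` signature and imports from an existing task):

```typescript
// evals/tasks/my_extract_eval.ts — hypothetical; mirror an existing task for the real types.
import { z } from "zod";
import { EvalFunction } from "../../types/evals";

export const my_extract_eval: EvalFunction = async ({ stagehand, logger }) => {
  await stagehand.page.goto("https://example.com");

  // Exercise whichever Stagehand primitive(s) this eval is meant to cover.
  const { headline } = await stagehand.page.extract({
    instruction: "extract the main headline on the page",
    schema: z.object({ headline: z.string() }),
  });

  return {
    _success: headline.length > 0,
    logs: logger.getLogs(),
  };
};
```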