Model Gateway
Model Gateway lets you use Stagehand without wiring up model providers yourself. Instead of managing separate API keys for each provider, just pass your Browserbase API key and pick a model. Browserbase routes the request to the upstream provider for you.

Setup
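A minimal sketch of the setup, assuming the constructor accepts the model in provider/model format alongside your Browserbase credentials (option names may differ in the current SDK):

```typescript
// Sketch: a Model Gateway config needs only Browserbase credentials;
// no provider accounts or provider API keys.
const gatewayConfig = {
  env: "BROWSERBASE" as const,
  apiKey: process.env.BROWSERBASE_API_KEY,    // your Browserbase API key
  projectId: process.env.BROWSERBASE_PROJECT_ID,
  model: "openai/gpt-5",                      // provider/model format
};

// Passed to the constructor, e.g.:
//   const stagehand = new Stagehand(gatewayConfig);
//   await stagehand.init();
```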
Model Gateway requires Browserbase-hosted browsers. It does not work with env: "LOCAL".

Switching models
With Model Gateway, switching between providers is just a config change — no new accounts, API keys, or code rewiring required:

- OpenAI
- Anthropic
- Google
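The options above differ only in the model string. A sketch of that, with an illustrative helper function:

```typescript
// Sketch: switching providers through Model Gateway is just a
// different model string; the Browserbase API key stays the same.
const models = {
  openai: "openai/gpt-5",
  anthropic: "anthropic/claude-sonnet-4-6",
  google: "google/gemini-2.5-flash",
};

// Illustrative helper: build the config for a given provider.
function configFor(provider: keyof typeof models) {
  return {
    env: "BROWSERBASE" as const,
    apiKey: process.env.BROWSERBASE_API_KEY,
    model: models[provider],
  };
}

console.log(configFor("anthropic").model); // → anthropic/claude-sonnet-4-6
```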
Key benefits
- One key, one bill — LLM inference, browser infrastructure, and caching all run through your Browserbase API key.
- Market-price tokens — We charge the same price as going direct to the provider. No markup.
- Built-in reliability — Retries, backoff, and rate limit handling are managed for you.
- No tier-gating — Access new models immediately without hitting provider spend thresholds.
- Action caching — Model Gateway works with Stagehand’s managed action caching, so repeated steps are reused instead of re-run.
Supported providers
| Provider | Example Model |
|---|---|
| OpenAI | openai/gpt-5 |
| Anthropic | anthropic/claude-sonnet-4-6 |
| Google | google/gemini-2.5-flash |
Configuration

Setup
Quick Start
Get started with Google Gemini (recommended for speed and cost):

First Class Models
Use any model from the following supported providers.

- Google
- Google Vertex
- Anthropic
- OpenAI
- Azure
- Cerebras
- DeepSeek
- Groq
- Mistral
- Ollama
- Perplexity
- TogetherAI
- xAI
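With first class models, you supply the provider's own API key through its environment variable (see the table under Troubleshooting). A minimal sketch, assuming the model is passed in provider/model format:

```typescript
// Sketch: a first class model uses the provider's own API key,
// read from that provider's environment variable.
const firstClassConfig = {
  model: "anthropic/claude-sonnet-4-6", // provider/model format
  // Stagehand reads ANTHROPIC_API_KEY from the environment;
  // each provider has its own variable (OPENAI_API_KEY, GROQ_API_KEY, ...).
};

const [provider] = firstClassConfig.model.split("/");
console.log(provider); // → anthropic
```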
Custom Models
Amazon Bedrock, Cohere, all first class models, and any model from the Vercel AI SDK are supported. Use this configuration for custom endpoints and custom retry or caching logic. We’ll use Amazon Bedrock and Google as examples below.

Amazon Bedrock
Google
All Providers
To implement a custom model, follow the steps for the provider you are using. See the Amazon Bedrock and Google examples above. All supported providers and models are in the Vercel AI SDK.
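As a sketch of the Amazon Bedrock case, assuming the AISdkClient wrapper and the Vercel AI SDK Bedrock provider package; the SDK calls are shown as comments because package names and model ids depend on your setup:

```typescript
// Sketch: a custom model is any Vercel AI SDK model wrapped in the
// AISdkClient and passed as llmClient, e.g.:
//
//   import { bedrock } from "@ai-sdk/amazon-bedrock";
//   const stagehand = new Stagehand({
//     llmClient: new AISdkClient({
//       model: bedrock("anthropic.claude-3-5-sonnet-20240620-v1:0"),
//     }),
//   });
//
// Bedrock model ids use the provider-prefixed form (check your
// Bedrock console for the exact id available in your region):
const bedrockModelId = "anthropic.claude-3-5-sonnet-20240620-v1:0";
```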
Choose a Model
Different models excel at different tasks. Consider speed, accuracy, and cost for your use case.

Model Selection Guide
Find detailed model comparisons and recommendations on our Model Evaluation page.
| Use Case | Recommended Model | Why |
|---|---|---|
| Production | google/gemini-2.5-flash | Fast, accurate, cost-effective |
| Intelligence | google/gemini-3-pro-preview | Best accuracy on hard tasks |
| Speed | google/gemini-2.5-flash | Fastest response times |
| Cost | google/gemini-2.5-flash | Best value per token |
| Local/offline | ollama/qwen3 | No API costs, full control |
Advanced Options
Agent Models (with CUA Support)
By default, the Stagehand agent uses the same model passed to Stagehand. All models (first class and custom) are supported. To give the agent a different model than the one passed to Stagehand, set the agent’s model parameter, which accepts any first class model, including computer use agent (CUA) models. Here’s an example with Gemini:
- Google CUA
- Anthropic CUA
- OpenAI CUA
- Example First Class Model
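A sketch of passing a CUA model to the agent; the call is shown as a comment because the exact constructor shape is in the Agent Reference:

```typescript
// Sketch: give the agent a CUA model via its model parameter, e.g.:
//
//   const agent = stagehand.agent({
//     model: "google/gemini-2.5-computer-use-preview-10-2025",
//   });
//   await agent.execute("find the cheapest flight from SFO to JFK");
//
// The model string comes from the supported CUA models table below.
const agentModel = "google/gemini-2.5-computer-use-preview-10-2025";
```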
All Supported CUA Models
| Provider | Model |
|---|---|
| Anthropic | anthropic/claude-haiku-4-5-20251001 |
| Anthropic | anthropic/claude-sonnet-4-6 |
| Anthropic | anthropic/claude-sonnet-4-5-20250929 |
| Anthropic | anthropic/claude-opus-4-5-20251101 |
| Anthropic | anthropic/claude-opus-4-6 |
| Google | google/gemini-2.5-computer-use-preview-10-2025 |
| Google | google/gemini-3-flash-preview |
| Google | google/gemini-3-pro-preview |
| Microsoft | microsoft/fara-7b |
| OpenAI | openai/computer-use-preview |
| OpenAI | openai/computer-use-preview-2025-03-11 |
For overriding the agent API key, using a corporate proxy, adding provider-specific options, or other advanced use cases, the agent model can also take the form of an object. To learn more, see the Agent Reference.
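A sketch of the object form; the field names here are assumptions, so check the Agent Reference for the exact shape:

```typescript
// Sketch: the agent model as an object (illustrative field names).
const agentModelConfig = {
  model: "anthropic/claude-sonnet-4-6",
  apiKey: process.env.ANTHROPIC_API_KEY, // override the agent API key
  // provider-specific options or a corporate proxy URL would also go here
};
```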
Custom Endpoints
Use a custom endpoint if you need Azure OpenAI deployments or other enterprise deployments.

- OpenAI
- Anthropic
- All Other Providers
For OpenAI, you can pass configuration directly through the model parameter without using llmClient:

AI Gateway
The Vercel AI Gateway lets you access models from multiple providers (OpenAI, Anthropic, Google, and more) through a single API key and interface. No extra provider SDKs or per-provider API keys needed. Key benefits:

- Access models from all major providers with a single AI_GATEWAY_API_KEY
- Automatic provider fallback and dynamic routing based on uptime and latency
- Usage tracking and observability through the Vercel dashboard
- Bring Your Own Key (BYOK) support for existing provider credentials
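A sketch of the model string format, which uses the gateway/ prefix described below (authentication is via the AI_GATEWAY_API_KEY environment variable):

```typescript
// Sketch: AI Gateway model strings take the form
// gateway/<provider>/<model>.
const gatewayModel = "gateway/anthropic/claude-sonnet-4-6";

const [prefix, provider, model] = gatewayModel.split("/");
console.log(prefix);   // → gateway
console.log(provider); // → anthropic
console.log(model);    // → claude-sonnet-4-6
```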
- Simple
- Custom Config
Use the gateway/ prefix followed by the provider and model name. Works with any model available on the gateway.

Extending the AI SDK Client
For advanced use cases like custom retries or caching logic, you can extend the AISdkClient:
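A sketch of the caching idea, using an illustrative class rather than the real AISdkClient interface: responses are cached by prompt, so repeated calls skip the provider round trip.

```typescript
// Sketch: an in-memory cache in front of the model call. Class and
// method names are illustrative, not the real AISdkClient interface.
class CachingClient {
  private cache = new Map<string, string>();

  async complete(
    prompt: string,
    call: (p: string) => Promise<string>,
  ): Promise<string> {
    const hit = this.cache.get(prompt);
    if (hit !== undefined) return hit; // reuse the cached response
    const result = await call(prompt);
    this.cache.set(prompt, result);
    return result;
  }
}
```

In a real extension, a subclass would override the AISdkClient's generation method with this pattern and delegate to the parent implementation on a cache miss.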
Legacy Model Format
The following models work without the provider/ prefix in the model parameter as part of legacy support:
Google
- gemini-2.5-flash-preview-04-17
- gemini-2.5-pro-preview-03-25
- gemini-2.0-flash
- gemini-2.0-flash-lite
- gemini-1.5-flash
- gemini-1.5-flash-8b
- gemini-1.5-pro
Anthropic

- claude-sonnet-4-6
- claude-sonnet-4-5-20250929
- claude-haiku-4-5-20251001
OpenAI

- gpt-4o
- gpt-4o-mini
- o1
- o1-mini
- o3
- o3-mini
- gpt-4.1
- gpt-4.1-mini
- gpt-4.1-nano
- o4-mini
- gpt-4.5-preview
- gpt-4o-2024-08-06
- o1-preview
Cerebras

- cerebras-llama-3.3-70b
- cerebras-llama-3.1-8b
Groq

- groq-llama-3.3-70b-versatile
- groq-llama-3.3-70b-specdec
- moonshotai/kimi-k2-instruct
Troubleshooting
Error: API key not found
Error: API key not found

Solutions:

- Check that your .env file has the correct variable name for the provider you are using
- Ensure environment variables are loaded (use dotenv)
- Restart your application after updating your .env file
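One way to surface this early is to validate the variable before constructing Stagehand; a sketch with an illustrative helper:

```typescript
// Sketch: fail fast with a clear message when a provider key is
// missing, instead of a late "API key not found" error downstream.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set: check your .env file`);
  }
  return value;
}

// e.g. requireEnv("OPENAI_API_KEY") before constructing Stagehand
```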
| Provider | Environment Variable |
|---|---|
| Model Gateway | BROWSERBASE_API_KEY (no provider key needed) |
| Google | GOOGLE_GENERATIVE_AI_API_KEY or GEMINI_API_KEY |
| Vertex | Service account credentials (see setup) |
| Anthropic | ANTHROPIC_API_KEY |
| OpenAI | OPENAI_API_KEY |
| Azure | AZURE_API_KEY |
| Cerebras | CEREBRAS_API_KEY |
| DeepSeek | DEEPSEEK_API_KEY |
| Groq | GROQ_API_KEY |
| Mistral | MISTRAL_API_KEY |
| Ollama | None (local) |
| Perplexity | PERPLEXITY_API_KEY |
| TogetherAI | TOGETHER_AI_API_KEY |
| xAI | XAI_API_KEY |
| AI Gateway | AI_GATEWAY_API_KEY |
Error: Model not supported
Error: Unsupported model

Solutions:

- Use the provider/model format: openai/gpt-5
- Verify that the model name exists in the provider’s documentation
- Check that the model name is spelled correctly
- Ensure your model API key can access the model
Model doesn't support structured outputs
Error: Model does not support structured outputs

Solutions:

- Check our Model Evaluation page for recommended models
High costs or slow performance
Python SDK or custom models
Python is now supported in Stagehand v3! The Python SDK uses a BYOB (Bring Your Own Browser) architecture.

Solutions:
- See the Python SDK documentation for installation and usage
- Check the Python migration guide if upgrading from v2
Need Help? Contact Support
Can’t find a solution? Have a question? Reach out to our support team:

Contact Support
Email us at support@browserbase.com
Next Steps
Prompting Guide
Learn how to prompt LLMs for optimal results
Run Evals
Test which models work best for your specific use case
Caching Guide
Cache responses to reduce costs and improve speed
Optimize Costs
Reduce LLM spending with caching and smart model selection

