Documentation Index
Fetch the complete documentation index at: https://stagehand-shrey-check-v3-metrics-docs.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Stagehand provides powerful observability features to help you monitor sessions, track performance, and analyze your browser automation workflows. This guide focuses on session monitoring, resource usage, and operational insights for both Browserbase and local environments.
Browserbase Session Monitoring
When running on Browserbase, you gain access to comprehensive cloud-based monitoring and session management through the Browserbase API and dashboard.
Live Session Visibility
Browserbase provides real-time visibility into your automation sessions:
Session Dashboard Features
Real-time browser screen recording and replay
Network request monitoring with detailed timing
JavaScript console logs and error tracking
CPU and memory usage metrics
Session status and duration tracking
Session Management & API Access
```typescript
import { Stagehand } from "@browserbasehq/stagehand";
import { Browserbase } from "@browserbasehq/sdk";

const browserbase = new Browserbase({
  apiKey: process.env.BROWSERBASE_API_KEY,
});

const stagehand = new Stagehand({
  env: "BROWSERBASE",
});

await stagehand.init();

const sessionInfo = await browserbase.sessions.retrieve(stagehand.sessionId);
console.log("Session status:", sessionInfo.status);
console.log("Session region:", sessionInfo.region);
console.log("CPU usage:", sessionInfo.avgCpuUsage);
console.log("Memory usage:", sessionInfo.memoryUsage);
console.log("Proxy bytes:", sessionInfo.proxyBytes);
```
Session Analytics & Insights
Real-Time Monitoring: Monitor live session status, resource usage, and geographic distribution. Scale and manage concurrent sessions with real-time insights.
Session Recordings: Review complete session recordings with frame-by-frame playback. Analyze network requests and debug browser interactions visually.
API Management: Programmatically access session data, automate lifecycle management, and integrate with monitoring systems through our API.
Usage Monitoring: Track resource consumption, session duration, and API usage. Get detailed breakdowns of costs and utilization across your automation.
Session Monitoring & Filtering
Query and monitor sessions by status and metadata:
```typescript
import { Browserbase } from "@browserbasehq/sdk";

const browserbase = new Browserbase({
  apiKey: process.env.BROWSERBASE_API_KEY,
});

// List sessions with filtering
async function getFilteredSessions() {
  const sessions = await browserbase.sessions.list({
    status: "RUNNING",
  });
  return sessions.map((session) => ({
    id: session.id,
    status: session.status, // RUNNING, COMPLETED, ERROR, TIMED_OUT
    startedAt: session.startedAt,
    endedAt: session.endedAt,
    region: session.region,
    avgCpuUsage: session.avgCpuUsage,
    memoryUsage: session.memoryUsage,
    proxyBytes: session.proxyBytes,
    userMetadata: session.userMetadata,
  }));
}

// Query sessions by metadata
async function querySessionsByMetadata(query: string) {
  const sessions = await browserbase.sessions.list({
    q: query,
  });
  return sessions;
}
```
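Once you have the mapped session objects from a helper like getFilteredSessions above, you can roll them up into a quick health summary. The helper below is a sketch, not part of the Browserbase SDK, and the interface only names the subset of fields it touches:

```typescript
// Minimal shape of the session fields used below (subset of the API response).
interface SessionSummaryInput {
  status: "RUNNING" | "COMPLETED" | "ERROR" | "TIMED_OUT";
  avgCpuUsage: number;
}

// Hypothetical aggregation helper: counts sessions by status and averages CPU.
function summarizeSessions(sessions: SessionSummaryInput[]) {
  const byStatus: Record<string, number> = {};
  let cpuTotal = 0;
  for (const s of sessions) {
    byStatus[s.status] = (byStatus[s.status] ?? 0) + 1;
    cpuTotal += s.avgCpuUsage;
  }
  return {
    total: sessions.length,
    byStatus,
    avgCpuUsage: sessions.length ? cpuTotal / sessions.length : 0,
  };
}
```

Feeding this summary into a dashboard or alerting rule gives you failure rates and resource trends without re-querying the API for each panel.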
Local Environment Monitoring
For local development, Stagehand provides performance monitoring and resource tracking capabilities directly on your machine.
Performance Tracking
```typescript
import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({
  env: "LOCAL",
  verbose: 1, // Monitor performance without debug noise
});

await stagehand.init();

// Track local automation metrics
const startTime = Date.now();
const initialMetrics = await stagehand.metrics;

// ... perform automation tasks
const page = stagehand.context.pages()[0];
await page.goto("https://example.com");
await stagehand.act("click button");
// DataSchema: a zod schema you define elsewhere for the extracted data
await stagehand.extract({ instruction: "get data", schema: DataSchema });

const finalMetrics = await stagehand.metrics;
const executionTime = Date.now() - startTime;

console.log("Local Performance Summary:", {
  executionTime: `${executionTime}ms`,
  totalTokens: finalMetrics.totalPromptTokens + finalMetrics.totalCompletionTokens,
  totalInferenceTime: `${finalMetrics.totalInferenceTimeMs}ms`,
  tokensPerSecond: (
    (finalMetrics.totalPromptTokens + finalMetrics.totalCompletionTokens) /
    (executionTime / 1000)
  ).toFixed(2),
});
```
Resource Usage Monitoring
When running locally, monitor system resource usage and browser performance:
```typescript
import { Stagehand } from "@browserbasehq/stagehand";
import * as os from "os";

class LocalResourceMonitor {
  private cpuUsage: number[] = [];
  private memoryUsage: number[] = [];

  startMonitoring() {
    const interval = setInterval(() => {
      // Track system resources
      const memUsage = process.memoryUsage();
      this.memoryUsage.push(memUsage.heapUsed / 1024 / 1024); // MB

      // Track CPU (simplified: 1-minute load average)
      const loadAvg = os.loadavg()[0];
      this.cpuUsage.push(loadAvg);
    }, 1000);
    return interval;
  }

  getResourceSummary() {
    return {
      avgMemoryUsage: this.memoryUsage.reduce((a, b) => a + b, 0) / this.memoryUsage.length,
      peakMemoryUsage: Math.max(...this.memoryUsage),
      avgCpuLoad: this.cpuUsage.reduce((a, b) => a + b, 0) / this.cpuUsage.length,
      totalDataPoints: this.cpuUsage.length,
    };
  }
}

const monitor = new LocalResourceMonitor();
const interval = monitor.startMonitoring();
const stagehand = new Stagehand({ env: "LOCAL" });
// ... run automation
clearInterval(interval);
console.log("Resource Usage:", monitor.getResourceSummary());
```
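If you want alerting on top of the samples the monitor collects, a small pure helper can flag when sampled heap usage exceeds a budget. The function name and threshold are assumptions for illustration, not part of Stagehand:

```typescript
// Hypothetical check: returns the indices of samples (in MB) above a threshold,
// so an alerting layer can correlate spikes with what the automation was doing.
function findMemorySpikes(samplesMb: number[], thresholdMb: number): number[] {
  const spikes: number[] = [];
  samplesMb.forEach((mb, i) => {
    if (mb > thresholdMb) spikes.push(i);
  });
  return spikes;
}
```

Because samples are pushed once per second, a spike index also tells you roughly when in the run the spike occurred.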
LLM Usage: Monitor token usage, costs, and speed across your automation runs.
Real-Time Metrics & Monitoring
Basic Usage Tracking
Monitor your automation’s resource usage in real-time with stagehand.metrics:
```typescript
import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({ env: "BROWSERBASE" });
await stagehand.init();

// Metrics are async in V3
const metrics = await stagehand.metrics;
console.log(metrics);

// Monitor during automation
const startTime = Date.now();
const initialMetrics = await stagehand.metrics;

// ... perform automation tasks
const page = stagehand.context.pages()[0];
await page.goto("https://example.com");
await stagehand.act("click the login button");
// UserSchema: a zod schema you define elsewhere for the extracted data
const data = await stagehand.extract({
  instruction: "extract user info",
  schema: UserSchema,
});

const finalMetrics = await stagehand.metrics;
const executionTime = Date.now() - startTime;

console.log("Automation Summary:", {
  totalTokens: finalMetrics.totalPromptTokens + finalMetrics.totalCompletionTokens,
  executionTime: `${executionTime}ms`,
  // Rough average: divide by the number of LLM-backed operations in this run
  avgInferenceTime: `${finalMetrics.totalInferenceTimeMs / 3}ms`,
});
```
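Because stagehand.metrics returns cumulative totals, you can diff two snapshots (like initialMetrics and finalMetrics above) to attribute usage to one specific stretch of work. The helper below is an assumption, not a Stagehand API; it works on any object of numeric fields:

```typescript
type MetricsSnapshot = Record<string, number>;

// Diff two cumulative metrics snapshots field-by-field (hypothetical helper).
// Fields missing from `before` are treated as zero.
function diffMetrics(before: MetricsSnapshot, after: MetricsSnapshot): MetricsSnapshot {
  const delta: MetricsSnapshot = {};
  for (const key of Object.keys(after)) {
    delta[key] = after[key] - (before[key] ?? 0);
  }
  return delta;
}
```

For example, diffing snapshots taken before and after a single extract call isolates that call's token and inference-time cost.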
Understanding Metrics Data
The metrics object provides a detailed breakdown by Stagehand operation:
```typescript
interface StagehandMetrics {
  // Act operation metrics
  actPromptTokens: number;
  actCompletionTokens: number;
  actReasoningTokens: number;
  actCachedInputTokens: number;
  actInferenceTimeMs: number;

  // Extract operation metrics
  extractPromptTokens: number;
  extractCompletionTokens: number;
  extractReasoningTokens: number;
  extractCachedInputTokens: number;
  extractInferenceTimeMs: number;

  // Observe operation metrics
  observePromptTokens: number;
  observeCompletionTokens: number;
  observeReasoningTokens: number;
  observeCachedInputTokens: number;
  observeInferenceTimeMs: number;

  // Agent operation metrics
  agentPromptTokens: number;
  agentCompletionTokens: number;
  agentReasoningTokens: number;
  agentCachedInputTokens: number;
  agentInferenceTimeMs: number;

  // Cumulative totals
  totalPromptTokens: number;
  totalCompletionTokens: number;
  totalReasoningTokens: number;
  totalCachedInputTokens: number;
  totalInferenceTimeMs: number;
}
```
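A common use of the total fields is cost estimation. The sketch below is not a Stagehand API, and the per-token prices are placeholders: substitute your model provider's actual rates:

```typescript
// Only the fields the estimate needs, matching the StagehandMetrics totals.
interface TokenCounts {
  totalPromptTokens: number;
  totalCompletionTokens: number;
}

// Placeholder prices in USD per 1M tokens: replace with your model's real rates.
const PROMPT_PRICE_PER_M = 3.0;
const COMPLETION_PRICE_PER_M = 15.0;

function estimateCostUsd(m: TokenCounts): number {
  return (
    (m.totalPromptTokens / 1_000_000) * PROMPT_PRICE_PER_M +
    (m.totalCompletionTokens / 1_000_000) * COMPLETION_PRICE_PER_M
  );
}
```

Running this against each automation run, and tagging the result with environment metadata, gives you the per-environment cost tracking recommended below.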
Example metrics output:
```typescript
const metrics = await stagehand.metrics;
console.log(metrics);
// {
//   actPromptTokens: 4011,
//   actCompletionTokens: 51,
//   actReasoningTokens: 12,
//   actCachedInputTokens: 0,
//   actInferenceTimeMs: 1688,
//   extractPromptTokens: 4200,
//   extractCompletionTokens: 243,
//   extractReasoningTokens: 18,
//   extractCachedInputTokens: 0,
//   extractInferenceTimeMs: 4297,
//   observePromptTokens: 347,
//   observeCompletionTokens: 43,
//   observeReasoningTokens: 5,
//   observeCachedInputTokens: 0,
//   observeInferenceTimeMs: 903,
//   agentPromptTokens: 0,
//   agentCompletionTokens: 0,
//   agentReasoningTokens: 0,
//   agentCachedInputTokens: 0,
//   agentInferenceTimeMs: 0,
//   totalPromptTokens: 8558,
//   totalCompletionTokens: 337,
//   totalReasoningTokens: 35,
//   totalCachedInputTokens: 0,
//   totalInferenceTimeMs: 6888
// }
```
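Using the per-operation inference-time fields from output like the above, you can see where time is going. A hypothetical breakdown helper (not part of Stagehand):

```typescript
// Percentage of total inference time spent in each operation type.
function inferenceTimeShares(m: {
  actInferenceTimeMs: number;
  extractInferenceTimeMs: number;
  observeInferenceTimeMs: number;
  agentInferenceTimeMs: number;
}) {
  const total =
    m.actInferenceTimeMs +
    m.extractInferenceTimeMs +
    m.observeInferenceTimeMs +
    m.agentInferenceTimeMs;
  // Avoid division by zero before any LLM calls have run.
  if (total === 0) return { act: 0, extract: 0, observe: 0, agent: 0 };
  return {
    act: (m.actInferenceTimeMs / total) * 100,
    extract: (m.extractInferenceTimeMs / total) * 100,
    observe: (m.observeInferenceTimeMs / total) * 100,
    agent: (m.agentInferenceTimeMs / total) * 100,
  };
}
```

In the sample output above, extract dominates (4297 of 6888 ms), which is typical when schemas request many fields; that is the first place to look when optimizing latency.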
Best Practices
Track session success rates and failure patterns
Monitor resource usage and scaling requirements
Set up automated alerting for critical failures
Implement cost tracking across different environments
Use session analytics to optimize automation workflows
Track session distribution across regions
Monitor concurrent session limits and scaling
Analyze failure patterns and common error scenarios
Use session recordings for root cause analysis
Implement custom metadata for workflow categorization
Integrate session APIs with monitoring dashboards
Set up automated notifications for session failures
Track SLA compliance and performance benchmarks
Monitor resource costs and usage patterns
Use analytics data for capacity planning and optimization
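Several of the practices above (automated alerting, cost tracking, SLA benchmarks) reduce to comparing a metrics snapshot against budgets. A minimal sketch, with made-up threshold names:

```typescript
// Hypothetical per-run budgets; tune these to your own SLAs.
interface Thresholds {
  maxTotalTokens: number;
  maxInferenceTimeMs: number;
}

// Returns human-readable alerts for any exceeded budget (empty = healthy).
function checkBudgets(
  m: {
    totalPromptTokens: number;
    totalCompletionTokens: number;
    totalInferenceTimeMs: number;
  },
  t: Thresholds
): string[] {
  const alerts: string[] = [];
  const tokens = m.totalPromptTokens + m.totalCompletionTokens;
  if (tokens > t.maxTotalTokens) {
    alerts.push(`token budget exceeded: ${tokens} > ${t.maxTotalTokens}`);
  }
  if (m.totalInferenceTimeMs > t.maxInferenceTimeMs) {
    alerts.push(`inference time budget exceeded: ${m.totalInferenceTimeMs}ms`);
  }
  return alerts;
}
```

Wiring the returned strings into your notification system (Slack, PagerDuty, etc.) turns the passive metrics above into the automated alerting these practices call for.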
Next Steps
History Tracking: Track all LLM operations with parameters, results, and timestamps for debugging.
Logging: Configure logging levels, custom loggers, and file-based session logging.