# Performance Testing Guide
This guide outlines our approach to performance testing for the Astro project, covering key metrics, testing tools, and optimization strategies.

## Key Performance Metrics
We focus on measuring the following Core Web Vitals and related performance metrics:

### Loading Performance
- Time to First Byte (TTFB): Time from the request until the first byte of the response is received
- First Contentful Paint (FCP): Time until first content is rendered
- Largest Contentful Paint (LCP): Time until largest content element is rendered
- Speed Index: How quickly content is visually displayed
### Interactivity
- Total Blocking Time (TBT): Sum of the time the main thread is blocked by long tasks
- First Input Delay (FID): Time from the first user interaction until the browser can respond
- Time to Interactive (TTI): Time until page is fully interactive
### Visual Stability
- Cumulative Layout Shift (CLS): Measures visual stability and unexpected layout shifts
### Resource Metrics
- JavaScript bundle size: Total size of JS assets
- CSS size: Total size of CSS assets
- Request count: Number of HTTP requests made
- Image optimization: Size and format efficiency
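
Most of these metrics can be observed directly in the browser through standard APIs. Below is a minimal sketch, assuming it runs early in page load (for example from an inline module script), of capturing LCP, CLS, and TTFB with `PerformanceObserver` and Navigation Timing:

```ts
// Minimal sketch: observing LCP, CLS, and TTFB in the browser.
// Assumes it runs early in page load, before the entries are emitted.

// LCP: the last entry reported before user input is the final value.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate (ms):', entry.startTime);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });

// CLS: accumulate layout-shift entries not triggered by recent user input.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const shift = entry as PerformanceEntry & { value: number; hadRecentInput: boolean };
    if (!shift.hadRecentInput) cls += shift.value;
  }
  console.log('CLS so far:', cls);
}).observe({ type: 'layout-shift', buffered: true });

// TTFB: time from request start to the first byte of the response.
const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
if (nav) console.log('TTFB (ms):', nav.responseStart - nav.requestStart);
```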
## Performance Testing Infrastructure
Our performance testing setup consists of:

- **Automated tests with Playwright** that measure core metrics (a simplified sketch follows this list)
  - Located in `tests/performance/`
  - Capture metrics for key pages
  - Validate against performance budgets
- **Lighthouse CI integration** in the CI/CD pipeline
  - Runs performance audits on PRs
  - Blocks merges if performance regresses
  - Generates reports with optimization suggestions
- **Real User Monitoring (RUM)**
  - Collects metrics from actual users
  - Segments them by device type, connection speed, and geography
  - Provides insights into real-world performance
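
As a rough illustration of the Playwright layer (a simplified sketch, not the actual contents of `tests/performance/`; the URL and threshold are placeholders):

```ts
// Simplified sketch of a Playwright performance check. The real tests in
// tests/performance/ are more thorough; URL and threshold are placeholders.
import { test, expect } from '@playwright/test';

test('home page stays within its TTFB budget', async ({ page }) => {
  // Assumes baseURL is set in the Playwright config.
  await page.goto('/', { waitUntil: 'load' });

  // Read Navigation Timing from inside the page.
  const ttfb = await page.evaluate(() => {
    const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
    return nav.responseStart - nav.requestStart;
  });

  expect(ttfb).toBeLessThan(600); // "Maximum (Acceptable)" TTFB from the budget table below
});
```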
## Running Performance Tests
### Local Testing
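
Exact script names live in `package.json`; assuming a standard Playwright and Astro setup, local runs look roughly like:

```sh
# Run only the Playwright performance tests
npx playwright test tests/performance/

# One-off Lighthouse audit against a local preview build
npm run build && npm run preview              # terminal 1
npx lighthouse http://localhost:4321 --view   # terminal 2 (4321 is Astro's default preview port)
```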
### CI/CD Pipeline
Performance tests run automatically:

- On pull requests (blocking the merge if performance budgets are exceeded)
- Daily against the main branch
- On release candidate builds
## Performance Budgets
We’ve established the following performance budgets:

| Metric | Target (Good) | Maximum (Acceptable) |
|---|---|---|
| TTFB | < 300ms | < 600ms |
| FCP | < 1.8s | < 3s |
| LCP | < 2.5s | < 4s |
| CLS | < 0.1 | < 0.25 |
| TBT | < 200ms | < 600ms |
| FID | < 100ms | < 300ms |
| JS Size | < 500KB | < 1MB |
| CSS Size | < 100KB | < 200KB |
| Requests | < 50 | < 80 |
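
In CI, the "Maximum (Acceptable)" column maps naturally onto Lighthouse CI assertions. A minimal sketch of such a config (the URL, run count, and assertion set are illustrative, not the project's actual settings):

```js
// lighthouserc.cjs — minimal sketch; values are illustrative, not the
// project's actual configuration.
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:4321/'],
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        // Fail the PR when the maximums from the budget table are exceeded.
        'largest-contentful-paint': ['error', { maxNumericValue: 4000 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.25 }],
        'total-blocking-time': ['error', { maxNumericValue: 600 }],
      },
    },
  },
};
```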
## Common Performance Optimizations
### Astro-specific Optimizations
- **Partial Hydration** (see the sketch after this list)
  - Use client directives judiciously (`client:idle`, `client:visible`)
  - Only hydrate components that need interactivity
- **Image Optimization**
  - Use Astro’s built-in `<Image>` component
  - Implement responsive images with appropriate sizes
  - Use modern formats (WebP, AVIF)
- **Content Collections**
  - Leverage Astro’s content collections for optimal static generation
  - Use front matter for metadata to avoid runtime calculations
- **View Transitions**
  - Implement view transitions for smoother page navigation
  - Use `transition:animate` directives effectively
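
A page fragment combining partial hydration and image optimization might look like the sketch below (`HeavyChart` and `hero.png` are hypothetical, not files in this repo):

```astro
---
// Hypothetical page fragment; HeavyChart and hero.png are placeholders.
import { Image } from 'astro:assets';
import HeavyChart from '../components/HeavyChart.tsx';
import hero from '../assets/hero.png';
---

<!-- Optimized at build time and served as static HTML with a responsive srcset. -->
<Image
  src={hero}
  alt="Product hero"
  widths={[400, 800, 1200]}
  sizes="(max-width: 800px) 100vw, 800px"
  format="avif"
/>

<!-- Ships no JS until scrolled into view, keeping the initial bundle small. -->
<HeavyChart client:visible />
```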
### General Optimizations
- **Code Splitting**
  - Split code into logical chunks
  - Implement dynamic imports for large dependencies (see the sketch after this list)
- **Asset Optimization**
  - Minify and compress all assets
  - Remove unused CSS/JS
  - Implement proper caching strategies
- **Third-party Scripts**
  - Defer non-critical third-party scripts
  - Use resource hints (`preconnect`, `preload`)
  - Consider self-hosting critical third-party resources
- **Server Optimizations**
  - Serve over HTTP/2 or HTTP/3
  - Enable Brotli compression
  - Use appropriate caching headers
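
For the code-splitting point, a dynamic import keeps a heavy dependency out of the initial bundle until it is actually needed (`chart.js` here is just an illustrative large dependency):

```ts
// Illustrative only: chart.js stands in for any large dependency.
// The bundler emits it as a separate chunk, fetched on first use.
async function renderDashboard(canvas: HTMLCanvasElement) {
  const { default: Chart } = await import('chart.js/auto');
  new Chart(canvas, {
    type: 'line',
    data: { labels: ['Mon', 'Tue'], datasets: [{ data: [3, 7] }] },
  });
}

document.querySelector('#open-dashboard')?.addEventListener('click', () => {
  const canvas = document.querySelector<HTMLCanvasElement>('#chart');
  if (canvas) void renderDashboard(canvas);
});
```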
## Debugging Performance Issues
When a performance test fails or metrics degrade:

- Compare the failing run with the baseline to identify the regression
- Check Lighthouse reports for specific recommendations
- Use the Chrome DevTools Performance panel to profile the page
- Look for:
  - Long-running JavaScript
  - Render-blocking resources
  - Layout shifts
  - Large network payloads
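
Long-running JavaScript in particular can also be surfaced without the profiler: pasting a Long Tasks API observer into the console (or a temporary debug script) logs every main-thread task over 50 ms:

```ts
// Quick diagnostic: log main-thread tasks longer than 50 ms (Long Tasks API).
new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    console.warn(`Long task: ${task.duration.toFixed(0)} ms at t=${task.startTime.toFixed(0)} ms`);
  }
}).observe({ type: 'longtask', buffered: true });
```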
## Adding New Performance Tests
When adding new pages or features:

- Add the page to `PAGES_TO_TEST` in `tests/performance/page-performance.spec.ts` (sketch after this list)
- Set appropriate thresholds if they differ from the defaults
- Run the tests locally to establish a baseline
- Document any special considerations for the new page
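
The exact shape of `PAGES_TO_TEST` isn't reproduced in this guide; as a sketch of what the first step might look like (the entry shape and the `/pricing` page are assumptions):

```ts
// In tests/performance/page-performance.spec.ts — sketch only; the real
// PAGES_TO_TEST shape may differ, and /pricing is a hypothetical new page.
const PAGES_TO_TEST = [
  { path: '/', name: 'home' },
  { path: '/docs', name: 'docs' },
  // New page, with a looser LCP threshold than the default:
  { path: '/pricing', name: 'pricing', thresholds: { lcp: 3000 } },
];
```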
## Continuous Improvement Process
Our performance optimization is an ongoing process:

- **Measure**: Collect metrics from tests and RUM
- **Analyze**: Identify performance bottlenecks
- **Optimize**: Implement improvements
- **Validate**: Confirm improvements with tests
- **Monitor**: Watch for regressions