How to Use PageSpeed Insights to Fix Core Web Vitals (Complete Guide)
PageSpeed Insights is the fastest and most accessible tool for diagnosing Core Web Vitals failures. It shows both real-user field data (what Google uses for rankings) and lab data (what helps you debug). Most site owners use it incorrectly — they focus on the overall score rather than the specific diagnostics. This guide shows you how to read every section of PageSpeed Insights correctly, extract the fixes that matter most, and use it as an ongoing monitoring tool.
PageSpeed Insights is the first tool most people open when they hear their Core Web Vitals are failing. It is free, requires no setup, and gives you results in under 30 seconds.
But most people misread it.
They look at the score — 67 out of 100 — feel bad, and close the tab. They do not know that the score is largely irrelevant for SEO. They do not know where the real diagnostic data is. They do not know the difference between the field data at the top and the lab data below it.
This guide fixes all of that. By the end, you will know exactly how to use PageSpeed Insights to identify your failing Core Web Vitals, find the specific issues causing each failure, prioritise which fixes to implement first, and track your improvement over time.
If you want to understand what Core Web Vitals are and why they matter for rankings before diving into the diagnostic tool, start with the complete Core Web Vitals fix guide. If you already know the basics and want to go straight to diagnosing your specific site, this is the right place.
What PageSpeed Insights actually measures
PageSpeed Insights combines two completely different types of data into one report. Understanding the difference between them is the most important thing you can learn about this tool.
Field data — what Google uses for rankings
Field data (also called CrUX data — Chrome User Experience Report) is collected from real Chrome users visiting your actual site. It reflects:
- Real devices (phones, tablets, desktops)
- Real connection speeds (4G, 3G, fibre, slow WiFi)
- Real geographic locations
- Real user behaviour (how they interact, when they scroll, what they click)
Field data covers a rolling 28-day window, and Google scores each metric at the 75th percentile of real-user measurements. The CrUX data shown in PageSpeed Insights is refreshed daily, but because the window is 28 days long, changes take up to a month to show fully. This is the data Google uses to determine your Page Experience ranking signal. If your field data shows Poor LCP, Google is scoring your site as Poor LCP — regardless of what any lab test shows.
Lab data — what helps you debug
Lab data is generated by Lighthouse — a simulated test run from a fixed location, on a fixed virtual device, on a fixed connection speed. It is:
- Controlled and reproducible (same result every time on the same page state)
- Run from Google's servers in a specific location
- Simulated on a mid-tier mobile device with a throttled CPU and network
- A snapshot of a single page load, not an average of real users
Lab data is excellent for debugging. It tells you exactly what is causing performance issues and gives estimated time savings for each fix. But it is not what Google scores you on.
The critical implication: A page with a Lighthouse score of 95 can still have Poor field data if most of your users are on slow mobile connections or in regions far from your server. Always use field data as your primary success metric. Use lab data to understand why and find the fix.
How to run PageSpeed Insights correctly
Step 1: Go to pagespeed.web.dev
Open pagespeed.web.dev in a browser. Do not use Google Search Console's PageSpeed link for a detailed diagnosis — it takes you to a simplified view. Use the direct URL.
Step 2: Enter the right URL
Do not just test your homepage. Test the specific pages that are failing in Google Search Console. The pages with the most organic traffic. Your main product pages. Your top blog posts.
Performance varies significantly between page types. A homepage optimised with a static hero image may have Good LCP while product pages with JavaScript-rendered content have Poor LCP. Fixing the homepage does nothing for your product page rankings.
Step 3: Check both mobile and desktop
PageSpeed Insights defaults to mobile. Always run both:
- Click Mobile — tests with a simulated mid-tier mobile device, 4x CPU throttling, and a throttled Slow 4G network (the profile formerly labelled Fast 3G)
- Click Desktop — tests with a simulated desktop device, no CPU throttling, fast network
Google scores mobile and desktop separately in Search Console. A page can pass on desktop and fail on mobile. Since Google uses mobile-first indexing, mobile performance matters more for rankings — but both need attention.
Step 4: Test in an incognito window
Browser extensions can inject JavaScript and CSS that distort your results. Run PageSpeed Insights from an incognito window to get a clean reading unaffected by extensions like ad blockers, password managers, or SEO toolbars.
Reading the PageSpeed Insights report section by section
Section 1: Field data (top of the report)
The field data section appears at the very top of the report under "Discover what your real users are experiencing".
It shows four metrics with colour-coded ratings:
- LCP — Largest Contentful Paint
- INP — Interaction to Next Paint
- CLS — Cumulative Layout Shift
- FCP — First Contentful Paint (not a Core Web Vital but useful context)
Each metric shows a value and a colour:
- Green = Good (passes Google's threshold)
- Orange = Needs improvement
- Red = Poor (failing Google's threshold)
What to look for:
Any metric showing orange or red is directly affecting your Page Experience ranking signal. Note which specific metric is failing — this determines which fix guide you need.
If the field data section shows "The Chrome User Experience Report does not have sufficient real-world speed data for this page", your page does not have enough traffic for individual URL data. In this case, look for origin-level data (your whole domain aggregated) or use the lab data section for diagnosis.
The Core Web Vitals Assessment:
Below the four metrics is a pass/fail summary: "Core Web Vitals Assessment: Passed" or "Core Web Vitals Assessment: Failed". All three Core Web Vitals (LCP, INP, CLS) must show Good for this to pass. One failing metric fails the entire assessment.
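The assessment logic is simple enough to state in code. A minimal sketch of the pass/fail rule, using Google's published Good and Needs Improvement thresholds (the function names are illustrative):

```javascript
// Google's published Core Web Vitals thresholds
const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 }, // milliseconds
  inp: { good: 200,  poor: 500 },  // milliseconds
  cls: { good: 0.1,  poor: 0.25 }  // unitless score
};

// Rate a single metric value against its thresholds
function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs-improvement';
  return 'poor';
}

// The assessment passes only when all three Core Web Vitals are Good
function cwvAssessment({ lcp, inp, cls }) {
  const ratings = { lcp: rate('lcp', lcp), inp: rate('inp', inp), cls: rate('cls', cls) };
  const passed = Object.values(ratings).every(r => r === 'good');
  return { ratings, passed };
}

// Example: Good LCP and CLS, but a 350ms INP fails the whole assessment
console.log(cwvAssessment({ lcp: 2100, inp: 350, cls: 0.05 }));
// → { ratings: { lcp: 'good', inp: 'needs-improvement', cls: 'good' }, passed: false }
```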
Section 2: Lab data (Lighthouse score)
Below the field data is the Lighthouse performance score — the large coloured circle with a number from 0 to 100. This is the number most people fixate on. It is useful context, but it is not what Google uses for rankings.
The score is calculated from a weighted combination of lab metrics:
| Metric | Weight |
|---|---|
| Total Blocking Time (TBT) | 30% |
| Largest Contentful Paint (LCP) | 25% |
| Cumulative Layout Shift (CLS) | 25% |
| Speed Index | 10% |
| First Contentful Paint (FCP) | 10% |
Notice what is not in the score: INP. The Lighthouse score does not directly include INP — it uses Total Blocking Time (TBT) as a proxy for interactivity instead. This is why a page can have a high Lighthouse score but still fail INP in the field.
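The weighting step can be sketched directly. Note this is a simplification: the real Lighthouse first maps each raw metric value onto a log-normal scoring curve to get a 0–1 score, and only then applies the weights above:

```javascript
// Lighthouse v10 weights for the performance score (see table above)
const WEIGHTS = { tbt: 0.30, lcp: 0.25, cls: 0.25, si: 0.10, fcp: 0.10 };

// Combine per-metric scores (each already 0–1 from Lighthouse's scoring
// curves) into the single 0–100 performance score. This sketch shows only
// the weighting step, not the raw-value-to-score curves.
function performanceScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += weight * metricScores[metric];
  }
  return Math.round(total * 100);
}

// A page with perfect visual metrics but a heavy main thread (TBT score 0.2)
console.log(performanceScore({ tbt: 0.2, lcp: 1, cls: 1, si: 1, fcp: 1 }));
// → 76 — still looks respectable despite serious interactivity problems
```

This is one concrete way to see why a high score can coexist with a failing field INP: even a very poor TBT sub-score only costs 30 points at most.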
What the score is useful for:
- Tracking relative improvement over time on the same page
- Identifying pages that need attention (score below 50 usually means significant issues)
- Quickly comparing performance before and after a change
What the score is not useful for:
- Predicting your field data CWV scores
- Determining whether Google considers your page fast
- Comparing against competitors (they may have very different user bases)
Section 3: Opportunities
The Opportunities section lists specific improvements that could reduce your page load time, with an estimated time saving for each.
These are the most actionable items in the report. Each opportunity includes:
- The issue — what is causing the problem
- Estimated savings — how much time fixing it could save
- Affected resources — which specific files or elements are involved
The most important opportunities for Core Web Vitals:
| Opportunity | Relevant Metric | Where to Fix |
|---|---|---|
| Eliminate render-blocking resources | LCP | How to eliminate render-blocking resources |
| Reduce initial server response time | LCP | How to fix slow server response time |
| Preload Largest Contentful Paint image | LCP | How to preload your LCP image |
| Reduce unused JavaScript | LCP + INP | How to reduce JavaScript execution time |
| Reduce the impact of third-party code | INP | Third-party scripts and their impact on INP |
| Avoid large layout shifts | CLS | What causes cumulative layout shift |
| Image elements do not have explicit width and height | CLS | Image and video size attributes to fix CLS |
How to prioritise opportunities:
Sort by estimated savings — tackle the highest-savings items first. A single opportunity saving 2,400ms is worth more than five opportunities saving 200ms each.
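If you fetch the report through the PageSpeed Insights API rather than the web UI, the opportunity audits carry their estimated savings in `details.overallSavingsMs`, so the prioritised list can be produced automatically. A sketch (the sample audit data is illustrative):

```javascript
// Given the `audits` object from a PSI API response
// (data.lighthouseResult.audits), pull out the opportunity audits and
// sort them by estimated savings, biggest first.
function opportunitiesBySavings(audits) {
  return Object.values(audits)
    .filter(a => a.details && typeof a.details.overallSavingsMs === 'number'
              && a.details.overallSavingsMs > 0)
    .map(a => ({ title: a.title, savingsMs: a.details.overallSavingsMs }))
    .sort((a, b) => b.savingsMs - a.savingsMs);
}

// Sample audits shaped like the API response
const sampleAudits = {
  'render-blocking-resources': { title: 'Eliminate render-blocking resources',
    details: { overallSavingsMs: 2400 } },
  'unused-javascript': { title: 'Reduce unused JavaScript',
    details: { overallSavingsMs: 600 } },
  'server-response-time': { title: 'Reduce initial server response time',
    details: { overallSavingsMs: 150 } },
  'viewport': { title: 'Has a viewport meta tag', score: 1 } // no savings → ignored
};

console.table(opportunitiesBySavings(sampleAudits));
// Render-blocking resources (2,400ms) tops the list
```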
Section 4: Diagnostics
The Diagnostics section goes deeper than Opportunities. It lists issues that may not have direct time savings but affect overall performance quality.
Key diagnostics to pay attention to:
"Avoid long main-thread tasks" — lists JavaScript tasks over 50ms with their duration. These are your INP culprits. For how to find and fix them, see how to identify and fix long tasks in Chrome DevTools.
"Reduce JavaScript execution time" — breaks down every JavaScript file by execution time, parse time, and compile time. Identifies which files are the most expensive. For the fix strategy, see how to reduce JavaScript execution time.
"Reduce the impact of third-party code" — lists every third-party script with its transfer size and main-thread blocking time. Sort by blocking time to find your biggest INP contributors. Full fix guide in third-party scripts and their impact on INP.
"Avoid an excessive DOM size" — flags pages with more than roughly 800 DOM nodes and fails them above roughly 1,400. Large DOMs slow down style recalculation after interactions — a direct INP contributor.
"Serve images in next-gen formats" — identifies JPEG and PNG images that should be converted to WebP or AVIF for faster loading.
"Properly size images" — identifies images being served larger than their display size. A 2,400px image displayed at 400px is wasting bandwidth and slowing LCP.
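You can check DOM size yourself without re-running the audit. A small console sketch (the 1,400 threshold mirrors the level Lighthouse flags):

```javascript
// Check a node count against the ~1,400-node level that Lighthouse flags
function domSizeReport(nodeCount, flagThreshold = 1400) {
  return {
    nodeCount,
    flagged: nodeCount > flagThreshold,
    note: nodeCount > flagThreshold
      ? "Large DOM — style recalculation after interactions will be slower"
      : "Below the flagged threshold"
  };
}

// Browser-only part: paste into the DevTools console to count every
// element on the current page
if (typeof document !== 'undefined') {
  console.log(domSizeReport(document.querySelectorAll('*').length));
}
```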
Section 5: Passed audits
The Passed Audits section (collapsed by default) shows everything PageSpeed Insights tested that your page is already doing correctly. Expand this occasionally to confirm your fixes are working — a previously failing audit moving to Passed confirms the fix was applied correctly.
The field data vs lab data gap — understanding and closing it
The most frustrating PageSpeed Insights experience is having good lab data but poor field data — or vice versa. Understanding why this gap exists helps you close it.
Why field data can be worse than lab data
Your users are on slower devices. Lighthouse simulates a mid-tier mobile device. If a significant portion of your audience uses budget Android phones, their real experience is worse than what Lighthouse simulates.
Your users are geographically distant from your server. Lighthouse runs from a fixed location. If your server is in the US and 40% of your users are in Asia, their TTFB is significantly higher than what a US-based Lighthouse test shows. A CDN fixes this. See how to fix slow server response time for the full solution.
Third-party scripts affect real users differently. Lighthouse loads the page once, in a clean state with no cookies or prior browsing history. Real users encounter these scripts in their full production complexity — sometimes including consent dialogs, A/B test variants, personalisation scripts, and advertising tags that did not fire in the Lighthouse test. Full coverage of third-party script impact in third-party scripts and their impact on INP.
Real users interact with the page. Lighthouse measures a passive page load. INP only exists when users actually interact — and real users interact with your page in ways that Lighthouse cannot simulate. A filter button that causes a 500ms INP failure will never appear in a Lighthouse test because Lighthouse does not click filter buttons.
Why lab data can be worse than field data
Lighthouse uses throttled conditions. The 4x CPU throttling and Slow 4G network simulation in Lighthouse are slower than what many of your actual desktop and broadband users experience. Your field data may show Good scores because most of your users are on fast connections while Lighthouse simulates a slow one.
Field data is a 28-day average. If you recently deployed improvements, your field data has not yet caught up. The rolling 28-day window means it takes up to 28 days for field data to fully reflect your changes. Your lab data shows the current state immediately.
Using PageSpeed Insights to diagnose each Core Web Vital
Diagnosing LCP failures
If field data shows Poor or Needs Improvement LCP:
- In the lab data section, find the LCP value — it shows in milliseconds
- In Opportunities, look for:
- "Reduce initial server response time" — fix TTFB first if this appears
- "Preload Largest Contentful Paint image" — implement immediately
- "Eliminate render-blocking resources" — defer scripts and async CSS
- In Diagnostics, look for the LCP element — PageSpeed Insights identifies exactly which element is your LCP and shows a screenshot
- Check the LCP element — is it an image? Is it lazy-loaded? Does it have explicit dimensions?
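You can also identify the LCP element directly in your own browser with a PerformanceObserver — useful when the PSI screenshot is ambiguous. A sketch to run in the DevTools console (the helper name is illustrative):

```javascript
// LCP candidates accumulate as the page loads; the last entry observed is
// the actual LCP element. This helper summarises the latest candidate.
function finalLcp(entries) {
  if (entries.length === 0) return null;
  const last = entries[entries.length - 1];
  return {
    timeMs: Math.round(last.startTime),
    tag: last.element ? last.element.tagName : null
  };
}

// Browser-only wiring: run in the console, then interact with the page
// (the first click or scroll finalises LCP)
if (typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver(list => {
    console.log('LCP candidate:', finalLcp(list.getEntries()));
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```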
For the complete LCP fix strategy from fastest to slowest: how to preload your LCP image, how to fix slow server response time, and how to eliminate render-blocking resources.
For understanding what a passing LCP score looks like and what factors affect it, see what is a good LCP score.
Diagnosing INP failures
INP is the hardest metric to diagnose from PageSpeed Insights alone because lab data cannot simulate real user interactions. If field data shows Poor or Needs Improvement INP:
- In Diagnostics, look for "Avoid long main-thread tasks" — these are your primary INP culprits
- Look for "Reduce the impact of third-party code" — third-party blocking time directly causes INP failures
- Look for "Reduce JavaScript execution time" — identifies which scripts are most expensive
- Look for "Avoid an excessive DOM size" — large DOMs slow every interaction
PageSpeed Insights gives you the list of suspects. Chrome DevTools gives you the exact culprit. For the detailed diagnostic process, see how to identify and fix long tasks in Chrome DevTools.
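Before opening DevTools, you can surface long tasks on a live page with a PerformanceObserver. A sketch to run in the console (then interact with the page — any interaction that queues behind these tasks inflates INP):

```javascript
// Summarise long tasks (main-thread work over 50ms) from observer entries
function summariseLongTasks(entries) {
  const total = entries.reduce((sum, e) => sum + e.duration, 0);
  const worst = entries.reduce((max, e) => Math.max(max, e.duration), 0);
  return {
    count: entries.length,
    totalMs: Math.round(total),
    worstMs: Math.round(worst)
  };
}

// Browser-only wiring
if (typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver(list => {
    console.log(summariseLongTasks(list.getEntries()));
  }).observe({ type: 'longtask', buffered: true });
}
```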
For the fix strategy once you have identified the cause: how to reduce JavaScript execution time and third-party scripts and their impact on INP.
Diagnosing CLS failures
CLS is the most straightforward metric to diagnose from PageSpeed Insights:
- In Opportunities, look for "Avoid large layout shifts" — lists the specific elements causing shifts with their individual CLS contribution scores
- Look for "Image elements do not have explicit width and height" — the most common CLS cause
- Look for "Eliminate render-blocking resources" — render-blocking CSS delays font and image loading, causing late shifts
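To watch shifts happen on a live page, a layout-shift PerformanceObserver gives a running total. A simplified sketch — the real CLS metric takes the worst five-second session window rather than a page-lifetime sum:

```javascript
// Running CLS total from layout-shift entries. Shifts within 500ms of user
// input set hadRecentInput and do not count toward CLS.
function clsTotal(entries) {
  return entries
    .filter(e => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}

// Browser-only wiring: run in the console, then scroll and interact
if (typeof PerformanceObserver !== 'undefined') {
  let shiftEntries = [];
  new PerformanceObserver(list => {
    shiftEntries = shiftEntries.concat(list.getEntries());
    console.log('CLS so far:', clsTotal(shiftEntries).toFixed(3));
  }).observe({ type: 'layout-shift', buffered: true });
}
```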
For the complete CLS cause and fix breakdown: what causes cumulative layout shift, image and video size attributes to fix CLS, and fix CLS from dynamic ads and embeds.
The PageSpeed Insights workflow for systematic improvement
Use this workflow every time you work on Core Web Vitals:
Week 1: Baseline and prioritise
- Run PageSpeed Insights on your top 10 pages by organic traffic (get this from Google Search Console → Performance → Pages)
- Record the field data scores for LCP, INP, and CLS for each page
- Note which metrics are failing and by how much
- Group pages by issue type — "LCP failures", "INP failures", "CLS failures"
- Start with the issue type affecting the most pages
Week 2–4: Fix and re-test
- Implement fixes for the highest-priority issue type
- Re-run PageSpeed Insights after each fix to confirm lab data improvement
- Note: field data will not update for 28 days after fixes are deployed
Month 2: Verify in field data
- Return to Google Search Console → Core Web Vitals
- Check if the failing URLs have moved from Poor to Needs Improvement or Good
- If field data has not improved despite good lab data: investigate the gap (geographic distribution, device types, user behaviour)
Ongoing: Monthly monitoring
- Run PageSpeed Insights on key pages monthly
- Check for regressions — new plugins, theme updates, and third-party script changes can introduce new issues
- Compare against the baseline scores you recorded in Week 1
Common PageSpeed Insights mistakes and how to avoid them
Mistake 1: Focusing on the score instead of the diagnostics
The score is a useful summary but it is not the goal. A score of 75 with Good field data is better than a score of 95 with Poor field data. Focus on the field data CWV pass/fail status and the specific Opportunities and Diagnostics — not the number in the circle.
Mistake 2: Only testing the homepage
The homepage is typically the best-performing page on most sites. It gets the most attention, the most optimisation, and often has the simplest structure. Your product pages, category pages, and blog posts are usually significantly slower. Test the pages that actually rank for competitive keywords.
Mistake 3: Comparing scores across different sites
A score of 80 on a complex e-commerce site is not the same as a score of 80 on a simple blog. The score is not normalised for page complexity. Use it only for tracking improvement on the same page over time.
Mistake 4: Fixing lab issues that are not causing field failures
PageSpeed Insights may flag 15 Opportunities. Not all of them are affecting your Core Web Vitals scores. Before fixing an opportunity, check whether the related metric is actually failing in field data. Fixing "Serve images in next-gen formats" when your LCP is already Good is low-value optimisation.
Mistake 5: Testing immediately after deployment
Lab data reflects the current state of the page — but CDN caches, browser caches, and server-side caches may mean the page being tested is not the version you just deployed. Clear caches and wait 5–10 minutes after deployment before running a PageSpeed Insights test.
Mistake 6: Not testing on mobile
PageSpeed Insights defaults to mobile but many developers switch to desktop and forget to check mobile. Google uses mobile-first indexing. Mobile is where most Core Web Vitals failures occur. Always check mobile field data before desktop.
Advanced PageSpeed Insights techniques
Testing authenticated pages
Some important pages (checkout, account dashboard, post-login content) require authentication. PageSpeed Insights cannot directly test authenticated pages. Workarounds:
- Create a test account and temporarily make specific pages publicly accessible for testing
- Use Chrome DevTools Lighthouse directly in the browser (which can access authenticated pages) instead of PageSpeed Insights
- Use WebPageTest, which supports custom request headers and cookies
Comparing before and after
PageSpeed Insights does not have a built-in comparison view. The most reliable way to compare before and after:
- Screenshot the full PageSpeed Insights report before making changes
- Record all metric values in a spreadsheet
- Deploy your changes
- Run PageSpeed Insights again and compare values
For a more systematic approach, use the PageSpeed Insights API to automate testing and log results over time:
```javascript
// PageSpeed Insights API — free; no API key required for occasional use
async function runPageSpeedTest(url, strategy = 'mobile') {
  const apiUrl = `https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=${encodeURIComponent(url)}&strategy=${strategy}`;
  const response = await fetch(apiUrl);
  const data = await response.json();
  const metrics = data.loadingExperience?.metrics ?? {};
  return {
    url,
    strategy,
    lcp: metrics.LARGEST_CONTENTFUL_PAINT_MS?.percentile,       // milliseconds
    inp: metrics.INTERACTION_TO_NEXT_PAINT?.percentile,         // milliseconds
    cls: metrics.CUMULATIVE_LAYOUT_SHIFT_SCORE?.percentile,     // CLS × 100
    lcpCategory: metrics.LARGEST_CONTENTFUL_PAINT_MS?.category, // FAST | AVERAGE | SLOW
    inpCategory: metrics.INTERACTION_TO_NEXT_PAINT?.category,
    clsCategory: metrics.CUMULATIVE_LAYOUT_SHIFT_SCORE?.category,
    lighthouseScore: Math.round((data.lighthouseResult?.categories?.performance?.score ?? 0) * 100),
    timestamp: new Date().toISOString()
  };
}

// Test multiple pages
const pages = [
  'https://yoursite.com/',
  'https://yoursite.com/products/',
  'https://yoursite.com/blog/'
];

Promise.all(pages.map(url => runPageSpeedTest(url)))
  .then(results => console.table(results));
```

This API approach lets you monitor multiple pages automatically and track field data changes over time — far more efficient than manual testing.
Using the Origin Summary
When individual URL field data is not available (insufficient traffic), PageSpeed Insights shows origin-level data — an aggregate of all pages on your domain. This is less specific than page-level data but still useful for:
- Understanding your site's overall performance baseline
- Identifying whether performance problems are site-wide or page-specific
- Tracking improvement across the entire domain
Look for "Origin Summary" in the field data section when individual URL data is unavailable.
PageSpeed Insights vs other tools — when to use each
PageSpeed Insights is the right starting point for most diagnoses. But other tools serve specific purposes better:
| Tool | Best For | What PageSpeed Insights Cannot Do |
|---|---|---|
| Google Search Console | Site-wide field data across all URLs | PSI only tests one URL at a time |
| Chrome DevTools | Deep JavaScript profiling, INP interaction tracing | PSI cannot trace specific interactions |
| WebPageTest | Waterfall analysis, multi-step testing, geographic testing | PSI tests from one fixed location |
| Lighthouse CLI | Automated CI/CD testing, testing local and staging builds | PSI requires a publicly reachable URL and cannot run pre-deploy |
| web-vitals library | Real user monitoring in production | PSI is a point-in-time test |
Use PageSpeed Insights for: initial diagnosis, confirming fixes, and monitoring key pages. Use Chrome DevTools for: deep diagnosis of JavaScript issues found by PSI. Use Google Search Console for: site-wide monitoring and identifying which pages to test. Use the web-vitals library for production RUM (Real User Monitoring) that confirms field data trends.
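The web-vitals library row above can be wired up in a few lines. A sketch, assuming the `web-vitals` npm package (its `onLCP`/`onINP`/`onCLS` functions are real exports) and a hypothetical `/vitals` collection endpoint:

```javascript
// RUM sketch using the `web-vitals` npm package. The /vitals endpoint is
// hypothetical — point sendBeacon at your own collector.
// import { onLCP, onINP, onCLS } from 'web-vitals';

// Serialise a web-vitals metric object into a compact beacon payload
function toBeacon(metric) {
  return JSON.stringify({
    name: metric.name,     // 'LCP' | 'INP' | 'CLS'
    value: metric.value,   // ms for LCP/INP, unitless for CLS
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
    id: metric.id          // unique per page load, for deduplication
  });
}

function sendToAnalytics(metric) {
  // sendBeacon survives page unload — which is when CLS and INP often report
  navigator.sendBeacon('/vitals', toBeacon(metric));
}

// Browser wiring (uncomment the import above):
// onLCP(sendToAnalytics);
// onINP(sendToAnalytics);
// onCLS(sendToAnalytics);
```

This gives you your own field data stream, so you do not have to wait for the CrUX window to confirm whether a fix worked for real users.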
How PageSpeed Insights connects to your full Core Web Vitals strategy
PageSpeed Insights is the entry point — the tool that tells you what is failing. The rest of this content cluster tells you how to fix each specific failure:
For LCP failures flagged in PageSpeed Insights: the fix pathway runs through what is a good LCP score → how to preload your LCP image → how to fix slow server response time → how to eliminate render-blocking resources.
For INP failures flagged in PageSpeed Insights: the fix pathway runs through INP vs FID: what changed → how to reduce JavaScript execution time → how to identify and fix long tasks in Chrome DevTools → third-party scripts and their impact on INP.
For CLS failures flagged in PageSpeed Insights: the fix pathway runs through what causes cumulative layout shift → image and video size attributes to fix CLS → fix CLS from dynamic ads and embeds.
Everything ties back to the complete Core Web Vitals guide which covers all three metrics and their interdependencies in one prioritised plan.
Frequently Asked Questions: How to Use PageSpeed Insights
Q1. Is a PageSpeed Insights score of 100 necessary to pass Core Web Vitals?
No. A score of 100 in PageSpeed Insights has no direct relationship to passing Core Web Vitals. The score is a lab-based Lighthouse metric. Core Web Vitals are measured from field data — real users. A page with a score of 65 can have Good field data if the specific metrics Google measures (LCP, INP, CLS) are within their thresholds for real users. Focus on the field data section, not the score.
Q2. How often does PageSpeed Insights field data update?
The field data in PageSpeed Insights comes from the CrUX (Chrome User Experience Report) dataset — a rolling 28-day window that is refreshed daily. After deploying fixes, it takes up to 28 days for the field data to fully reflect your changes, because older, slower visits stay in the window until they age out. The lab data (Lighthouse score) reflects the current state of the page immediately.
Q3. Why does my PageSpeed Insights score change between tests?
Lab data (the score) can vary between tests due to server load variability, network conditions at Google's test servers, and any dynamic content or A/B tests on your page. Variation of ±5 points is normal. If you see a variation larger than 10 points, look for dynamic content (personalisation, A/B tests, ads) that is rendering differently between tests.
Q4. Can I use PageSpeed Insights to test competitor sites?
Yes. PageSpeed Insights works on any publicly accessible URL. Testing competitor pages is a legitimate and useful competitive analysis technique. Look at their field data scores (if available) and their Opportunities section to understand what they are optimising for. Note that you are seeing their public performance data — the same data Google uses.
Q5. Why does my PageSpeed Insights score look good but Google Search Console shows failing CWV?
The most common reason is the lab vs field data gap. PageSpeed Insights lab data simulates specific conditions that may be better than your actual user conditions. If your users are geographically distant from your server, on slower devices, or interacting with the page in ways that trigger slow JavaScript, their real experience will be worse than the simulated test. Fix your TTFB with a CDN (see how to fix slow server response time), and investigate third-party scripts as covered in third-party scripts and their impact on INP.
Q6. Does PageSpeed Insights test Core Web Vitals or just performance?
Both. PageSpeed Insights shows Core Web Vitals field data (LCP, INP, CLS) from real users at the top of the report. It also runs a Lighthouse performance audit (lab data) which covers additional metrics beyond Core Web Vitals. The Core Web Vitals assessment — the pass/fail at the top — is what matters for SEO. The Lighthouse audit below it is a diagnostic tool.
Q7. How do I test pages with no field data available?
For pages with insufficient traffic for CrUX field data, use the lab data section as your primary diagnostic tool. Run the Lighthouse audit and focus on the Opportunities and Diagnostics sections. Also check your domain's origin-level field data — it shows aggregate performance across your entire site and is available even when individual URL data is not.
Summary
PageSpeed Insights is the most accessible Core Web Vitals diagnostic tool available — free, instant, and requiring no setup. But it is only valuable if you know how to read it correctly.
The five things to do every time you open PageSpeed Insights:
- Check field data first — the coloured metrics at the top are what Google uses for rankings. Everything else is supporting context.
- Note the failing metric — LCP, INP, or CLS. This determines which fix guide you need.
- Read Opportunities by estimated savings — tackle the highest-savings items first.
- Check Diagnostics for long tasks and third-party scripts — these are the most common INP failure sources.
- Test mobile, not just desktop — Google uses mobile-first indexing, and most CWV failures occur on mobile.
Use PageSpeed Insights as your starting point and your verification tool. Use the rest of this series to execute the specific fixes it identifies. And use Google Search Console to monitor whether those fixes are improving your real-user field data over the 28-day update window.
