Website Speed Checker
Check website speed and performance for free. Get Core Web Vitals, FCP, and LCP scores plus improvement tips.
How to Use This Tool
Enter a Website URL
Enter any website URL (including your own site). The tool uses the Google PageSpeed Insights API to analyse real-world performance.
View Performance Scores
See Performance, Accessibility, Best Practices, and SEO scores (0–100). Green is good (90+), yellow is average (50–89), red needs improvement (below 50).
Review Recommendations
Expand specific issues to see exactly what to fix — unused JavaScript, render-blocking resources, image optimisation opportunities, and more.
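The steps above can be sketched against the public PageSpeed Insights v5 API the tool is described as using. This is a minimal sketch, assuming Node 18+ for global `fetch`; the helper names `buildPsiUrl` and `runAudit` are illustrative, not part of the tool.

```javascript
// Sketch: querying the public PageSpeed Insights v5 API for the four 0–100
// category scores. An API key is optional for light use but recommended.
const PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";

// Build the request URL for a given page and strategy ("mobile" | "desktop").
function buildPsiUrl(pageUrl, strategy = "mobile", apiKey = "") {
  const params = new URLSearchParams({ url: pageUrl, strategy });
  for (const cat of ["performance", "accessibility", "best-practices", "seo"]) {
    params.append("category", cat);
  }
  if (apiKey) params.set("key", apiKey);
  return `${PSI_ENDPOINT}?${params}`;
}

// Fetch the audit and extract the category scores (Lighthouse reports 0–1).
async function runAudit(pageUrl) {
  const res = await fetch(buildPsiUrl(pageUrl));
  if (!res.ok) throw new Error(`PSI request failed: ${res.status}`);
  const { lighthouseResult } = await res.json();
  const scores = {};
  for (const [id, cat] of Object.entries(lighthouseResult.categories)) {
    scores[id] = Math.round(cat.score * 100);
  }
  return scores;
}
```

A score of 90+ then maps to the green band, 50–89 to yellow, and below 50 to red, matching the bands described above.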
About Website Speed Checker
Your Core Web Vitals assessment in Search Console dropped from 'Good' to 'Needs Improvement' last week, and the PM wants to know which page and which metric before sprint planning tomorrow. Or you just shipped a hero-image change and need to confirm LCP did not regress before the marketing campaign goes live on Monday. This checker runs a Lighthouse-style audit against a URL you provide and reports the three Core Web Vitals — Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), and Interaction to Next Paint (INP, which replaced First Input Delay in March 2024) — alongside Time to First Byte, First Contentful Paint, and Total Blocking Time.

This is lab data (synthetic, controlled conditions), not field data (real users from the Chrome User Experience Report). Lab data is fast, reproducible, and good for A/B comparison; field data is what actually determines your Search Console score. Performance numbers vary run-to-run because of network jitter, CDN cache states, and browser extension noise, so run three to five times and compare medians rather than trusting a single reading.
How it works
1. Headless Chrome renders the URL
A headless Chrome instance (Puppeteer or Playwright underneath) loads the URL in a clean profile with simulated 4G network throttling (1.6 Mbps down, 750 Kbps up, 150 ms RTT) and 4x CPU slowdown by default, reflecting a mid-range Android device on a typical mobile connection. No extensions, no caches, cold start to match worst-realistic-case user experience.
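The throttling profile above can be expressed as Chrome DevTools Protocol (CDP) parameters. The CDP method names are real; the Puppeteer wiring shown in the comments is an assumed setup, not this tool's confirmed implementation.

```javascript
// The simulated-4G profile from the step above as CDP parameters.
// Throughput is in bytes/second; latency is round-trip time in ms.
const MOBILE_4G = {
  latency: 150,                                // 150 ms RTT
  downloadThroughput: (1.6 * 1024 * 1024) / 8, // 1.6 Mbps -> bytes/s
  uploadThroughput: (750 * 1024) / 8,          // 750 Kbps -> bytes/s
  offline: false,
};
const CPU_SLOWDOWN = 4; // 4x slowdown ~ mid-range Android

// With Puppeteer (assumed setup), applied via a CDP session:
//   const client = await page.createCDPSession();
//   await client.send("Network.emulateNetworkConditions", MOBILE_4G);
//   await client.send("Emulation.setCPUThrottlingRate", { rate: CPU_SLOWDOWN });
```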
2. Core Web Vitals captured via the PerformanceObserver API
LCP is the render time of the largest image or text block painted above the fold, measured from navigationStart. CLS sums the impact of every unexpected layout shift, weighted by how much of the viewport moved and how far. INP samples interaction latency across every click, keypress, and tap during the audit run and reports a high (roughly 98th) percentile. TTFB comes from the Navigation Timing API; FCP from the Paint Timing API.
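The CLS aggregation can be sketched in code. This follows the published session-window rule (shifts grouped when gaps stay under 1s and a window stays under 5s; the page's CLS is the largest window sum), with plain objects standing in for the real `LayoutShift` entries a `PerformanceObserver` would deliver.

```javascript
// Sketch of CLS session-window accumulation. Each entry mimics a LayoutShift:
// { startTime: ms, value: shift score, hadRecentInput: boolean }.
function computeCls(entries) {
  let maxSession = 0;
  let session = 0;         // running sum for the current session window
  let windowStart = 0;     // startTime of the current window
  let lastShift = -Infinity;
  for (const e of entries) {
    if (e.hadRecentInput) continue; // shifts right after user input are excluded
    const gapExceeded = e.startTime - lastShift > 1000;   // >1s since last shift
    const windowExceeded = e.startTime - windowStart > 5000; // window capped at 5s
    if (gapExceeded || windowExceeded) {
      session = 0;
      windowStart = e.startTime;
    }
    session += e.value;
    lastShift = e.startTime;
    if (session > maxSession) maxSession = session;
  }
  return maxSession; // the worst session window is the reported CLS
}
```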
3. Results scored against Google's thresholds
LCP: <=2.5s Good, <=4.0s Needs Improvement, above 4.0s Poor. CLS: <=0.1 Good, <=0.25 Needs Improvement, above 0.25 Poor. INP: <=200ms Good, <=500ms Needs Improvement, above 500ms Poor. These are the same thresholds Google applies in Search Console, so a 'Good' here is a strong predictor (not a guarantee) of a 'Good' field-data score once users have generated enough CrUX samples for the URL.
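The threshold table above reduces to a small lookup plus a rating function, sketched here with LCP and INP in milliseconds and CLS unitless:

```javascript
// Google's Core Web Vitals bands as data. Units: lcp/inp in ms, cls unitless.
const THRESHOLDS = {
  lcp: { good: 2500, needsImprovement: 4000 },
  cls: { good: 0.1, needsImprovement: 0.25 },
  inp: { good: 200, needsImprovement: 500 },
};

// Map a measured value to its band; anything past Needs Improvement is Poor.
function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return "good";
  if (value <= t.needsImprovement) return "needs-improvement";
  return "poor";
}
```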
Pro tips
Run at least three times and compare medians
Single performance runs are noisy: TCP connection reuse, CDN cache hit-or-miss, and background network activity on the measurement server all cause multi-hundred-ms variation between identical runs. Always run at least three, ideally five, and compare the median rather than trusting a single reading. A 3.2s LCP on one run does not mean your page is slow — it might mean the measurement's CDN lookup was cold. Consistent 3.2s across five runs is a real regression worth investigating.
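The median-of-runs advice is mechanical enough to script. A minimal sketch, where each run produces an object of metric readings (the shape is an assumption, not this tool's output format):

```javascript
// Median of a list of numbers (average of the two middle values for even N).
function median(values) {
  const s = [...values].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Collapse N runs of { lcp, cls, ... } readings into per-metric medians.
function medianOfRuns(runs) {
  const metrics = Object.keys(runs[0]);
  return Object.fromEntries(
    metrics.map((m) => [m, median(runs.map((r) => r[m]))])
  );
}
```

With three runs reading LCP at 3200, 2400, and 2600 ms, the median of 2600 ms is the number worth reporting, not the 3200 ms outlier.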
Lab data is not the same as field data
This tool produces lab data — synthetic, reproducible, run on demand. Search Console and your Core Web Vitals report use field data from the Chrome User Experience Report, aggregated from real Chrome users on real networks and devices over a 28-day window. Lab results are directional (if lab LCP regresses, field LCP will eventually follow) but not identical. A lab score of 2.1s might correspond to a field p75 of 2.8s because field data includes users on worse connections than the lab simulation. Use lab data for sprint-end checks and debugging; trust field data for strategic decisions.
INP replaced FID in March 2024 — the bar is higher
Interaction to Next Paint superseded First Input Delay as the third Core Web Vital in March 2024. Unlike FID, which measured only the first interaction's input delay, INP measures the full interaction-to-next-paint duration across every interaction during the visit and reports a worst-case percentile. Sites that passed FID comfortably often fail INP because INP catches slow re-render patterns (large React state updates, heavy event handlers, unoptimized third-party scripts) that FID never saw. If your FID was green and INP is now red, the regression is real, not a measurement-method artifact.
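INP's "worst-case percentile" selection can be sketched as code. This is a simplified model of the published rule: report the worst interaction latency, skipping one outlier per 50 interactions, which approximates a ~98th-percentile reading on interaction-heavy pages.

```javascript
// Simplified INP selection: latencies are per-interaction durations in ms.
function inp(latencies) {
  if (latencies.length === 0) return 0;
  const sorted = [...latencies].sort((a, b) => b - a); // worst first
  // Skip one worst-case outlier for every 50 interactions observed.
  const skip = Math.min(Math.floor(latencies.length / 50), sorted.length - 1);
  return sorted[skip];
}
```

For a short audit run with only a handful of interactions, this is simply the single worst interaction, which is why one heavy event handler can turn the whole metric red.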
Honest limitations
- Lab data only — does not reflect real-user field data (Chrome User Experience Report) that determines Search Console scoring; use lab for A/B comparison and field for strategic decisions.
- Single runs are noisy due to network and CDN variance; always run three to five times and compare medians rather than individual readings.
- Measurement reflects one device profile (simulated mid-range mobile by default); desktop performance, high-end mobile, and genuinely slow networks can differ meaningfully from the lab profile.
Frequently asked questions
What are Core Web Vitals and why do they matter?
Core Web Vitals are Google's user-experience metrics, used as a ranking signal in search results. Largest Contentful Paint (LCP) measures loading performance — how fast the main content becomes visible. Cumulative Layout Shift (CLS) measures visual stability — how much content jumps around during load. Interaction to Next Paint (INP, which replaced First Input Delay in March 2024) measures responsiveness — how fast the page reacts to user input. Google uses field data from real Chrome users to score each URL, and sites in the 'Good' band for all three metrics get a small but real SEO boost over sites in 'Needs Improvement' or 'Poor'.
Why do my results vary between runs?
Performance measurement is inherently noisy. Between any two runs, the CDN cache state may differ (the first request warms the cache, the second hits it), TCP connections may be reused from a prior run, the measurement server's CPU load fluctuates, third-party scripts load from shared infrastructure whose own latency varies, and Chrome's own scheduling jitter introduces milliseconds of randomness. Expect a 10 to 20 percent spread across runs on a typical page. To get a reliable reading, run at least three times and compare the median rather than the first or last individual result. Consistent patterns across multiple runs are trustworthy; single readings are suggestive at best.
What is the difference between lab data and field data?
Lab data comes from a synthetic measurement in a controlled environment — one simulated device on one simulated network, run on demand, reproducible. This tool and Lighthouse produce lab data. Field data comes from the Chrome User Experience Report (CrUX), aggregated from real Chrome users on their real devices over a rolling 28-day window. PageSpeed Insights shows both; Search Console shows only field data in its Core Web Vitals report. Lab data is fast and good for debugging specific regressions or A/B comparing page variants. Field data is what actually determines your Core Web Vitals pass/fail for SEO purposes.
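The field-data side of this comparison is queryable directly via the public Chrome UX Report API. A sketch assuming Node 18+ and an API key; the metric names follow the CrUX record format, and `fetchFieldP75` is an illustrative helper, not part of this tool.

```javascript
// Sketch: pulling 28-day rolling p75 field values from the CrUX API.
const CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord";

// Build the POST request for a specific URL's field record.
function buildCruxRequest(pageUrl, apiKey) {
  return {
    url: `${CRUX_ENDPOINT}?key=${apiKey}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        url: pageUrl,
        metrics: [
          "largest_contentful_paint",
          "cumulative_layout_shift",
          "interaction_to_next_paint",
        ],
      }),
    },
  };
}

// Fetch the record and extract each metric's p75 value.
async function fetchFieldP75(pageUrl, apiKey) {
  const { url, options } = buildCruxRequest(pageUrl, apiKey);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`CrUX request failed: ${res.status}`);
  const { record } = await res.json();
  const p75 = {};
  for (const [name, data] of Object.entries(record.metrics)) {
    p75[name] = data.percentiles.p75;
  }
  return p75;
}
```

Comparing these p75 values against the lab readings makes the lab-versus-field gap described above concrete for a specific URL.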
My lab score is good but Search Console shows poor — which is right?
Search Console is the one that matters for SEO because it uses field data from actual users. Lab data represents one simulated device on one simulated network; real users span a range of devices (old phones, high-end laptops), network conditions (gigabit fiber, congested LTE), and geographic latencies (on-continent CDN POPs vs trans-oceanic). Field data averages all of that into a p75 score that is often worse than the best-case lab reading. If lab is green and field is red, improve the field experience: look at the p75 distribution, identify which user segments are struggling, and optimize for those paths. Lab-only optimization will not move the Search Console score.
What causes a high CLS score?
CLS penalizes unexpected layout movement after page load — content jumping around as images, ads, or async components finish loading. The common causes are images without explicit width and height attributes (reserve space in HTML so the browser does not have to re-layout when the image arrives), web fonts that cause text to reflow when the custom font loads (use font-display: swap with a metric-matched fallback), ads or embeds inserted above existing content (always reserve their space with a min-height container), and dynamically injected banners (cookie notices, announcement bars) that push content down. Fixing CLS usually means adding explicit dimension hints to async content so the browser lays out once, not multiple times.
Performance auditing usually leads into optimization workflows on adjacent assets. image-compressor and image-converter shrink the images that dominate LCP for most content pages. css-minifier reduces stylesheet bytes that block first render. For third-party-script investigation, json-formatter cleans up the network-timing JSON you export for deeper analysis. When CDN configuration is suspect, ip-address-lookup confirms which edge POP is serving your traffic, and whois-lookup verifies the DNS delegation is pointing where you expect after a migration.