What Is Page Speed?

Page speed refers to how quickly a web page loads its content and becomes fully usable by the visitor. It covers the entire sequence of events from the moment a user's browser sends a request to the server, all the way through to the final visible element being painted on screen and the page becoming interactive. In practical terms, page speed is not a single number — it is a collection of metrics that each capture a different dimension of the loading experience.

When most people talk about page speed, they are thinking about page load time — roughly how long it takes before the page looks and feels complete. But modern performance measurement goes far deeper than a single stopwatch figure. Google's measurement framework, Lighthouse, breaks the experience into distinct phases: how quickly the server responds, how soon the first pixels appear, how fast the largest visible element loads, whether the layout jumps around unexpectedly, and how quickly the page responds to a tap or click.

It is worth distinguishing page speed from site speed. Page speed refers to the performance of one specific URL. Site speed is the aggregate performance across all pages on a domain — the average you would see in Google Analytics or Google Search Console. Both concepts matter for SEO, but diagnosing problems requires looking at individual pages because performance varies enormously depending on how much content, how many images, how many third-party scripts, and how much JavaScript each page carries.

Page speed is also measured in two fundamentally different contexts. Lab data (also called synthetic data) is collected by running your page through a simulated environment with a fixed device, CPU speed, and network connection. Tools like Lighthouse and PageSpeed Insights use lab data. Field data (also called real-user monitoring, or RUM) is collected from actual visitors using real devices and real network connections, aggregated via the Chrome User Experience Report (CrUX). Google uses field data for its Core Web Vitals ranking signals, which means your real-world visitors' experience is what ultimately matters for search rankings.

Check your speed now: Use the RankNibbler Website Speed Test or run a full Site Audit to get your Core Web Vitals scores, PageSpeed Insights results, and actionable recommendations in one place.

Why Page Speed Matters for SEO

The relationship between page speed and SEO has been evolving for over fifteen years. Google officially acknowledged speed as a ranking signal for desktop in April 2010, and extended that signal to mobile in July 2018 with the "Speed Update." The 2021 Page Experience update elevated speed even further by making Core Web Vitals — a specific set of speed and interaction metrics — an explicit part of the ranking algorithm. Understanding exactly how website speed affects your rankings requires looking at four separate mechanisms.

Speed as a direct ranking factor

Google's ranking systems use page speed as a direct signal. The precise weighting is not published, and Google has consistently said that content quality and relevance outweigh speed in most situations. However, for competitive queries where many pages are closely matched on content and authority, speed becomes a meaningful differentiator. Google's own guidance describes speed as a "tiebreaker" — when two pages are broadly equivalent in relevance, the faster one is more likely to rank higher. For pages that score very poorly on Core Web Vitals (the bottom tier Google calls "Poor"), there is an additional negative signal applied through the Page Experience ranking system.

The Core Web Vitals thresholds that Google uses as ranking signals are LCP under 2.5 seconds, INP under 200 milliseconds, and CLS under 0.1 for "Good" (each metric is covered in detail below). Pages that meet all three Core Web Vitals thresholds receive a small positive ranking boost relative to pages that do not. The boost is described as modest, but in competitive verticals even a small advantage compounds over thousands of queries.

User experience, bounce rate, and engagement signals

Speed affects SEO indirectly through user behaviour signals that Google monitors. When a page loads slowly, visitors abandon it before the content ever appears, a pattern that is especially pronounced on mobile. Google's own research showed that moving from a 1-second to a 3-second page load time increases the probability of a bounce by 32%. Moving from 1 to 5 seconds increases it by 90%, and from 1 to 10 seconds the probability of a bounce rises by 123%.

When users bounce back to the search results immediately after clicking your page — a behaviour sometimes called "pogo-sticking" — it is a negative signal. Google interprets this as the page not satisfying the searcher's intent, which over time can reduce the page's rankings. A fast page that loads in under 2 seconds keeps users engaged, increases pages-per-session, and improves dwell time — all engagement metrics that correlate strongly with ranking performance.

Conversion rates follow the same pattern. Amazon calculated that a 100-millisecond delay in load time cost them 1% of revenue. Walmart found that for every 1-second improvement in load time, conversions increased by 2%. For e-commerce sites particularly, the connection between website speed, bounce rate, and revenue is direct and measurable.

Crawl budget and crawl efficiency

Every site has a crawl budget — the number of pages Googlebot will crawl within a given period. Crawl budget is determined by your server's crawl rate limit (how quickly your server can respond) and Google's crawl demand (how valuable it deems your content). Slow pages consume more of both. When Googlebot has to wait longer for each page response, it crawls fewer total pages per day. For small sites with under a few hundred pages this is rarely a problem. For large sites with tens of thousands of pages — e-commerce catalogues, news sites, aggregators — slow pages can mean that new and updated content takes days or weeks longer to get indexed.

Improving page load time SEO on large sites is therefore partly about making sure Google can discover and re-index your content efficiently. A faster server response time (TTFB) in particular directly improves crawl efficiency because Googlebot's primary bottleneck is waiting for the first byte from your server.

Mobile-first indexing and real-world speed

Since 2023, all sites in Google's index are indexed using the mobile version of the page. Google crawls your pages using a smartphone Googlebot, and the performance scores it measures are based on mobile conditions. This matters enormously for speed because mobile devices are typically slower than desktops — slower CPUs, less RAM, and network connections that range from fast 5G to sluggish 3G depending on the user's location and signal. A page that loads in 1.5 seconds on a desktop computer may take 4 seconds or more on a mid-range Android phone on a 4G connection, which is precisely the device and network profile Google simulates for PageSpeed Insights lab tests.

This gap between desktop and mobile performance is one of the most common problems found in site audits. Developers test on powerful machines with fast internet and see green scores, while real users on mobile devices experience slow, frustrating loads. Running a full site audit that tests mobile performance specifically is essential for understanding your real-world speed.

Page Speed Metrics Explained in Detail

Google and the web performance community measure page speed through a set of standardised metrics. Each captures a distinct phase of the loading experience. Understanding what each metric measures — and what score counts as good, needs improvement, or poor — is essential for diagnosing and fixing speed problems.

Metric | What It Measures | Good | Needs Improvement | Poor
LCP | Largest Contentful Paint | < 2.5s | 2.5s – 4.0s | > 4.0s
FCP | First Contentful Paint | < 1.8s | 1.8s – 3.0s | > 3.0s
CLS | Cumulative Layout Shift | < 0.1 | 0.1 – 0.25 | > 0.25
INP | Interaction to Next Paint | < 200ms | 200ms – 500ms | > 500ms
TBT | Total Blocking Time | < 200ms | 200ms – 600ms | > 600ms
Speed Index | Visual completeness progression | < 3.4s | 3.4s – 5.8s | > 5.8s
TTFB | Time to First Byte | < 200ms | 200ms – 600ms | > 600ms

Largest Contentful Paint (LCP)

LCP measures how long it takes for the largest visible content element — typically a hero image, a large block of text, or a video thumbnail — to fully render in the viewport. It is Google's primary metric for perceived load speed because the largest element is usually what the user is waiting to see before they can engage with the page. A good LCP score is under 2.5 seconds. Scores above 4.0 seconds are considered Poor and are likely to incur a negative ranking signal.

LCP is heavily influenced by image optimisation. On most pages, the LCP element is an image, and uncompressed or oversized images are the most common reason for a slow LCP. Other contributors include render-blocking resources that delay the browser from starting to fetch the LCP element, and slow server response times that push the entire timeline back. See our detailed guide on what is Largest Contentful Paint for a full breakdown of causes and fixes.

First Contentful Paint (FCP)

FCP marks the moment the browser renders the first piece of DOM content — text, an image, a canvas element, or an SVG. It is the user's first visual feedback that the page is loading. A good FCP is under 1.8 seconds. FCP is not one of the three Core Web Vitals that Google uses directly as a ranking signal, but it is a strong predictor of overall loading performance and heavily influences the Lighthouse performance score.

FCP is most commonly delayed by render-blocking scripts and stylesheets in the document head, slow server response times, and the absence of a browser cache on repeat visits. Eliminating render-blocking resources and reducing TTFB are the two most effective ways to improve FCP.

Cumulative Layout Shift (CLS)

CLS measures visual stability — specifically, how much the layout of a page shifts unexpectedly during and after loading. It is scored on a scale from 0 (no shifting) upward. A score under 0.1 is Good; above 0.25 is Poor. Layout shifts are what happens when you start reading an article and the text jumps down because an advertisement loaded above it, or when you tap a button and something moves so you tap the wrong element.

CLS is a Core Web Vitals ranking signal. Common causes include images and embeds without specified width and height attributes (causing the browser to reserve no space for them), ads that load asynchronously and push content down, and web fonts that cause text to reflow when they finish loading. Our guide on what is Cumulative Layout Shift covers the measurement methodology and fixes in depth. Check your images with the image dimensions checker to identify missing size attributes.

Interaction to Next Paint (INP)

INP replaced First Input Delay (FID) as a Core Web Vitals metric in March 2024. Where FID only measured the delay before the browser could begin processing the first user interaction, INP measures the full latency of all interactions throughout the page's lifetime — from when the user taps, clicks, or presses a key, to when the next frame is painted in response. This makes INP a more comprehensive measure of page responsiveness.

A good INP score is under 200 milliseconds. Scores above 500ms are Poor. INP problems are almost always caused by JavaScript — specifically, long tasks running on the browser's main thread that block the browser from responding to input. Auditing your scripts and breaking up long JavaScript tasks is the primary fix. Use our script loading checker to identify scripts that may be contributing to main thread congestion.
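As an illustration of breaking up long tasks, a long synchronous loop can be split into small batches that yield control back to the main thread between batches, so input arriving mid-way is handled promptly. The function and helper names below are illustrative, not a specific library API:

```javascript
// Yield control back to the event loop so pending input can be handled.
// (Modern Chrome also offers scheduler.yield() for the same purpose.)
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large array in small chunks instead of one long blocking task.
async function processInChunks(items, handleItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(handleItem);
    await yieldToMain(); // the browser can paint and respond to input here
  }
}
```

Each chunk stays well under the 50-millisecond long-task threshold, so a tap or click that arrives mid-processing is handled within milliseconds instead of waiting for the whole loop to finish.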

Total Blocking Time (TBT)

TBT is a lab metric (not measured in field data) that correlates closely with INP. It measures the total amount of time between First Contentful Paint and Time to Interactive during which the main thread was blocked by a long task (any task taking more than 50 milliseconds). TBT is the metric most directly influenced by JavaScript execution. A page with a TBT under 200ms is considered Good; over 600ms is Poor.

TBT is heavily weighted in the Lighthouse performance score and is one of the best indicators of whether your page will feel sluggish to users even after it appears visually complete. Third-party tag managers, chat widgets, analytics scripts, and advertising scripts are among the most common contributors to high TBT scores. Auditing which third-party scripts you are loading — and whether each is genuinely necessary — is often the fastest route to TBT improvement.

Speed Index

Speed Index measures how quickly the visible content of a page is progressively displayed during loading. Unlike LCP, which marks a single moment, Speed Index captures the overall visual completeness progression from start to finish. It is calculated by measuring the visual completeness of the page at regular intervals and computing a score based on how quickly the page reaches full visual completion. A lower Speed Index is better. Good is under 3.4 seconds; Poor is above 5.8 seconds.

Speed Index is particularly useful for understanding the subjective experience of watching a page load — a page that loads all at once near the end will score worse than a page that progressively reveals content throughout the loading process. Techniques like server-side rendering, streaming HTML responses, and prioritising above-the-fold content all improve Speed Index.

Time to First Byte (TTFB)

TTFB measures the time from when the browser sends the HTTP request to when it receives the first byte of the response from the server. A good TTFB is under 200 milliseconds; above 600ms is Poor. TTFB is a foundational metric because it affects every other timing metric — if your server is slow to respond, every downstream event (FCP, LCP, page load) is delayed by the same amount.

TTFB is determined by server performance, network latency (distance between the user and the server), and whether the response is cached. The main ways to improve TTFB are to use a Content Delivery Network (CDN) to serve content from servers closer to the user, implement server-side caching so the server does not have to regenerate pages on every request, and upgrade to a faster hosting infrastructure if your current server is underpowered.

How to Test Page Speed

Testing your page speed accurately requires using the right tools and understanding the difference between lab and field data. A single tool gives you one perspective; using multiple tools gives you a complete picture.

Tool | Data Type | What It Provides | Best For
RankNibbler Speed Test | Lab + Field | PageSpeed Insights API, Core Web Vitals, recommendations | Quick audits, SEO-focused analysis
RankNibbler Site Audit | Lab + Field | Full SEO audit including performance, scripts, images, CWV | Comprehensive site-wide analysis
Google PageSpeed Insights | Lab + Field | Lighthouse scores, CrUX field data, diagnostics | Official Google data
Google Search Console | Field only | Core Web Vitals by URL group, pass/fail status | Monitoring real-world performance
Chrome DevTools (Lighthouse) | Lab only | Detailed waterfall, opportunity list, diagnostics | Development and debugging
WebPageTest | Lab only | Waterfall chart, filmstrip view, multi-location testing | Deep technical analysis
GTmetrix | Lab only | Lighthouse + proprietary scores, video of load | Historical tracking, video review
Chrome UX Report (CrUX) | Field only | Real-user Core Web Vitals data by origin and URL | Population-level field data

Understanding lab vs. field data discrepancies

It is common to see a good PageSpeed Insights lab score but a poor field data score (or vice versa). Lab data simulates a specific device and network; field data averages real visits across all devices, locations, and connection types. If your field data is poor despite good lab scores, it typically means your real users are on slower devices or connections than the lab simulation assumes, or that a specific user journey (scrolling, interacting with a widget) triggers performance problems that the static lab test does not catch.

For SEO purposes, field data is what matters — Google's Core Web Vitals ranking signals are based on CrUX field data, not lab scores. If a specific URL does not attract enough traffic to generate page-level CrUX data (Google has not published the exact threshold), Google falls back to aggregated origin-level field data for the whole site; pages with no field data at all simply receive no Core Web Vitals signal rather than being judged on lab scores.

What Slows Down a Page: Causes in Detail

Diagnosing slow page speed requires understanding the full chain of events in a page load and where delays are introduced. The most common causes are listed below, roughly in order of how frequently they are the primary culprit on real-world websites.

Unoptimised images

Images are the single most common cause of slow pages. A typical unoptimised page might have a 2MB hero image when a compressed WebP version of the same image would be 150KB — a 93% size reduction with no visible quality difference. Problems include: images saved at the wrong dimensions (uploading a 3000px wide image to display at 800px), images saved in inefficient formats (JPEG or PNG instead of WebP or AVIF), images not compressed at all, and images loaded immediately even when they are far below the fold.

The fix involves converting images to WebP format, compressing them before uploading, serving images at the size they will actually be displayed, and adding loading="lazy" to all images below the fold. Use the image dimensions checker and lazy loading checker to identify images that need attention on your pages.
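As a sketch, a modern-format image with a safe fallback might look like this (file paths and dimensions are placeholders):

```html
<!-- Browsers pick the first format they support; older ones fall back to JPEG -->
<picture>
  <source srcset="/images/hero.avif" type="image/avif">
  <source srcset="/images/hero.webp" type="image/webp">
  <img src="/images/hero.jpg" width="800" height="450" alt="Product hero shot">
</picture>
```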

Render-blocking JavaScript

By default, when a browser encounters a <script> tag, it stops parsing the HTML, downloads the script, executes it, and only then continues. This means a script tag in the document head can delay the entire page render by the time it takes to download and execute that script. On pages with multiple synchronous scripts in the head, this can add seconds to FCP and LCP.

The fix is to add async or defer attributes to script tags. defer downloads the script in parallel with HTML parsing and executes it after the HTML is parsed — ideal for most scripts. async downloads and executes as soon as possible, potentially interrupting parsing — suitable for independent scripts like analytics. Use the script loading checker to audit which scripts on your pages are loading synchronously and which have async/defer applied.
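The difference between the three loading modes can be sketched in a few lines (script paths are placeholders):

```html
<!-- Blocking (default): HTML parsing pauses while this downloads and executes -->
<script src="/js/legacy.js"></script>

<!-- defer: downloads in parallel, executes in order after parsing finishes -->
<script src="/js/app.js" defer></script>

<!-- async: downloads in parallel, executes as soon as it arrives -->
<script src="/js/analytics.js" async></script>
```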

Render-blocking CSS

CSS files in the document head are also render-blocking by default — the browser will not render any content until all CSS has been downloaded and parsed. While some CSS is necessary before rendering (critical CSS that styles above-the-fold content), most CSS can be deferred. The recommended approach is to identify the CSS rules that apply to above-the-fold content, inline them directly in the <head>, and load the full stylesheet with a non-blocking method for the rest. Use the CSS/JS checker to see how many stylesheets your pages are loading and how large they are.

Slow server response time (TTFB)

If your server takes more than 600ms to send the first byte of data, every other metric suffers. Slow TTFB is caused by: slow database queries (common on CMS-driven sites without caching), under-resourced shared hosting, no server-side page caching, dynamic page generation that runs complex logic on every request, and physical distance between the user and the server. Solutions include implementing full-page caching at the server level, using a CDN to serve cached pages from edge nodes near users, optimising slow database queries, and upgrading hosting infrastructure if necessary.

Excessive HTTP requests

Every resource a page loads — each image, script, stylesheet, font file, API call — requires a separate HTTP request. While HTTP/2 allows multiple requests to be multiplexed over a single connection, there is still overhead for each request, and the total number of resources directly affects load time. A page loading 80 separate resources will always be slower than an equivalent page loading 20 resources, all else being equal.

Reducing request count involves: combining CSS files into one stylesheet, removing unused third-party scripts, replacing icon fonts with inline SVGs, using CSS sprites for small images, and auditing whether each third-party service (chat widgets, heatmap tools, social sharing buttons) is genuinely necessary.

Web fonts

Custom web fonts require additional HTTP requests to download font files, and if not handled correctly, they cause a period where text is invisible (FOIT — Flash of Invisible Text) or where text displays in a fallback font before swapping to the custom font (FOUT — Flash of Unstyled Text), contributing to CLS. Font files can be large, especially if you are loading multiple weights and styles.

Best practices for web fonts: add font-display: swap to your @font-face declarations so text is visible immediately in a fallback font; preload critical font files using <link rel="preload">; limit the number of font weights you use; consider using system fonts for body text where brand guidelines allow; and self-host fonts rather than loading from Google Fonts to avoid the extra DNS lookup.
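Put together, a minimal sketch of those font practices (the font file path and family name are placeholders):

```html
<!-- Preload the critical font so the browser fetches it immediately.
     The crossorigin attribute is required for font preloads, even self-hosted. -->
<link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>

<style>
  @font-face {
    font-family: "Brand";
    src: url("/fonts/brand.woff2") format("woff2");
    font-weight: 400;
    font-display: swap; /* show fallback text at once, swap when the font loads */
  }
</style>
```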

Missing or insufficient browser caching

Without cache headers, every visitor — including returning visitors who have already downloaded your resources — re-downloads every file on every visit. Setting appropriate Cache-Control headers tells the browser to store static files locally. Versioned assets (those with a hash in the filename) can be cached for a year or more. Unversioned assets should be cached for at least a week. Proper caching has no impact on first-time visitors but dramatically speeds up all subsequent visits and reduces server load.

Unminified and uncompressed code

HTML, CSS, and JavaScript files contain whitespace, comments, and long variable names that are useful for development but add no function for the browser. Minification removes these, reducing file sizes by 10–30% typically. Separately, your server should apply GZIP or Brotli compression to text-based assets before sending them — this typically reduces file sizes by 60–80%. Both are easy wins that require no code changes, only build tooling or server configuration adjustments.

Third-party scripts

Third-party scripts — analytics, advertising, chat widgets, social sharing buttons, A/B testing tools, heatmaps, conversion tracking — are a major source of performance problems. Each script requires a DNS lookup, a connection, and a download from a different domain. Scripts then execute on the page's main thread, consuming CPU time. Poorly written third-party scripts can block the main thread for hundreds of milliseconds, causing high TBT and poor INP scores.

For each third-party script on your page, ask: is this generating measurable value that outweighs its performance cost? For scripts that must stay, load them with the defer attribute and consider using Partytown or similar solutions to move third-party script execution off the main thread entirely.

Lack of a CDN

If your server is located in London and a user in Sydney makes a request, the data has to travel thousands of kilometres, adding latency. A Content Delivery Network maintains copies of your static assets (and often full cached pages) on servers around the world. When the Sydney user requests your page, they receive it from the nearest CDN node, dramatically reducing round-trip time. Cloudflare's free plan is sufficient for most small to medium sites and provides CDN, DDoS protection, and basic performance optimisation in a single product.

How to Improve Page Speed: 12 Techniques in Detail

For a complete step-by-step implementation guide, see our guide to reducing page load time. The techniques below are ordered roughly by ease of implementation and typical impact.

1. Compress and convert images to WebP

Convert all images to WebP format (or AVIF for maximum compression). Use tools like Squoosh, ImageOptim, or a build pipeline with sharp. WebP delivers 25–35% smaller files than JPEG at equivalent visual quality, and AVIF can deliver a further 20% reduction over WebP. Compress images to the lowest quality setting where the degradation is not visible to the naked eye — for photographic content this is typically 75–85% quality in WebP.

Always size images to their display dimensions. If a product image displays at 400px wide, save it at 400px wide (or 800px for 2x retina screens), not at 2000px. Use the HTML srcset attribute to serve different image sizes to different screen sizes. Check your pages with the image dimensions checker to find images being served at the wrong size.
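A responsive-image sketch using srcset and sizes (paths, widths, and breakpoints are placeholders):

```html
<!-- The browser downloads only the smallest file adequate for the layout -->
<img src="/img/product-400.webp"
     srcset="/img/product-400.webp 400w,
             /img/product-800.webp 800w"
     sizes="(max-width: 480px) 100vw, 400px"
     width="400" height="400"
     alt="Product photo">
```

Here the 800w candidate exists to serve 2x retina screens at the 400px display size.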

2. Implement lazy loading for images and iframes

Add loading="lazy" to all <img> and <iframe> elements that are below the fold — images the user will not see until they scroll. The browser natively supports lazy loading and will defer downloading these resources until the user scrolls near them. This dramatically reduces the initial page weight and the number of requests made on page load. Do not add loading="lazy" to the LCP image (the largest above-fold image) as this would delay your LCP score. Check your current lazy loading implementation with the lazy loading checker.
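A short sketch of where lazy loading belongs and where it does not (paths and VIDEO_ID are placeholders):

```html
<!-- Above the fold / LCP candidate: never lazy-load, fetch at high priority -->
<img src="/img/hero.webp" width="1200" height="600" alt="Hero" fetchpriority="high">

<!-- Below the fold: deferred until the user scrolls near them -->
<img src="/img/gallery-1.webp" width="600" height="400" alt="Gallery photo" loading="lazy">
<iframe src="https://www.youtube.com/embed/VIDEO_ID" width="560" height="315"
        title="Product video" loading="lazy"></iframe>
```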

3. Defer non-critical JavaScript

Add the defer attribute to all script tags that do not need to run before the page is visible. This includes analytics scripts, chat widgets, advertising code, and most application JavaScript. The defer attribute tells the browser to download the script in the background without blocking HTML parsing, then execute it after the document has been parsed. For scripts that are completely independent of the DOM, async may be appropriate instead. Check your current script loading with the script loading checker.

4. Inline critical CSS

Identify the CSS rules that apply to above-the-fold content (the content visible without scrolling on a typical viewport) and place them directly in a <style> tag in the document head. Load the full stylesheet non-blocking using a JavaScript-based loading technique or the media="print" trick. This allows the browser to render the visible portion of the page immediately without waiting to download the full CSS file. Tools like Critical (npm package) can automate the extraction of critical CSS from your pages.
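A minimal sketch of the pattern, including the media="print" trick (the inlined rules and stylesheet path are placeholders):

```html
<head>
  <style>
    /* Critical CSS: only the rules needed for above-the-fold content */
    body { margin: 0; font-family: system-ui, sans-serif; }
    .hero { min-height: 60vh; background: #f5f5f5; }
  </style>

  <!-- Full stylesheet: fetched without blocking render, applied once loaded -->
  <link rel="stylesheet" href="/css/main.css" media="print" onload="this.media='all'">
  <noscript><link rel="stylesheet" href="/css/main.css"></noscript>
</head>
```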

5. Enable GZIP or Brotli compression

Configure your web server to compress text-based responses (HTML, CSS, JavaScript, JSON, XML) before sending them. Brotli provides 15–20% better compression than GZIP and is supported by all modern browsers. Most hosting control panels have a checkbox for enabling GZIP. For Brotli on Nginx, add brotli on; to your server configuration. On Apache, install the mod_brotli module. Verify compression is working by checking the Content-Encoding header in your browser's Network tab — it should show br (Brotli) or gzip.
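As a sketch, the relevant Nginx directives might look like this (Brotli requires the ngx_brotli module to be installed; the compression levels are reasonable defaults, not mandates):

```nginx
# Brotli for browsers that support it (Content-Encoding: br)
brotli on;
brotli_comp_level 5;
brotli_types text/css application/javascript application/json image/svg+xml;

# GZIP fallback for everything else (Content-Encoding: gzip)
gzip on;
gzip_comp_level 5;
gzip_types text/css application/javascript application/json image/svg+xml;
```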

6. Implement a CDN

Use a Content Delivery Network to serve your static assets and, where possible, your full HTML pages from servers close to your users. Cloudflare is the most accessible starting point — the free plan provides global CDN coverage, automatic HTTPS, DDoS protection, and basic image optimisation. For larger sites with higher performance requirements, premium CDNs like Fastly, CloudFront, or Akamai offer more granular control and higher cache-hit ratios.

7. Implement server-side page caching

For CMS-driven sites (WordPress, Drupal, Magento), every page request by default involves PHP executing, a database being queried, and HTML being generated. Enable full-page caching so the server stores the generated HTML and returns it directly on subsequent requests without re-running that process. On WordPress, plugins like WP Rocket, W3 Total Cache, and LiteSpeed Cache handle this. The impact on TTFB is dramatic — pages that took 800ms to generate may be served in under 50ms from cache.

8. Set long browser cache lifetimes for static assets

Add Cache-Control: max-age=31536000, immutable headers to versioned static assets (JavaScript bundles, CSS files, images with hashes in their filenames). This tells the browser to cache these files for a year and not to revalidate them. For HTML pages, use shorter cache lifetimes (a few minutes to an hour) or no caching if content changes frequently. The combination of long-lived static asset caching and versioned filenames (cache busting via filename hashes when files change) is the standard pattern used by all performance-optimised sites.
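In Nginx, that pattern might be sketched as follows (extensions and lifetimes should be adjusted to your build setup):

```nginx
# Versioned static assets (hashed filenames): cache for a year, never revalidate
location ~* \.(?:js|css|woff2|webp|avif)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}

# HTML documents: always revalidate so content updates appear immediately
location ~* \.html$ {
    add_header Cache-Control "no-cache";
}
```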

9. Reduce and audit third-party scripts

Conduct a thorough audit of every third-party script loading on your pages. For each script, identify: who added it, what it does, and whether it is generating measurable value. It is common to find scripts from tools that were evaluated and abandoned years ago, multiple competing analytics implementations, and widgets that are visible on only one page but loaded globally. Removing even one or two heavy scripts can reduce TBT by hundreds of milliseconds. For scripts that must stay, load them with defer and consider moving them to load after an interaction (the "façade" pattern, commonly used for chat widgets and video embeds).
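A minimal façade sketch for a chat widget: nothing loads until the visitor's first interaction. The widget URL below is a placeholder for your chat provider's real script:

```html
<script>
  // Inject the real third-party script only on the first user interaction.
  // widget.example.com is a placeholder, not a real provider URL.
  var chatLoaded = false;
  function loadChatWidget() {
    if (chatLoaded) return;
    chatLoaded = true;
    var s = document.createElement('script');
    s.src = 'https://widget.example.com/chat.js';
    s.defer = true;
    document.head.appendChild(s);
  }
  ['pointerdown', 'keydown', 'scroll'].forEach(function (evt) {
    addEventListener(evt, loadChatWidget, { once: true, passive: true });
  });
</script>
```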

10. Preload critical resources

Use <link rel="preload"> to tell the browser to fetch critical resources as early as possible, before the parser would normally discover them. The most impactful uses are preloading the LCP image (the largest above-fold image), preloading critical font files, and preloading critical JavaScript that will be needed immediately. Example: <link rel="preload" as="image" href="/hero.webp">. Do not preload everything — preloading resources the browser would have fetched at the right time anyway can actually hurt performance by competing with more critical requests.
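Two hedged preload examples (paths are placeholders; the crossorigin attribute is required for font preloads even when the font is self-hosted):

```html
<!-- Fetch the LCP hero image before the parser would discover it -->
<link rel="preload" as="image" href="/img/hero.webp" fetchpriority="high">

<!-- Fetch the critical font early -->
<link rel="preload" as="font" href="/fonts/brand.woff2" type="font/woff2" crossorigin>
```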

11. Minimise layout shifts (fix CLS)

Every <img> tag should have explicit width and height attributes that match the image's intrinsic dimensions. This allows the browser to reserve the correct space before the image loads, preventing layout shifts. Add CSS aspect-ratio properties to fluid images for responsive layouts. Reserve space for advertisements and embeds with min-height containers. Add font-display: swap to font declarations and preload critical fonts to reduce the magnitude of font-swap layout shifts. See our guide on Cumulative Layout Shift for a complete CLS troubleshooting guide.
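A short sketch of space reservation (dimensions and class names are placeholders):

```html
<!-- Intrinsic dimensions let the browser reserve the slot before the image loads;
     the CSS keeps the image fluid while preserving its aspect ratio -->
<img src="/img/banner.webp" width="1200" height="400" alt="Banner"
     style="width: 100%; height: auto;">

<!-- Reserve the ad slot's height so a late-loading ad cannot push content down -->
<div class="ad-slot" style="min-height: 250px;"></div>
```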

12. Eliminate unused CSS and JavaScript

Modern frameworks and libraries often deliver far more code than any individual page uses. Webpack, Rollup, and other bundlers support tree-shaking (removing unused code) and code splitting (loading only the JavaScript needed for the current page). For CSS, tools like PurgeCSS scan your HTML and remove any CSS rules that do not match elements on the page. Reducing JavaScript and CSS payload reduces parse and compile time on mobile devices where CPU performance is limited, directly improving TBT and INP scores. Use the CSS/JS checker to audit file sizes on your pages.

Page Speed by CMS

The platform your site is built on has a significant influence on the baseline page speed you start from and the tools available to you for improvement.

WordPress page speed

WordPress powers around 43% of all websites, and its page speed performance varies enormously depending on hosting, theme, and plugins. Out of the box, WordPress has no page caching, sends unminified assets, and many popular themes include large page builder frameworks that add significant JavaScript payload. However, WordPress also has the most mature ecosystem of performance optimisation plugins of any CMS.

For WordPress performance, the essentials are: a caching plugin (WP Rocket is the most comprehensive paid option; LiteSpeed Cache is excellent if your host supports LiteSpeed; W3 Total Cache is the most popular free option), a fast theme (GeneratePress and Kadence are known for clean, lightweight output), a CDN integration, and image optimisation (Imagify, Smush, or ShortPixel). Avoid heavy page builders like Divi or WPBakery where performance is a priority — they add significant CSS and JavaScript to every page. Block-based themes built on the native WordPress editor (Gutenberg) tend to perform significantly better. Run a site audit to identify which specific WordPress performance issues are affecting your site.

Shopify page speed

Shopify is a hosted platform, which means server configuration, CDN, and caching are managed by Shopify rather than the site owner. Shopify uses Fastly as its global CDN and provides solid baseline infrastructure. The main variables under store owners' control are the theme and installed apps.

Shopify themes range from highly optimised (Dawn, Shopify's official free theme, scores well in most performance tests) to very slow (many third-party premium themes include excessive CSS, JavaScript, and external dependencies). Every app installed on a Shopify store typically adds one or more JavaScript files that load on every storefront page. It is common to find Shopify stores with 20+ app scripts running on every product page, each adding its own loading overhead. Regularly audit which apps are actually used, remove any that are not, and check whether apps can be installed with a lighter footprint. Google Search Console's Core Web Vitals report is the most actionable tool for Shopify merchants because it shows real-user field data grouped by page type (home, collection, product).

Mobile vs. Desktop Page Speed

There are two important distinctions to understand between mobile and desktop speed: the measurement conditions Google uses, and the real-world performance gap most sites experience.

PageSpeed Insights measures mobile performance using a simulated mid-tier Android device (Moto G Power) on a throttled 4G connection (150ms round-trip latency, 1.6Mbps download) with a 4x CPU slowdown relative to a high-end desktop machine. Desktop is tested with no CPU throttling and a much faster simulated broadband connection. This means the same page will almost always score lower on mobile than on desktop, simply due to the testing conditions — not necessarily because the mobile experience is actually worse.

In practice, most websites do have a genuine mobile performance gap beyond what the test conditions explain. Mobile-specific problems include: full-sized desktop images being served to mobile viewports (missing responsive images), JavaScript frameworks and libraries that are expensive to parse and execute on mobile CPUs, and complex CSS layouts that require more calculation on mobile. Fixing the mobile performance gap requires testing on real mobile devices, using responsive images with srcset, and profiling JavaScript execution time specifically on mobile.
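The responsive-images fix mentioned above looks like this in markup (the file names and breakpoint are illustrative):

```html
<!-- The browser picks the smallest candidate that satisfies the
     layout width declared in "sizes" at the device's pixel density,
     so mobile viewports never download the full desktop image -->
<img
  src="product-800.jpg"
  srcset="product-400.jpg 400w,
          product-800.jpg 800w,
          product-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 50vw"
  width="800" height="600"
  alt="Product photo">
```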

For SEO, mobile performance is primary. Because Google uses mobile-first indexing, your mobile Core Web Vitals field data is what feeds into ranking signals. A site with a 95 desktop performance score and a 45 mobile score will be judged primarily on the 45 for ranking purposes.

Page Speed and AI Overviews

Google's AI Overviews (formerly Search Generative Experience, or SGE) add a new dimension to the relationship between page speed and search visibility. AI Overviews are generated from content that Google has already crawled and assessed as high-quality and reliable. Pages that are cited as sources in AI Overviews tend to be fast-loading, authoritative, and well-structured.

While Google has not published specific page speed requirements for AI Overview citations, the general principle holds: pages that are fast, well-structured, and score well on Core Web Vitals are more likely to be crawled frequently, indexed freshly, and treated as high-quality within Google's assessment. Slow pages with poor Core Web Vitals are less likely to be kept current in Google's index — and content that is not freshly indexed is less likely to be used to generate AI Overview answers.

Practically, the same speed optimisations that improve your traditional search rankings also improve your chances of appearing in AI Overviews. Fast TTFB ensures fresh content is quickly discovered when updated. Good Core Web Vitals scores contribute to the overall quality signals that influence which pages Google considers authoritative sources. And clear, well-structured HTML (not JavaScript-rendered content that requires client-side execution to be readable) makes it easier for Google to extract and use your content in generated answers.

Run a full audit: The RankNibbler Site Audit checks your Core Web Vitals, script loading, lazy loading, image dimensions, CSS/JS file counts, and dozens of other on-page SEO factors in one report. Use it to get a prioritised list of what to fix first.

Page Speed Testing Tools: Detailed Comparison

Each testing tool has different strengths and is most useful for different purposes. Understanding what each tool is measuring helps you interpret results correctly and avoid drawing wrong conclusions from discrepancies between tools.

RankNibbler

The RankNibbler SEO checker integrates the PageSpeed Insights API directly into the audit workflow. This means you get Core Web Vitals data, Lighthouse performance scores, and specific recommendations alongside your on-page SEO analysis in a single report. The website speed test tool provides a focused view of performance data. This is the fastest way to get actionable speed data in the context of a broader SEO audit.

Google PageSpeed Insights

PageSpeed Insights (pagespeed.web.dev) is Google's public-facing tool and the most important to check because it uses the same underlying data and methodology that Google uses for ranking signals. It shows both field data from CrUX (real user measurements from Chrome) and lab data from Lighthouse (simulated). The field data section reflects a 28-day rolling collection window for CWV metrics, so changes to your site take time to be reflected. The lab data section shows results from a single run and is more immediately responsive to changes.
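The same data is also available programmatically via the PageSpeed Insights API (v5, at `https://www.googleapis.com/pagespeedonline/v5/runPagespeed`). As a sketch, this helper pulls the CrUX field-data percentiles out of a v5 response object — the field names follow Google's documented v5 response shape, but verify them against the live API before relying on them:

```javascript
// Extract the p75 field-data values from a PageSpeed Insights
// API v5 response. Returns null for any metric CrUX has no data for
// (low-traffic pages often lack field data entirely).
function extractFieldData(psiResponse) {
  const metrics = (psiResponse.loadingExperience || {}).metrics || {};
  const pick = (key) => (metrics[key] ? metrics[key].percentile : null);
  return {
    lcpMs: pick("LARGEST_CONTENTFUL_PAINT_MS"),
    inpMs: pick("INTERACTION_TO_NEXT_PAINT"),
    // The API reports CLS multiplied by 100 (e.g. 5 means 0.05)
    cls: pick("CUMULATIVE_LAYOUT_SHIFT_SCORE"),
  };
}
```

A wrapper like this is useful for tracking field data over time, since the PageSpeed Insights web UI only shows a snapshot of the current 28-day window.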

Google Search Console Core Web Vitals Report

The Core Web Vitals report in Google Search Console is the definitive source for understanding how Google assesses your pages' real-world performance. It groups pages by URL pattern and shows the proportion of page views that fall into Good, Needs Improvement, and Poor categories for LCP, INP, and CLS. This report is the most actionable for SEO because it tells you which specific pages are failing Core Web Vitals thresholds and therefore receiving a negative ranking signal. Pages flagged here should be prioritised for optimisation.

Chrome DevTools Lighthouse

Opening Chrome DevTools and running Lighthouse from the Lighthouse panel gives you the most detailed diagnostic information available. The Opportunities section lists specific changes and their estimated time savings. The Diagnostics section explains potential issues. The Filmstrip view shows a frame-by-frame visual timeline of how the page loads. This level of detail is most useful for developers diagnosing specific problems rather than for routine monitoring.

Frequently Asked Questions About Page Speed

What is a good page speed score?

In PageSpeed Insights, scores of 90 or above are considered Good (green), 50–89 are Needs Improvement (orange), and below 50 are Poor (red). However, the score itself is less important than the underlying Core Web Vitals metrics. A page with a score of 75 that passes all three Core Web Vitals thresholds (LCP under 2.5s, INP under 200ms, CLS under 0.1) is in a better position for SEO than a page with a score of 85 that fails LCP. Focus on passing Core Web Vitals rather than maximising the headline score.

How much does page speed affect SEO rankings?

Page speed is a ranking factor, but its weight relative to content quality and backlinks is modest for most queries. Google has described it as a tiebreaker for pages that are otherwise equivalent. For highly competitive queries with many strong pages, speed can make a meaningful difference. The indirect effects — through bounce rate, engagement signals, and crawl efficiency — are arguably more significant for most sites than the direct ranking signal. Treat speed as a baseline requirement rather than a shortcut to rankings.

What is the ideal page load time for SEO?

There is no single "ideal" load time for SEO because Google uses specific metric thresholds rather than a raw load time. That said, most guidance points to a sub-2-second load time (as measured by LCP specifically) as the target. In Google's own research, 53% of mobile site visits are abandoned when a page takes longer than 3 seconds to load. For e-commerce, the conversion rate impact of speed means that sub-1-second LCP should be the aspirational goal for product and checkout pages.

Does page speed affect all pages equally?

No. Google applies page speed signals at the individual URL level, not at the domain level. A slow homepage does not drag down rankings for fast product pages, and vice versa. This is why auditing page speed at the individual page level matters — aggregate site-speed averages can mask pages that are performing very badly. Google Search Console's Core Web Vitals report shows performance by URL group, which helps identify which page types are underperforming.

How do I improve Core Web Vitals specifically?

Each Core Web Vital has different primary causes: LCP is most commonly caused by slow image loading and slow TTFB; CLS is most commonly caused by images without dimensions and dynamically injected content; INP is most commonly caused by JavaScript long tasks on the main thread. Start by running a PageSpeed Insights test, identifying which Core Web Vital is failing, and then addressing the specific causes listed in the Opportunities and Diagnostics sections. Our guides on Core Web Vitals, LCP, and CLS cover specific fixes in depth.

Does hosting affect page speed?

Yes, significantly. Your hosting infrastructure directly determines your TTFB, which is the foundation on which all other timing metrics build. Cheap shared hosting can have TTFBs of 1–2 seconds or more under load; a well-configured VPS or managed WordPress host typically delivers TTFBs under 200ms. Moving from shared hosting to better infrastructure, combined with full-page caching, is often the single biggest improvement available to sites on budget hosting. If your TTFB is consistently above 600ms, address your hosting and caching before tackling other optimisations.

Is page speed the same as Core Web Vitals?

Core Web Vitals are a subset of page speed metrics. Core Web Vitals are the three metrics Google uses as ranking signals: LCP (loading speed), INP (interactivity), and CLS (visual stability). Page speed as a broader concept also includes TTFB, FCP, TBT, Speed Index, and various other timing metrics. All Core Web Vitals are page speed metrics, but not all page speed metrics are Core Web Vitals. For SEO purposes, prioritise the three Core Web Vitals; the other metrics are useful diagnostically but do not directly carry ranking weight.

Can you have a fast page speed score but still rank poorly?

Yes. Page speed is one of many ranking factors, and a fast page with thin content, no backlinks, or poor relevance to a query will rank below a slower page with authoritative, comprehensive content. Speed is a baseline requirement for competitive performance, not a substitute for content quality and relevance. Treat speed optimisation as removing a potential negative signal, not as a primary ranking strategy.

How do images affect page speed SEO?

Images are typically the largest assets on a page and have the most direct impact on load time and LCP. An unoptimised hero image can be the sole reason a page's LCP score is Poor. Beyond file size, images without explicit dimensions cause CLS, and images above the fold that are discovered late in the loading sequence delay LCP. A comprehensive image optimisation strategy — right format (WebP), right size, right dimensions, lazy loading for below-fold images, preloading for the LCP image — typically delivers the largest single improvement in page speed for content-heavy sites.
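The image strategy described above translates to a small amount of markup. The file paths are illustrative; the key pattern is preloading the LCP image while lazy-loading everything below the fold:

```html
<!-- Preload the above-the-fold hero so the browser discovers it
     immediately, even if it is referenced late (e.g. from CSS) -->
<link rel="preload" as="image" href="/img/hero.webp"
      fetchpriority="high">

<!-- The LCP image itself: explicit dimensions, high priority,
     and never lazy-loaded -->
<img src="/img/hero.webp" width="1200" height="600"
     fetchpriority="high" alt="Hero">

<!-- Below-the-fold images: native lazy loading defers the download
     until the image approaches the viewport -->
<img src="/img/gallery-1.webp" width="600" height="400"
     loading="lazy" alt="Gallery photo">
```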

How often should I test my page speed?

For most sites, monthly testing via Google Search Console's Core Web Vitals report is sufficient for monitoring. After any significant deployment — new theme, new plugins, new third-party scripts, major content changes — test immediately to check whether the change affected performance. Google Search Console emails notifications for Core Web Vitals regressions, so you are alerted automatically if a change causes pages to drop from Good to Needs Improvement. For e-commerce sites where conversions directly depend on speed, weekly monitoring is worthwhile.

Does page speed affect AdWords / Google Ads Quality Score?

Yes. Google Ads Quality Score includes landing page experience as a component, and landing page load speed is a factor in that assessment. Slow landing pages receive lower Quality Scores, which increases cost-per-click and reduces ad position. For paid search campaigns, page speed has a direct and measurable financial impact — faster landing pages cost less per click and convert at higher rates simultaneously.

For more on diagnosing and fixing specific page speed issues, see the complete guide to reducing page load time, run a site audit to identify issues across your site, or use the website speed test to check a specific URL.

Last updated: April 2026