What Are Core Web Vitals?
Core Web Vitals are a standardised set of three real-world performance metrics that Google uses to quantify the user experience of a web page. Introduced as part of the broader Page Experience signal, they became a confirmed Google ranking factor in June 2021 and have remained a core part of Google's page experience evaluation through subsequent algorithm updates. The three metrics — Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS) — measure how fast the primary content loads, how quickly the page responds to user input, and how stable the visual layout is as the page renders.
Unlike many SEO metrics that are purely technical, core web vitals are designed to reflect genuine user frustration. A page that loads slowly, freezes when you tap a button, or jumps around as images appear is a page that users abandon. Google's large-scale research showed a strong correlation between poor performance on these three dimensions and elevated bounce rates, reduced session depth, and lower conversion rates. That research is why these specific measurements — not arbitrary speed scores — became the basis of a ranking signal.
Understanding what Core Web Vitals are, how they are measured, and what you can do to improve them is now a fundamental part of technical SEO. This guide covers each metric in detail, explains the thresholds, describes how LCP, INP, and CLS affect search rankings, walks through lab versus field measurement, and gives you practical techniques for improving each score on any type of website.
A Brief History: From FID to INP
When Core Web Vitals launched in 2021, the interactivity metric was First Input Delay (FID). FID measured the delay between a user's first interaction with a page — a tap or click — and the moment the browser began processing that event handler. FID was a useful proxy for main-thread congestion, but it had a critical blind spot: it only measured the first interaction and it only measured the input delay, not the time it took the browser to actually produce a visible response.
In March 2024, Google replaced FID with Interaction to Next Paint (INP). INP measures the full latency of every interaction during a page visit — from user input through to the next paint that reflects the change — and reports the worst-case interaction (or near-worst at the 98th percentile). This is a substantially harder metric to pass because it captures sluggish interactions that happen mid-session, not just the first click on a still-loading page. Any site that passed FID comfortably may well fail INP if it has complex JavaScript running in response to user actions.
The transition from FID to INP is the most significant change to core web vitals since they launched, and it has caused many previously "good" sites to drop into the "needs improvement" band. If you have not audited your site since March 2024, INP is likely the metric most worth investigating first.
The Three Core Web Vitals: Thresholds and What They Measure
| Metric | What It Measures | Good | Needs Improvement | Poor |
|---|---|---|---|---|
| LCP — Largest Contentful Paint | Time from navigation start until the largest visible content element (image or text block) is rendered | < 2.5s | 2.5s – 4.0s | > 4.0s |
| INP — Interaction to Next Paint | End-to-end latency of the slowest interaction (click, tap, keyboard) during the session | < 200ms | 200ms – 500ms | > 500ms |
| CLS — Cumulative Layout Shift | Score of the largest burst of unexpected layout shifts during the page lifetime | < 0.1 | 0.1 – 0.25 | > 0.25 |
Google's pass/fail verdict is applied at the 75th percentile of real-user page loads. This means a page only achieves a "Good" rating if at least 75% of actual visitors experience that metric within the "Good" threshold. The 75th-percentile rule is important: optimising only for your own fast connection or fast device will not move the needle if a large portion of your audience is on slower hardware or mobile networks.
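To make the 75th-percentile rule concrete, the sketch below computes a nearest-rank p75 over a hypothetical array of field LCP samples; the sample values and the helper function are illustrative, not part of any Google tooling.

```javascript
// Illustrative only: how a 75th-percentile verdict is derived from field samples.
// `lcpSamples` is a hypothetical set of LCP values (in ms) from real page loads.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1; // nearest-rank method
  return sorted[Math.max(0, index)];
}

const lcpSamples = [1800, 1900, 2100, 2200, 2300, 2600, 3500, 4100];
const p75 = percentile(lcpSamples, 75);
console.log(`p75 LCP: ${p75}ms`, p75 <= 2500 ? 'Good' : 'Not Good'); // p75 LCP: 2600ms Not Good
```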
Largest Contentful Paint (LCP) in Depth
LCP marks the point in the page load timeline when the largest content element visible in the viewport has finished rendering. The browser considers the following element types as LCP candidates: <img> elements, <image> elements inside SVG, images loaded via CSS background-image, <video> poster images, and blocks of text (paragraphs, headings, list items). The element with the largest rendered area wins.
In practice, for most pages the LCP element is a hero image, a banner, a product photo, or a large heading. You can confirm exactly which element is triggering LCP in the Chrome DevTools Performance panel or in the Diagnostics section of PageSpeed Insights. Knowing the specific LCP element is the starting point for optimisation — it is meaningless to compress images that are not the LCP candidate.
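As a quick alternative to the DevTools UI, you can log LCP candidates directly in the console with a PerformanceObserver; this is a minimal sketch using the standard largest-contentful-paint entry type.

```javascript
// Log each LCP candidate as the browser reports it; the last one logged
// before the first user interaction is the element that determines LCP.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('LCP candidate:', entry.element, `at ${Math.round(entry.startTime)}ms`);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```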
Why LCP Matters for Users
LCP is the closest single metric to the moment a user feels the page has "arrived." Research by Google found that pages with an LCP under 2.5 seconds have significantly lower bounce rates. Every additional second of LCP delay correlates with an increase in users leaving before they engage with any content. For e-commerce pages, where the LCP element is often the primary product image, slow LCP directly suppresses conversion rate.
Common Causes of Poor LCP
- Slow server response times (TTFB). If the server takes more than 600ms to deliver the first byte of HTML, LCP will almost certainly be poor regardless of other optimisations. This is often the primary bottleneck for hosted CMS platforms and shared hosting; a quick way to read TTFB in the browser follows this list.
- Render-blocking resources. CSS and synchronous JavaScript in the <head> prevent the browser from rendering anything until they are downloaded and processed. Use the script loading checker to identify blocking scripts on your pages.
- Unoptimised LCP image. Large uncompressed images, wrong formats (PNG instead of WebP/AVIF), and missing width/height attributes all delay LCP.
- No resource hints on the LCP image. Without a <link rel="preload"> hint, the browser may not discover the LCP image until late in the waterfall.
- Client-side rendering. Pages that render their main content via JavaScript frameworks may have fast HTML delivery but a very slow LCP because the browser must execute JavaScript before any meaningful content appears.
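For the TTFB cause above, a quick way to check the value for the current page is the Navigation Timing API; this is a minimal console sketch, and the 600ms comparison simply mirrors the rule of thumb in the list.

```javascript
// Read TTFB for the current page from the Navigation Timing API.
// responseStart marks the arrival of the first byte, relative to navigation start.
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  const ttfb = Math.round(nav.responseStart);
  console.log(`TTFB: ${ttfb}ms`, ttfb > 600 ? 'likely an LCP bottleneck' : 'within the rule of thumb');
}
```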
How to Improve LCP: 7 Techniques
- Preload the LCP image. Add <link rel="preload" as="image" href="hero.webp"> in the <head> so the browser discovers and starts downloading the LCP element before the HTML is fully parsed. This single change can cut LCP by 0.5–1 second on image-heavy pages.
- Convert images to WebP or AVIF. WebP typically reduces file size by 25–35% vs JPEG at equivalent quality. AVIF goes further, often 40–50% smaller. Use the image dimensions checker to audit whether images have correct sizing and format across your pages.
- Set explicit width and height on all images. Missing dimensions force the browser to re-layout the page once the image loads, contributing to both slower LCP and higher CLS. Always declare both attributes in your <img> tags.
- Reduce Time to First Byte (TTFB). Use server-side caching, upgrade to a faster hosting tier, or deploy a CDN. For dynamic CMS sites, full-page caching (paired with an object cache such as Redis on WordPress) is the single highest-ROI infrastructure change available. Learn more in the guide on how to reduce page load time.
- Eliminate render-blocking scripts and stylesheets. Defer or async non-critical JavaScript. Split your CSS so only above-the-fold styles are in the critical path. Use the script loading checker to identify synchronous scripts in your page head.
- Use a CDN with edge caching. Serving static assets from servers geographically close to visitors reduces latency for both the HTML document and the LCP image. A CDN can reduce TTFB for international visitors by several hundred milliseconds.
- Avoid lazy-loading the LCP element. Adding loading="lazy" to the hero image is one of the most common LCP mistakes. Lazy loading defers image fetches until the element is near the viewport — exactly the wrong behaviour for the primary visible image. Use the lazy loading checker to confirm your above-the-fold images are not being incorrectly deferred. A markup sketch combining several of these techniques follows this list.
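As referenced in the list, here is a minimal markup sketch that combines the preload, explicit-dimensions, and no-lazy-loading techniques for a hypothetical hero image; the file paths and the fetchpriority hint are illustrative assumptions.

```html
<!-- Hypothetical hero markup combining several LCP techniques from the list above. -->
<head>
  <!-- Let the browser discover and prioritise the LCP image before parsing finishes. -->
  <link rel="preload" as="image" href="/images/hero.avif" fetchpriority="high">
</head>
<body>
  <!-- Explicit width/height reserve space (helps CLS); never loading="lazy" on the hero. -->
  <img src="/images/hero.avif" alt="Hero banner"
       width="1200" height="630" fetchpriority="high" decoding="async">
</body>
```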
Interaction to Next Paint (INP) in Depth
INP measures the responsiveness of a page to user interactions throughout the entire session. For every click, tap, or keypress, the browser records the time from the input event until the next frame is painted that reflects a visual response to that interaction. The INP score for the session is the worst-case interaction latency (at the 98th percentile of all interactions logged, if there are many).
The 200ms "Good" threshold is based on human perception research: delays shorter than 200ms feel instantaneous, while delays above 500ms feel sluggish or broken. The gap between FID and INP thresholds (FID was "Good" at under 100ms) exists because INP measures end-to-end interaction time including rendering, not just the initial event queue delay that FID measured.
The Three Phases of an Interaction
Every interaction measured by INP has three components that sum to the total latency:
- Input delay: The time from the user's input to when the event handler starts. This is caused by the main thread being busy with other work — parsing scripts, running timers, or executing other event handlers.
- Processing time: The time spent actually running the event handler JavaScript.
- Presentation delay: The time from when the event handler completes until the browser commits the updated frame to the screen. This is driven by style recalculation, layout, paint, and compositing work.
Understanding which phase is the bottleneck for your specific INP problem determines which technique will help most. Use the Chrome DevTools Performance panel and the PerformanceObserver API (or the web-vitals JavaScript library) to break interactions down into these three phases.
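As a starting point, the Event Timing API (the same data INP is built on) exposes everything needed to split a slow interaction into the three phases; the sketch below logs the breakdown for interactions longer than 200ms, a threshold chosen here simply to match the "Good" boundary.

```javascript
// Break slow interactions into input delay, processing time, and presentation delay.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    const inputDelay = entry.processingStart - entry.startTime;
    const processingTime = entry.processingEnd - entry.processingStart;
    const presentationDelay = entry.startTime + entry.duration - entry.processingEnd;
    console.log(entry.name, {
      inputDelay: Math.round(inputDelay),
      processingTime: Math.round(processingTime),
      presentationDelay: Math.round(presentationDelay),
    });
  }
}).observe({ type: 'event', durationThreshold: 200, buffered: true });
```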
Common Causes of Poor INP
- Long tasks on the main thread. Any JavaScript task that runs for more than 50ms on the main thread without yielding blocks event processing. Third-party scripts — analytics, tag managers, chat widgets, ad systems — are the most common source of main-thread congestion; a small observer that surfaces these tasks follows this list.
- Heavy event handlers. Handlers that synchronously update the DOM in complex ways, trigger forced reflow (by reading layout properties immediately after writing), or invoke expensive computations directly in response to clicks.
- Excessive DOM size. Pages with thousands of DOM nodes require more time for style recalculation and layout whenever any interaction triggers a change. This inflates the presentation delay phase of every interaction.
- Unoptimised hydration in JS frameworks. React, Vue, and Angular applications that hydrate large component trees on load can monopolise the main thread for hundreds of milliseconds, causing high INP for interactions that occur early in the session.
- Synchronous XHR in event handlers. Calling synchronous network requests inside a click handler blocks the main thread entirely until the request completes.
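The long-task observer below is a minimal sketch for surfacing the main-thread congestion described in the first item; attribution is often coarse, but it can point at the frame or container hosting the offending script.

```javascript
// Surface main-thread tasks over 50ms, the typical source of input delay.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    const source = entry.attribution && entry.attribution[0]
      ? entry.attribution[0].containerSrc || entry.attribution[0].containerType
      : 'unknown';
    console.log(`Long task: ${Math.round(entry.duration)}ms (source: ${source})`);
  }
}).observe({ type: 'longtask', buffered: true });
```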
How to Improve INP: 6 Techniques
- Break up long tasks. Refactor monolithic JavaScript functions into smaller chunks. Use setTimeout(fn, 0), scheduler.yield(), or requestAnimationFrame to yield control back to the browser between chunks, allowing it to process pending input events between tasks. A minimal sketch of this pattern follows this list.
- Audit and prune third-party scripts. Each third-party script added to a page extends the main thread's workload. Audit every tag in your tag manager, challenge whether each one is earning its performance cost, and remove or defer those that are not. A site audit can identify third-party script load patterns across your pages.
- Defer non-critical JavaScript. Add defer or async to scripts that do not need to execute during initial render. Move tag manager initialisation to fire after the page is interactive rather than during load.
- Use Web Workers for expensive computations. Move data processing, search indexing, or complex calculations off the main thread into a Web Worker. Workers run in a separate thread and cannot block input event processing.
- Reduce DOM size and complexity. Pages with over 1,400 DOM nodes start to show measurable presentation delay increases. Use virtual rendering for long lists, remove invisible DOM nodes that are permanently hidden, and simplify component structures in JS frameworks.
- Avoid layout thrashing in event handlers. Interleaving DOM reads and writes forces the browser to recalculate layout repeatedly. Batch all reads before writes (read offsetWidth for all elements, then set all heights in a single pass) to avoid forced synchronous layout while the handler runs.
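The sketch referenced in the first item shows one way to break a long task into chunks, using scheduler.yield() where the browser supports it and falling back to setTimeout; the items array and processItem callback are hypothetical placeholders.

```javascript
// Hypothetical example: process a large batch without blocking input events.
async function yieldToMain() {
  // scheduler.yield() is supported in recent Chromium; fall back to a macrotask elsewhere.
  if ('scheduler' in window && typeof scheduler.yield === 'function') {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processAll(items, processItem) {
  let lastYield = performance.now();
  for (const item of items) {
    processItem(item); // assumed to be small per item
    if (performance.now() - lastYield > 50) {
      await yieldToMain(); // let the browser handle pending clicks, taps, and paints
      lastYield = performance.now();
    }
  }
}
```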
Cumulative Layout Shift (CLS) in Depth
CLS measures the visual stability of a page — how much its content unexpectedly moves after the initial render. Every time an element shifts position in a way that was not caused by a deliberate user interaction, a layout shift score is calculated. That score is the product of the impact fraction (the proportion of the viewport affected by the shift) multiplied by the distance fraction (how far the shifting element travelled, relative to the largest dimension of the viewport). Shifts are grouped into "session windows" (bursts of shifts separated by gaps of less than a second, with each window capped at five seconds), and CLS is the score of the largest window during the page's lifetime.
The "Good" threshold of 0.1 sounds small, but a single large image without explicit dimensions loading above the fold can easily produce a CLS of 0.3 or higher, which is well into the "Poor" band. CLS is the metric with the most immediately visible user impact: users click the wrong link because the page jumped, or they lose their place while reading because content shifted below an ad that loaded late.
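To see individual shift scores as they happen, you can observe the standard layout-shift entry type; this minimal sketch keeps a simple running sum, which slightly overstates the official windowed CLS but is enough to spot the offending elements.

```javascript
// Log each unexpected layout shift; entry.value is impact fraction x distance fraction.
let runningTotal = 0;
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (!entry.hadRecentInput) { // shifts right after user input are excluded from CLS
      runningTotal += entry.value;
      console.log(`Shift: ${entry.value.toFixed(4)}, running sum: ${runningTotal.toFixed(4)}`, entry.sources);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```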
Common Causes of Poor CLS
- Images without width and height attributes. When the browser does not know an image's dimensions before it loads, it renders a zero-height placeholder. When the image arrives, the page reflows around it and pushes all subsequent content down. This is the single most common cause of high CLS.
- Ads, embeds, and iframes without reserved space. Ad slots that collapse to zero height before the ad loads, or expand to a taller size than reserved, are a chronic source of layout shift — particularly in the mobile experience.
- Late-loading fonts causing FOUT. Flash of Unstyled Text (FOUT) occurs when a web font loads late and replaces a fallback font with different metrics. If the new font is significantly wider or taller than the fallback, surrounding text reflows and produces layout shift.
- Dynamically injected content above existing content. Cookie consent banners, notification bars, or promotional strips that are inserted at the top of the page after initial render push all existing content down, sometimes by hundreds of pixels.
- Animations that use top/left instead of transform. CSS animations that change an element's top, left, margin, or padding properties trigger layout recalculation and contribute to CLS. Animations using transform and opacity are composited on the GPU and do not trigger layout shifts.
How to Improve CLS: 6 Techniques
- Always set explicit width and height on images and videos. Modern browsers use these attributes to calculate the aspect ratio and reserve the correct space before the media loads. Use the image dimensions checker to identify any images across your site that are missing these attributes.
- Reserve space for ads and dynamic embeds. Use min-height on ad containers to hold space even before the ad content loads. Work with your ad provider to ensure ad units do not expand beyond their reserved dimensions after initial render.
- Use lazy loading correctly. loading="lazy" on images below the fold is best practice. On above-the-fold images, lazy loading actively harms both LCP and CLS. The lazy loading checker flags images that are incorrectly deferred in the above-the-fold area.
- Stabilise web font loading. Use font-display: optional or font-display: swap combined with a font metric override (size-adjust, ascent-override, descent-override) to minimise the visual difference between the fallback and the web font. Preload critical fonts with <link rel="preload" as="font"> to reduce the window in which FOUT can occur.
- Inject banners from the bottom of the viewport, not the top. Consent bars and notification strips that slide up from the bottom of the screen do not push existing content and produce no layout shift. If a top banner is required by design, include it in the initial HTML so the browser knows its height before any content renders below it.
- Use CSS transform for all animations. Replace top/left/margin animations with transform: translateX()/translateY(). These run on the compositor thread without touching layout, eliminating layout shift from animations entirely. A short markup-and-CSS sketch combining several of these techniques follows this list.
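The sketch referenced at the end of the list shows reserved ad space, a metric-adjusted fallback font, and a transform-based animation together; the class names, font file, and override percentages are illustrative assumptions you would tune for your own font pairing.

```html
<!-- Hypothetical markup/CSS combining several CLS techniques from the list above. -->
<style>
  .ad-slot { min-height: 250px; }                   /* hold space before the ad renders */

  @font-face {
    font-family: "BrandFont";
    src: url("/fonts/brand.woff2") format("woff2");
    font-display: swap;
  }
  @font-face {
    font-family: "BrandFont Fallback";               /* metric-adjusted fallback to soften the swap */
    src: local("Arial");
    size-adjust: 105%;
    ascent-override: 92%;
  }
  body { font-family: "BrandFont", "BrandFont Fallback", sans-serif; }

  .promo-banner { transform: translateY(100%); transition: transform 0.3s ease; }
  .promo-banner.visible { transform: translateY(0); } /* compositor-only, no layout shift */
</style>

<link rel="preload" as="font" type="font/woff2" href="/fonts/brand.woff2" crossorigin>
<div class="ad-slot"><!-- ad injected here --></div>
<img src="/images/product.webp" alt="Product" width="800" height="600">
```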
How Core Web Vitals Affect Search Rankings
Core Web Vitals are part of Google's Page Experience ranking signal, which also includes HTTPS, mobile-friendliness, and the absence of intrusive interstitials. Within Page Experience, Core Web Vitals carry the most weight. Google has described the signal as a "tiebreaker" — when two pages are equally relevant and authoritative for a query, the one with better Core Web Vitals will rank higher. However, the signal is not powerful enough to allow a technically excellent but content-poor page to outrank a content-rich but slower page.
The practical implications of this framing are significant. On highly competitive queries where the top ten results are all high-quality, trustworthy pages covering the same topic, Core Web Vitals can be the deciding factor for positions 4–7. In verticals like e-commerce, travel, or local services where many pages are structurally similar, this tiebreaker effect is most pronounced. Pages in the "Poor" band for any metric face the greatest risk, since Google's guidance indicates that the ranking benefit goes to pages that achieve a "Good" assessment rather than to those that fail it.
There is also an indirect ranking effect via user engagement signals. Pages with poor core web vitals generate higher bounce rates and lower dwell times. While Google does not officially use bounce rate as a direct ranking signal, the underlying user frustration that causes bounces — slow content, jumpy layouts, unresponsive interactions — is exactly what poor CWV scores measure. Improving your scores therefore tends to improve both direct ranking signals and the behavioural signals that may influence rankings over time.
Run a site audit to check which pages on your site have performance and technical issues that correlate with poor Core Web Vitals scores, and use a free RankNibbler audit to inspect individual page metrics in detail.
Lab Data vs Field Data: Understanding the Difference
Core Web Vitals can be measured in two fundamentally different ways, and the distinction matters enormously for how you interpret your scores.
Lab Data (Synthetic Testing)
Lab data is collected in a controlled environment — typically a headless Chromium browser running on a fixed device profile with a throttled network connection. Tools that produce lab data include Google Lighthouse, WebPageTest, and the PageSpeed Insights "lab" section. Lab data is deterministic: run the same test twice and you get very similar results. It is excellent for:
- Debugging specific performance issues in a reproducible way
- Before/after comparisons when testing an optimisation
- Catching regressions in a CI/CD pipeline before deployment
- Analysing pages that do not yet have real-user traffic
The limitation of lab data is that it does not reflect real-user conditions. A throttled Moto G4 profile in a London data centre does not capture the experience of a user on a four-year-old Android phone on a congested 4G network in a rural area. Lab scores are directionally useful but should never be treated as the authoritative representation of your Core Web Vitals status for ranking purposes.
Field Data (Real User Monitoring)
Field data is collected from actual page loads by real users in real-world conditions. The primary source of public field data is the Chrome User Experience Report (CrUX), which Google collects from opted-in Chrome users and publishes at URL and origin level. Google Search Console's Core Web Vitals report and the PageSpeed Insights "field data" section both draw on CrUX. Field data is what Google uses for ranking decisions — not lab scores.
Field data is non-deterministic and aggregated. Because it reflects real-world variance — different devices, networks, geographies, and user behaviours — field scores are often significantly worse than lab scores for the same page. A page might score 1.8s LCP in Lighthouse but show a 3.4s LCP at the 75th percentile in CrUX, because a substantial portion of real visitors are on slower devices or connections.
For a complete performance monitoring setup, you need both: lab data for debugging and catching regressions, and field data (via Google Search Console or a Real User Monitoring platform) to understand your actual ranking status. For smaller sites that lack sufficient CrUX data, Google falls back to origin-level data or does not show field data at all — in that case, lab data from PageSpeed Insights is your best available proxy.
Core Web Vitals for Mobile vs Desktop
Google assesses Core Web Vitals separately for mobile and desktop page loads. The thresholds are identical — LCP under 2.5s, INP under 200ms, CLS under 0.1 — but the experience and the challenge of passing them differ considerably between platforms.
Mobile-Specific Challenges
Mobile devices have slower CPUs, less RAM, and are frequently on variable-speed cellular networks. This makes all three metrics harder to achieve on mobile:
- LCP on mobile is almost always worse than desktop because slower network speeds mean the LCP image takes longer to download, and slower CPUs extend the browser's rendering time.
- INP on mobile is the most significant difference. Mobile CPUs are 3–5x slower at executing JavaScript than mid-range desktop CPUs, and touch events have additional processing overhead compared to mouse clicks. A site with adequate INP on desktop may fail badly on mobile.
- CLS on mobile is often worse because mobile viewports are narrow, which means content is taller and a layout shift of the same pixel distance represents a larger fraction of the viewport height — directly inflating the CLS score.
Google uses mobile-first indexing, meaning the mobile version of your page is the one used for ranking. Prioritise your mobile Core Web Vitals scores above desktop when allocating optimisation effort.
Desktop Performance Characteristics
Desktop pages generally achieve better Core Web Vitals scores, but there are desktop-specific failure patterns. Pages with complex data tables, interactive dashboards, or heavy JavaScript applications — where users typically visit on desktop — often fail INP due to the complexity of interactions, even when the hardware is fast. CLS on desktop can be exacerbated by wide-format hero images that have no explicit dimensions declared, as a proportionally larger content area shifts by a larger absolute pixel amount.
Tools for Measuring Core Web Vitals
| Tool | Data Type | Best For | Notes |
|---|---|---|---|
| Google Search Console | Field (CrUX) | Monitoring at scale across entire site | Aggregates by URL group; requires sufficient traffic |
| PageSpeed Insights | Both | Quick per-URL check with both field and lab | Uses CrUX for field; Lighthouse for lab |
| Chrome DevTools (Performance) | Lab | Deep debugging of specific interactions | Use INP debugger extension alongside |
| Lighthouse | Lab | Automated audits and CI integration | CLI version available for build pipelines |
| WebPageTest | Lab | Multi-location, multi-device waterfall analysis | Free public instance; advanced scripting available |
| CrUX Dashboard (Looker Studio) | Field | Historical trend analysis over months | Free template; uses public CrUX API |
| web-vitals JS library | Field (custom RUM) | Sending CWV data to your own analytics | Official Google library; measures INP, LCP, CLS accurately |
For ongoing monitoring, Google Search Console is the most practical starting point because it surfaces failing URLs sorted by impact (number of affected sessions) across your entire property. Combine it with PageSpeed Insights for per-URL diagnosis and a full site audit to identify patterns across page templates.
Common Core Web Vitals Problems by Website Type
WordPress Sites
WordPress is the most common CMS platform and has well-documented CWV failure patterns. LCP problems are almost always caused by unoptimised featured images or hero images from page builder blocks (Elementor, Divi, Beaver Builder) that load images as CSS backgrounds rather than <img> elements — meaning preload hints do not work and images are discovered late. INP problems frequently originate from bloated plugin JavaScript: form validation libraries, slider scripts, and contact form bundles add hundreds of kilobytes of JavaScript that run on every page regardless of whether the feature is used on that page. CLS is often caused by Google Fonts loading late, ads from ad networks without reserved space, or WooCommerce product galleries that shift as thumbnails load. Recommended approach: use a caching plugin (LiteSpeed Cache, WP Rocket, or W3 Total Cache), serve images via a CDN, switch to a system font or preload a critical web font, and audit your active plugins aggressively.
E-Commerce Sites
Product listing pages (PLPs) and product detail pages (PDPs) are the most performance-sensitive pages in e-commerce. PLPs typically render dozens of product images, and if none of them have explicit dimensions, CLS scores are catastrophic. PDPs often have large hero product images that are not preloaded, and complex add-to-cart JavaScript that produces high INP on tap. Common additional issues: A/B testing scripts that inject personalised content above the fold (causing CLS), review widgets from third-party platforms that shift content below them, and size/colour swatches that trigger expensive DOM updates on every hover. The image dimensions checker and lazy loading checker are particularly useful for auditing product image handling at scale.
News and Publishing Sites
News sites face intense CLS pressure from advertising. Above-the-fold ad slots that collapse before the ad loads, then expand when it does, are endemic in ad-supported publishing. The textual content on news pages means LCP is often a text block rather than an image — which is faster to render — but large above-the-fold banner ads can delay render enough to push LCP into "Needs Improvement." INP on news sites is frequently poor due to the volume of analytics and tracking scripts firing on every page load. Many news sites also use infinite scroll, which continuously adds DOM nodes and can cause INP to degrade for users who scroll deeply into a session.
Single Page Applications (SPAs)
React, Vue, Angular, and similar JavaScript-framework SPAs have a distinctive Core Web Vitals profile. LCP is often very poor on initial load because the browser must download, parse, and execute a large JavaScript bundle before any meaningful content is rendered — the inherent cost of client-side rendering. INP can be excellent for in-app interactions that are handled client-side without network round-trips, but can be very poor for complex state updates that trigger full component tree re-renders. CLS is typically well-controlled in SPAs because layouts are defined in JavaScript and do not depend on browser layout inference, but dynamically loaded routes that fetch content before rendering can introduce layout shifts at route transition boundaries. Server-side rendering (SSR) or static site generation (SSG) with progressive hydration is the standard architectural solution to SPA LCP and INP problems.
Landing Pages and Lead Generation Pages
Landing pages built with drag-and-drop tools (Unbounce, Instapage, Leadpages) frequently have poor Core Web Vitals due to the tools' dependency on heavy builder frameworks, inline styles that prevent caching, and large hero images loaded without preload hints. INP is often affected by multi-step form validation JavaScript that runs on every keystroke. CLS is commonly caused by sticky header elements that animate into position after initial render, or progress bar widgets that are inserted above content.
Monitoring Core Web Vitals Over Time
Improving Core Web Vitals is not a one-time project. Any deployment — a new plugin, a third-party script, a redesigned page template — can regress scores. A monitoring process should include:
- Weekly checks in Google Search Console. The Core Web Vitals report shows URL groups moving between Good, Needs Improvement, and Poor. Any new group appearing in the Poor category should trigger an immediate investigation.
- Automated Lighthouse CI in your deployment pipeline. Set performance budgets (e.g. LCP must not exceed 3.0s in lab, CLS must not exceed 0.05) and fail the build if they are breached. This catches regressions before they reach production.
- Real User Monitoring (RUM) via the web-vitals library. Instrumenting your own analytics with the web-vitals library gives you granular field data segmented by page template, device type, geography, and connection speed — far more actionable than CrUX aggregates alone. A minimal instrumentation sketch follows this list.
- Monthly full-site audits. Run a site audit monthly to catch newly introduced issues across your entire URL inventory, not just the URLs you happen to test manually.
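As referenced in the RUM item above, a minimal instrumentation sketch with Google's web-vitals library looks like the following; the /analytics endpoint and the payload fields beyond the metric itself are assumptions for illustration.

```javascript
// Minimal RUM sketch using the official web-vitals library (npm: web-vitals).
import { onCLS, onINP, onLCP } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,     // "CLS", "INP", or "LCP"
    value: metric.value,
    rating: metric.rating, // "good", "needs-improvement", or "poor"
    page: location.pathname,
  });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
    fetch('/analytics', { method: 'POST', body, keepalive: true });
  }
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```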
Core Web Vitals and Page Speed: Related but Different
Core Web Vitals are sometimes conflated with general page speed, but they measure specific aspects of perceived performance rather than raw download time. A page can have an excellent 1.1s LCP but still fail Core Web Vitals due to high CLS or poor INP. Conversely, a page might score 80/100 in Lighthouse's overall performance score but pass all three Core Web Vitals in field data because the Lighthouse score blends many metrics with different weights.
For SEO purposes, the Core Web Vitals field data assessment in Google Search Console is the definitive measure of whether your pages "pass" Google's Page Experience signal. The Lighthouse performance score is a useful diagnostic tool but is not what Google uses for ranking. Read the full guide on what is page speed for a complete breakdown of how speed metrics relate to each other, and see the how to reduce page load time guide for infrastructure-level optimisations that benefit both page speed and Core Web Vitals.
For deep dives into individual metrics, see the dedicated guides: What Is Largest Contentful Paint (LCP)? and What Is Cumulative Layout Shift (CLS)?.
Frequently Asked Questions About Core Web Vitals
Are core web vitals a confirmed Google ranking factor?
Yes. Google confirmed Core Web Vitals as a ranking factor when the Page Experience update rolled out in June 2021. They form the measurable performance component of the Page Experience signal alongside HTTPS, mobile-friendliness, and safe browsing. Google has subsequently reaffirmed their importance in multiple algorithm documentation updates.
What happened to FID — is it still a Core Web Vital?
No. First Input Delay (FID) was retired as a Core Web Vital in March 2024 and replaced by Interaction to Next Paint (INP). FID is no longer used in Google's Page Experience ranking signal. INP is a more comprehensive measure of interactivity and is the metric that matters now. If your records show good FID scores, you need to re-audit using INP, as many sites that passed FID fail INP.
What is a "good" Core Web Vitals score?
A page achieves a "Good" assessment only when all three metrics — LCP, INP, and CLS — are in the "Good" band at the 75th percentile of real-user page loads. LCP must be under 2.5 seconds, INP must be under 200ms, and CLS must be under 0.1. If any single metric is in "Needs Improvement" or "Poor," the overall page assessment is not "Good."
How do I check my Core Web Vitals?
The most accessible starting point is Google Search Console (Search Console > Experience > Core Web Vitals), which shows your field data across your entire site. For individual page analysis, use PageSpeed Insights (pagespeed.web.dev) which shows both field data from CrUX and lab data from Lighthouse. The RankNibbler free audit surfaces performance signals and technical issues that affect Core Web Vitals at the page level.
Why is my PageSpeed Insights score high but my Core Web Vitals still "Needs Improvement"?
The Lighthouse performance score (shown as a number out of 100) is a weighted blend of multiple metrics including FCP, Speed Index, TBT, and others, not just the three Core Web Vitals. It is also a lab score, not a field score. Google uses field data (CrUX) for the actual Core Web Vitals ranking assessment. A high lab score does not guarantee good field performance if your real users are on slow devices or if your page behaves differently with real content, cookies, and third-party scripts loaded. Always check the "Field Data" section of PageSpeed Insights, not just the overall score.
How long does it take for Core Web Vitals improvements to affect rankings?
CrUX data is collected over a rolling 28-day window. This means improvements you make today will not be fully reflected in your Core Web Vitals field data for approximately four weeks. After your scores improve in CrUX, the corresponding ranking impact takes a further crawl-and-index cycle to propagate. In practice, expect four to eight weeks between deploying a meaningful improvement and seeing it reflected in Google Search Console and any corresponding ranking movement.
Does Core Web Vitals affect mobile and desktop rankings separately?
Yes. Google assesses Core Web Vitals separately for mobile and desktop using the corresponding segments of CrUX data. Because Google uses mobile-first indexing, the mobile assessment is the primary one for ranking in mobile search results, which represents the majority of Google searches. You should prioritise achieving "Good" status on mobile even if desktop scores are already passing.
My page doesn't have enough CrUX data — what happens?
If a URL does not have sufficient CrUX traffic to generate statistically reliable field data, Google may fall back to origin-level data (aggregated across your entire domain) for that URL's assessment. If there is insufficient origin-level data either, the page may not be assessed for Core Web Vitals at all, and the Page Experience signal is applied at a reduced confidence level. For new or low-traffic pages, lab data from PageSpeed Insights is your best available indicator of likely performance.
Can you fail Core Web Vitals just because of CLS?
Yes. All three metrics must individually pass the "Good" threshold. A page with excellent LCP (1.5s) and INP (120ms) but a CLS of 0.18 will fail the Core Web Vitals assessment because CLS is in the "Needs Improvement" band. There is no averaging or weighting across the three metrics — each one must independently achieve "Good" at the 75th percentile. Use the image dimensions checker to address the most common cause of CLS failures.
Do Core Web Vitals affect all types of searches equally?
Google has indicated that Core Web Vitals influence rankings across web search results broadly. However, the tiebreaker nature of the signal means the practical impact is most pronounced in competitive verticals where many pages are equally relevant — commercial e-commerce queries, local service searches, news results, and competitive informational queries. For very specific or long-tail queries where only a few pages are relevant, content quality and authority will dominate over page experience signals.
What is the relationship between Core Web Vitals and the INP metric specifically?
INP (Interaction to Next Paint) is the newest and often the least well-understood of the three Core Web Vitals. Because it replaced FID only in March 2024, many performance guides and tools still discuss FID. INP is meaningfully harder to pass than FID because it measures every interaction during a session (not just the first) and includes the full time to paint (not just the input delay). Sites with complex JavaScript, heavy third-party scripts, or rich interactive features are the most likely to fail INP. Improving INP typically requires profiling real user interaction traces in Chrome DevTools rather than relying on Lighthouse scores alone, since Lighthouse's TBT metric is only a partial proxy for INP.
Should I use loading="lazy" on images to improve Core Web Vitals?
Yes, but only for below-the-fold images. Lazy loading is one of the most effective techniques for reducing the number of image requests made during initial page load, which indirectly helps LCP by reducing network contention. However, applying loading="lazy" to above-the-fold images — especially the LCP element — actively worsens LCP because the browser delays the fetch. Use the lazy loading checker to verify that your above-the-fold images are not incorrectly set to lazy, and that below-the-fold images have lazy loading enabled.
Last updated: April 2026