How to Check If Google Has Indexed Your Site

If your pages are not in Google's index, they cannot appear in search results. Full stop. You can have the most valuable content on the web, the fastest hosting, the cleanest code, and a portfolio of incoming links, and none of it will matter for organic traffic until Google has actually crawled, processed, and indexed each URL. Checking indexation is therefore the single most fundamental SEO health check you can run — and it's one that many teams get wrong because they rely on a single, incomplete signal.

This guide covers every reliable method for confirming whether Google has indexed a specific page or an entire site, how to interpret the results, and what to do when pages aren't indexed. We'll also cover the common pitfalls (like trusting the site: operator more than it deserves), how to check indexation at scale, and how to diagnose the root cause when pages refuse to show up in search.

By the end of this guide you'll have a repeatable process: a fast check for day-to-day work, an authoritative check for stakeholder reports, and a bulk workflow for auditing hundreds or thousands of URLs without burning a full day. If you just need the quick version, jump straight to the Google Search Console URL Inspection tool — it's the only method Google themselves publish as authoritative.

Shortcut: Run a free audit on the RankNibbler homepage to instantly check for noindex tags, blocked robots directives, missing canonical URLs, and other technical issues that prevent indexing. 30+ checks, no signup.

What Indexing Actually Means

Indexing is the middle stage of Google's three-part pipeline: crawling (Googlebot discovers and fetches a URL), processing/rendering (Google parses the HTML, runs JavaScript, extracts links, and classifies the content), and indexing (the processed content is stored in Google's searchable index). Only URLs that complete all three steps are eligible to appear in search results.

A URL can be crawled but not indexed. This is normal — Google deliberately excludes duplicate, low-quality, or policy-violating content from its index. Read what is indexing for the full conceptual background, and what is crawl budget for why large sites especially need to understand what Google is actually processing.

Indexation is the prerequisite for ranking. Before you worry about keyword positions, click-through rates, or SERP features, make sure every page you care about is in the index. Any time spent optimising a URL that isn't indexed is wasted.

Why Checking Indexation Matters

Three scenarios make indexation checks a permanent part of any SEO workflow: publishing new content (each new URL has to reach the index before it can earn traffic), rolling out technical changes (a template, robots.txt, or canonical update can silently deindex pages), and migrating or redesigning a site (changed URLs put every page's indexed status back in question).

Indexation is also diagnostic: if a page suddenly stops getting impressions in Search Console, the first question is always "is it still indexed?" Answering that eliminates half the possible causes in one step.

Prerequisites: What You Need Before You Start

To run every check in this guide, you'll want access to the following:

- A verified Google Search Console property for your site
- The ability to view your pages' HTML source and HTTP response headers
- Your XML sitemap URL
- Read access to your robots.txt file

Method 1: Google Search Console URL Inspection (Most Authoritative)

The URL Inspection tool in Google Search Console is the single most authoritative way to check whether a specific page is indexed. It reports the exact status Google has for that URL — whether it's been discovered, whether it was crawled, whether it was indexed, which version Google is showing (canonical vs alternate), and what the last crawl date was.

Because the data comes directly from Google's own systems, this method is what Google engineers themselves recommend and what you should cite in any indexation report.

Step 1: Open URL Inspection

Sign in to search.google.com/search-console, select your property, and either use the search bar at the top of any page or click "URL Inspection" in the left sidebar.

Step 2: Enter the Full URL

Paste the complete canonical URL including the protocol (https://), the correct subdomain, and any trailing slash or query string that forms part of the canonical. Remember: https://www.example.com/page and https://example.com/page/ are two different URLs to Google.
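To see how many distinct URLs one "page" can hide behind, here's a small Python sketch (stdlib only, purely illustrative; example.com is a placeholder) that enumerates the protocol, www, and trailing-slash variants. Inspect the one that exactly matches your declared canonical.

```python
from urllib.parse import urlsplit, urlunsplit

def url_variants(url: str) -> set[str]:
    """Enumerate the protocol/www/trailing-slash variants of a URL.

    Each variant is a distinct URL to Google, so always check the one
    that matches your declared canonical exactly.
    """
    parts = urlsplit(url)
    host = parts.netloc.lower()
    bare = host[4:] if host.startswith("www.") else host
    # Both the slashless and trailing-slash forms of the path
    paths = {parts.path.rstrip("/") or "/", (parts.path.rstrip("/") or "") + "/"}
    variants = set()
    for scheme in ("http", "https"):
        for netloc in (bare, "www." + bare):
            for path in paths:
                variants.add(urlunsplit((scheme, netloc, path, parts.query, "")))
    return variants

# Eight distinct URLs that all "look like" the same page
for v in sorted(url_variants("https://www.example.com/page")):
    print(v)
```

If you inspect a variant that isn't the canonical, Search Console may truthfully report it as not indexed even though the page itself is fine.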

Step 3: Read the Main Status Line

After a few seconds, you'll see one of these headlines:

- "URL is on Google": the page is indexed and eligible to appear in search results.
- "URL is not on Google": the page is not in the index; the detail sections explain why.
- "URL is on Google, but has issues": the page is indexed, but enhancements such as structured data have problems worth reviewing.

Step 4: Expand Each Section

Click through "Discovery", "Crawl", "Indexing", and "Enhancements" to see detail. Key fields to check:

| Field | What It Tells You |
| --- | --- |
| Discovery > Sitemaps | Whether the URL was discovered through a sitemap you submitted |
| Discovery > Referring page | An example of an internal link Google used to find the URL |
| Crawl > Last crawl | The date Google last fetched the page — if it's months old, re-crawling may be needed |
| Crawl > Crawl allowed? | Whether robots.txt permits crawling |
| Crawl > Page fetch | Whether the fetch succeeded (any 4xx or 5xx appears here) |
| Indexing > Indexing allowed? | Whether a meta robots noindex directive is present — check your robots directives |
| Indexing > User-declared canonical | The canonical URL you declared on the page |
| Indexing > Google-selected canonical | The URL Google actually chose as canonical — if this differs from yours, you have a canonical conflict |

Step 5: Test Live URL (Optional)

Click "Test Live URL" at the top-right to have Google fetch the page in real time, as it exists right now, rather than showing you the last-known indexed version. This is invaluable when you've just fixed an issue and want to confirm the fix is live before requesting re-indexing.

Step 6: Request Indexing

If the page is new, or if you've just fixed a blocker, click "Request Indexing" at the top-right. Google queues the URL for priority crawling. This is not a guarantee — Google may still choose not to index — but it accelerates the evaluation.

There is a quota on Request Indexing submissions (historically around 10-20 per day), so don't waste them on pages with unresolved blockers. Fix the underlying issue first, confirm with Test Live URL that the fix is in place, then request indexing.

Method 2: The site: Search Operator

The site: operator filters Google search results to a single domain. It's the fastest indexation check — one line, no login — but it's not authoritative. Google has said multiple times that site: results are approximate and should not be used as a precise index coverage metric.

How to Use It

Open google.com and search:

site:example.com

To check a specific page:

site:example.com/blog/my-post

To check all pages in a subdirectory:

site:example.com/blog/

Google returns the URLs it currently has indexed that match the filter, ordered by its own relevance criteria. An approximate total count appears at the top of the results ("About 2,340 results").

Interpreting the Results

If your domain or page appears, it's indexed. If nothing appears, it may not be indexed — but absence from site: results is not definitive proof of non-indexation. The sampled count can be off by large margins, especially on large sites.

John Mueller at Google has been clear: use site: for directional signals, not precise counts. For authoritative indexation data, use Search Console.

When site: Is Useful

- Fast yes/no spot-checks on a handful of URLs, with no login required
- Getting a directional sense of how much of a subdirectory or subdomain is indexed
- Discovering indexed URLs you didn't expect, such as staging subdomains or parameter URLs

When site: Misleads

- The total count is sampled and approximate, not a coverage metric
- A page can be indexed yet filtered out of site: results
- Freshly indexed pages may not appear in site: results immediately

Method 3: Search the Exact Title or a Unique Phrase

If a page is indexed, you can find it by searching a distinctive string from its content. Wrap a unique sentence in quotation marks and search:

"a very specific sentence from the page"

Pick a phrase that's unlikely to exist on any other page on the web — 8-12 words from the body copy works well. If your page returns in the results, it's indexed and Google has processed the content.

This method is particularly useful when you suspect a page has been indexed but the site: operator is not showing it (for example, if Google has filtered it out of the domain-filtered results but still has it in the broader index).

Searching the Title Tag

Search for your exact title tag in quotes. If your page appears, it's indexed and Google is using your declared title. If a different URL appears for the same title, you may have title duplication between pages — run the title tag checker across your top URLs to find duplicates.

Method 4: Google Search Console Pages Report (Site-Wide View)

For a domain-wide view of indexation, the Pages report in Search Console is the single best source of truth. It breaks down every known URL into "Indexed" and "Not indexed" categories, with reasons attached to each non-indexed URL.

Step 1: Open the Pages Report

In Search Console, go to Indexing > Pages. The top of the report shows two numbers: the count of indexed URLs and the count of non-indexed URLs.

Step 2: Review the Reasons

Scroll down to "Why pages aren't indexed". Every non-indexed URL is assigned a reason, such as:

| Reason | Meaning |
| --- | --- |
| Crawled — currently not indexed | Google fetched the page but chose not to index it, usually a quality signal |
| Discovered — currently not indexed | Google knows the URL exists but hasn't crawled it yet, often a crawl budget or priority issue |
| Alternate page with proper canonical tag | The page correctly points to a canonical URL elsewhere — this is expected behaviour, not an error |
| Duplicate without user-selected canonical | Google found duplicate content and picked a canonical itself; you should declare one explicitly |
| Duplicate, Google chose different canonical | You declared a canonical, but Google disagreed and picked a different URL |
| Page with redirect | The URL redirects — the destination is what gets indexed, not this URL |
| Not found (404) | Server returns 404 — see what is a 404 error |
| Blocked by robots.txt | Your robots.txt disallows crawling |
| Excluded by 'noindex' tag | The page's meta robots tag says noindex |
| Soft 404 | Page returns 200 but Google classified it as a "page not found" response |
| Server error (5xx) | Intermittent server errors during Google's crawl |

Step 3: Drill Into Each Reason

Click a reason to see up to 1,000 example URLs affected. Export the list as CSV if you need to process it in a spreadsheet. For sites with more than 1,000 URLs in a single reason, rely on the overall counts plus a representative sample.
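Once exported, a few lines of Python can tally the reasons across files. Column headers vary between Search Console exports, so the "Reason" column name below is an assumption; adjust it to match your CSV.

```python
import csv
from collections import Counter

def count_reasons(csv_path: str, reason_col: str = "Reason") -> Counter:
    """Tally exported URLs by their non-indexed reason.

    The column name varies between exports, so pass reason_col
    explicitly if your header differs.
    """
    with open(csv_path, newline="", encoding="utf-8") as f:
        return Counter(row[reason_col] for row in csv.DictReader(f))

# counts = count_reasons("not_indexed.csv")
# for reason, n in counts.most_common():
#     print(f"{n:6d}  {reason}")
```

Sorting by most_common surfaces the dominant exclusion reason first, which is usually where the fix should start.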

Step 4: Monitor Trends

The Pages report shows a 3-month trend graph. A steady rise in "Indexed" pages and a flat or falling "Not indexed" line is healthy. A sudden drop in indexed pages is a flag to investigate immediately — usually a robots.txt accident, a noindex header rollout gone wrong, or a server reliability issue during Google's crawl.

Method 5: Submit and Check Your Sitemap

Submitting an XML sitemap is both a way to tell Google about every URL you want indexed and a way to track indexation progress. Each submitted sitemap gets its own status page in Search Console showing how many URLs it contained and how many have been indexed.

Read how to submit a sitemap to Google for the full setup. Once submitted, go to Indexing > Sitemaps in Search Console and click a sitemap to see its coverage. You'll see:

- The sitemap's status ("Success", or fetch/parse errors)
- The date Google last read the sitemap
- The number of discovered URLs
- A link to the Pages report filtered to that sitemap, showing indexed vs non-indexed counts

A healthy sitemap should have an indexed count within a few percent of the submitted count. Large gaps indicate coverage problems that need investigation.
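As a cross-check, you can parse the sitemap yourself and confirm that a URL is actually listed. A stdlib Python sketch (the example.com URLs are placeholders):

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace from the sitemaps.org protocol
SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text: str) -> list[str]:
    """Extract every <loc> entry from a standard XML sitemap."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", SITEMAP_NS)]

sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/blog/my-post</loc></url>
</urlset>"""

print("https://example.com/blog/my-post" in sitemap_urls(sample))  # True
```

A URL missing from the sitemap can still be indexed via links, but adding it gives Google an explicit discovery signal.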

Method 6: Bulk Indexation Checking

For sites with hundreds or thousands of URLs, one-at-a-time checks are impractical. Here's how to audit indexation at scale.

Step 1: Assemble Your URL List

Pull the list of URLs you want to verify. Sources include:

Step 2: Use the RankNibbler Bulk Checker

The bulk checker lets you paste up to 100 URLs at a time and run indexability checks across them. For each URL, it reports whether the URL is accessible (200 status), whether robots.txt allows crawling, whether a noindex directive is present, and what canonical URL is declared. This gives you the indexability picture — whether the URL could theoretically be indexed — before you verify actual indexation.

Step 3: Cross-Reference With Search Console

For each URL flagged as indexable by the bulk checker, check its actual indexed status using URL Inspection (one at a time for the highest priority URLs) or by exporting the Pages report for a comprehensive list.

Step 4: Automate With the Search Console API

For ongoing monitoring of very large sites, the Google Search Console API exposes the same URL Inspection data programmatically. Technical teams can build a scheduled job that checks a sample of high-priority URLs every week and alerts when indexed counts shift unexpectedly.
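As a sketch of what such a job might look like: the API call below assumes the google-api-python-client library and valid OAuth credentials (it's commented out so nothing here touches the network), and the parsing helpers reflect the documented inspectionResult.indexStatusResult response shape.

```python
# The call itself requires google-api-python-client and Search Console
# OAuth credentials; it looks roughly like this:
#
#   from googleapiclient.discovery import build
#   service = build("searchconsole", "v1", credentials=creds)
#   response = service.urlInspection().index().inspect(body={
#       "inspectionUrl": "https://example.com/page",
#       "siteUrl": "https://example.com/",
#   }).execute()

def is_indexed(response: dict) -> bool:
    """True when the inspection verdict reports the URL is on Google."""
    status = response.get("inspectionResult", {}).get("indexStatusResult", {})
    return status.get("verdict") == "PASS"

def coverage_state(response: dict) -> str:
    """Human-readable coverage string, e.g. "Submitted and indexed"."""
    status = response.get("inspectionResult", {}).get("indexStatusResult", {})
    return status.get("coverageState", "unknown")
```

Run the inspection for a rotating sample of priority URLs, store the results, and alert when is_indexed flips from True to False for any of them.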

What Happened to the cache: Operator?

Google historically exposed a cache: search operator that showed you the cached version of a URL. In 2024, Google deprecated this operator entirely. If you try cache:example.com/page today, you will not get a result page. Google has recommended using the Internet Archive (Wayback Machine) for historical snapshots of public URLs instead.

For the "is this indexed?" question, the cache operator was never authoritative anyway — a URL could be cached but not in the main index, or indexed but with no visible cache. URL Inspection in Search Console has always been the correct tool, and the cache deprecation just removes a distracting shortcut.

Common Reasons a Page Isn't Indexed

If you've confirmed a page is not indexed, one of the following is almost always the cause. Work through them in order.

1. The Page Is Too New

Google needs time to discover and process new URLs. For a brand-new page on an established site, indexation typically takes a few days. For a new site with few inbound links, it can take weeks. Submitting a sitemap and requesting indexing via URL Inspection both accelerate the process.

2. A Noindex Tag Is Present

Check the page's HTML source for <meta name="robots" content="noindex"> in the head, or an X-Robots-Tag: noindex HTTP response header. Either tells Google explicitly to exclude the page. Use the robots directives checker to verify.

3. Robots.txt Blocks Crawling

If your robots.txt disallows the URL path, Googlebot cannot fetch it. A disallowed URL generally will not be indexed — with the rare exception of a URL that's linked externally, in which case Google may index a URL-only entry with no title or description. Fix with the robots.txt generator.

4. A Canonical Tag Points Elsewhere

If the page declares a different URL as its canonical, Google will index the canonical target, not this URL. Use the canonical URL checker and review canonical URL best practices. Either the canonical is intentional and correct (the page is correctly a duplicate), or it was set wrong and needs fixing.

5. The Page Is Effectively Duplicate

If the content substantially overlaps another page, Google may consolidate them and index only one. Review for thin or boilerplate content, and add unique, substantive material. See what is duplicate content.

6. Low Quality or Thin Content

Pages Google classifies as low value get "Crawled — currently not indexed" status. Signs include very short content, boilerplate or template-heavy pages, and pages with minimal unique information. Strengthen the content, improve internal linking, and the page is more likely to be promoted into the index on re-crawl.

7. Discovery Problems

Google has to know a URL exists before it can index it. Make sure the page is linked from at least one other page on your site (the more, the better), included in your XML sitemap, and not orphaned. Review internal linking to ensure every important page is reachable from the homepage in a reasonable number of clicks.

8. Crawl Budget Exhaustion

Very large sites (hundreds of thousands of URLs or more) can have pages that Google has discovered but hasn't crawled yet because other pages on the site consume the crawl budget. Reduce low-value crawlable URLs, fix redirect chains, and improve server response times — see what is crawl budget.

9. Server Errors During Crawl

If your server returned 5xx errors when Googlebot tried to fetch the URL, that crawl attempt fails and the page cannot be indexed from it; repeated failures also deprioritise the URL in Google's crawl queue. Check server logs for the user-agent "Googlebot" and verify there are no errors. Your site speed and response reliability matter here.
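One way to run that log check: the Python sketch below scans access-log lines in the Apache/Nginx "combined" format (an assumption; adjust the regex to your format) and tallies 5xx responses served to requests whose user-agent claims to be Googlebot. User-agents can be spoofed, so verify suspicious hits with a reverse DNS lookup.

```python
import re
from collections import Counter

# Matches the request path, status code, and a Googlebot user-agent in a
# combined-format log line; adapt the pattern to your own log format.
LINE = re.compile(r'"[A-Z]+ (?P<path>\S+) HTTP/[^"]*" (?P<status>\d{3}) .*Googlebot')

def googlebot_5xx(log_lines):
    """Count 5xx responses per path for requests identifying as Googlebot."""
    errors = Counter()
    for line in log_lines:
        m = LINE.search(line)
        if m and m.group("status").startswith("5"):
            errors[m.group("path")] += 1
    return errors
```

Feed it an iterable of log lines (or an open file handle) and any non-empty result is worth investigating before requesting re-indexing.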

10. Penalties or Manual Actions

If your site has received a manual action from Google's web spam team, pages can be excluded or deindexed. Check the Manual Actions report in Search Console (under Security & Manual actions). Rare, but always worth ruling out.

Step-by-Step Troubleshooting Workflow

When a page isn't indexed, run through this checklist in order. Each step eliminates one possible cause.

  1. Confirm the exact URL you're checking. Normalise trailing slashes, protocol (http vs https), and www vs non-www. Use the version Google would see.
  2. Run URL Inspection. Note the status headline and each section's detail.
  3. Check robots.txt. Review your robots.txt for a Disallow rule matching the path, or use Search Console's robots.txt report (the standalone robots.txt Tester has been retired).
  4. Fetch the page and inspect the HTML. Look for a meta robots noindex, a conflicting canonical, or missing essential content.
  5. Inspect HTTP response headers. Use browser DevTools (Network tab) or a curl command — look for X-Robots-Tag: noindex.
  6. Run a RankNibbler audit. Catches all the common blockers in one report.
  7. Check the server status. Is the URL returning 200? Check at different times of day to rule out intermittent 5xx.
  8. Check internal linking. How many internal links point to this URL? If it's zero or one, strengthen the link graph.
  9. Check the sitemap. Is this URL listed in your submitted XML sitemap? Is the sitemap itself free of errors?
  10. Request Indexing. After fixes, use URL Inspection's Test Live URL, then Request Indexing.
  11. Wait 7-14 days, then re-check. If still not indexed, consider content quality and inbound signals.

How to Verify a Fix Worked

After you fix an indexation blocker, there's a lag before Google reflects the change. Here's how to confirm the fix stuck without guessing.

Step 1: Test Live URL

In URL Inspection, click Test Live URL. This forces Google to fetch the current live version. Check the Indexing section — if the fix is in place (e.g. the noindex is removed), Test Live URL will report "Indexing allowed? Yes" even though the last-crawled version still says "No".

Step 2: Request Indexing

Click Request Indexing. Google queues the URL for priority re-crawl.

Step 3: Monitor for Status Change

Re-run URL Inspection after a few days. Once Google has re-crawled, the "last crawl" date updates and the indexed version reflects the fix. The status should change from not indexed to indexed, both in URL Inspection and in the Pages report.

Step 4: Search For the URL

Once indexed, you can confirm by searching site:example.com/your-fixed-url. If the URL appears, you're done.

Ongoing Indexation Monitoring

Indexation isn't a one-time check — it's a continuous health signal. Here's a sustainable monitoring rhythm for teams of different sizes.

Small Sites (Under 100 Pages)

Check the Pages report monthly, and run URL Inspection on every new or significantly updated page at publish time. A quarterly full audit is plenty.

Medium Sites (100-10,000 Pages)

Review the Pages report weekly and watch the indexed/non-indexed trend lines. Keep one sitemap per site section so coverage gaps are easy to localise, and spot-check a sample of priority URLs with URL Inspection each month.

Large Sites (10,000+ Pages)

Automate: schedule Search Console API checks across a rotating sample of priority URLs, monitor server logs for Googlebot activity, and alert on sudden shifts in the Pages report counts.

Indexation vs Crawling vs Ranking: Don't Confuse Them

These three concepts are related but distinct, and confusing them leads to misdiagnosis.

| Concept | Means | Check With |
| --- | --- | --- |
| Crawling | Googlebot has fetched the URL | Server logs for Googlebot user-agent; URL Inspection "Last crawl" date |
| Indexing | The fetched URL is stored in Google's searchable index | URL Inspection "URL is on Google"; Pages report |
| Ranking | The indexed URL appears for specific queries at specific positions | Performance report in Search Console; rank tracking tools |

A URL can be crawled but not indexed (quality decision), indexed but not ranking for target queries (relevance and competition), or ranking in position 80 rather than position 5 (a ranking problem, not an indexation problem). Start every diagnosis by separating these three questions.

Tools to Speed Up Indexation Work

Here's a concise toolkit for everything covered in this guide:

- URL Inspection and the Pages report in Google Search Console, for authoritative per-URL and site-wide status
- The RankNibbler bulk checker, for indexability checks across up to 100 URLs at a time
- The robots directives checker, canonical URL checker, and robots.txt generator, for diagnosing specific blockers
- The title tag checker, for finding duplicate titles across top URLs
- The Search Console API, for automated monitoring at scale

Common Pitfalls to Avoid

Trusting site: Totals as a Coverage Metric

The count shown by site:example.com is sampled and approximate. Use Search Console's Pages report for any real reporting.

Checking the Wrong URL Variant

Trailing slash, protocol, and subdomain differences matter. If your canonical is https://www.example.com/page/ but you check https://example.com/page, you may get a misleading result.

Requesting Indexing Without Fixing the Blocker

Request Indexing is not a magic incantation. If a noindex tag is still present, Google will continue to exclude the page. Fix first, confirm with Test Live URL, then request.

Forgetting About Mobile-First Indexing

Google primarily uses the mobile version of your content for indexing. If your mobile page has a noindex tag or robots block that the desktop page doesn't, the mobile block wins. Always check the mobile user-agent behaviour, not just desktop.

Assuming "Not indexed" Means "Broken"

Some non-indexed URLs are correctly excluded — e.g., "Alternate page with proper canonical tag" means the page is a non-canonical duplicate and shouldn't be indexed. Don't treat the non-indexed count in Search Console as a pure error count.

Over-Submitting URLs for Indexing

Spamming Request Indexing doesn't accelerate things further and may exhaust your daily quota. Use it deliberately on the pages that matter.

Advanced: The Indexing API

Google offers an Indexing API that lets certain publishers notify Google of new or updated URLs programmatically. Currently the Indexing API is restricted to Job Posting and Broadcast Event content types — it's not a general-purpose indexing accelerator.

IndexNow, a separate protocol supported by Bing and Yandex, does offer a general-purpose URL submission mechanism. Google does not currently consume IndexNow. For Google, your tools are Search Console's Request Indexing, sitemap submission, and inbound link building.

Comparing the Methods: When to Use Which

| Scenario | Best Method |
| --- | --- |
| Confirming a single page's indexed status | URL Inspection in Search Console |
| Quick spot-check during content QA | site: operator or exact-title search |
| Site-wide indexation coverage report | Pages report in Search Console |
| Checking hundreds of URLs at once | Bulk checker + Search Console Pages export |
| Diagnosing why a URL isn't indexed | URL Inspection plus RankNibbler audit |
| Accelerating indexation of new content | Sitemap submission + Request Indexing |
| Continuous monitoring of a large site | Search Console API + server log analysis |
| Identifying duplicate content issues | site: with exact phrases; Pages report duplicate reasons |

Frequently Asked Questions

How long does it take for Google to index a new page?

For an established site with healthy crawl patterns, new pages are typically indexed within a few days. For brand-new domains or pages with no inbound links, it can take two to four weeks. Submitting the URL through a sitemap and requesting indexing via URL Inspection both accelerate the process.

Why does site: show different counts than Search Console?

The site: operator returns a sampled, approximate count that can diverge significantly from Search Console's Pages report. Search Console reflects Google's internal view, which is the authoritative number. If the two disagree, trust Search Console.

Can I check indexation without Search Console access?

Yes, using site: and exact-phrase searches, but without Search Console you lose the authoritative view. For any serious analysis, Search Console verification is essential — and free.

What's the difference between "Crawled - currently not indexed" and "Discovered - currently not indexed"?

"Discovered" means Google knows the URL exists but has not yet fetched it. Often crawl budget or priority. "Crawled - currently not indexed" means Google fetched the page, processed it, and chose not to index it — almost always a quality signal.

Does removing a noindex tag automatically re-index the page?

No. Google has to re-crawl the page to see the change. You can accelerate this by requesting indexing via URL Inspection, but it still depends on Google scheduling the crawl.

Can one page deindex and other pages stay indexed?

Yes. Indexation is per-URL. A quality signal affecting one page, a noindex tag on one template, or a canonical pointing away from one URL will deindex that URL alone while leaving the rest of the site unchanged.

Do I need to request indexing for every page of a new site?

No. Submit a sitemap, make sure internal linking is solid, and Google will discover and index most URLs on its own. Use Request Indexing only for the handful of highest-priority pages.

How many URLs does Google typically index per sitemap submission?

Sitemaps can contain up to 50,000 URLs and 50MB uncompressed. Indexation rate depends on site quality, crawl budget, and content value — not sitemap size. A healthy site should see most sitemap URLs indexed within weeks.

Can I deindex a page quickly?

Yes. Add a noindex meta tag (or X-Robots-Tag header), ensure the page is still crawlable so Google can see the directive, and use the Removals tool in Search Console for an immediate (temporary) hide from search results. Permanent removal happens when Google next re-crawls and honours the noindex.

Is it bad if not 100% of my sitemap URLs are indexed?

Not necessarily. A small gap (a few percent) is normal due to duplicates, thin content, and timing. A large gap (10%+) suggests coverage problems worth investigating. Compare the reasons in the Pages report to diagnose.

Does HTTPS affect indexation?

Sites should be on HTTPS for ranking and trust reasons — see what is HTTPS/SSL. Google indexes HTTP and HTTPS URLs but considers them distinct; consolidate with 301 redirects from HTTP to HTTPS to avoid duplication.

Can Google index pages with JavaScript-rendered content?

Yes, Google renders JavaScript as part of the indexing process. However, JS-heavy pages take longer to be indexed (sometimes days longer) than static HTML. If content is critical, render it server-side or ensure it's present in the initial HTML response.

Is there a way to force Google to re-crawl my whole site?

No. Crawl scheduling is Google's decision. You can influence it by submitting an updated sitemap (which signals changes), improving server speed (allowing more crawling within the same budget), and adding fresh content or links. Request Indexing works URL-by-URL but is rate-limited.

Final Thoughts

Indexation is the foundation of search visibility. Everything else — rankings, traffic, conversions — starts with Google having processed and stored your content. Build a lightweight habit of checking indexation on every new publish, monitoring the Pages report weekly, and running a full audit quarterly, and indexation problems will rarely get far enough to affect traffic.

When issues do arise, the workflow is consistent: confirm the exact URL, run URL Inspection, identify the blocker from the reason code, fix the blocker, verify with Test Live URL, and request indexing. That sequence catches every cause covered in this guide. Pair it with the RankNibbler audit for a fast pre-flight check before every publish, and indexation becomes a solved problem rather than an ongoing anxiety.

Run a free indexation check: Paste any URL into the RankNibbler homepage and we'll check for noindex tags, robots.txt blocks, canonical conflicts, and all other common indexation blockers in one report. No signup required.

Last updated: March 2026