SEO Glossary: 80+ SEO Terms, Definitions & Terminology
This SEO glossary covers more than 80 of the most important SEO terms, definitions, and concepts used in search engine optimisation today. Every definition is written in plain English so beginners can follow along, while the level of detail is thorough enough to be useful for marketing managers, developers, and experienced SEO practitioners.
Whether you have just started learning SEO or you are reviewing terminology before a technical audit, this SEO dictionary is designed to be your go-to reference. Use the letter-group navigation below to jump straight to the terms you need, or read through from top to bottom for a solid grounding in SEO terminology.
Who is this glossary for? If you are new to SEO, start with the SEO for beginners guide and use this glossary alongside it. If you are more experienced, bookmark this page as a quick reference when you encounter unfamiliar SEO terms in reports, tools, or client briefs. Developers will find the technical entries — crawling, indexing, Core Web Vitals, robots.txt, structured data — particularly useful. Content writers and marketers will get the most value from the keyword research and on-page terms.
Jump to a section:
A–C | D–F | G–I | J–L | M–O | P–R | S–Z | FAQ
A – C
| Term | Definition |
|---|---|
| Alt Text | Alt text (short for alternative text) is a written description of an image added inside the HTML img tag using the alt attribute. Search engines cannot see images the way humans do, so they rely on alt text to understand what an image depicts and to determine whether it is relevant to the surrounding content. Good alt text is descriptive, concise (typically under 125 characters), and includes a target keyword where it fits naturally — without keyword stuffing. It also improves accessibility for users who rely on screen readers. Use the RankNibbler alt text checker to audit every image on your page, and see the example image markup below this table. |
| Anchor Text | Anchor text is the clickable, visible words that form a hyperlink. For example, in the phrase "read our internal linking guide", the words "internal linking guide" are the anchor text. Search engines use anchor text to understand the topic of the destination page, which makes it an important on-page SEO signal. Descriptive, keyword-rich anchor text helps both users and search engines, while vague phrases like "click here" or "read more" provide very little context. Vary your anchor text naturally; exact-match anchors used too often can look manipulative to Google. |
| Backlink | A backlink is a hyperlink from one website that points to a page on a different website. Backlinks are one of the most powerful ranking factors in SEO because Google treats each link as a vote of trust or authority from one site to another. Not all backlinks are equal — a link from a well-respected, relevant site in your industry is worth far more than a link from a low-quality or unrelated directory. The practice of deliberately acquiring backlinks is called link building. You can disavow harmful backlinks through Google Search Console if you believe they are hurting your rankings. |
| Bounce Rate | Bounce rate is the percentage of sessions where a visitor arrives on a page and then leaves without interacting further — without clicking a link, submitting a form, or visiting any other page on the site. A high bounce rate is not automatically bad; if a user finds exactly what they needed on a single page (such as a phone number or a quick answer), that is a successful visit. However, a high bounce rate on pages where you want deeper engagement can signal poor content relevance, slow load times, or a confusing user experience. In GA4, bounce rate is defined as the inverse of engagement rate — the percentage of sessions that were not engaged. |
| Broken Link | A broken link is a hyperlink that points to a URL that no longer exists or cannot be reached, typically returning a 404 (Not Found) error. Broken links create a poor user experience and can waste your crawl budget — Google's bots follow links to discover pages, and landing on dead ends is inefficient. Broken links can occur when pages are deleted, URLs change without a redirect being set up, or external sites remove content you linked to. Use the RankNibbler broken link checker to find and fix broken links on your site, and see our guide on how to fix broken links. |
| Canonical URL | A canonical URL (indicated by a rel="canonical" tag in the page <head>) tells search engines which version of a page is the preferred, definitive copy when similar or duplicate content exists across multiple URLs. For example, if your product page is accessible at both /product and /product?ref=newsletter, the canonical tag points Google to the clean version so ranking signals are not split. Canonical tags are essential on e-commerce sites where the same product might appear under multiple category paths or with different URL parameters. They are a recommendation, not a directive — Google may still choose a different canonical if it disagrees. See the example <head> snippet below this table. |
| CLS (Cumulative Layout Shift) | Cumulative Layout Shift is one of Google's Core Web Vitals. It measures the visual stability of a page — specifically, how much the content unexpectedly shifts around as the page loads. A high CLS score means elements are jumping around while the page renders, which is frustrating for users who might accidentally tap the wrong button or lose their place while reading. Common causes include images without declared dimensions, ads that load late, and web fonts that trigger a layout change. Google's target CLS score is 0.1 or below for a "good" experience. |
| Core Web Vitals | Core Web Vitals are a set of real-world performance metrics that Google uses as part of its page experience ranking signals. The three metrics are LCP (loading speed), INP (interactivity), and CLS (visual stability). Google officially rolled Core Web Vitals into its ranking algorithm in 2021 and replaced FID with INP in 2024. You can view your site's Core Web Vitals in Google Search Console under the Core Web Vitals report, or test individual URLs with tools such as Google PageSpeed Insights. URLs that meet all three thresholds are reported as providing a "good" page experience. |
| Crawl Budget | Crawl budget refers to the number of pages Googlebot (or another search engine crawler) will crawl on your website within a given period. Google allocates crawl budget based on your site's size, authority, and server health. If your site has millions of pages, low-quality pages, or a slow server, Google may not crawl all your content regularly — meaning new or updated pages take longer to be indexed. You can help Google prioritise important pages by using a well-structured XML sitemap, fixing broken pages, removing thin or duplicate content, and using the noindex directive on pages that don't need to rank. |
| Crawling | Crawling is the process by which search engine bots (also called spiders or web crawlers) systematically discover and read web pages across the internet. Googlebot, for example, starts from a known set of URLs and then follows hyperlinks on those pages to find new ones, continuously adding newly discovered URLs to a queue. The content found during crawling is then processed and (if eligible) added to the search engine's index. You can control which parts of your site crawlers can access using a robots.txt file. Pages blocked from crawling cannot be indexed, and pages that are not linked to from anywhere are very difficult for crawlers to find. |
| CSS & JavaScript (render-blocking) | Render-blocking CSS and JavaScript are resources that prevent a browser from displaying the visible content of a page until they have fully loaded and processed. This directly hurts load time metrics like LCP and creates a poor user experience, especially on mobile. Search engines care about page speed as a ranking factor, so minimising render-blocking resources is an important technical SEO task. Common fixes include deferring non-critical JavaScript, inlining critical CSS, and using the async or defer attributes on script tags. Use the RankNibbler CSS and JS checker to identify render-blocking issues on your pages, and see the defer and async example below this table. |
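For the Alt Text and CLS entries above, here is a minimal sketch of image markup that combines descriptive alt text with declared dimensions — the file name, dimensions, and wording are placeholders, not recommendations for any specific page:

```html
<!-- Descriptive alt text helps search engines and screen-reader users understand the image;
     explicit width and height let the browser reserve space, preventing layout shift (CLS) -->
<img src="/images/trail-running-shoes-blue.jpg"
     alt="Blue trail running shoes with a wide toe box, side view"
     width="800"
     height="600">
```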
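And a sketch of a page <head> that ties together the Canonical URL and render-blocking entries — the URLs and file names are illustrative only:

```html
<head>
  <!-- Points search engines to the preferred URL when parameterised duplicates exist -->
  <link rel="canonical" href="https://www.example.com/product">

  <!-- defer: download in parallel, execute only after the HTML has been parsed -->
  <script src="/js/main.js" defer></script>

  <!-- async: execute as soon as downloaded; suits independent scripts such as analytics -->
  <script src="/js/analytics.js" async></script>
</head>
```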
D – F
| Term | Definition |
|---|---|
| Black Hat SEO | Black hat SEO refers to optimisation techniques that violate Google's spam policies (formerly the Webmaster Guidelines) in an attempt to manipulate search rankings artificially and quickly. Common black hat tactics include cloaking (showing different content to users and bots), link schemes (buying or exchanging links at scale), keyword stuffing, content scraping, and using private blog networks (PBNs) to build artificial backlinks. While these tactics can produce short-term ranking gains, they carry a high risk of manual penalties or algorithmic demotions that can wipe a site's rankings entirely. The safer, more sustainable approach is white hat SEO — creating genuine value for users in line with Google's guidelines. |
| 404 Error | A 404 error (Not Found) is an HTTP status code returned when a server cannot find the page a user or crawler is requesting — typically because the page has been deleted, the URL has changed, or the link is incorrect. From an SEO perspective, a high number of 404 errors can waste crawl budget and create a poor user experience. If a deleted page had valuable backlinks pointing to it, those links are wasted once the page 404s. The correct fix is to set up a 301 redirect from the old URL to the most relevant live page, preserving any link equity. Use the broken link checker to identify 404s across your site. |
| Click-Through Rate (CTR) | Click-through rate (CTR) is the percentage of people who see your page listed in search results and then click on it. It is calculated as clicks divided by impressions, expressed as a percentage — for example, 50 clicks from 2,000 impressions is a CTR of 2.5%. A higher CTR means your title tag and meta description are compelling and relevant to the searcher's query. Google uses CTR data as one of many signals when evaluating the quality and relevance of a search result, though the exact weighting is debated. You can view your pages' CTR data in Google Search Console under the Performance report. Improving CTR typically involves writing more compelling title tags, adding numbers or power words, and ensuring meta descriptions offer a clear reason to click. |
| Cloaking | Cloaking is a black hat SEO technique where a website shows different content to search engine crawlers than it shows to human users. For example, a page might display keyword-rich text to Googlebot while showing an image or sales page to real visitors. Google explicitly forbids cloaking in its spam policies because it deceives both users and search engines. When discovered, cloaking almost always results in a manual action that removes the offending pages from Google's index. Legitimate A/B testing is not considered cloaking as long as it is implemented in line with Google's testing guidance and is not used to show bots deceptive content. |
| De-indexing | De-indexing is the removal of a URL from a search engine's index, meaning it will no longer appear in search results. De-indexing can happen intentionally (by adding a noindex tag, or submitting a URL removal request in Google Search Console) or unintentionally (due to crawl errors, a blocking robots.txt rule, or a manual action penalty). If important pages on your site have been de-indexed unexpectedly, check the Page indexing report (formerly the Coverage report) in Google Search Console for errors, and use the URL Inspection tool to see how Google views those pages. |
| Domain Authority (DA) | Domain Authority is a proprietary metric developed by Moz that attempts to predict how likely a website is to rank well in search results, expressed on a logarithmic scale from 1 to 100. A higher DA generally correlates with more and stronger backlinks pointing to that domain. It is important to understand that DA is a third-party metric — it is not used by Google and has no direct influence on rankings. However, it is a useful proxy when comparing the relative link strength of different websites. Similar metrics from other tools include Ahrefs' Domain Rating (DR) and Semrush's Authority Score. |
| Dofollow Link | A dofollow link is a standard hyperlink that passes link equity (also called "link juice") from the linking page to the destination page. When no link attribute is specified, all links are dofollow by default. Search engines follow these links and count them as votes of authority, which contributes to the destination page's ability to rank. Dofollow links from high-authority, relevant websites are among the most valuable assets in SEO. The opposite of a dofollow link is a nofollow link, which carries the rel="nofollow" attribute and does not pass link equity. |
| Duplicate Content | Duplicate content refers to blocks of content that appear on multiple URLs, either within the same website or across different websites. Search engines struggle to decide which version to rank when identical content exists in multiple places, which can dilute the ranking potential of all affected pages. On-site duplicate content is often caused by URL parameters, HTTP vs. HTTPS versions, trailing slashes, or printer-friendly page variants. The preferred solution is to use a canonical tag to signal the preferred URL. Intentional plagiarism of content from other sites can lead to a manual action penalty. |
| Dwell Time | Dwell time is the amount of time a user spends on a web page after clicking on it from a search result, before returning to the SERP. It is an implied engagement signal — if a user spends 30 seconds on a page before bouncing back, that suggests the page did not satisfy their query well. If they spend five minutes reading and then close the tab entirely, that suggests a much better match. While Google has not officially confirmed dwell time as a direct ranking factor, behavioural signals like this are believed to influence how Google's machine learning systems evaluate content quality over time. Improving dwell time involves making your content more engaging, readable, and genuinely useful. |
| E-E-A-T | E-E-A-T stands for Experience, Expertise, Authoritativeness, and Trustworthiness. It is the framework Google's quality raters use to evaluate the overall quality of web content and the credibility of its authors and publishers. The first "E" (Experience) was added in 2022 to recognise content written by people with first-hand experience of the subject matter. While E-E-A-T is not a direct algorithmic ranking factor, it underlies many of Google's quality guidelines and is especially important for YMYL (Your Money or Your Life) topics such as health, finance, and legal advice. Demonstrating E-E-A-T involves crediting qualified authors, earning authoritative backlinks, and building a strong online reputation. |
| External Link | An external link is a hyperlink on your website that points to a page on a different domain. Linking out to high-quality, relevant external sources can be beneficial for SEO because it signals to search engines that you have researched the topic thoroughly and are providing genuine value to users. However, every external link you add is also a route through which users can leave your site, so link out purposefully and consider opening external links in a new tab. You should use the rel="nofollow" or rel="sponsored" attribute on paid links and links you do not wish to endorse. See the rel attribute examples below this table. |
| Featured Snippet | A featured snippet (sometimes called "position zero") is a special search result that Google displays at the very top of the results page, above the regular organic listings, in a highlighted box. Featured snippets pull a short excerpt directly from a web page and display it alongside the page's title and URL. They appear most often for queries that are phrased as questions or seek a specific definition, step-by-step process, or list. To optimise for featured snippets, structure your content with clear headings and provide concise, direct answers immediately below the relevant heading. Featured snippets can significantly boost click-through rates for the pages that earn them. An example question-and-answer structure appears below this table. |
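To illustrate the External Link entry above, here is a short sketch of the link attributes involved — the destination URLs and anchor wording are placeholders:

```html
<!-- Standard editorial link (dofollow by default): passes link equity -->
<a href="https://example.com/research-study">independent research study</a>

<!-- Paid, affiliate, or sponsored placement -->
<a href="https://example.com/partner-offer" rel="sponsored">partner offer</a>

<!-- A link you do not wish to endorse -->
<a href="https://example.com/unvetted-source" rel="nofollow">unvetted source</a>
```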
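And a sketch of the question-plus-concise-answer structure the Featured Snippet entry recommends — the wording is purely illustrative, and there is no guarantee Google will select any given passage:

```html
<h2>What is a featured snippet?</h2>
<p>A featured snippet is a highlighted box at the top of Google's results that
   answers a query directly, using a short excerpt pulled from a web page along
   with that page's title and URL.</p>
```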
G – I
| Term | Definition |
|---|---|
| Google Algorithm | Google's search algorithm is the complex, proprietary system of rules and machine learning models that determines how web pages are ranked in search results for any given query. Google processes hundreds of ranking signals simultaneously, including keyword relevance, page quality, backlink authority, page experience, user engagement, and freshness. The algorithm is updated thousands of times each year, ranging from minor tweaks to major "core updates" that can significantly shift rankings across many sites at once. Major named updates — such as Panda (content quality), Penguin (link spam), Hummingbird (semantic search), and the Helpful Content system — have reshaped how SEO is practised over the past decade. |
| Google Analytics | Google Analytics (GA) is Google's free web analytics platform that tracks and reports website traffic and user behaviour. It shows where your visitors come from (organic search, social media, direct, referral), what pages they visit, how long they stay, and whether they complete key goals like form submissions or purchases. The current version, GA4, uses an event-based data model rather than the session-based model used in Universal Analytics (which was sunset in 2023). For SEO purposes, GA4 is used alongside Google Search Console to understand both the traffic SEO generates and how well that traffic engages once on the site. |
| Google Search Console | Google Search Console (GSC) is a free tool provided by Google that lets website owners monitor how their site performs in Google Search. GSC shows which queries drive impressions and clicks, which pages are indexed, whether there are any crawl errors or manual actions, and how Core Web Vitals look across the site. It is an essential, free tool for every SEO practitioner. You can also use GSC to submit new XML sitemaps, request re-indexing of updated pages, and remove URLs from Google's index. Read our guide on how to use Google Search Console for a full walkthrough. |
| H1 Tag | The H1 tag is the primary HTML heading on a web page and should describe the main topic of the page's content. Every page should have exactly one H1 tag — multiple H1s on the same page can confuse search engines about the page's primary focus. The H1 should naturally contain your primary target keyword and should closely match the page's title tag while not necessarily being identical. Think of the H1 as the headline of a newspaper article: it sets expectations for the reader and signals the page's core subject to search engines. For guidance on writing effective headings, see how to write H1 tags. An example heading structure appears below this table. |
| H2–H6 Tags | H2 through H6 tags are subheading elements that create a hierarchical structure within a page's content. H2 tags represent the main sections of the page, H3 tags are subsections within those sections, and so on. This heading structure (sometimes called "content hierarchy") helps users scan and navigate the page, and helps search engines understand how topics and subtopics relate to each other. Using your secondary and related keywords naturally in H2 and H3 tags is a sound on-page SEO practice. Use the heading extractor tool to view and audit the heading structure of any URL. |
| HTTPS / SSL | HTTPS (Hypertext Transfer Protocol Secure) is the encrypted version of HTTP. It uses SSL/TLS certificates to create a secure, encrypted connection between a user's browser and the web server, protecting data in transit from interception. Google confirmed HTTPS as a lightweight ranking signal in 2014, and today virtually all reputable websites use HTTPS as standard. Browsers like Chrome actively warn users when they visit non-HTTPS pages, which can significantly hurt trust and conversion rates. Use the HTTPS checker to verify your site's SSL configuration is correct and that all pages redirect properly from HTTP to HTTPS. |
| Indexing | Indexing is the process by which a search engine processes and stores a web page in its enormous database (the "index") after crawling it. Only indexed pages can appear in search results. When Googlebot crawls a page, it analyses the content, extracts links, and (assuming the page is eligible) adds it to Google's index. Pages can be prevented from being indexed using the noindex meta tag or X-Robots-Tag HTTP header. If a page is not appearing in search results, the first step is to check whether it is actually indexed using the site: operator in Google or the URL Inspection tool in Search Console. See our guide on what indexing means for more detail. |
| Internal Link | An internal link is a hyperlink that connects one page on your website to another page on the same website. Internal links serve two important SEO functions: they help users navigate to related content, and they help search engines discover and understand the relationship between pages on your site. A well-structured internal linking architecture distributes link equity from high-authority pages (like your homepage) to deeper pages that need a ranking boost. Use descriptive, keyword-rich anchor text for internal links rather than generic phrases. See the full internal linking guide for strategy and best practices. |
| INP (Interaction to Next Paint) | Interaction to Next Paint is the Core Web Vital that replaced First Input Delay (FID) in March 2024. INP measures the responsiveness of a page to all user interactions — clicks, taps, and keyboard inputs — throughout the entire page lifecycle, not just the first interaction. It reports the worst-case interaction delay observed during a visit, giving a more complete picture of how interactive a page feels. A good INP score is 200 milliseconds or less. INP issues are typically caused by heavy JavaScript execution that blocks the browser's main thread. Optimising INP often requires code splitting, deferring non-critical scripts, and breaking up long tasks. |
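Here is a sketch of the heading hierarchy and descriptive internal anchor text covered in the H1, H2–H6, and Internal Link entries above — the page topic, headings, and link target are invented for illustration:

```html
<h1>Technical SEO Audit Guide</h1>

<h2>Crawling and indexing</h2>
<p>Start by reviewing your <a href="/guides/robots-txt">robots.txt rules</a>
   and your XML sitemap coverage.</p>

<h3>Checking crawl errors</h3>
<!-- H3s sit beneath their parent H2; avoid skipping heading levels for styling reasons -->

<h2>Core Web Vitals</h2>
<h3>Improving LCP</h3>
<h3>Reducing CLS</h3>
```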
J – L
| Term | Definition |
|---|---|
| JSON-LD | JSON-LD (JavaScript Object Notation for Linked Data) is the recommended format for adding structured data (schema markup) to web pages. Unlike older methods of embedding schema that required altering existing HTML attributes, JSON-LD is added as a separate <script type="application/ld+json"> block in the page's <head> or <body>, making it cleaner and easier to maintain. Google strongly recommends JSON-LD over Microdata and RDFa. Use the RankNibbler schema generator to create JSON-LD markup for common content types including articles, FAQs, local businesses, and products. A minimal example appears below this table. |
| Hreflang | Hreflang is an HTML attribute (or HTTP header / XML sitemap tag) that tells search engines which language and regional version of a page to show to users in different countries. For example, a business with separate English pages for the UK and the US would use hreflang tags to ensure UK searchers see the UK version and US searchers see the US version. Without hreflang, Google may show the wrong regional page to users or treat the pages as duplicate content. Hreflang implementation can be complex on large sites with many language variants, and errors are common — mismatched return tags are a frequent issue to check. See the example hreflang tags below this table. |
| Keyword Research | Keyword research is the process of identifying the words and phrases that people type into search engines when looking for products, services, or information related to your website. Effective keyword research reveals which terms have enough search volume to be worth targeting, how competitive those terms are (i.e., how hard it will be to rank for them), and what the searcher's intent is behind each query. Tools like Google Keyword Planner, Ahrefs, and Semrush are commonly used for keyword research. The output of keyword research should directly inform your content strategy, page titles, and on-page optimisation. See the full guide to keyword research. |
| Keyword Density | Keyword density is the percentage of times a specific keyword or phrase appears in the total word count of a page. For example, a keyword appearing 10 times in a 500-word article has a density of 2%. While keyword density was a significant ranking factor in early search engines, modern algorithms are far more sophisticated and instead look for semantic relevance and natural language patterns. There is no magic keyword density percentage to aim for — writing naturally for humans is more effective than hitting a specific number. Very high keyword density is a red flag for keyword stuffing, which can trigger a Google penalty. Use the keyword density checker to audit your content. |
| Keyword Stuffing | Keyword stuffing is the practice of overloading a page with the same keyword (or close variants) in an unnatural, repetitive way in an attempt to manipulate search rankings. Common examples include jamming a keyword into every heading, footer, and meta tag, or hiding keyword-dense text by making it the same colour as the background. Google's algorithms are sophisticated enough to detect and penalise keyword stuffing, and it creates a terrible reading experience for users. The modern SEO approach is to write comprehensive, natural content that covers a topic thoroughly, using semantically related terms and synonyms rather than repeating the exact keyword. |
| Lazy Loading | Lazy loading is a performance technique where images, videos, or other off-screen resources are only loaded when the user scrolls close to them, rather than all at once when the page first loads. This reduces initial page load time and bandwidth usage, particularly on pages with many images. The HTML loading="lazy" attribute is now supported natively in all major browsers, making implementation straightforward. While lazy loading improves performance, you should never lazy-load the largest image visible when the page first loads (the LCP element), as this will hurt your LCP score. Use the lazy loading checker to verify implementation on your pages, and see the example below this table. |
| LCP (Largest Contentful Paint) | Largest Contentful Paint is the Core Web Vital that measures perceived loading speed. It records the time from when the page starts loading to when the largest visible element (usually a hero image, banner, or large block of text) is rendered in the viewport. Google's threshold for a "good" LCP is 2.5 seconds or less. LCP is heavily influenced by server response time, render-blocking resources, resource load times, and client-side rendering. It is often the Core Web Vital that requires the most effort to improve on image-heavy pages. See the dedicated guide to Largest Contentful Paint for optimisation techniques. |
| Link Building | Link building is the practice of acquiring backlinks from other websites to your own as a deliberate SEO strategy. Because backlinks are one of Google's most important ranking signals, a strong backlink profile can significantly improve your site's ability to rank for competitive keywords. Common link building techniques include creating high-quality content that others naturally want to link to ("earning" links), digital PR campaigns, guest posting on relevant industry blogs, broken link building (finding broken links on other sites and offering your content as a replacement), and building relationships with journalists and bloggers. See how to build backlinks for a practical strategy guide. |
| Keyword Mapping | Keyword mapping is the process of assigning target keywords to specific pages on your website to ensure each page is optimised for a distinct set of search terms. A good keyword map prevents keyword cannibalisation — where two or more pages on your site compete against each other for the same keyword — which can split ranking signals and cause both pages to rank lower than they should. Keyword mapping is typically done in a spreadsheet, matching each URL to its primary keyword, secondary keywords, and the search intent those keywords represent. It forms the foundation of a structured on-page SEO plan. |
| Long-Tail Keyword | A long-tail keyword is a specific search phrase, typically three or more words long, that targets a narrower audience than a broad "head" keyword. For example, "running shoes" is a head keyword with enormous search volume and very high competition, while "best trail running shoes for wide feet" is a long-tail keyword with lower volume but much more specific intent. Long-tail keywords collectively account for the majority of all search queries. They are often easier to rank for because there is less competition, and they tend to attract users who are further along in the buying process and therefore more likely to convert. A solid keyword research strategy targets a mix of head terms and long-tail phrases. |
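For the JSON-LD entry above, here is a minimal sketch using FAQPage markup — the question, answer, and values are placeholders, and any real markup should describe content that is actually visible on the page:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is alt text?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Alt text is a written description of an image, added with the HTML alt attribute."
    }
  }]
}
</script>
```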
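A minimal hreflang sketch for the UK/US example in the Hreflang entry — the domain and paths are placeholders. Every variant must list itself and all of its alternatives (the return tags):

```html
<!-- Placed in the <head> of both the UK and the US versions -->
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/pricing/">
<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/pricing/">
<!-- Fallback for users who match neither language/region -->
<link rel="alternate" hreflang="x-default" href="https://www.example.com/pricing/">
```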
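And a lazy loading sketch matching the entry above — file names and alt text are illustrative. The hero image is assumed to be the LCP element, so it is left to load eagerly:

```html
<!-- Likely LCP element: do NOT lazy-load it -->
<img src="/images/hero-banner.jpg" alt="Autumn hiking trail at sunrise"
     width="1200" height="600">

<!-- Below-the-fold images can safely load lazily -->
<img src="/images/gallery-01.jpg" alt="Close-up of trail shoe tread"
     width="600" height="400" loading="lazy">
```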
M – O
| Term | Definition |
|---|---|
| Local SEO | Local SEO is the practice of optimising a website (and its associated online presence) to rank in geographically relevant search results — particularly the Google "local pack" (the map-based results that appear for location-specific queries like "plumber near me" or "coffee shop in Leeds"). Key local SEO signals include a complete and accurate Google Business Profile, consistent NAP (Name, Address, Phone number) citations across directories, location-specific content on the website, and reviews. Local SEO is essential for businesses that serve a specific geographic area and want to appear when nearby customers search for their services. Read the full local SEO guide. |
| Manual Action | A manual action is a penalty applied by a human reviewer at Google when a site is found to violate Google's Webmaster Guidelines. Unlike algorithmic penalties (which are applied automatically), manual actions are issued by Google's spam team after a manual review. Common causes include unnatural link schemes, thin or auto-generated content, cloaking, and hidden text. Manual actions are reported in Google Search Console under "Security & Manual Actions". They can result in some or all of your pages being removed from search results. To recover, you must fix the underlying issues and submit a reconsideration request through Search Console. |
| Meta Description | A meta description is an HTML tag (placed in the <head> of a page) that provides a brief summary of the page's content. While meta descriptions are not a direct ranking factor, they appear as the descriptive snippet below a page's title in search results and have a major influence on click-through rate (CTR). A well-written meta description should be 150–160 characters long, include the target keyword (which Google bolds when it matches the search query), and contain a clear value proposition or call to action. Google may rewrite your meta description if it deems another excerpt from the page more relevant to a particular query. Read the guide on how to write meta descriptions. |
| Meta Tags | Meta tags are HTML elements placed inside the <head> section of a page that provide information about the page to search engines and browsers. The most SEO-relevant meta tags include the meta title (also called the title tag), meta description, meta robots tag, and meta viewport tag. Meta tags are not visible to users when they read the page, but they play an important role in how the page is indexed, displayed in search results, and rendered on different devices. Use the meta tag checker or the meta tag generator to audit and create optimised meta tags, and see the example <head> snippet below this table. |
| Mobile-First Indexing | Mobile-first indexing means Google predominantly uses the mobile version of a page's content for indexing and ranking, rather than the desktop version. Google made mobile-first indexing the default for new sites in 2019 and completed the transition for existing sites in 2023. This means that if your mobile site has less content, different meta tags, or slower performance than your desktop site, it is the mobile experience that Google evaluates. Having a fully responsive website that delivers identical content across mobile and desktop is essential. You can test your site's mobile performance with PageSpeed Insights or Lighthouse (Google retired its standalone Mobile-Friendly Test in late 2023). See the mobile SEO guide for more. |
| Link Equity (Link Juice) | Link equity (sometimes called "link juice") is the ranking value or authority that a hyperlink passes from one page to another. When a high-authority page links to your page, some of that page's authority flows to you, improving your ability to rank. The amount of equity passed through a link is influenced by the linking page's authority, the number of other links on that page (equity is divided among all outbound links), whether the link is dofollow or nofollow, and the relevance of the linking page to yours. Understanding link equity is important for both building backlinks and planning your site's internal linking structure to ensure your most important pages receive the most authority. |
| Nofollow | A nofollow link is a hyperlink with the rel="nofollow" attribute added to the HTML. This attribute tells search engines not to follow the link or pass link equity (PageRank) to the destination URL. Google introduced the nofollow attribute in 2005 as a way to combat comment spam. In 2019, Google updated its guidance to treat nofollow as a "hint" rather than a strict directive, meaning it may choose to crawl and index nofollow links at its discretion. Two additional link attributes were introduced at the same time: rel="ugc" for user-generated content and rel="sponsored" for paid links. Use the nofollow checker to audit your pages. |
| Noindex | Noindex is a directive added to a page's meta robots tag (<meta name="robots" content="noindex">) or the HTTP X-Robots-Tag header to instruct search engines not to include the page in their search index. A noindexed page will not appear in search results. This is useful for pages you do not want ranked — such as thank-you pages, admin pages, duplicate content, and staging environments. Note that a noindex tag does not prevent crawlers from visiting the page; it only prevents the page from appearing in search results. To prevent crawling entirely, use your robots.txt file — but be aware that blocked pages cannot be noindexed if crawlers cannot read the tag. See the examples below this table. |
| Off-Page SEO | Off-page SEO refers to all the optimisation activity that takes place outside your own website to improve your search rankings. The most influential off-page factor is acquiring high-quality backlinks from other reputable sites, but off-page SEO also encompasses brand mentions, social media signals, online reviews, digital PR, and local citations. While you have less control over off-page SEO than on-page factors, it is critically important — particularly for competing in high-authority niches where content quality alone is not enough to outrank established competitors. A strong off-page profile is built over time through consistent content creation, digital PR, and genuine relationship building. |
| Orphan Page | An orphan page is a page on your website that has no internal links pointing to it from any other page on the site. Because search engine crawlers discover pages primarily by following links, orphan pages are very difficult for bots to find and may not be crawled or indexed even if they are listed in your XML sitemap. Orphan pages also receive no internal link equity, which limits their ability to rank. Fixing orphan pages involves identifying them through a site crawl and then adding appropriate internal links from relevant pages. A well-planned internal linking structure prevents orphan pages from occurring in the first place. |
| On-Page SEO | On-page SEO refers to all the optimisation actions you take directly on a web page to improve its relevance and visibility in search engines. This includes optimising the title tag, meta description, headings, body content, images, internal links, URL structure, and page speed. On-page SEO is contrasted with off-page SEO (which covers backlinks and external signals) and technical SEO (which covers site architecture, crawlability, and indexing). A strong on-page SEO foundation is essential before pursuing off-page efforts — there is little point building links to a page that has not been optimised for its target keyword. Use the RankNibbler free audit tool to check your on-page SEO instantly. See also: what is on-page SEO and the on-page SEO checklist. |
| Open Graph | Open Graph (OG) is a protocol developed by Facebook that uses meta tags to control how your web pages appear when shared on social media platforms such as Facebook, LinkedIn, and X (Twitter). The key Open Graph tags include og:title, og:description, og:image, and og:url. Without Open Graph tags, social platforms will try to guess how to display your content, often with poor results. While OG tags do not directly influence Google search rankings, they can significantly increase click-through rates on social media, which drives traffic. Use the OG preview tool to see how your pages look when shared, and see the example tags below this table. |
| Organic Traffic | Organic traffic refers to visitors who arrive on your website by clicking on an unpaid (non-advertised) search result. It is the primary goal of most SEO efforts. Organic traffic is often considered the most valuable type of traffic because it is free (in terms of cost-per-click), tends to have high intent, and is sustainable over time compared to paid advertising, which stops generating traffic the moment you stop spending. Organic traffic is tracked in Google Analytics under the "Organic Search" channel. Growing organic traffic requires a combination of technical SEO, on-page optimisation, content creation, and link building. For an overview, see the guide on what is organic traffic. |
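For the Meta Description and Meta Tags entries above, a sketch of a typical <head> — the title, description, and site name are placeholders:

```html
<head>
  <title>SEO Glossary: 80+ Terms Explained | Example Site</title>
  <meta name="description" content="Plain-English definitions of 80+ SEO terms, from alt text to YMYL, for beginners and practitioners alike.">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- index, follow is the default behaviour; shown here only for completeness -->
  <meta name="robots" content="index, follow">
</head>
```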
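Two ways to apply the noindex directive from the entry above — a meta tag for HTML pages, and the X-Robots-Tag response header (shown as a comment, since it is sent by the server rather than written in the page) for non-HTML files such as PDFs:

```html
<!-- Option 1: meta robots tag in the page <head> -->
<meta name="robots" content="noindex">

<!-- Option 2: HTTP response header configured on the server, e.g.
     X-Robots-Tag: noindex -->
```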
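And a minimal Open Graph sketch for the entry above — every value here is a placeholder:

```html
<meta property="og:title" content="SEO Glossary: 80+ Terms Explained">
<meta property="og:description" content="Plain-English definitions of the SEO terms you will meet in reports, tools, and client briefs.">
<meta property="og:image" content="https://www.example.com/images/seo-glossary-card.png">
<meta property="og:url" content="https://www.example.com/seo-glossary">
<meta property="og:type" content="article">
```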
P – R
| Term | Definition |
|---|---|
| People Also Ask (PAA) | People Also Ask (PAA) is a Google SERP feature that displays a dynamically expandable set of questions related to the user's original query. Each question, when clicked, reveals a short answer pulled from a web page along with a link to that page. PAA boxes appear in a large proportion of Google searches and represent a significant opportunity for additional organic visibility — even for pages that do not rank in the top positions for the main keyword. To target PAA placements, identify the related questions Google surfaces for your target queries and structure your content to provide direct, concise answers to those questions, ideally immediately after a clearly marked heading. |
| PPC (Pay-Per-Click) | Pay-per-click (PPC) is an online advertising model where advertisers pay a fee each time a user clicks on their ad. In the context of search engines, PPC most commonly refers to Google Ads campaigns where advertisers bid on keywords to have their ads displayed at the top or bottom of the search results page. Unlike SEO, which earns organic visibility over time, PPC delivers immediate traffic at a direct monetary cost. PPC and SEO are complementary — PPC can provide instant visibility for new pages or highly competitive keywords while organic rankings are being built. Paid results are labelled "Sponsored" in Google search results and do not improve organic rankings. |
| Page Speed | Page speed refers to how quickly the content of a web page loads for a user. It is measured by a variety of metrics, including Time to First Byte (TTFB), First Contentful Paint (FCP), and the Core Web Vitals (LCP, INP, CLS). Google has used page speed as a ranking factor since 2010 for desktop and 2018 for mobile. Slow pages not only rank lower but also suffer higher bounce rates and lower conversion rates. Key factors that affect page speed include server response time, image file sizes, render-blocking scripts, and the use (or absence) of caching. See the page speed guide and use the website speed test to diagnose issues. |
| PageRank | PageRank is Google's original algorithm for measuring the importance and authority of a web page based on the number and quality of links pointing to it. Named after Google co-founder Larry Page, PageRank treats each link as a vote, with links from high-authority pages counting more than links from low-authority pages. While Google stopped publicly displaying PageRank scores in 2016, the underlying concept remains a core part of how Google evaluates pages. PageRank flows through links — both internal and external — which is why a strong internal linking strategy is important for distributing authority across your site. |
| Pagination | Pagination is the practice of splitting a large body of content (such as a blog archive, product category, or article) across multiple pages, each accessed via a sequential URL (e.g., /blog/page/2). From an SEO perspective, pagination can cause issues if each paginated page is seen as thin or duplicate content. Best practice is to ensure paginated pages are crawlable and indexed (if they contain unique content), use self-referencing canonical tags, and link clearly from page to page. Google no longer supports the rel="prev" and rel="next" link attributes it once recommended for pagination. |
| Readability | Readability in SEO refers to how easy it is for a typical user to read and understand the content on a page. While readability is not a direct ranking factor, it has significant indirect effects: well-written, clear content keeps users engaged for longer (improving dwell time), reduces bounce rate, and earns more social shares and backlinks. Readability is commonly measured using formulas such as the Flesch-Kincaid Reading Ease Score, which takes sentence length and word complexity into account. For most web content aimed at a general audience, a reading age of around 12–14 years is appropriate. Use the readability checker to score your content and get specific suggestions for improvement. |
| Redirect (301 & 302) | A redirect automatically sends both users and search engines from one URL to another. A 301 redirect is a permanent redirect — it signals to search engines that the old URL has moved permanently and passes most of the link equity accumulated by the old URL to the new destination. A 302 redirect is temporary — it tells search engines the move is provisional and preserves the original URL's status in the index. Use 301 redirects when permanently deleting or moving pages, changing your URL structure, or consolidating content. Use the redirect checker to test any redirect chain and verify the correct status code is being returned. See also: what is a 301 redirect and the .htaccess redirect generator. An example .htaccess redirect appears below this table. |
| Robots.txt | A robots.txt file is a plain-text file placed at the root of a website (e.g., yourdomain.com/robots.txt) that instructs search engine crawlers which pages or sections of the site they are and are not allowed to access. Robots.txt rules use Allow and Disallow directives targeted at specific user agents (crawlers). For example, you might disallow crawlers from accessing /admin/, /staging/, or URL parameter paths that generate duplicate content. A robots.txt file does not prevent pages from being indexed — it only controls crawling access. Critically, if a page is blocked in robots.txt, any noindex tag on that page cannot be read. Use the robots.txt generator and see the guide on what is robots.txt. An example file appears below this table. |
| RSS Feed | An RSS (Really Simple Syndication) feed is an XML file that provides a structured, machine-readable list of a site's latest content — typically blog posts, news articles, or podcast episodes. RSS feeds allow readers to subscribe to content updates without visiting the site directly. From an SEO perspective, RSS feeds help search engines discover new content quickly, particularly for news publishers. Some aggregator sites also use RSS feeds to republish content, which can generate backlinks. Use the RSS feed checker to validate your feed's structure and ensure it is properly formatted. |
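For the Redirect entry above, here is a minimal sketch for Apache's .htaccess file — the paths and domain are placeholders, and servers such as Nginx use a different syntax:

```apache
# Permanent (301) redirect from an old URL to its replacement
Redirect 301 /old-blog-post/ https://www.example.com/new-blog-post/

# Force HTTPS site-wide with a 301 (requires mod_rewrite)
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]
```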
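And a simple robots.txt sketch matching the entry above — the disallowed paths and sitemap URL are examples only, and remember that blocking a URL here does not remove it from the index:

```
# Served from https://www.example.com/robots.txt
User-agent: *
Disallow: /admin/
Disallow: /staging/

Sitemap: https://www.example.com/sitemap.xml
```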
S – Z
| Term | Definition |
|---|---|
| Schema Markup | Schema markup (also called structured data) is code added to a web page that explicitly tells search engines what the content means, not just what it says. Schema uses a standardised vocabulary from Schema.org and is most commonly implemented using the JSON-LD format. By adding schema, you can help Google understand that a piece of content is a recipe, a product, an FAQ, a review, an event, or a local business — which can unlock rich results (enhanced SERP features) such as star ratings, price information, and FAQ accordions. While schema markup does not directly boost rankings, rich results can dramatically improve click-through rates. See what is structured data and use the schema generator. |
| SEM (Search Engine Marketing) | Search Engine Marketing (SEM) is the practice of gaining visibility in search engine results through paid advertising, most commonly Google Ads (Pay-Per-Click or PPC). SEM and organic SEO are complementary strategies — SEO earns unpaid traffic over time, while SEM delivers immediate paid visibility at a cost-per-click. SEM campaigns allow you to target specific keywords, audiences, locations, and times of day with precision. Unlike SEO, SEM results disappear the moment you stop paying. The distinction between SEO and SEM is explained in more detail in the SEO vs SEM guide. |
| SERP (Search Engine Results Page) | A SERP is the page of results that a search engine displays in response to a user's query. A typical Google SERP may contain organic listings, paid ads (Google Ads), featured snippets, local map packs, image carousels, People Also Ask boxes, video results, and knowledge panels, depending on the query type. Understanding the SERP landscape for your target keywords — specifically, what result types appear and who currently ranks — is fundamental to building an effective SEO strategy. Use the SERP snippet generator to preview how your title tag and meta description will look in search results before you publish. |
| Sitemap (XML) | An XML sitemap is a file that lists all the important URLs on your website that you want search engines to crawl and index. It acts as a roadmap for bots, telling them which pages exist, when they were last updated, and (optionally) how frequently they change. Sitemaps are especially important for large sites, new sites with few backlinks (which are harder for bots to discover through crawling alone), and sites with complex URL structures. Submit your sitemap to Google via Google Search Console. The URL of your sitemap should also be referenced in your robots.txt file. See the guide on how to submit a sitemap to Google. |
| Search Volume | Search volume is the average number of times a specific keyword or phrase is searched in a search engine within a given time period (usually per month). Search volume data is used in keyword research to gauge how much traffic a keyword could potentially deliver if you rank well for it. Very high search volume keywords tend to be more competitive and harder to rank for, while lower-volume long-tail keywords are often more accessible. Search volume data is available from tools such as Google Keyword Planner, Ahrefs, and Semrush, though figures vary between tools and are typically reported as averages over 12 months. Seasonal keywords can have highly variable monthly volumes throughout the year. |
| Semantic SEO | Semantic SEO is the practice of optimising content to match the meaning and intent behind search queries, rather than focusing on exact keyword matches. Modern search engines like Google use natural language processing (NLP) and machine learning to understand the relationships between words, concepts, and entities — not just individual keyword occurrences. Writing for semantic SEO means covering a topic comprehensively, using naturally related terms and synonyms, answering the full range of questions a user might have, and organising content logically so Google can understand the relationships between subtopics. Semantic SEO aligns closely with Google's Hummingbird, RankBrain, and BERT algorithm updates. |
| Technical SEO | Technical SEO refers to the optimisation of a website's technical infrastructure to make it easier for search engines to crawl, index, and render its pages. Unlike on-page SEO, which focuses on content and HTML elements, technical SEO deals with aspects such as site architecture, page speed, mobile-friendliness, HTTPS, structured data, robots.txt, XML sitemaps, canonical tags, redirect chains, and Core Web Vitals. Technical SEO issues can prevent even excellent content from ranking if they stop search engines from properly accessing or understanding your pages. Use the site audit tool to identify and prioritise technical SEO problems. |
| Thin Content | Thin content refers to web pages that provide little or no value to users — typically characterised by very low word counts, automatically generated text, scraped content from other sites, or pages that exist purely for SEO purposes rather than to serve the user. Google's Panda algorithm update (first released in 2011 and now part of Google's core algorithm) specifically targets thin content and can cause site-wide ranking drops if a large proportion of a site's pages are deemed low quality. A page does not need thousands of words to avoid being "thin" — a short, highly accurate and useful answer can outperform a long, padded article. Use the word count checker to audit page lengths across your site. |
| Title Tag | The title tag is the HTML element (written as <title>Your Page Title</title> in the <head>) that defines the title of a web page. It is displayed in the browser tab, used by social media platforms as the default link title, and most importantly, shown as the blue clickable headline in search engine results. The title tag is one of the strongest on-page ranking signals and should include your primary keyword, ideally near the beginning. Google typically displays between 50–60 characters before truncating, so keep titles concise. Google may rewrite your title tag in search results if it finds the original misleading or overly keyword-stuffed. Use the SERP snippet generator to preview your title, and read how to write title tags. |
| Trust Signals | Trust signals are elements on a website that help build confidence and credibility with both users and search engines. For users, trust signals include HTTPS security, clear contact information, genuine customer reviews, professional design, clear privacy policies, and accurate author credentials. For search engines, trust signals are evaluated as part of Google's E-E-A-T framework. Sites that rank for YMYL (Your Money or Your Life) topics — such as medical advice, financial guidance, or legal information — must demonstrate especially high levels of trustworthiness. Building trust signals takes time but has a compounding positive effect on rankings and conversion rates. |
| URL Parameters | URL parameters (also called query strings) are the variables added to the end of a URL after a ? symbol, used to pass information to a web server. For example, in /products?colour=blue&sort=price, "colour" and "sort" are parameters. URL parameters are common in e-commerce sites, faceted navigation, tracking links, and pagination. They are a frequent source of duplicate content problems in SEO because the same underlying page content can be accessible via hundreds of different parameter combinations. Solutions include using canonical tags, keeping faceted navigation crawl-friendly, and blocking unnecessary parameter URLs in robots.txt where appropriate (Google Search Console's URL Parameters tool was retired in 2022, so it is no longer an option). |
| URL Slug | A URL slug is the part of a URL that comes after the domain name and identifies a specific page. For example, in ranknibbler.com/seo-glossary, the slug is seo-glossary. Good URL slugs are short, descriptive, lowercase, use hyphens (not underscores) to separate words, and include the target keyword for the page. Avoid using dates in URLs unless they are genuinely necessary, as this makes future URL changes and redirects more likely. See the guide to what is a URL slug for more on URL best practices. |
| User Intent (Search Intent) | Search intent (also called user intent) is the underlying reason behind a search query — what the user is actually trying to accomplish. Google's primary goal is to match each query to the type of content that best satisfies that intent. There are four main categories of search intent: informational (the user wants to learn something), navigational (the user is trying to find a specific site or page), commercial (the user is researching before making a purchase), and transactional (the user is ready to buy or act). Misaligning your content with the dominant intent for a keyword is one of the most common reasons pages fail to rank, regardless of how well they are technically optimised. |
| Web Vitals | Web Vitals is Google's initiative to provide unified guidance on the quality signals that matter for delivering a great experience on the web. The Core Web Vitals — LCP, INP, and CLS — are the subset of Web Vitals that Google uses as ranking signals. Other Web Vitals metrics (such as TTFB and FCP) are diagnostic metrics that help identify the root causes of Core Web Vitals issues. Web Vitals data is collected from real users via the Chrome User Experience Report (CrUX) and can be viewed in Google Search Console, PageSpeed Insights, and the Chrome DevTools Performance panel. Read the full guide on Core Web Vitals. |
| White Hat SEO | White hat SEO refers to ethical, Google-guideline-compliant optimisation practices that focus on creating genuine value for users rather than attempting to game the algorithm. Examples include publishing high-quality original content, earning backlinks naturally, using descriptive title tags and meta descriptions, optimising page speed, and building a well-structured website. White hat SEO produces slower results than manipulative tactics but is far more sustainable. The opposite approach — black hat SEO — uses techniques like cloaking, link schemes, and keyword stuffing that violate Google's guidelines and risk severe ranking penalties. |
| XML Sitemap | An XML sitemap is the machine-readable version of your site's page inventory, formatted in XML (Extensible Markup Language) specifically for search engine crawlers. Unlike an HTML sitemap (which is a user-facing page listing links), an XML sitemap is read by bots and is not typically browsed by humans. A well-maintained XML sitemap helps search engines discover all your important pages efficiently, prioritise recently updated content, and understand your site's overall structure. Most modern CMS platforms (WordPress, Shopify, Wix) generate XML sitemaps automatically. See the complete guide to XML sitemaps and learn how to submit your sitemap to Google. An example sitemap file appears below this table. |
| YMYL (Your Money or Your Life) | YMYL is a classification used in Google's Search Quality Evaluator Guidelines for pages whose content could significantly impact a person's health, financial stability, safety, or happiness. Examples of YMYL topics include medical diagnoses and treatments, financial advice, legal guidance, news about significant events, and safety-critical information. Google holds YMYL pages to an exceptionally high standard of quality and applies rigorous E-E-A-T criteria when evaluating them. If your site covers YMYL topics, you should ensure that content is written or reviewed by qualified professionals, that authors are clearly identified, and that the site has a strong trustworthy reputation backed by reputable backlinks. |
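For the Sitemap (XML) and XML Sitemap entries above, a minimal sketch of the file itself — the URLs and dates are placeholders, and most CMS platforms generate this automatically:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/seo-glossary</loc>
    <lastmod>2026-04-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/how-to-write-title-tags</loc>
    <lastmod>2026-03-15</lastmod>
  </url>
</urlset>
```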
Frequently Asked Questions About SEO Terms
What is the difference between SEO and SEM?
SEO (Search Engine Optimisation) focuses on earning unpaid, organic traffic from search engines by optimising your website's content, structure, and authority. SEM (Search Engine Marketing) typically refers to paid advertising in search results, such as Google Ads. The two strategies are complementary — SEO delivers sustainable long-term traffic while SEM can provide immediate visibility for competitive keywords. See the SEO vs SEM guide for a detailed comparison.
What is the most important on-page SEO element?
The title tag is widely considered the single most important on-page SEO element because it signals the page's primary topic to both search engines and users, and it is the first thing a user sees in the search results. After the title tag, the most important elements are the H1 heading, the meta description (for click-through rate), the quality and relevance of the body content, and internal links. Use the RankNibbler free audit to check all these elements at once.
What are Core Web Vitals and do they affect rankings?
Core Web Vitals are three Google-defined metrics that measure the real-world user experience of a page: Largest Contentful Paint (LCP) for loading speed, Interaction to Next Paint (INP) for interactivity, and Cumulative Layout Shift (CLS) for visual stability. Google confirmed in 2021 that Core Web Vitals are an official ranking signal, though they carry less weight than content relevance and backlinks. Sites that pass all three thresholds earn a "good" page experience rating, which can provide a ranking edge in competitive SERPs. Read the full Core Web Vitals guide for targets and optimisation tips.
How many words should an SEO page have?
There is no universal minimum word count for SEO. The right length for a page depends entirely on what is needed to fully satisfy the search intent for your target keyword. For competitive, research-heavy topics, longer, comprehensive content (1,500–3,000+ words) tends to perform best. For simple informational queries, a concise 300–500 word answer may outrank long, padded articles. Avoid thin content (pages with very little value) and avoid padding content with filler just to hit an arbitrary word count. Focus on being the most comprehensive, accurate, and user-friendly result for your target query.
What is the difference between noindex and robots.txt?
Both noindex and robots.txt are tools for controlling how search engines interact with your pages, but they work differently. A noindex meta tag tells a search engine crawler that it can visit the page but should not include it in the search index. A robots.txt rule tells crawlers they are not allowed to access the page at all. A common mistake is blocking a page in robots.txt that also has a noindex tag — if the crawler cannot access the page, it cannot read the noindex tag, and the page may still be indexed. For pages you want to exclude from search results, use noindex without robots.txt blocking.
What is E-E-A-T and how do I improve it?
E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) is the quality framework Google's human quality raters use to evaluate content. To improve E-E-A-T: identify and credit qualified authors with bios and credentials; earn backlinks from reputable, authoritative sites in your industry; keep content accurate and regularly updated; ensure your site has a clear About page, contact information, and a privacy policy; and gather genuine reviews and testimonials. For YMYL topics, having content reviewed by a certified professional is particularly important.
What is the difference between a 301 and 302 redirect?
A 301 redirect is a permanent redirect that tells search engines the original URL has moved to a new location permanently. It passes most of the original URL's accumulated link equity (PageRank) to the new destination, making it the correct choice for most situations involving moved or deleted pages. A 302 redirect is temporary — it tells search engines the move is provisional and typically preserves the original URL's status in the index. Using 302 when you mean 301 (or vice versa) is a common SEO mistake. When in doubt about whether a move is permanent, use a 301. Check any redirect with the redirect checker tool.
How do I check if my pages are indexed by Google?
The quickest way to check if a specific page is indexed is to enter site:yourdomain.com/your-page-url into Google's search bar. If the page appears in the results, it is indexed. For a more reliable check, use the URL Inspection tool inside Google Search Console, which shows exactly how Googlebot sees the page, when it was last crawled, and whether it is eligible to appear in results. See our guide on how to check if a page is indexed by Google.
Looking to put this SEO terminology into practice? Start with the SEO for beginners guide, run a free on-page audit on any URL, or work through the SEO audit checklist to identify and fix the most impactful issues on your site.
Last updated: April 2026