
How to Fix the “Excluded by Noindex Tag” Error in Google Search Console

Jim Ng
Fix Noindex Tag Error: the decision flow at a glance

  1. Export all “Excluded by noindex” URLs from Search Console.
  2. Sort the URLs into a spreadsheet by page type.
  3. Should this page appear in Google search results?
     • No: leave the noindex in place — it’s working correctly.
     • Yes: trace the noindex source (meta tag or X-Robots-Tag header) and continue.
  4. Is a plugin or theme injecting the noindex automatically?
     • Yes: disable the noindex setting in the plugin (e.g. Yoast, Rank Math).
     • No: manually remove the noindex from the HTML head or server config.
  5. Request re-indexing in Search Console and verify the fix.

If you’ve opened Google Search Console and found a stack of pages flagged as “Excluded by ‘noindex’ tag,” you’re probably wondering whether your site is broken. I see this reaction from Singapore business owners at least twice a week. The good news: this is not a penalty, and learning how to fix the excluded by noindex tag error is something you can do yourself in most cases, without touching a single line of server configuration.

But here’s what most guides won’t tell you. Not every noindex flag is a mistake. Some of those pages are supposed to be hidden. The real skill is knowing which ones to fix and which ones to leave alone. That’s what this guide is about.

I’ll walk you through the exact diagnostic process we use at bestseo.sg when auditing client sites. We’ll cover where the noindex directive can hide (there are more places than you think), how to trace it back to its source, how to remove it properly, and how to verify the fix actually worked.

What the “Excluded by Noindex Tag” Status Actually Means

Google Search Console’s indexing report groups your URLs by status. When a URL shows “Excluded by ‘noindex’ tag,” it means Googlebot visited that page, found an explicit instruction telling it not to index the content, and obeyed.

That instruction can come from two places. The first is an HTML meta robots tag sitting in the <head> section of your page. The second is an HTTP response header called X-Robots-Tag. Both carry the same weight. Both tell Google the same thing: “Do not add this URL to your search index.”

Why This Is Not a Penalty

A penalty (or manual action, in Google’s terminology) means Google has determined your site violates their guidelines. The noindex exclusion is the opposite. Google is simply following instructions that exist on your page. It’s like putting a “Closed” sign on your hawker stall door and then wondering why no customers are coming in. The sign is doing exactly what signs do.

The critical question is: did you put that sign there on purpose, or did someone else hang it up without telling you?

Common Legitimate Uses of Noindex

Before you rush to remove every noindex tag on your site, understand that many pages genuinely should not appear in search results. Here are the most common ones I see on Singapore websites:

  • WordPress login and admin pages (/wp-admin/, /wp-login.php)
  • Thank you pages that appear after form submissions or WooCommerce purchases
  • Internal site search result pages (/?s=keyword)
  • Paginated archive pages beyond page 1 (debatable, but common practice)
  • Staging or development URLs that accidentally got crawled
  • Tag archive pages with only one or two posts
  • Customer account and profile pages on e-commerce sites
  • PDF documents or media attachment pages that duplicate content from main pages

If the flagged URLs fall into these categories, you can safely ignore the Search Console report for those specific pages. The problem only exists when the noindex tag appears on pages you actually want ranking in Google.

Step 1: Build Your List of Affected URLs

Open Google Search Console and navigate to Indexing → Pages. Click the “Not indexed” tab. Below the chart, you’ll see a table listing reasons why pages aren’t indexed. Find the row labelled “Excluded by ‘noindex’ tag” and click it.

You’ll now see every URL Google has flagged. Here’s what most people miss: this list might not be complete. Google Search Console caps the displayed URLs at 1,000. If your site has more than 1,000 noindexed pages, you’ll need to export the data using the download button at the top of the table.

Organise the URLs by Type

Don’t just scan the list randomly. Export it to a spreadsheet and sort the URLs into categories. I typically create four columns:

  1. URL
  2. Page type (blog post, product page, category archive, thank you page, etc.)
  3. Should be indexed? (Yes / No)
  4. Noindex source (to be filled in during Step 3)

This takes 15 to 30 minutes for a typical SME website with 50 to 200 flagged URLs. For larger e-commerce sites, it can take longer, but the structure saves you hours of confusion later.

Go through each URL and ask yourself one question: “If someone in Singapore searched for the content on this page, would I want them to land here?” If yes, mark it for fixing. If no, leave it alone.
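If your export runs to hundreds of URLs, a short script can do the first pass of this sorting for you. Below is a minimal sketch, assuming the Search Console export is a CSV with a URL column; the path patterns and categories are illustrative examples you would adapt to your own site structure:

```python
import csv
import io

# Illustrative classification rules: (url pattern, page type, should be indexed).
# Adapt these patterns to your own site's URL structure.
RULES = [
    ("/wp-admin", "admin", False),
    ("/thank-you", "thank you page", False),
    ("?s=", "internal search", False),
    ("/tag/", "tag archive", False),
    ("/product/", "product page", True),
    ("/blog/", "blog post", True),
]

def classify(url):
    """Return (page_type, should_be_indexed) for a URL; None means review manually."""
    for pattern, page_type, should_index in RULES:
        if pattern in url:
            return page_type, should_index
    return "unclassified - review manually", None

def triage(csv_text, url_column="URL"):
    """Read a Search Console export and bucket each URL by type."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        url = row[url_column]
        page_type, should_index = classify(url)
        rows.append((url, page_type, should_index))
    return rows

export = "URL\nhttps://example.com/blog/seo-tips\nhttps://example.com/thank-you/\n"
for url, page_type, should_index in triage(export):
    print(url, page_type, should_index)
```

This only pre-fills columns 2 and 3 of the spreadsheet; you still make the final “should this rank?” call by hand for anything the rules don’t catch.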

Step 2: Confirm the Noindex Directive Exists (And Find Where It Lives)

Google Search Console tells you a page has a noindex tag, but it doesn’t always tell you exactly where that directive is coming from. You need to verify this yourself, because the removal method depends entirely on the source.

Method 1: Check the HTML Meta Tag

This is the most common location. Open one of your affected URLs in Chrome. Right-click anywhere on the page and select “View Page Source.” Press Ctrl+F (or Cmd+F on Mac) and search for noindex.

You’re looking for something like this inside the <head> section:

<meta name="robots" content="noindex">

Or variations like:

<meta name="robots" content="noindex, nofollow">
<meta name="robots" content="noindex, follow">
<meta name="googlebot" content="noindex">

Note the last one. The googlebot variant specifically targets Google’s crawler while potentially allowing other search engines to index the page. If you’re only checking for name="robots", you might miss this.
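If you need to check many pages, a small parser can flag both variants without eyeballing each source view. Here is a minimal sketch using Python’s standard library, assuming you already have each page’s raw HTML as a string (fetched with curl or similar):

```python
from html.parser import HTMLParser

class NoindexMetaFinder(HTMLParser):
    """Collect meta robots/googlebot tags whose content includes 'noindex'."""

    def __init__(self):
        super().__init__()
        self.hits = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = (attrs.get("name") or "").lower()
        content = (attrs.get("content") or "").lower()
        # Check both the generic robots tag and the googlebot-specific variant.
        if name in ("robots", "googlebot") and "noindex" in content:
            self.hits.append((name, content))

def find_noindex_meta(html):
    finder = NoindexMetaFinder()
    finder.feed(html)
    return finder.hits

page = '<html><head><meta name="googlebot" content="noindex"></head><body></body></html>'
print(find_noindex_meta(page))  # [('googlebot', 'noindex')]
```

Because it checks `name="googlebot"` as well as `name="robots"`, it won’t miss the Google-only variant described above.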

Method 2: Check the HTTP Response Header

If you don’t find a meta tag in the HTML source, the noindex directive might be delivered via an HTTP header. This is common for non-HTML resources like PDFs, but I’ve also seen it applied site-wide through misconfigured server rules.

To check this, open Chrome DevTools (press F12), click the Network tab, then reload the page. Click on the first request in the list (it should be the page URL itself). Look at the Response Headers section for:

X-Robots-Tag: noindex

Alternatively, use the URL Inspection Tool in Google Search Console. Paste the affected URL into the search bar at the top, wait for the results, then click “Test Live URL.” After the test completes, click “View Tested Page” and check the “More Info” tab. The HTTP response section will show you the headers Google actually received.
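The header check can also be scripted for bulk audits. A minimal sketch, assuming you already have each URL’s response headers as a dict (e.g. collected from `curl -I` output or a fetching library); note that header names are case-insensitive and a server may send X-Robots-Tag more than once:

```python
def has_noindex_header(headers):
    """Check a response-header mapping for an X-Robots-Tag noindex directive.

    Header names are matched case-insensitively. Values may be a single
    string or a list of strings (servers can send the header repeatedly).
    """
    for name, value in headers.items():
        if name.lower() != "x-robots-tag":
            continue
        values = value if isinstance(value, list) else [value]
        if any("noindex" in v.lower() for v in values):
            return True
    return False

# Example: headers as Chrome DevTools or `curl -I` would report them.
print(has_noindex_header({"Content-Type": "application/pdf",
                          "X-Robots-Tag": "noindex, nofollow"}))  # True
```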

Method 3: Check for JavaScript-Injected Noindex Tags

This is the one that catches even experienced SEOs off guard. Some plugins, themes, or custom scripts inject the noindex meta tag via JavaScript after the initial HTML loads. If you only check the raw page source (Method 1), you won’t see it.

To catch this, use Chrome DevTools. Right-click on the page and select “Inspect” (not “View Page Source”). This shows you the rendered DOM, which includes any changes made by JavaScript. Search for noindex in the Elements panel. If it appears here but not in the raw source, you’ve got a JavaScript injection issue.

Google renders JavaScript before indexing, so a JS-injected noindex tag is just as effective as one in the raw HTML. This is particularly common with certain WordPress caching plugins that inject headers dynamically.
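A quick way to confirm a JavaScript injection is to compare the raw source against the rendered DOM. Here is a rough sketch, assuming you have saved both as strings (raw via View Page Source or curl, rendered via DevTools’ “Copy outerHTML”); the substring check is deliberately naive and can false-positive if the word “noindex” appears in body copy:

```python
def diagnose_noindex(raw_html, rendered_html):
    """Classify where a noindex directive lives by comparing raw vs rendered HTML."""
    in_raw = "noindex" in raw_html.lower()
    in_rendered = "noindex" in rendered_html.lower()
    if in_rendered and not in_raw:
        return "javascript-injected"   # only appears after scripts run
    if in_raw:
        return "server-rendered"       # present in the original HTML
    return "not found"

raw = "<html><head><title>Page</title></head></html>"
rendered = '<html><head><meta name="robots" content="noindex"></head></html>'
print(diagnose_noindex(raw, rendered))  # javascript-injected
```

A “javascript-injected” result points you at plugins, theme scripts, or tag managers rather than the HTML templates or server config.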

Step 3: Trace the Noindex Tag to Its Source

Finding the tag is one thing. Knowing what put it there is another. If you remove the tag without addressing the source, it will often come back the next time a plugin updates or a cache clears.

Source 1: WordPress Site-Wide Settings

This is the most common culprit I encounter on Singapore WordPress sites, especially those that were recently migrated from a staging environment. Go to Settings → Reading in your WordPress dashboard. Look for the checkbox that says “Discourage search engines from indexing this site.”

If this box is ticked, WordPress adds a noindex meta tag to every single page on your site. I’ve seen businesses run for months with this ticked, wondering why their organic traffic was zero. One client, a tuition centre in Bukit Timah, had this enabled for 11 months after their developer forgot to untick it post-launch. They lost an estimated 340 organic visits per month during that period.

Fix: Untick the box and click “Save Changes.” That’s it.

Source 2: SEO Plugin Settings (Page-Level)

If the noindex tag only appears on specific pages rather than site-wide, your SEO plugin is the most likely source.

In Rank Math:

  1. Edit the affected page or post.
  2. Click the Rank Math icon in the top-right corner of the editor, or scroll to the Rank Math meta box below the content.
  3. Go to the Advanced tab.
  4. Under Robots Meta, check whether “No Index” is ticked. If it is, untick it and ensure “Index” is selected instead.
  5. Click Update to save.

In Yoast SEO:

  1. Edit the affected page or post.
  2. Scroll to the Yoast SEO meta box below the content editor.
  3. Click the Advanced tab (the one with the gear/cog icon).
  4. Find the dropdown labelled “Allow search engines to show this post in search results?”
  5. If it’s set to “No,” change it to “Yes” (or “Default” if your global defaults are set correctly).
  6. Click Update.

In All in One SEO (AIOSEO):

  1. Edit the page.
  2. Scroll to the AIOSEO Settings section.
  3. Click the Advanced tab.
  4. Toggle the “Use Default Settings” off if needed, then ensure “No Index” is not enabled under the Robots Meta settings.
  5. Save.

Source 3: SEO Plugin Settings (Post-Type or Taxonomy Level)

This is where bulk noindex issues usually originate. SEO plugins let you set default indexing rules for entire post types, categories, tags, and custom taxonomies. If someone set “Products” or “Blog Posts” to noindex at the global level, every new page of that type inherits the noindex tag automatically.

In Rank Math: Go to Rank Math → Titles & Meta. Check each tab (Posts, Pages, Products, Categories, Tags, etc.) and look at the “Robots Meta” setting for each. Make sure “No Index” is not the default for any content type you want indexed.

In Yoast SEO: Go to SEO → Search Appearance. Check each content type tab. Look for the toggle “Show [content type] in search results?” and ensure it’s set to “Yes” for everything you want Google to index.

I recently audited a Singapore e-commerce site selling electronics where all 847 product pages were noindexed because someone had toggled the WooCommerce Products post type to “noindex” in Rank Math’s global settings. The fix took 30 seconds. The recovery took 6 weeks.

Source 4: Server-Level Configuration

If the noindex directive is coming from an HTTP header (as identified in Step 2, Method 2), the source is your web server configuration, not WordPress. This is typically found in:

  • .htaccess file (Apache servers): Look for a line like Header set X-Robots-Tag "noindex"
  • nginx.conf (Nginx servers): Look for add_header X-Robots-Tag "noindex";
  • wp-config.php or a custom plugin: Some developers add PHP code that sends the header programmatically using header('X-Robots-Tag: noindex');

If you’re on shared hosting (common for Singapore SMEs using providers like SiteGround, Vodien, or Exabytes), you may not have direct access to nginx configuration. Contact your hosting provider and ask them to check for any X-Robots-Tag headers being applied.
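For reference, this is what a legitimate, properly scoped server-level directive looks like on Apache. The important part is the FilesMatch scope; the same Header line without it would noindex your entire site. (This is an illustrative fragment, not taken from any particular config.)

```apache
# Scoped use of X-Robots-Tag: noindex PDF files only.
# Without the <FilesMatch> wrapper, this header would apply site-wide.
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```

The Nginx equivalent is a location block such as `location ~* \.pdf$ { add_header X-Robots-Tag "noindex"; }`. If you find an unscoped version of either, that is almost certainly your site-wide culprit.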

Source 5: CDN or Caching Layer

This is the sneakiest source of all. If you’re using Cloudflare, Sucuri, or another CDN/security layer, it’s possible that a page rule or worker script is injecting the X-Robots-Tag header. I’ve seen this happen when someone sets up a Cloudflare page rule with a wildcard that accidentally matches production URLs.

Check your CDN’s dashboard for any page rules, transform rules, or worker scripts that modify response headers. In Cloudflare specifically, go to Rules → Transform Rules → Modify Response Header and look for anything referencing X-Robots-Tag.

Step 4: Handle Edge Cases

The steps above cover 90% of noindex issues. But if you’ve checked everything and the tag is still appearing, here are the less obvious causes I’ve encountered.

Caching Plugins Serving Stale Pages

If you removed the noindex tag but it’s still showing in the page source, your caching plugin might be serving an old cached version of the page. This is extremely common with WP Rocket, W3 Total Cache, LiteSpeed Cache, and similar plugins.

Fix: Purge your entire page cache after making any noindex changes. In WP Rocket, go to Settings → WP Rocket and click “Clear Cache.” In LiteSpeed Cache (popular on Singapore hosting), go to LiteSpeed Cache → Toolbox → Purge All. Then verify the page source again.

Theme-Level Noindex Tags

Some WordPress themes, particularly older or poorly coded ones, hardcode noindex tags into their header.php template file. If you’ve checked your SEO plugin settings and WordPress reading settings and everything looks correct, open Appearance → Theme File Editor and check the header.php file for any meta robots tags.

Better yet, search your entire theme folder for the string “noindex” using your hosting file manager or an FTP client. This catches instances buried in template partials or functions.php.

Conflicting SEO Plugins

Running two SEO plugins simultaneously (for example, Yoast and Rank Math together) can produce conflicting directives: one might output an index tag while the other outputs noindex. Google always honours the most restrictive directive, so if either tag says “noindex,” the page will not be indexed.

Check your active plugins list. If you see more than one SEO plugin active, deactivate the one you’re not using. This includes plugins like “All in One SEO,” “SEOPress,” “The SEO Framework,” or even header/footer script plugins that might contain manually added meta tags.

Robots.txt Confusion

A quick clarification that trips up many site owners: robots.txt and noindex are completely different mechanisms. A robots.txt disallow rule prevents Googlebot from crawling a page. A noindex tag prevents Google from indexing a page it has already crawled.

Here’s the counterintuitive part. If your robots.txt blocks Googlebot from accessing a page that also has a noindex tag, Google cannot see the noindex tag (because it can’t crawl the page). In some cases, Google may actually index the URL based on external signals like inbound links, even though you intended to hide it. If you want a page excluded from search results, the noindex tag is the correct approach, not robots.txt.
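To make the distinction concrete, here are the two mechanisms side by side (illustrative fragments; `/private-page/` is a placeholder path):

```
# robots.txt: blocks CRAWLING, not indexing. If the page also carries a
# noindex tag, Google never sees that tag because it cannot crawl the page.
User-agent: *
Disallow: /private-page/

<!-- Meta tag in the HTML head: lets Google crawl the page, then tells it
     not to index. This is the correct tool for excluding a page from results. -->
<meta name="robots" content="noindex">
```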

Step 5: Request Re-Indexing in Google Search Console

After removing the noindex tag and clearing your cache, you need to tell Google to come back and re-check the page. Google will eventually re-crawl it on its own schedule, but you can speed things up.

  1. Go to Google Search Console.
  2. Paste the fixed URL into the inspection bar at the top and press Enter.
  3. The tool will show the old status (likely “URL is not on Google” with the noindex reason). Click “Test Live URL” to fetch a fresh version.
  4. If the live test shows the page is now indexable (no noindex detected), click “Request Indexing.”

You can request indexing for individual URLs this way. Google limits you to roughly 10 to 12 requests per day per property, so if you have hundreds of pages to fix, prioritise your most important ones first.

For Bulk Fixes: Submit Your Sitemap

If you’ve fixed noindex tags across many pages (for example, by changing a global SEO plugin setting), submitting your sitemap is more efficient than requesting indexing one URL at a time. Go to Indexing → Sitemaps in Search Console and click on your existing sitemap to resubmit it. This signals to Google that your site has changed and encourages a fresh crawl.

Make sure the pages you’ve fixed are actually included in your XML sitemap. If your SEO plugin was set to noindex those pages, it likely also excluded them from the sitemap automatically. After changing the setting, regenerate your sitemap (most plugins do this automatically) and verify the URLs are present by opening your sitemap URL directly in a browser.
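For a long list of fixed URLs, checking sitemap membership by hand is tedious. Here is a minimal sketch using Python’s standard library, assuming your sitemap follows the standard sitemap.org schema:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text):
    """Extract all <loc> URLs from a sitemap.xml document."""
    root = ET.fromstring(xml_text)
    return {loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc")}

def missing_from_sitemap(fixed_urls, xml_text):
    """Return the fixed URLs that the regenerated sitemap still omits."""
    present = sitemap_urls(xml_text)
    return sorted(set(fixed_urls) - present)

sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/services/</loc></url>
</urlset>"""

print(missing_from_sitemap(
    ["https://example.com/services/", "https://example.com/blog/post-1/"],
    sitemap))  # ['https://example.com/blog/post-1/']
```

Any URL this reports as missing either hasn’t had its plugin setting changed or the sitemap hasn’t regenerated yet; fix those before resubmitting in Search Console.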

Step 6: Validate the Fix in Search Console

Google Search Console has a built-in validation feature specifically designed for this. After you’ve made your fixes, go back to the Indexing → Pages report, click on “Excluded by ‘noindex’ tag,” and click the “Validate Fix” button.

Google will then re-crawl a sample of the affected URLs over the following days. You’ll receive an email notification with the results. The validation can take anywhere from a few days to two weeks, depending on your site’s size and crawl frequency.

What the Validation Statuses Mean

  • Started: Google has acknowledged your request and will begin re-crawling.
  • Looking good: The sample URLs Google has re-crawled so far no longer have the noindex tag. This is promising but not final.
  • Passed: All sampled URLs have been verified as fixed. Your pages should start appearing in search results.
  • Failed: Some URLs still have the noindex tag. Check which specific URLs failed and investigate further.

Don’t panic if validation takes time. For a medium-sized Singapore business website (100 to 500 pages), I typically see full validation complete within 5 to 10 days.

Step 7: Monitor and Prevent Recurrence

Fixing the noindex issue once is good. Making sure it doesn’t come back is better. Here’s how to set up ongoing monitoring.

Set Up Search Console Email Alerts

Google Search Console sends email notifications when it detects new indexing issues. Make sure the email address associated with your Search Console account is one you actually check. I’ve seen Singapore business owners register Search Console with a developer’s email, then never receive the alerts.

Run Monthly Indexing Audits

Add a recurring task to your calendar: once a month, log into Search Console and check the Indexing → Pages report. Look for any sudden spikes in the “Not indexed” count. A jump from 20 to 200 noindexed pages overnight usually means a plugin update changed a default setting or someone modified a template file.

Document Your Indexing Decisions

Create a simple document (even a Google Sheet works) that lists which page types should be indexed and which should not. Share it with anyone who has access to your WordPress dashboard. This prevents the scenario where a well-meaning team member or freelancer changes settings without understanding the consequences.

For example:

Page Type           Should Be Indexed?   Notes
Blog posts          Yes                  All published posts
Service pages       Yes                  Core revenue pages
Category archives   Yes                  Only if they have 5+ posts
Tag archives        No                   Too thin, noindex by default
Thank you pages     No                   Post-conversion only
Author archives     No                   Single-author site, duplicates blog page

Use a Crawling Tool for Larger Sites

If your site has more than 500 pages, manual checking becomes impractical. Tools like Screaming Frog SEO Spider (free for up to 500 URLs) or Sitebulb can crawl your entire site and flag every page with a noindex tag. Run a crawl after any major site update, plugin change, or theme switch.

In Screaming Frog, after crawling your site, go to the Directives tab and filter by “Noindex.” This gives you a complete picture that’s independent of Google Search Console’s data, which can sometimes lag by a few days.

Real Timelines: How Long Until Your Pages Appear in Google?

This is the question everyone asks, and the honest answer is: it depends. Based on the sites I’ve worked on in Singapore over the past three years, here are realistic timelines.

Small sites (under 50 pages): After requesting indexing, most pages reappear in Google within 2 to 5 days.

Medium sites (50 to 500 pages): Expect 1 to 3 weeks for the majority of pages to be re-indexed. High-authority pages (those with backlinks) tend to come back faster.

Large e-commerce sites (500+ pages): Full re-indexing can take 3 to 6 weeks. Google prioritises pages it considers more important, so your homepage and top category pages will return first, followed by individual product pages.

One important note: re-indexing does not guarantee your previous rankings will return immediately. If your pages were noindexed for an extended period, Google essentially “forgot” their ranking signals. It may take additional weeks for your pages to climb back to their previous positions. During this recovery period, do not make additional major changes to those pages. Let Google re-evaluate them with stable content.

A Quick Checklist You Can Bookmark

Here’s the condensed version of everything above. Save this for the next time you spot the issue:

  1. Export the full list of noindexed URLs from Google Search Console.
  2. Categorise each URL as “should be indexed” or “correctly noindexed.”
  3. For pages that should be indexed, check the HTML source for <meta name="robots" content="noindex">.
  4. Check HTTP response headers for X-Robots-Tag: noindex.
  5. Check the rendered DOM (via Chrome DevTools Inspect) for JavaScript-injected noindex tags.
  6. Trace the source: WordPress Reading settings, SEO plugin page-level settings, SEO plugin global settings, server config, CDN rules, or theme files.
  7. Remove the noindex directive at its source.
  8. Purge all caching layers (page cache, CDN cache, server-level cache).
  9. Verify the fix by re-checking the page source and using Search Console’s “Test Live URL.”
  10. Request indexing for priority pages. Resubmit your sitemap for bulk fixes.
  11. Click “Validate Fix” in the Search Console indexing report.
  12. Monitor monthly. Document your indexing rules.

When to Call in Professional Help

Most noindex issues are straightforward. But there are situations where the fix requires deeper technical knowledge:

  • The noindex tag is being injected by server-level configuration you don’t have access to.
  • You have conflicting directives across multiple plugins, theme files, and server rules.
  • Your site uses a headless CMS or JavaScript framework (React, Next.js, Vue) where meta tags are rendered dynamically.
  • You’ve fixed the tag but pages still aren’t being indexed after 4+ weeks, which could indicate other crawlability or quality issues.
  • You’re running a large e-commerce site with thousands of product pages and need a systematic approach to indexing management.

These scenarios require someone who can read server logs, trace HTTP headers through multiple proxy layers, and diagnose rendering issues. If you’ve followed every step in this guide and the problem persists, it’s worth getting a technical SEO audit rather than guessing.

We run these audits regularly for Singapore businesses at bestseo.sg. If you’d like us to take a look at your indexing issues, drop us a message through our contact page. No obligation, and we’ll tell you honestly whether it’s something you can sort out yourself or whether it needs deeper investigation.

Jim Ng, Founder of Best SEO Singapore

Founder of Best Marketing Agency and Best SEO Singapore. Started in 2019 cold-calling 70 businesses a day, scaled to 14, then leaned out to a 9-person AI-first team serving 146+ clients across 43 industries. Acquired Singapore Florist in 2024 and grew it to #1 rankings for competitive keywords. Every SEO strategy ships with his personal review.

Connect on LinkedIn

Want Results Like These for Your Site?

Book a free 30-minute strategy session. No pitch, just a real look at what is holding your organic traffic back.

Book A Free Growth Audit (Worth $2,500)