Note: Originally published on http://janeandrobot.com

Controlling what content search engines can find and index is crucial for many websites. Fortunately, the major search engines and other well-behaved robots observe the Robots Exclusion Protocol (REP), which has evolved organically since the early 1990s to provide a set of controls over which parts of a web site search engine robots can crawl and index.


Capabilities of the REP

The Robots Exclusion Protocol provides controls that can be applied at the site level (robots.txt), at the page level (META tag or X-Robots-Tag), or at the HTML element level to control both the crawl of your site and the way it's listed in the search engine results pages (SERPs). Below are the common scenarios, the directives that address them, and the search engines that support them.

  • Allow access to your content – Allow (robots.txt); FOLLOW, INDEX (META tag) – Google, Microsoft
  • Disallow access to your content – Disallow (robots.txt); NOINDEX, NOFOLLOW (META tag) – Google, Microsoft
  • Disallow indexing of images on the page – NOIMAGEINDEX (META tag) – Google
  • Disallow the display of a cached version of your content in the SERP – NOARCHIVE (META tag) – Google, Microsoft
  • Disallow the creation of a description for this content in the SERP – NOSNIPPET (META tag) – Google, Microsoft
  • Disallow the translation of your content into other languages – NOTRANSLATE (META tag) – Google
  • Do not follow or give weight to links within this content – NOFOLLOW (META tag); rel=NOFOLLOW (on individual links) – Google, Microsoft
  • Do not use the Open Directory Project (ODP) to create descriptions for your content in the SERP – NOODP (META tag) – Google, Microsoft
  • Stop indexing this content after a specific date – UNAVAILABLE_AFTER (META tag) – Google
  • Disallow the creation of enhanced captions – NOPREVIEW (META tag) – Microsoft
  • Specify a sitemap file or a sitemap index file – Sitemap (robots.txt) – Google, Microsoft
  • Specify how frequently a crawler may access your website – Crawl-Delay (robots.txt) for Microsoft; Webmaster Tools setting for Google
  • Authenticate the identity of the crawler – reverse DNS lookup – Google, Microsoft
  • Request removal of your content from the engine’s index – Google Webmaster Tools, Microsoft Webmaster Tools – Google, Microsoft

Most of the META tag directives can also be sent as X-Robots-Tag HTTP headers; see the X-Robots-Tag section below.

 

Deciding What Should be Public vs. Private

One of the first steps in managing the robots is knowing what type of content should be public vs. private. Start with the assumption that by default, everything is public, then explicitly identify the items that are private.

If you want search engines to access all the content on your site, you don’t need a robots.txt file at all. When a search engine tries to access the robots.txt file on your site and the server can’t return one (ideally by returning a 404 HTTP status code), the search engine treats this the same as a robots.txt file that allows access to everything.
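If you would rather serve an explicit robots.txt anyway (for example, to keep robots.txt 404s out of your logs), a minimal allow-everything file looks like this:

# An empty Disallow value blocks nothing, so all content is crawlable
User-agent: *
Disallow: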

Every website and every business has a different set of needs, so there’s no blanket rule for what to make private, but some common elements may apply.

  • Private data – You may have content on your site that you don’t want to be searchable in search engines. For instance, you may have private user information (such as addresses) that you don’t want surfaced. For this type of content, you may want to use a more secure approach that keeps all visitors from the pages (such as password protection). However, some types of content are fine for visitor access, but not search engine access. For instance, you may run a discussion forum that is open for public viewing, but you may not want individual posts to appear in search results for forum member names.
  • Non-content content – Some content, like images used for navigation, provides little value to searchers. It’s not harmful to include these items in search engine indices, but since search engines allocate limited bandwidth to crawl each site and limited space to store content from each site, it may make sense to block these items to help direct the bots to the content on your site that you do want indexed.
  • Printer-friendly pages – If you have specific pages (URLs) that are formatted for printing, you may want to block them to avoid duplicate content issues. The drawback to allowing the printer-friendly page to be indexed is that it could potentially be listed in the search results instead of the default version of the page, which wouldn’t provide an ideal user experience for a visitor coming to the site through search.
  • Affiliate links and advertising – If you include advertising on your site, you can keep search engine robots from following the links by redirecting them to a blocked page, then on to the destination page. (There are other methods for implementing advertising-based links as well.)
  • Landing pages – Your site may include multiple variations of entry pages used for advertising purposes. For instance, you may run AdWords campaigns that link to a particular version of a page based on the ad, or you may print different URLs for different print ad campaigns (either for tracking purposes or to provide a custom experience related to the ad). Since these pages are meant to be an extension of the ad, and are generally near duplicates of the default version of the page, you may want to block these landing pages from being indexed.
  • Experimental pages – As you try new ideas on your site (for instance, using A/B testing), you likely want to block all but the original page from being indexed during the experiment.

Implementing the REP

REP is flexible and can be implemented a number of ways. This flexibility lets you easily specify some policies for your entire site (or subdomain) and then enhance them more granularly at the page or link level as needed.

Site Level Implementation (Robots.txt)

Site-wide directives are stored in a robots.txt file, which must be located in the root directory of each domain or subdomain (e.g. http://janeandrobot.com/robots.txt). Note that robots.txt files only apply to the hostname where they are placed, and do not apply to other subdomains. So a robots.txt file located at http://microsoft.com/robots.txt will not apply to the MSDN subdomain http://msdn.microsoft.com. However, the robots.txt file does apply to all subfolders and pages within the specified hostname.

A robots.txt file is a UTF-8 encoded file that contains entries that consist of a user-agent line (that tells the search engine robot if the entry is directed at it) and one or more directives that specify content that the search engine robot is blocked from crawling or indexing. A simple robots.txt file is shown below.

User-agent: *
Disallow: /private

user-agent: – Specifies which robots the entry applies to.

  • Set this to * to specify that this entry applies to all search engine robots.
  • Set this to a specific robot name to provide instructions for just that robot. You can find a complete list of robot names at robotstxt.org.
  • If you direct an entry at a particular robot, then it obeys that entry instead of any entries defined for user-agent: * (rather than in addition to those entries).

The major search engines have multiple robots that crawl the web for different types of content (such as images or mobile pages). The names of these robots generally begin with the same string, so if you block the primary robot, all robots for that search engine are blocked as well. However, if you want to block only a more specific robot, you can block it directly and still allow web crawl access (see the sketch after this list).

  • Google – The primary search engine robot is Googlebot.
  • Yahoo! – The primary search engine robot is Slurp.
  • Bing – The primary search engine robot is MSNBot.
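For example, here is a minimal sketch that keeps Google's image crawler out while leaving the main web crawl open. It assumes Googlebot-Image is the specific robot you want to block; substitute whichever robot name applies in your case.

# Block only the image-specific crawler; Googlebot itself is unaffected
User-agent: Googlebot-Image
Disallow: /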

Disallow: - Specifies what content is blocked

  • Must begin with a slash (/).
  • Blocks access to any URL whose path begins with the specified string. For instance, Disallow: /images blocks access to /images/, /images/image1.jpg, and /images10.

You can specify other rules for search engine robots in addition to the standard instructions that block access to content, as described in the Other robot instructions section below.

Some things to note about robots.txt implementation:

  • The major search engines support pattern matching, using the asterisk character (*) for wildcard matching and the dollar sign ($) for end-of-URL matching, as described in the pattern matching examples below.
  • The robots.txt file is case sensitive, so Disallow: /images would block http://www.example.com/images but not http://www.example.com/Images.
  • If conflicts exist in the file, the robot obeys the longest (and therefore generally most specific) line (see the example after this list).
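To illustrate the longest-match rule, here is a short sketch using hypothetical paths:

# Both rules match /downloads/free/report.pdf (a hypothetical URL), but the
# Allow rule is longer, so that path is crawlable while the rest of
# /downloads/ stays blocked
User-agent: *
Disallow: /downloads/
Allow: /downloads/free/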

Basic Samples

Block all robots

Useful when your site is in pre-launch development and isn’t ready for search traffic.

# This keeps out all well-behaved robots.
# Disallow: * is not valid.
User-agent: *
Disallow: /

Keep out all bots by default

Blocks all pages except those specified. Not recommended, as it is difficult to maintain and diagnose.

# Stay out unless otherwise stated
User-agent: *
Disallow: /
Allow: /Public/
Allow: /articles/
Allow: /images/

Block specific content

The most common usage of robots.txt.

# Block access to the images folder
User-agent: *
Disallow: /images/

Allow specific content

Block a folder, but allow access to selected pages in that folder.

# Block everything in the images folder
# Except allow images/image1.jpg
User-agent: *
Disallow: /images/
Allow: /images/image1.jpg

Allow specific robot

Block a class of robots (for instance, Googlebot), but allow a specific bot in that class (for instance, Googlebot-Mobile).

# Block Googlebot access
# Allow Googlebot-Mobile access
User-agent: Googlebot
Disallow: /
User-agent: Googlebot-Mobile
Allow: /

Pattern Matching Examples

The major engines support two types of pattern matching.

  • * matches any sequence of characters
  • $ matches the end of the URL

Block access to URLs that contain a set of characters

Use the asterisk (*) to specify a wildcard.

# Block access to all URLs that include an ampersand
User-agent: *
Disallow: /*&

This directive would block search engines from crawling http://www.example.com/page1.asp?id=5&sessionid=xyz.

Block access to URLs that end with a set of characters

Use the dollar sign ($) to specify end of line.

# Block access to all URLs that end in .cgi
User-agent: *
Disallow: /*.cgi$

This directive would block search engines from crawling http://www.example.com/script1.cgi but not from crawling http://www.example.com/script1.cgi?value=1.

Selectively allow access to a URL that matches a blocked pattern

Use the Allow directive in conjunction with pattern matching for more complex implementations.

# Block access to URLs that contain ?
# Allow access to URLs that end in ?
User-agent: *
Disallow: /*?
Allow: /*?$

That directive blocks all URLs that contain ? except those that end in ?. In this example, the default version of the page will be indexable:

  • http://www.example.com/productlisting.aspx?

Variations of the page will be blocked:

  • http://www.example.com/productlisting.aspx?nav=price
  • http://www.example.com/productlisting.aspx?sort=alpha

Other robot instructions

Specify a Sitemap or Sitemap index file

If you’d like to provide search engines with a comprehensive list of your best URLs, you can provide one or more Sitemap autodiscovery directives. Note that the user-agent line does not apply to this directive, so you cannot use it to point some (but not all) search engines at your Sitemap.


# Please take my sitemap and index everything!
Sitemap: http://janeandrobot.com/sitemap.axd

Reduce the crawling load

This only works with Microsoft and Yahoo. For Google you’ll need to specify a slower crawling speed through their Webmaster Tools. Be careful when implementing this because if you slow down the crawl too much, robots won’t be able to get to all of your site and you may lose pages from the index.


# MSNBot, please wait 5 seconds in between visits
User-agent: msnbot
Crawl-delay: 5

# Yahoo's Slurp, please wait 12 seconds in between visits
User-agent: slurp
Crawl-delay: 12

Page Level Implementation (META Tags)

The REP page-level directives allow you to refine the site-wide policies on a page-by-page basis.

Placing a meta tag on the page

Place the meta tag inside the head element. Multiple directives should be comma delimited inside the content attribute, e.g. <meta name="ROBOTS" content="DIRECTIVE1, DIRECTIVE2">.

<html>
 <head>
 <title>Your title here</title>
 <meta name="ROBOTS" content="NOINDEX">
 </head>
 <body>Your page here</body>
</html>

Targeting a specific search engine

Within the meta tag you can specify which search engine you would like to target, or you can target them all.

<!-- Applies to All Robots -->
<meta name="ROBOTS" content="NOINDEX">
 
<!-- ONLY GoogleBot -->
<meta name="Googlebot" content="NOINDEX">
 
<!-- ONLY Slurp (Yahoo) -->
<meta name="Slurp" content="NOINDEX">
 
<!-- ONLY MSNBot (Microsoft) -->
<meta name="MSNBot" content="NOINDEX">

Controlling how your listings appear

There is a set of options you can use to determine how your site will show up in the SERP. You can exert some control over how the description is created, and remove the “Cached page” link.

[Image: example search result listing]

<!-- Do not show a description for this page -->
<meta name="ROBOTS" content="NOSNIPPET">
 
<!-- Do not use http://dmoz.org to create a description -->
<meta name="ROBOTS" content="NOODP">
 
<!-- Do not present a cached version of the document in a search result -->
<meta name="ROBOTS" content="NOARCHIVE">

Using other directives

Other meta robots directives are shown below.

<!-- Do not trust links on this page; could be user generated content (UGC) -->
<meta name="ROBOTS" content="NOFOLLOW">
<!-- Do not index this page -->
<meta name="ROBOTS" content="NOINDEX">
 
<!-- Do not index any images on this page (the images may still be indexed
 if they are linked from elsewhere). Better to use robots.txt if you really
 want them safe. This is a Google-only tag. -->
<meta name="GOOGLEBOT" content="NOIMAGEINDEX">
 
<!-- Do not translate this page into other languages-->
<meta name="ROBOTS" content="NOTRANSLATE">
 
<!-- NOT RECOMMENDED, there really isn't much point in using these -->
<meta name="ROBOTS" content="FOLLOW">
<meta name="GOOGLEBOT" content="UNAVAILABLE_AFTER: 7 Jul 2007 16:30:00 GMT">

HTTP Header Implementation (X-Robots-Tag)

Allows developers to specify page-level REP directives for content types other than text/html, such as PDF, DOC, PPT, or dynamically generated images.

Using the X-Robots-Tag

To use the X-Robots-Tag, simply add it to your HTTP response headers as shown below. To specify multiple directives, you can either comma delimit them or add them as separate header lines (a response using the separate-line form is sketched after the directive list below).

HTTP/1.x 200 OK
Cache-Control: private
Content-Length: 2199552
Content-Type: application/octet-stream
Server: Microsoft-IIS/7.0
content-disposition: inline; filename=01 - The truth about SEO.ppt
X-Robots-Tag: noindex, nosnippet
X-Powered-By: ASP.NET
Date: Sun, 01 Jun 2008 19:25:47 GMT

The X-Robots-Tag supports most of the same directives as the meta tag. The only limitation of this method compared to the meta tag implementation is that there is no way to target a specific robot, though that probably isn’t a big deal for most use cases.

  • X-Robots-Tag: noindex
  • X-Robots-Tag: nosnippet
  • X-Robots-Tag: notranslate
  • X-Robots-Tag: noarchive
  • X-Robots-Tag: unavailable_after: 7 Jul 2007 16:30:00 GMT
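For instance, a hypothetical response for a PDF that uses the separate-line form might look like this (the directives chosen here are illustrative only):

HTTP/1.x 200 OK
Content-Type: application/pdf
X-Robots-Tag: noarchive
X-Robots-Tag: unavailable_after: 7 Jul 2007 16:30:00 GMT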

Content Level Implementation

You can further refine your site level and page level directives within several content tags.

Each anchor tag (link) can be modified to tell search engines that you do not vouch for the URL it points to. This is typically used for links within user generated content (UGC) like wikis, blog comments, reviews and other community sites.

<a href="#" rel="NOFOLLOW">My Hyperlink</a>

Also, in Yahoo! Search you can specify which <div> elements on a page you would not like indexed by using the class="robots-nocontent" attribute. However, we don’t highly recommend using this attribute because it is not supported by any other engine, making it not super-useful.

<div class="robots-nocontent">
No content for you! (or at least Yahoo!)
</div>

Common Mistakes

While implementing the REP is generally straightforward, there are a few common mistakes.

Googlebot follows the most specific user-agent section, ignoring all others

In the robots.txt file, if you specify a section for all user-agents (user-agent: *) and also declare a section for Googlebot (user-agent: Googlebot), Google will disregard all sections in the robots.txt file except the Googlebot section. This could potentially leave you exposing much more content to Google than you might have thought.

# This keeps out all well-behaved robots
User-agent: *
Disallow: /

# This looks like it is giving Google access to only this directory, but since it is a
# GoogleBot specific section, Google will disregard the previous section
# and access the whole site.
User-agent: Googlebot
Allow: /Content_For_Google/
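If the goal really is to limit Googlebot to that one directory, a sketch of the corrected file (reusing the same hypothetical folder name) repeats the block inside the Googlebot-specific section and relies on the longest-match rule described earlier:

# Googlebot may crawl only /Content_For_Google/; the Allow line wins for
# that folder because it is the longer match
User-agent: Googlebot
Allow: /Content_For_Google/
Disallow: /

# All other robots are still kept out entirely
User-agent: *
Disallow: /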

NOFOLLOW will most likely not prevent indexing

If you use NOFOLLOW at either the page or the link level, it is still possible for the links from the page to be indexed, because the search engine may have found a reference to them from another source. Note also that the search engines treat rel="NOFOLLOW" on a link as a recommendation, not a command. To ensure that content is not indexed, either use the Disallow directive at the site level or use NOINDEX at the page level.
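For instance, a minimal page-level sketch (NOINDEX alone keeps the page out of the index; adding NOFOLLOW is optional):

<!-- Keep this page out of the index and do not follow its links -->
<meta name="ROBOTS" content="NOINDEX, NOFOLLOW">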

Directives that are not recommended

Directives in the REP are all about exceptions; by default, the robots assume they can crawl your whole site. Therefore, you do not need to explicitly use the FOLLOW and INDEX directives, as they simply restate the default behavior. It sounds silly, but I’ve seen a few sites that have implemented these on every page and every link. Another directive that is not recommended is the NOCACHE directive. This was created by Microsoft and is synonymous with NOARCHIVE. While they will most likely continue to support the directive, it is better to use NOARCHIVE so it will work on all the search engines.

Be cognizant of case

When referencing files and URLs in the robots.txt file, use a defensive approach to URL case, as the major engines do not all handle it the same way (e.g. /Files does not always equal /files).
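If the same folder can be reached with more than one casing, a defensive (if inelegant) sketch is to block each variant explicitly; the paths here are hypothetical:

# Block both casings of the same hypothetical folder, since engines may
# treat them as different URLs
User-agent: *
Disallow: /files/
Disallow: /Files/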

Testing Your Implementation

As you’re implementing your REP design, you should test it both before you deploy it and after. The easiest way to test it is to use the robots.txt validator in either Google’s or Microsoft’s Webmaster Tools. These tools are generally good enough test beds for most folks; however, advanced developers (or paranoid ones with critical business requirements) will want to know definitively what the robots are doing, not simply rely on what the robots say they are doing. These folks will want to look at the search engines’ tools as well as their own server logs.

In addition to using validation tools, reviewing the search engines’ reports on what they couldn’t access, and looking at log data to see what the search engine robots are crawling, you should check the search engine results to see if any pages you intended to block are being indexed. If they are, use the methods described in this section to ensure you are blocking them correctly, and use the search engine tools to request that the pages be removed.

When Blocked Content Appears to be Indexed

If search engines are blocked from crawling pages, they may still index a URL if a robot finds a link to that URL on a page that isn’t blocked. The listing may display the URL only, as shown below.

[Image: search result listing showing the URL only]

Or, it may include a title and, in some instances, a description. This makes it appear as though the search engine robot is disregarding the directive that blocks access to the page, but the search engine is in fact obeying the directive not to crawl the page; it is using anchor text from links to that page and descriptive details from either the page that contains the link or a source such as the Open Directory Project.


The Easy Way

Both Google and Microsoft provide some tools as part of their Webmaster Centers to help you verify if you’ve configured your REP the way you expect. Let’s start with Google’s tools:

The first thing you should check is the list of URLs that Google has seen on your website but not indexed due to the REP. Note that you can also download the list and filter, sort, and have your way with it in Excel.

[Screenshot: Google Webmaster Tools report of URLs blocked by robots.txt]

The next step is to use their interactive robots.txt tool to analyze your rules and test specific URLs for blockage. When you pull up the tool, it should already be pre-populated with the robots.txt file Google retrieved the last time it crawled your site. You can input a list of URLs you’d like to check below it, select the user-agent you’d like to check against, and the tool will tell you whether they are blocked or not. You can also use the tool to test changes to your robots.txt file to see how Google would interpret them.

[Screenshot: Google Webmaster Tools robots.txt analysis tool]

Microsoft has a similar tool in their Webmaster Center that will validate a robots.txt file against the standard that MSNBot supports. To use the tool, simply log in, copy and paste your robots.txt file into the top field, and select Validate. A list of all detectable issues is displayed in the bottom box.

[Screenshot: Microsoft Webmaster Center robots.txt validator]

The Hard Way

More Accurate Views of Robot Access Through Your Logs

If you have a specific business need to ensure that the robots are following your rules (or you’re just paranoid), then you should not simply rely on the tools the engines provide to test compliance. You’ll need to go straight to the horse’s mouth and analyze your web server logs to see exactly what the robots are doing. There is no single easy tool for this; you’ll likely have to use an existing tool (such as the Microsoft HTTP Log Parser) or write your own. It isn’t difficult, but it will take some time to implement. Useful references for this are the list of robot user agents and the more complete lists of bots published by Google and Microsoft.

Verifying Robot Identity

Another thing you’ll likely want to consider in this endeavor is validating that the robots are who they say they are. Google, Yahoo and Microsoft all support reverse DNS authentication of their robots, and each has documented the process. It is pretty simple: take the IP address of the robot from your logs, do a reverse DNS lookup to get its host name, confirm that the host name belongs to the search engine’s documented crawler domain, and then do a forward DNS lookup on that host name to confirm it resolves back to the original IP address. This way, if the crawlers’ addresses change (which they will), you don’t need to update your code with hard-coded IP ranges. A minimal sketch of this check appears below.
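Here is a minimal sketch of that check, written in Python with the standard library socket module. The googlebot.com suffix is used as the default example; substitute each engine's documented crawler domains when verifying other robots.

import socket

def is_verified_crawler(ip_address, allowed_suffixes=(".googlebot.com",)):
    """Reverse/forward DNS check for a crawler IP address taken from your logs."""
    try:
        # 1. Reverse lookup: IP address -> host name
        hostname, _aliases, _ips = socket.gethostbyaddr(ip_address)
    except (socket.herror, socket.gaierror):
        return False  # no reverse DNS record; treat as unverified
    # 2. The host name must end in one of the engine's documented domains
    if not hostname.endswith(tuple(allowed_suffixes)):
        return False
    try:
        # 3. Forward lookup: host name -> IP addresses
        _name, _aliases, forward_ips = socket.gethostbyname_ex(hostname)
    except socket.gaierror:
        return False
    # The original IP must appear in the forward lookup results
    return ip_address in forward_ips

Pass it an IP address string pulled from your logs; anything that fails the round trip should be treated as unverified, no matter what user-agent string it sent.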

Should you find any issues where one of the robots is not minding the REP, or is misbehaving in some other way, you can always communicate directly with each engine through their webmaster forums.

Removing Content From Search Engine Indices

If you find that you haven’t implemented the techniques described here correctly and private content from your site is indexed, each of the major search engines has methods available, through their webmaster tools, for requesting that it be removed.


Revision History

  • 02/12/2009 – Google, Yahoo and Microsoft make a joint announcement of the rel="canonical" tag to make it easier for publishers to specify canonical URLs.
  • 06/04/2009 – Added the NOPREVIEW tag announced this week by Microsoft, used to disable the ‘hover preview’ feature on their SERP.
  • 03/13/2013 – Removed Yahoo references because Yahoo search is now powered by Microsoft, and renamed Live Search to Bing.