Google Works To Make Robots Exclusion Protocol A Real Standard

Google’s webmaster channel has been publishing a series of posts every hour about the Robots Exclusion Protocol – in short, an hour ago Google announced that after 25 years as a de-facto standard, it has worked with Martijn Koster, webmasters, and other search engines to make the Robots Exclusion Protocol an official standard.

Here are the posts, starting at 3am and continuing every hour thus far:

Google said “it doesn’t change the rules created in 1994, but rather defines essentially all undefined scenarios for robots.txt parsing and matching, and extends it for the modern web. Notably:”

  • Any URI based transfer protocol can use robots.txt. For example, it’s not limited to HTTP anymore and can be used for FTP or CoAP as well.
  • Developers must parse at least the first 500 kibibytes of a robots.txt. Defining a maximum file size ensures that connections are not open for too long, alleviating unnecessary strain on servers.
  • A new maximum caching time of 24 hours, or the cache directive value if available, gives website owners the flexibility to update their robots.txt whenever they want, and ensures crawlers aren’t overloading websites with robots.txt requests. For example, in the case of HTTP, Cache-Control headers could be used to determine the caching time (see the sketch after this list).
  • The specification now provisions that when a previously accessible robots.txt file becomes inaccessible due to server failures, known disallowed pages are not crawled for a reasonably long period of time.

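To make the size and caching provisions above more concrete, here is a minimal sketch of how a crawler might apply them – it is not Google's implementation, the URL is a placeholder, and treating the Cache-Control max-age (falling back to 24 hours) as the re-fetch interval is just one reasonable reading of the caching rule:

```python
import re
import urllib.request

MAX_ROBOTS_BYTES = 500 * 1024      # spec: parse at least the first 500 kibibytes
DEFAULT_CACHE_SECONDS = 24 * 3600  # spec: default maximum caching time of 24 hours


def fetch_robots_txt(url="https://example.com/robots.txt"):
    """Fetch a robots.txt file, truncating the body and deriving a cache lifetime."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read(MAX_ROBOTS_BYTES)  # ignore anything beyond 500 KiB
        cache_control = resp.headers.get("Cache-Control", "")

    # Use the Cache-Control max-age directive when present; otherwise
    # fall back to the 24-hour default from the specification.
    match = re.search(r"max-age=(\d+)", cache_control)
    ttl = int(match.group(1)) if match else DEFAULT_CACHE_SECONDS

    return body.decode("utf-8", errors="replace"), ttl


if __name__ == "__main__":
    rules, ttl = fetch_robots_txt()
    print(f"Fetched {len(rules)} characters; re-fetch after {ttl} seconds")
```

The point of the sketch is simply that both limits are crawler-side behaviors: the 500 KiB cap bounds how much of the file is read, and the caching rule bounds how often the file is re-requested.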
This was a big deal for the folks at Google and their partners to make happen.

Just to be clear – nothing is changing for you with this announcement.
