Better Robots.txt Rules for WordPress
Cleaning up my files during the recent redesign, I realized that several years had somehow passed since the last time I even looked at the site's robots.txt file. I guess that's a good thing, but with all of the changes to site structure and content, it was time again for a delightful romp through robots.txt.
This post summarizes my research and gives you a near-perfect robots file, so you can copy/paste it completely "as-is", or use it as a template and starting point for your own customization.
Robots.txt in 30 seconds
Primarily, robots directives disallow obedient spiders access to specified parts of your site. They can also explicitly "allow" access to specific files and directories. So basically they're used to let Google, Bing et al know where they can go when visiting your site. You can also do nifty stuff like instruct specific user-agents and declare sitemaps. For just a simple text file, robots.txt wields considerable power. And we want to use whatever power we can get to our greatest advantage.
Better robots.txt for WordPress
Running WordPress, you want search engines to crawl and index your posts and pages, but not your core WP files and directories. You also want to make sure that feeds and trackbacks aren’t included in the search results. It’s also good practice to declare a sitemap. With that in mind, here are the new and improved robots.txt rules for WordPress:
User-agent: *
Disallow: /wp-admin/
Disallow: /trackback/
Disallow: /xmlrpc.php
Disallow: /feed/
Allow: /wp-admin/admin-ajax.php
Sitemap: https://example.com/sitemap.xml
Only one small edit is required: change the Sitemap line to match the location of your sitemap (or remove the line if no sitemap is available).
I use this exact code on nearly all of my major sites. It’s also fine to customize the rules, say if you need to exclude any custom directories and/or files, based on your actual site structure and SEO strategy.
To add the robots rules code to your WordPress-powered site, just copy/paste the code into a blank file named robots.txt. Then add the file to your web-accessible root directory, for example: https://example.com/robots.txt
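Once the file is in place, you can sanity-check the rules before going live. Here is a quick sketch using Python's standard urllib.robotparser module (the example.com URLs are hypothetical):

```python
from urllib import robotparser

# The "better robots" rules from above (Sitemap line omitted,
# as it does not affect allow/disallow matching)
rules = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /trackback/
Disallow: /xmlrpc.php
Disallow: /feed/
Allow: /wp-admin/admin-ajax.php
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Regular posts and pages remain crawlable
print(parser.can_fetch("*", "https://example.com/sample-post/"))  # True

# Core, trackback, and feed URLs are blocked
print(parser.can_fetch("*", "https://example.com/wp-admin/"))  # False
print(parser.can_fetch("*", "https://example.com/feed/"))      # False
```

Keep in mind that robotparser applies rules in the order listed rather than using Google's longest-match evaluation, so treat this as a rough check, not an exact model of googlebot.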
If you take a look at the contents of the robots.txt file for Perishable Press, you’ll notice an additional robots directive that forbids crawl access to the site’s blackhole for bad bots. Let’s have a look:
User-agent: *
Disallow: /wp-admin/
Disallow: /trackback/
Disallow: /xmlrpc.php
Disallow: /feed/
Disallow: /blackhole/
Allow: /wp-admin/admin-ajax.php
Sitemap: https://perishablepress.com/wp-sitemap.xml
Spiders don't need to be crawling around anything in /wp-admin/, so that's disallowed. Likewise, trackbacks, xmlrpc, and feeds don't need to be crawled, so we disallow those as well. Also, notice the explicit Allow directive that grants access to the WordPress Ajax file, so crawlers and bots can reach any Ajax-generated content. Lastly, we make sure to declare the location of our sitemap, just to make it official.
Notes & Updates
Update! The following directives have been removed from the tried and true robots.txt rules in order to appease Google's new requirement that googlebot always be allowed complete crawl access to any publicly available file.
Disallow: /wp-content/
Disallow: /wp-includes/
Apparently Google is so hardcore about this new requirement [1] that they are actually penalizing sites (a LOT) for non-compliance [2]. Bad news for hundreds of thousands of site owners who have better things to do than keep up with Google's constant, often arbitrary changes.
- [1] Google demands complete access to all publicly accessible files.
- [2] Note that it may be acceptable to disallow bot access to /wp-includes/ for other (non-Google) bots. Do your research, though, before making any assumptions.
Previously on robots.txt..
As mentioned, my previous robots.txt file went unchanged for several years (which just vanished in the blink of an eye). The previous rules proved quite effective, especially with compliant spiders like googlebot. Unfortunately, the file contained language that only a few of the bigger search engines understand (and thus obey). Consider the following robots rules, which were used here at Perishable Press way back in the day.
User-agent: *
Disallow: /mint/
Disallow: /labs/
Disallow: /*/wp-*
Disallow: /*/feed/*
Disallow: /*/*?s=*
Disallow: /*/*.js$
Disallow: /*/*.inc$
Disallow: /transfer/
Disallow: /*/cgi-bin/*
Disallow: /*/blackhole/*
Disallow: /*/trackback/*
Disallow: /*/xmlrpc.php
Allow: /*/20*/wp-*
Allow: /press/feed/$
Allow: /press/tag/feed/$
Allow: /*/wp-content/online/*
Sitemap: https://perishablepress.com/sitemap.xml

User-agent: ia_archiver
Disallow: /
Apparently, the wildcard character isn't recognized by lesser bots, and I'm thinking that the end-pattern symbol (dollar sign $) is probably not well-supported either, although Google certainly gets it.
These patterns may be better supported in the future, but going forward there is no reason to include them. As seen in the “better robots” rules (above), the same pattern-matching is possible without using wildcards and dollar signs, enabling all compliant bots to understand your crawl preferences.
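For what it's worth, Google's handling of these two metacharacters is simple to model: * matches any run of characters, and a trailing $ anchors the match to the end of the URL. The following Python sketch (the function name and sample paths are my own, purely illustrative) shows how a Googlebot-style matcher would treat a couple of the old patterns:

```python
import re

def matches(pattern, path):
    """Googlebot-style robots.txt pattern match:
    '*' matches any characters; a trailing '$' anchors to the end."""
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"  # honor the end-of-URL anchor
    return re.match(regex, path) is not None

# The end-anchored rule matches .js files themselves...
print(matches("/*/*.js$", "/press/demo.js"))        # True
# ...but not URLs that merely contain ".js" mid-path
print(matches("/*/*.js$", "/press/demo.js?ver=2"))  # False
# Wildcards span any path segment
print(matches("/*/wp-*", "/press/wp-content/"))     # True
```

Bots that don't implement these semantics simply treat the whole pattern as a literal prefix, which is why the old rules were effectively invisible to lesser crawlers.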
Check out the following recommended sources to learn more about robots.txt, SEO, and more:
WordPress has a hook to modify the robots.txt data programmatically, I think. Would be nice to have this as a plugin that could be updated as you improve the method. A more advanced plugin would allow for turning rules on & off as desired, adding custom rules, etc.
That’s a great idea, I wish I had the time!
Now that I think about it, there may already be a plugin that does this to some degree, but if not, somebody should definitely do it.
Yep, it’s the do_robots hook.
I add a few rules using a standard plugin I put in the must-use directory (/wp-content/mu-plugins/). I might add a few more from Jeff's post above once I've had the chance to consider it.
It’s tuned for a WordPress Network but I’ve added it to pastebin anyway. http://wordpress.pastebin.com/j9W2JYTr
Not so sure that blocking the feed is a great move. Google is generally pretty good at parsing feed content.
It's a close call, with duplicate content vs having your feed indexed. Unless feed content is different than the site content, blocking /feed/ is a good move because it preserves page rank and keeps the focus on the site.
You’ll see in my previous robots file that I allowed the main feed to be indexed. These days however, I’m trying to keep duplicate content down to a minimum.
I really doubt that. Google crawls RSS feeds for Google Reader and it knows it’s RSS or ATOM. It even finds your RSS feeds and allows you to add them as sitemaps in Google Webmaster Tools.
And that's why there is rel="alternate" in the link to the feed (in head).
Perhaps, but eliminating duplicate content in the search index should take precedence over a bit of convenience in the Webmaster Tools area.
rel="alternate" is meaningless if your feed content is identical to your blog content, which is the case 99.99% of the time.
Yeah, but it’s often duplicate content and you’d prefer someone land on the html article than an xml feed from a search query.
There are other considerations, of course, but that’s typically why I disallow feeds.
Now that’s a clean and mean robots.txt file!
Glad you eventually buried the paranoia that must have gripped you back in the day ;)
UR so a dude, dude.
Great post. Could you elaborate on why you use Disallow: ?wptheme= ? I've not seen that one before. Is it a directive specific to your particular theme?
The ?wptheme= string is for the WP Theme Switch plugin. The goal of course is to keep duplicate versions of your site out of the search index.
Jeff, thank you. May I ask one more question? What is the rationale for disallowing xmlrpc.php? My understanding is that it is an API primarily for remote publishing, so I am unclear on what a crawler would glean from it. Thanks in advance for your thoughts…
As far as I know, there is no reason the xmlrpc.php file needs to be crawled and indexed. The API is there for scripts and apps to work with directly. Disallowing robots access in no way affects the xmlrpc functionality.
Why disallow sitemap? Isn’t the whole point of the sitemap so spiders can crawl it?
Nope, you want spiders to crawl your canonical content: posts, pages, etc.
Unless feed content is different than the site content, blocking /feed/ is a good move because it preserves page rank and keeps the focus on the site.
Where do posts and pages actually live? In /wp-content/ ?
In the database, and the URLs are dynamically generated by WordPress.
What's /wp-content/online/ that you allow it?
The /online/ directory houses demos, scripts, and other assets for articles and such.
It’s a directory you created for your own site, correct? It’s not universal for all WP installations.
If you disallow ia_archiver how is it that according to the SearchStatus Firefox addon, you still have an Alexa rating?
Maybe because they don't obey robots.txt..? Remember, robots rules are merely suggestions.
What should the file permission for robots.txt be?
rw-r--r-- (644) should work fine :)
Hi Jeff, thanks for this great post. I have already modified my robots.txt file according to this post. Thanks a lot.