What Is Robots.txt?
Learn what a robots.txt file is, how search engine robots read a website, and how you can use the file to improve your SEO.
What is Robots.txt? A robots.txt file tells search engine crawlers, or "robots," which parts of a website they may crawl when indexing and ranking it.
This file is very useful for website owners, since it can prevent crawlers from indexing content that negatively affects search engine ranking, such as duplicate pages or certain scripts.
This brief guide will cover the basics of search engine crawlers and robots.txt files, and how search engines use the two to rank websites.
Start with this step-by-step guide:
1. How "robots" view a website
2. Robots.txt and search engine crawlers
3. Creating and formatting a robots.txt file
1. How "robots" view a website

Also known as "robots," search engine crawlers regularly scan websites for factors that affect search engine ranking. While these factors are always changing, most websites have components that could harm their rank if seen by a crawler. For example, with content quality being a major ranking factor in search engine optimization (SEO), websites with duplicate content often receive a lower search engine ranking. However, not all duplicate content is bad: some websites need duplicate content in order to run certain variants of their website, such as a printable version of a page. With this in mind, it's easy to see why website owners might want a crawler to ignore certain parts of their website.
2. Robots.txt and search engine crawlers

At a basic level, most crawlers tend to avoid any page with a robots metatag. However, these can be unreliable since many search engines will ignore metadata when indexing. As a result, websites should use a robots.txt file to send crawlers in the right direction.
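For reference, a robots metatag is a short snippet placed in a page's HTML head. A common form, which asks crawlers not to index that one page, looks like this:

```html
<!-- Placed inside the page's <head>; asks compliant crawlers not to index this page -->
<meta name="robots" content="noindex">
```

Unlike robots.txt, this tag must be added page by page, which is one reason a single site-wide file is usually more practical.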
With the robots.txt file, website owners can choose which parts of their websites crawlers can index. From the robot's perspective, a robots.txt file – with the right formatting – is essentially a "do not disturb" sign for certain areas of the website. Of course, this only stops well-behaved crawlers that honor the file; anyone and anything else can still access "off-limits" pages, so robots.txt files should not be used as a security measure.
3. Creating and formatting a robots.txt file

Thankfully, making a robots.txt file is a relatively simple process. For first-time website owners, the coding process may seem daunting, but it's very easy with the help of a number of tools. For the sake of understanding robots.txt files, however, it's important to take a moment to understand how the file tells robots what they cannot index.
Robots.txt files have two basic directives: "User-agent" and "Disallow," which refer to a search engine crawler and one or several directories of a website, respectively. In the example below, the robots.txt file blocks all crawlers, regardless of search engine, from indexing pages under the "example" directory. In this example, bear in mind that the input "*" refers to "all."
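Based on that description, the file might look like the following sketch (the "example" directory is a placeholder):

```
User-agent: *
Disallow: /example/
```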
This is a relatively simple example; however, it is possible to specify more detailed rules. These may target a particular search engine crawler (for example, "Googlebot" refers to Google's search engine crawler) or a specific selection of directories. However, robots.txt files are more likely to develop errors as they become increasingly complicated.
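A file combining these options might look like the following sketch, where the directory names are placeholders: Google's crawler is blocked from two directories, while all other crawlers are blocked only from the "example" directory.

```
User-agent: Googlebot
Disallow: /print/
Disallow: /scripts/

User-agent: *
Disallow: /example/
```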
Creating a robots.txt file is as simple as opening a basic text editor, such as Notepad on Windows, and saving the rules as a plain-text file named "robots.txt" in the website's root directory. There are also a number of tools available online that can check for errors in robots.txt files, ensuring that everything goes according to plan and crawlers only index the right parts of the website.
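One quick way to sanity-check a robots.txt file yourself is Python's built-in urllib.robotparser module, which applies the same rules a well-behaved crawler would. The rules and paths below are placeholders matching the simple example discussed earlier:

```python
from urllib import robotparser

# Rules blocking every crawler from the "example" directory
rules = """\
User-agent: *
Disallow: /example/
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# can_fetch() reports whether a given user agent may crawl a given path
print(parser.can_fetch("*", "/example/page.html"))  # False: blocked
print(parser.can_fetch("*", "/index.html"))         # True: allowed
```

Running a few known-good and known-blocked paths through a check like this is an easy way to confirm the file does what you intended before uploading it.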