robots.txt Generator
Generate robots.txt files to control search engine crawler access
Quick Templates
Crawler Rules
Additional Settings
Entering a sitemap URL will add a Sitemap directive to the robots.txt.
Sets the crawler request interval in seconds. (optional)
Specifies the preferred domain via the Host directive. Supported by some crawlers only. (optional)
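For illustration, the optional settings above correspond to directives like the following; the URL and values are placeholders, not defaults:

```
Sitemap: https://example.com/sitemap.xml
Crawl-delay: 10
Host: example.com
```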
Generated robots.txt
Copy the above content and save it as a robots.txt file in your website's root directory.
What is robots.txt?
robots.txt is a text file located in the root directory of a website that tells search engine crawlers (bots) which pages they can crawl. Through this file, you can allow or block crawler access to specific directories or files. It plays an important role in SEO optimization by reducing server load and preventing unnecessary pages from appearing in search results.
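As a sketch, a minimal robots.txt that allows crawling of the whole site except one directory might look like this (the path and sitemap URL are placeholders):

```
User-agent: *
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```

`User-agent: *` applies the rules to all crawlers; a more specific `User-agent` line (e.g. `Googlebot`) would scope rules to a single crawler.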
Frequently Asked Questions
The robots.txt file must be stored in the root directory of your website. For example, it should be accessible at https://example.com/robots.txt. Crawlers will not recognize it if stored in a subdirectory.
No. robots.txt only provides recommendations to crawlers and is not enforceable. Malicious crawlers can ignore it. To completely exclude a page from search results, you should use the noindex directive in the meta robots tag or the HTTP X-Robots-Tag header.
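The two exclusion mechanisms mentioned above look like this in practice (shown as generic examples, not output of this tool):

```
<!-- In the page's HTML <head>: -->
<meta name="robots" content="noindex">
```

```
# Sent as an HTTP response header:
X-Robots-Tag: noindex
```

Note that for either to work, the page must remain crawlable: if robots.txt blocks the URL, crawlers never fetch it and never see the noindex signal.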
Crawl-delay is not a standard directive and is not supported by all crawlers. Crawlers such as Bing and Yandex honor it, but Google ignores it. To control Google's crawl rate, use Google Search Console instead.
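For crawlers that do support it, the directive is typically placed inside a User-agent group, and the value is generally interpreted as a delay in seconds between successive requests (exact behavior varies by crawler):

```
User-agent: Bingbot
Crawl-delay: 10
```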
