Robots.txt Generator

Create a robots.txt file to control how search engines crawl your website.

Sitemap URL (optional): used for the Sitemap directive in the generated robots.txt file. Leave it empty if you don't have a sitemap.
Note: Google and Bing typically ignore the Crawl-delay directive

About the Robots.txt Generator Tool

This free tool helps you create a robots.txt file for your website. A robots.txt file tells search engine crawlers which pages or files they can or can't request from your site.

What is Robots.txt?

The robots.txt file is part of the Robots Exclusion Protocol (REP), a group of web standards that regulate how robots crawl the web, access and index content, and serve that content up to users. The REP also includes directives like meta robots, as well as page-, subdirectory-, or site-wide instructions for how search engines should treat links (such as "follow" or "nofollow").
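For example, the simplest robots.txt applies a single rule set to every crawler; the /private/ path below is a placeholder:

```
# Rules for all crawlers
User-agent: *
# Block the (hypothetical) /private/ directory
Disallow: /private/
```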

Common Uses of Robots.txt:

  • Block Crawling of Private Content: Keep search engine crawlers out of admin pages, login areas, and other private sections
  • Crawl Rate Control: Specify a crawl delay to control how quickly bots can crawl your site
  • Sitemap Declaration: Tell search engines where to find your XML sitemap
  • Resource Management: Block crawling of resource-heavy pages or prevent duplicate content issues
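The uses above can be sketched in one file; the domain and paths here (example.com, /admin/, /print/) are placeholders:

```
User-agent: *
# Keep crawlers out of the admin area
Disallow: /admin/
# Avoid duplicate content from printer-friendly pages (hypothetical path)
Disallow: /print/
# Ask crawlers to wait 10 seconds between requests (ignored by Google and Bing)
Crawl-delay: 10

# Point crawlers at the XML sitemap
Sitemap: https://example.com/sitemap.xml
```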

Important Considerations:

  • Robots.txt is not a security measure - it only provides instructions that well-behaved crawlers will follow
  • Disallowing pages in robots.txt doesn't guarantee they won't be indexed if linked from other pages
  • For sensitive content, use password protection or meta robots tags with "noindex"
  • Be careful with blocking - an incorrectly configured robots.txt file can harm your SEO by preventing search engines from crawling important pages
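The "noindex" approach mentioned above is a meta tag placed in a page's HTML head. Note that the page must remain crawlable for search engines to see the tag, so it should not also be disallowed in robots.txt:

```
<meta name="robots" content="noindex">
```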

Robots.txt Syntax:

  • User-agent: Specifies which crawler the rules apply to (e.g., Googlebot, Bingbot, or * for all)
  • Disallow: Tells the crawler not to access specific pages or directories
  • Allow: Explicitly allows the crawler to access a page or directory (even if within a disallowed section)
  • Crawl-delay: Suggests how many seconds a crawler should wait between requests
  • Sitemap: Tells search engines where to find your XML sitemap
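As a sketch of how these directives interact, Python's standard-library urllib.robotparser can evaluate a rule set. The rules below are illustrative; note that Python's parser applies the first matching rule, so the more specific Allow line is listed before the broader Disallow (Google, by contrast, uses most-specific-match precedence):

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt: /admin/ is blocked, except for one help page
rules = """
User-agent: *
Allow: /admin/help.html
Disallow: /admin/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("*", "https://example.com/admin/help.html"))  # True
print(parser.can_fetch("*", "https://example.com/admin/settings"))   # False
print(parser.can_fetch("*", "https://example.com/blog/post"))        # True
```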

Privacy Information

This tool operates entirely in your browser. We do not store your settings or the generated robots.txt content on our servers.