EveryFreeTool

Robots.txt Generator

Build a perfect robots.txt file with visual rules, templates, and real-time preview. Validate existing files and test specific paths. Free, instant, no signup.

Quick Templates

1. Site URL

2. Rule Builder

3. Additional Directives (only Bing/Yandex respect the Crawl-delay directive)

Live Preview

robots.txt
# robots.txt generated by EveryFreeTool.com
# April 1, 2026

User-agent: *
Disallow:

Pro Tips

Always include a wildcard

Start with User-agent: * to set default rules. Without it, unlisted bots have no restrictions and no guidance.

Add a Sitemap directive

The Sitemap line helps search engines discover your sitemap without needing to check /sitemap.xml manually.

Block AI training bots

Add separate User-agent blocks for GPTBot, Google-Extended, ClaudeBot, and CCBot with Disallow: / to opt out of AI training.

Test before deploying

Use the Validate tab to check syntax and test specific paths. Google Search Console also has a robots.txt tester.
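As a local sketch of path testing (separate from the Validate tab), Python's standard urllib.robotparser can check whether a path is allowed under a set of rules. The rules and URLs below are hypothetical examples:

```python
from urllib import robotparser

# Hypothetical rules mirroring a simple generated robots.txt
rules = """\
User-agent: *
Disallow: /admin/
Disallow: /api/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Check specific paths before deploying
print(rp.can_fetch("*", "https://example.com/admin/settings"))  # False
print(rp.can_fetch("*", "https://example.com/blog/post"))       # True
```

Note that robotparser does not implement Google's wildcard or longest-match semantics, so treat this as a rough check, not a substitute for Google's own tester.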

Last updated: March 2026

What Is the Robots.txt Generator?

This free robots.txt generator helps you create a properly formatted robots.txt file for your website in seconds. Instead of writing the file manually and risking syntax errors, the visual rule builder lets you add user-agent blocks, set Allow and Disallow directives, and configure additional settings like Sitemap and Crawl-delay, all with a live preview that updates as you type.

The tool includes six pre-built templates for common scenarios: standard websites, WordPress sites, e-commerce stores, AI bot blocking, maximum SEO, and development/staging environments. Select a template and it instantly populates all the rules, which you can then customize to your needs.

How It Works

Step 1: Enter your site URL. This is used to auto-fill the Sitemap directive with your sitemap.xml location. You can edit the sitemap URL manually if it is at a different path.

Step 2: Build your rules. Add user-agent blocks for different crawlers and set Allow or Disallow rules for each. Use the common paths dropdown to quickly add standard paths like /admin, /api, or /wp-admin. Add as many user-agent blocks and rules as you need.

Step 3: Configure additional directives. Set Crawl-delay for Bing and Yandex, add a Sitemap URL, and optionally specify a Host directive. The live preview on the right shows your complete robots.txt with syntax highlighting.

Step 4: Download or copy. Download the file as robots.txt, copy the content to your clipboard, or copy with the auto-generated header comments included. Upload the file to the root of your web server.
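Following the four steps with hypothetical paths and a placeholder domain, the finished file might look like this:

```
# Generated robots.txt (example paths and domain)
User-agent: *
Disallow: /admin
Disallow: /api

User-agent: Bingbot
Crawl-delay: 5

Sitemap: https://example.com/sitemap.xml
```

Note that a crawler uses only the most specific User-agent group that matches it, so a bot with its own block (like Bingbot above) does not also inherit the `*` rules.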

Blocking AI Bots in 2026

As AI companies train large language models on web content, many website owners want to opt out. The robots.txt file is the standard mechanism for this. The key bots to block are GPTBot (OpenAI/ChatGPT), Google-Extended (Google AI training, separate from Googlebot), ClaudeBot (Anthropic), and CCBot (Common Crawl, used by many AI companies).

Our "Block AI Training" template adds Disallow: / rules for all four of these bots while keeping your site fully accessible to regular search engine crawlers. Keep in mind that robots.txt is a voluntary protocol: it relies on bots choosing to respect it.
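The template's output is a set of per-bot blocks along these lines:

```
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /
```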

Frequently Asked Questions

What is a robots.txt file?

A robots.txt file is a plain text file placed at the root of your website (e.g., yoursite.com/robots.txt) that tells search engine crawlers which pages or sections they can and cannot access. It follows the Robots Exclusion Protocol, which all major search engines respect. It does not prevent pages from being indexed if they are linked from other sites; use a noindex meta tag for that.
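For pages that must stay out of search results, the noindex signal goes in the page itself rather than in robots.txt. A minimal example:

```html
<meta name="robots" content="noindex">
```

A crawler has to be able to fetch the page to see this tag, so do not also block the page in robots.txt.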

How do I block AI bots like ChatGPT and Claude from crawling my site?

Add separate User-agent blocks for GPTBot (OpenAI), Google-Extended (Google AI training), ClaudeBot (Anthropic), and CCBot (Common Crawl) with Disallow: /. Our "Block AI Training" template does this automatically. Note that this only blocks bots that respect robots.txt; it is a signal, not a technical barrier.

Where do I put the robots.txt file?

Upload robots.txt to the root directory of your website so it is accessible at https://yoursite.com/robots.txt. It must be at the root; placing it in a subdirectory will not work. Most web hosts let you upload files via FTP, file manager, or your CMS. For platforms like WordPress, plugins such as Yoast SEO manage it automatically.

Does robots.txt affect SEO?

Yes, indirectly. Robots.txt controls which pages search engines can crawl, which affects what gets indexed. Blocking important pages accidentally can remove them from search results. Conversely, blocking low-value pages (like admin panels, search results, or duplicate content) helps search engines focus their crawl budget on your best content.

What is the difference between Allow and Disallow?

Disallow tells crawlers not to access a specific path or pattern. Allow explicitly permits crawling of a path, which is useful for creating exceptions within a broader Disallow rule. For example, you might Disallow: /wp-admin/ but Allow: /wp-admin/admin-ajax.php. When rules conflict, the most specific (longest) path match wins.
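In file form, the WordPress exception from the answer looks like this; the Allow rule wins for admin-ajax.php because its path match is longer than /wp-admin/:

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
```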

What is Crawl-delay and should I use it?

Crawl-delay tells crawlers to wait a specified number of seconds between requests. Google ignores this directive entirely (use Google Search Console to adjust crawl rate instead). Bing and Yandex respect it. Only use Crawl-delay if your server is slow or under heavy load; otherwise it unnecessarily slows down indexing.
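If you do need it, scope Crawl-delay to the crawlers that honor it rather than setting it globally; 10 seconds here is an illustrative value, not a recommendation:

```
User-agent: Bingbot
Crawl-delay: 10
```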

Related Tools