🤖 SEO Tool

Robots.txt Generator

Free online robots.txt generator. Build your robots.txt file visually with Allow, Disallow and Crawl-delay rules, validate it against Google's guidelines, and start from WordPress and Shopify presets. No login required.

Visual rule builder · Google validation · WordPress & Shopify presets · Multiple user agents · Live preview · robots.txt checker
Free Robots.txt Generator

Build Your robots.txt File Visually

🏷
Generate your SEO meta tags too
Use the free Meta Tag Generator to create SEO title, description, Open Graph and Twitter Card tags with a live SERP snippet preview.
Meta Tag Generator →
Key features

Everything the Robots.txt Generator Does

🤖
Visual Rule Builder
Add Allow, Disallow and Crawl-delay rules through a clean point-and-click interface - no manual syntax needed.
Google Validation
Real-time validation warns about conflicting rules, incorrect Disallow: / usage and Googlebot-incompatible directives.
📋
6 Platform Presets
Load instant presets for WordPress, Shopify, generic eCommerce, block all, allow all and block AI crawlers.
🤖
Multiple User Agents
Add separate rule sections for Googlebot, Bingbot, GPTBot and any other crawler with different instructions per agent.
📍
Sitemap Declaration
Add one or more Sitemap: directives so search engines can discover your XML sitemap directly from robots.txt.
👁
Syntax-Highlighted Preview
Live preview renders your robots.txt with colour-coded directives - purple for User-agent, green for Allow, red for Disallow.
One-Click Download
Download the finished robots.txt file directly to your computer, ready to upload to your website root.
📋
Copy to Clipboard
Copy the complete robots.txt output with one click for pasting directly into your server or CMS.
💡
Crawl-delay Support
Add Crawl-delay directives for crawlers that respect them, with a note that Googlebot ignores this directive.
How to use

How to Use the Robots.txt Generator

1
Choose a preset
Select WordPress, Shopify, block AI crawlers or another platform preset from the dropdown to load a recommended starting configuration instantly.
2
Review user-agent sections
Each User-agent block controls which crawler the following rules apply to. Use * for all crawlers or enter a specific name like Googlebot.
3
Add or edit rules
Click + Add Rule inside any user-agent section. Choose Disallow (block), Allow (permit) or Crawl-delay and enter the path or value.
4
Add your sitemap URL
In the Sitemap URLs section, enter the full URL of your XML sitemap so crawlers can discover all your pages efficiently.
5
Check the validation panel
Review the validation panel on the right. Fix any red errors before downloading. Yellow warnings explain Googlebot behaviour differences.
6
Download your robots.txt
Click Download robots.txt to save the file to your computer, then upload it to the root directory of your website at yourdomain.com/robots.txt. A complete example of a finished file is shown below.
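For reference, the steps above produce a file like the following - a minimal sketch assuming a site that blocks an admin area for all crawlers, gives Googlebot one extra rule and declares a single sitemap (all paths and URLs are illustrative):

# Example generated robots.txt (illustrative paths)
User-agent: *
Disallow: /admin/
Allow: /admin/public/

User-agent: Googlebot
Disallow: /search

Sitemap: https://example.com/sitemap.xml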
Competitor comparison

Robots.txt Generator: LazyTools vs Competitors

See how LazyTools compares to other popular tools. Our free robots.txt generator is the only option that combines all key features with no login required and complete browser-side privacy.

Feature                  | LazyTools       | Google Search Console | Yoast SEO Plugin   | seoptimer.com
Visual rule builder      | Yes             | No (text only)        | Basic              | No
Validation warnings      | Yes (real-time) | Basic                 | No                 | No
Platform presets         | Yes (6 presets) | No                    | WordPress only     | No
Block AI crawlers preset | Yes             | No                    | No                 | No
Multiple user agents     | Yes             | Yes                   | Limited            | No
Sitemap directive        | Yes             | Yes                   | Yes                | Yes
Download file            | Yes             | No                    | Yes (via plugin)   | Yes
No login required        | Yes             | Requires account      | Requires WordPress | Yes
Syntax guide

Robots.txt Syntax and Directives Explained

Directive   | Syntax                                   | Effect                                                                           | Googlebot support
User-agent  | User-agent: *                            | Specifies which crawler the following rules apply to. * means all crawlers.      | Yes
Disallow    | Disallow: /path/                         | Tells the crawler not to crawl this path or URL. An empty value means allow all. | Yes
Allow       | Allow: /path/                            | Creates an exception to a Disallow rule. The more specific path wins.            | Yes
Sitemap     | Sitemap: https://example.com/sitemap.xml | Tells crawlers the location of your XML sitemap. Can appear multiple times.      | Yes
Crawl-delay | Crawl-delay: 10                          | Requests crawlers wait N seconds between requests. Googlebot ignores this.       | No (use Search Console)
# Comment   | # This is a comment                      | Lines starting with # are ignored by crawlers. Useful for documentation.         | Ignored

How Allow and Disallow interact

When an Allow and a Disallow rule conflict, the more specific rule wins (the one with the longer path). For example, if you have Disallow: /admin/ and Allow: /admin/public/, Googlebot will not crawl anything in /admin/ except /admin/public/. If two rules are the same length, Googlebot uses the Allow rule.

# Correct: Allow overrides Disallow for a specific sub-path
User-agent: *
Disallow: /admin/
Allow: /admin/public/   # This works -- longer path wins

# Common mistake: Disallow: / blocks EVERYTHING
User-agent: *
Disallow: /   # Blocks all pages -- entire site hidden from crawlers

What robots.txt cannot do

Robots.txt controls crawling, not indexing. A page blocked in robots.txt can still appear in search results if it is linked from other crawlable pages, because Google can infer the page exists without crawling it. To prevent a page from being indexed, use a noindex meta tag or X-Robots-Tag HTTP header instead. Additionally, robots.txt only applies to well-behaved crawlers that follow the protocol. Malicious bots ignore it entirely.
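For illustration, the two options mentioned above look like this - a page-level meta tag in the HTML head, or a response header set in your server configuration (how you set the header depends on your stack):

<!-- In the page's <head>: allow crawling but prevent indexing -->
<meta name="robots" content="noindex">

# Or as an HTTP response header:
X-Robots-Tag: noindex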

Platform templates

Robots.txt Templates for WordPress and Shopify

WordPress robots.txt

The WordPress preset in this robots.txt generator produces a recommended configuration. WordPress sites should block admin pages, search results and duplicate parameter URLs while allowing Google to access theme and plugin CSS and JavaScript files. Load the WordPress preset in the generator above to get a fully configured starting point.

# WordPress recommended robots.txt
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Disallow: /wp-login.php
Disallow: /?s=
Disallow: /search
Disallow: /trackback
Disallow: */feed
Allow: /wp-content/

Sitemap: https://example.com/sitemap.xml

Shopify robots.txt

The Shopify preset in this robots.txt generator creates a recommended configuration. Shopify automatically generates a robots.txt file for your store, but you can customise it from the Shopify admin by editing the robots.txt.liquid theme template. The key rules for Shopify block checkout pages, account pages and internal search results. Load the Shopify preset above to see the recommended configuration.
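As a rough sketch, the kind of rules involved look like this - illustrative only, since Shopify generates and maintains the real file and its exact paths may differ:

# Illustrative Shopify-style rules (Shopify manages the actual file)
User-agent: *
Disallow: /checkout
Disallow: /account
Disallow: /search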

Blocking AI web crawlers

Use this robots.txt template to block Google-Extended, GPTBot and other AI training crawlers. AI companies use web crawlers to collect training data. GPTBot (OpenAI), Claude-Web (Anthropic), CCBot (Common Crawl) and other AI crawlers can be blocked using their User-agent names. The generator's Block AI Crawlers preset adds rules for the major AI training crawlers. Note that not all AI crawlers respect robots.txt.
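A minimal version of what the preset generates might look like this (user-agent tokens as published by each vendor - check the current names before relying on them):

# Block major AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Claude-Web
Disallow: /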

FAQ

Frequently Asked Questions

What is a robots.txt file?
A robots.txt file is placed at the root of a website (e.g. https://example.com/robots.txt) and instructs search engine crawlers which pages to crawl or skip. It follows the Robots Exclusion Protocol. Robots.txt controls crawling, not indexing - to prevent indexing, use a noindex meta tag.
What is the difference between Allow and Disallow?
Disallow blocks a crawler from a path. Allow creates an exception to a Disallow rule. When rules conflict, the more specific (longer) path wins. Use Allow to permit a sub-path that is inside a broader Disallow block, for example allowing /wp-admin/admin-ajax.php while blocking /wp-admin/.
Can robots.txt affect my SEO?
Yes, significantly. Blocking important pages prevents them from being indexed and ranking. Accidentally blocking CSS and JavaScript files stops Google from rendering pages correctly, harming rankings. Never block pages you want indexed. Use robots.txt only for admin pages, duplicate content, staging environments and search result pages.
What does User-agent mean in robots.txt?
User-agent specifies which crawler the following rules apply to. Use * for all crawlers. Use specific names like Googlebot, Bingbot or GPTBot for individual crawlers. You can have multiple User-agent sections with different rules. Googlebot respects the * section only if no specific Googlebot section exists.
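For example, a file with a general section and a Googlebot-specific section might look like this (paths are illustrative):

User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /drafts/
# Googlebot follows only its own section here and ignores the * rules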
Where should the robots.txt file be located?
The robots.txt file must be at the root of your domain: https://yourdomain.com/robots.txt. It cannot be in a subdirectory. Each subdomain needs its own robots.txt file. The file must return a 200 HTTP status code and be publicly accessible without authentication.
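One quick way to verify this - assuming curl is available - is to request the file's headers and confirm a 200 response:

curl -I https://yourdomain.com/robots.txt
# Expect a 200 status line (e.g. HTTP/2 200) with no login or redirect in the way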
What does the Sitemap directive do?
The Sitemap directive tells search engines the location of your XML sitemap file. Add the full URL: Sitemap: https://example.com/sitemap.xml. It is supported by Google, Bing and other major search engines. Adding your sitemap in robots.txt helps crawlers discover all your pages efficiently.
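For instance, a site with several sitemaps can declare each on its own line (the second filename here is hypothetical):

Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/sitemap-products.xml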
What is Crawl-delay and should I use it?
Crawl-delay suggests how many seconds a crawler should wait between requests. Googlebot ignores Crawl-delay entirely - to control Googlebot's crawl rate, use the Crawl Rate settings in Google Search Console instead. Some other crawlers, like Bingbot, do respect it. Use it sparingly - overly aggressive Crawl-delay values can slow search engine discovery of your content.
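A typical use targets a crawler that honours the directive, such as Bingbot - the 10-second value is purely illustrative:

User-agent: Bingbot
Crawl-delay: 10   # Ask Bingbot to wait ~10 seconds between requests; Googlebot ignores this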
What should a WordPress robots.txt include?
WordPress robots.txt should block /wp-admin/ (with an Allow for admin-ajax.php), /wp-login.php, search results (/?s=), trackbacks and feeds. Always allow /wp-content/ so Google can crawl theme CSS, plugin JavaScript and images. Load the WordPress preset in the generator above for a ready-to-use configuration.
Related tools

More free SEO and developer tools