
robots.txt Generator

Create a robots.txt file to control how search engines crawl your website. Configure user-agents, allow/disallow rules, and a sitemap URL.

Preview
User-agent: *
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml

What is robots.txt?

The robots.txt file tells search engine crawlers which pages or directories they can or cannot request from your site. It is placed in the root of your website (e.g., https://yoursite.com/robots.txt) and follows a simple directive-based format.
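For example, a minimal file might look like the sketch below; the /private/ path is a placeholder, and lines starting with # are comments:

# Served from the site root, e.g. https://yoursite.com/robots.txt
User-agent: *
Disallow: /private/
Allow: /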

User-agent and Directives

User-agent identifies the crawler a group of rules applies to (e.g., * for all crawlers, Googlebot for Google). Allow and Disallow specify path prefixes that crawlers may or may not request. Disallow rules are useful for keeping crawlers out of admin panels, API endpoints, and other areas you do not want crawled. Note that Disallow blocks crawling, not indexing: a disallowed URL can still appear in search results if other sites link to it.
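As a sketch (the /admin/, /api/, and /api/public/ paths are placeholders), the file below keeps all crawlers out of the admin panel and API, but lets Googlebot reach a public API subtree. Under Google's longest-match precedence, the more specific Allow overrides the broader Disallow:

# All crawlers: no admin panel, no API
User-agent: *
Disallow: /admin/
Disallow: /api/

# Googlebot: still blocked from /api/, except the public subtree
User-agent: Googlebot
Disallow: /api/
Allow: /api/public/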

Sitemap and Crawl-delay

Including your Sitemap URL helps search engines discover your pages efficiently. Crawl-delay (in seconds) limits request frequency, but most major search engines ignore it; it is mainly useful for smaller bots.
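For example (with a placeholder domain and an illustrative 10-second delay):

User-agent: *
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml

Sitemap is a standalone directive, so it can appear anywhere in the file and is not tied to any User-agent group.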
