robots.txt
API Reference for the `robots.txt` file.
Add or generate a `robots.txt` file that matches the Robots Exclusion Standard in the root of the `app` directory to tell search engine crawlers which URLs they can access on your site.
Static robots.txt
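The simplest option is a static file at `app/robots.txt`, served as-is. A minimal example (the disallowed path and sitemap URL are illustrative):

```txt
User-Agent: *
Allow: /
Disallow: /private/

Sitemap: https://acme.com/sitemap.xml
```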
Generate a Robots file
Add a robots.js or robots.ts file that returns a Robots object.
Good to know: robots.js is a special Route Handler that is cached by default unless it uses a Request-time API or dynamic config option.
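A minimal sketch of `app/robots.ts`; the domain and the `/private/` path are illustrative:

```ts
import type { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: '*',
      allow: '/',
      disallow: '/private/',
    },
    sitemap: 'https://acme.com/sitemap.xml',
  }
}
```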
Output:
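For a single wildcard rule with an allow path, a disallow path, and a sitemap (values illustrative), the generated file would look like:

```txt
User-Agent: *
Allow: /
Disallow: /private/

Sitemap: https://acme.com/sitemap.xml
```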
Customizing specific user agents
You can customize how individual search engine bots crawl your site by passing an array of user agents to the rules property. For example:
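A sketch with per-bot rules; the bot names are real crawler user agents, while the paths and domain are illustrative:

```ts
import type { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      {
        userAgent: 'Googlebot',
        allow: ['/'],
        disallow: '/private/',
      },
      {
        userAgent: ['Applebot', 'Bingbot'],
        disallow: ['/'],
      },
    ],
    sitemap: 'https://acme.com/sitemap.xml',
  }
}
```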
Output:
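For a configuration with one Googlebot rule and a shared rule for Applebot and Bingbot (values illustrative), the output would resemble:

```txt
User-Agent: Googlebot
Allow: /
Disallow: /private/

User-Agent: Applebot
Disallow: /

User-Agent: Bingbot
Disallow: /

Sitemap: https://acme.com/sitemap.xml
```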
Non-standard directives
Some search engines support directives that aren't part of the Robots Exclusion Standard, such as Request-Rate (Seznam) or Clean-param (Yandex). Pass these through the other field on a rule. Keys preserve their casing and array values emit one line per entry, scoped to the rule's User-Agent block.
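A sketch, assuming the `other` field maps directive names to a string or an array of strings as described above; the `Clean-param` values are illustrative:

```ts
import type { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: 'Yandex',
      allow: '/',
      other: {
        'Clean-param': ['ref /products/', 'utm_source /'],
      },
    },
  }
}
```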
Output:
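Assuming a Yandex rule carrying two `Clean-param` entries in `other` (values illustrative), the generated block would resemble:

```txt
User-Agent: Yandex
Allow: /
Clean-param: ref /products/
Clean-param: utm_source /
```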
Good to know: Values in other are passed through verbatim. Next.js does not validate directive names or values, so refer to the target search engine's documentation for the exact syntax.
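The emission behavior described in the note (verbatim values, preserved key casing, one line per array entry) can be sketched in plain TypeScript. `serializeOther` is a hypothetical helper for illustration, not part of Next.js:

```typescript
// Hypothetical helper illustrating how `other` entries map to output lines:
// keys are emitted verbatim (casing preserved), and array values produce
// one line per entry.
function serializeOther(other: Record<string, string | string[]>): string[] {
  const lines: string[] = []
  for (const [key, value] of Object.entries(other)) {
    const values = Array.isArray(value) ? value : [value]
    for (const v of values) {
      // No validation: the directive name and value pass through as-is.
      lines.push(`${key}: ${v}`)
    }
  }
  return lines
}

console.log(
  serializeOther({
    'Clean-param': ['ref /products/', 'utm_source /'],
    'Request-Rate': '1/10',
  }).join('\n')
)
```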
Robots object
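The returned object has roughly the following shape. The `other` field here follows the description in this document; see the official `MetadataRoute.Robots` type for the authoritative definition:

```ts
type Robots = {
  rules:
    | {
        userAgent?: string | string[]
        allow?: string | string[]
        disallow?: string | string[]
        crawlDelay?: number
        other?: Record<string, string | string[]>
      }
    | Array<{
        userAgent: string | string[]
        allow?: string | string[]
        disallow?: string | string[]
        crawlDelay?: number
        other?: Record<string, string | string[]>
      }>
  sitemap?: string | string[]
  host?: string
}
```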
Version History
| Version | Changes |
|---|---|
| `v16.3.0` | Added `other` field for non-standard per-agent directives. |
| `v13.3.0` | `robots` introduced. |