Robots.txt in Next.js 15+ App Router: The 2026 Guide


In the old days of the pages/ directory, we dumped a static robots.txt file in the public/ folder and called it a day.

But in 2026, with Next.js App Router, dynamic deployments, and "Preview Mode" URLs, a static file is a liability. You need a robots.txt that adapts to its environment—blocking indexing on staging, allowing it on production, and updating sitemap links dynamically.

Enter the robots.ts file.

1. Setting up robots.ts (App Router)

In the App Router, Next.js looks for special files in the app/ root. Create app/robots.ts, and Next.js will automatically generate a correctly formatted robots.txt at build time (or at request time).

app/robots.ts
import type { MetadataRoute } from 'next'
 
export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: '*',
      allow: '/',
      disallow: ['/private/', '/admin/'],
    },
    sitemap: 'https://acme.com/sitemap.xml',
  }
}

Result: Visiting /robots.txt will now return the properly formatted text file.
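For reference, the generated output for the config above looks roughly like this (Next.js emits one Disallow line per array entry):

```txt
User-Agent: *
Allow: /
Disallow: /private/
Disallow: /admin/

Sitemap: https://acme.com/sitemap.xml
```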

2. Preventing Staging Indexing (Vital)

This is the #1 reason to use robots.ts over a static file. You can check environment variables to see if you are on production. If not, block everything.

import type { MetadataRoute } from 'next'
 
export default function robots(): MetadataRoute.Robots {
  const isProduction = process.env.VERCEL_ENV === 'production';
  const baseUrl = 'https://www.yourdomain.com';

  return {
    rules: {
      userAgent: '*',
      // Allow only if production, otherwise disallow everything
      allow: isProduction ? '/' : [], 
      disallow: isProduction ? ['/private/', '/api/'] : ['/'],
    },
    sitemap: `${baseUrl}/sitemap.xml`,
  }
}

Why this matters

Without this, Google finds your project-git-branch-preview.vercel.app URL and indexes it. Now two identical sites (your main domain and your preview domain) serve the same content and compete for the same queries, diluting your ranking signals.

3. Dynamic Sitemap Linking

Hardcoding your sitemap URL is fine until you change your domain name or migrate to a new hosting provider.

By using a base URL constant (or an environment variable like NEXT_PUBLIC_BASE_URL), your robots.txt always points to the correct sitemap location, whether you are testing on localhost:3000 or deploying to `www.example.com`.
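A minimal sketch of that pattern. Note that NEXT_PUBLIC_BASE_URL is a variable name you define yourself, not something Next.js sets for you; the MetadataRoute.Robots shape is inlined as a local type so the snippet stands alone (in a real app/robots.ts you would import it from 'next'):

```typescript
// Local stand-in for Next's MetadataRoute.Robots type, inlined so this
// sketch runs without the 'next' package installed.
type Robots = {
  rules: { userAgent: string; allow?: string | string[]; disallow?: string | string[] }
  sitemap?: string
}

// Falls back to localhost for local dev; set NEXT_PUBLIC_BASE_URL per environment.
const baseUrl = process.env.NEXT_PUBLIC_BASE_URL ?? 'http://localhost:3000'

export default function robots(): Robots {
  return {
    rules: { userAgent: '*', allow: '/' },
    sitemap: `${baseUrl}/sitemap.xml`,
  }
}
```

Because the base URL is resolved at the top of the module, changing domains later means updating one environment variable, not hunting through files.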

4. Static Output vs. Vercel Edge

Static Export (`output: export`)

If you are using `output: 'export'` (for hosting on S3 or generic static hosts), `robots.ts` runs once at build time.

The output is a physical `robots.txt` file in the `out/` folder. The environment logic still works, but it freezes at the moment of building.
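Assuming you are on Next.js 15+ (which supports a TypeScript config file), a static export is enabled like this; after `next build`, inspect `out/robots.txt` to see the frozen result:

```typescript
// next.config.ts — with output: 'export', app/robots.ts runs once at build
// time and its result is written to out/robots.txt.
import type { NextConfig } from 'next'

const nextConfig: NextConfig = {
  output: 'export',
}

export default nextConfig
```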

Dynamic / Edge (Vercel)

If deploying to Vercel/Node.js, `robots.ts` can be dynamic. You can technically run logic per request, although for robots.txt, Next.js caches it heavily by default for performance.
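If you genuinely need the environment check to run per request, the usual route segment config escape hatch applies. This is a sketch; verify against your Next.js version, since metadata routes are compiled into Route Handlers and are cached by default:

```typescript
// app/robots.ts (excerpt) — opt out of the default static caching so the
// function body is re-evaluated on every request instead of at build time.
export const dynamic = 'force-dynamic'
```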

5. Common Gotchas in Next.js

Blocking /api/ routes

You generally should block /api/ routes (Disallow: /api/): they return JSON, not HTML, so search engines can't usefully index them. However, don't block any route that serves your sitemap or other crawler-facing files!
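As a sketch, the rules field also accepts an array, which lets you block /api/ for everyone while writing separate rules per crawler. The Robots shape is inlined here so the snippet is self-contained, and the GPTBot rule is just a hypothetical per-crawler policy:

```typescript
// Minimal local stand-in for the relevant slice of MetadataRoute.Robots.
type RobotsRule = {
  userAgent: string | string[]
  allow?: string | string[]
  disallow?: string | string[]
}

export default function robots(): { rules: RobotsRule[]; sitemap: string } {
  return {
    rules: [
      // Everyone: crawl pages, skip the JSON-only API surface.
      { userAgent: '*', allow: '/', disallow: '/api/' },
      // Hypothetical per-crawler override.
      { userAgent: 'GPTBot', disallow: '/' },
    ],
    // app/sitemap.ts serves this at /sitemap.xml, outside the blocked /api/ path.
    sitemap: 'https://acme.com/sitemap.xml',
  }
}
```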

The `public/robots.txt` Conflict

If you have both a public/robots.txt file AND an app/robots.ts file, the two conflict: both resolve to the same /robots.txt path, and Next.js will typically fail the build with a path-conflict error. Delete the static file when you switch to the dynamic TS method.
