Google Crawl Limit Checker

Googlebot stops parsing HTML after 2MB. Check if your page exceeds this limit and identify what's consuming the most space.

We'll analyze the raw HTML size as Googlebot sees it

About This Tool

Googlebot has a technical limit when crawling web pages: it stops parsing HTML after 2MB (2,097,152 bytes). This means if your page's HTML exceeds this limit, parts of your content won't be indexed by Google, potentially affecting your SEO performance.

How It Works

  1. Fetches your page's HTML as Googlebot sees it (before JavaScript execution)
  2. Calculates the exact byte size using UTF-8 encoding
  3. Analyzes what's consuming space: inline scripts, styles, base64 images, SVG graphics, etc.
  4. Identifies the top 10 largest HTML elements with specific CSS selectors
  5. Generates actionable recommendations ranked by priority
  6. Shows external resources (scripts, CSS, images) that DON'T count toward the limit
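
If you'd like to reproduce the core check yourself, here is a minimal Python sketch of the idea (not this tool's actual code): it fetches a page with a Googlebot-style User-Agent, measures the UTF-8 byte size of the raw HTML, and compares it against the 2,097,152-byte limit. The User-Agent string and the helper name are illustrative assumptions.

```python
# Minimal sketch: measure a page's raw HTML size against Googlebot's 2MB parse limit.
# The User-Agent string below and the 2,097,152-byte threshold are assumptions taken
# from this article, not values read from any official API.
import requests

GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
LIMIT_BYTES = 2 * 1024 * 1024  # 2,097,152 bytes

def check_crawl_limit(url: str) -> None:
    # Fetch the raw HTML as served to the crawler (no JavaScript execution).
    response = requests.get(url, headers={"User-Agent": GOOGLEBOT_UA}, timeout=30)
    response.raise_for_status()

    # Size is measured on the UTF-8 encoded bytes, not on the character count.
    size = len(response.text.encode("utf-8"))
    pct = size / LIMIT_BYTES * 100

    print(f"{url}: {size:,} bytes ({pct:.1f}% of the 2MB limit)")
    if size > LIMIT_BYTES:
        print("Warning: HTML beyond the first 2MB may not be parsed by Googlebot.")

if __name__ == "__main__":
    check_crawl_limit("https://example.com/")
```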

What Counts Toward the 2MB Limit?

Counts Toward Limit

  • Inline JavaScript (script tags without src)
  • Inline CSS (style tags and style attributes)
  • Base64-encoded images in HTML/CSS
  • Inline SVG graphics
  • All text content and HTML structure
  • HTML comments and whitespace

Does NOT Count

  • External JavaScript files (script src="...")
  • External CSS files (link rel="stylesheet")
  • External images (img src="...")
  • External fonts, videos, and other media
  • Content loaded via AJAX/fetch
  • Content rendered by JavaScript
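
To make that split concrete, here is a rough Python sketch (an assumed approach using BeautifulSoup, not this tool's exact heuristics) that totals the inline content counting toward the limit and lists the external references that do not.

```python
# Sketch: split a page's HTML into "counts toward the 2MB limit" (inline content)
# versus "does not count" (external resources). Simplified rules for illustration only.
import re
import requests
from bs4 import BeautifulSoup

LIMIT_BYTES = 2 * 1024 * 1024

def analyze(url: str) -> None:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    # Counts toward the limit: inline scripts, styles, SVG, and base64 data URIs.
    inline_js = sum(
        len(s.get_text().encode("utf-8"))
        for s in soup.find_all("script")
        if not s.get("src")
    )
    inline_css = sum(len(s.get_text().encode("utf-8")) for s in soup.find_all("style"))
    inline_css += sum(len(t["style"].encode("utf-8")) for t in soup.find_all(style=True))
    inline_svg = sum(len(str(svg).encode("utf-8")) for svg in soup.find_all("svg"))
    base64_bytes = sum(
        len(m)  # data URIs are ASCII, so characters equal bytes
        for m in re.findall(r"data:[A-Za-z0-9/+.-]+;base64,[A-Za-z0-9+/=]+", html)
    )

    # Does not count: external resources are fetched separately by Googlebot.
    external = [s["src"] for s in soup.find_all("script", src=True)]
    external += [
        l["href"]
        for l in soup.find_all("link")
        if "stylesheet" in (l.get("rel") or []) and l.get("href")
    ]
    external += [
        i["src"] for i in soup.find_all("img", src=True)
        if not i["src"].startswith("data:")
    ]

    total = len(html.encode("utf-8"))
    print(f"Total HTML:         {total:,} bytes ({total / LIMIT_BYTES:.1%} of limit)")
    print(f"Inline JavaScript:  {inline_js:,} bytes")
    print(f"Inline CSS:         {inline_css:,} bytes")
    print(f"Inline SVG:         {inline_svg:,} bytes")
    print(f"Base64 data URIs:   {base64_bytes:,} bytes")
    print(f"External resources (not counted): {len(external)}")
```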

Pro Tip: The best way to stay under the limit is to externalize large inline scripts and styles into separate .js and .css files. This also improves caching and page load performance!
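
To estimate how much a page would gain from that, the short sketch below (same assumed BeautifulSoup approach as above) adds up the full markup of every inline script and style block, which is roughly the number of bytes that would stop counting toward the limit once moved to external files.

```python
# Rough estimate of bytes freed by externalizing inline scripts and styles.
# Same assumed BeautifulSoup-based parsing as above; each externalized block would
# be replaced in the HTML by a short <script src> or <link> reference instead.
from bs4 import BeautifulSoup

def externalization_savings(html: str) -> int:
    soup = BeautifulSoup(html, "html.parser")
    # src=False matches <script> tags with no src attribute, i.e. inline scripts.
    inline_blocks = soup.find_all("script", src=False) + soup.find_all("style")
    return sum(len(str(tag).encode("utf-8")) for tag in inline_blocks)
```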

Frequently Asked Questions