Single-page apps and the 2MB indexing limit
SPAs are uniquely exposed to Google's 2MB rendered-HTML cap. Here's why client-rendered routes are the highest-risk category, how to diagnose it, and what to do.

If your site is a single-page app (a React, Vue, or Svelte app where routing happens entirely in the browser), you're in the highest-risk category for Google's 2MB rendered-HTML cap. Some SPAs are quite lean, so this isn't inherent to the architecture. But the things that bloat HTML on SPAs compound in ways they don't on traditional server-rendered sites, and the failure mode is exactly the kind of bug nobody notices for months: content past 2MB silently dropped from indexing.
Why SPAs are the highest-risk category
A few mechanics conspire to push SPAs over the limit.
Everything renders through the JS bundle. On a traditional site the server sends content as HTML and the browser displays it. On an SPA the server sends a shell plus a JS bundle, and the bundle builds the DOM at runtime. That DOM is what Googlebot reads, including any framework metadata, internal state, and data the bundle needed to construct it.
Hydration payloads grow with the page. A static marketing site might hydrate with 5KB of data. A logged-out e-commerce SPA might need 200KB of category data, product cards, and filters serialized into the HTML for the client to take over rendering. The per-framework specifics are covered in a separate guide.
And client-side data fetching often ends up in the initial HTML anyway. Developers sometimes assume "I fetch this on the client, so it doesn't count." If the SSR or RSC pass already pre-fetched it during render (which most modern frameworks do automatically), it's serialized into the HTML before the client ever runs.
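To make that concrete, here's a minimal sketch of the gotcha, assuming a common SSR pairing (Next.js Pages Router with TanStack Query v5); the endpoint, file name, and field names are illustrative. The useQuery call looks like a client-side fetch, but the server pre-fetch and dehydration put the entire response into the page's serialized props, and therefore into the rendered HTML Googlebot measures.

```tsx
// pages/products.tsx - a hypothetical Next.js (Pages Router) page.
// The useQuery below looks like a client-side fetch, but because the server
// pre-fetches and dehydrates the cache, the entire response is serialized into
// the page props, and therefore into __NEXT_DATA__ in the rendered HTML.
// (Assumes a QueryClientProvider in _app.tsx; endpoint and names are illustrative.)
import {
  QueryClient,
  dehydrate,
  HydrationBoundary,
  useQuery,
  type DehydratedState,
} from "@tanstack/react-query";
import type { GetServerSideProps } from "next";

const fetchProducts = () =>
  fetch("https://api.example.com/products?limit=200").then((r) => r.json());

export const getServerSideProps: GetServerSideProps = async () => {
  const queryClient = new QueryClient();
  await queryClient.prefetchQuery({ queryKey: ["products"], queryFn: fetchProducts });
  // dehydrate() snapshots the whole cache; the snapshot rides along in the HTML.
  return { props: { dehydratedState: dehydrate(queryClient) } };
};

function ProductList() {
  // Reads from the hydrated cache on first render; no network call is needed,
  // because the data already shipped inside the page.
  const { data } = useQuery({ queryKey: ["products"], queryFn: fetchProducts });
  return <ul>{data?.map((p: any) => <li key={p.id}>{p.name}</li>)}</ul>;
}

export default function Page({ dehydratedState }: { dehydratedState: DehydratedState }) {
  return (
    <HydrationBoundary state={dehydratedState}>
      <ProductList />
    </HydrationBoundary>
  );
}
```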
The specific SPA shapes that fail
Some SPA patterns routinely land above 2MB of rendered HTML.
Logged-out dashboards: marketing landing pages built inside a fully-hydrated React shell. The framework ships everything (auth state, feature flags, telemetry config, navigation tree) even though the visitor is anonymous.
E-commerce SPAs with inline catalog data: product listing pages where 50+ items' worth of JSON is included for client-side filtering. View-source shows a tiny shell. Rendered HTML shows 1.5MB+.
Content sites that migrated to JS frameworks: a news site that switched from Rails or WordPress to a Next.js or Nuxt setup. The article content is the same, but now it ships with framework runtime, hydration data, and inline JSON for every section.
B2B docs sites with embedded search indexes: some doc generators inline the full search index into every page. A 200KB Algolia or Lunr index inlined on every page tips the scales fast.
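That last pattern is usually the most mechanical to fix: keep the index out of the HTML entirely and fetch it on demand. A minimal sketch, assuming a prebuilt Lunr index exported to a static JSON file (the path and input id are hypothetical):

```ts
// A sketch, not a drop-in: assumes a prebuilt Lunr index exported to
// /search-index.json and a search input with id="docs-search" (both hypothetical).
// The ~200KB index never appears in the rendered HTML; it's fetched the first
// time someone actually intends to search.
import lunr from "lunr";

let indexPromise: Promise<lunr.Index> | null = null;

function loadIndex(): Promise<lunr.Index> {
  if (!indexPromise) {
    indexPromise = fetch("/search-index.json")
      .then((res) => res.json())
      .then((serialized) => lunr.Index.load(serialized));
  }
  return indexPromise;
}

// Warm the index only when someone focuses the search box.
document
  .querySelector<HTMLInputElement>("#docs-search")
  ?.addEventListener("focus", () => void loadIndex(), { once: true });

export async function search(query: string) {
  const index = await loadIndex();
  return index.search(query); // ranked references into your documents
}
```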
How to diagnose your SPA
Start with a fast pass on your three most important pages: the homepage, your top organic landing page, and your top product or article page. If any of them are over 1.5MB of rendered HTML, audit the largest <script> tags in DevTools. Look for __NEXT_DATA__, __NUXT__, __remixContext, or inline JSON. Those are your hydration payloads.
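For a quick read without leaving the browser, a rough console snippet like this (paste into DevTools on the loaded page) reports the rendered-DOM byte count and the largest inline scripts. It measures the DOM as it exists right now, not what Googlebot captured, so treat it as a first pass.

```ts
// Rough, client-side numbers only: the current DOM, not Google's capture.
const enc = new TextEncoder();
const domKB = enc.encode(document.documentElement.outerHTML).length / 1024;
console.log(`Rendered DOM: ~${domKB.toFixed(0)} KB`);

// The biggest inline <script> blocks are usually the hydration payloads
// (__NEXT_DATA__, __NUXT__, __remixContext, inline JSON blobs).
Array.from(document.querySelectorAll("script:not([src])"))
  .map((s) => ({
    id: s.id || s.getAttribute("type") || "(inline)",
    kb: enc.encode(s.textContent ?? "").length / 1024,
  }))
  .sort((a, b) => b.kb - a.kb)
  .slice(0, 5)
  .forEach((s) => console.log(`${s.id}: ~${s.kb.toFixed(0)} KB`));
```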
If you want the authoritative number, use Google Search Console's URL Inspection (Test Live URL, then View Tested Page) to see the exact HTML Google captured. Compare that against view-source. A gap of more than 5x means you have a serialization-heavy app, and the next feature could push you over. A companion article breaks down what specifically counts toward the limit.
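To automate the raw-vs-rendered comparison locally, a small Puppeteer script along these lines works. It is not Googlebot's renderer, so treat the ratio as directional rather than authoritative; the usage line is illustrative.

```ts
// A local approximation of the view-source vs rendered-HTML gap, using Puppeteer.
// Usage (hypothetical): npx tsx compare-size.ts https://example.com/some-page
import puppeteer from "puppeteer";

async function compare(url: string) {
  const raw = await (await fetch(url)).text(); // roughly what view-source shows

  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  const rendered = await page.content(); // serialized DOM after scripts have run
  await browser.close();

  const rawKB = Buffer.byteLength(raw) / 1024;
  const renderedKB = Buffer.byteLength(rendered) / 1024;
  console.log(
    `raw: ${rawKB.toFixed(0)} KB | rendered: ${renderedKB.toFixed(0)} KB | ` +
      `ratio: ${(renderedKB / rawKB).toFixed(1)}x`
  );
}

void compare(process.argv[2] ?? "https://example.com");
```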
What "partially indexed" looks like

When an SPA goes past 2MB, Google doesn't error. It indexes what it can read and silently drops the rest. The symptoms are subtle:
- Bottom-of-page content stops appearing in search snippets even though it's in the source.
- Schema markup placed in the footer stops triggering rich results.
- Internal links in the footer or post-content area stop being discovered, so the pages they point to quietly fall out of crawl paths. Over time you'll even see fewer "Discovered - currently not indexed" entries in Search Console, because Google never sees the links at all.
- Long-tail queries that should match content past the cutoff just don't.
None of this is obvious from looking at the site. It shows up over weeks as a slow decline in long-tail organic traffic, which then gets blamed on algorithm updates or competitor activity instead of a fixable rendering issue.
What to do about it
The fixes, easiest first.
Audit and trim the hydration payload. This is where roughly 80% of SPA bloat lives. A separate guide covers what to look for in each major framework. Common wins: stop serializing data the client doesn't need, move heavy datasets to client-side fetches, use Server Components for read-only data.
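As a sketch of what "stop serializing data the client doesn't need" looks like in practice (hypothetical Next.js Pages Router page and endpoint; the same idea applies to Nuxt payloads, Remix loaders, and SvelteKit load functions): map the API response down to the fields the cards actually render before it gets serialized.

```tsx
// A sketch of "serialize only what the page renders"; endpoint, field names,
// and the 50-item limit are illustrative.
import type { GetServerSideProps } from "next";

type Card = { id: string; name: string; price: number; thumb: string };

export const getServerSideProps: GetServerSideProps<{ cards: Card[] }> = async () => {
  const full = await fetch("https://api.example.com/products?limit=50").then((r) => r.json());

  // Before: return { props: { products: full } } - every description, variant list,
  // and review blob would land in __NEXT_DATA__ and count toward the rendered HTML.
  // After: keep only the four fields the cards actually display.
  const cards: Card[] = full.map((p: any) => ({
    id: p.id,
    name: p.name,
    price: p.price,
    thumb: p.thumbnailUrl,
  }));
  return { props: { cards } };
};

export default function Listing({ cards }: { cards: Card[] }) {
  return (
    <ul>
      {cards.map((c) => (
        <li key={c.id}>
          <img src={c.thumb} alt="" /> {c.name} (${c.price})
        </li>
      ))}
    </ul>
  );
}
```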
Restructure the page so critical content sits early. Even if you can't get under 2MB, you can make sure the SEO-important content (article body, product details, schema) sits in the first MB. Footer content, related items, comments, and rich widgets can fall past the cutoff with minimal damage.
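A rough ordering sketch (component shape and field names are illustrative): keep the article body and JSON-LD at the top of the DOM and let the droppable extras render last.

```tsx
// SEO-critical markup first, droppable extras last in the DOM.
type Article = { title: string; bodyHtml: string; schema: object };

export default function ArticlePage({ article }: { article: Article }) {
  return (
    <>
      {/* JSON-LD near the top, not in the footer, so it survives any truncation */}
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: JSON.stringify(article.schema) }}
      />
      <main>
        <h1>{article.title}</h1>
        <article dangerouslySetInnerHTML={{ __html: article.bodyHtml }} />
      </main>
      {/* Heavy, droppable sections render last: related items, comments, widgets */}
      <aside id="related">{/* related articles */}</aside>
      <section id="comments">{/* comment thread */}</section>
    </>
  );
}
```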
Consider streaming and partial pre-rendering. Frameworks with streaming SSR (Next.js App Router, Remix defer(), SvelteKit streams) can move portions of the HTML out of the initial response. Content streamed after first byte still gets indexed, but doesn't pile up in the initial byte count the same way.
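A minimal streaming sketch using the Next.js App Router and React Suspense (file path, endpoint, and names are illustrative); Remix's defer() and SvelteKit's streamed promises express the same idea.

```tsx
// app/articles/[id]/page.tsx - the article body goes out in the initial response;
// the heavy recommendations block streams in behind a Suspense boundary instead
// of inflating the initial payload.
import { Suspense } from "react";

async function Recommendations({ articleId }: { articleId: string }) {
  // Heavy or slow data: rendered on the server but streamed after the shell.
  const recs = await fetch(`https://api.example.com/recs/${articleId}`).then((r) => r.json());
  return (
    <ul>
      {recs.map((r: any) => (
        <li key={r.id}>{r.title}</li>
      ))}
    </ul>
  );
}

export default async function Page({ params }: { params: { id: string } }) {
  const article = await fetch(`https://api.example.com/articles/${params.id}`).then((r) =>
    r.json()
  );
  return (
    <main>
      <h1>{article.title}</h1>
      <article dangerouslySetInnerHTML={{ __html: article.bodyHtml }} />
      <Suspense fallback={<p>Loading recommendations…</p>}>
        <Recommendations articleId={params.id} />
      </Suspense>
    </main>
  );
}
```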
Reconsider full hydration. If you can move parts of the page off the client entirely (static export, ISR, Server Components, islands architecture), you reduce both the JS bundle and the hydration payload at the same time.
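Here's what that looks like in React Server Component terms (Next.js App Router; the AddToCart module, endpoint, and field names are hypothetical): the read-only spec table ships as plain HTML with no hydration payload, and only the small interactive island is a client component whose props cross the server/client boundary.

```tsx
// app/products/[id]/page.tsx - a server component page with a single client island.
import AddToCart from "./AddToCart"; // a "use client" component (hypothetical)

export default async function ProductPage({ params }: { params: { id: string } }) {
  const product = await fetch(`https://api.example.com/products/${params.id}`).then((r) =>
    r.json()
  );
  return (
    <main>
      <h1>{product.name}</h1>
      {/* Read-only content: server-rendered, never hydrated, no serialized JSON */}
      <table>
        <tbody>
          {product.specs.map((s: any) => (
            <tr key={s.label}>
              <th>{s.label}</th>
              <td>{s.value}</td>
            </tr>
          ))}
        </tbody>
      </table>
      {/* The only client component on the page; only these props get serialized */}
      <AddToCart productId={product.id} price={product.price} />
    </main>
  );
}
```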
The general byte-trimming playbook, beyond these SPA-specific concerns, is covered in a separate article.
SPAs aren't doomed for SEO. Plenty rank well. But the 2MB cap is closer than most SPA devs realize, and the failure mode is invisible until you go looking for it. If your stack involves a JavaScript framework, measure your rendered HTML. View-source won't tell you the truth, and neither will webpack-bundle-analyzer. The only number that matters is the byte count of the DOM after hydration completes. Run a check and find out where you stand.
Check your page size now
Test any URL against Google's 2MB Googlebot HTML limit in seconds.
Run a check

