20%
Crawlability and indexability
Status codes, canonicals, robots, sitemap coverage, redirect hygiene, crawl depth, and whether important pages are discoverable without guesswork.
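One of these checks can be sketched cheaply: given the text of a robots.txt file and a list of pages that must stay indexable, report any that the file blocks. A minimal sketch using the standard library; the `blocked_urls` helper and the example.com URLs are illustrative, not part of the rubric.

```python
from urllib.robotparser import RobotFileParser

def blocked_urls(robots_txt: str, urls: list[str], agent: str = "*") -> list[str]:
    """Return the subset of `urls` that this robots.txt disallows for `agent`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [u for u in urls if not parser.can_fetch(agent, u)]

robots = """User-agent: *
Disallow: /cart/
Allow: /
"""
# The checkout page is blocked; the money page is not.
print(blocked_urls(robots, [
    "https://example.com/pricing",
    "https://example.com/cart/checkout",
]))  # ['https://example.com/cart/checkout']
```

The same pattern extends to the other signals here: fetch status codes and canonicals per URL, then diff the results against the sitemap so nothing important is discoverable only by guesswork.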
20%
Structured data and entity clarity
JSON-LD quality, schema coverage by page type, entity consistency, breadcrumb structure, and whether the site is explicit about what the business is and what each page owns.
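Being "explicit about what the business is" usually starts with a single Organization block emitted on every page. A minimal sketch; the `organization_jsonld` helper and the Acme values are placeholders, while the `@context`, `@type`, `name`, `url`, and `sameAs` keys are standard schema.org vocabulary.

```python
import json

def organization_jsonld(name: str, url: str, same_as: list[str]) -> str:
    """Emit an Organization JSON-LD block stating plainly what the business is."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "sameAs": same_as,  # profile URLs that anchor the entity elsewhere
    }, indent=2)

print(organization_jsonld(
    "Acme", "https://example.com",
    ["https://www.linkedin.com/company/acme"],
))
```

Per-page-type coverage then layers on top: Product, Article, or Service blocks that each say what that one page owns, without contradicting the Organization block.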
15%
Internal linking and page-role separation
Money pages connected to proof pages, support pages, and topic pages cleanly enough that both humans and machines can follow intent paths.
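Whether an intent path exists is a graph question: model internal links as an adjacency map and measure clicks from the homepage to each page. A minimal BFS sketch; the `link_depths` helper and the sample site map are assumptions for illustration.

```python
from collections import deque

def link_depths(links: dict[str, list[str]], home: str) -> dict[str, int]:
    """Breadth-first search over the internal link graph:
    minimum clicks from `home` to every reachable page."""
    depths = {home: 0}
    queue = deque([home])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

site = {
    "/": ["/pricing", "/blog"],
    "/pricing": ["/case-studies"],   # money page links to its proof page
    "/blog": ["/blog/post-1"],
}
print(link_depths(site, "/"))
```

Pages missing from the result are orphans; money or proof pages sitting at depth 4+ are the "guesswork" cases the crawl-depth check above is meant to catch.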
15%
Answerability and support surfaces
FAQ, comparison, cost, proof, definition, and support content that makes the site easier to quote, summarize, and retrieve from AI-style search flows.
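FAQ content becomes easiest to quote and retrieve when each question-answer pair is also marked up as its own unit. A minimal sketch of FAQPage markup; the `faq_jsonld` helper and the sample pair are illustrative, while `FAQPage`, `Question`, `acceptedAnswer`, and `Answer` are standard schema.org types.

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Wrap question/answer pairs as FAQPage markup, one quotable unit each."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }, indent=2)

print(faq_jsonld([
    ("What does it cost?", "Plans start at the price listed on the pricing page."),
]))
```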
10%
Core Web Vitals and render health
PageSpeed Insights (PSI) and Chrome UX Report (CrUX) data where available, plus render checks that catch pages that technically exist but perform badly or expose weak UX to crawlers.
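Once field values come back from PSI or CrUX, scoring them is a matter of comparing against Google's published thresholds for the three Core Web Vitals. A minimal classifier sketch; the `rate` helper and metric keys are illustrative, while the threshold values (LCP 2.5s/4s, INP 200ms/500ms, CLS 0.1/0.25) are the published good/poor boundaries.

```python
# Published "good" and "poor" boundaries for the three Core Web Vitals.
THRESHOLDS = {
    "lcp_ms": (2500, 4000),  # Largest Contentful Paint
    "inp_ms": (200, 500),    # Interaction to Next Paint
    "cls": (0.1, 0.25),      # Cumulative Layout Shift (unitless)
}

def rate(metric: str, value: float) -> str:
    """Classify a field value as good / needs improvement / poor."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    return "needs improvement" if value <= poor else "poor"

print(rate("lcp_ms", 1800))  # good
print(rate("cls", 0.3))      # poor
```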
10%
Media, social, and reputation signals
Image alts, image sitemap hygiene, Open Graph and card metadata, proof distribution, and whether review or testimonial surfaces reinforce trust instead of fragmenting it.
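The Open Graph part of this check is mechanical: scan a page's `<meta property="og:...">` tags and flag the missing ones. A minimal sketch with the standard-library HTML parser; the `OGScanner` class, `missing_og` helper, and the choice of required properties are assumptions for illustration.

```python
from html.parser import HTMLParser

REQUIRED = {"og:title", "og:description", "og:image"}  # assumed minimum set

class OGScanner(HTMLParser):
    """Collect Open Graph properties from <meta property="og:..."> tags."""
    def __init__(self):
        super().__init__()
        self.found = set()

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("property", "").startswith("og:") and a.get("content"):
                self.found.add(a["property"])

def missing_og(html: str) -> list[str]:
    """Return required Open Graph properties absent from the page."""
    scanner = OGScanner()
    scanner.feed(html)
    return sorted(REQUIRED - scanner.found)

print(missing_og('<meta property="og:title" content="Acme">'))
# ['og:description', 'og:image']
```

Twitter card metadata and image-sitemap entries lend themselves to the same presence-and-consistency scan.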
10%
AI-agent readiness
Machine-readable assets like llms.txt, stable metadata, predictable page patterns, clean support surfaces, and low-contradiction content structure.
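The llms.txt convention expects a markdown file with an H1 title, a short blockquote summary, and H2 sections of annotated links. A minimal structural check sketch; the `check_llms_txt` helper and the Acme sample are illustrative, and the checks reflect the proposed llms.txt format rather than a formal standard.

```python
def check_llms_txt(text: str) -> list[str]:
    """Flag structural gaps in an llms.txt file: H1 title on the first line,
    a blockquote summary, and at least one H2 link section."""
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    problems = []
    if not lines or not lines[0].startswith("# "):
        problems.append("missing H1 title on the first line")
    if not any(line.startswith("> ") for line in lines):
        problems.append("missing blockquote summary")
    if not any(line.startswith("## ") for line in lines):
        problems.append("no H2 link sections")
    return problems

sample = """# Acme
> Acme sells example widgets.

## Docs
- [Pricing](https://example.com/pricing): plans and costs
"""
print(check_llms_txt(sample))  # []
```

Stable metadata and predictable page patterns are harder to lint this directly, but the same idea applies: assert the invariants an agent would rely on, and fail loudly when a page contradicts them.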