First-party case study · technical SEO · AI-search readiness

daviddacruz.dev SEO report: from scattered strength to a cleaner AI-search and technical-SEO system.

A first-party SEO case study on how I tightened daviddacruz.dev for agent search optimization, answer engine visibility, technical SEO hygiene, and clearer service conversion, with site-wide audit results before and after the work.

For this report, the audit score means the share of audited URLs with no blocking issues and no warnings in the site-wide scanner.
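That definition can be sketched as a small script. This is an illustrative reconstruction of the scoring rule stated above, not the scanner's actual code; the field names are assumptions.

```python
# Sketch of the audit-score definition used in this report:
# score = share of audited URLs with no blocking issues and no warnings,
# rounded onto a 0-100 scale. Field names are illustrative assumptions.

def audit_score(pages):
    """pages: list of dicts with per-URL 'blocking' and 'warnings' counts."""
    clean = sum(1 for p in pages if p["blocking"] == 0 and p["warnings"] == 0)
    return round(100 * clean / len(pages))

# The live-site snapshot: 82 URLs, 6 of them carrying metadata warnings.
live = [{"blocking": 0, "warnings": 0}] * 76 + [{"blocking": 0, "warnings": 1}] * 6
print(audit_score(live))  # 93
```

With all 96 preview URLs clean, the same rule yields 100, which matches the two snapshot scores reported below.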

Executive summary

The SEO work now reads like a system instead of a collection of disconnected pages.

The reportable outcome is simple: 0 blocking issues in both full-site audits, the 6 warnings from the older public audit reduced to 0 in the current preview audit, more crawlable pages, stronger internal links, and a clearer route from AI-search discovery into the services that sell.

100/100

preview audit score across 96 audited URLs

93/100

live-site audit score across 82 audited URLs

0

blocking issues in both full-site audits

10+

SEO and AEO surfaces added or strengthened

Report context

This page is not a representative example. It is the first-party record of what I changed on daviddacruz.dev to make the site structurally stronger for technical SEO, AI-mediated search, and service discovery.

The important point is not that one page was rewritten. The important point is that the site was tightened as a system: topic ownership, support pages, crawl resources, schema, metadata, internal links, and conversion paths all had to line up.

That is the type of work this service is meant to showcase. Not SEO theater. Not isolated recommendations. A cleaner public surface with measurable audit improvement and a stronger path from search visibility into commercial pages.

Audit snapshots

April 25, 2026

Current preview audit

100/100

The local preview audit covered 96 URLs and came back with 0 blocking issues and 0 warnings. That is the cleanest view of the current shipped structure.

  • 96 sitemap URLs audited
  • 0 blocking issues
  • 0 warnings
  • Average internal links: 29
  • Average word count: 1225

April 22, 2026

Public live-site audit

93/100

The public-domain audit of the live site covered 82 URLs. It also had 0 blocking issues, but still showed 6 metadata warnings at that point. Those warnings were one of the cleanup targets.

  • 82 sitemap URLs audited
  • 0 blocking issues
  • 6 warnings
  • Average internal links: 29
  • Average word count: 1405

Before vs current state

  • Audited URLs: 82 live URLs → 96 preview URLs. The public surface expanded while staying crawlable and structured.
  • Blocking issues: 0 → 0. The site stayed technically clean while the cluster grew.
  • Warnings: 6 → 0. Metadata and page-level hygiene improved from the public audit state.
  • Homepage internal links: 58 → 87. The homepage now routes more aggressively into priority pages and proof surfaces.
  • Services page internal links: 51 → 58. The service layer is more connected to audit, comparison, cost, and case-study pages.
  • Homepage meta description: 175 chars → 137 chars. One of the warning-level metadata issues was corrected in the newer state.
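A check like the one behind that warning can be approximated with the standard library. This is a hedged sketch: the 160-character threshold is a common guideline, and the scanner's real limit and extraction logic may differ.

```python
# Hypothetical meta-description length check, assuming a ~160-character
# warning threshold; the actual scanner's rules may differ.
import re

MAX_DESC = 160  # assumed threshold, not confirmed by the report

def description_length(html):
    """Return the length of the first <meta name="description"> content."""
    m = re.search(r'<meta\s+name="description"\s+content="([^"]*)"', html)
    return len(m.group(1)) if m else 0

before = '<meta name="description" content="' + "x" * 175 + '">'
after = '<meta name="description" content="' + "x" * 137 + '">'
print(description_length(before) > MAX_DESC)  # True: flagged
print(description_length(after) > MAX_DESC)   # False: clean
```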

What shipped

1. Made the site own the topic explicitly

The site now has a dedicated parent page for agent search optimization instead of letting the topic exist as scattered mentions across blog posts and services. That created a cleaner commercial center of gravity.

2. Built support pages around the buying journey

I added and strengthened supporting pages for SaaS intent, audit intent, cost, consultant-vs-agency comparison, and related explanatory surfaces. The goal was not raw page count. It was intent separation and retrieval depth.

3. Tightened machine-readable public assets

The site now presents a more deliberate crawler surface through llms.txt, llms-full.txt, sitemap.xml, news-sitemap.xml, robots.txt, feed.xml, and location.kml, alongside stronger page-type schema.
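By convention, an llms.txt file is a Markdown document with a title, a short summary, and curated link sections for AI crawlers. The sketch below is illustrative only; the paths and descriptions are placeholders, not the site's actual file.

```
# daviddacruz.dev

> Placeholder summary line: who the site serves and what it offers.

## Services
- [Agent search optimization](/...): parent service page for the topic cluster

## Proof
- [SEO report](/...): first-party before-and-after audit case study
```

The other assets listed above (sitemap.xml, robots.txt, feed.xml, location.kml) each follow their own established formats; the point is that all of them are deliberately maintained rather than auto-generated leftovers.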

4. Improved metadata and warning cleanup

The audit work was not abstract. The public audit still showed over-length metadata on key pages, and cleaning that up was part of the path from 6 warnings on the live audit to 0 warnings on the newer preview audit.

5. Increased internal-linking density where it matters

Older content now points more directly into the newer AI-search cluster, and top navigation surfaces route more clearly into the audit and proof pages. This matters because service pages perform better when support content reinforces them.
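The per-page internal-link counts cited in this report can be measured with nothing more than the standard library. This is a minimal sketch of one way to do it, not the scanner's actual method; relative links are assumed to count as internal.

```python
# Count internal links on a page: anchors whose href is relative or points
# at the site's own host. A sketch, not the report's actual audit tooling.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCounter(HTMLParser):
    def __init__(self, host):
        super().__init__()
        self.host = host
        self.internal = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        netloc = urlparse(href).netloc
        if netloc in ("", self.host):  # relative links count as internal
            self.internal += 1

counter = LinkCounter("daviddacruz.dev")
counter.feed('<a href="/services">Services</a> <a href="https://example.com">x</a>')
print(counter.internal)  # 1
```

Run across every audited URL, a counter like this yields the "average internal links" figures shown in the snapshots above.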

6. Turned the work into proof assets

The implementation now exists as a tutorial, a first-party case study, service CTAs, and searchable support pages. That makes the SEO work itself part of the marketing system instead of leaving it hidden in the codebase.

Public proof assets

  • /llms.txt (machine-readable asset): explicit public context for AI crawlers and agent systems.

Why this showcases the service

The case study shows that I do both the strategic layer and the implementation layer. This is not an audit-only service.

The report uses site-wide evidence instead of vague claims. That is a better sales asset because the proof is visible and falsifiable.

The work spans technical SEO, content architecture, AI-search readiness, internal links, schema, and conversion paths. That breadth is the offer.

The output is reusable: the same operating pattern can be applied to a SaaS site, service business, education business, content site, or founder-led consultancy.

Next moves

  • Keep shipping first-party proof pages so the site earns more retrieval depth from its own public work.
  • Watch which AEO pages become the real entry points and reinforce them with stronger surrounding links.
  • Add more exact-match support carefully without creating near-duplicate pages that cannibalize the cluster.
  • Extend the same report style to other service lines where the implementation can be shown publicly.

FAQ

Is the 100/100 score from Google?

No. It is the site-wide technical audit score used in this report: the share of audited URLs with no blocking issues and no warnings in the scanner. The point is not branding the score. The point is showing the actual site state clearly.

What changed most between the older live audit and the newer preview audit?

The main visible change was not one dramatic fix. It was a broader cleanup pass: stronger metadata hygiene, more support pages, clearer internal links, and a more deliberate AI-search cluster around the core service path.

Is this only about AI search terms?

No. The site was strengthened across standard SEO fundamentals too: crawl resources, page roles, metadata, schema, internal links, and conversion architecture. The AI-search angle sits on top of those foundations rather than replacing them.

Why turn this into a public case study?

Because the best showcase for this service is a site that demonstrates the work in the open. A visible before-and-after audit narrative is stronger than claiming expertise without proof.

Work with me

If you want this kind of SEO report, I can do the audit and the implementation.

The audit offer is for teams that need a real picture of crawl health, structure, schema, answer surfaces, and AI-search readiness. If you want the checklist version first, the companion tutorial breaks down the exact implementation pattern used on this site.