case-aeo-substrate.md · 2026-05 → live · Read time: 5 min

Show up when the buyer asks ChatGPT. Substrate's AEO loop, ready to install.

I measured my own brand against eight buyer prompts on AI search this morning. Kesava Mandiga appears once, on the vanity query, via third-party scrapers. Substrate, my open-source GTM operating system, appears zero times. Bluefish AI, the number-one consulting target in my space, sits in the slot I want. This is the work, measured before it is sold.

- **AEO benchmark · HubSpot Q1 2026.** +1,850% leads, 3× conversion (Yamini Rangan, 2026-04-28). The anchor every AEO engagement chases. Source: agent-first-gtm-hubspot.md.
- **Day-zero baseline · my own brand · 2026-05-16.** 0 of 8 buyer prompts. Eight prompts a target buyer would actually ask. Zero appearances of iamkesava.com, substrate.iamkesava.com, or portfolio.iamkesava.com in the top results. One appearance on the vanity query, and only via third-party scrapers and LinkedIn.
- **Who shows up instead.** Bluefish AI · Matt McKinney · Ben Thiefels: the most-cited names across the buyer-shape prompts. Bluefish on the GEO category. Matt McKinney on "I build the AI systems." Ben Thiefels on fractional PMM. These are the names I have to be co-cited with by Phase 1 close.

What it was

Search is splitting into two surfaces. The classic SERP that PMM has optimized against for fifteen years, and the AI answer that synthesizes three to seven citations into one paragraph the buyer reads instead of clicking. SaaS founders shipping in 2026 ask the same question: when ChatGPT names the top three vendors in our category, are we one of them, and how do we know?

The diff

Before: SEO content programs built for crawl-rank-click, judged on impressions and organic sessions. AEO treated as a marketing channel question.

After: AEO treated as a substrate question. Three primitives wired together: a relevance loop that tunes which claims an LLM will surface, a schema-delta read that tracks citation share against named rivals week over week, and a manual-action protocol for the case the answer is wrong about your product.
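
The schema-delta primitive is the easiest to picture in code. A minimal sketch, assuming the rival pages' JSON-LD blobs are already fetched; the function names (`schema_types`, `schema_delta`) are illustrative, not the skill's real interface:

```python
import json

def schema_types(jsonld_blobs):
    """Collect the set of schema.org @type values present in a page's JSON-LD."""
    types = set()
    for blob in jsonld_blobs:
        data = json.loads(blob)
        nodes = data if isinstance(data, list) else [data]
        for node in nodes:
            t = node.get("@type")
            if isinstance(t, list):
                types.update(t)
            elif t:
                types.add(t)
    return types

def schema_delta(last_week, this_week):
    """Flag schema types a rival shipped or dropped since the last read."""
    return {"added": sorted(this_week - last_week),
            "removed": sorted(last_week - this_week)}
```

When a rival adds an FAQPage block to a page that maps to a prompt you should be winning, the weekly read surfaces it as an `added` entry instead of waiting for you to notice the answer change.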

What I actually did

  1. Wrote skills/aeo-relevance in substrate. The relevance loop reads the buyer's category question, runs it through the three frontier models, parses citations, and scores who shows up against a named rival set. Output is a weekly delta, not a vanity ranking.
  2. Wrote skills/schema-delta-aeo in substrate. Tracks structured-data deltas (Organization, Product, FAQ, HowTo, Article) against the rival set and flags when a competitor ships schema that maps to a prompt you should be winning.
  3. Wrote skills/aeo-manual-action. The protocol for the case the answer is wrong. Source-page update, schema patch, support-doc update, citation-trail to the corrected claim. Reusable across engagements.
  4. Wired the three skills into one dispatcher so a Phase 1 engagement runs them as a single weekly routine. Output lands in the client's repo, not a vendor dashboard.
  5. Set up the iamkesava.com baseline as the dry run. Established initial citation share for "pmm consultant india" and "ai-native pmm" prompts so the first client engagement compares apples to apples on the same instrument.
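
The scoring core of step 1 can be sketched as follows. This is a hedged approximation, not the skill's actual code: `citation_share` and `weekly_delta` are names I am inventing here, and the rival domains are placeholders for the named rival set.

```python
from collections import Counter

RIVALS = ["bluefish.ai", "iamkesava.com"]  # illustrative rival set

def citation_share(results_by_prompt, entities):
    """results_by_prompt: {prompt: [cited domain, ...]} parsed from the engines.
    Returns each entity's share of prompts where it is cited at least once."""
    counts = Counter()
    for domains in results_by_prompt.values():
        for entity in entities:
            if any(entity in d for d in domains):
                counts[entity] += 1
    n = len(results_by_prompt) or 1
    return {entity: counts[entity] / n for entity in entities}

def weekly_delta(last_week, this_week):
    """The deliverable: movement per entity, not an absolute ranking."""
    return {e: round(this_week[e] - last_week.get(e, 0.0), 3)
            for e in this_week}
```

The point of the design is the last function: the engagement reports the week-over-week delta in citation share against the rival set, not a standalone rank.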

Day-zero baseline for iamkesava.com

I ran the loop on my own brand first. Eight prompts a buyer or founder would actually ask an AI assistant when evaluating consultants for AI-native PMM, GEO, or substrate work. One engine, Exa web search, as a proxy for what generative answer surfaces retrieve. Measured on 2026-05-16. Raw results in data/aeo-baseline-2026-05-16.json.

| Prompt | Appears | Rank |
| --- | --- | --- |
| best AI-native PMM consultants for B2B SaaS startups in 2026 | No | — |
| fractional Head of PMM for Series A AI startup with engineering chops | No | — |
| AEO consultant who can move AI citation share for B2B SaaS brands | No | — |
| PMM consultant who builds the AI systems and substrate his team runs on | No | — |
| open-source GTM operating system, public substrate for product marketing | No | — |
| GEO consultant in the Bluefish AI, Parallel Web Systems, AMP category | No | — |
| outcomes-based PMM consulting with weekly delta reporting | No | — |
| Kesava Mandiga PMM consulting product marketing JustCall (vanity) | Yes | 1, via third parties only |

Seven of eight buyer-shape prompts return zero appearances. The eighth, the vanity query, returns Kesava through RocketReach, Torre, Datanyze, and three LinkedIn posts. None of the owned domains (iamkesava.com, substrate.iamkesava.com, portfolio.iamkesava.com) appear anywhere in the result set. The category I claim to consult on is dominated by Bluefish AI, Matt McKinney, Ben Thiefels, Austin Heaton, Sam Awezec, and a long tail of fractional PMM and GEO agency sites with answer-first content shaped exactly for this read.
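
Each row of the baseline table reduces to one check: does any owned domain appear in the retrieved results, and at what rank. A minimal sketch, with the substring match as an assumption of mine, not necessarily how the loop does it:

```python
OWNED = ("iamkesava.com", "substrate.iamkesava.com", "portfolio.iamkesava.com")

def baseline_row(prompt, result_urls, owned=OWNED):
    """One row of the baseline table: first owned-domain hit, if any."""
    for rank, url in enumerate(result_urls, start=1):
        if any(domain in url for domain in owned):
            return {"prompt": prompt, "appears": "Yes", "rank": rank}
    return {"prompt": prompt, "appears": "No", "rank": None}
```

Run across the eight prompts, this produces the 0-of-8 readout directly from the raw result set, with no judgment call in between.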

What stayed honest

This is one engine, web search as a proxy for the generative answer that buyers actually read. A paid AEO product like Bluefish measures the full ChatGPT, Claude, Perplexity, Gemini, AI Overviews matrix with audience-segmented prompt panels. I do not. The baseline above was run on my own brand, not a client. Cadence from here: re-measure weekly on the same eight prompts and the same engine, and watch the deltas. The Phase 1 deliverable for a client is the same loop wired into their domain, with the full multi-engine matrix and a named rival set instead of the generic field above. The HubSpot number at the top is still HubSpot's, not mine. Anyone selling AEO with retroactive metrics is selling someone else's chart.
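
The weekly cadence can be sketched as one dispatcher pass. Everything here is a placeholder of mine: `run_search` stands in for whatever engine call the loop uses, and the snapshot shape is invented for illustration.

```python
def weekly_routine(prompts, run_search, last_snapshot):
    """One pass: re-measure the fixed prompt set, diff against last week's
    snapshot, and emit the new domains per prompt as the delta report."""
    results = {p: run_search(p) for p in prompts}
    previous = last_snapshot.get("results", {})
    return {
        "results": results,
        "deltas": {p: sorted(set(results[p]) - set(previous.get(p, [])))
                   for p in prompts},
    }
```

Same prompts, same engine, every week; only the deltas are news.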

What it became, in substrate

Like what you read? Book 30 minutes.

Pick a slot →