Show up when the buyer asks ChatGPT. Substrate's AEO loop, ready to install.
I measured my own brand against eight buyer prompts on AI search this morning. Kesava Mandiga appears once, on the vanity query, via third-party scrapers. Substrate, my open-source GTM operating system, appears zero times. Bluefish AI, the number-one consulting target in my space, sits in the slot I want. This is the work, measured before it is sold.
What it was
Search is splitting into two surfaces. The classic SERP that PMM has optimized against for fifteen years, and the AI answer that synthesizes three to seven citations into one paragraph the buyer reads instead of clicking. SaaS founders shipping in 2026 ask the same question: when ChatGPT names the top three vendors in our category, are we one of them, and how do we know?
The diff
Before: SEO content programs built for crawl-rank-click, judged on impressions and organic sessions. AEO treated as a marketing channel question.
After: AEO treated as a substrate question. Three primitives wired together: a relevance loop that tunes which claims an LLM will surface, a schema-delta read that tracks citation share against named rivals week over week, and a manual-action protocol for the case the answer is wrong about your product.
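The citation-share read at the heart of the relevance loop can be sketched in a few lines. This is a minimal illustration, not the actual substrate code: the function name, data shapes, and matching rule are all assumptions, boiling the primitive down to counting which domains an AI answer cites.

```python
from collections import Counter

def citation_share(citations: list[str], brands: dict[str, str]) -> dict[str, float]:
    """Score what fraction of one answer's citations point at each brand.

    citations: domains cited in a single AI answer, e.g. ["bluefish.ai", "g2.com"]
    brands: brand name -> owned-domain substring to match against.
    """
    counts = Counter()
    for domain in citations:
        for brand, owned in brands.items():
            if owned in domain:
                counts[brand] += 1
    total = len(citations) or 1
    return {brand: counts[brand] / total for brand in brands}

share = citation_share(
    ["bluefish.ai", "g2.com", "bluefish.ai", "iamkesava.com"],
    {"Bluefish AI": "bluefish.ai", "Kesava": "iamkesava.com"},
)
# Bluefish AI is cited in 2 of 4 sources, Kesava in 1 of 4.
```

Run weekly on the same prompts, the week-over-week change in these fractions is the delta the engagement reports on, not the raw rank.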
What I actually did
- Wrote skills/aeo-relevance in substrate. The relevance loop reads the buyer's category question, runs it through the three frontier models, parses citations, and scores who shows up against a named rival set. Output is a weekly delta, not a vanity ranking.
- Wrote skills/schema-delta-aeo in substrate. Tracks structured-data deltas (Organization, Product, FAQ, HowTo, Article) against the rival set and flags when a competitor ships schema that maps to a prompt you should be winning.
- Wrote skills/aeo-manual-action. The protocol for the case the answer is wrong. Source-page update, schema patch, support-doc update, citation trail to the corrected claim. Reusable across engagements.
- Wired the three skills into one dispatcher so a Phase 1 engagement runs them as a single weekly routine. Output lands in the client's repo, not a vendor dashboard.
- Set up the iamkesava.com baseline as the dry run. Established initial citation share for "pmm consultant india" and "ai-native pmm" prompts so the first client engagement compares apples to apples on the same instrument.
Day-zero baseline for iamkesava.com
I ran the loop on my own brand first. Eight prompts a buyer or founder would actually ask an AI assistant when evaluating consultants for AI-native PMM, GEO, or substrate work. One engine, Exa web search, as a proxy for what generative answer surfaces retrieve. Measured on 2026-05-16. Raw results in data/aeo-baseline-2026-05-16.json.
| Prompt | Appears | Rank |
|---|---|---|
| best AI-native PMM consultants for B2B SaaS startups in 2026 | No | — |
| fractional Head of PMM for Series A AI startup with engineering chops | No | — |
| AEO consultant who can move AI citation share for B2B SaaS brands | No | — |
| PMM consultant who builds the AI systems and substrate his team runs on | No | — |
| open-source GTM operating system, public substrate for product marketing | No | — |
| GEO consultant in the Bluefish AI, Parallel Web Systems, AMP category | No | — |
| outcomes-based PMM consulting with weekly delta reporting | No | — |
| Kesava Mandiga PMM consulting product marketing JustCall (vanity) | Yes | 1, via third parties only |
Seven of eight buyer-shape prompts return zero appearances. The eighth, the vanity query, returns Kesava through RocketReach, Torre, Datanyze, and three LinkedIn posts. None of the owned domains (iamkesava.com, substrate.iamkesava.com, portfolio.iamkesava.com) appear anywhere in the result set. The category I claim to consult on is dominated by Bluefish AI, Matt McKinney, Ben Thiefels, Austin Heaton, Sam Awezec, and a long tail of fractional PMM and GEO agency sites with answer-first content shaped exactly for this read.
What stayed honest
This is one engine, web search as a proxy for the generative answer that buyers actually read. A paid AEO product like Bluefish measures the full ChatGPT, Claude, Perplexity, Gemini, AI Overviews matrix with audience-segmented prompt panels. I do not. The baseline above was run on my own brand, not a client. Cadence from here: re-measure weekly on the same eight prompts and the same engine, and watch the deltas. The Phase 1 deliverable for a client is the same loop wired into their domain, with the full multi-engine matrix and a named rival set instead of the generic field above. The HubSpot number at the top is still HubSpot's, not mine. Anyone selling AEO with retroactive metrics is selling someone else's chart.
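The weekly cadence is a diff between two runs on the same eight prompts and the same engine. A sketch of that delta read, with the per-prompt appearance map as an assumed intermediate shape:

```python
def citation_delta(prev: dict[str, bool], curr: dict[str, bool]) -> dict[str, str]:
    """Compare per-prompt appearance between two weekly runs; report only changes."""
    deltas = {}
    for prompt in curr:
        before, after = prev.get(prompt, False), curr[prompt]
        if before != after:
            deltas[prompt] = "gained" if after else "lost"
    return deltas

week1 = {"best AI-native PMM consultants": False, "vanity query": True}
week2 = {"best AI-native PMM consultants": True, "vanity query": True}
delta = citation_delta(week1, week2)
# One prompt gained, nothing lost: {"best AI-native PMM consultants": "gained"}
```

Reporting only the changes is the point: the deliverable is the movement, not a static ranking screenshot.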
What it became, in substrate
- skills/aeo-relevance: weekly citation-share read across frontier models against a named rival set.
- skills/schema-delta-aeo: structured-data delta tracker; flags rival schema you should match.
- skills/aeo-manual-action: correction protocol for when the LLM answer is wrong about your product.
- patterns/aeo-as-baseline-then-delta: the engagement shape. Measure first, install loop second, ship delta third.
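The schema-delta read, stripped to essentials, is a set difference over the structured-data types each domain ships. A sketch, assuming the types have already been extracted from each site's JSON-LD; note the canonical schema.org type for FAQ markup is FAQPage.

```python
# Structured-data types the skill tracks (schema.org vocabulary).
TRACKED = {"Organization", "Product", "FAQPage", "HowTo", "Article"}

def schema_gaps(ours: set[str], rival: set[str]) -> set[str]:
    """Tracked schema types a rival ships that we do not."""
    return (rival - ours) & TRACKED

gaps = schema_gaps(
    ours={"Organization", "Article"},
    rival={"Organization", "FAQPage", "Product"},
)
# Rival ships FAQPage and Product schema that we lack: flag both.
```

Each flagged gap maps back to a buyer prompt: if a rival's FAQPage schema answers a question you should be winning, that gap is the week's action item.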
Like what you read? Book 30 minutes.
Pick a slot →