Claude Code as the PMM operating system. From MessagingGPT to a fact-check pipeline.
Most PMM functions buy AI tools. This one built them, in the open, on top of Claude Code. Every tool was scoped to a bottleneck a monthly report had already named. The headline run: forty-two claims checked on a live PPC page, with one math error caught before the buyer ever saw it.
AI-native PMM stack the buyer can audit
This is the case that maps to your AMP/GEO or agent-infra company. Substrate runs the pipeline. Artemis runs the fact-check. Six slash commands run the operating cadence. The fact-check that caught a math error before a live PPC page is the artifact a Bluefish or a Conductor founder is buying: not "I will use AI", but "I will install gates that make AI claims fact-traceable for your team after I leave."
What it was
A B2B communications platform where PMM owned the messaging, the landing pages, the battlecards, and the help center. The bottleneck was never headcount. The bottleneck was throughput against a knowledge surface that was always slightly out of date.
The diff
Before, Jul 2025: ChatGPT custom GPTs for one-off help. Drafts hand-checked. Numerical claims on landing pages shipped without verification. Sales-call review eating three to four days per twenty-call batch.
After, Mar 2026: Artemis pipeline running claim → source → fact-check → render. One canonical Product-Knowledge.md file. An Allowed-Claims.md registry that grows with every verified claim. Six slash commands. ResearchOS and DiscoveryClaude in build against named programmes. 730 of 730 help articles read and gap-mapped.
What I actually did
- Started with custom GPTs scoped to single bottlenecks. Jul 2025: a deal-brief GPT and n8n + Mastermind for CI extraction from calls. Aug 2025: a second custom GPT for sales-call CI that fed the State of CI report the same month.
- Shipped MessagingGPT in Nov 2025. ICP, feature grid, value-based messaging, social proof, all integrated. Fine-tuned for AIVA Custom Actions in Dec 2025.
- Moved from one-offs to a pipeline inside Claude Code. Artemis: every output traceable back to a sourced claim. Every claim labeled direct, indirect, or contextual. Same pipeline this portfolio was written inside.
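The claim-to-source traceability described above can be sketched as a small data model. This is a hypothetical illustration, not Artemis's actual internals; the class and function names are invented for the sketch, and only the three labels (direct, indirect, contextual) come from the case itself.

```python
from dataclasses import dataclass
from enum import Enum

class ClaimLabel(Enum):
    # Every claim in the pipeline carries exactly one label.
    DIRECT = "direct"          # stated verbatim in a source document
    INDIRECT = "indirect"      # derived from a source with one inference step
    CONTEXTUAL = "contextual"  # true only in a named context (segment, period)

@dataclass
class Claim:
    text: str          # the sentence that will appear in the output
    label: ClaimLabel  # direct / indirect / contextual
    source: str        # path or URL of the document that backs it

def trace(claims: list[Claim]) -> dict[str, list[str]]:
    """Group claim texts by backing source, so any line in a rendered
    page can be walked back to the document that supports it."""
    by_source: dict[str, list[str]] = {}
    for c in claims:
        by_source.setdefault(c.source, []).append(c.text)
    return by_source
```

The point of the model is the invariant, not the code: nothing renders without a `source` field, so "every output traceable back to a sourced claim" is enforced by the data shape rather than by reviewer discipline.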
- Made Product-Knowledge.md the single source of truth. Six commands read from it: /render-pages, /detect-drift, /generate-content, /read, /discover, /digest.
- Ran the first production fact-check on a live PPC Sales Dialer page. 42 claims verified: 5 BLOCK, 12 FLAG, 4 FIX, and 1 MATH error caught before the page reached a buyer. Three new case studies added to Allowed-Claims.md the same month.
- Shipped through justcall-staging.vercel.app, a personal Vercel project, to bypass the production handoff gate. Mar 2026 LP batch (9 live, 6 in design, 4 integration LPs) and Homepage Refresh 6 went through it.
What stayed honest
None of these tools were sanctioned initiatives. They were built by the operator to remove named bottlenecks, then absorbed into the monthly rhythm once they worked. KB coverage sits at 16 percent. The 70 percent call-review reduction is projected, not measured. The 6,724 win/loss plus 8,773 retention records were ingested but the 2026 ICP work that depends on them was still in flight at month-close. Every number cites a monthly PMM report. The flip switch restores tool names on resignation.
What it became, in substrate
- skills/fact-check-pipeline: claim extraction, BLOCK/FLAG/FIX/MATH labels, Allowed-Claims registry.
- skills/product-knowledge-single-source: one canonical file every downstream command reads.
- routines/operator-owned-staging: personal Vercel project to bypass a slow handoff gate.
- patterns/tools-as-bottleneck-removers: every tool scoped to a bottleneck named in last month's report.
Like what you read? Book 30 minutes.
Pick a slot →