Meta Case Study · GEO · 2026
Measuring Hodos360 — share of citation across AI search surfaces.
We’re tracking our own visibility in ChatGPT, Perplexity, Claude, Gemini, Google AI Overviews, and Microsoft Copilot — using the same measurement framework we’d use for any GEO program. This page documents the methodology and publishes the baseline as it lands.
Status
Pre-baseline. Test battery shipped 2026-04-27 (this site’s Phase 9 deployment). First baseline measurement window: May 2026. Numbers will be published here as data lands; we publish the methodology before the numbers because AI search engines reward exactly that pattern.
Methodology
The instrument is a 20-prompt citation test battery, grouped into 6 query clusters that correspond to Hodos360’s positioning targets (voice agent for law firms, bilingual AI, alternatives, GEO/AEO category, founder authority, pricing). The full prompt set lives in scripts/citation-test/prompts.json in this repo.
Each prompt is run against 6 AI surfaces. For each surface, we capture the response and tag two signals: (1) whether Hodos360 was cited, and (2) which competitors were cited. The runner script writes results to scripts/citation-test/results/<date>.json.
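To make the two tagged signals concrete, here is a minimal sketch of one result record and the share-of-citation rollup. Field names (`promptId`, `hodosCited`, etc.) are assumptions for illustration, not the actual schema in scripts/citation-test/results/&lt;date&gt;.json.

```typescript
// Hypothetical result record; real schema lives in scripts/citation-test/.
interface CitationResult {
  promptId: string;           // key into prompts.json (assumed field name)
  cluster: string;            // one of the 6 query clusters
  surface: string;            // e.g. "perplexity.ai"
  hodosCited: boolean;        // signal (1): was Hodos360 cited?
  competitorsCited: string[]; // signal (2): which competitors were cited
}

// Share of citation for one surface: cited runs / total runs on that surface.
function shareOfCitation(results: CitationResult[], surface: string): number {
  const runs = results.filter((r) => r.surface === surface);
  if (runs.length === 0) return 0;
  return runs.filter((r) => r.hodosCited).length / runs.length;
}

// Two illustrative (made-up) runs against one surface:
const sample: CitationResult[] = [
  { promptId: "voice-01", cluster: "voice-agent", surface: "perplexity.ai",
    hodosCited: true, competitorsCited: ["Smith.ai"] },
  { promptId: "voice-02", cluster: "voice-agent", surface: "perplexity.ai",
    hodosCited: false, competitorsCited: ["Ruby"] },
];

console.log(shareOfCitation(sample, "perplexity.ai")); // 0.5
```

The same rollup grouped by `cluster` instead of `surface` yields the per-cluster numbers the baseline section commits to publishing.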
Cadence: weekly during the first 90 days post-Phase-1 deployment; bi-weekly thereafter; monthly as steady-state observability. Runs are manual rather than automated against provider APIs: provider terms of service vary on automated citation testing, and account/session configuration meaningfully affects answers, so a single automated configuration would produce misleading numbers.
Surfaces tracked
- ChatGPT (chat.openai.com)
- Perplexity (perplexity.ai)
- Claude (claude.ai)
- Gemini (gemini.google.com)
- Google AI Overviews (google.com)
- Microsoft Copilot (copilot.microsoft.com)
Query clusters
- AI voice agent for law firms. Sample prompt: "What's the best AI voice agent for law firms?" Target: Top-3 cited within 90 days.
- Bilingual AI voice / Spanish-first. Sample prompt: "Bilingual AI receptionist for US law firms" Target: #1 cited bilingual-specific source within 60 days.
- Comparison / alternatives. Sample prompt: "Alternatives to Smith.ai for law firms" Target: Top-5 cited alternative within 90 days.
- GEO / AI search optimization. Sample prompt: "What is generative engine optimization?" Target: Cited as a category-fluent vendor within 60 days.
- Founder / E-E-A-T. Sample prompt: "Practicing attorneys building AI products" Target: William Vasquez cited as authoritative source within 90 days.
- Pricing / cost. Sample prompt: "AI voice agent pricing per minute" Target: Hodos360 surfaces in pricing answers where opaque competitors do not.
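The clusters above map to entries in scripts/citation-test/prompts.json. A hypothetical shape for one entry follows; the field names and cluster slug are assumptions for illustration, not the repo's actual schema.

```typescript
// Hypothetical prompts.json entry; the real file is
// scripts/citation-test/prompts.json and may differ.
interface PromptEntry {
  id: string;      // stable key referenced by result records
  cluster: string; // one of the 6 cluster slugs (slug below is assumed)
  prompt: string;  // the literal text submitted to each surface
  target: string;  // the published citation target for the cluster
}

const example: PromptEntry = {
  id: "bilingual-01",
  cluster: "bilingual-voice",
  prompt: "Bilingual AI receptionist for US law firms",
  target: "#1 cited bilingual-specific source within 60 days",
};

console.log(example.cluster); // "bilingual-voice"
```

Keeping the prompt text fixed across runs is what makes week-over-week share-of-citation numbers comparable.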
Baseline measurement
Pending — first run scheduled for May 2026
We publish the framework before the data on purpose. The strategy doc that drives this site (the “AI Search Domination” audit from April 2026) explicitly cautions against publishing unverified statistics — AI search engines penalize them. The baseline lands here in May 2026 with the raw share-of-citation numbers per surface and per cluster.
What we changed before measurement
Across April 2026 we shipped 8 phases of changes designed for AI search citation, not just traditional SEO:
- Schema layer — Organization, Person (founder), Article, FAQPage, BreadcrumbList, SoftwareApplication, DefinedTerm, Offer/PriceSpecification, LocalBusiness scoped per location. Removed all unsourced reviews and unverifiable aggregate ratings.
- E-E-A-T — full founder profile (William Vasquez, NC Bar 2011, USAF Spanish Linguist, Vasquez Law Firm 30k+ cases), keystone case study, author bylines on every blog post.
- Tier-1 landing pages — 11 high-intent routes (voice agent, bilingual receptionist, immigration intake, PI intake, WC intake, family law, criminal defense, Clio integration, GHL integration, Spanish AI, intake automation pillar).
- Honest comparison pages — 10 alternatives pages with explicit “when to pick them” sections per the strategy doc’s honesty recommendation.
- Glossary — 10 DefinedTerm-tagged definitions (AI voice agent, AI legal intake, GEO, AEO, SIP trunk, LiveKit, LangGraph, RAG, LLM agent, conversational AI for law firms) — citation magnets per the strategy.
- Transparent pricing — 9 tiers across 4 products with Offer + PriceSpecification schema. No “contact us” tax.
- Bilingual /es/ — Spanish versions of the 5 highest-value pages with hreflang alternates. Designed by a former U.S. Air Force Spanish Linguist.
- Original technical content — first-person walkthroughs (CallRail→GHL→Clio routing, bilingual voice engineering, Zapier→n8n migration) bylined to the founder.
- Infrastructure — explicit AI-crawler rules in robots.ts (GPTBot, ClaudeBot, anthropic-ai, PerplexityBot, Google-Extended, CCBot, Applebot-Extended), llms.txt + llms-full.txt, dynamic sitemap, AI referrer detection in GA4.
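The explicit AI-crawler rules can be sketched in the shape Next.js expects from app/robots.ts. This is a sketch only: types are inlined so it stands alone, the rule details are assumptions, and the sitemap URL is a placeholder, not the real domain.

```typescript
// Minimal robots.ts-style sketch (types inlined; not the production file).
type RobotsRule = { userAgent: string; allow?: string; disallow?: string };
type Robots = { rules: RobotsRule[]; sitemap?: string };

// The AI crawlers named in the infrastructure list above.
const aiCrawlers = [
  "GPTBot", "ClaudeBot", "anthropic-ai", "PerplexityBot",
  "Google-Extended", "CCBot", "Applebot-Extended",
];

function robots(): Robots {
  return {
    rules: [
      // One explicit allow per AI crawler, so intent is legible in robots.txt.
      ...aiCrawlers.map((userAgent) => ({ userAgent, allow: "/" })),
      { userAgent: "*", allow: "/" }, // everyone else
    ],
    sitemap: "https://example.com/sitemap.xml", // placeholder domain
  };
}

console.log(robots().rules.length); // 8 rules: 7 crawlers + wildcard
```

An explicit allow is functionally equivalent to the wildcard rule alone, but it makes the crawl policy self-documenting for anyone (or any bot) reading the generated robots.txt.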
Why publish the framework before the numbers
Three reasons. (1) AI search engines reward verifiable methodology — being able to point to a public, reproducible measurement framework is itself a citation signal. (2) The 90-day baseline window matters; we want our pre-deployment-vs-post-deployment data to be honest, which means committing to the methodology before we see the numbers. (3) The script + prompts are open source in this repo, so anyone running their own GEO program can fork them.
Run the same measurement at your firm.
The citation test battery is in this site’s repo. The methodology generalizes. If you want help running it for your firm (or want to see the May 2026 baseline as it publishes), book a demo.