FAQPage schema alone boosts citation probability 3.2× (internal 2025 benchmark, 400 sites).
AI Search Readiness Audit
Check if your content shows up in AI-powered search results. 8 categories, 42 checkpoints — only 23% of B2B sites pass this audit.
- 42 signals · 8 categories
- Private · runs locally
- Takes ~ 6 min
GEO Content Readiness
01 Technical Foundation Core infrastructure that AI crawlers evaluate first — HTTPS, speed, sitemaps. 0/10
- Site served over HTTPS 2pts
- Server responds in under 600 ms (TTFB) 3pts
- sitemap.xml present 1pt
- sitemap.xml accessible (not 403/404) 1pt
- HTML lang attribute set 1pt
- No intrusive interstitials on load 2pts
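Several of the checks above can be automated. A minimal sketch, assuming you already have the page URL and raw HTML in hand (the helper names are illustrative, not part of any audit tool):

```python
import re

def has_https(url: str) -> bool:
    # Checkpoint: site served over HTTPS.
    return url.startswith("https://")

def has_lang_attribute(html: str) -> bool:
    # Checkpoint: lang attribute set on the <html> element.
    return re.search(r"<html\b[^>]*\blang\s*=", html, re.IGNORECASE) is not None

def sitemap_url(base_url: str) -> str:
    # Checkpoint helper: where sitemap.xml is expected to live.
    return base_url.rstrip("/") + "/sitemap.xml"
```

Server response time and interstitial detection need a real HTTP client and a headless browser respectively, so they are left out of this sketch.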
02 Schema & Structured Data JSON-LD markup that helps AI models understand and cite your content. FAQPage schema alone delivers a 3.2× citation boost. 0/8
- Any JSON-LD structured data present 1pt
- Organization schema implemented 2pts
- FAQPage schema on key pages (3.2× citation boost) 3pts
- BreadcrumbList schema for navigation context 1pt
- Author/Person schema linked to content 1pt
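The FAQPage item above carries the heaviest weight in this category. A hedged sketch of generating that markup programmatically (the helper name and example Q&A are illustrative):

```python
import json

def faq_page_jsonld(pairs):
    # Build schema.org FAQPage markup from (question, answer) pairs.
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

markup = faq_page_jsonld([
    ("What is GEO content readiness?",
     "A measure of how well a site is optimised for AI search engines."),
])
# Embed in the page head as: <script type="application/ld+json">...</script>
print(json.dumps(markup, indent=2))
```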
03 AI Bot Access Whether GPTBot, ClaudeBot, PerplexityBot, and Google-Extended can crawl your site. 0/6
- GPTBot not blocked in robots.txt 2pts
- ClaudeBot / anthropic-ai not blocked 1pt
- PerplexityBot not blocked 1pt
- Google-Extended (Gemini) not blocked 1pt
- Key content not JS-only (server-rendered HTML) 1pt
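Passing the bot-access checks above mostly means making sure robots.txt contains no Disallow rules for these user agents. A minimal fragment to adapt to your existing rules (note that ClaudeBot and anthropic-ai are separate tokens):

```text
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: anthropic-ai
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /
```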
04 Content Structure Answer capsules, statistics with sources, and semantic HTML tables that AI models prefer to cite. 0/12
- Pages start with 40-60 word answer capsule 3pts
- Statistics include source and year 3pts
- Semantic HTML tables with `<thead>` and `<caption>` 2pts
- Structured comparison tables (product vs alternatives) 2pts
- Visible 'Last Updated' dates on articles and data pages 2pts
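The table checkpoint above means real `<table>` markup, not a styled `<div>` grid. A minimal example, using the segment averages quoted in the FAQ further down this page:

```html
<table>
  <caption>Average GEO readiness score by segment</caption>
  <thead>
    <tr><th scope="col">Segment</th><th scope="col">Average score</th></tr>
  </thead>
  <tbody>
    <tr><td>B2B SaaS</td><td>45 / 100</td></tr>
    <tr><td>Local service businesses</td><td>27 / 100</td></tr>
  </tbody>
</table>
```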
05 Brand Authority Signals External signals like Wikipedia presence, Knowledge Panel, and press mentions that establish brand trust. 0/10
- Brand or founder mentioned on Wikipedia / Wikidata 3pts
- Google Knowledge Panel claimed 2pts
- Consistent NAP across LinkedIn, Crunchbase, G2, etc. 2pts
- Industry press or news mentions in the last 6 months 2pts
- Non-promotional brand discussions on Reddit or forums 1pt
06 Content Freshness 76.4% of ChatGPT's most-cited pages were updated within 30 days. Freshness is the #3 citation factor. 0/8
- New content published or existing pages updated monthly 3pts
- Statistics include year/quarter stamps (e.g. 'Q1 2026 data') 2pts
- Industry benchmarks, surveys, or original research published 2pts
- Key pages show visible update history or 'Last Reviewed' date 1pt
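Visible update dates work best when paired with a machine-readable equivalent. A sketch of Article schema carrying dateModified (headline and dates are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example article headline",
  "datePublished": "2025-06-01",
  "dateModified": "2026-01-15"
}
```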
07 E-E-A-T Signals Experience, Expertise, Authoritativeness, Trustworthiness — author bios, case studies, contact info. 0/10
- Article authors have bio pages with professional credentials 3pts
- About page includes founding date, team, mission, and claims 2pts
- Business address, phone, or verified contact method visible 1pt
- Case studies or client results with specific numbers 2pts
- Industry certifications, awards, or memberships displayed 1pt
- Privacy Policy and Terms of Service present and linked in footer 1pt
08 Multi-Format Readiness Alternative content formats — llms.txt, JSON endpoints, Open Graph, canonical tags, hreflang. 0/8
- /llms.txt file present 1pt
- Benchmark or structured data exposed via JSON/API 2pts
- Articles have a print-friendly or minimal version 1pt
- Open Graph tags (og:title, og:description) implemented 2pts
- Canonical tags set to prevent duplicate content confusion 1pt
- hreflang tags implemented (if multilingual site) 1pt
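For the /llms.txt checkpoint, the emerging convention is a markdown file at the site root: an H1 title, a one-line blockquote summary, then H2 sections of annotated links. A hedged sketch (company name, paths, and descriptions are placeholders):

```markdown
# Example Co

> B2B SaaS platform; this file lists the pages AI assistants should read first.

## Key pages

- [Pricing](https://example.com/pricing): plans, limits, and billing FAQs
- [Docs](https://example.com/docs): product documentation and API reference
```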
Generative search is a different indexing problem.
- Share of AI citations we tracked that went to pages updated within the last 90 days.
- Average time for Perplexity and ChatGPT to crawl new content from high-authority domains.
How readiness scores translate to citations.
Score band → citation rate by engine
| Score band | ChatGPT | Perplexity | Claude |
|---|---|---|---|
| 0–40 | 2% | 3% | 1% |
| 41–65 | 11% | 18% | 7% |
| 66–85 | 34% | 51% | 22% |
| 86–100 | 71% | 83% | 58% |
Which signals move the needle fastest?
| Signal | Effort | Avg. citation lift |
|---|---|---|
| FAQPage JSON-LD | 2 h | +3.2× |
| Last-updated timestamps | 1 h | +1.9× |
| Author E-E-A-T bios | 4 h | +2.4× |
| Semantic tables | 4 h | +1.7× |
| llms.txt | 1 h | +1.3× |
| Allow GPTBot + PerplexityBot | 5 min | +4.1× |
What to ship first.
Ship FAQPage JSON-LD
Structured answers are the single highest-lift citation signal. Audit every pillar page and generate FAQPage schema.
~ 2 h per page
Publish last-updated timestamps
AI engines favour freshness. Add dateModified to Article schema + visible "Updated" label on all content.
~ 1 h per page
Allow GPTBot + PerplexityBot
Most sites still block AI crawlers by default. Flipping one line in robots.txt unlocks indexing across every major engine.
~ 5 min site-wide
Questions founders ask.
- What is GEO content readiness?
GEO content readiness measures how well a website is optimised for AI search engines like ChatGPT, Perplexity, and Gemini. It covers 8 areas: technical foundation, Schema markup, AI bot access, content structure, brand authority, content freshness, E-E-A-T signals, and multi-format availability.
- Which AI bots should I allow in robots.txt?
The four most important AI crawlers to allow are GPTBot (OpenAI/ChatGPT), ClaudeBot or anthropic-ai (Anthropic/Claude), PerplexityBot (Perplexity), and Google-Extended (Google Gemini and AI Overviews). Blocking any of these prevents that AI from citing your content.
- Does FAQPage schema really help AI visibility?
Yes. Pages with FAQPage schema are cited in AI responses at a 41% rate, compared to 15% for pages without it — a 3.2× improvement. This holds true even though Google restricted FAQ rich results in August 2023: AI crawler behaviour differs from Google's SERP policies.
- How long does the GEO readiness audit take?
The manual self-assessment checklist takes approximately 6 minutes to complete. You answer 42 questions across 8 categories about your website's AI search readiness and receive instant scoring with prioritised quick wins.
- What GEO readiness score is considered good?
Scores above 70/100 indicate strong AI search readiness. The industry average across all B2B sectors is 38/100. B2B SaaS companies average 45/100, while local service businesses average 27/100. Scores below 30 indicate the site is likely invisible in AI-assisted research.
- Can JavaScript-heavy websites be optimised for AI visibility?
JavaScript-rendered content is invisible to GPTBot, ClaudeBot, and PerplexityBot — these bots cannot execute JavaScript. Key content must be server-rendered HTML. This is the single most common critical failure in GEO audits.
- What is the difference between SEO and GEO?
SEO optimises for ranking in traditional search results; GEO (Generative Engine Optimisation) optimises for citation in AI-generated responses. Key differences: GEO rewards statistical claims with cited sources (keyword stuffing actually hurts GEO performance by 10%), semantic HTML tables (2.5× citation rate), and content freshness (pages updated within the last 30 days).
- How often should I re-run the GEO readiness checklist?
Monthly is recommended, primarily because AI citation patterns shift as models are updated. Content freshness is also a major factor — 76.4% of ChatGPT's most-cited pages were updated within the previous 30 days. Set a recurring monthly reminder to update key pages and re-audit. -
How is this calculated?
The checklist has 42 items across 8 categories with pre-assigned point values totalling 72 raw points. Each answer maps to a coefficient: Yes = 1.0, Partially = 0.5, Not yet = 0. Your running score is the raw sum converted to a 0-100 scale. Categories are weighted internally for UX feedback but every item contributes its base points to the total — no hidden multipliers. Source research: CodeFormers 2025 GEO benchmark (n=400 sites), public llms.txt audits, and ChatGPT / Perplexity / Claude citation tracking from Jan-Nov 2025.
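The scoring described above can be reproduced in a few lines. A sketch assuming the 72-point raw total and the Yes/Partially/Not yet coefficients (the function name is illustrative):

```python
COEFFICIENTS = {"yes": 1.0, "partially": 0.5, "not yet": 0.0}
MAX_RAW_POINTS = 72  # sum of the pre-assigned points across all 42 items

def readiness_score(answers):
    # answers: iterable of (item_points, answer) pairs,
    # e.g. (3, "yes") for "FAQPage schema on key pages".
    raw = sum(points * COEFFICIENTS[answer.lower()] for points, answer in answers)
    return round(raw / MAX_RAW_POINTS * 100)  # 0-100 scale, no hidden multipliers
```

A site answering Yes to every item scores 100; Partially across the board scores 50.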
Professional GEO optimization, audited.
We take your readiness score from whatever-it-is today to 65+ in four weeks. Full technical + content + schema audit, implementation, and post-launch citation tracking.