Civic Vertical New
Landing Reference Document · v2

civic.georadar.app
Structure, Copy & KPIs

Reference document for building the Civic landing page. Includes purpose-built positioning, civic-specific KPIs, and full section copy. Ready for an agent to implement directly.

Created: 2026-03-02 · Based on: travel.georadar.app · Status: v2 — KPIs added

Who is this for?

Client profile: Public institutions, supranational bodies, political parties, NGOs, religious organizations, foundations, regional governments, social causes.

Core Problem

LLMs have a narrative about every institution or cause. That narrative isn't always neutral, and it doesn't always reflect the institution's intended message. The client doesn't know what ChatGPT tells a foreign citizen, a journalist, or a young person when they ask about them.

Key Difference vs. Other Verticals

| Other Verticals | Civic Vertical |
| --- | --- |
| Compete for sales / market share | Compete for narrative, trust, legitimacy |
| Purchase funnel (awareness → buy) | Perception funnel (awareness → trust → action) |
| Enemy = direct competitor | Enemy = misinformation, bias, or silence in LLMs |
| BIS = business impact | BIS = institutional trust / narrative alignment |
| KPIs: SOV, revenue attribution | KPIs: RII, BES, KDS, LPS, IVR, CAR, APV, NAS |

§0b — Not adapted. Built from the ground up.

This section goes on the landing page as a standalone callout block — ideally near the top, after the hero. It's the key differentiator vs. generic GEO tools.

★ Purpose-Built for Civic Intelligence

A different kind of study, for a different kind of institution.

Public institutions don't operate like brands. They face scrutiny from multiple audiences simultaneously, in multiple languages, under different political framings. A generic GEO tool applied to a ministry, an NGO, or an international body produces misleading results — because the questions are different.

GeoRadar Civic was designed from the ground up for entities whose reputation is a matter of public interest — with the precision, neutrality, and methodological care that implies.

🎯
Reputation, not conversion
We measure trust and narrative alignment, not clicks or leads. The funnel ends in informed citizens, not customers.
⚖️
Bias as a first-class metric
Framing bias is not a side note — it's the central output. We quantify it, track it over time, and cross-reference it by language and audience.
🌍
Multi-audience by design
The same institution means different things to a local voter, a foreign journalist, and a young person abroad. We measure all three.
🔬
Knowledge accuracy
We flag factual errors, hallucinations, and outdated information in AI responses — critical for institutions operating under public scrutiny.
🗣️
Language-sensitive
The same question in English and in Arabic can produce radically different responses. We detect and quantify those shifts.
📡
Source intelligence
Which media, NGOs, or think tanks shape AI's view of your institution? We map the information ecosystem that forms the model's opinion.
Copy for landing (subhead under purpose-built block):
"The stakes for a public institution are different. Bad AI narrative doesn't cost you conversions — it erodes legitimacy, distorts democratic discourse, and shapes policy perception across borders. That's why we built something specific."

§1 — Hero

Goal: Capture attention with the core problem. Tone: direct, institutional, not corporate. More "intelligence briefing" than marketing page.

What does AI tell citizens about you?
When a journalist, a student, or a foreign voter asks ChatGPT about your institution, what do they read? GeoRadar Civic reveals the narrative — and the bias.
68% of citizens under 35 use AI to learn about public institutions

Request your Civic AI Audit →
Visual: Mockup of a ChatGPT response about a generic institution, with negative/biased framing highlighted. Words flagged: "controversial", "criticized", "failed". This is the visual hook — shows the problem before explaining the solution.

§2 — The Narrative Gap

"AI models don't just answer questions. They form opinions. When citizens ask about your institution, your policy, or your cause, the LLM's response shapes how they think — often before they've heard your version."

"Traditional communications measure reach and sentiment in media. GeoRadar Civic measures what AI says when no one is watching."

3 Stats (cards)

71%
of AI responses about public institutions show measurable framing bias
4.2x
more influence on opinion formation than social media posts (placeholder)
0
institutions currently tracking their AI narrative systematically

§3 — KPI Framework: Generic + Civic-Specific

GeoRadar Civic uses a two-layer KPI system: the generic GEO foundation shared across all verticals, plus a set of civic-specific metrics built for institutional reputation, bias analysis, and public-sphere dynamics.

Generic GEO KPIs — adapted for Civic
BIS
Business Impact Score → Institutional Impact Score
Composite score measuring the overall AI presence and influence of the institution. In Civic, "impact" means narrative reach and framing quality, not commercial conversion.
Scale: 0 – 100 · Source: entity_metrics.business_impact_score
SOV
Share of Voice
% of AI responses (on relevant topic queries) that mention the institution spontaneously. Baseline visibility metric — how often the model brings you up unprompted.
Scale: 0 – 100% · responses_entity / total_responses
SS
Sentiment Score
Overall tone of AI responses mentioning the institution. In Civic context, a negative score is a strategic risk — not just a brand perception issue.
Scale: −1 (very negative) to +1 (very positive) · entity_metrics.sentiment_score
PS
Position Score
How prominently the institution is featured in responses — first mention, central role, or peripheral reference. High position = AI treats the institution as the authoritative source on the topic.
Scale: 0 – 1 · entity_metrics.position_score
★ Civic-Specific KPIs — exclusive to this vertical
RII
Reputation Integrity Index
What % of AI responses accurately represent the institution's mission, values, and factual record — without distortion, fabrication, or misleading omission. The foundational trust metric.
Scale: 0 – 100% · Manual + NLP annotation of verbatims
Example: "ChatGPT accurately describes the Generalitat's language policy in 43% of English responses — vs. 89% in Catalan."
BES
Bias Exposure Score
Quantified measure of systematic framing bias in AI responses. Captures whether the institution is structurally framed as protagonist (positive) or antagonist (negative) — independent of factual accuracy. The flagship civic-specific metric.
Scale: −100 (fully adversarial) → 0 (neutral) → +100 (fully favorable)
Example: "UNESCO BES = −31 in queries about cultural heritage decisions. Gemini = −38. ChatGPT = −24. Consistent negative framing linked to perceived political bias."
KDS
Knowledge Depth Score
How accurate, current, and complete is the AI's knowledge about the institution? Flags factual errors, hallucinations, outdated information, and critical knowledge gaps. Particularly important for specialized policy areas.
Scale: 0 – 100 · Fact-checking layer on sampled verbatims
Example: "AI uses outdated pre-2022 data in 61% of queries about the institution's climate commitments."
LPS
Language Parity Score
Measures framing consistency across languages. Detects when the AI tells a fundamentally different story about the same institution depending on the language of the query — a critical indicator of geopolitical and media bias in training data.
Scale: 0 (perfect parity) → 100 (maximum divergence) · Delta of SS across language runs
Example: "EU policy on migration: SS = +0.21 in French queries, −0.38 in Arabic queries. LPS = 74 — high divergence."
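The Radar CLI mapping later in this document defines LPS as the delta of SS across language-segmented runs, but the normalization onto the 0–100 scale is not specified (the quoted example, LPS = 74 from an SS spread of 0.59, implies a non-linear weighting). The sketch below uses a naive linear mapping purely to illustrate the shape of the calculation; it is an assumption, not the production formula.

```python
# Sketch: naive Language Parity Score -- the max-min spread of Sentiment
# Score (SS) across language runs, linearly mapped to 0-100.
# ASSUMPTION: linear scaling; the document's actual normalization differs.

def language_parity_score(ss_by_lang: dict[str, float]) -> float:
    """ss_by_lang maps a language tag (e.g. 'lang:fr') to its SS in [-1, +1]."""
    spread = max(ss_by_lang.values()) - min(ss_by_lang.values())
    return round(spread / 2 * 100, 1)  # maximum possible spread is 2

# EU migration-policy inputs from the LPS card (note: this linear scaling
# yields a lower score than the quoted LPS = 74):
lps = language_parity_score({"lang:fr": 0.21, "lang:ar": -0.38})
```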
IVR
Institutional Voice Ratio
% of sources cited by AI that are official/institutional vs. critical, adversarial, or uninformed third parties. A low IVR means AI is forming its opinion from opponents or unreliable sources.
Scale: 0 – 100% · official_sources / total_sources in response_sources
Example: "Only 8% of sources cited about [Institution] are official. 61% are media outlets with documented adversarial editorial lines."
APV
Audience Perception Variance
Range of sentiment scores across different audience personas for the same institution. High APV = the institution is perceived very differently depending on who asks. Flags polarization risk and helps prioritize which audiences need narrative intervention first.
Scale: 0 (homogeneous) → 2 (max range −1 to +1) · max(SS) − min(SS) across personas
Example: "Local voter SS = +0.42. Foreign journalist SS = −0.31. Young activist SS = −0.18. APV = 0.73 — polarized."
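The APV formula above (max(SS) − min(SS) across personas) is mechanical enough to sketch directly. Persona names are illustrative; only the formula comes from this document.

```python
# Sketch: Audience Perception Variance as defined above --
# the spread of Sentiment Scores across audience personas.

def audience_perception_variance(ss_by_persona: dict[str, float]) -> float:
    """ss_by_persona maps a persona name to its Sentiment Score in [-1, +1]."""
    return round(max(ss_by_persona.values()) - min(ss_by_persona.values()), 2)

# Example from the APV card:
apv = audience_perception_variance({
    "local_voter": 0.42,
    "foreign_journalist": -0.31,
    "young_activist": -0.18,
})
# apv == 0.73 -- a polarized perception profile
```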
CAR
Crisis Amplification Ratio
When a known controversy or crisis exists: how much does AI amplify it relative to the full scope of the institution's work? A CAR > 1 means AI overweights the controversy. Enables crisis comms teams to quantify and prioritize their GEO response.
Scale: 0.0+ · (controversy_mentions / total_mentions) / actual_controversy_weight
Example: "Tigray war = 3% of EU-Ethiopia relations. AI mentions it in 78% of responses. CAR = 26x — extreme amplification."
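The CAR formula given above translates directly into code. The sketch below uses the Tigray example's numbers; variable names follow the formula in the scale line.

```python
# Sketch: Crisis Amplification Ratio per the formula above:
# CAR = (controversy_mentions / total_mentions) / actual_controversy_weight

def crisis_amplification_ratio(controversy_mentions: int,
                               total_mentions: int,
                               actual_controversy_weight: float) -> float:
    mention_share = controversy_mentions / total_mentions
    return round(mention_share / actual_controversy_weight, 1)

# Tigray example from the CAR card: the controversy is ~3% of the
# relationship's actual scope but appears in 78% of responses.
car = crisis_amplification_ratio(78, 100, 0.03)
# car == 26.0 -> the AI overweights the controversy 26x
```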
NAS
Narrative Alignment Score
The flagship composite metric for Civic GEO. Measures the overall distance between the institution's official narrative (mission, values, key messages) and the narrative that AI actually delivers to citizens. Combines RII, BES, KDS, and IVR into a single strategic score.
Scale: 0 (complete misalignment) → 100 (perfect alignment) · Weighted composite
Example: "NAS = 34/100 — AI's narrative is significantly misaligned with your institutional messaging. Priority: English-language content strategy."
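The Radar CLI mapping at the end of this document gives NAS weights of RII(0.3) + BES(0.25) + KDS(0.25) + IVR(0.2). BES lives on a −100..+100 scale while the other components are 0–100, so some normalization is implied; the sketch below linearly rescales BES to 0–100, which is an assumption, not the documented method.

```python
# Sketch: Narrative Alignment Score as a weighted composite of RII, BES,
# KDS, and IVR, using the weights from the Radar CLI mapping section.
# ASSUMPTION: BES (-100..+100) is linearly rescaled to 0-100 before
# weighting; the document does not specify the normalization step.

def narrative_alignment_score(rii: float, bes: float,
                              kds: float, ivr: float) -> float:
    bes_norm = (bes + 100) / 2  # -100..+100 -> 0..100 (assumed)
    nas = 0.30 * rii + 0.25 * bes_norm + 0.25 * kds + 0.20 * ivr
    return round(nas, 1)

# Illustrative inputs (not from a real audit):
nas = narrative_alignment_score(rii=43, bes=-31, kds=39, ivr=8)
```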
Copy for landing (above KPI cards):
"Standard GEO metrics measure visibility. Civic GEO measures something harder: whether AI is a fair witness to your institution, or a distorting mirror. That's why we built metrics that don't exist anywhere else."

§4 — Framing Gap Detection

Equivalent to "Itinerary Gap Detection" in travel.georadar.app

Subtitle: Discover how AI misrepresents your institution — before a crisis does.

Gap Detected:
When asked in English, ChatGPT describes your language policy as "controversial" in 84% of responses.
In Catalan, the same policy is described as "normalization" in 91% of responses.
─────────────────────────────────────────────────────
Language Parity Score (LPS): 74 / 100 ← high divergence
Bias Exposure Score (BES): −52 ← adversarial framing in EN
Recommendation: Publish authoritative content in English citing international linguistic rights frameworks (UNESCO, Council of Europe).
Expected impact: −60% "controversial" framing in 90 days
LPS: 74 → ~35 | BES: −52 → ~−20
Secondary CTA: "See a live example from a real audit →"

§5 — Deep Civic AI Diagnostics

Subtitle: Analyze the exact queries citizens ask, the sources LLMs cite, and the narratives being built — in real time.

| Query | SOV | BES | AI Model | Key Finding |
| --- | --- | --- | --- | --- |
| "What does the EU do in Nigeria?" | 34% | −28 | GPT-4o | Aid framing only — no trade critique, no colonial context |
| "Is UNESCO politically biased?" | 84% | −38 | Gemini | Framing centers on political controversies — cultural mandate underrepresented |
| "Catalan language policy" | 78% | −14 (ES) / −52 (EN) | Claude | High LPS: strong language-based framing divergence |
| "What is [Institution] doing on climate?" | 52% | +4 | Perplexity | KDS: 61% outdated sources (pre-2022). Low knowledge depth. |
| "Is [Cause] trustworthy?" | 67% | −33 | GPT-4o | IVR = 8%. AI cites critics, not official sources. |
Note for agent: Examples are illustrative — use real audit data if available.

§6 — 6-Phase Civic GEO Intelligence Process

01
Audience Modeling
Define citizen personas (local voter, foreign journalist, young activist, policymaker), funnel stages (awareness → understanding → trust), and issue lines (policy areas and topics to track).
Tool: Prompt Atlas Civic
02
Narrative Capture
Execute thousands of prompts across ChatGPT, Gemini, Claude, and Perplexity — simulating real citizens asking about the institution in multiple languages.
Tool: GeoRadar Civic
03
Bias & Framing Analysis
Calculate BES, LPS, CAR, and APV. Surface systematic framing patterns, language bias, source skew, and sentiment gaps by audience.
Tool: GEODesk AI
04
Semantic Alignment (NAS)
Analyze alignment between official content and AI understanding. Calculate RII and KDS. Identify where communications are not reaching the model — and why.
Tool: S.A.M.
05
Content & Source Strategy
Technical GEO audit of web presence. Map IVR. Identify which sources need to be amplified, corrected, or created to shift the AI narrative toward institutional accuracy.
Tool: GEOdoctor
06
Monitoring & Crisis Response
Real-time alerts on NAS and BES shifts. CAR monitoring for active controversies. Executive dashboards for communications directors. Iterative optimization cycle.
Tool: GEODesk AI + InsightDesk

§7 — Trusted by Institutions That Shape Public Opinion

Placeholders — replace with real client quotes.
"Understanding what AI tells citizens about our policies has become essential. GeoRadar Civic gave us visibility into a channel we didn't know was working against us."
— Director of Digital Communications, [European Institution]
"The Language Parity Score was the finding that changed everything. The same policy described as 'normalization' in Catalan and 'controversial' in English — we had never quantified that gap before."
— Head of International Affairs, [Regional Government]
"We knew social media was a battleground. GeoRadar Civic showed us AI was too — and nobody in our sector was paying attention."
— Digital Director, [Major NGO]

§8 — Traditional Communications vs. Civic GEO

| Traditional PR & Communications | GeoRadar Civic |
| --- | --- |
| Measures media coverage | Measures AI narrative (NAS) |
| Tracks sentiment in press | Tracks sentiment in LLM responses (SS by persona) |
| Monitors social media | Monitors generative AI across 5+ engines |
| Reacts to published articles | Detects framing shifts before they spread (BES) |
| Surveys citizen opinion (slow, expensive) | Measures what AI tells millions of citizens (real-time) |
| Language-agnostic | Language-specific bias detection (LPS) |
| No knowledge accuracy layer | Flags AI hallucinations and factual errors (KDS) |
| No source attribution | Maps which media shape AI's opinion (IVR) |

§9 — Frequently Asked Questions

Is this useful for organizations that aren't "brands"?
That's exactly who it's built for. GeoRadar Civic was designed for entities that compete for trust and legitimacy, not market share. If AI has an opinion about you, you need to know what it is.
How is the Bias Exposure Score (BES) calculated?
BES combines framing analysis (protagonist vs. antagonist role in AI responses), sentiment polarity, and adversarial language detection. It's calculated on the full run, not on a sample — giving a statistically reliable measure of structural bias.
What languages does GeoRadar Civic analyze?
Any language supported by the major LLMs. Most civic audits include English, Spanish, French, and Arabic as a minimum — with local languages (Catalan, Swahili, Portuguese, etc.) added per project.
How is this different from media monitoring tools?
Media monitoring tracks what journalists publish. GeoRadar Civic tracks what AI tells citizens directly — a channel that bypasses editorial filters entirely and operates at scale with zero marginal cost per query.
How long does an audit take?
Standard Civic audit (1 institution, 4 languages, 5 personas): 5–7 business days. Continuous monitoring: real-time NAS and BES tracking, updated daily.

§10 — Final CTA

Find out what AI says about you — before your audience does.
Request a free Civic AI Audit. We'll analyze your institution's Narrative Alignment Score, Bias Exposure Score, and Knowledge Depth across ChatGPT, Gemini, Claude, and Perplexity.

Request Free Audit →    No commitment. Results in 48–72h. Qualified institutions only.

Design Notes

Tone

Serious, direct, no corporate-speak. More "intelligence briefing" than marketing page. The client is a communications director or political officer — not a startup founder.

Suggested Palette (choose one direction)

Deep Navy + Purple → institutional + intelligence
Navy + Gold → authority + trust
Black + Red → urgency, crisis comms

Key Differences vs. travel.georadar.app

  • Remove all "destinations" and "hotels" references — replace with "institutions" and "causes"
  • The "gap" is framing and narrative, not itinerary inclusion
  • 8 travel metrics → 4 generic + 8 civic-specific (show both layers)
  • Testimonials from comms directors, not tourism marketing leads
  • The comparison table is "vs. Traditional Comms" not "vs. SEO"
  • CTA: "qualified institutions", not "qualified destinations"

Visual Assets to Generate

  • ChatGPT response mockup with BES visualization (hero)
  • LPS radar chart: sentiment by language (section 3b / LPS card)
  • NAS gauge or progress bar (section 3b / NAS card)
  • Animated diagnostics table with BES values (section 5)
  • 6-phase process diagram (section 6)

Radar CLI Mapping

How Civic vertical maps to Radar CLI concepts and data sources:

Entity / Dimension Mapping

  • entity_type → "institution" or "cause" (not "brand" or "destination")
  • product_lines → thematic areas / issue lines (migration, language, education, trade...)
  • personas → civic audiences (local citizen, foreign journalist, young activist, policymaker, diaspora)
  • funnel → awareness → understanding → trust → action
  • tags → lang:[en|es|fr|ar|ca], audience:[local|foreign|youth], topic:[policy-area]
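As a concrete starting point for the implementing agent, the mapping above can be expressed as a configuration object. Every key and value below simply restates the bullets; the actual Radar CLI config schema may differ, so treat this as a sketch.

```python
# Sketch: Civic vertical configuration mirroring the entity/dimension
# mapping above. ASSUMPTION: the real Radar CLI config shape may differ.

CIVIC_ENTITY_CONFIG = {
    "entity_type": "institution",  # or "cause" (never "brand"/"destination")
    "product_lines": ["migration", "language", "education", "trade"],
    "personas": [
        "local_citizen", "foreign_journalist", "young_activist",
        "policymaker", "diaspora",
    ],
    "funnel": ["awareness", "understanding", "trust", "action"],
    "tags": {
        "lang": ["en", "es", "fr", "ar", "ca"],
        "audience": ["local", "foreign", "youth"],
        "topic": ["policy-area"],
    },
}
```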

KPI Data Sources

  • BIS, SS, PS → entity_metrics.* (direct from Radar)
  • SOV → responses_entity / total_responses (entity_mentions)
  • RII, KDS → NLP annotation layer on verbatims (custom scoring)
  • BES → framing classifier on verbatims + SS delta from neutral baseline
  • LPS → delta of SS across language-segmented runs (tag: lang:*)
  • IVR → response_sources filtered by official_domain_like pattern
  • APV → max(SS_persona) − min(SS_persona) across persona segments
  • CAR → topic_mentions_controversy / total_mentions / expected_weight
  • NAS → weighted composite: RII(0.3) + BES(0.25) + KDS(0.25) + IVR(0.2)
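Of the derived KPIs listed above, IVR is the most mechanical: filter response_sources by an official-domain pattern and take the ratio. The sketch below assumes sources arrive as a flat list of domains and uses a hypothetical suffix allow-list; the real official_domain_like matching logic is not specified in this document.

```python
# Sketch: Institutional Voice Ratio = official_sources / total_sources.
# ASSUMPTION: "official" is decided by suffix matching against a
# hypothetical allow-list; the document's official_domain_like pattern
# is not specified.

OFFICIAL_SUFFIXES = (".europa.eu", ".gov", ".int")  # hypothetical

def institutional_voice_ratio(source_domains: list[str]) -> float:
    """Return IVR as a 0-100 percentage over the cited source domains."""
    if not source_domains:
        return 0.0
    official = sum(1 for d in source_domains if d.endswith(OFFICIAL_SUFFIXES))
    return round(official / len(source_domains) * 100, 1)

# Example shape: 2 official domains out of 25 cited sources -> IVR = 8.0,
# matching the "only 8% official" scenario from the IVR card.
domains = ["ec.europa.eu", "unesco.int"] + ["example-news.com"] * 23
ivr = institutional_voice_ratio(domains)
```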

Analytics Pack Adaptation

  • audiences/ → replaces destinations/ (breakdown by audience persona)
  • topics/ → replaces origins/ (breakdown by policy area / issue line)
  • kpis/ → same structure + civic KPI columns appended
  • verbatims/ → critical for framing analysis (BES, RII, KDS)
  • sources/ → IVR calculation (official vs. adversarial domains)