How Scores Are Computed
Every numeric score the API returns is auditable. Here's exactly how each one is calculated.
All scores are derived from run-level fields extracted by LLM analysis (Gemini 2.0 Flash) during each daily scan. The methodology is versioned — the current version is 1.0, last updated 2026-04-10.
You can fetch the full machine-readable methodology at any time:
curl -H "x-api-key: rg_YOUR_KEY" https://api.reddgrow.ai/v1/aeo/score-methodology

visibility_pct
Formula: (mentioned_count / total_scans) * 100
Range: 0–100
What it measures: The percentage of AI answer scans — within your selected filters (brand, engine, country, topic, date range) — where the brand was mentioned at all.
What "mentioned" means: A run is counted as a mention if brand_mentioned = true. This field is set by LLM analysis of the raw answer text. Indirect mentions (paraphrases, common abbreviations) are included when the LLM determines the reference clearly identifies the brand.
Window semantics: visibility_pct is always computed over a time window. If no from/to filters are provided, the API uses the last 30 days by default. Shorter windows (e.g., last 7 days) produce more volatile scores. Longer windows (e.g., last 90 days) smooth out day-to-day fluctuation.
A visibility_pct of 0 means the brand was not mentioned in any scanned answer within the window. A score of 100 means it was mentioned in every answer.
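The formula above can be sketched as a small helper. This is illustrative only (the API computes the score server-side); it assumes each run is a record with the `brand_mentioned` boolean described above:

```python
def visibility_pct(runs):
    """Percentage of scans in the window where the brand was mentioned.

    `runs` is a list of scan records, each with a boolean
    `brand_mentioned` field, already filtered to the selected
    brand, engine, country, topic, and date range.
    """
    if not runs:
        return 0.0  # no scans in the window
    mentioned = sum(1 for r in runs if r["brand_mentioned"])
    return mentioned / len(runs) * 100
```

With two scans, one mention, the helper returns 50.0; an empty window yields 0.0.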
sentiment_score
Range: 0–100
Model: google/gemini-2.0-flash
What it measures: A confidence-weighted numeric sentiment score for brand mentions, averaged across all runs in the window where the brand was mentioned.
How to interpret it: Higher scores indicate more consistently positive brand framing in AI answers. A score of 80+ typically corresponds to answers that recommend the brand explicitly or describe it positively. A score below 40 typically corresponds to answers that mention the brand with caveats or in a negative context.
The score is only meaningful when brand_mentioned = true. Runs where the brand wasn't mentioned don't contribute to the average.
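A confidence-weighted average over mentioned runs can be sketched as follows. The per-run `sentiment` and `confidence` field names are assumptions for illustration; the API computes this aggregation server-side:

```python
def sentiment_score(runs):
    """Confidence-weighted average sentiment over mentioned runs.

    Assumes each run dict has `brand_mentioned` (bool), `sentiment`
    (0-100), and `confidence` (0-1). Field names are illustrative.
    Runs where the brand wasn't mentioned don't contribute.
    """
    mentioned = [r for r in runs if r["brand_mentioned"]]
    if not mentioned:
        return None  # undefined with no mentions in the window
    total_weight = sum(r["confidence"] for r in mentioned)
    if total_weight == 0:
        return None
    weighted = sum(r["sentiment"] * r["confidence"] for r in mentioned)
    return weighted / total_weight
```

Note that the function returns None rather than 0 when there are no mentions, since a 0 would read as strongly negative sentiment.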
brand_sentiment
Values: positive | neutral | negative | not_mentioned
Model: google/gemini-2.0-flash
What it is: A categorical sentiment label assigned per run (one AI answer). The LLM reads the full answer text and classifies how the brand is portrayed.
positive — the answer recommends or speaks favorably about the brand
neutral — the brand is mentioned without clear positive or negative framing
negative — the answer includes criticism, caveats, or unfavorable comparisons involving the brand
not_mentioned — the brand does not appear in the answer
brand_sentiment is per-run. sentiment_score is the aggregate numeric value averaged across runs in a window.
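Because brand_sentiment is a per-run label, the distribution across a window can be tallied client-side. A minimal sketch, assuming each run record carries the `brand_sentiment` field:

```python
from collections import Counter

def sentiment_breakdown(runs):
    """Count per-run brand_sentiment labels within a window.

    Each run dict carries a `brand_sentiment` field with one of:
    positive | neutral | negative | not_mentioned.
    """
    return Counter(r["brand_sentiment"] for r in runs)
```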
brand_position
Range: 1–N (integer) or null
Model: google/gemini-2.0-flash
What it is: The ordinal position of the first brand mention within the AI answer text. Position 1 means the brand was mentioned first. Position 3 means two other entities were mentioned before the brand appeared.
brand_position is null when brand_mentioned = false.
Position is extracted by the LLM during analysis of the raw answer text. It reflects the sequencing of brand mentions in the answer, not a ranking or recommendation order inherent to the AI engine.
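Conceptually, brand_position is the 1-based index of the brand in the ordered list of entities the answer mentions. The sketch below illustrates that semantics only; in the API the field is extracted by LLM analysis of the raw answer text, not by exact string matching:

```python
def brand_position(entities_in_order, brand):
    """1-based position of the brand in the ordered entity list,
    or None when the brand is absent (brand_mentioned = false).

    Illustrative only: the real field comes from LLM analysis,
    which also handles paraphrases and abbreviations.
    """
    for i, entity in enumerate(entities_in_order, start=1):
        if entity == brand:
            return i
    return None
```

For an answer that names two competitors before the brand, the position is 3, matching the example above.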
avg_position
Formula: AVG(brand_position) WHERE brand_mentioned = true
What it is: The average brand_position across all runs where the brand was mentioned, within the selected window and filters.
A lower avg_position means the brand tends to appear earlier in AI answers. An avg_position of 1.2 means the brand is nearly always the first entity mentioned. An avg_position of 4.5 means it tends to appear later, after competitors.
avg_position is computed only over runs where the brand was mentioned. Runs with brand_mentioned = false are excluded from the average.
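The aggregation formula above can be sketched directly, assuming run records expose `brand_mentioned` and `brand_position` as described:

```python
def avg_position(runs):
    """AVG(brand_position) over runs where brand_mentioned = true.

    Runs with brand_mentioned = false (where brand_position is
    null) are excluded, mirroring the API's aggregation.
    """
    positions = [r["brand_position"] for r in runs if r["brand_mentioned"]]
    if not positions:
        return None
    return sum(positions) / len(positions)
```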
Auditing scores with /explain endpoints
Every aggregate score is auditable. Two endpoints let you trace a score back to its source data:
GET /v1/aeo/visibility/explain?brand_id=X — shows every individual scan result that contributed to a visibility_pct calculation, with the brand_mentioned value for each
GET /v1/aeo/runs/{scan_result_id}/explain — shows the raw answer text, extracted fields, and full LLM analysis output for one specific scan
See Auditing Your Scores for a step-by-step walkthrough.
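A small sketch of building the two explain URLs, using the base URL and paths shown above. The extra filter names are assumptions; only brand_id and scan_result_id appear in the endpoint definitions:

```python
from urllib.parse import urlencode

BASE = "https://api.reddgrow.ai/v1/aeo"

def visibility_explain_url(brand_id, filters=None):
    """URL for the aggregate-level explain endpoint.

    `filters` is an optional dict of extra query parameters
    (e.g. engine, country, date range); which ones the endpoint
    accepts is an assumption here.
    """
    params = {"brand_id": brand_id, **(filters or {})}
    return f"{BASE}/visibility/explain?{urlencode(params)}"

def run_explain_url(scan_result_id):
    """URL for the per-run explain endpoint."""
    return f"{BASE}/runs/{scan_result_id}/explain"
```

Requests to either URL need the same x-api-key header shown in the curl example earlier.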
analysis_version
Every explain response includes an analysis_version field (currently "1.0"). This version identifies which scoring methodology was active when that run was analyzed.
If the methodology changes in a future version, older runs retain their original analysis_version. You can filter or group by analysis_version when comparing scores across long time windows to ensure you're comparing results computed by the same methodology.
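Grouping by analysis_version before aggregating can be sketched as follows, assuming each run record carries the `analysis_version` field from its explain response:

```python
from collections import defaultdict

def group_by_version(runs):
    """Bucket runs by analysis_version so aggregates are only
    compared within a single methodology version."""
    buckets = defaultdict(list)
    for r in runs:
        buckets[r["analysis_version"]].append(r)
    return buckets
```

Aggregates such as visibility_pct or avg_position can then be computed per bucket rather than across mixed methodology versions.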
The current methodology version and its full definition are always available at GET /v1/aeo/score-methodology.