ReddGrow Docs · AI Observability as a Service · Guides

Auditing Your Scores

Use the /explain endpoints to see exactly how every visibility and sentiment score was computed.


Goal: Trace a visibility_pct score back to the individual scan results that produced it, and drill into any single result to see the raw AI answer and every extracted field.

This is useful when reconciling score discrepancies, verifying competitor analysis, or producing audit trails for client reporting.


Step 1: Get the top-level score

Start with the aggregated visibility score for a brand:

curl "https://api.reddgrow.ai/v1/aeo/visibility/by-brand?brand_id=42&from=2026-04-03&to=2026-04-10" \
  -H "x-api-key: rg_YOUR_KEY"

Response (abbreviated):

{
  "data": {
    "brand_id": 42,
    "brand_name": "Acme Corp",
    "visibility_pct": 64.3,
    "mentioned_count": 9,
    "total_scans": 14,
    "avg_sentiment_score": 72.1,
    "avg_position": 2.3
  }
}

The response shows that 9 of 14 scans mentioned the brand (9 / 14 ≈ 64.3%). To see which nine did, and which five didn't, use the explain endpoint.
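The aggregate score is simply mentioned_count divided by total_scans. A minimal sanity check, using the field names from the response above:

```python
# Recompute visibility_pct from the aggregate response fields.
data = {"mentioned_count": 9, "total_scans": 14, "visibility_pct": 64.3}

recomputed = round(data["mentioned_count"] / data["total_scans"] * 100, 1)
assert recomputed == data["visibility_pct"]  # 9 / 14 ≈ 64.3%
```

If the recomputed value ever disagrees with the returned visibility_pct, the explain endpoints below are how you find out why.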


Step 2: Audit with visibility/explain

curl "https://api.reddgrow.ai/v1/aeo/visibility/explain?brand_id=42&from=2026-04-03&to=2026-04-10" \
  -H "x-api-key: rg_YOUR_KEY"

This returns every individual scan result that was counted in the visibility calculation:

{
  "data": {
    "brand_id": 42,
    "visibility_pct": 64.3,
    "analysis_version": "1.0",
    "results": [
      {
        "scan_result_id": 8801,
        "engine": "chatgpt",
        "country": "US",
        "scanned_at": "2026-04-10T03:42:00.000Z",
        "brand_mentioned": true,
        "brand_sentiment": "positive",
        "brand_position": 1
      },
      {
        "scan_result_id": 8802,
        "engine": "gemini",
        "country": "US",
        "scanned_at": "2026-04-10T04:11:00.000Z",
        "brand_mentioned": false,
        "brand_sentiment": "not_mentioned",
        "brand_position": null
      }
    ]
  }
}

You can see exactly which scan results contributed to the mentioned_count and which didn't. The scan_result_id on each row is your entry point to step 3.
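A small sketch for splitting the explain payload into hits and misses, assuming you have already parsed the JSON response (e.g. with `response.json()`); the payload here is the abbreviated example above:

```python
# Partition explain results into mentions and misses for auditing.
explain = {
    "data": {
        "results": [
            {"scan_result_id": 8801, "engine": "chatgpt", "brand_mentioned": True},
            {"scan_result_id": 8802, "engine": "gemini", "brand_mentioned": False},
        ]
    }
}

results = explain["data"]["results"]
hits = [r["scan_result_id"] for r in results if r["brand_mentioned"]]
misses = [r["scan_result_id"] for r in results if not r["brand_mentioned"]]

# Each miss id is the {id} for step 3's /v1/aeo/runs/{id}/explain call.
print("mentioned:", hits)
print("missed:", misses)
```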


Step 3: Drill into a single result

For any scan result that looks unexpected — a brand_mentioned: false on a day you expected a mention, or a brand_sentiment: negative you want to understand — pull the full answer:

curl "https://api.reddgrow.ai/v1/aeo/runs/8802/explain" \
  -H "x-api-key: rg_YOUR_KEY"

This returns the raw AI answer text, all extracted fields, and every citation found in that answer:

{
  "data": {
    "scan_result_id": 8802,
    "engine": "gemini",
    "country": "US",
    "scanned_at": "2026-04-10T04:11:00.000Z",
    "answer_text": "For project management, teams often choose...",
    "brand_mentioned": false,
    "brand_sentiment": "not_mentioned",
    "brand_position": null,
    "sentiment_score": null,
    "citations": [
      { "domain": "g2.com", "url": "https://g2.com/categories/project-management", "domain_type": "review_site" }
    ],
    "analysis_version": "1.0"
  }
}

Reading the raw answer_text tells you exactly what the AI engine said. You can see whether the brand was genuinely absent, whether it was paraphrased in a way the LLM didn't classify as a mention, or whether a competitor dominated the answer.
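One way to distinguish a genuine absence from a missed paraphrase is to search the raw answer_text for brand name variants yourself. A sketch of that check; the alias list is hypothetical and would come from your own knowledge of how the brand gets abbreviated:

```python
# Check the raw answer for brand aliases the classifier may have missed.
answer_text = "For project management, teams often choose..."
aliases = ["acme corp", "acme", "acmecorp"]  # hypothetical name variants

text = answer_text.lower()
found = [a for a in aliases if a in text]

if not found:
    # No variant appears at all: the brand was genuinely absent.
    print("brand absent from this answer")
else:
    # A variant appears but brand_mentioned was false: likely a
    # paraphrase the classifier did not count as a mention.
    print("possible uncounted paraphrase:", found)
```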


The analysis_version field

Every explain response includes an analysis_version field; the current version is "1.0".

If the scoring methodology changes in a future version, historical runs retain their original analysis_version. When comparing scores across long time windows (e.g., a 90-day trend), check that analysis_version is consistent. If you see a version change mid-window, the score delta may reflect methodology changes rather than changes in AI behavior.
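When aggregating explain responses over a long window, the consistency check can be automated. A minimal sketch over a list of parsed explain responses (structure as shown above):

```python
# Flag windows whose explain responses mix more than one analysis_version.
explain_responses = [
    {"data": {"analysis_version": "1.0"}},
    {"data": {"analysis_version": "1.0"}},
]

versions = {r["data"]["analysis_version"] for r in explain_responses}
if len(versions) > 1:
    # Score deltas across this boundary may reflect methodology
    # changes rather than changes in AI behavior.
    print("methodology changed mid-window:", sorted(versions))
```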

The full methodology definition for any version is available at GET /v1/aeo/score-methodology.