ReddGrow Docs
AI Observability as a Service

Rate Limits

60 requests per minute, 1,000 requests per hour. How to stay within limits and retry correctly.


All /v1/aeo/* endpoints share a single rate limit pool per API key:

Window        Limit
Per minute    60 requests
Per hour      1,000 requests

Both windows are enforced independently. You can hit the per-minute limit without hitting the per-hour limit, and vice versa.
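Because both windows apply at once, a client that wants to avoid 429s entirely has to track both. Below is a minimal sketch of a client-side pre-check using a sliding-window counter; the `DualWindowLimiter` class and its constructor parameters are illustrative, not part of any SDK, and the server remains the source of truth:

```typescript
// Client-side mirror of both rate-limit windows using a sliding-window
// counter. A sketch only: the server's own accounting is authoritative.
class DualWindowLimiter {
  private timestamps: number[] = [];

  constructor(
    private perMinute = 60,
    private perHour = 1000,
  ) {}

  // Returns true (and records the request) if sending now would stay
  // within both the per-minute and per-hour windows.
  allow(now = Date.now()): boolean {
    const hourAgo = now - 3_600_000;
    const minuteAgo = now - 60_000;
    // Timestamps older than an hour no longer count against either window.
    this.timestamps = this.timestamps.filter((t) => t > hourAgo);
    const inLastMinute = this.timestamps.filter((t) => t > minuteAgo).length;
    if (inLastMinute >= this.perMinute) return false; // per-minute window full
    if (this.timestamps.length >= this.perHour) return false; // per-hour window full
    this.timestamps.push(now);
    return true;
  }
}
```

Call `allow()` before each request and back off when it returns false. A sliding-window count is a conservative approximation of fixed reset windows, so it errs on the safe side.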


When you hit a limit

Exceeding either limit returns HTTP 429 Too Many Requests:

{ "statusCode": 429, "message": "Too many requests" }

Wait until the current window resets before retrying. The per-minute window resets every 60 seconds. The per-hour window resets every 3,600 seconds.


Which endpoints are expensive vs. cheap

Not all endpoints cost the same to serve. Plan your request patterns accordingly:

Cheaper endpoints (fast, low data volume — safe to call frequently):

  • GET /v1/aeo/me
  • GET /v1/aeo/engines
  • GET /v1/aeo/countries
  • GET /v1/aeo/score-methodology
  • GET /v1/aeo/brands
  • GET /v1/aeo/topics

Moderate endpoints (aggregated data, typically fast):

  • GET /v1/aeo/visibility/by-brand
  • GET /v1/aeo/visibility/by-engine
  • GET /v1/aeo/visibility/by-country
  • GET /v1/aeo/visibility/by-topic
  • GET /v1/aeo/visibility/timeline
  • GET /v1/aeo/prompts

Heavier endpoints (row-level data, larger payloads — use pagination):

  • GET /v1/aeo/prompts/:id/runs
  • GET /v1/aeo/sources/domains
  • GET /v1/aeo/sources/detail
  • GET /v1/aeo/citations
  • GET /v1/aeo/visibility/explain
  • GET /v1/aeo/runs/:id/explain

For heavy endpoints, always paginate with the limit and offset parameters rather than pulling everything in one call. The default limit is 20 for most paginated endpoints.
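A pagination loop over a heavy endpoint can be sketched as below. It assumes each page comes back as a plain JSON array of rows; if the real response wraps rows in an envelope, adjust the extraction line accordingly (`fetchAllRows` is an illustrative helper, not part of any SDK):

```typescript
// Sketch of limit/offset pagination for a row-level endpoint.
// Assumes each response body is a JSON array of rows.
async function fetchAllRows<T>(
  baseUrl: string,
  headers: Record<string, string>,
  limit = 20, // default page size for most paginated endpoints
): Promise<T[]> {
  const rows: T[] = [];
  for (let offset = 0; ; offset += limit) {
    const url = `${baseUrl}?limit=${limit}&offset=${offset}`;
    const res = await fetch(url, { headers });
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    const page = (await res.json()) as T[];
    rows.push(...page);
    if (page.length < limit) break; // short page means we reached the end
  }
  return rows;
}
```

Each page is one request against the shared pool, so a 1,000-row pull at the default limit costs 50 requests.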


Retry advice

Use exponential backoff with jitter when retrying after a 429.

A simple pattern:

async function fetchWithRetry(url: string, headers: Record<string, string>, maxRetries = 4) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch(url, { headers });
    if (res.status !== 429) return res;
    if (attempt === maxRetries - 1) break; // out of retries; don't sleep again

    // Exponential backoff (1s, 2s, 4s, ...) plus up to 500ms of jitter
    // so concurrent clients don't retry in lockstep.
    const baseDelay = Math.pow(2, attempt) * 1000;
    const jitter = Math.random() * 500;
    await new Promise((r) => setTimeout(r, baseDelay + jitter));
  }
  throw new Error('Max retries exceeded');
}

For batch data sync jobs, spread requests over several minutes rather than issuing them all at once. The 1,000/hour limit is generous for typical polling patterns: you would need to sustain more than 16 requests per minute for a full hour to exhaust it.
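One way to spread a batch is to pace requests at a fixed interval; at roughly 1.1 seconds per request you stay under the per-minute window with margin to spare. The helper below is a sketch (`runPaced` and its default interval are illustrative choices, not API requirements):

```typescript
// Run a batch of request tasks sequentially with a fixed pause between
// them, instead of firing them all at once. ~1.1s per request keeps a
// long-running job under 60 requests/minute.
async function runPaced<T>(
  tasks: Array<() => Promise<T>>,
  paceMs = 1100,
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < tasks.length; i++) {
    results.push(await tasks[i]());
    if (i < tasks.length - 1) {
      await new Promise((r) => setTimeout(r, paceMs));
    }
  }
  return results;
}
```

Combine this with a retry wrapper like fetchWithRetry for each task, so an occasional 429 from other traffic on the same key is absorbed instead of failing the whole job.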