Follow these guidelines to get the most out of the Events API and ensure your data flows reliably into RevBridge AI.

Batch your requests

Send multiple users per request instead of making one API call per user. Batching reduces HTTP overhead and improves throughput.

Avoid

1,000 requests with 1 user each

Prefer

1 request with 1,000 users
You can include up to 1,000 users per request. For high-volume pipelines, break your data into batches of 500–1,000 users and send them sequentially or with controlled concurrency.
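Splitting a large import into compliant batches can be as simple as slicing the user array. This is a sketch; `chunk` is a hypothetical helper, not part of the RevBridge SDK, and the batch size of 1,000 matches the per-request limit above.

```javascript
// Split an array of user objects into batches of at most `size` users.
function chunk(users, size) {
  const batches = [];
  for (let i = 0; i < users.length; i += size) {
    batches.push(users.slice(i, i + size));
  }
  return batches;
}

// Example: 2,500 users become 3 requests instead of 2,500.
const users = Array.from({ length: 2500 }, (_, i) => ({ user_id: `usr_${i}` }));
const batches = chunk(users, 1000); // lengths: 1000, 1000, 500
```

Each batch can then be sent sequentially, or with a small concurrency limit to avoid overwhelming your own outbound connection pool.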

Use consistent identifiers

Always use the same primary identifier for a given user across all your events. Inconsistent identification creates duplicate profiles.
Consistent — All events use user_id:
{ "identifiers": { "user_id": "usr_001", "email": "jane@example.com" } }
{ "identifiers": { "user_id": "usr_001", "email": "jane@example.com" } }
Inconsistent — Sometimes email-only, sometimes user_id-only:
{ "identifiers": { "email": "jane@example.com" } }
{ "identifiers": { "user_id": "usr_001" } }
The inconsistent approach may create two separate profiles until the system can merge them. Providing both identifiers together ensures immediate matching.
When a user authenticates, always send both their anon_id and their authenticated identifier (user_id or email) so the anonymous profile merges with the known one.
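A login event following this guidance might look like the sketch below. The `identifiers` field names come from the examples above; the surrounding payload shape (an `events` array per user) is an assumption for illustration.

```javascript
// After sign-in, send both the pre-auth anonymous ID and the
// authenticated identifiers so the two profiles merge immediately.
const loginEvent = {
  identifiers: {
    anon_id: "anon_8f3c",       // ID used while the visitor was anonymous
    user_id: "usr_001",         // authenticated identifier
    email: "jane@example.com",
  },
  events: [{ event_name: "login" }],
};
```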

Follow naming conventions

Use snake_case for event names and property keys. Be consistent — add_to_cart and addToCart are treated as two different event types.
Do            Don't
purchase      Purchase, PURCHASE
add_to_cart   addToCart, AddToCart
page_view     pageView, PageView
product_name  productName, ProductName
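One way to enforce this is to normalize names in your integration code before sending, so variants like addToCart and add_to_cart can never diverge. `toSnakeCase` is a hypothetical helper sketched here, not part of the API:

```javascript
// Normalize an event or property name to snake_case.
function toSnakeCase(name) {
  return name
    .replace(/([a-z0-9])([A-Z])/g, "$1_$2") // addToCart -> add_To_Cart
    .replace(/[\s-]+/g, "_")                // spaces and hyphens -> underscores
    .toLowerCase();
}

toSnakeCase("addToCart"); // "add_to_cart"
toSnakeCase("PageView");  // "page_view"
```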

Always send timestamps

When you have the actual time an event occurred, include it in the timestamp field. This is especially important for:
  • Historical imports — backfilling data from your database
  • Offline events — POS transactions, call center interactions
  • Queued events — events that were collected offline and sent later
If you omit the timestamp, the event is recorded at the time the API receives it, which may not reflect when the action actually happened.
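A backfilled event with an explicit timestamp might look like this. The `event_name`, `timestamp`, and `properties` field names follow the document's examples; the exact event shape is a sketch.

```javascript
// Historical purchase with the time it actually occurred, in ISO 8601.
// Without the timestamp field, the API would record receipt time instead.
const historicalEvent = {
  event_name: "purchase",
  timestamp: new Date("2024-03-15T10:30:00Z").toISOString(),
  properties: { revenue: 49.99 },
};
// historicalEvent.timestamp === "2024-03-15T10:30:00.000Z"
```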

Handle errors properly

After each API call, check the response for rejected users.
const response = await fetch("https://api.revbridge.ai/ingest/events", {
  method: "POST",
  headers: { /* ... */ },
  body: JSON.stringify(payload),
});

const data = await response.json();

if (data.users_rejected > 0) {
  console.error(`[RevBridge] ${data.users_rejected} users rejected`, {
    trace_id: data.trace_id,
    errors: data.errors,
  });
  // Fix the rejected users and retry them
}
Always log the trace_id from every response. If you need to contact RevBridge support, the trace ID allows us to quickly locate your request.

Be aware of idempotency

The Events API does not deduplicate events. If you send the same event twice, it will be recorded twice. Design your integration to avoid double-sends:
  • Use a send log. Track which events you’ve successfully sent (by your own event ID or a hash) and skip duplicates on retry.
  • Handle retries carefully. Only retry when you receive a network error or 503. A 202 response means the data was accepted — do not re-send it.
  • Check users_accepted. On 202, only retry the users listed in the errors array after fixing the validation issues.
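A minimal send log can be kept in memory or in your database. This sketch uses a `Set` of keys; `eventKey`, `sentLog`, and the key format are all hypothetical names chosen for illustration — a hash of the full event payload works just as well.

```javascript
// Track which events have already been accepted, so retries skip them.
const sentLog = new Set();

function eventKey(userId, event) {
  // A stable key per event; assumes event_name + timestamp is unique
  // per user in your data. Use your own event ID if you have one.
  return `${userId}:${event.event_name}:${event.timestamp}`;
}

function filterUnsent(userId, events) {
  return events.filter((e) => !sentLog.has(eventKey(userId, e)));
}

function markSent(userId, events) {
  for (const e of events) sentLog.add(eventKey(userId, e));
}
```

After a 202, call `markSent` for the accepted users; on retry, `filterUnsent` drops anything already recorded, so the API never sees the same event twice.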

Validate before sending

Catch problems early by validating your data before making API calls:
  • Every user has at least one primary identifier (user_id, email, phone_number, or anon_id)
  • Every user has at least one event
  • Every event has a non-empty event_name
  • revenue and event_value are numeric (not strings)
  • timestamp is in ISO 8601 format when provided
  • The batch has no more than 1,000 users
If all users in a request fail validation, the API returns 400 instead of 202. Validating client-side prevents entire batches from being rejected.
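The checklist above can be mirrored in a pre-flight check. The field names come from this documentation; `validateUser` itself is a sketch, and the timestamp check uses `Date.parse`, which is looser than strict ISO 8601.

```javascript
// Return a list of validation problems for one user; empty means valid.
function validateUser(user) {
  const errors = [];
  const ids = user.identifiers || {};
  if (!ids.user_id && !ids.email && !ids.phone_number && !ids.anon_id) {
    errors.push("missing primary identifier");
  }
  if (!Array.isArray(user.events) || user.events.length === 0) {
    errors.push("user has no events");
  }
  for (const e of user.events || []) {
    if (!e.event_name) errors.push("event_name is empty");
    if (e.revenue !== undefined && typeof e.revenue !== "number") {
      errors.push("revenue must be numeric");
    }
    if (e.event_value !== undefined && typeof e.event_value !== "number") {
      errors.push("event_value must be numeric");
    }
    if (e.timestamp && Number.isNaN(Date.parse(e.timestamp))) {
      errors.push("timestamp is not a parseable date");
    }
  }
  return errors;
}
```

Run this over every user before building the batch, and hold back (or fix) the ones that fail rather than letting them trigger per-user rejections server-side.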

Optimize payload size

Keep your payloads efficient:
  • Only send properties that are relevant to each event. Avoid sending empty strings or null values.
  • Use short, descriptive property names.
  • For large imports, split into multiple requests of 500–1,000 users each.
  • Total payload must be under 10 MB per request.
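Dropping empty and null property values before sending is a one-liner. `pruneProperties` is a hypothetical helper, sketched here for illustration:

```javascript
// Remove empty-string, null, and undefined values from event properties.
function pruneProperties(properties) {
  return Object.fromEntries(
    Object.entries(properties).filter(
      ([, v]) => v !== null && v !== undefined && v !== ""
    )
  );
}

pruneProperties({ product_name: "Mug", color: "", coupon: null });
// -> { product_name: "Mug" }
```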