Nov 28, 2024 · 11 min · SudoMock Team

Rate Limiting and Error Handling Best Practices

Handle API rate limits gracefully. Implement retry logic, backoff strategies, and error handling.

Building robust API integrations means handling rate limits gracefully. This guide covers best practices for retry logic, backoff strategies, and error handling — so your mockup pipeline never breaks.

Understanding Rate Limits

Every API has rate limits to ensure fair usage. SudoMock's limits vary by plan:

| Plan | Per Minute | Per Hour | Concurrent |
| --- | --- | --- | --- |
| Free | 5 | 30 | 1 |
| Starter | 30 | 180 | 3 |
| Pro | 100 | 600 | 10 |
| Scale | 300 | 1800 | 25 |

Response Headers

Every API response includes rate limit headers:
  • X-RateLimit-Remaining — Requests left in current window
  • X-RateLimit-Reset — When the limit resets (Unix timestamp)
  • Retry-After — Seconds to wait (only on 429 responses)
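
In practice you can read these headers off every response and keep the values around for throttling decisions. A minimal sketch, assuming a standard Fetch API `Headers` object (the helper name is ours, not part of the SudoMock SDK):

```javascript
// Extract rate-limit info from a response's headers.
// Returns nulls for any header the response didn't include.
function parseRateLimitHeaders(headers) {
  const remaining = headers.get('X-RateLimit-Remaining');
  const reset = headers.get('X-RateLimit-Reset');
  const retryAfter = headers.get('Retry-After');
  return {
    remaining: remaining !== null ? parseInt(remaining, 10) : null,
    // X-RateLimit-Reset is a Unix timestamp in seconds
    resetAt: reset !== null ? new Date(parseInt(reset, 10) * 1000) : null,
    // Retry-After is in seconds; convert to ms for setTimeout
    retryAfterMs: retryAfter !== null ? parseInt(retryAfter, 10) * 1000 : null
  };
}
```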

Handling 429 Errors

When you exceed rate limits, the API returns 429 Too Many Requests. Here's how to handle it properly:

Basic retry with backoff:

```javascript
async function renderWithRetry(payload, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch('https://api.sudomock.com/api/v1/renders', {
      method: 'POST',
      headers: {
        'X-API-KEY': API_KEY,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(payload)
    });

    if (response.ok) {
      return response.json();
    }

    if (response.status === 429) {
      // Get retry delay from header, or use exponential backoff
      const retryAfter = response.headers.get('Retry-After');
      const delay = retryAfter
        ? parseInt(retryAfter) * 1000
        : Math.pow(2, attempt) * 1000;

      console.log(`Rate limited. Retrying in ${delay}ms...`);
      await sleep(delay);
      continue;
    }

    // Other errors - don't retry
    throw new Error(`API error: ${response.status}`);
  }

  throw new Error('Max retries exceeded');
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```

Exponential Backoff

For retry logic, exponential backoff prevents overwhelming the API. The delay doubles with each retry attempt:

  • Retry 1 — wait 1s
  • Retry 2 — wait 2s
  • Retry 3 — wait 4s
  • Retry 4 — wait 8s
Exponential backoff with jitter:

```javascript
function getBackoffDelay(attempt, baseDelay = 1000) {
  // Exponential: 1s, 2s, 4s, 8s...
  const exponentialDelay = Math.pow(2, attempt) * baseDelay;

  // Add jitter (0-25% random) to prevent thundering herd
  const jitter = exponentialDelay * Math.random() * 0.25;

  // Cap at 30 seconds
  return Math.min(exponentialDelay + jitter, 30000);
}
```

Why Jitter?

When multiple clients hit rate limits simultaneously, they all retry at the same time — causing another rate limit. Adding random jitter spreads out retries.
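
A quick way to see the effect is to sample the helper for the same attempt number: each call lands on a slightly different delay, so simultaneous clients drift apart instead of retrying in lockstep. (The function is repeated here so the snippet runs on its own.)

```javascript
// Same helper as above: exponential base delay plus 0-25% random jitter,
// capped at 30 seconds.
function getBackoffDelay(attempt, baseDelay = 1000) {
  const exponentialDelay = Math.pow(2, attempt) * baseDelay;
  const jitter = exponentialDelay * Math.random() * 0.25;
  return Math.min(exponentialDelay + jitter, 30000);
}

// Attempt 2 has a 4s base, so each sample falls somewhere in 4000-5000ms.
// Three clients retrying "at the same time" now land on three different ticks.
for (let i = 0; i < 3; i++) {
  console.log(`attempt 2 delay: ${Math.round(getBackoffDelay(2))}ms`);
}
```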

Batch Processing Strategy

For high-volume workloads, don't blast requests — queue them intelligently:

Rate-limited queue:

```javascript
class RateLimitedQueue {
  constructor(requestsPerMinute) {
    this.queue = [];
    this.processing = false;
    this.delayMs = (60 / requestsPerMinute) * 1000;
  }

  async add(payload) {
    return new Promise((resolve, reject) => {
      this.queue.push({ payload, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing || this.queue.length === 0) return;
    this.processing = true;

    while (this.queue.length > 0) {
      const { payload, resolve, reject } = this.queue.shift();

      try {
        const result = await renderWithRetry(payload);
        resolve(result);
      } catch (error) {
        reject(error);
      }

      // Wait between requests
      await sleep(this.delayMs);
    }

    this.processing = false;
  }
}

// Usage: 30 requests/minute on Starter plan
const queue = new RateLimitedQueue(30);

// Add 100 renders - they'll process at a safe rate
for (const design of designs) {
  queue.add({ mockup_uuid, smart_objects: [{ uuid: soId, asset: { url: design } }] })
    .then(result => console.log('Done:', result))
    .catch(err => console.error('Failed:', err));
}
```

Error Types & Handling

| Code | Meaning | Action |
| --- | --- | --- |
| 200 | Success | Process result |
| 400 | Bad request | Fix payload, don't retry |
| 401 | Unauthorized | Check API key |
| 404 | Not found | Check mockup UUID |
| 429 | Rate limited | Retry with backoff |
| 500 | Server error | Retry with backoff |

Don't Retry Everything

Only retry 429 and 5xx errors. Client errors like 400 and 401 won't succeed on retry — fix the underlying issue instead.
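
That policy is small enough to capture in a helper. A sketch matching the table above: retry on 429 and any 5xx, fail fast on everything else.

```javascript
// Should this HTTP status be retried?
// 429 (rate limited) and 5xx (server errors) are transient;
// other 4xx client errors will fail the same way every time.
function isRetryable(status) {
  return status === 429 || (status >= 500 && status < 600);
}
```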

Monitoring & Metrics

Track these metrics to optimize your API usage:

  • Request rate — Stay below your plan limits
  • 429 frequency — A high rate means you need to upgrade or slow down
  • Latency — Average response time (expect under 1 second)
  • Error rate — Non-429 errors indicate code issues
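
A minimal in-memory tracker along these lines could look like the following. This is a sketch only; in production you would likely forward these counters to your monitoring system rather than keep them in process memory.

```javascript
// Counts requests, 429s, other errors, and latency for the metrics above.
class ApiMetrics {
  constructor() {
    this.total = 0;
    this.rateLimited = 0;
    this.errors = 0;
    this.totalLatencyMs = 0;
  }

  // Call once per completed request with the status code and elapsed time.
  record(status, latencyMs) {
    this.total++;
    this.totalLatencyMs += latencyMs;
    if (status === 429) this.rateLimited++;
    else if (status >= 400) this.errors++;
  }

  summary() {
    return {
      requests: this.total,
      rateLimitedPct: this.total ? (this.rateLimited / this.total) * 100 : 0,
      errorPct: this.total ? (this.errors / this.total) * 100 : 0,
      avgLatencyMs: this.total ? this.totalLatencyMs / this.total : 0
    };
  }
}
```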

Best Practice

Use the X-RateLimit-Remaining header proactively. If it's getting low, slow down before hitting the limit — smoother than handling 429s.
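
One way to sketch that: after each response, check the remaining budget, and if it is below a low-water mark, spread what is left across the rest of the window. The threshold and fallback values here are assumptions, not part of the SudoMock API.

```javascript
// Proactive throttle: pause before the limit is hit, instead of reacting to 429s.
// Returns the number of milliseconds waited (0 if no throttling was needed).
async function throttleIfLow(response, lowWaterMark = 5) {
  const remaining = parseInt(
    response.headers.get('X-RateLimit-Remaining') ?? '', 10);
  if (Number.isNaN(remaining) || remaining > lowWaterMark) return 0;

  // Time until the window resets (X-RateLimit-Reset is a Unix timestamp in s);
  // fall back to 1s if the header is missing.
  const reset = parseInt(response.headers.get('X-RateLimit-Reset') ?? '', 10);
  const windowMs = Number.isNaN(reset)
    ? 1000
    : Math.max(reset * 1000 - Date.now(), 0);

  // Spread the remaining budget evenly across the rest of the window.
  const waitMs = windowMs / Math.max(remaining, 1);
  await new Promise(resolve => setTimeout(resolve, waitMs));
  return waitMs;
}
```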


Ready to Try SudoMock?

Start automating your mockups with 500 free API credits.