Nov 5, 2025 · 11 min read · SudoMock Team

API Integration Best Practices

Build robust API integrations. Implement parallel processing, retry logic, and error handling.

Building robust API integrations means handling limits and errors gracefully. This guide covers best practices for parallel processing, retry logic, and error handling — so your mockup pipeline never breaks.

Understanding Parallel Limits

SudoMock uses parallel limits to ensure fair usage and consistent performance. These limits determine how many requests you can have in flight at once:

Plan | Parallel Renders | Parallel Uploads
Free | 1 | 1
Starter | 3 | 2
Pro | 10 | 5
Scale | 25 | 10

Response Headers

Every API response includes parallel limit headers:
  • x-concurrent-limit — Your parallel limit
  • x-concurrent-remaining — Available slots
  • Retry-After — Seconds to wait (only on 429 responses)
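
You can read these headers from any response before deciding whether to send more work. Here's a minimal sketch (logParallelHeadroom is just an illustrative helper name):

Reading parallel limit headers
javascript
// Read parallel-limit headroom from any API response
function logParallelHeadroom(response) {
  const limit = Number(response.headers.get('x-concurrent-limit'));
  const remaining = Number(response.headers.get('x-concurrent-remaining'));
  console.log(`Parallel slots: ${remaining} of ${limit} available`);
  return { limit, remaining };
}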

Handling 429 Errors

When you exceed parallel limits, the API returns 429 Parallel Limit Reached. Here's how to handle it properly:

Basic retry with backoff
javascript
async function renderWithRetry(payload, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch('https://api.sudomock.com/api/v1/renders', {
      method: 'POST',
      headers: {
        'X-API-KEY': API_KEY,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(payload)
    });

    if (response.ok) {
      return response.json();
    }

    if (response.status === 429) {
      // Get retry delay from header, or use short backoff
      const retryAfter = response.headers.get('Retry-After');
      const delay = retryAfter
        ? parseInt(retryAfter, 10) * 1000
        : Math.pow(2, attempt) * 1000;

      console.log(`Parallel limit reached. Retrying in ${delay}ms...`);
      await sleep(delay);
      continue;
    }

    // Other errors - don't retry
    throw new Error(`API error: ${response.status}`);
  }

  throw new Error('Max retries exceeded');
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
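
Here's a hypothetical call, using the same payload shape as the queue example later in this guide (mockup_uuid, soId, and designUrl stand in for your own values):

Calling renderWithRetry
javascript
// mockup_uuid, soId, and designUrl are assumed to be defined elsewhere
renderWithRetry({
  mockup_uuid,
  smart_objects: [{ uuid: soId, asset: { url: designUrl } }]
})
  .then(result => console.log('Render complete:', result))
  .catch(err => console.error('Render failed:', err));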

Exponential Backoff

For retry logic, exponential backoff prevents overwhelming the API. The delay doubles with each retry attempt:

  • Retry 1: wait 1s
  • Retry 2: wait 2s
  • Retry 3: wait 4s
  • Retry 4: wait 8s

Exponential backoff with jitter
javascript
function getBackoffDelay(attempt, baseDelay = 1000) {
  // Exponential: 1s, 2s, 4s, 8s...
  const exponentialDelay = Math.pow(2, attempt) * baseDelay;

  // Add jitter (0-25% random) to prevent thundering herd
  const jitter = exponentialDelay * Math.random() * 0.25;

  // Cap at 30 seconds
  return Math.min(exponentialDelay + jitter, 30000);
}

Why Jitter?

When multiple clients hit limits simultaneously, they all retry at the same time — causing another limit breach. Adding random jitter spreads out retries.

Parallel Processing Strategy

For high-volume workloads, process requests in parallel up to your plan's limit:

Parallel worker pool
javascript
class ParallelQueue {
  constructor(parallelLimit) {
    this.queue = [];
    this.activeCount = 0;
    this.parallelLimit = parallelLimit;
    this.results = [];
  }

  async add(payload) {
    return new Promise((resolve, reject) => {
      this.queue.push({ payload, resolve, reject });
      this.process();
    });
  }

  async process() {
    while (this.queue.length > 0 && this.activeCount < this.parallelLimit) {
      const { payload, resolve, reject } = this.queue.shift();
      this.activeCount++;

      renderWithRetry(payload)
        .then(result => {
          resolve(result);
          this.activeCount--;
          this.process(); // Process next in queue
        })
        .catch(error => {
          reject(error);
          this.activeCount--;
          this.process();
        });
    }
  }
}

// Usage: 10 parallel renders on Pro plan
const queue = new ParallelQueue(10);

// Add 100 renders - they'll process 10 at a time
for (const design of designs) {
  queue.add({ mockup_uuid, smart_objects: [{ uuid: soId, asset: { url: design } }] })
    .then(result => console.log('Done:', result))
    .catch(err => console.error('Failed:', err));
}

Error Types & Handling

Code | Meaning | Action
200 | Success | Process result
400 | Bad request | Fix payload, don't retry
401 | Unauthorized | Check API key
404 | Not found | Check mockup UUID
429 | Parallel limit reached | Retry with backoff
500 | Server error | Retry with backoff

Don't Retry Everything

Only retry 429 and 5xx errors. Client errors like 400 and 401 won't succeed on retry — fix the underlying issue instead.
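
One way to encode that rule is a small helper that gates the retry loop (a sketch; the status codes mirror the table above, and isRetryable is an illustrative name):

Retry only transient errors
javascript
// 429 (parallel limit) and 5xx (server errors) are worth retrying;
// 4xx client errors need a payload or code fix instead
function isRetryable(status) {
  return status === 429 || status >= 500;
}

// Inside renderWithRetry, before retrying:
// if (!isRetryable(response.status)) {
//   throw new Error(`API error: ${response.status}`);
// }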

Monitoring & Metrics

Track these metrics to optimize your API usage:

  • Active requests — Stay below your parallel limit
  • 429 frequency — High frequency means you need to upgrade or queue better
  • Latency — Average response time (expect under 1 second)
  • Error rate — Non-429 errors indicate code issues
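
A simple in-process counter is enough to cover all four (a sketch with illustrative names; call recordRequest wherever you handle responses):

Tracking request metrics
javascript
// Minimal in-process metrics; export or reset them on whatever schedule suits you
const metrics = { active: 0, total: 0, rateLimited: 0, errors: 0, totalLatencyMs: 0 };

function recordRequest(status, latencyMs) {
  metrics.total++;
  metrics.totalLatencyMs += latencyMs;
  if (status === 429) metrics.rateLimited++;
  else if (status >= 400) metrics.errors++;
}

function reportMetrics() {
  const pct = n => metrics.total ? ((n / metrics.total) * 100).toFixed(1) + '%' : '0%';
  console.log({
    active: metrics.active,
    rateLimited: pct(metrics.rateLimited),
    avgLatencyMs: metrics.total ? Math.round(metrics.totalLatencyMs / metrics.total) : 0,
    errorRate: pct(metrics.errors)
  });
}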

Best Practice

Use the x-concurrent-remaining header proactively. If it's getting low, wait for active requests to complete before sending more.
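
For example, you could check the header after each response and briefly pause new submissions when headroom runs low (a sketch; the threshold and pause length are arbitrary, and it reuses sleep and API_KEY from the earlier examples):

Proactive throttling on x-concurrent-remaining
javascript
async function submitWithHeadroomCheck(payload) {
  const response = await fetch('https://api.sudomock.com/api/v1/renders', {
    method: 'POST',
    headers: {
      'X-API-KEY': API_KEY,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(payload)
  });

  const remaining = Number(response.headers.get('x-concurrent-remaining'));
  if (remaining <= 1) {
    // Nearly out of slots: wait before queuing more work
    await sleep(1000);
  }
  return response;
}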

Ready to Try SudoMock?

Start automating your mockups with 500 free API credits.