SudoMock
Use Case

Bulk Rendering

Generate thousands of mockups efficiently. Learn parallel processing, limits, and optimization strategies.

10K+
Daily capacity
<1s
Per render
25
Parallel Renders (Scale)
99.9%
Uptime

Parallel Processing

SudoMock is built for parallel requests. Send multiple renders simultaneously to maximize throughput:

Parallel Processing with Worker Pool
// Process 100 designs with 10 parallel requests
async function bulkRender(designs, parallelLimit = 10) {
  const results = [];
  const queue = [...designs];

  async function worker() {
    while (queue.length > 0) {
      const design = queue.shift();
      try {
        const result = await renderMockup(design);
        results.push({ success: true, design, result });
      } catch (error) {
        results.push({ success: false, design, error: error.message });
      }
    }
  }

  // Start parallel workers
  const workers = Array(parallelLimit).fill(null).map(() => worker());
  await Promise.all(workers);

  return results;
}

async function renderMockup(design) {
  const response = await fetch("https://api.sudomock.com/api/v1/renders", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-API-KEY": process.env.SUDOMOCK_API_KEY
    },
    body: JSON.stringify({
      mockup_uuid: design.mockup_uuid,
      smart_objects: [{
        uuid: design.smart_object_uuid,
        asset: { url: design.design_url, fit: "cover" }
      }],
      export_options: {
        image_format: "webp",
        image_size: 1920,
        quality: 95
      }
    })
  });

  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }

  return response.json();
}

// Usage
const designs = [
  { mockup_uuid: "...", smart_object_uuid: "...", design_url: "https://..." },
  // ... 99 more designs
];

const results = await bulkRender(designs, 10);
console.log(`Success: ${results.filter(r => r.success).length}`);
console.log(`Failed: ${results.filter(r => !r.success).length}`);

Understanding Parallel Limits

Parallel limits vary by plan. Stay within your plan's limit for consistent performance:

Plan      Parallel Renders   Daily Capacity*
Free      1                  ~7,200
Starter   3                  ~21,600
Pro       10                 ~72,000
Scale     25                 ~180,000

*Daily capacity assumes continuous processing with ~1s per render. Real throughput depends on request complexity.
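The plan limits above can be turned into a small helper that sizes your worker pool, so a bulk job never asks for more parallel slots than the plan allows. This is a sketch built only from the table rows; the plan-name strings are assumptions about how you track your own plan, not API values:

```javascript
// Parallel render limits per plan (from the table above)
const PLAN_LIMITS = { free: 1, starter: 3, pro: 10, scale: 25 };

// Pick a worker-pool size that matches the plan's parallel limit
function parallelLimitFor(plan) {
  const limit = PLAN_LIMITS[plan.toLowerCase()];
  if (!limit) throw new Error(`Unknown plan: ${plan}`);
  return limit;
}
```

Pass the result as the `parallelLimit` argument to `bulkRender`, e.g. `bulkRender(designs, parallelLimitFor("pro"))`.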

Parallel Limit Headers

Check response headers for parallel limit status:
  • x-concurrent-limit: Your parallel render limit
  • x-concurrent-remaining: Available parallel slots
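A minimal sketch for reading these two headers. `checkConcurrency` is a hypothetical helper name, not part of the SudoMock API; it accepts anything with a `get(name)` method, such as the `Headers` object on a fetch response:

```javascript
// Read parallel-limit status from a render response's headers.
// Works with fetch's Headers object or anything with .get(name).
function checkConcurrency(headers) {
  const limit = parseInt(headers.get("x-concurrent-limit") || "0", 10);
  const remaining = parseInt(headers.get("x-concurrent-remaining") || "0", 10);
  return { limit, remaining, saturated: remaining === 0 };
}

// Usage with a fetch response:
// const response = await fetch("https://api.sudomock.com/api/v1/renders", ...);
// const { limit, remaining } = checkConcurrency(response.headers);
```

Checking `remaining` before dispatching the next render lets a worker pool back off before it ever hits a 429.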

Robust Error Handling

Retry with Exponential Backoff
// Retry logic with exponential backoff
async function renderWithRetry(design, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch("https://api.sudomock.com/api/v1/renders", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "X-API-KEY": process.env.SUDOMOCK_API_KEY
        },
        body: JSON.stringify(design)
      });

      // Handle rate or concurrent limit exceeded
      if (response.status === 429) {
        const retryAfter = parseInt(response.headers.get("Retry-After") || "5", 10);
        console.log(`Limit exceeded. Waiting ${retryAfter}s...`);
        await delay(retryAfter * 1000);
        continue;
      }

      // Handle server errors with retry
      if (response.status >= 500) {
        const backoff = Math.pow(2, attempt) * 1000; // 2s, 4s, 8s
        console.log(`Server error. Retrying in ${backoff}ms...`);
        await delay(backoff);
        continue;
      }

      // Client errors shouldn't be retried
      if (response.status >= 400) {
        const error = await response.json();
        throw new Error(`Client error: ${error.detail}`);
      }

      return await response.json();

    } catch (error) {
      if (attempt === maxRetries) {
        throw error;
      }
      console.log(`Attempt ${attempt} failed: ${error.message}`);
    }
  }

  // Every attempt ended in a retryable status (e.g. repeated 429s)
  throw new Error(`Render failed after ${maxRetries} attempts`);
}

function delay(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

Batch Processing Pattern

For very large datasets, process in batches with progress tracking:

Batch Processing with Progress
// Batch processor with progress tracking
async function processBatches(designs, batchSize = 100) {
  const totalBatches = Math.ceil(designs.length / batchSize);
  const results = [];

  for (let i = 0; i < totalBatches; i++) {
    const batch = designs.slice(i * batchSize, (i + 1) * batchSize);

    console.log(`Processing batch ${i + 1}/${totalBatches} (${batch.length} items)`);

    const batchResults = await bulkRender(batch, 10);
    results.push(...batchResults);

    // Log progress
    const successful = results.filter(r => r.success).length;
    const failed = results.filter(r => !r.success).length;
    console.log(`Progress: ${successful} success, ${failed} failed`);

    // Optional: Add delay between batches
    if (i < totalBatches - 1) {
      await delay(1000); // 1 second between batches
    }
  }

  return results;
}

// Process 10,000 designs in batches of 100
const allDesigns = await loadDesignsFromDatabase();
const results = await processBatches(allDesigns, 100);

// Save results
await saveResultsToDatabase(results);

// Generate report
const report = {
  total: results.length,
  successful: results.filter(r => r.success).length,
  failed: results.filter(r => !r.success).length,
  errors: results.filter(r => !r.success).map(r => ({
    design: r.design.id,
    error: r.error
  }))
};

console.log("Final Report:", report);

Optimization Tips

Pre-upload Your PSDs

Upload all mockup templates once and cache the UUIDs. Don't re-upload for each batch.
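One way to do this is a small in-memory cache keyed by template path, so a UUID is looked up once and reused across every batch. `uploadPsd` below is a placeholder for your own upload call, not a SudoMock API method; only the caching pattern is the point:

```javascript
// Cache template UUIDs so each template is uploaded exactly once.
const mockupCache = new Map();

// uploadPsd is a hypothetical function: (templatePath) => Promise<uuid>
async function getMockupUuid(templatePath, uploadPsd) {
  if (mockupCache.has(templatePath)) {
    return mockupCache.get(templatePath); // reuse cached UUID
  }
  const uuid = await uploadPsd(templatePath); // upload once, then cache
  mockupCache.set(templatePath, uuid);
  return uuid;
}
```

For long-running pipelines, persist the cache (database or file) so UUIDs survive restarts as well.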

Use WebP Format

WebP files are ~30% smaller, which means faster response times and lower storage costs.

Optimize Source Images

Host your design images on a fast CDN. Slow source URLs slow down rendering.

Right-Size Your Output

Don't request 4000px images if you only need 1920px. Smaller = faster.

Pro Tip

Run bulk jobs during off-peak hours (late night/early morning) for fastest response times.

Need Higher Limits?

Enterprise plans with custom parallel limits are available.