Ever noticed your web application feeling sluggish, especially when it needs to fetch a lot of data?
One common culprit is too many individual API calls.
Each call incurs network overhead, which can quickly add up and impact both your server and your user’s experience.
Fortunately, there’s a powerful optimization technique called API call batching.
In this tutorial, we’ll dive into what API batching is, why it’s crucial for performance, and how you can implement a robust batching mechanism in your JavaScript applications.
The Problem: Too Many Requests 😩
Imagine your dashboard needs to display a user’s profile, their latest posts, comments on those posts, and various settings. Without batching, you might end up making four separate API calls:
- GET /api/user/profile
- GET /api/user/posts
- GET /api/user/comments
- GET /api/user/settings
Each of these is a separate network request. Even if they’re fast, the cumulative effect of establishing connections, sending headers, waiting for responses, and processing each individually can lead to noticeable delays.
This not only burdens the client (your user’s browser) but also puts more strain on your server, which has to handle each request independently.
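To make the problem concrete, here's a sketch of what that unbatched dashboard load might look like. The `loadDashboard` name and the `fetchImpl` parameter are illustrative (the parameter just makes the function easy to exercise with a stub), not part of any real API:

```javascript
// Four separate round trips, one per piece of dashboard data
async function loadDashboard(fetchImpl = fetch) {
  const [profile, posts, comments, settings] = await Promise.all([
    fetchImpl('/api/user/profile').then(r => r.json()),
    fetchImpl('/api/user/posts').then(r => r.json()),
    fetchImpl('/api/user/comments').then(r => r.json()),
    fetchImpl('/api/user/settings').then(r => r.json()),
  ]);
  return { profile, posts, comments, settings };
}
```

Even with `Promise.all` running the calls concurrently, this still issues four separate HTTP requests, each paying its own connection and header overhead.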
The Solution: API Call Batching 📦
API call batching is like sending a single, consolidated shopping list to the grocery store instead of sending separate messengers for each item.
Instead of multiple small requests, we gather several requests on the client-side and send them all at once in a single, larger network request to a dedicated batch endpoint on your server.
The server processes all these sub-requests and returns a single response containing the results for each.
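Here's a sketch of what the consolidated request and response might look like on the wire. The field names (`type`, `requests`, `requestId`, and so on) follow the convention used throughout this tutorial; they're not a standard, and your backend may use different shapes:

```javascript
// One request carrying several sub-requests...
const batchRequest = {
  type: 'batch',
  requests: [
    { id: 'user_1', type: 'getUser', data: { id: 1 } },
    { id: 'post_1', type: 'getPost', data: { id: 101 } },
  ],
};

// ...and one response carrying a result per sub-request,
// matched back to its originator by requestId
const batchResponse = {
  status: 'success',
  data: [
    { requestId: 'user_1', status: 'success', data: { name: 'Ada' } },
    { requestId: 'post_1', status: 'success', data: { title: 'Hello' } },
  ],
};
```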
Benefits of Batching:
- Reduced Network Overhead: Fewer round trips mean less time spent on connection setup and teardown.
- Lower Server Load: Your server processes fewer overall requests, saving CPU and memory.
- Improved Performance: Faster data loading leads to a snappier, more responsive application and a better user experience.
Building Your API Batcher in JavaScript
Let’s break down the JavaScript code that powers our API batching system. We’ll look at a simulateApiCall utility (to act as our mock backend) and the ApiBatcher class itself.
1. simulateApiCall Function: Our Mock API 🎭
function simulateApiCall(request) {
  return new Promise(resolve => {
    // Simulate network delay
    setTimeout(() => {
      console.log(`Simulating API call for:`, request);
      // Simulate a successful response
      resolve({
        requestId: request.id,
        status: 'success',
        data: `Processed data for ${request.type} ID: ${request.data.id}`
      });
    }, Math.random() * 500 + 100); // Random delay between 100ms and 600ms
  });
}
This function isn’t hitting a real server. Its purpose is purely to mimic the asynchronous nature and delay of a real API call.
- It returns a `Promise`, just like `fetch` would.
- `setTimeout` introduces a random delay between 100ms and 600ms, making the simulation more realistic.
- After the delay, it calls `resolve()` on the Promise, passing a simulated response object containing:
  - `requestId`: crucial for identifying which original request this response belongs to in a batch.
  - `status`: indicates success or failure.
  - `data`: the mock data returned by the "API".
2. ApiBatcher Class: The Core Batching Logic
This class is the brain behind our batching operation.
constructor(delayMs, maxBatchSize, batchEndpoint) 🏗️
class ApiBatcher {
  constructor(delayMs = 200, maxBatchSize = 5, batchEndpoint = simulateApiCall) {
    this.requestQueue = []; // Stores individual requests
    this.pendingTimeout = null; // Stores the setTimeout ID
    this.delayMs = delayMs;
    this.maxBatchSize = maxBatchSize;
    this.batchEndpoint = batchEndpoint;
    console.log(`ApiBatcher initialized with delay: ${delayMs}ms, max batch size: ${maxBatchSize}`);
  }
  // ... rest of the class
}
The constructor sets up our batcher:
- `requestQueue`: an array holding individual requests waiting to be sent in a batch. Each queue item stores the original `request` object itself, along with the `resolve` and `reject` functions of the Promise returned to the caller, so we can fulfill them later.
- `pendingTimeout`: stores the ID of the `setTimeout` that triggers batch processing. This allows us to clear it if a batch is processed early.
- `delayMs`: the maximum time (in milliseconds) the batcher will wait before sending the current queue as a batch, even if `maxBatchSize` isn't reached. This prevents requests from being stuck indefinitely.
- `maxBatchSize`: the maximum number of individual requests to include in a single batch. If the queue hits this size, the batch is sent immediately.
- `batchEndpoint`: the function that actually sends the consolidated batch request. In our example, it's a simulated server endpoint that takes an array of requests and returns an array of responses. In a real application, this would be your `fetch` call to your actual server's batch API.
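For that real-application case, a `fetch`-based batch endpoint could be sketched like this. The `/api/batch` URL, the `createFetchBatchEndpoint` name, and the payload shape are assumptions mirroring this tutorial's conventions; adapt them to your backend (the `fetchImpl` parameter just makes the function easy to test with a stub):

```javascript
// Returns a batchEndpoint function suitable for passing to the ApiBatcher
function createFetchBatchEndpoint(url, fetchImpl = fetch) {
  return async function batchEndpoint(batchPayload) {
    const response = await fetchImpl(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(batchPayload), // the whole batch goes in one request
    });
    if (!response.ok) {
      throw new Error(`Batch request failed with status ${response.status}`);
    }
    return response.json();
  };
}

// Hypothetical usage:
// const batcher = new ApiBatcher(200, 5, createFetchBatchEndpoint('/api/batch'));
```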
addRequest(request): Queuing Individual Calls ➕
addRequest(request) {
  return new Promise((resolve, reject) => {
    this.requestQueue.push({ request, resolve, reject });
    console.log(`Request added to queue. Current queue size: ${this.requestQueue.length}`);
    // If the queue is full, process immediately
    if (this.requestQueue.length >= this.maxBatchSize) {
      this.processBatch();
    } else if (!this.pendingTimeout) {
      // Otherwise, set a timeout to process the batch
      this.pendingTimeout = setTimeout(() => {
        this.processBatch();
      }, this.delayMs);
    }
  });
}
This is how your application code interacts with the batcher. When you call addRequest with an individual request object (e.g., { id: 'user_1', type: 'getUser', data: { id: 1 } }):
- It creates a new `Promise` and stores the `request` along with its `resolve` and `reject` functions in the `requestQueue`. This allows us to fulfill this specific Promise later, even though it's part of a batch.
- It checks two conditions to decide when to send the batch:
  - Size-based trigger: if `this.requestQueue.length` reaches `this.maxBatchSize`, `processBatch()` is called immediately. This is a "full-batch" trigger.
  - Time-based trigger: if the queue isn't full and no batch processing is currently pending, it sets a `setTimeout` to call `processBatch()` after `this.delayMs`. This ensures that even a small number of requests gets processed after a short wait.
- The method returns the new Promise, so your original code can still use `.then()` and `.catch()` as if it were making a direct API call.
processBatch(): Sending the Batch and Resolving Promises 🚀
async processBatch() {
  // Clear any pending timeouts to prevent duplicate processing
  if (this.pendingTimeout) {
    clearTimeout(this.pendingTimeout);
    this.pendingTimeout = null;
  }
  if (this.requestQueue.length === 0) {
    console.log('No requests in queue to process.');
    return;
  }
  // Take all requests from the queue for the current batch
  const currentBatch = this.requestQueue;
  this.requestQueue = []; // Reset the queue
  const batchedRequests = currentBatch.map(item => item.request);
  console.log(`Processing batch with ${batchedRequests.length} requests:`, batchedRequests);
  try {
    const batchResponse = await this.batchEndpoint({
      type: 'batch',
      requests: batchedRequests
    });
    // Map responses back to individual promises
    currentBatch.forEach(item => {
      const individualResponse = batchResponse.data.find(res => res.requestId === item.request.id);
      if (individualResponse && individualResponse.status === 'success') {
        item.resolve(individualResponse.data);
      } else {
        item.reject(new Error(`Request ${item.request.id} failed: ${individualResponse?.message || 'Unknown error'}`));
      }
    });
    console.log('Batch processed successfully.');
  } catch (error) {
    console.error('Error processing batch:', error);
    // Reject all promises in the current batch if the batch call itself fails
    currentBatch.forEach(item => {
      item.reject(new Error(`Batch processing failed: ${error.message}`));
    });
  }
}
This is where the magic happens! processBatch is called when a batch is ready to be sent.
- It first clears any `pendingTimeout` to avoid sending the same batch twice.
- It captures all requests currently in `this.requestQueue` into `currentBatch` and then resets `this.requestQueue`. This is critical: it ensures new incoming requests start a fresh queue.
- It then calls `this.batchEndpoint`, passing a single object containing all the `batchedRequests`. This simulates sending the combined request to your server.
- Once the `batchEndpoint` (our mock server) responds, it iterates through the `currentBatch` of original requests, finding each one's corresponding result in `batchResponse.data` via the `requestId`.
- Finally, it calls the `resolve` or `reject` function associated with each individual request's Promise, fulfilling or rejecting it based on the `status` from the `individualResponse`. This means the `.then()` or `.catch()` handlers you attached to your `addRequest` calls will now fire!
Example Usage: Seeing It in Action 🎬
// Initialize the batcher
const apiBatcher = new ApiBatcher(200, 3, async (batchPayload) => {
  // This function simulates your server-side batch API endpoint.
  // It receives the `batchPayload` containing an array of individual requests
  // and returns a promise that resolves with an array of individual responses.
  console.log(`\n--- Server received a batch with ${batchPayload.requests.length} items ---`);
  const results = [];
  for (const req of batchPayload.requests) {
    // Simulate processing each individual request on the server side
    const responseData = await simulateApiCall(req); // Call your actual API logic here for each sub-request
    results.push({
      requestId: req.id,
      status: 'success',
      data: responseData.data // Use data from the simulated sub-call
    });
  }
  console.log(`--- Server finished processing batch, sending response ---`);
  return {
    batchId: 'BATCH_' + Date.now(),
    status: 'success',
    data: results // Array of results for each individual request
  };
});

// Add requests to the batcher
console.log("Adding requests...");

// These first two requests are queued
apiBatcher.addRequest({ id: 'user_1', type: 'getUser', data: { id: 1 } })
  .then(result => console.log(`User 1 data: ${result}`))
  .catch(error => console.error(`Failed to get user 1: ${error.message}`));

apiBatcher.addRequest({ id: 'post_1', type: 'getPost', data: { id: 101 } })
  .then(result => console.log(`Post 1 data: ${result}`))
  .catch(error => console.error(`Failed to get post 1: ${error.message}`));

// This third request fills the queue (maxBatchSize is 3), so the first batch is sent immediately
apiBatcher.addRequest({ id: 'comment_1', type: 'getComment', data: { id: 501 } })
  .then(result => console.log(`Comment 1 data: ${result}`))
  .catch(error => console.error(`Failed to get comment 1: ${error.message}`));

// These requests start a fresh queue; feed_1 fills it to 3 again, triggering a second batch
apiBatcher.addRequest({ id: 'user_2', type: 'getUser', data: { id: 2 } })
  .then(result => console.log(`User 2 data: ${result}`))
  .catch(error => console.error(`Failed to get user 2: ${error.message}`));

apiBatcher.addRequest({ id: 'settings_1', type: 'getSettings', data: { id: 10 } })
  .then(result => console.log(`Settings 1 data: ${result}`))
  .catch(error => console.error(`Failed to get settings 1: ${error.message}`));

apiBatcher.addRequest({ id: 'feed_1', type: 'getFeed', data: { id: 20 } })
  .then(result => console.log(`Feed 1 data: ${result}`))
  .catch(error => console.error(`Failed to get feed 1: ${error.message}`));
In the example:
- We initialize `apiBatcher` with a `delayMs` of 200ms and a `maxBatchSize` of 3.
- The `batchEndpoint` passed to the `ApiBatcher` is an `async` function that itself uses our `simulateApiCall` for each individual request within the batch, mimicking server-side processing.
- The first two requests (`user_1`, `post_1`) are queued. Adding `comment_1` brings the queue to the `maxBatchSize` of 3, which causes the first batch to be processed right away.
- `user_2` then starts a fresh queue. Once `settings_1` and `feed_1` join it, the queue again reaches 3 and a second batch is sent immediately. If fewer than 3 requests had arrived, the time-based trigger would have sent them after `delayMs` (200ms) instead.
Run this code in your browser’s console to observe the log messages and understand the flow of batching!
You’ll see individual requests being added, but batch processing occurring only when the size limit is hit or the delay expires.
Considerations for Real-World Applications 🌍
While this implementation provides a solid foundation, a production-ready batching system would also consider:
- Server-Side Support: You need a backend API endpoint specifically designed to receive and process batched requests. The server must be able to parse the incoming array of requests, execute them, and return a consolidated response with individual results.
- Error Handling Granularity: How to gracefully handle scenarios where some requests within a batch succeed while others fail.
- Request Dependencies: What if one request in a batch depends on the result of another? Batching might not be suitable for heavily dependent operations, or your server might need sophisticated dependency resolution.
- Payload Size Limits: Very large batches can still cause issues if the total payload size exceeds server or network limits.
- Authentication & Authorization: Ensuring each sub-request within a batch is properly authenticated and authorized.
- Idempotency: Designing requests so that sending the same batch multiple times (e.g., due to retries) doesn’t cause unintended side effects.
Conclusion 👋
API call batching is a powerful tool in your web development arsenal for optimizing application performance and reducing server load.
By intelligently grouping requests, you can significantly improve the efficiency of your data interactions. Understanding concepts like delayMs, maxBatchSize, and how to resolve individual Promises within a batch is key to building a robust system.
Feel free to experiment with the delayMs and maxBatchSize parameters in the example to see how they affect batching behavior!
Let me know if you’d like to dive deeper into any specific aspect of this batcher, such as implementing a real fetch call to a mock server, or exploring error handling in more detail!