Ever noticed your web application feeling sluggish, especially when it needs to fetch a lot of data?

One common culprit is too many individual API calls.

Each call incurs network overhead, which can quickly add up and impact both your server and your user’s experience.

Fortunately, there’s a powerful optimization technique called API call batching.

In this tutorial, we’ll dive into what API batching is, why it’s crucial for performance, and how you can implement a robust batching mechanism in your JavaScript applications.

The Problem: Too Many Requests 😩

Imagine your dashboard needs to display a user’s profile, their latest posts, comments on those posts, and various settings. Without batching, you might end up making four separate API calls:

  1. GET /api/user/profile
  2. GET /api/user/posts
  3. GET /api/user/comments
  4. GET /api/user/settings

Each of these is a separate network request. Even if they’re fast, the cumulative effect of establishing connections, sending headers, waiting for responses, and processing each individually can lead to noticeable delays.

This not only burdens the client (your user’s browser) but also puts more strain on your server, which has to handle each request independently.

The Solution: API Call Batching 📦

API call batching is like sending a single, consolidated shopping list to the grocery store instead of sending separate messengers for each item.

Instead of multiple small requests, we gather several requests on the client-side and send them all at once in a single, larger network request to a dedicated batch endpoint on your server.

The server processes all these sub-requests and returns a single response containing the results for each.
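To make this concrete, here's a sketch of what the consolidated request and response might look like. The exact shape is up to your API; the field names below (type, requests, requestId) are illustrative and simply mirror the ones used later in this tutorial.

```javascript
// Illustrative shapes only; your batch API may differ.
const batchRequest = {
    type: 'batch',
    requests: [
        { id: 'user_1', type: 'getUser', data: { id: 1 } },
        { id: 'post_1', type: 'getPost', data: { id: 101 } },
        { id: 'settings_1', type: 'getSettings', data: { id: 10 } }
    ]
};

// The server answers every sub-request in a single response,
// matching each result back to its request via requestId.
const batchResponse = {
    status: 'success',
    data: [
        { requestId: 'user_1', status: 'success', data: { name: 'Ada' } },
        { requestId: 'post_1', status: 'success', data: { title: 'Hello' } },
        { requestId: 'settings_1', status: 'error', message: 'Not found' }
    ]
};
```

Note that a single batch response can mix successes and failures; the client is responsible for routing each result back to the caller that asked for it.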

Benefits of Batching:

  • Reduced Network Overhead: Fewer round trips mean less time spent on connection setup and teardown.
  • Lower Server Load: Your server processes fewer overall requests, saving CPU and memory.
  • Improved Performance: Faster data loading leads to a snappier, more responsive application and a better user experience.

Building Your API Batcher in JavaScript

Let’s break down the JavaScript code that powers our API batching system. We’ll look at a simulateApiCall utility (to act as our mock backend) and the ApiBatcher class itself.

1. simulateApiCall Function: Our Mock API 🎭

function simulateApiCall(request) {
    return new Promise(resolve => {
        // Simulate network delay
        setTimeout(() => {
            console.log(`Simulating API call for:`, request);
            // Simulate a successful response
            resolve({
                requestId: request.id,
                status: 'success',
                data: `Processed data for ${request.type} ID: ${request.data.id}`
            });
        }, Math.random() * 500 + 100); // Random delay between 100ms and 600ms
    });
}

This function isn’t hitting a real server. Its purpose is purely to mimic the asynchronous nature and delay of a real API call.

  • It returns a Promise, just like fetch would.
  • setTimeout introduces a random delay between 100ms and 600ms. This makes our simulation more realistic.
  • After the delay, it calls resolve() on the Promise, passing a simulated response object. This object contains:
    • requestId: Crucial for identifying which original request this response belongs to in a batch.
    • status: Indicates success or failure.
    • data: The mock data returned by the “API”.

2. ApiBatcher Class: The Core Batching Logic

This class is the brain behind our batching operation.

constructor(delayMs, maxBatchSize, batchEndpoint) 🏗️

class ApiBatcher {
    // Note: simulateApiCall is only a stand-in default; it handles a single
    // request, so pass a batch-aware endpoint (as the example usage does).
    constructor(delayMs = 200, maxBatchSize = 5, batchEndpoint = simulateApiCall) {
        this.requestQueue = []; // Stores individual requests
        this.pendingTimeout = null; // Stores the setTimeout ID
        this.delayMs = delayMs;
        this.maxBatchSize = maxBatchSize;
        this.batchEndpoint = batchEndpoint;
        console.log(`ApiBatcher initialized with delay: ${delayMs}ms, max batch size: ${maxBatchSize}`);
    }
    // ... rest of the class
}

The constructor sets up our batcher:

  • requestQueue: An array that holds individual requests waiting to be sent in a batch. Each item in the queue stores the original request object itself, along with the resolve and reject functions of the Promise returned to the caller, so we can fulfill them later.
  • pendingTimeout: Stores the ID of the setTimeout that triggers batch processing. This allows us to clear it if a batch is processed early.
  • delayMs: The maximum time (in milliseconds) the batcher will wait before sending the current queue as a batch, even if maxBatchSize isn’t reached. This prevents requests from being stuck indefinitely.
  • maxBatchSize: The maximum number of individual requests to include in a single batch. If the queue hits this size, the batch is sent immediately.
  • batchEndpoint: The function that actually sends the consolidated batch request. It receives a single object containing the array of sub-requests and should resolve with their results. In our example it’s a simulated server endpoint; in a real application, this would be your fetch call to your actual server’s batch API. (The simulateApiCall default only understands an individual request, so in practice you’ll always supply a batch-aware function, as the example usage below does.)
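For reference, a real batchEndpoint might look something like the sketch below. It assumes a hypothetical POST /api/batch route on your server that accepts the { type, requests } payload and responds with the { status, data: [...] } shape used throughout this tutorial:

```javascript
// Hedged sketch: '/api/batch' is a hypothetical endpoint; adjust the URL,
// headers, and response handling to match your actual server.
async function fetchBatchEndpoint(batchPayload) {
    const response = await fetch('/api/batch', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(batchPayload)
    });
    if (!response.ok) {
        throw new Error(`Batch request failed with status ${response.status}`);
    }
    return response.json(); // expected shape: { status, data: [...] }
}

// Usage: const apiBatcher = new ApiBatcher(200, 5, fetchBatchEndpoint);
```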

addRequest(request): Queuing Individual Calls ➕

addRequest(request) {
    return new Promise((resolve, reject) => {
        this.requestQueue.push({ request, resolve, reject });
        console.log(`Request added to queue. Current queue size: ${this.requestQueue.length}`);

        // If the queue is full, process immediately
        if (this.requestQueue.length >= this.maxBatchSize) {
            this.processBatch();
        } else if (!this.pendingTimeout) {
            // Otherwise, set a timeout to process the batch
            this.pendingTimeout = setTimeout(() => {
                this.processBatch();
            }, this.delayMs);
        }
    });
}

This is how your application code interacts with the batcher. When you call addRequest with an individual request object (e.g., { id: 'user_1', type: 'getUser', data: { id: 1 } }):

  1. It creates a new Promise and stores the request along with its resolve and reject functions in the requestQueue. This allows us to fulfill this specific Promise later, even though it’s part of a batch.
  2. It checks two conditions to decide when to send the batch:
    • Size-based Trigger: If this.requestQueue.length reaches this.maxBatchSize, processBatch() is called immediately. This is a “full-batch” trigger.
    • Time-based Trigger: If the queue isn’t full, but no batch processing is currently pending, it sets a setTimeout to call processBatch() after this.delayMs. This ensures that even a small number of requests get processed after a short wait.
  3. The method returns the new Promise, so your original code can still use .then() and .catch() as if it were making a direct API call.

processBatch(): Sending the Batch and Resolving Promises 🚀

async processBatch() {
    // Clear any pending timeouts to prevent duplicate processing
    if (this.pendingTimeout) {
        clearTimeout(this.pendingTimeout);
        this.pendingTimeout = null;
    }

    if (this.requestQueue.length === 0) {
        console.log('No requests in queue to process.');
        return;
    }

    // Take all requests from the queue for the current batch
    const currentBatch = this.requestQueue;
    this.requestQueue = []; // Reset the queue

    const batchedRequests = currentBatch.map(item => item.request);
    console.log(`Processing batch with ${batchedRequests.length} requests:`, batchedRequests);

    try {
        const batchResponse = await this.batchEndpoint({
            type: 'batch',
            requests: batchedRequests
        });

        // Map responses back to individual promises
        currentBatch.forEach((item, index) => {
            const individualResponse = batchResponse.data.find(res => res.requestId === item.request.id);
            if (individualResponse && individualResponse.status === 'success') {
                item.resolve(individualResponse.data);
            } else {
                item.reject(new Error(`Request ${item.request.id} failed: ${individualResponse?.message || 'Unknown error'}`));
            }
        });
        console.log('Batch processed successfully.');

    } catch (error) {
        console.error('Error processing batch:', error);
        // Reject all promises in the current batch if the batch call itself fails
        currentBatch.forEach(item => {
            item.reject(new Error(`Batch processing failed: ${error.message}`));
        });
    }
}

This is where the magic happens! processBatch is called when a batch is ready to be sent.

  1. It first clears any pendingTimeout to avoid sending the same batch twice.
  2. It captures all requests currently in this.requestQueue into currentBatch and then resets this.requestQueue. This is critical: it ensures new incoming requests start a fresh queue.
  3. It then calls this.batchEndpoint, passing a single object containing all the batchedRequests. This simulates sending the combined request to your server.
  4. Once the batchEndpoint (our mock server) responds, it iterates through the currentBatch of original requests. For each original request, it finds its corresponding result in the batchResponse.data using the requestId.
  5. Finally, it calls the resolve or reject function associated with each individual request’s Promise, fulfilling or rejecting it based on the status from the individualResponse. This means the .then() or .catch() handlers you attached to your addRequest calls will now fire!

Example Usage: Seeing It in Action 🎬

// Initialize the batcher
const apiBatcher = new ApiBatcher(200, 3, async (batchPayload) => {
    // This is the function that simulates your server-side batch API endpoint
    // It receives the `batchPayload` containing an array of individual requests.
    // It should return a promise that resolves with an array of individual responses.
    console.log(`\n--- Server received a batch with ${batchPayload.requests.length} items ---`);
    const results = [];
    for (const req of batchPayload.requests) {
        // Simulate processing each individual request on the server side
        const responseData = await simulateApiCall(req); // Call your actual API logic here for each sub-request
        results.push({
            requestId: req.id,
            status: 'success',
            data: responseData.data // Use data from the simulated sub-call
        });
    }
    console.log(`--- Server finished processing batch, sending response ---`);
    return {
        batchId: 'BATCH_' + Date.now(),
        status: 'success',
        data: results // Array of results for each individual request
    };
});

// Add requests to the batcher
console.log("Adding requests...");

// The first three requests fill the queue; the third one hits maxBatchSize (3) and triggers an immediate batch
apiBatcher.addRequest({ id: 'user_1', type: 'getUser', data: { id: 1 } })
    .then(result => console.log(`User 1 data: ${result}`))
    .catch(error => console.error(`Failed to get user 1: ${error.message}`));

apiBatcher.addRequest({ id: 'post_1', type: 'getPost', data: { id: 101 } })
    .then(result => console.log(`Post 1 data: ${result}`))
    .catch(error => console.error(`Failed to get post 1: ${error.message}`));

apiBatcher.addRequest({ id: 'comment_1', type: 'getComment', data: { id: 501 } })
    .then(result => console.log(`Comment 1 data: ${result}`))
    .catch(error => console.error(`Failed to get comment 1: ${error.message}`));

// This request starts a fresh queue (the first batch was flushed when comment_1 arrived)
apiBatcher.addRequest({ id: 'user_2', type: 'getUser', data: { id: 2 } })
    .then(result => console.log(`User 2 data: ${result}`))
    .catch(error => console.error(`Failed to get user 2: ${error.message}`));

// These requests join user_2's queue; feed_1 makes three, so the second batch is also sent immediately
apiBatcher.addRequest({ id: 'settings_1', type: 'getSettings', data: { id: 10 } })
    .then(result => console.log(`Settings 1 data: ${result}`))
    .catch(error => console.error(`Failed to get settings 1: ${error.message}`));

apiBatcher.addRequest({ id: 'feed_1', type: 'getFeed', data: { id: 20 } })
    .then(result => console.log(`Feed 1 data: ${result}`))
    .catch(error => console.error(`Failed to get feed 1: ${error.message}`));

In the example:

  1. We initialize apiBatcher with a delayMs of 200ms and maxBatchSize of 3.
  2. The batchEndpoint passed to the ApiBatcher is an async function that itself uses our simulateApiCall for each individual request within the batch, mimicking server-side processing.
  3. The first two requests (user_1, post_1) are queued, and a 200ms timer is started.
  4. Adding comment_1 brings the queue up to the maxBatchSize of 3, so the first batch is processed immediately and the pending timer is cleared.
  5. user_2, settings_1, and feed_1 then fill a fresh queue. Adding feed_1 reaches maxBatchSize again, so the second batch is also sent right away; the delayMs (200ms) timer only comes into play when fewer than maxBatchSize requests arrive within that window.

Run this code in your browser’s console to observe the log messages and understand the flow of batching!

You’ll see individual requests being added, but batch processing occurring only when the size limit is hit or the delay expires.

Considerations for Real-World Applications 🌍

While this implementation provides a solid foundation, a production-ready batching system would also consider:

  • Server-Side Support: You need a backend API endpoint specifically designed to receive and process batched requests. The server must be able to parse the incoming array of requests, execute them, and return a consolidated response with individual results.
  • Error Handling Granularity: How to gracefully handle scenarios where some requests within a batch succeed while others fail.
  • Request Dependencies: What if one request in a batch depends on the result of another? Batching might not be suitable for heavily dependent operations, or your server might need sophisticated dependency resolution.
  • Payload Size Limits: Very large batches can still cause issues if the total payload size exceeds server or network limits.
  • Authentication & Authorization: Ensuring each sub-request within a batch is properly authenticated and authorized.
  • Idempotency: Designing requests so that sending the same batch multiple times (e.g., due to retries) doesn’t cause unintended side effects.
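As one example, the payload-size concern can be handled on the client by capping batch size: split an oversized queue into chunks and send each chunk as its own batch request. A minimal, illustrative helper:

```javascript
// Split an array of queued requests into batches of at most chunkSize.
// A batcher could call this before hitting the network and send each
// chunk as a separate batch request.
function chunkRequests(requests, chunkSize) {
    const chunks = [];
    for (let i = 0; i < requests.length; i += chunkSize) {
        chunks.push(requests.slice(i, i + chunkSize));
    }
    return chunks;
}

console.log(chunkRequests([1, 2, 3, 4, 5, 6, 7], 3)); // [[1, 2, 3], [4, 5, 6], [7]]
```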

Conclusion 👋

API call batching is a powerful tool in your web development arsenal for optimizing application performance and reducing server load.

By intelligently grouping requests, you can significantly improve the efficiency of your data interactions. Understanding concepts like delayMs, maxBatchSize, and how to resolve individual Promises within a batch is key to building a robust system.

Feel free to experiment with the delayMs and maxBatchSize parameters in the example to see how they affect batching behavior!

Let me know if you’d like to dive deeper into any specific aspect of this batcher, such as implementing a real fetch call to a mock server, or exploring error handling in more detail!


The field of JavaScript development is constantly evolving, and interviews often go beyond basic syntax to test a candidate’s understanding of core concepts.

Tricky JavaScript questions are a common part of this process, designed to evaluate how a developer thinks about the language’s nuances, particularly its asynchronous nature, lexical scoping, and the “this” keyword. 🤯

These questions aren’t just about getting the right answer; they’re about demonstrating a deep comprehension of fundamental principles like closures, hoisting, and the event loop.

A strong performance on these questions shows an interviewer that you can write predictable and bug-free code, and that you’re prepared to tackle the complex challenges of modern web development.

Here are a few tricky JavaScript questions to get you warmed up:

Tricky Questions on Closures

Question 1:

What’s the output of the following code and why?

for (var i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 1000);
}

Answer:

The output will be 3 printed three times after a one-second delay. This is because var is function-scoped, not block-scoped.

The setTimeout callback function creates a closure, which “remembers” the outer scope’s variable i. By the time the setTimeout callbacks execute, the for loop has already completed, and the value of i has been incremented to 3.

All three closures refer to the same i in the same memory location, which now holds the value 3.
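The usual fix is to replace var with let. Block scoping gives each loop iteration its own binding of i, so each closure captures a different value:

```javascript
for (let i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 1000);
}
// After one second, this logs: 0, 1, 2
```

Before let existed, the same effect was achieved by wrapping the setTimeout call in an IIFE that received i as a parameter, creating a new scope per iteration.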


Tricky Questions on Hoisting

Question 2:

What’s the output of the following code and why?

console.log(a);
var a = 5;

Answer:

The output will be undefined. In JavaScript, variable declarations are hoisted to the top of their scope, but initializations are not. So, the code is interpreted as:

var a;
console.log(a); // a is declared but not yet assigned a value, so it's undefined
a = 5;
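It's worth adding that let and const declarations are also hoisted, but they sit in the "temporal dead zone" until the declaration is evaluated, so the same pattern throws instead of printing undefined:

```javascript
let caught;
try {
  console.log(b); // ReferenceError: Cannot access 'b' before initialization
} catch (e) {
  caught = e instanceof ReferenceError;
}
let b = 5;
console.log(caught); // true
```

This distinction (undefined for var, ReferenceError for let/const) is a common follow-up in interviews.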

Tricky Questions on the Event Loop

Question 3:

What’s the order of the console logs and why?

console.log('Start');

setTimeout(() => {
  console.log('Timeout');
}, 0);

Promise.resolve().then(() => {
  console.log('Promise');
});

console.log('End');

Answer:

The output will be: Start, End, Promise, Timeout.

This is a classic question about the event loop and the task queue vs. microtask queue.

  1. console.log('Start') is a synchronous operation and runs immediately.
  2. setTimeout is an asynchronous Web API. Its callback is placed in the task queue (also known as the macrotask queue). It will run after all other code in the current stack and all microtasks have finished.
  3. Promise.resolve().then(...) is also asynchronous, but its callback is placed in the microtask queue. The event loop prioritizes the microtask queue over the task queue.
  4. console.log('End') is a synchronous operation and runs immediately after Start.
  5. After the synchronous code completes, the event loop checks the microtask queue. It finds the Promise callback and executes it, logging 'Promise'.
  6. Finally, after the microtask queue is empty, the event loop checks the task queue, finds the setTimeout callback, and executes it, logging 'Timeout'.
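A follow-up worth knowing: the event loop drains the entire microtask queue, including microtasks queued by other microtasks, before it touches the task queue. So even a chain of .then callbacks beats a 0ms timeout:

```javascript
const order = [];

setTimeout(() => order.push('task'), 0);

Promise.resolve()
  .then(() => order.push('microtask 1'))
  .then(() => order.push('microtask 2'));

setTimeout(() => console.log(order), 10); // ['microtask 1', 'microtask 2', 'task']
```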

Try solving all of the problems below.

Problems 26-55 are considered advanced or “senior-level” JavaScript interview questions.

  1. Reverse a String
  2. Check if a String is a Palindrome
  3. Remove Duplicates from a String
  4. Find the First Non-Repeating Character
  5. Count the Occurrences of Each Character
  6. Reverse Words in a Sentence
  7. Check if Two Strings are Anagrams
  8. Find the Longest Substring Without Repeating Characters
  9. Convert a String to an Integer (atoi Implementation)
  10. Compress a String (Run-Length Encoding)
  11. Find the Most Frequent Character
  12. Find All Substrings of a Given String
  13. Check if a String is a Rotation of Another String
  14. Remove All White Spaces from a String
  15. Check if a String is a Valid Shuffle of Two Strings
  16. Convert a String to Title Case
  17. Find the Longest Common Prefix
  18. Convert a String to a Character Array
  19. Replace Spaces with %20 (URL Encoding)
  20. Convert a Sentence into an Acronym
  21. Check if a String Contains Only Digits
  22. Find the Number of Words in a String
  23. Remove a Given Character from a String
  24. Find the Shortest Word in a String
  25. Find the Longest Palindromic Substring
  26. Build a custom Promise from scratch.
  27. Create your own Promise.all implementation.
  28. Design a Promise.any that resolves to the first fulfilled promise.
  29. Develop a Promise.race to resolve based on the fastest result.
  30. Implement Promise.allSettled to handle multiple results—fulfilled or rejected.
  31. Add a finally method for promises that always runs, regardless of outcome.
  32. Convert traditional callback-based functions into promises (promisify).
  33. Implement custom methods for Promise.resolve() and Promise.reject().
  34. Execute N async tasks in series—one after another.
  35. Handle N async tasks in parallel and collect results.
  36. Process N async tasks in race to pick the fastest one.
  37. Recreate setTimeout() from scratch.
  38. Rebuild setInterval() for periodic execution.
  39. Design a clearAllTimers function to cancel all timeouts and intervals.
  40. Add auto-retry logic for failed API calls with exponential backoff.
  41. Create a debounce function to limit how often a task is executed.
  42. Implement throttling to control the frequency of function calls.
  43. Group API calls in batches to reduce server load.
  44. Build a cache system to memoize identical API calls for better performance.
  45. Develop a promise chaining system to handle dependent tasks seamlessly.
  46. Write a timeout-safe promise to reject automatically if it takes too long.
  47. Implement a retry mechanism with a maximum attempt limit.
  48. Create a cancelable promise to terminate unwanted async tasks.
  49. Build an event emitter to handle custom events in an asynchronous flow.
  50. Simulate async polling to continuously check server updates.
  51. Design a rate limiter to handle high-frequency API requests.
  52. Implement a job scheduler that runs async tasks at specified intervals.
  53. Develop a parallel execution pool to limit concurrency in async tasks.
  54. Create a lazy loader for async data fetching.
  55. Build an async pipeline to process tasks in stages with dependencies.

Find the largest number in a JavaScript array (one of my LinkedIn posts)

Bonus Questions (React):

Your React app is getting slower when rendering a large list. How will you optimize it?

How would you handle API call retries with exponential backoff in React?

You have a component with heavy computations. How do you prevent unnecessary recalculations?

A child component re-renders even when props don’t change — what’s your debugging approach?

How do you implement role-based authentication in a React app?

You need to share state across deeply nested components. What options do you have?

How do you handle memory leaks in React apps (like setInterval, subscriptions)?

What’s your strategy for error handling at the global React app level?

How would you design a theme switcher (dark/light mode) in React?

Your app needs offline support — how would you implement it?

Useful links below:

Let me & my team build you a money-making website/blog for your business: https://bit.ly/tnrwebsite_service

Get Bluehost hosting for as little as $1.99/month (save 75%): https://bit.ly/3C1fZd2

Join my Patreon for one-on-one coaching and help with your coding: https://www.patreon.com/c/TyronneRatcliff

Buy me a coffee ☕️: https://buymeacoffee.com/tyronneratcliff
