
Credit: https://bit.ly/40AWbX5

When building modern web applications, interacting with APIs is a daily task.

Often, you might find yourself needing to fetch data from numerous endpoints.

While simply firing off all requests at once with Promise.all() might seem like a quick solution, it can lead to several issues:

  • API Rate Limits: Many APIs restrict the number of requests you can make within a certain timeframe. Hitting these limits can result in errors and temporary bans.
  • Resource Consumption: Too many concurrent network requests can overwhelm the browser or server, leading to sluggish performance, excessive memory use, or even crashes.
  • Network Congestion: Flooding the network with requests can cause delays and reduce overall efficiency.

This is where batched fetching comes in handy.

By processing your API calls in controlled groups, you can manage resources, respect rate limits, and ensure a smoother user experience.

In this blog post, we’ll dissect a powerful JavaScript async function designed for this very purpose: fetchInBatches.

The fetchInBatches Function: A Closer Look

Here’s the JavaScript function we’ll be exploring:

async function fetchInBatches(urls, batchSize = 5) {
  // 1. Initialize an array to store all results
  const results = [];

  // 2. Loop through the URLs array in chunks (batches)
  for (let i = 0; i < urls.length; i += batchSize) {
    // 3. Extract the current batch of URLs
    const batch = urls.slice(i, i + batchSize);

    // 4. Create an array of Promises for fetching each URL in the current batch
    const batchPromises = batch.map(url =>
      fetch(url).then(res => {
        // Check if the response is OK (status in the 200-299 range)
        if (!res.ok) {
          // If not OK, throw an error for better error handling later
          throw new Error(`HTTP error! status: ${res.status} for URL: ${url}`);
        }
        return res.json(); // Parse the response body as JSON
      })
    );

    // 5. Wait for all Promises in the current batch to resolve
    try {
      const batchResults = await Promise.all(batchPromises);
      // 6. Add the results of the current batch to the main results array
      results.push(...batchResults);
    } catch (error) {
      // 7. Handle any errors that occurred within the current batch
      console.error("Error fetching a batch:", error);
      // Depending on your application's needs, you might:
      // - Log the error and continue
      // - Push a placeholder/error object into `results` for the failed requests
      // - Implement retry logic
      // - Stop further processing (e.g., `break;`)
    }
  }

  // 8. Return all collected results
  return results;
}

Step-by-Step Breakdown

Let’s break down each numbered section of the code to understand its role.

1. Function Signature and Initialization

async function fetchInBatches(urls, batchSize = 5) {
  const results = [];
  // ...
}

  • async function fetchInBatches(urls, batchSize = 5):
    • The async keyword is essential. It tells JavaScript that this function will perform asynchronous operations and allows us to use the await keyword inside it. The function itself will implicitly return a Promise.
    • urls: This parameter expects an array of strings, where each string is a URL to an API endpoint.
    • batchSize = 5: This is a default parameter. If you call fetchInBatches without providing a batchSize, it will automatically use 5. This value determines how many fetch requests will run concurrently in each batch.
  • const results = [];: An empty array is initialized to store the data returned from all the successful API calls across all batches.

2. Iterating Through URLs in Batches

for (let i = 0; i < urls.length; i += batchSize) {
  // ...
}

  • This for loop is the core of the batching mechanism.
  • let i = 0: The loop starts at the beginning of the urls array.
  • i < urls.length: The loop continues as long as i is within the bounds of the urls array.
  • i += batchSize: After each iteration (i.e., after processing one batch), the i variable is incremented by batchSize. This ensures that the loop jumps to the start of the next batch of URLs.

3. Extracting the Current Batch

const batch = urls.slice(i, i + batchSize);

  • Inside the loop, urls.slice(i, i + batchSize) is used to extract a subset of URLs for the current batch.
  • The slice() method creates a new array containing elements from the i index (inclusive) up to, but not including, i + batchSize.
  • If i + batchSize goes beyond the end of the urls array, slice() gracefully handles it by just including the remaining elements.
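To see how the loop and slice() work together, here’s a quick standalone illustration (the data is hypothetical, and no network is involved):

```javascript
// Chunking a 7-element array with batchSize = 3
const items = ['a', 'b', 'c', 'd', 'e', 'f', 'g'];
const batchSize = 3;
const batches = [];
for (let i = 0; i < items.length; i += batchSize) {
  // slice() clamps to the array's end, so the last batch may be shorter
  batches.push(items.slice(i, i + batchSize));
}
console.log(batches);
// [ [ 'a', 'b', 'c' ], [ 'd', 'e', 'f' ], [ 'g' ] ]
```

Note how the final batch simply contains the one leftover element — no special-case code is needed.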

4. Creating Promises for the Batch

const batchPromises = batch.map(url =>
  fetch(url).then(res => {
    if (!res.ok) {
      throw new Error(`HTTP error! status: ${res.status} for URL: ${url}`);
    }
    return res.json();
  })
);

  • batch.map(...): The map() method iterates over each url in the batch array and transforms it into a Promise.
  • fetch(url): This initiates the actual network request for each URL. It returns a Promise that resolves to a Response object.
  • .then(res => { ... }): This chain handles the Response object once the fetch Promise resolves.
    • if (!res.ok): It’s crucial to check res.ok. The fetch API’s Promise only rejects for network errors (e.g., no internet connection). It does not reject for HTTP error status codes (like 404 Not Found or 500 Internal Server Error). res.ok is a boolean that is true for successful HTTP status codes (200-299) and false otherwise. If res.ok is false, we explicitly throw new Error() to ensure our Promise.all (and subsequent try...catch) can properly detect and handle these API-level errors.
    • return res.json(): This parses the body of the Response object as JSON. This method also returns a Promise, which resolves with the parsed JavaScript object.
  • The batchPromises array will now hold a collection of Promise objects, each representing the eventual JSON data from a URL in the current batch.

5. Waiting for Batch Completion

try {
  const batchResults = await Promise.all(batchPromises);
  // ...
} catch (error) {
  // ...
}

  • await Promise.all(batchPromises): This is where the concurrent execution within a batch happens.
    • Promise.all() takes an array of Promises (batchPromises in this case).
    • It returns a single Promise that will:
      • Resolve with an array of all the resolved values from batchPromises (in the same order) only when all of them have successfully resolved.
      • Reject immediately with the reason of the first Promise that rejects.
    • The await keyword pauses the execution of the fetchInBatches function until this Promise.all Promise settles. This means the next batch won’t start until the current one is entirely complete.
  • try...catch: This block is vital for robust error handling. If any fetch request within the batchPromises fails (either a network error or an HTTP error that we explicitly threw), Promise.all will reject, and the catch block will execute, allowing you to log the error or handle it as needed without crashing the entire process.
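Because Promise.all fails fast, a single bad URL discards the results of its entire batch. A common variation is to swap in Promise.allSettled(), which waits for every request and reports each outcome individually. Here’s one possible sketch of that approach (the { url, ok, data/error } result shape is an illustrative convention, not part of the original function):

```javascript
// Variation: collect per-request outcomes instead of failing the whole batch.
// Promise.allSettled() never rejects; each entry is either
// { status: 'fulfilled', value } or { status: 'rejected', reason }.
async function fetchBatchSettled(batch) {
  const settled = await Promise.allSettled(
    batch.map(url =>
      fetch(url).then(res => {
        if (!res.ok) {
          throw new Error(`HTTP error! status: ${res.status} for URL: ${url}`);
        }
        return res.json();
      })
    )
  );
  // Map each outcome to a uniform shape the caller can inspect
  return settled.map((outcome, index) =>
    outcome.status === 'fulfilled'
      ? { url: batch[index], ok: true, data: outcome.value }
      : { url: batch[index], ok: false, error: outcome.reason.message }
  );
}
```

With this variant, one failed request no longer hides the successful responses from the same batch; the caller decides what to do with each failure.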

6. Aggregating Results

results.push(...batchResults);

  • ...batchResults: This is the spread syntax. It takes all the individual elements from the batchResults array (the successfully fetched JSON data from the current batch) and “spreads” them as separate arguments to the push() method.
  • This effectively adds all the data from the current batch into the main results array, building up the complete collection of fetched data.

7. Returning All Collected Data

return results;

  • Once the for loop has completed iterating through all the batches, the function returns the results array, which now contains all the successfully fetched and parsed JSON data from all the URLs.

Example Usage

To use this function, you simply provide an array of URLs and, optionally, your desired batchSize.

// Imagine these are your actual API endpoints
const apiUrls = [
  'https://jsonplaceholder.typicode.com/todos/1',
  'https://jsonplaceholder.typicode.com/todos/2',
  'https://jsonplaceholder.typicode.com/todos/3',
  'https://jsonplaceholder.typicode.com/todos/4',
  'https://jsonplaceholder.typicode.com/todos/5',
  'https://jsonplaceholder.typicode.com/todos/6',
  'https://jsonplaceholder.typicode.com/todos/7',
  'https://jsonplaceholder.typicode.com/todos/8',
  'https://jsonplaceholder.typicode.com/todos/9',
  'https://jsonplaceholder.typicode.com/todos/10'
];

async function runBatchedFetches() {
  console.log("Starting batched fetches...");
  // Fetch 3 URLs at a time
  const allData = await fetchInBatches(apiUrls, 3);
  console.log("All data fetched:", allData);
  console.log("Total items:", allData.length);
}

runBatchedFetches();

When you run runBatchedFetches(), you’ll see the requests happening in groups of three, with a slight pause between each group as await Promise.all waits for the current batch to complete.
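If the API you’re calling enforces a strict rate limit, you can go one step further and insert an explicit pause between batches. Below is one possible variation (the sleep helper and the 1-second default delay are illustrative assumptions, not part of the original function):

```javascript
// Helper: resolves after `ms` milliseconds
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// Variation of fetchInBatches that waits between batches
async function fetchInBatchesWithDelay(urls, batchSize = 5, delayMs = 1000) {
  const results = [];
  for (let i = 0; i < urls.length; i += batchSize) {
    const batch = urls.slice(i, i + batchSize);
    const batchResults = await Promise.all(
      batch.map(url =>
        fetch(url).then(res => {
          if (!res.ok) {
            throw new Error(`HTTP error! status: ${res.status} for URL: ${url}`);
          }
          return res.json();
        })
      )
    );
    results.push(...batchResults);
    // Pause before the next batch, but not after the last one
    if (i + batchSize < urls.length) {
      await sleep(delayMs);
    }
  }
  return results;
}
```

Tune delayMs to your API’s documented limit — for example, a limit of 5 requests per second pairs naturally with batchSize = 5 and delayMs = 1000.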

Benefits of Batched Fetching

  • Rate Limit Compliance: Prevents you from overwhelming APIs that have strict rate limits.
  • Improved Performance: Reduces the strain on the client’s network and CPU by limiting concurrent requests.
  • Better Error Handling: Allows for more granular error handling. If one request in a batch fails, you can catch it without necessarily stopping all subsequent batches (depending on your catch block implementation).
  • Resource Management: More efficient use of network connections and system resources.
  • Predictable Behavior: Provides a more controlled and predictable flow for large sets of asynchronous operations.

Conclusion

The fetchInBatches function is a practical and essential pattern for any JavaScript developer dealing with numerous API calls.

By leveraging async/await, Promise.all(), and Array.prototype.slice(), you can build more powerful applications.

Incorporate this technique into your toolkit to manage your asynchronous operations like a pro!


Credit: http://bit.ly/3FWT3Or

Have you ever noticed how some search bars feel incredibly smooth, only firing off a search query once you’ve stopped typing for a moment?

Or perhaps you’ve built an app where an action triggers too frequently, leading to performance issues.

If so, you’ve encountered the need for debouncing.

In React, we can easily implement this powerful technique using a custom hook called useDebounce.

It’s a game-changer for optimizing performance and enhancing user experience.

Let’s dive in and see how it works!

What is Debouncing and Why Do We Need It?

Debouncing is a programming pattern that limits the rate at which a function can fire.

When you’re typing into a search box, for instance, each keystroke updates the input’s value.

Without debouncing, an API call for search results might be triggered with every single character you type.

Imagine searching for “React Hooks Tutorial”:

  • R -> search for “R”
  • Re -> search for “Re”
  • Rea -> search for “Rea”
  • …and so on.

This creates a flurry of unnecessary requests to your server, wasting resources and potentially slowing down your application.

Debouncing solves this by introducing a delay. It waits for a specified period of inactivity before executing the function. If the “activity” (like typing) continues within that delay, the timer resets. The function only runs after the activity stops for the set duration.
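Outside of React, the same idea can be expressed as a small standalone helper — a generic sketch of the pattern, separate from the hook we’ll build below:

```javascript
// Generic debounce: returns a wrapped function that only runs
// after `delay` ms have passed without another call.
function debounce(fn, delay) {
  let timerId;
  return function (...args) {
    clearTimeout(timerId);                       // cancel the pending call, if any
    timerId = setTimeout(() => fn.apply(this, args), delay);
  };
}

// Example: of three rapid calls, only the last one actually fires
const search = debounce(term => console.log('search for:', term), 300);
search('R');
search('Re');
search('React'); // only this call runs, ~300ms after it was made
```

The useDebounce hook applies exactly this timer-reset logic, but expressed with React state and effects so the debounced value can drive re-renders.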


Building Our useDebounce Hook

Let’s break down the useDebounce hook. We’ll create a file named useDebounce.js for this.

import { useState, useEffect } from 'react';

function useDebounce(value, delay) {
  // State to store the debounced value
  const [debouncedValue, setDebouncedValue] = useState(value);

  useEffect(() => {
    // Set a timeout to update the debounced value after the specified delay
    const handler = setTimeout(() => {
      setDebouncedValue(value);
    }, delay);

    // Clean up the timeout if the value or delay changes, or if the component unmounts
    return () => {
      clearTimeout(handler);
    };
  }, [value, delay]); // Only re-run if value or delay changes

  return debouncedValue;
}

export default useDebounce;

Let’s dissect what’s happening here:

  • useState(value): We initialize a state variable debouncedValue with the initial value that’s passed into our hook. This debouncedValue is what our component will actually “see” after the delay.
  • useEffect(() => { ... }, [value, delay]): This is where the magic happens.
    • The useEffect hook runs whenever value or delay changes.
    • setTimeout(...): Inside the effect, we set a timer. After the specified delay, the setDebouncedValue(value) function is called, updating our debouncedValue state to the current value.
    • Cleanup Function (return () => { clearTimeout(handler); }): This part is crucial for proper debouncing. If value changes before the setTimeout finishes (i.e., the user types another character), this cleanup function runs. It clears the previous setTimeout, effectively resetting the timer. This ensures that the setDebouncedValue only happens once the user has stopped typing for the entire delay duration.

Putting useDebounce into Practice: A Search Input Example

Now that we have our useDebounce hook, let’s see how to use it in a typical search input component.

import React, { useState, useEffect } from 'react'; // Don't forget useEffect here!
import useDebounce from './useDebounce'; // Assuming you save the hook in useDebounce.js

function SearchInput() {
  const [searchTerm, setSearchTerm] = useState('');
  const debouncedSearchTerm = useDebounce(searchTerm, 500); // Debounce by 500ms

  // This effect will only run after 500ms of no typing
  useEffect(() => {
    if (debouncedSearchTerm) {
      console.log('Fetching data for:', debouncedSearchTerm);
      // In a real app, you'd make an API call here, e.g.,
      // fetchData(debouncedSearchTerm);
    } else {
      console.log('Search term cleared.');
    }
  }, [debouncedSearchTerm]); // Trigger this effect only when debouncedSearchTerm changes

  const handleChange = (event) => {
    setSearchTerm(event.target.value);
  };

  return (
    <div>
      <input
        type="text"
        placeholder="Search..."
        value={searchTerm}
        onChange={handleChange}
      />
      <p>Current search term: {searchTerm}</p>
      <p>Debounced search term (used for fetching): {debouncedSearchTerm}</p>
    </div>
  );
}

export default SearchInput;

In this SearchInput component:

  1. We maintain the searchTerm state, which updates immediately with every keystroke.
  2. We pass searchTerm and a delay of 500 milliseconds to our useDebounce hook. The hook returns debouncedSearchTerm.
  3. A separate useEffect monitors debouncedSearchTerm. This useEffect will only run when debouncedSearchTerm actually changes, which happens after the user has paused typing for 500ms.
  4. Inside this useEffect, you’d typically trigger your data fetching logic (e.g., an API call) using the stable debouncedSearchTerm.

You can try this out yourself! As you type quickly, notice how the “Debounced search term” paragraph (and the console log) only updates after you stop typing for half a second.


Beyond Search Inputs

The useDebounce hook isn’t just for search bars! You can use it for:

  • Resizing windows: Only update layout calculations after the user has finished resizing.
  • Form validations: Validate input only after the user pauses typing, rather than on every keystroke.
  • Autosave features: Save user progress only after a period of inactivity.
  • Any scenario where you want to limit the frequency of an action based on continuous user input.

Conclusion

The useDebounce custom hook is a simple yet incredibly powerful tool for optimizing your React applications.

By intelligently delaying actions based on user input, you can significantly improve performance, reduce unnecessary requests, and provide a smoother, more responsive user experience.

Incorporate debouncing into your React toolkit, and your users will thank you!

Do you have any other common performance bottlenecks in your React apps that you’re looking to solve?

If so let us know in the comments section below!
