
SaaS products are proving that you don’t need a massive team or venture capital to build a successful and profitable business.

A growing number of micro-SaaS companies are hitting significant monthly recurring revenue (MRR) milestones, often by focusing on a single, well-defined problem and providing a streamlined solution.

Here are 15 examples of SaaS products that are making at least $5,000 in MRR.

 

1 – SolidGigs – SolidGigs is a platform for freelancers that curates the best job leads from various job boards and agencies, saving them from the time-consuming task of searching for work. The founder achieved $7,600 MRR within seven months, demonstrating a clear demand for a service that simplifies the freelance job-hunting process.

 

2 – Hypefury – Hypefury is a social media tool that helps content creators and marketers automate and optimize their posts on platforms like Twitter (now X) and LinkedIn. By providing features like automated scheduling and engagement prompts, it helps users grow their audience and has grown its MRR to over $20,000.

 

3 – Lasso – Lasso is a popular WordPress plugin that helps affiliate marketers manage and monetize their links. By providing a clean interface for displaying, organizing, and cloaking affiliate links, it solves a key pain point for bloggers and content creators and has a self-reported MRR of over $10,000.

 

4 – Super Send – Super Send is a cold outreach tool that automates personalized communication across multiple channels, including email and social media. The founder, who was frustrated with existing, expensive solutions, built a platform that now generates around $6,000 MRR by offering a more comprehensive and affordable all-in-one tool for small businesses and individuals.

 

5 – Plausible Analytics – Plausible Analytics is a privacy-friendly and simple alternative to Google Analytics. With a focus on data privacy and a lightweight, open-source approach, it has successfully tapped into the growing market of users concerned about tracking, reaching $30,000 MRR.

 

6 – PDFShift – PDFShift provides an API for developers to easily convert HTML to PDF documents. It’s a classic example of a micro-SaaS that solves a specific, technical problem for a niche audience. The company achieved $8,500 MRR, showcasing the value of a reliable, developer-focused tool.

 

7 – Potion – Potion is a SaaS product that allows users to create custom websites directly from their Notion pages. By leveraging the popularity of Notion, it provides a simple and efficient way to turn notes into public websites and has reached $5,000 MRR and beyond.

 

8 – Repurposepie – Repurposepie is a micro-SaaS that automatically converts tweets into short videos for platforms like TikTok and YouTube Shorts. The product’s simplicity and direct value proposition allowed it to achieve a remarkable $5,000 MRR in just three days after its launch, demonstrating the high demand for content repurposing tools.

 

9 – Magical Make an Offer – This Shopify app brings the negotiation experience of a physical marketplace to e-commerce, allowing customers to make offers on products. It has reached a self-reported $5,000 MRR by increasing conversions and customer engagement for Shopify merchants.

 

10 – Unicorn Platform – Unicorn Platform is a simple AI-powered website builder for startups and solopreneurs. It differentiates itself from more complex tools like WordPress by focusing on ease of use and speed, and has grown to over $16,000 MRR with over 1,000 paying customers.

 

11 – Olvy – Olvy is a tool for businesses to make their release notes more engaging and conversational. The founders leveraged a “builders program” for early feedback and a successful launch on Product Hunt, which helped them grow from around $2,000 MRR to over $50,000 by creating a product that people genuinely wanted to use.

 

12 – Better Sheets – Better Sheets is a collection of video tutorials and templates for Google Sheets. The founder leveraged a lifetime deal model to grow the business to $5,000 MRR, showcasing how a non-traditional SaaS model can be highly profitable.

 

13 – OpenPhone – OpenPhone is a modern business phone system that allows teams to manage calls, texts, and voicemails from their computers or phones. It serves as a great example of a SaaS product that simplifies communication for businesses, leading to significant growth.

 

14 – Kinsta – Kinsta is a high-performance managed WordPress hosting platform. It provides a specialized and reliable hosting service for a specific niche, demonstrating how a premium product with excellent support can command higher prices and build a loyal customer base. Kinsta generates well over $1 million in annual recurring revenue (ARR). 

 

15 – DocuWriter.ai – This AI-powered tool helps users with writing and document creation. By leveraging the power of AI to streamline a common task, it has found success in the market and is generating over $5,000 MRR.

Well, there you go: 15 SaaS companies making over $5k in monthly recurring revenue.

I’ll be adding to this list once I do some more research.

Remember, it doesn’t take a large team of programmers and a bunch of investor money to build an app that generates some nice monthly recurring revenue!

 


Ever noticed your web application feeling sluggish, especially when it needs to fetch a lot of data?

One common culprit can be too many individual API calls.

Each call incurs network overhead, which can quickly add up and impact both your server and your user’s experience.

Fortunately, there’s a powerful optimization technique called API call batching.

In this tutorial, we’ll dive into what API batching is, why it’s crucial for performance, and how you can implement a robust batching mechanism in your JavaScript applications.

The Problem: Too Many Requests 😩

Imagine your dashboard needs to display a user’s profile, their latest posts, comments on those posts, and various settings. Without batching, you might end up making four separate API calls:

  1. GET /api/user/profile
  2. GET /api/user/posts
  3. GET /api/user/comments
  4. GET /api/user/settings

Each of these is a separate network request. Even if they’re fast, the cumulative effect of establishing connections, sending headers, waiting for responses, and processing each individually can lead to noticeable delays.

This not only burdens the client (your user’s browser) but also puts more strain on your server, which has to handle each request independently.
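To make the cost concrete, here’s roughly what that unbatched client code looks like. (The endpoints are the hypothetical ones above, and fetchJson is a tiny mock standing in for a real fetch wrapper so the snippet runs on its own.)

```javascript
// Unbatched approach: four independent round trips, one per resource.
// fetchJson is a mock standing in for a real fetch() wrapper.
const fetchJson = (url) =>
    new Promise(resolve => setTimeout(() => resolve({ url, ok: true }), 50));

async function loadDashboard() {
    // Even run in parallel, each call pays its own network overhead:
    // connection setup, headers, and a separate server-side handler.
    const [profile, posts, comments, settings] = await Promise.all([
        fetchJson('/api/user/profile'),
        fetchJson('/api/user/posts'),
        fetchJson('/api/user/comments'),
        fetchJson('/api/user/settings')
    ]);
    return { profile, posts, comments, settings };
}
```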

The Solution: API Call Batching 📦

API call batching is like sending a single, consolidated shopping list to the grocery store instead of sending separate messengers for each item.

Instead of multiple small requests, we gather several requests on the client-side and send them all at once in a single, larger network request to a dedicated batch endpoint on your server.

The server processes all these sub-requests and returns a single response containing the results for each.
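Concretely, the consolidated request and its response might look like this. (The exact shape is up to your API design; this one mirrors the format used by the batcher later in this tutorial, with illustrative IDs and data.)

```javascript
// One POST to a batch endpoint replaces four separate GETs.
const batchRequest = {
    type: 'batch',
    requests: [
        { id: 'req_1', type: 'getProfile',  data: { id: 1 } },
        { id: 'req_2', type: 'getPosts',    data: { id: 1 } },
        { id: 'req_3', type: 'getComments', data: { id: 1 } },
        { id: 'req_4', type: 'getSettings', data: { id: 1 } }
    ]
};

// The server answers once, with one entry per sub-request,
// matched back to its originator via requestId.
const batchResponse = {
    batchId: 'BATCH_1',
    status: 'success',
    data: [
        { requestId: 'req_1', status: 'success', data: { name: 'Ada' } },
        { requestId: 'req_2', status: 'success', data: [] },
        { requestId: 'req_3', status: 'success', data: [] },
        { requestId: 'req_4', status: 'success', data: { theme: 'dark' } }
    ]
};
```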

Benefits of Batching:

  • Reduced Network Overhead: Fewer round trips mean less time spent on connection setup and teardown.
  • Lower Server Load: Your server processes fewer overall requests, saving CPU and memory.
  • Improved Performance: Faster data loading leads to a snappier, more responsive application and a better user experience.

Building Your API Batcher in JavaScript

Let’s break down the JavaScript code that powers our API batching system. We’ll look at a simulateApiCall utility (to act as our mock backend) and the ApiBatcher class itself.

1. simulateApiCall Function: Our Mock API 🎭

function simulateApiCall(request) {
    return new Promise(resolve => {
        // Simulate network delay
        setTimeout(() => {
            console.log(`Simulating API call for:`, request);
            // Simulate a successful response
            resolve({
                requestId: request.id,
                status: 'success',
                data: `Processed data for ${request.type} ID: ${request.data.id}`
            });
        }, Math.random() * 500 + 100); // Random delay between 100ms and 600ms
    });
}

This function isn’t hitting a real server. Its purpose is purely to mimic the asynchronous nature and delay of a real API call.

  • It returns a Promise, just like fetch would.
  • setTimeout introduces a random delay between 100ms and 600ms. This makes our simulation more realistic.
  • After the delay, it calls resolve() on the Promise, passing a simulated response object. This object contains:
    • requestId: Crucial for identifying which original request this response belongs to in a batch.
    • status: Indicates success or failure.
    • data: The mock data returned by the “API”.

2. ApiBatcher Class: The Core Batching Logic

This class is the brain behind our batching operation.

constructor(delayMs, maxBatchSize, batchEndpoint) 🏗️

class ApiBatcher {
    constructor(delayMs = 200, maxBatchSize = 5, batchEndpoint = simulateApiCall) {
        this.requestQueue = []; // Stores individual requests
        this.pendingTimeout = null; // Stores the setTimeout ID
        this.delayMs = delayMs;
        this.maxBatchSize = maxBatchSize;
        this.batchEndpoint = batchEndpoint;
        console.log(`ApiBatcher initialized with delay: ${delayMs}ms, max batch size: ${maxBatchSize}`);
    }
    // ... rest of the class
}

The constructor sets up our batcher:

  • requestQueue: An array that holds individual requests waiting to be sent in a batch. Each item in the queue stores the original request object itself, along with the resolve and reject functions of the Promise returned to the caller, so we can fulfill them later.
  • pendingTimeout: Stores the ID of the setTimeout that triggers batch processing. This allows us to clear it if a batch is processed early.
  • delayMs: The maximum time (in milliseconds) the batcher will wait before sending the current queue as a batch, even if maxBatchSize isn’t reached. This prevents requests from being stuck indefinitely.
  • maxBatchSize: The maximum number of individual requests to include in a single batch. If the queue hits this size, the batch is sent immediately.
  • batchEndpoint: This is the function that will actually send the consolidated batch request. In the example usage below, it’s a simulated server endpoint that takes an array of requests and returns an array of responses. In a real application, this would be your fetch call to your actual server’s batch API. Note that the constructor’s default, the bare simulateApiCall, doesn’t return the per-request results array that processBatch expects, so in practice you’ll always pass in a batch-aware endpoint.

addRequest(request): Queuing Individual Calls ➕

addRequest(request) {
    return new Promise((resolve, reject) => {
        this.requestQueue.push({ request, resolve, reject });
        console.log(`Request added to queue. Current queue size: ${this.requestQueue.length}`);

        // If the queue is full, process immediately
        if (this.requestQueue.length >= this.maxBatchSize) {
            this.processBatch();
        } else if (!this.pendingTimeout) {
            // Otherwise, set a timeout to process the batch
            this.pendingTimeout = setTimeout(() => {
                this.processBatch();
            }, this.delayMs);
        }
    });
}

This is how your application code interacts with the batcher. When you call addRequest with an individual request object (e.g., { id: 'user_1', type: 'getUser', data: { id: 1 } }):

  1. It creates a new Promise and stores the request along with its resolve and reject functions in the requestQueue. This allows us to fulfill this specific Promise later, even though it’s part of a batch.
  2. It checks two conditions to decide when to send the batch:
    • Size-based Trigger: If this.requestQueue.length reaches this.maxBatchSize, processBatch() is called immediately. This is a “full-batch” trigger.
    • Time-based Trigger: If the queue isn’t full, but no batch processing is currently pending, it sets a setTimeout to call processBatch() after this.delayMs. This ensures that even a small number of requests get processed after a short wait.
  3. The method returns the new Promise, so your original code can still use .then() and .catch() as if it were making a direct API call.
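Because addRequest returns a Promise, callers can also use async/await instead of .then()/.catch(). A minimal sketch (the stub batcher here stands in for a real ApiBatcher instance so the snippet is self-contained; only the Promise contract matters):

```javascript
// Stub standing in for an ApiBatcher instance; in real code this
// would be `new ApiBatcher(...)` and the call might be batched.
const batcher = {
    addRequest: (request) => Promise.resolve(`data for ${request.id}`)
};

async function showUser(userId) {
    try {
        // Reads like a direct API call, even though the batcher may
        // have queued it and sent it as part of a larger batch.
        return await batcher.addRequest({
            id: `user_${userId}`, type: 'getUser', data: { id: userId }
        });
    } catch (err) {
        console.error('Request failed:', err.message);
        return null;
    }
}
```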

processBatch(): Sending the Batch and Resolving Promises 🚀

async processBatch() {
    // Clear any pending timeouts to prevent duplicate processing
    if (this.pendingTimeout) {
        clearTimeout(this.pendingTimeout);
        this.pendingTimeout = null;
    }

    if (this.requestQueue.length === 0) {
        console.log('No requests in queue to process.');
        return;
    }

    // Take all requests from the queue for the current batch
    const currentBatch = this.requestQueue;
    this.requestQueue = []; // Reset the queue

    const batchedRequests = currentBatch.map(item => item.request);
    console.log(`Processing batch with ${batchedRequests.length} requests:`, batchedRequests);

    try {
        const batchResponse = await this.batchEndpoint({
            type: 'batch',
            requests: batchedRequests
        });

        // Map responses back to individual promises
        currentBatch.forEach((item, index) => {
            const individualResponse = batchResponse.data.find(res => res.requestId === item.request.id);
            if (individualResponse && individualResponse.status === 'success') {
                item.resolve(individualResponse.data);
            } else {
                item.reject(new Error(`Request ${item.request.id} failed: ${individualResponse?.message || 'Unknown error'}`));
            }
        });
        console.log('Batch processed successfully.');

    } catch (error) {
        console.error('Error processing batch:', error);
        // Reject all promises in the current batch if the batch call itself fails
        currentBatch.forEach(item => {
            item.reject(new Error(`Batch processing failed: ${error.message}`));
        });
    }
}

This is where the magic happens! processBatch is called when a batch is ready to be sent.

  1. It first clears any pendingTimeout to avoid sending the same batch twice.
  2. It captures all requests currently in this.requestQueue into currentBatch and then resets this.requestQueue. This is critical: it ensures new incoming requests start a fresh queue.
  3. It then calls this.batchEndpoint, passing a single object containing all the batchedRequests. This simulates sending the combined request to your server.
  4. Once the batchEndpoint (our mock server) responds, it iterates through the currentBatch of original requests. For each original request, it finds its corresponding result in the batchResponse.data using the requestId.
  5. Finally, it calls the resolve or reject function associated with each individual request’s Promise, fulfilling or rejecting it based on the status from the individualResponse. This means the .then() or .catch() handlers you attached to your addRequest calls will now fire!
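In a real application, the batchEndpoint you pass to the constructor would be a fetch call to your server’s batch API rather than a simulation. A minimal sketch, assuming a hypothetical POST /api/batch route that returns the { batchId, status, data } shape that processBatch consumes:

```javascript
// A real-world batchEndpoint: one POST carrying all sub-requests.
// '/api/batch' is a hypothetical URL; adapt it to your server's batch route.
async function fetchBatchEndpoint(batchPayload) {
    const response = await fetch('/api/batch', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(batchPayload)
    });
    if (!response.ok) {
        throw new Error(`Batch request failed with status ${response.status}`);
    }
    // Expected to resolve to { batchId, status, data: [...] },
    // matching what processBatch consumes.
    return response.json();
}

// Plug it in: new ApiBatcher(200, 5, fetchBatchEndpoint)
```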

Example Usage: Seeing It in Action 🎬

// Initialize the batcher
const apiBatcher = new ApiBatcher(200, 3, async (batchPayload) => {
    // This is the function that simulates your server-side batch API endpoint
    // It receives the `batchPayload` containing an array of individual requests.
    // It should return a promise that resolves with an array of individual responses.
    console.log(`\n--- Server received a batch with ${batchPayload.requests.length} items ---`);
    const results = [];
    for (const req of batchPayload.requests) {
        // Simulate processing each individual request on the server side
        const responseData = await simulateApiCall(req); // Call your actual API logic here for each sub-request
        results.push({
            requestId: req.id,
            status: 'success',
            data: responseData.data // Use data from the simulated sub-call
        });
    }
    console.log(`--- Server finished processing batch, sending response ---`);
    return {
        batchId: 'BATCH_' + Date.now(),
        status: 'success',
        data: results // Array of results for each individual request
    };
});

// Add requests to the batcher
console.log("Adding requests...");

// These requests will be batched and sent together based on delay/size
apiBatcher.addRequest({ id: 'user_1', type: 'getUser', data: { id: 1 } })
    .then(result => console.log(`User 1 data: ${result}`))
    .catch(error => console.error(`Failed to get user 1: ${error.message}`));

apiBatcher.addRequest({ id: 'post_1', type: 'getPost', data: { id: 101 } })
    .then(result => console.log(`Post 1 data: ${result}`))
    .catch(error => console.error(`Failed to get post 1: ${error.message}`));

apiBatcher.addRequest({ id: 'comment_1', type: 'getComment', data: { id: 501 } })
    .then(result => console.log(`Comment 1 data: ${result}`))
    .catch(error => console.error(`Failed to get comment 1: ${error.message}`));

// The three requests above already hit maxBatchSize (3), so that batch was sent immediately; this request starts a new queue
apiBatcher.addRequest({ id: 'user_2', type: 'getUser', data: { id: 2 } })
    .then(result => console.log(`User 2 data: ${result}`))
    .catch(error => console.error(`Failed to get user 2: ${error.message}`));

// These requests join user_2 in a second batch; feed_1 fills it to 3, so it's also sent immediately (with fewer requests, the 200ms delay would trigger it instead)
apiBatcher.addRequest({ id: 'settings_1', type: 'getSettings', data: { id: 10 } })
    .then(result => console.log(`Settings 1 data: ${result}`))
    .catch(error => console.error(`Failed to get settings 1: ${error.message}`));

apiBatcher.addRequest({ id: 'feed_1', type: 'getFeed', data: { id: 20 } })
    .then(result => console.log(`Feed 1 data: ${result}`))
    .catch(error => console.error(`Failed to get feed 1: ${error.message}`));

In the example:

  1. We initialize apiBatcher with a delayMs of 200ms and maxBatchSize of 3.
  2. The batchEndpoint passed to the ApiBatcher is an async function that itself uses our simulateApiCall for each individual request within the batch, mimicking server-side processing.
  3. When you add the first two requests (user_1, post_1), they’re queued.
  4. Adding comment_1 brings the queue up to the maxBatchSize of 3, so the first batch is processed immediately and the queue is reset.
  5. user_2 then starts a fresh queue, and settings_1 and feed_1 fill it to 3 as well, so the second batch is also sent immediately. Had fewer than three requests arrived, the time-based trigger would have sent them after delayMs (200ms) instead.

Run this code in your browser’s console to observe the log messages and understand the flow of batching!

You’ll see individual requests being added, but batch processing occurring only when the size limit is hit or the delay expires.

Considerations for Real-World Applications 🌍

While this implementation provides a solid foundation, a production-ready batching system would also consider:

  • Server-Side Support: You need a backend API endpoint specifically designed to receive and process batched requests. The server must be able to parse the incoming array of requests, execute them, and return a consolidated response with individual results.
  • Error Handling Granularity: How to gracefully handle scenarios where some requests within a batch succeed while others fail.
  • Request Dependencies: What if one request in a batch depends on the result of another? Batching might not be suitable for heavily dependent operations, or your server might need sophisticated dependency resolution.
  • Payload Size Limits: Very large batches can still cause issues if the total payload size exceeds server or network limits.
  • Authentication & Authorization: Ensuring each sub-request within a batch is properly authenticated and authorized.
  • Idempotency: Designing requests so that sending the same batch multiple times (e.g., due to retries) doesn’t cause unintended side effects.
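The first two considerations can be sketched together: a server-side handler that executes each sub-request independently, so one failure becomes an error entry in the results rather than sinking the whole batch. (The handler names and responses here are hypothetical.)

```javascript
// Hypothetical server-side batch handler: runs each sub-request
// independently and reports per-item success or failure.
const handlers = {
    getUser: async (data) => ({ userId: data.id, name: 'Demo User' }),
    getPost: async (data) => { throw new Error('post not found'); }
};

async function handleBatch(requests) {
    const results = await Promise.all(requests.map(async (req) => {
        try {
            const handler = handlers[req.type];
            if (!handler) throw new Error(`Unknown request type: ${req.type}`);
            return { requestId: req.id, status: 'success', data: await handler(req.data) };
        } catch (err) {
            // A failing sub-request becomes an error entry, not a failed batch.
            return { requestId: req.id, status: 'error', message: err.message };
        }
    }));
    return { batchId: 'BATCH_' + Date.now(), status: 'success', data: results };
}
```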

Conclusion 👋

API call batching is a powerful tool in your web development arsenal for optimizing application performance and reducing server load.

By intelligently grouping requests, you can significantly improve the efficiency of your data interactions. Understanding concepts like delayMs, maxBatchSize, and how to resolve individual Promises within a batch is key to building a robust system.

Feel free to experiment with the delayMs and maxBatchSize parameters in the example to see how they affect batching behavior!

Let me know if you’d like to dive deeper into any specific aspect of this batcher, such as implementing a real fetch call to a mock server, or exploring error handling in more detail!
