
How to handle blockers in Node.js?



What is a blocker?

Before we talk about “handling” a blocker in your code, we first need to identify it. A blocker is a CPU-intensive operation that keeps your main thread busy for a long time.


Let’s build an example in Node.js, which is asynchronous by nature. I will spin up a simple HTTP server and create a “blocker”: we need to find the sum of all numbers up to 1000000000. On my machine, it takes around 1 second to calculate that number.

const http = require('node:http');

function getSum(n) {
    let sum = 0;

    while (n > 0) {
        sum += n;
        n--;
    }

    return sum;
}

const server = http.createServer((req, res) => {
    res.write(String(getSum(1000000000)));
    res.end();
});

server.listen(3000);

Now let’s use Apache Bench to measure our server's performance.
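The exact number of requests is not critical; a run along these lines, where -n is the total number of requests and -c the concurrency level, is enough to see the effect (the request count here is my own choice, not taken from the original benchmark):

ab -n 50 -c 10 http://localhost:3000/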



The results show that the server can handle only 1 RPS (request per second), even though concurrency is set to 10. This happens because the main thread is blocked for a full second working on the blocker, so that is a hard limit for our server right now.


Partitioning

The main idea behind async code is to process small pieces of work as fast as possible. Therefore, one option is to split a blocker into smaller pieces of work, “spreading” it out over time.



Now let’s modify our code to calculate the result step by step. For that we need a way to “schedule” a piece of work for later; in Node.js we can use one of two functions, setTimeout or setImmediate. I won’t talk about the differences between these two functions in this article. In my implementation, I am using setImmediate.
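For context, setImmediate queues a callback to run on a later turn of the event loop, which is what lets other requests slip in between our chunks of work. A trivial illustration, not part of the server code:

// The scheduled callback runs only after the currently running code has finished.
setImmediate(() => console.log('second: runs on a later turn of the event loop'));
console.log('first: the current code finishes before the scheduled callback');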


In the code below, we calculate the sum in chunks of 1000000: each time we finish a part of the work, we schedule the next part for later. That way, we give other requests some time to run.

const http = require('node:http');

function getSumAsync(n) {
    let sum = 0;

    // Processes one chunk of `partSize` numbers, then schedules the next chunk.
    const calcSum = (partSize, res) => () => {
        const partEnd = partSize > n ? 1 : n - partSize;

        do {
            sum += n;
            n--;
        } while (partEnd <= n);

        if (n === 0) {
            res(sum);
            return;
        }

        setImmediate(calcSum(partSize, res));
    };

    return new Promise(res => {
        setImmediate(calcSum(1000000, res));
    });
}

const server = http.createServer((req, res) => {
    getSumAsync(1000000000)
        .then((number) => {
            res.write(String(number));
            res.end();
        });
});

server.listen(3000);

Now let’s test this solution with Apache Bench one more time.



This time the average is 0.13 RPS. It happens because the server, instead of finishing a single request, now works on multiple requests “at the same time”, interleaving their chunks of work.


Offloading

Besides partitioning, you can also offload your blocker to a separate thread. I will use the Worker Pool pattern to implement offloading: at a high level, the idea is to have a queue of tasks and a pool of workers, each picking up the next task whenever it becomes free.

In that way, we can achieve better results because we are actually scaling the server: more CPU cores are now serving our requests.
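To make the pattern concrete, here is a minimal hand-rolled sketch of such a pool built directly on node:worker_threads. This is purely illustrative: the worker code is inlined via the eval option so the example is self-contained, and there is no error handling or cancellation.

const { Worker } = require('node:worker_threads');
const os = require('node:os');

// Worker code: it receives a number n and replies with the sum of 1..n.
const workerCode = `
const { parentPort } = require('node:worker_threads');
parentPort.on('message', (n) => {
    let sum = 0;
    while (n > 0) { sum += n; n--; }
    parentPort.postMessage(sum);
});
`;

function createPool(size = os.cpus().length) {
    const queue = [];   // tasks waiting for a free worker: { n, resolve }
    const idle = [];    // workers that currently have nothing to do

    const dispatch = (worker) => {
        const task = queue.shift();
        if (!task) {
            idle.push(worker);
            return;
        }
        worker.once('message', (result) => {
            task.resolve(result);
            dispatch(worker);   // the worker is free again, give it the next task
        });
        worker.postMessage(task.n);
    };

    for (let i = 0; i < size; i++) {
        dispatch(new Worker(workerCode, { eval: true }));
    }

    // Queue a task; the promise resolves once some worker has computed it.
    return (n) => new Promise((resolve) => {
        queue.push({ n, resolve });
        if (idle.length > 0) dispatch(idle.pop());
    });
}

// Usage sketch: const exec = createPool(); exec(1000000000).then(console.log);

In practice you rarely want to maintain this kind of plumbing yourself.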


We will be using the workerpool NPM module because it is easy to use and reliable. In the code below, we create a pool and set the maximum number of workers to 8, because my PC has 8 cores; defining more than 8 would be inefficient (you can read why in my previous article).


const http = require('node:http');
const workerpool = require('workerpool');

// Cap the pool at 8 workers to match the number of CPU cores.
const pool = workerpool.pool({ maxWorkers: 8 });

function getSum(n) {
    let sum = 0;

    while (n > 0) {
        sum += n;
        n--;
    }

    return sum;
}

const server = http.createServer((req, res) => {
    pool.exec(getSum, [1000000000])
        .then((number) => {
            res.write(String(number));
            res.end();
        });
});

server.listen(3000);

And now it is time to test the server again.



This time our server was able to serve around 6.2 RPS, and the time per request improved significantly. I did a few more tests to find the maximum concurrency level; below you can see the results for the highest concurrency I was able to reach with this set-up.



220 concurrent requests was the maximum for my server, though now we get 5.78 RPS, and on average it takes significantly longer to process a single request.


It is actually expected that requests take longer, because the throughput of our server stayed the same. We still have 8 workers pulling tasks from the queue, but with higher concurrency a request will, on average, wait longer in the queue before it is processed. Imagine you are 220th in the queue: with 8 workers each finishing roughly one 1-second task at a time, you will wait around 220 / 8 ≈ 27 seconds before it is your turn.


We have overcome the limits of a single thread in an async server; now let’s see if we can take performance to the next level.


Bonus: Caching + Offloading

Caching is like flex tape for the performance of your application: around 80% of performance optimizations are caching. The idea is to save the results of expensive computations so that we do not perform the same work multiple times.


To demonstrate this approach I will add a bit of randomness to our code so that it has to calculate different numbers, which simulates a more realistic workload. For each request we will generate a random N between 1000000000 and 1000001000 and calculate the sum for it.


Note that in our code we use an in-memory cache for simplicity. In a production application you should use a distributed cache (for example Redis), so that all your servers share the latest cached results and do not perform extra work.

const http = require('node:http');
const workerpool = require('workerpool');

const pool = workerpool.pool({ maxWorkers: 8 });

const getRandomN = (min, max) => Math.floor(Math.random() * (max - min)) + min;

function getSum(n) {
    let sum = 0;

    while (n > 0) {
        sum += n;
        n--;
    }

    return sum;
}

const cache = new Map();

const server = http.createServer((req, res) => {
    const n = getRandomN(1000000000, 1000001000);

    // Serve a previously computed result straight from the cache.
    if (cache.has(n)) {
        res.write(cache.get(n));
        res.end();
        return;
    }

    // Otherwise offload the computation and cache the result for next time.
    pool.exec(getSum, [n]).then((number) => {
        const result = String(number);
        cache.set(n, result);
        res.write(result);
        res.end();
    });
});

server.listen(3000);

Now let’s see our results.



I increased the number of requests to 10000 because the cache needs some time to warm up. With it in place, we were able to reach 40 RPS on average and to reduce the time per request, because once something has been calculated we simply return it from the cache.
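One more note on the cache itself: the Map above lives inside a single process. As a rough sketch of the distributed cache mentioned earlier, the handler could store results in Redis instead. This assumes a Redis server running locally and the ioredis client, neither of which appears in the original example, and it reuses http, pool, getRandomN and getSum from the code above:

const Redis = require('ioredis');
const redis = new Redis();   // assumed: a local Redis instance on the default port

const server = http.createServer(async (req, res) => {
    const n = getRandomN(1000000000, 1000001000);

    // Redis stores strings, so the cached value can be written back as-is.
    const cached = await redis.get(`sum:${n}`);
    if (cached !== null) {
        res.write(cached);
        res.end();
        return;
    }

    const number = await pool.exec(getSum, [n]);
    const result = String(number);
    await redis.set(`sum:${n}`, result);
    res.write(result);
    res.end();
});

server.listen(3000);

With a shared cache like this, every instance of the server benefits from results computed by any other instance.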


Summary

Node.js, like any other asynchronous server, is great at I/O (input/output) but terrible at CPU-intensive operations. You should be aware of blockers because they can kill the performance of your servers.


Offloading gives better performance than partitioning, and sometimes it is hard to apply partitioning to the computations you have at all. Offloading also scales better, because you can even have separate machines acting as workers.



Source: Medium - Nazarii Romankiv

