Node.js is good at I/O concurrency. It is not good at pretending CPU work is asynchronous just because the handler function is declared async.
That distinction matters because many performance incidents do not come from slow databases or slow networks. They come from one request doing too much synchronous work while every other request waits behind it.
The Misleading Example
This route looks harmless at first glance:
app.post("/webhook", async (req, res) => {
  const payload = JSON.parse(req.body.raw);
  await database.save(payload);
  res.send("ok");
});
The database call is asynchronous. JSON.parse is not.
If the payload is huge, parsing happens on the main thread, and the event loop cannot keep serving other incoming requests until that work finishes.
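You can see the stall directly by timing a large parse. A minimal sketch (the payload size is arbitrary, chosen only to make the cost visible):

```typescript
import { performance } from "node:perf_hooks";

// Build a deliberately large JSON string.
const big = JSON.stringify({
  items: Array.from({ length: 200_000 }, (_, i) => ({ i, label: `item-${i}` })),
});

const t0 = performance.now();
const parsed = JSON.parse(big);
const elapsed = performance.now() - t0;

// For the entire `elapsed` window, no timer, socket, or other request
// handler could run: JSON.parse holds the one main thread.
console.log(`parsed ${parsed.items.length} items in ${elapsed.toFixed(1)}ms`);
```

On a busy server, that window is latency added to every other in-flight request.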
What Actually Blocks
In real Node services, the usual culprits are:
- huge JSON.parse and JSON.stringify calls
- image or PDF generation
- crypto work done in the wrong place
- large synchronous filesystem work
- expensive regex or transform loops
The fix is not "make everything async." The fix is to move or reshape the work.
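Crypto is the clearest case of moving the work rather than renaming it, because Node ships both forms side by side. A minimal sketch contrasting them (the parameters are arbitrary):

```typescript
import { pbkdf2, pbkdf2Sync } from "node:crypto";
import { promisify } from "node:util";

const pbkdf2Async = promisify(pbkdf2);

// Blocking form: the main thread is stuck for the whole derivation.
const blocking = pbkdf2Sync("secret", "salt", 50_000, 32, "sha256");

// Non-blocking form: libuv's thread pool does the work, and the
// event loop keeps serving other requests in the meantime.
async function deriveKey(): Promise<Buffer> {
  return pbkdf2Async("secret", "salt", 50_000, 32, "sha256");
}
```

Both produce the same key; only the async form keeps the event loop free while it is computed.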
Better Options
When possible:
- stream instead of buffering large payloads
- use workers for CPU-heavy tasks
- move expensive transformations out of hot request paths
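The streaming option can be as simple as piping the body to its destination instead of buffering it whole. A minimal sketch, where `saveUpload` is a hypothetical helper:

```typescript
import { createWriteStream } from "node:fs";
import { pipeline } from "node:stream/promises";
import type { Readable } from "node:stream";

// Hypothetical handler helper: copy an incoming body stream to disk
// in chunks, so memory use stays flat no matter how large the upload is.
export async function saveUpload(body: Readable, path: string): Promise<void> {
  await pipeline(body, createWriteStream(path));
}
```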
For example, worker threads are often the cleaner boundary for compute-heavy operations:
import { Worker } from "node:worker_threads";

export function runHeavyTask(input: unknown) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(new URL("./worker.js", import.meta.url), {
      workerData: input,
    });
    worker.once("message", resolve);
    worker.once("error", reject);
    // Without this, a worker that dies before posting a message
    // leaves the promise hanging forever.
    worker.once("exit", (code) => {
      if (code !== 0) reject(new Error(`worker exited with code ${code}`));
    });
  });
}
That does not make the work cheaper. It keeps the event loop responsive while the work happens elsewhere.
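For completeness, the companion worker.js referenced above might look like this (a sketch; heavyTransform is a hypothetical stand-in for your real computation):

```typescript
import { parentPort, workerData } from "node:worker_threads";

// Stand-in for the real CPU-heavy computation.
function heavyTransform(input: unknown): number {
  return JSON.stringify(input).length;
}

// Post the result back; the main thread's Promise resolves
// when this message arrives.
parentPort?.postMessage(heavyTransform(workerData));
```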
The Trade-Off
Node is still a good choice for lots of backend systems. You just need to respect the runtime model. If your service spends most of its time waiting on I/O, Node fits naturally. If it spends most of its time crunching data on the CPU, you need more deliberate isolation.
Further Reading