
Cold Starts Hurt Most When Functions Do Too Much Before the Handler Runs

Serverless cold-start latency comes from runtime setup, dependency loading, and application initialization. The fastest fixes usually remove startup work rather than chasing one metric.

Published: October 9, 2024
Reading Time: 7 min read

Cold starts matter because they are paid exactly when a user expects the system to feel instant.

The common mistake is thinking of cold-start latency as a mysterious cloud tax you cannot influence. In practice, the biggest part you control is often your own function initialization path.

Where the Time Usually Goes

A cold start can include:

  • runtime boot
  • code package loading
  • dependency parsing
  • global initialization
  • database or SDK setup

That means large bundles and eager initialization are usually the first places to look.
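Before optimizing, it helps to know how much time your own module-scope code actually takes. A minimal sketch (all names are illustrative, not a real framework API): record a timestamp when the module loads, run the eager init work, and expose the delta from the handler so it shows up in your logs.

```typescript
// Record when this execution environment loaded the module.
const moduleLoadedAt = Date.now();

// Stand-in for eager global init (SDK setup, config parsing, etc.).
function expensiveInit(): number {
  let acc = 0;
  for (let i = 0; i < 1_000_000; i++) acc += i;
  return acc;
}
const initResult = expensiveInit();
const initFinishedAt = Date.now();

export function handler(): { initMs: number; sinceLoadMs: number } {
  return {
    initMs: initFinishedAt - moduleLoadedAt,  // time spent in global init
    sinceLoadMs: Date.now() - moduleLoadedAt, // age of this environment
  };
}
```

On a cold start, `initMs` is paid before your first response; on warm invocations it is zero extra cost, which is exactly why moving work out of global scope changes the cold path only.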

A Better Pattern

Keep global scope light:

import { S3Client } from "@aws-sdk/client-s3";

// Created once per execution environment and reused across warm
// invocations; constructing a bare client is cheap, so it can stay global.
const s3 = new S3Client({});

export async function handler(event: unknown) {
  // Use the shared client here rather than constructing one per request.
  return { ok: true };
}

And move genuinely rare heavy work deeper into the execution path:

export async function handler(event: { mode: string }) {
  if (event.mode === "pdf") {
    // Lazy-load the heavy module only on the rare path that needs it.
    // The module system caches it, so later warm "pdf" calls skip the cost.
    const { renderPdf } = await import("./pdf.js");
    return renderPdf(event);
  }

  return { ok: true };
}

That does not make every request faster. It makes the common path cheaper to start.
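The same idea applies to expensive client construction: defer it until first use, then cache the instance for warm invocations. A sketch of the get-or-create pattern (the `HeavyClient` type and its constructor are stand-ins, not a real SDK):

```typescript
// Stand-in for a client whose construction is expensive
// (connection pool, TLS handshake, credential resolution).
type HeavyClient = { query: (q: string) => string };

let client: HeavyClient | undefined;

function getClient(): HeavyClient {
  if (!client) {
    // Paid once per execution environment, and only on paths that need it.
    client = { query: (q: string) => `result:${q}` };
  }
  return client;
}

export function handler(event: { q: string }): string {
  // Cold path pays construction once; warm invocations reuse the instance.
  return getClient().query(event.q);
}
```

Compared to eager global construction, this moves the cost off the cold-start path entirely, at the price of a slightly slower first request that hits the heavy path.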

Trade-Offs

Provisioned or prewarmed concurrency can reduce cold-start pain, but it also changes the cost model. The right answer depends on whether the function is user-facing, whether traffic is bursty, and whether latency matters enough to justify the spend.
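A back-of-envelope comparison can ground that decision. The sketch below uses entirely made-up inputs (the per-hour price, request volume, and cold-start rate are assumptions, not real provider pricing): compute the monthly cost of keeping an instance warm versus the number of cold starts you would otherwise eat.

```typescript
// Hypothetical inputs throughout; substitute your provider's real numbers.
function provisionedMonthlyCost(
  instances: number,
  dollarsPerInstanceHour: number,
): number {
  return instances * dollarsPerInstanceHour * 24 * 30; // 30-day month
}

function coldStartsPerMonth(
  requestsPerMonth: number,
  coldStartRate: number,
): number {
  return requestsPerMonth * coldStartRate;
}

// Example: one prewarmed instance at an assumed $0.01/hour,
// versus 1M requests/month with an assumed 0.2% cold-start rate.
const cost = provisionedMonthlyCost(1, 0.01);       // ≈ 7.2 dollars/month
const colds = coldStartsPerMonth(1_000_000, 0.002); // ≈ 2000 cold starts/month
```

If those cold starts land on user-facing requests, a few dollars a month may be an easy trade; if they land on background jobs, the spend buys nothing.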
