
Denormalization Is Worth It Only When the Read Pattern Justifies Duplication

Joins are not automatically the enemy. Denormalization pays off when the workload is predictable enough that duplicated data is cheaper than repeatedly reconstructing it.

Published: January 5, 2025
Reading time: 4 min read

Teams often denormalize for the wrong reason: they are afraid of joins in general. That is usually a sign of vague performance folklore, not measurement.

The stronger reason to denormalize is workload shape. If a read path is hot, stable, and expensive to reconstruct over and over, then duplicating a carefully chosen slice of data can be the right trade.

Where Denormalization Actually Helps

A common example is an orders dashboard that always needs the same joined summary:

  • order id
  • customer name
  • current status
  • total amount

If that page is extremely hot, one option is to maintain a read-optimized table or materialized view:

CREATE MATERIALIZED VIEW order_summaries AS
SELECT
  o.id,
  c.name AS customer_name,
  o.status,
  o.total_amount
FROM orders o
JOIN customers c ON c.id = o.customer_id;
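A materialized view is only current as of its last refresh, so the duplication comes with a maintenance step. Assuming Postgres, a refresh might be scheduled or run after batch writes:

```sql
-- Recompute the projection; readers see the old contents until it finishes.
REFRESH MATERIALIZED VIEW order_summaries;

-- With a unique index on id, CONCURRENTLY avoids blocking readers
-- at the cost of a slower refresh:
-- REFRESH MATERIALIZED VIEW CONCURRENTLY order_summaries;
```

How often to refresh is itself a workload question: a dashboard that tolerates minutes of staleness is a much easier fit than one that must reflect every write.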

That is a performance technique. It is not a reason to abandon relational design everywhere else.

The Trade-Off

Denormalization buys read speed by introducing write complexity:

  • duplicated data
  • refresh logic
  • invalidation rules
  • drift risk when the projection is stale

If the query is not actually hot, or if indexing fixes the problem, denormalization may just create more maintenance work than value.
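To make the write-side cost concrete: if the summary were kept as a plain table rather than a periodically refreshed view, every source of drift needs explicit handling. A sketch of one such rule, in Postgres syntax with illustrative names, propagating a customer rename into the duplicated column:

```sql
-- Hypothetical sync rule for a plain order_summaries table.
-- Every duplicated field needs a rule like this, or it drifts.
CREATE FUNCTION propagate_customer_rename() RETURNS trigger AS $$
BEGIN
  UPDATE order_summaries
     SET customer_name = NEW.name
   WHERE id IN (SELECT o.id FROM orders o WHERE o.customer_id = NEW.id);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customers_rename
AFTER UPDATE OF name ON customers
FOR EACH ROW EXECUTE FUNCTION propagate_customer_rename();
```

One trigger is manageable; the cost compounds as more fields are duplicated and more write paths have to be covered.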

Better Rule

Start with normalized tables and good indexes. Measure the real bottleneck. Denormalize only when the read pattern is stable enough that duplication is cheaper than recomputation.
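"Measure the real bottleneck" can be as simple as asking the planner before reaching for duplication. Assuming Postgres, and using the dashboard query from above:

```sql
-- Inspect the actual plan and timings before denormalizing.
EXPLAIN ANALYZE
SELECT o.id, c.name AS customer_name, o.status, o.total_amount
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.status = 'open';

-- If the plan shows a sequential scan on a selective predicate,
-- an index may be the entire fix:
CREATE INDEX IF NOT EXISTS idx_orders_status ON orders (status);
```

If the indexed plan is fast enough, the conversation about denormalization ends there.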

Joins are not the enemy. Unmeasured architecture decisions are.
