If a Kafka consumer triggers a side effect, assume duplicates are possible until you have proven otherwise.
That is the safer default.
Kafka has exactly-once features, but engineers often over-interpret what those guarantees cover. Exactly-once semantics apply to Kafka's own read-process-write loops, such as transactional Kafka Streams pipelines that consume from and produce to Kafka topics. They do not magically make every downstream database write or external API call duplicate-free.
The Consumer Problem
The risky pattern looks like this:
1. consume a message
2. update the database
3. commit the offset
If the service crashes after step 2 but before step 3, the offset is never committed, so on restart the message is redelivered and the database update runs a second time.
That is not Kafka being broken. It is normal at-least-once behavior in a distributed system under failure.
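The failure window is easy to see in a toy simulation. This sketch does not use a real Kafka client; the list, the offset variable, and all names here are stand-ins invented for illustration, with a crash injected between the side effect and the offset commit:

```python
# Toy simulation of at-least-once delivery. No real Kafka involved:
# `events` stands in for a partition, `committed_offset` for consumer
# progress, and `charges_applied` for a database side effect.

events = ["charge:order-1", "charge:order-2"]
committed_offset = 0   # last acknowledged position
charges_applied = []   # side effects that actually happened

def run_consumer(crash_before_commit_at=None):
    """Process events from the committed offset; optionally crash
    after the side effect but before committing progress."""
    global committed_offset
    offset = committed_offset
    while offset < len(events):
        charges_applied.append(events[offset])   # side effect first...
        if offset == crash_before_commit_at:
            return                               # crash: offset never committed
        committed_offset = offset + 1            # ...acknowledge second
        offset += 1

run_consumer(crash_before_commit_at=1)  # crash right after charging order-2
run_consumer()                          # restart: order-2 is redelivered
print(charges_applied)
# → ['charge:order-1', 'charge:order-2', 'charge:order-2']
```

The replayed message reruns the side effect because nothing in the handler records that order-2 was already charged, which is exactly the gap an idempotency key closes.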
Use an Idempotency Key
Every event that can trigger a side effect should carry a stable identifier: