LLMs Don't Hallucinate – They Drift
Summary
The article argues that LLM errors stem less from isolated hallucinations than from semantic drift that accumulates over long contexts and sessions. It introduces a framework for measuring fidelity decay, semantic drift, and eventual collapse in model outputs, along with evaluation protocols for each. The goal is to give teams running production AI systems in business settings robust metrics for monitoring and improving reliability.
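To make the idea of "measuring semantic drift" concrete, here is a minimal sketch of one common approach: score each successive output against a fixed reference and treat falling similarity as drift. The bag-of-words cosine similarity, the `drift_scores` helper, and the sample texts below are illustrative assumptions, not the article's actual framework, which may use embeddings or other metrics.

```python
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def drift_scores(reference: str, outputs: list[str]) -> list[float]:
    """Drift at each step = 1 - similarity to the original reference.

    Higher values mean the output has moved further from the reference;
    a sustained upward trend is the drift signal worth monitoring.
    """
    return [1.0 - cosine_sim(reference, out) for out in outputs]

# Hypothetical example: three successive model restatements of one fact.
reference = "the invoice totals are reconciled monthly by the finance team"
outputs = [
    "the invoice totals are reconciled monthly by the finance team",
    "invoice totals get reconciled each month by finance",
    "billing figures are checked quarterly by accounting",
]
scores = drift_scores(reference, outputs)
# Drift grows as the wording (and eventually the meaning) diverges.
```

In a production monitor, the same loop would run over real model outputs with a stronger similarity measure (e.g. sentence embeddings), alerting when drift crosses a threshold rather than waiting for an outright collapse.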