Large Language Model Reasoning Failures
Summary
This arXiv paper provides a comprehensive survey of reasoning failures in large language models, introducing a taxonomy that distinguishes embodied from non-embodied reasoning, with non-embodied reasoning further split into informal and formal reasoning. It classifies failures into fundamental, application-specific, and robustness issues, analyzes their root causes, surveys existing work, and proposes mitigation strategies. The authors also provide a GitHub repository of related works to guide future research.