DigiNews

Tech Watch Articles


Large Language Model Reasoning Failures

Quality: 9/10 Relevance: 9/10

Summary

This arXiv paper provides a comprehensive survey of reasoning failures in large language models. It introduces a taxonomy that distinguishes embodied from non-embodied reasoning, with non-embodied reasoning further split into informal and formal reasoning. The paper classifies failures into fundamental, application-specific, and robustness issues; analyzes their root causes; surveys existing work; and proposes mitigation strategies. The authors also provide a GitHub repository of related works to guide future research.
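The two-level taxonomy described above can be sketched as a simple nested data structure. This is an illustrative representation of the category names mentioned in the summary, not code from the paper; the structure and helper function are assumptions for clarity.

```python
# Illustrative sketch (assumed, not from the paper): the survey's taxonomy
# of LLM reasoning, plus its three failure classes, as nested data.
TAXONOMY = {
    "reasoning_types": {
        "embodied": [],
        "non_embodied": ["informal", "formal"],
    },
    "failure_classes": [
        "fundamental",
        "application-specific",
        "robustness",
    ],
}


def is_valid_label(reasoning_type: str, failure_class: str) -> bool:
    """Check whether a (reasoning type, failure class) pair fits the taxonomy."""
    return (
        reasoning_type in TAXONOMY["reasoning_types"]
        and failure_class in TAXONOMY["failure_classes"]
    )
```

For example, `is_valid_label("non_embodied", "robustness")` would be accepted, while an unknown class would not.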

🚀 Service built by Johan Denoyer