Contra “Grandmaster-Level Chess Without Search” (2024)
Summary
The article critiques a 2024 DeepMind paper that trains transformer models to play chess without traditional search. It argues that the claimed grandmaster-level performance is not clearly novel, pointing to open-source work by Leela Chess Zero (including its BT4 network) that achieves comparable single-network play, and it questions the evaluation methodology behind the claims of superiority, such as the reliance on Blitz Elo and on one-ply policy lookups. Overall, it calls for more rigorous evaluation and fuller acknowledgment of prior work in computer chess research.