A Theory of Deep Learning
Summary
An in-depth theory piece on generalization in deep learning, proposing a dynamical-systems view of training in output space built on the empirical Neural Tangent Kernel. It surveys established results such as benign overfitting, double descent, and implicit bias, then introduces a unifying framework with practical implications, including training directly on the population risk. The article cites a preprint and links to code and datasets for further exploration.
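The empirical Neural Tangent Kernel mentioned above is, at a given parameter setting, the Gram matrix of parameter gradients of the network outputs: K(x, x') = ⟨∇_θ f(x), ∇_θ f(x')⟩. A minimal sketch below computes it for a one-hidden-layer scalar network; the architecture, sizes, and function names are illustrative assumptions, not the article's exact setup.

```python
import numpy as np

# Illustrative network f(x) = v . tanh(W x); sizes are assumptions.
rng = np.random.default_rng(0)
d, h = 3, 16                          # input dim, hidden width
W = rng.normal(size=(h, d)) / np.sqrt(d)
v = rng.normal(size=h) / np.sqrt(h)

def grad_f(x):
    """Gradient of f(x) w.r.t. all parameters (W, v), flattened."""
    a = np.tanh(W @ x)                # hidden activations
    dv = a                            # df/dv_i = a_i
    dW = np.outer(v * (1 - a**2), x)  # df/dW_ij = v_i (1 - a_i^2) x_j
    return np.concatenate([dW.ravel(), dv])

def empirical_ntk(x1, x2):
    """K(x1, x2) = <grad_theta f(x1), grad_theta f(x2)> at current theta."""
    return grad_f(x1) @ grad_f(x2)

xs = [rng.normal(size=d), rng.normal(size=d)]
K = np.array([[empirical_ntk(a, b) for b in xs] for a in xs])
# K is symmetric positive semidefinite by construction (a Gram matrix).
```

Under gradient descent with small learning rate, the network's outputs on a dataset evolve approximately linearly through this kernel, which is what makes it a natural object for a dynamical-systems view in output space.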