DigiNews



Open Weights Isn't Open Training

Quality: 7/10 Relevance: 8/10

Summary

Open Weights Isn't Open Training argues that post-training trillion-parameter models on open-source ML stacks reveals deep inefficiencies that simple patching cannot fix. The author documents attempts to post-train Kimi-K2-Thinking, discusses the limits of existing open-source tooling, and chronicles a sequence of memory and architecture challenges along with the iterative fixes they required. The piece emphasizes the technical debt in open-source ML infrastructure and the need for robust, well-integrated training stacks rather than patchwork solutions.

🚀 Service built by Johan Denoyer