Muon outperforms every optimizer we tested (AdamW, SOAP, MAGMA). Multi-epoch training matters. And following work by Kotha et al., scaling to large parameter counts works if you pair it with aggressive regularization: weight decay up to 16x the standard value, plus dropout. Our baseline sits at roughly 2.4x the data efficiency of modded-nanogpt.
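To make the recipe concrete, here is a minimal, self-contained sketch of the core Muon update (momentum followed by Newton-Schulz orthogonalization of the step), written in the spirit of the public reference implementation rather than copied from our training code. All hyperparameters are illustrative assumptions; in particular, `weight_decay=0.16` stands in for "16x a 0.01 AdamW default", not our exact setting.

```python
import torch

def newton_schulz(G: torch.Tensor, steps: int = 5, eps: float = 1e-7) -> torch.Tensor:
    """Approximately orthogonalize a 2-D tensor via a quintic Newton-Schulz iteration."""
    a, b, c = 3.4445, -4.7750, 2.0315   # quintic coefficients from the public reference impl
    X = G / (G.norm() + eps)            # normalize so the iteration converges
    transposed = X.size(0) > X.size(1)
    if transposed:                      # keep the Gram matrix on the small side
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X

@torch.no_grad()
def muon_step(weights, bufs, lr=0.02, momentum=0.95, weight_decay=0.16):
    """One Muon step over a list of 2-D weight matrices and their momentum buffers."""
    for W, buf in zip(weights, bufs):
        buf.mul_(momentum).add_(W.grad)                  # update momentum buffer
        update = newton_schulz(W.grad + momentum * buf)  # Nesterov-style step, orthogonalized
        W.mul_(1 - lr * weight_decay)                    # decoupled weight decay (the 16x knob)
        W.add_(update, alpha=-lr)                        # descend along the orthogonalized step

# Toy usage: one step on a single random weight matrix.
W = torch.randn(256, 128, requires_grad=True)
buf = torch.zeros_like(W)
loss = (W ** 2).sum()
loss.backward()
muon_step([W], [buf])
```

In practice Muon is applied only to the 2-D hidden weight matrices, with embeddings, norms, and biases left on AdamW, as in modded-nanogpt; dropout lives in the model definition rather than the optimizer.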