SCALABLE AND PRACTICAL NATURAL GRADIENT FOR LARGE-SCALE DEEP LEARNING
Large-scale distributed training of deep neural networks results in models with worse generalization performance as a result of the increase in the effective mini-batch size. Previous approaches attempt to address this problem by varying the learning rate and batch size over epochs and layers, or through ad hoc modifications of batch normalization. We propo...