Large-scale distributed training of deep neural networks results in models with worse generalization performance, a consequence of the increase in the effective mini-batch size. Previous approaches attempt to address this problem by varying the learning rate and batch size over epochs and layers, or by ad hoc modifications of batch normalization. We propo...