Google Open-Sources GPipe Library to Scale Up Deep Neural Network Training

GPipe allows researchers to easily deploy more accelerators to train larger models and to scale performance without tuning hyperparameters.

The Google AI research team has open-sourced GPipe, a distributed machine learning library for efficiently training large-scale deep neural network models, released under the Lingvo framework.

GPipe uses synchronous stochastic gradient descent and pipeline parallelism for training.
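To make the synchronous part concrete, here is a minimal sketch of how gradients computed on separate micro-batches can be accumulated and applied in a single synchronous SGD update, so the result matches plain SGD on the full mini-batch. This is an illustrative toy, not GPipe's actual implementation; the function names and numbers are hypothetical.

```python
# Conceptual sketch of synchronous SGD with micro-batch gradient
# accumulation (illustrative only; not the GPipe API).

def accumulate_gradients(micro_grads):
    """Average the gradients computed on each micro-batch so the
    update is equivalent to one SGD step on the full mini-batch."""
    n = len(micro_grads)
    return [sum(g[i] for g in micro_grads) / n
            for i in range(len(micro_grads[0]))]

def sgd_step(params, grad, lr=0.1):
    """Apply one synchronous SGD update after all micro-batches finish."""
    return [p - lr * g for p, g in zip(params, grad)]

params = [1.0, -2.0]
micro_grads = [[0.2, 0.4], [0.6, 0.0]]   # gradients from two micro-batches
grad = accumulate_gradients(micro_grads)  # averaged: [0.4, 0.2]
new_params = sgd_step(params, grad)
```

Because all micro-batch gradients are combined before the weights move, every accelerator sees the same weights on the next step, which is what keeps the training synchronous.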

“To enable efficient training across multiple accelerators, GPipe partitions a model across different accelerators and automatically splits a mini-batch of training examples into smaller micro-batches,” the team wrote in a blog post.
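The partitioning-and-splitting idea above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions: the two lambda "stages" stand in for model partitions on different accelerators, and the sequential loop only shows the data flow (in a real pipeline, stage k processes micro-batch i while stage k+1 processes micro-batch i-1, so accelerators work concurrently). None of these names come from the GPipe library itself.

```python
# Illustrative sketch of GPipe-style model partitioning and
# micro-batch pipelining (hypothetical names, not the GPipe API).

def split_into_microbatches(minibatch, num_micro):
    """Split a mini-batch (a list of examples) into smaller micro-batches."""
    size = len(minibatch) // num_micro
    return [minibatch[i * size:(i + 1) * size] for i in range(num_micro)]

def pipeline_forward(stages, microbatches):
    """Push each micro-batch through the partitioned stages in order.

    Sequential here for clarity; a real pipeline overlaps the stages
    so each accelerator stays busy on a different micro-batch.
    """
    outputs = []
    for mb in microbatches:
        x = mb
        for stage in stages:  # each stage lives on its own accelerator
            x = stage(x)
        outputs.append(x)
    return outputs

# Toy "model" partitioned into two stages.
stage1 = lambda xs: [x * 2 for x in xs]
stage2 = lambda xs: [x + 1 for x in xs]

mbs = split_into_microbatches(list(range(8)), num_micro=4)
outs = pipeline_forward([stage1, stage2], mbs)
```

Splitting into micro-batches is what lets the pipeline keep all partitions busy: with only whole mini-batches, each accelerator would sit idle while the others run.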

Accuracy of GPipe

The AI research team used GPipe to verify the hypothesis that scaling up existing neural networks can achieve better model quality. They trained an AmoebaNet-B model with 557 million parameters and a 480 x 480 input image size on the ImageNet ILSVRC-2012 dataset. The model reached 84.3 per cent top-1 / 97 per cent top-5 single-crop validation accuracy without the use of any external data.

The researchers also ran transfer learning experiments on the CIFAR-10 and CIFAR-100 datasets, where they observed that the giant models improved the best published CIFAR-10 accuracy to 99 per cent and CIFAR-100 accuracy to 91.3 per cent.

“The ongoing development and success of many practical machine learning applications, such as autonomous driving and medical imaging, depend on achieving the highest accuracy possible. As this often requires building larger and even more complex models, we are happy to provide GPipe to the broader research community, and hope it is a useful infrastructure for efficient training of large-scale DNNs,” the team said.

Google AI researchers also published a paper titled “GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism”, in which they demonstrated the use of pipeline parallelism to scale up deep neural networks beyond the memory limitations of current accelerators.
