Google develops Tensor2Tensor library to ease deep learning training

Widening its presence in the world of artificial intelligence, Google has released an open source library called Tensor2Tensor. The new library eases the training of deep learning models in the TensorFlow framework.

The Tensor2Tensor (T2T) library can be used for applications such as text translation, parsing and image captioning, and it speeds up exploring different approaches to these tasks. The release is aimed at developers who are just starting to explore and experiment with deep learning for their applications. Notably, Google has bundled relevant datasets, models and configurations to serve as references and starting points for more advanced use.

“T2T is flexible, with training no longer pinned to a specific model or dataset. It is so easy that even architectures like the famous LSTM sequence-to-sequence model can be defined in a few dozen lines of code,” Lukasz Kaiser, Senior Research Scientist at Google Brain, writes in a blog post.

The library can help you achieve state-of-the-art results with a single GPU. Google has built the T2T structure primarily from the tools available in TensorFlow. In addition, the system uses a standard interface across all the components of a deep learning system, including datasets, optimisers, models and hyperparameters.
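To give a flavour of that standard interface, hyperparameter sets in T2T are ordinary registered Python functions that can build on one another. The snippet below is a minimal sketch assuming a recent T2T release; the exact module paths, the transformer_base baseline used here and the individual field names may differ between versions.

```python
# Sketch: registering a custom hyperparameter set in Tensor2Tensor.
# Assumes a recent T2T release; names such as transformer_base and the
# individual hparams fields may differ between versions.
from tensor2tensor.models import transformer
from tensor2tensor.utils import registry


@registry.register_hparams
def transformer_tiny_experiment():
  """Hypothetical hparams set derived from the registered transformer_base."""
  hparams = transformer.transformer_base()
  hparams.num_hidden_layers = 2   # shrink the model for a quick experiment
  hparams.hidden_size = 256
  return hparams
```

Once registered, such a set can be selected by name at training time, for example via the --hparams_set flag of the t2t-trainer tool.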

Google’s Brain team has opted for a modular architecture. Furthermore, the library lets developers swap in different versions of the available components to see how they perform together.
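In practice, this swapping works because every dataset (problem), model and hyperparameter set is registered under a name and looked up from a central registry. The snippet below is only a rough sketch assuming a recent T2T release; the lookup helpers and the registered names shown are examples and may vary between versions.

```python
# Sketch: T2T components are looked up by name, so trying a different model
# is largely a matter of changing a string.
# Assumes a recent T2T release; helper functions and registered names
# (e.g. "transformer", "lstm_seq2seq") may differ between versions.
from tensor2tensor import models  # importing this package registers the bundled models
from tensor2tensor.utils import registry

print(registry.list_models())               # names of all registered models

model_cls = registry.model("transformer")   # a registered T2TModel subclass
other_cls = registry.model("lstm_seq2seq")  # swap in a different architecture
```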

Seeking contributions

While Google has built the core features needed for training deep learning models into T2T, developer contributions are also anticipated. The engineers have therefore made it possible for external developers to define their own models and add their own datasets to the library.
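As a rough sketch of what defining your own model looks like, assuming a recent T2T release in which models subclass T2TModel and implement a body() method (the class and layer below are hypothetical):

```python
# Sketch: registering a user-defined model so it can be selected by name.
# Hypothetical example; assumes a recent T2T release in which models
# subclass T2TModel and implement body().
import tensorflow as tf

from tensor2tensor.utils import registry
from tensor2tensor.utils import t2t_model


@registry.register_model
class MyTinyModel(t2t_model.T2TModel):
  """Hypothetical model: one dense hidden layer over the embedded inputs."""

  def body(self, features):
    inputs = features["inputs"]
    hidden = tf.layers.dense(inputs, self.hparams.hidden_size,
                             activation=tf.nn.relu)
    return hidden
```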

“We believe that already included models will perform very well for many NLP tasks, so just adding your data-set might lead to interesting results,” Kaiser highlights.
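Datasets are added in a similar way, by registering a “problem”. The snippet below is a hypothetical sketch assuming a recent T2T release that provides the Text2TextProblem base class; the exact base classes and hooks have changed across releases.

```python
# Sketch: registering a user-defined dataset ("problem") with T2T.
# Hypothetical example; assumes a recent T2T release that provides the
# Text2TextProblem base class and its generate_samples() hook.
from tensor2tensor.data_generators import text_problems
from tensor2tensor.utils import registry


@registry.register_problem
class MyTranslateToy(text_problems.Text2TextProblem):
  """Hypothetical text-to-text problem built from in-memory pairs."""

  def generate_samples(self, data_dir, tmp_dir, dataset_split):
    # A real problem would read its own data files here instead.
    del data_dir, tmp_dir, dataset_split
    yield {"inputs": "hello world", "targets": "bonjour le monde"}
```

Once registered, such a problem can be picked up by name during data generation and training, alongside the models already shipped with the library.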

You can access the T2T code and documentation on GitHub to test the available deep learning models, and you can contribute to the library through the same repository.
