- It can reduce AI model training costs by as much as 80 per cent
- SEED RL is built atop the TensorFlow 2.0 framework
Researchers at Google have open-sourced SEED RL, a framework that can scale artificial intelligence model training across thousands of machines.
Google noted in a research paper that this enables AI algorithm training at millions of frames per second, while reducing the cost of doing so by as much as 80 per cent.
SEED RL is built atop the TensorFlow 2.0 framework. It leverages a combination of graphics processing units and tensor processing units to centralize model inference, which is performed by the same learner component that trains the model.
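To illustrate the architectural split described above, here is a minimal, hypothetical sketch in plain Python (not the actual SEED RL API): actors only step their environments, while a central learner holds the model and performs inference in batches. The `Learner` and `Actor` classes and the toy `weight` parameter are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of SEED RL's split: actors step environments,
# while a central learner performs all model inference in batches.

@dataclass
class Learner:
    """Central learner: owns the model and performs batched inference."""
    weight: float = 0.5  # toy "model" parameter, kept only on the learner

    def infer(self, observations: List[float]) -> List[int]:
        # Batched inference: one pass over all actors' observations.
        return [1 if obs * self.weight > 0 else 0 for obs in observations]

@dataclass
class Actor:
    """Lightweight actor: steps its environment, holds no model weights."""
    state: float = 1.0

    def observe(self) -> float:
        return self.state

    def step(self, action: int) -> None:
        self.state += 1.0 if action == 1 else -1.0

def run_step(learner: Learner, actors: List[Actor]) -> List[int]:
    # One environment step: gather observations, infer centrally, act.
    observations = [a.observe() for a in actors]
    actions = learner.infer(observations)
    for actor, action in zip(actors, actions):
        actor.step(action)
    return actions
```

Because inference is batched on the learner, accelerators such as GPUs and TPUs stay fully utilized instead of sitting idle while each actor runs its own copy of the model.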
Network library
The target model’s variables and state information are kept local, while observations are sent to the learner at every environment step. SEED RL also uses a network library based on gRPC, an open-source universal Remote Procedure Call (RPC) framework, to minimize latency.
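The per-step round trip can be sketched with standard-library threads and queues standing in for the gRPC channel (this is an assumption-laden illustration, not SEED RL's real networking code): the actor sends only an observation and blocks until the learner replies with an action, so model variables never cross the network.

```python
import queue
import threading

# Hypothetical stand-in for SEED RL's per-step RPC: an actor sends one
# observation and waits for one action; the learner answers centrally.

def learner_loop(requests: "queue.Queue", stop: threading.Event) -> None:
    """Central learner thread: answers each observation with an action."""
    while not stop.is_set():
        try:
            observation, reply = requests.get(timeout=0.1)
        except queue.Empty:
            continue
        action = 1 if observation > 0 else 0  # stand-in for model inference
        reply.put(action)

def actor_step(requests: "queue.Queue", observation: float) -> int:
    """Actor side of the call: send one observation, wait for one action."""
    reply: "queue.Queue" = queue.Queue()
    requests.put((observation, reply))
    return reply.get()  # blocks, like a unary RPC call

requests: "queue.Queue" = queue.Queue()
stop = threading.Event()
thread = threading.Thread(target=learner_loop, args=(requests, stop))
thread.start()

actions = [actor_step(requests, obs) for obs in (2.0, -3.0, 0.5)]

stop.set()
thread.join()
```

Keeping each message down to a single observation and a single action is what makes the low-latency RPC layer matter: the round trip happens at every environment step.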
Google evaluated SEED RL’s efficiency by benchmarking it on the Arcade Learning Environment, the Google Research Football environment, and various DeepMind Lab environments. According to the results, it solved a Google Research Football task while training the model at 2.4 million frames per second using 64 Cloud Tensor Processing Unit chips.