
Nvidia AI Enterprise 2.1 Strengthens Support for Open Source

Nvidia is today releasing version 2.1 of its AI Enterprise software suite, which is designed to run artificial intelligence (AI) and machine learning (ML) workloads for enterprise use cases. Nvidia AI Enterprise first became generally available in August 2021 as a collection of supported AI and ML tools optimized for Nvidia hardware. A key element of the current release is an updated list of supported versions of popular open source tools, including PyTorch and TensorFlow. The package also adds the 22.04 update of Nvidia's RAPIDS open source libraries for running data science pipelines on GPUs, along with the new Nvidia TAO 22.05 low-code and no-code toolkit for computer vision and speech applications.
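Among those components, RAPIDS provides GPU-accelerated, pandas-like data frames through its cuDF library. As a minimal sketch of the kind of data science pipeline the suite supports (the file path and column names below are hypothetical), a cuDF workflow looks much like its CPU-based counterpart:

```python
# Minimal sketch: a GPU-accelerated data pipeline using RAPIDS cuDF.
# The CSV path and column names are illustrative placeholders.
import cudf

# Load data directly into GPU memory with a pandas-like API.
df = cudf.read_csv("sensor_readings.csv")

# Filter and aggregate on the GPU.
hot = df[df["temperature"] > 70.0]
summary = hot.groupby("machine_id")["temperature"].mean()

# Bring the small result back to the CPU for reporting.
print(summary.to_pandas())
```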

“Over the last couple of years, what we’ve seen is the growth of AI being used to solve a bunch of problems and it is really driving automation to improve operational efficiency,” said Justin Boitano, VP of enterprise and edge computing at Nvidia. “Ultimately, as more organizations get AI into a production state, a lot of companies will need commercial support on the software stack that has traditionally just been open source.”

A typical approach with open source software is an “upstream” community, where cutting-edge development happens in the open. Vendors like Nvidia can and do contribute code upstream, while “downstream” they offer commercially supported products such as Nvidia AI Enterprise.

The open source components of Nvidia AI Enterprise also benefit from integration testing across multiple frameworks and hardware configurations, which helps ensure the software works as intended. Another important aspect of enterprise support is making cloud deployment of the various AI products easier; for teams without prior experience, installing and configuring AI tools can be a difficult undertaking.

One of the most widely used approaches to cloud deployment today is a cloud-native model built on containers and Kubernetes. According to Boitano, Nvidia AI Enterprise is delivered as a set of containers, together with a Helm chart (an application manifest for Kubernetes) that automates installation and configuration of the AI tools in the cloud. An even simpler route is Nvidia LaunchPad, a hosted service running on Nvidia infrastructure for trying out the tools and frameworks supported by the AI Enterprise software suite.
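The Helm chart handles the Kubernetes plumbing automatically, but the kind of GPU-enabled deployment such a chart creates can be sketched with the Kubernetes Python client. The container image, object names, and single-GPU resource request below are illustrative assumptions, not the contents of Nvidia's actual chart:

```python
# Sketch only: creating a GPU-enabled Kubernetes Deployment in Python.
# Image tag, names, and namespace are hypothetical; in practice the Helm
# chart automates this kind of object creation.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

container = client.V1Container(
    name="inference-server",
    image="nvcr.io/nvidia/tritonserver:22.05-py3",  # illustrative NGC image
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="inference-server"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "inference"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```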

Making it simpler to build models for computer vision and speech recognition use cases is one of the main objectives of Nvidia's TAO toolkit, which is part of the Nvidia AI Enterprise 2.1 update. Boitano explained that TAO gives businesses a low-code way to adapt an existing pretrained model to a user's particular environment and data. Computer vision in manufacturing is one example of where TAO can be useful.
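TAO itself is driven by its own launcher and configuration files rather than hand-written training code, but the underlying idea is transfer learning: starting from a pretrained network and fine-tuning it on a customer's own labeled images. A plain PyTorch sketch of that idea (the dataset directory and model choice below are illustrative assumptions, not TAO's API) looks like this:

```python
# Rough illustration of transfer learning, the idea behind TAO's low-code
# model adaptation. Dataset path and class layout are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("factory_images/", transform=transform)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))

# Fine-tune only the new head on the customer's labeled images.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```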

Different factories can have different lighting conditions, which may cause camera glare that impairs recognition. Being able to relabel a large amount of data from a specific setting, where the lighting differs from what the pretrained model saw, and then retrain on it can increase accuracy. Looking ahead, Boitano said the goal for upcoming Nvidia AI Enterprise releases is to keep making it simpler for businesses to use the various toolkits to put AI and ML workflows into production.
