Nvidia-Microsoft Partnership: What You Need To Know

The insatiable demand for robust AI training infrastructure has sparked something of an arms race among cloud and hardware vendors.

On Wednesday, November 16, Nvidia Corp. and Microsoft announced a partnership to build a supercomputer designed for running artificial intelligence software.

Nvidia Corp. is a global leader in artificial intelligence software and hardware. The company designs GPUs, APIs for data science and high-performance computing, and systems-on-chip (SoCs) for the automotive market and mobile computing.

According to both companies, the new system will be one of the most powerful AI supercomputers for handling cutting-edge artificial intelligence workloads in the cloud. It will run on Microsoft Azure, using large numbers of graphics processing units (GPUs), including Nvidia's current flagship data center GPU, the H100, as well as its A100 chips.

The H100 was introduced in March. It features 80 billion transistors and can train AI models multiple times faster than Nvidia's previous-generation A100 graphics card. The H100 also includes optimizations that allow it to run Transformer models more efficiently.

As part of its collaboration with Microsoft, Nvidia says it will use Azure virtual machine instances to research advances in generative AI: self-learning algorithms that can create code, text, images, video, or audio.

Simultaneously, Microsoft will optimize its DeepSpeed library for the new Nvidia hardware, aiming to reduce memory usage and computing power during AI training. It will also work with Nvidia to make the chipmaker's collection of software development kits and AI workflows available to Azure enterprise customers.
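For context on what "optimizing DeepSpeed" means in practice: DeepSpeed's memory-saving techniques are switched on through a JSON-style training configuration. As a rough illustration (the values below are hypothetical, though `train_batch_size`, `fp16`, and `zero_optimization` are standard DeepSpeed config keys), a config enabling half-precision training and partitioned optimizer state might look like:

```python
# Hypothetical sketch of a DeepSpeed training config dictionary.
# The keys are real DeepSpeed options; the values are illustrative only.
ds_config = {
    "train_batch_size": 256,            # global batch size across all GPUs
    "fp16": {"enabled": True},          # 16-bit training roughly halves activation memory
    "zero_optimization": {"stage": 2},  # ZeRO stage 2 shards optimizer state across GPUs
}
```

Settings like these are the levers Microsoft would tune for the H100: lower-precision arithmetic and sharded optimizer state both cut per-GPU memory, letting larger models fit on the same hardware.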

The optimization effort will enable developers to speed up AI models that use the Transformer neural network architecture. The speedup is made possible by the Transformer Engine built into Nvidia's H100 graphics card.

According to the chipmaker, the Transformer Engine speeds up neural networks by reducing the amount of data they must process to complete calculations. Nvidia did not disclose how much the deal is worth, but industry sources say each A100 chip is priced at about $10,000 to $12,000, and the H100 is significantly more expensive.

Scott Guthrie, executive vice president of Microsoft’s cloud and AI group, said in a statement, “Our collaboration with Nvidia unlocks the world’s most scalable supercomputer platform, which delivers state-of-the-art AI capabilities for every enterprise on Microsoft Azure.”

“AI technology advances, as well as industry adoption, are accelerating,” said Manuvir Das, the vice president of enterprise computing at Nvidia. “The breakthrough of foundation models has triggered a tidal wave of research, fostered new startups, and enabled new enterprise applications. Our collaboration with Microsoft will provide researchers and companies with state-of-the-art AI infrastructure and software to capitalize on the transformative power of AI.”

AI is fueling the future of automation across industrial computing and enterprises, allowing organizations and individuals to do more with less as they navigate economic uncertainties. What last-mile possibilities do you foresee with this partnership?

Join the conversation; follow us on Facebook, Instagram, and Twitter at GoSpeedHub.

Photo by Turag Photography
