HGX-1: Nvidia and Microsoft Combine to Develop an AI Accelerator

HGX-1 AI Accelerator. Graphics chip maker Nvidia has teamed up with software giant Microsoft.

HGX-1 Nvidia and Microsoft AI Accelerator

Together they aim to develop a new accelerator under the name HGX-1. The new AI accelerator will be integrated into data-center servers to provide a faster, more flexible platform for applications that make intensive use of artificial intelligence. The field is still in its infancy, but this new device is expected to produce great results.

HGX-1 aims to do for cloud-based artificial intelligence workloads what the ATX (Advanced Technology eXtended) standard, introduced more than two decades ago, did for personal computer motherboards: set an industry standard that can be adopted quickly and efficiently to meet growing market demand.

HGX-1 Supports Multiple GPUs

The new architecture is specifically designed to meet the growing demand for artificial-intelligence computing in the cloud, in fields such as autonomous driving, personalized medical care, and superhuman voice recognition, as well as data and video analysis, molecular simulation, and other areas of medicine and chemistry.

The new device houses eight Nvidia Tesla P100 graphics cards in each chassis and features an innovative switching design based on NVLink interconnect technology and the PCIe standard. This allows a CPU to dynamically connect to any number of GPUs, so cloud service providers that build their infrastructure on HGX-1 can offer a wide range of machine configurations.
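To make this concrete, the sketch below shows how an application might discover the GPU topology it has been given on such a machine: it enumerates the visible GPUs and checks which pairs support direct peer-to-peer access over NVLink or PCIe. This is a minimal illustration using standard CUDA runtime calls, not code from Nvidia or Microsoft's announcement.

```c
// Minimal sketch (assumption, not from the announcement): enumerate the GPUs
// visible to this host and check which pairs can access each other directly,
// which is how an application would probe the switched NVLink/PCIe topology
// exposed by a multi-GPU chassis such as the HGX-1.
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA-capable devices found\n");
        return 1;
    }
    printf("GPUs visible to this host: %d\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s\n", i, prop.name);

        for (int j = 0; j < count; ++j) {
            if (i == j) continue;
            int canAccess = 0;
            // Peer access is reported when the two GPUs share an NVLink
            // or PCIe path that supports direct memory access.
            cudaDeviceCanAccessPeer(&canAccess, i, j);
            printf("  peer access %d -> %d: %s\n", i, j,
                   canAccess ? "yes" : "no");
        }
    }
    return 0;
}
```

On a chassis with more GPUs or a different switch configuration, the same program simply reports a different device count and peer-access map, which reflects the flexibility the paragraph above describes.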

Workloads in the cloud are more diverse and complex than ever before. AI training, inference, and HPC each run optimally in different system configurations, with a CPU attached to a varying number of GPUs. The highly modular design of the HGX-1 allows for optimal performance regardless of the workload.

It provides up to 100 times the deep learning performance of servers based on previous-generation processors, at an estimated one fifth of the cost for AI training and one tenth of the cost for AI inference.