Tuesday, October 8, 2024

Ampere: Nvidia brings A100 with 80GB HBM2e as a PCIe card

Image: Nvidia

Nvidia now also offers the “A100 Tensor Core GPU” based on the Ampere architecture as a PCIe card with 80 instead of 40 GB of memory. Buyers should benefit from the doubled memory in AI training, especially with particularly large models. Many vendors in the server environment already support the card.

After the A100 as an SXM4 module, the A100 as a PCIe card, and the A100 80GB as an SXM4 module, the A100 80GB PCIe introduced today is the fourth implementation of the “A100 Tensor Core GPU”.

HBM2e provides 2 TB/s of memory bandwidth

The “A100 80GB GPU” is structurally comparable to the existing “A100 Tensor Core GPU”; the hardware changes are limited to the memory. With it, Nvidia switches the PCIe card from HBM2 to HBM2e, following the SXM4 module. In HBM2e, a memory stack consists of up to eight 16-Gbit chips stacked on top of each other, so a single stack can hold up to 16 GB instead of the 8 GB possible with the HBM2 already used by Volta. As with the “A100 Tensor Core GPU”, six HBM2e stacks appear to be gathered around the GPU, but in fact there are five stacks of 16 GB each, for a total of 80 GB, plus a dummy stack that evens out the contact pressure of the large passive cooler. With five memory stacks of 410 GB/s each, Nvidia breaks the corresponding mark with 2.002 TB/s.
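The stack arithmetic above can be checked with a short back-of-the-envelope sketch. All values are taken from the article itself (16-Gbit dies, eight dies per stack, five active stacks, roughly 410 GB/s per stack); note that multiplying the quoted per-stack bandwidth out gives a figure just over the 2 TB/s mark.

```python
# Back-of-the-envelope check of the memory figures quoted in the article.
# All constants come from the text; this is an illustrative sketch, not Nvidia data.

HBM2E_CHIP_GBIT = 16           # capacity per DRAM die (Gbit)
DIES_PER_STACK = 8             # up to eight dies per HBM2e stack
ACTIVE_STACKS = 5              # five real stacks (the sixth is a dummy)
BANDWIDTH_PER_STACK_GBS = 410  # per-stack bandwidth quoted in the article

capacity_per_stack_gb = HBM2E_CHIP_GBIT * DIES_PER_STACK // 8  # Gbit -> GB
total_capacity_gb = capacity_per_stack_gb * ACTIVE_STACKS
total_bandwidth_tbs = ACTIVE_STACKS * BANDWIDTH_PER_STACK_GBS / 1000

print(capacity_per_stack_gb)  # 16 GB per stack
print(total_capacity_gb)      # 80 GB in total
print(total_bandwidth_tbs)    # just over the 2 TB/s mark
```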

Especially large models should benefit from the doubled memory during AI training. When Ampere was introduced, Nvidia already said that the architecture had been developed for the exponentially growing resource requirements of neural network training and inference in the data center. Examples for the use of the 80 GB variant are provided in the article announcing the SXM4 module with the same configuration.

The passive card relies on airflow in the server

ComputerBase has received confirmation from Nvidia that the 80GB PCIe A100 is specified at 300 watts. That is 50 watts more than the A100 PCIe, while there is no TDP differentiation between the 40GB and 80GB SXM4 modules. Despite the TDP being lower than that of the SXM4 module, Nvidia advertises the same performance figures, though these are maximum values.

The 80GB PCIe A100 comes as a passively cooled, dual-slot card that relies on airflow in the server for active cooling. Server vendors supporting the card include Atos, Cisco, Dell Technologies, Fujitsu, H3C, HPE, Inspur, Lenovo, Penguin Computing, QCT, and Supermicro. Cloud providers such as Amazon Web Services, Microsoft Azure, and Oracle Cloud Infrastructure are also on board.

