Nvidia has announced plans to build the world's fastest AI supercomputer. Having benefited hugely from the AI boom of the last decade, the company wants to provide more firepower to meet demand for data-intensive deep learning. It has also announced details of its new silicon architecture, Hopper.

OpenAI's GPT-3 and DeepMind's AlphaFold exemplify the latest generation of large AI models, which have grown exponentially in size over the space of a few years. “Training these giant models still takes months,” said Nvidia senior director of product management Paresh Kharya in a press briefing.

Nvidia's new Hopper architecture is named after pioneering computer scientist and US Navy Rear Admiral Grace Hopper. Nvidia says the architecture accelerates the training of Transformer models on H100 GPUs by six times compared with previous-generation chips. The H100 GPU itself contains 80 billion transistors and is the first GPU to support PCIe Gen5 and utilize HBM3.

Alongside the H100, Nvidia announced the Grace CPU Superchip, a new low-latency CPU intended for both CPU-only and GPU-accelerated servers. The company says the H100 is three times faster than the previous-generation A100 overall, and six times faster at 8-bit floating-point math.
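The speedup at 8-bit precision comes from packing each value into far fewer bits than the usual 32-bit float, at the cost of coarser rounding. As a rough illustration (this is not Nvidia code, and real hardware formats handle subnormals, NaN, and saturation that are skipped here), the sketch below rounds a number to a simplified E4M3-style 8-bit float: 1 sign bit, 4 exponent bits, and 3 mantissa bits.

```python
import math

def to_fp8_e4m3(x: float) -> float:
    """Round x to the nearest value representable in a simplified
    E4M3-style 8-bit float (1 sign, 4 exponent, 3 mantissa bits).
    Illustrative only: subnormals, NaN, and saturation are ignored."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    m = abs(x)
    # Pick the power-of-two exponent, clamped to the normal range.
    e = max(min(math.floor(math.log2(m)), 8), -6)
    frac = m / 2.0 ** e            # significand in [1, 2) for normal values
    frac = round(frac * 8) / 8     # keep only 3 fractional mantissa bits
    return sign * frac * 2.0 ** e

# Exactly representable values survive the round trip; others are
# snapped to the nearest 8-bit-representable neighbour.
print(to_fp8_e4m3(1.0))   # 1.0
print(to_fp8_e4m3(0.1))   # 0.1015625
```

With only 3 mantissa bits, relative rounding error can exceed 1%, which is why training at 8-bit precision is typically mixed with higher-precision accumulation rather than used alone.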