Nvidia announces new HGX H200 computing platform, with advanced memory to handle AI workloads

Nvidia Corp. today announced the introduction of the HGX H200 computing platform, a new powerful system that features the upcoming H200 Tensor Core graphics processing unit based on its Hopper architecture, with advanced memory to handle the massive amounts of data needed for artificial intelligence and supercomputing workloads.

The company announced the new platform (pictured) during today’s Supercomputing 2023 conference in Denver, Colorado. It revealed that the H200 will be the first GPU built with HBM3e, a high-speed memory designed to accelerate large language models and high-performance computing workloads for scientific and industrial endeavors.

The H200 is the next generation after the H100, Nvidia’s first GPU built on the Hopper architecture. It includes a feature called the Transformer Engine, designed to speed up natural language processing models. With the addition of the new HBM3e memory, the H200 offers 141 gigabytes of memory at 4.8 terabytes per second, nearly double the capacity and 2.4 times the memory bandwidth of the Nvidia A100 GPU.
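Those ratios can be checked against the raw numbers. A minimal sketch, assuming the comparison is with the 80-gigabyte A100 variant and its published bandwidth of roughly 2.0 terabytes per second (figures not stated in this article):

```python
# Sanity check of the "nearly double the capacity" and "2.4 times the
# bandwidth" claims, assuming the 80 GB A100 at ~2.0 TB/s as the baseline.
h200_capacity_gb, h200_bandwidth_tbs = 141, 4.8
a100_capacity_gb, a100_bandwidth_tbs = 80, 2.0

capacity_ratio = h200_capacity_gb / a100_capacity_gb       # ~1.76x, "nearly double"
bandwidth_ratio = h200_bandwidth_tbs / a100_bandwidth_tbs  # 2.4x

print(f"capacity: {capacity_ratio:.2f}x, bandwidth: {bandwidth_ratio:.1f}x")
```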

“To create intelligence with generative AI and HPC applications, vast amounts of data must be efficiently processed at high speed using large, fast GPU memory,” said Ian Buck, vice president of hyperscale and HPC at Nvidia.

According to Nvidia, when it comes to AI model deployment and inference, the H200 delivers 1.6 times the performance of the H100 on the 175 billion-parameter GPT-3 model and 1.9 times the H100’s performance on the 70 billion-parameter Llama 2 model. On the high-performance computing simulation front, the H200 doubles the performance of the A100.

Although many of these improvements come from the H200’s hardware, some also result from software enhancements from Nvidia, including the recent release of open-source libraries such as TensorRT-LLM. With TensorRT-LLM, developers can optimize deep learning inference and AI deployment using low-latency, high-throughput techniques, which can make applications up to 36 times faster than CPU-only platforms.

The H200 is available on HGX H200 server boards in four- and eight-way configurations, which are compatible with both the hardware and software of HGX H100 systems. With these options, Nvidia said, the H200 can be deployed in any type of data center, including on-premises, cloud, hybrid cloud and edge.

H200-powered systems are expected to become available from server manufacturers and cloud service providers in the second quarter of 2024.

Bringing the Grace Hopper Superchip to the Jupiter supercomputer
Nvidia also announced today that its GH200 Grace Hopper Superchips will power the upcoming Jupiter supercomputer, which will become Europe’s first so-called exascale supercomputer.

The supercomputer is planned for installation in 2024 at the Jülich Supercomputing Centre in Germany and will become the first in Europe to surpass the exascale threshold of one quintillion (10^18) calculations per second. Owned by the EuroHPC Joint Undertaking and contracted to cloud company Eviden and modular supercomputing firm ParTec Inc., it’s being built in collaboration between Nvidia, ParTec, Eviden and HPC microprocessor design firm SiPearl.

Nvidia is supplying a new form factor of its GH200 Superchips in a quad-node configuration based on Eviden’s BullSequana XH3000 liquid-cooled architecture. The total number of superchips going into Jupiter, the company said, will be close to 24,000, and they will be interconnected using Quantum-2 InfiniBand networking.
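Those figures imply a rough node count. A back-of-the-envelope sketch, assuming "quad-node configuration" means four GH200 superchips per BullSequana node (an interpretation, not stated in the article):

```python
# Implied node count: ~24,000 superchips at four per node.
total_superchips = 24_000
chips_per_node = 4  # assumption: quad configuration = 4 superchips/node

nodes = total_superchips // chips_per_node
print(nodes)  # 6000
```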

Once assembled, Jupiter will be one of the world’s most powerful AI systems, capable of delivering over 90 exaflops of performance for AI training, where an exaflop is one quintillion floating-point operations per second. That’s more than 45 times the performance of Jülich’s previous JUWELS Booster system, currently ranked as the eighth-fastest supercomputer in the world. That system is used by the Simulation and Data Laboratory for Climate Science at Jülich to detect gravity waves in the atmosphere by running software that continuously computes data downloaded from NASA.
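The article’s own figures let us infer the predecessor’s AI performance. A minimal sketch, treating "90 exaflops" and "more than 45 times" as exact for the sake of the estimate (the result is an inference, not an official spec):

```python
# 90 exaflops at "more than 45 times" the prior system implies
# JUWELS Booster delivers roughly 90 / 45 = 2 exaflops of AI performance.
jupiter_ai_exaflops = 90
speedup_over_juwels = 45

implied_juwels_exaflops = jupiter_ai_exaflops / speedup_over_juwels
print(implied_juwels_exaflops)  # 2.0
```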

“At the heart of Jupiter is Nvidia’s accelerated computing platform, making it a groundbreaking system that will revolutionize scientific research,” said Thomas Lippert, director of the Jülich Supercomputing Centre. “Jupiter combines exascale AI and exascale HPC with the world’s best AI software ecosystem to boost the training of foundational models to new heights.”

With the Jupiter supercomputer’s new class of capabilities, including its ability to train new AI foundation models, produce high-fidelity simulations and advance scientific endeavors, Nvidia says, it will become a powerful tool for discovery. With this supercomputing power, the scientific community could accelerate climate and weather prediction, elevate drug discovery, advance quantum computing technologies and transform industrial engineering development processes and materials science.

Image: Nvidia


By Kyt Dotson
