This is the first gaming performance test of the Nvidia Hopper H100 GPU, commonly used for AI acceleration.
The Nvidia Hopper H100 is slower than AMD's Radeon 680M
Gaming is clearly not what the Nvidia H100 was designed for, which explains why its performance falls below that of the Radeon 680M, the RDNA 2 integrated GPU used in AMD's notebook APUs.
The 3DMark results are conclusive: the H100 cannot beat the Radeon 680M, an integrated GPU from AMD. However, the card is a beast in AI and content-creation tests, where it easily outperforms the RTX 4090, the most powerful GPU on the desktop PC market. Its gaming performance was also tested in Red Dead Redemption 2 at the high 1080p preset with DLSS set to Balanced: the card failed to reach 30 FPS while drawing less than 100 W, a gross underutilization of the H100 GPU.
The tests were conducted by Geekerwan, a Chinese content creator, who shows us for the first time the performance of the H100 running in a desktop PC, in configurations of up to four cards.
The Nvidia H100 card features 80 GB of HBM2e memory on a 5120-bit bus. The card tested has 114 of the full GH100 die's 144 SMs enabled, versus 132 SMs on the H100 SXM variant. It is rated at roughly 3,200 TFLOPS of FP8, 1,600 TFLOPS of FP16, 800 TFLOPS of FP32, and 48 TFLOPS of FP64 compute. Like the latest GeForce RTX cards, it features tensor cores, 456 of them.
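The 5120-bit bus figure maps directly to memory bandwidth. As a rough sketch, assuming an HBM2e effective data rate of about 3.2 Gbps per pin (an assumption; the article does not state the memory clock):

```python
# Back-of-the-envelope peak memory bandwidth for the H100.
BUS_WIDTH_BITS = 5120   # stated in the article
DATA_RATE_GBPS = 3.2    # assumed HBM2e per-pin effective rate

# bits/s across the whole bus, divided by 8 to get bytes/s
bandwidth_gb_s = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8
print(f"Peak bandwidth: ~{bandwidth_gb_s:.0f} GB/s")  # ~2048 GB/s
```

That works out to roughly 2 TB/s, several times what a consumer card's GDDR6X bus delivers.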
The GPU is not designed for gaming: only 2 of its TPCs are available for traditional graphics tasks, while the rest of the chip is dedicated to compute. This explains its very low results in games and its very high results in AI and content-creation workloads.
One of the most interesting tests ran the LLaMA large language model, where the H100 handles a model with 65 billion parameters, while the RTX 4090 can only run up to about 6 billion. This explains why AI customers bet on an H100 instead of a traditional graphics card for AI acceleration.
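The parameter-count gap comes down to memory capacity. A quick sketch of the arithmetic, using standard per-parameter sizes for common precisions (these byte figures are general knowledge, not from the article, and ignore activations and KV-cache overhead):

```python
# Rough VRAM needed just to hold a model's weights.
def weight_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    """Gigabytes of memory for the weights alone."""
    return params_billion * 1e9 * bytes_per_param / 1e9

print(weight_vram_gb(65, 2))  # 130.0 GB: 65B in FP16 exceeds even 80 GB
print(weight_vram_gb(65, 1))  # 65.0 GB: 65B in INT8 fits the H100's 80 GB
print(weight_vram_gb(6, 2))   # 12.0 GB: ~6B in FP16 fits the 4090's 24 GB
```

So a 65B-parameter model only fits on the H100's 80 GB with reduced precision, while the RTX 4090's 24 GB caps it at models an order of magnitude smaller.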
In China, a GPU of this class costs between $30,000 and $50,000, a price that only Nvidia's largest customers pay.