As we’ve talked about various aspects of GPU and memory technology, one point readers have raised is how HBM2 and GDDR6 compare against each other, and which is really the better option for various types of applications.
A new video from Semiconductor Engineering dives into the topic, with a comparison by Steven Woo, a Rambus fellow.
One point Woo makes is that the differences between HBM2 and GDDR6 are less about raw capability and more about what kind of design tradeoffs the engineering team wants to make. With GDDR5 versus HBM, there was a clear and absolute bandwidth advantage that HBM could offer. This is less the case with GDDR6, thanks to that memory’s higher bandwidth capabilities, but there are still use-cases where HBM2 has an advantage.
This slide, from 11:44 in the talk, illustrates some of the differences between the two memory types. Data rates on GDDR6 are much higher per-pin, but there are far fewer pins overall. GDDR6 also pays a penalty in the amount of die area dedicated to the PHY (the circuitry required to implement the actual physical connection) and in power: its PHY area is 1.5x – 1.75x larger than HBM2’s, while its power cost can be 3.5x – 4.5x higher.
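The per-pin-rate versus pin-count tradeoff can be illustrated with a quick back-of-the-envelope calculation. The specific figures below (a 256-bit GDDR6 bus at 14 Gbps per pin, and four HBM2 stacks of 1,024 data pins each at 2 Gbps per pin) are illustrative round numbers, not values from Woo’s talk:

```python
# Peak theoretical bandwidth = (data pins * per-pin rate in Gbps) / 8 bits per byte.
def bandwidth_gb_per_s(data_pins: int, gbps_per_pin: float) -> float:
    """Return peak theoretical bandwidth in GB/s for a memory interface."""
    return data_pins * gbps_per_pin / 8

# GDDR6: few pins, very high per-pin rate (illustrative: 256-bit bus, 14 Gbps/pin).
gddr6 = bandwidth_gb_per_s(256, 14.0)       # 448.0 GB/s

# HBM2: many pins, modest per-pin rate (illustrative: 4 stacks x 1,024 pins, 2 Gbps/pin).
hbm2 = bandwidth_gb_per_s(4 * 1024, 2.0)    # 1024.0 GB/s

print(f"GDDR6: {gddr6:.0f} GB/s, HBM2: {hbm2:.0f} GB/s")
```

The wide, slow HBM2 interface wins on aggregate bandwidth despite the much lower per-pin rate, which is also why it can run at lower power: signaling at 2 Gbps is far easier than at 14 Gbps.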
This echoes statements AMD made back when it explained why it used HBM for its Fury X GPUs. AMD’s explanation was that, above certain capacities, the energy and area requirements for GDDR5 made it a substantially worse choice than HBM. Woo’s comments indicate that this remains the case today, with HBM2 still offering better performance/watt and better overall power consumption.
The reason not many people are using HBM2, however, is the design complexity and overall cost. The interposer layer reportedly costs about $20 — that matches reports we’ve heard — and while $20 isn’t huge in the context of an $800 GPU, it’s quite a lot when considering a $200 GPU. According to Woo, HBM2’s major benefits come into play when you need maximum bandwidth in a power-constrained environment. That fits data center-focused GPUs working on AI calculations or dense computing nodes in an HPC cluster, but it doesn’t leave the RAM much room to work in the consumer space. This reflects Woo’s overall thinking — he says he doesn’t expect HBM2 to be widely used in consumer hardware now that GDDR6 is available.
That’s our expectation as well. Navi, at least, is anticipated to be a GDDR6 chip, and all of Nvidia’s refreshed Turing cards have relied on that RAM type. It’s generally expected that AMD will also move away from HBM2 once it refreshes its high-end GPUs.