Rumors about AMD’s upcoming 7nm third-generation Ryzen family have been floating around in various forms for quite some time. Enthusiasts are eager to see what changes and improvements the company introduces with its shift to 7nm. AMD has disclosed some of its 7nm advances as they apply to its second-generation Epyc processor, codenamed Rome, which currently isn’t expected to ship for significant revenue until after Q2 (exactly when is still unknown). The 7nm Ryzen changes are still unknown, beyond the obvious expectation that Epyc and Ryzen will share a common CPU design.
A new set of rumors from AdoredTV suggests a CPU lineup from AMD that’s simply too starry-eyed to take seriously. The initial claim is that AMD will bring the same chiplet-and-I/O-die strategy it’s deploying for 7nm Epyc over to Ryzen. AMD used the same die design for both Epyc and Ryzen in its first generation of products, so this could be what the company chooses to do again, but it seems unlikely.
The entire point of building a cluster of chiplets around a single unified I/O block with Epyc is that AMD could centralize all of the DDR4 controllers, I/O controllers, and PCIe lanes in a single die, with 64 CPU cores hooked up around the edges. AMD kept the same ratio of eight cores per physical die, however, meaning the base ‘unit’ of processing is still an eight-core chip. AMD needed to change Epyc’s memory configuration to reduce RAM latency and improve performance; the 2990WX has scaling issues in many applications precisely because its configuration is lopsided, with some dies connected to memory controllers and others not. Connecting each chip on a top-end Epyc CPU to its own dedicated memory controller is something AMD would have evaluated but obviously decided against, possibly because it would leave lower core-count parts with fewer memory channels. When you have a lot of chiplets to hook to a single controller, the split 7nm/14nm approach makes sense. But it’s not at all clear that it makes any sense to pair a separate I/O die built on 14nm with a single 7nm chiplet, as opposed to simply building a single die to start with.
But given that AMD’s 14nm I/O die is built at GlobalFoundries while its 7nm chiplets are built at TSMC, there’s a cost to splitting the work between two foundries. With Epyc, that cost is obviously worth the benefit. It’s not clear Ryzen would benefit in anything like the same way, especially since AMD would be putting two dies on every package with eight cores or fewer, as opposed to one, and three dies on every Ryzen above eight cores, as opposed to two live dies and two dummies (aka Threadripper).
The real problem with these claims, however, lies in the number of cores and clocks AdoredTV believes AMD will ship. The table below is courtesy of Overclock3D.net. Normally I don’t spend time debunking bad rumors, but these are egregiously bad.
First, it’s highly unlikely that AMD would kill off its entire product family below the six-core + SMT space or that it would stop using SMT as a feature differentiator in its products. SMT is useful to both AMD and Intel for the same reason: it allows both companies to offer a significant performance uplift as an incentive for customers to buy higher-performing parts, while costing virtually nothing in terms of die size or OS support, since all modern operating systems robustly support the feature. AMD already offers SMT on most of its Ryzen CPUs, but leaving it off the lowest-end models is an up-sell technique.
Second, the closest current CPU to the claimed “Ryzen 3 3300X” is the Ryzen 5 2600X (3.6GHz base, 4.2GHz boost) at $240. The chances that AMD slashes its equivalent CPU pricing by 54 percent on the basis of 7nm improvements are nil. AMD has spent the year emphasizing to investors that its margins should continue to improve over time. The absolute worst way to make that happen is to take a chainsaw to your own product pricing. If you wanted to see a meteoric leap in AMD’s price/performance ratio, the company already delivered it back in 2017.
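For a sense of scale, here’s the back-of-envelope arithmetic. The result is simply what the rumored 54 percent cut would imply against the 2600X’s price, not a confirmed or leaked figure:

```python
# What a 54 percent cut from the Ryzen 5 2600X's price would imply.
current_price = 240.0   # Ryzen 5 2600X street price, per the comparison above
claimed_cut = 0.54      # the rumored price reduction

implied_price = current_price * (1 - claimed_cut)
print(f"${implied_price:.2f}")  # prints "$110.40"
```

That would put an equivalent six-core, SMT-enabled part at roughly $110, which is the crux of why the rumor strains credulity.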
Third, while it’s possible that AMD will bump its on-die GPU to 15 or 20 CUs, it’s not a particularly likely shift unless AMD simultaneously starts shipping APUs with HBM attached (something the company has shown no inclination toward doing, at least not yet). AMD’s on-die GPUs are heavily memory-bandwidth limited: the more GPU cores you have, the more memory bandwidth you need to feed them. AMD has focused on improving its GPU efficiency far more than its core count. From 2011 to 2017, AMD improved its top-end on-die GPU core count from 400 (A8-3850) to 704 (Ryzen 5 2400G). The chances that AMD jumps to 1,280 on-die cores seem low, given the bandwidth limitations of DDR4. We’d expect such a move to happen with the introduction of DDR5, which isn’t expected until AMD’s next socket shift in ~2021.
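To put rough numbers on the bandwidth problem, here’s a quick sketch. The DDR4-2933 speed and 64-bit channel width are illustrative assumptions on my part (a common dual-channel Ryzen APU configuration), not figures from the rumor:

```python
# Rough per-core bandwidth for an on-die GPU fed by dual-channel DDR4.
# DDR4-2933 and the 64-bit channel width are illustrative assumptions.
channels = 2
bytes_per_transfer = 8          # 64-bit channel
transfers_per_sec = 2933e6      # DDR4-2933

bandwidth_gbs = channels * bytes_per_transfer * transfers_per_sec / 1e9
for cores in (704, 1280):       # today's Ryzen 5 2400G vs. the rumored count
    mb_per_core = bandwidth_gbs / cores * 1000
    print(f"{cores} cores: {mb_per_core:.1f} MB/s per core")
```

Going from 704 to 1,280 cores on the same memory subsystem cuts per-core bandwidth by 45 percent, which is exactly the kind of starvation AMD’s conservative APU core counts have been avoiding.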
Fourth, the clock speed and core count targets at the top of the stack reflect exceedingly wishful thinking, not reality. An eight-core chip with 1,280 on-die GPU cores and a 4GHz clock in a 95W TDP at $199? That’s the equivalent of an RX 560 GPU (as far as core count goes) slapped down alongside a Ryzen 7 2700 (3.2GHz base, 4.1GHz turbo, $269). The cheapest RX 560 on Newegg is $104. Again, there’s simply no way AMD is going to sell an APU with that onboard GPU at what amounts to a 53 percent price cut. This chart reads as though someone heard that 7nm offers a theoretical 50 percent increase in density and assumed that 100 percent of that improvement would (or could) be translated into price. Costs are also higher at 7nm, and the entire reason AMD didn’t shrink Epyc’s I/O in the conventional manner is the limited benefit of doing so. Even if AMD uses a single die for its 7nm Ryzen parts, not every part of the chip scales equally.
The upper CPUs all receive unrealistic price cuts and significant clock jumps, without the commensurate increase in TDP that would be required. CPU power curves bend upwards sharply as you exceed the targeted sweet spot for the silicon. It makes no sense that moving from a 3.2GHz base clock to 3.5GHz on six-core chips would require an additional 15W of TDP, but moving from a 3.9GHz to a 4.3GHz base clock on 16 cores would require just an additional 10W. And even if you chalk that issue up to the fact that TDP doesn’t directly represent power consumption, there’s still a major problem here: AMD is not going to slash the price of a 16-core chip from $899 to $449. Again, that’s exactly the wrong move if you’re trying to improve your margins relative to the competition (Intel’s margins are 60 percent and above, while AMD has been operating in the upper 30s and low 40s).
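Here’s why those TDP claims don’t add up: dynamic power scales roughly with V²·f, and voltage has to climb with frequency once you leave the efficiency sweet spot, so power grows much faster than clock speed does. A toy model makes the point; the voltage figures below are invented for illustration, not measured AMD silicon data:

```python
# Toy dynamic-power model: P is proportional to V^2 * f.
# All voltage/frequency pairs are illustrative, not real silicon data.
def relative_power(freq_ghz, volts, base_freq=3.2, base_volts=1.05):
    """Power draw relative to the baseline operating point."""
    return (volts / base_volts) ** 2 * (freq_ghz / base_freq)

# A modest bump inside the efficiency window costs ~20 percent more power...
print(f"{relative_power(3.5, 1.10):.2f}x")   # prints "1.20x"
# ...while pushing well past the sweet spot more than doubles it.
print(f"{relative_power(4.3, 1.35):.2f}x")   # prints "2.22x"
```

Even this crude model shows that the bigger clock jump, on more than twice the cores, should cost far more power than the smaller one, not less.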
Finally, it’s not clear that a 16-core Ryzen makes much sense with just two memory channels to work with. At the very least, I’d expect slightly worse scaling compared with a quad-channel platform, and AMD obviously decided to use a quad-channel design for Threadripper even when it could have specified a lower-cost dual-channel variant. While it’s true that most desktop workloads aren’t particularly memory-bandwidth bound, there’s still going to come a point where you don’t have enough bandwidth per core to avoid negative scaling impacts.
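The same back-of-envelope arithmetic applies on the CPU side. Assuming DDR4-2933 again (an illustrative figure, not one from the rumor), dual channel leaves each of 16 cores with half the bandwidth a quad-channel, Threadripper-style platform would provide:

```python
# Per-core memory bandwidth for a 16-core CPU, dual- vs. quad-channel DDR4.
# DDR4-2933 is an illustrative assumption, not a figure from the rumor.
def per_core_bandwidth_gbs(channels, cores, transfers=2933e6, width_bytes=8):
    return channels * width_bytes * transfers / 1e9 / cores

print(f"{per_core_bandwidth_gbs(2, 16):.2f} GB/s")  # dual channel:  "2.93 GB/s"
print(f"{per_core_bandwidth_gbs(4, 16):.2f} GB/s")  # quad channel:  "5.87 GB/s"
```

Whether ~3GB/s per core is enough depends entirely on the workload, but it illustrates why AMD gave Threadripper the extra channels.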
Collectively, these rumors make no sense. They predict unprecedented price cuts, a complete abandonment of AMD’s established CPU feature distribution, large clock jumps without commensurate TDP increases, twice the core count on the same platform even when this makes no sense, and TDPs that must completely depart from the way AMD has reported and measured TDP with the first two generations of Ryzen. And not incidentally, they predict that most of these products will be launched at CES. We haven’t heard a whisper of anything of the sort from AMD.
I don’t believe these rumors. I don’t think you should, either.
Now Read: New Retail Data Shows AMD CPUs Outselling Intel 2:1, AMD Reports Earnings Results, Significantly Improves Gross Margin, and New MSI BIOS Unlocks Overclocking on AMD’s $55 Athlon 200GE CPU