Apple to Redesign Mac Pro, Comments That GPU Cooling Was A Roadblock
by Ryan Smith on April 4, 2017 10:20 AM EST
In what’s turning out to be an oddly GPU-centric week for Apple, this morning the company revealed that it will finally be giving the long-neglected Mac Pro a major update in the 2018+ timeframe. Apple’s pro users have been increasingly unhappy with the lack of updates to the company’s flagship desktop computer, and once released, this update will be the Mac Pro’s first in over four years.
Getting to the heart of matters, Apple invited a small contingent of press – including John Gruber and TechCrunch’s Matthew Panzarino – out to one of their labs to discuss the future of the Mac Pro and pro users in general. The message out of Apple is an odd one: they acknowledge that they erred in both the design and handling of the Mac Pro (as much as Apple can make such an acknowledgement, at least), and that they will do better for the next Mac Pro. However that Mac Pro won’t be ready until 2018 or later, and in the meantime Apple still needs to assuage their pro users, to prove to them that they are still committed to the Mac desktop and still committed to professional use cases.
Both of these articles are very well written, and rather than regurgitate them, I’d encourage you to read them. It’s extremely rare to see Apple talk about their future plans – even if it’s a bit vague at times – so this underscores the seriousness of Apple’s situation. As John Gruber puts it, Apple has opted to “bite the bullet and tell the world what your plans are, even though it’s your decades-long tradition — a fundamental part of the company’s culture — to let actual shipping products, not promises of future products, tell your story.”
However, neither story spends much time on what I feel is the core technical issue – Apple’s GPU options – so I’d like to spill a bit of ink on the subject, if only to provide some context for Apple’s decisions.
Analysis: GPUs Find Their Sweet Spot at 250 Watts
From a GPU perspective, the Mac Pro has been an oddball device from day one. When Apple launched it, they turned to long-time partner AMD to provide the GPUs for the machine. What AMD provided them with was their Graphics Core Next (GCN) 1.0 family of GPUs: Pitcairn and Tahiti. These chips were the basis of AMD’s Radeon HD 7800 and HD 7900 series cards launched in early 2012. And by the time the Mac Pro launched in late 2013, they were already somewhat outdated, with AMD’s newer Hawaii GPU (based on the revised GCN 1.1 architecture) having taken the lead a few months earlier.
Ultimately Apple got pinched by timing: they would need to have chips well in advance for R&D and production stockpiling, and that’s a problem for high-end GPU launches. These products just have slow ramp-ups.
Complicating matters, the Mac Pro is itself an unconventional device. Apple favored space efficiency and low noise over standard form factors, so instead of using PC-standard PCIe video cards for the Mac Pro, they needed to design their own cards. And while the Mac Pro is modular to a degree, this ultimately meant that Apple would need to design a new card for each generation of GPUs. This isn’t a daunting task, but it limits their flexibility in a way they weren’t limited with the previous tower-style Mac Pros.
Mac Pro Assembled w/GPU Cards (Image Courtesy iFixit)
Both of these items have been known to be issues since the launch of the Mac Pro, and have commonly been cited as potential factors holding back a significant GPU update all of these years. However, as it turns out, this is only half of the story. The rest of the story – the consequences of Apple’s decision to go with dual GPUs cooled by a shared heatsink, the thermal core – has only finally come together with Apple’s latest revelation.
At a high-level, Apple opted to go with a pair of GPUs in order to chase a rather specific use case: using one GPU to drive the display, and using the second GPU as a co-processor. All things considered this wasn’t (and still isn’t) a bad strategy, but the number of applications that can use such a setup are limited. Graphical tasks are hit & miss in their ability to make good use of a second GPU, and GPU-compute tasks still aren’t quite as prevalent as Apple would like.
The drawback to this strategy is that if you can’t use the second GPU, two GPUs aren’t as good as one more powerful GPU. So why didn’t Apple just offer a configuration with a single, higher power GPU? The answer turns out to be heat. Via TechCrunch:
I think we designed ourselves into a bit of a thermal corner, if you will. We designed a system that we thought with the kind of GPUs that at the time we thought we needed, and that we thought we could well serve with a two GPU architecture… that that was the thermal limit we needed, or the thermal capacity we needed. But workloads didn’t materialize to fit that as broadly as we hoped.
Being able to put larger single GPUs required a different system architecture and more thermal capacity than that system was designed to accommodate. And so it became fairly difficult to adjust.
The thermal core at the heart of the Mac Pro is designed to cool a pair of moderately powerful GPUs – and let’s be clear here: at around 200 Watts each under full load, a pair of Tahitis adds up to a lot of heat – however it apparently wasn’t built to handle a single, more powerful GPU.
The GPUs that have come to define the high-end market, like AMD’s Hawaii and Fiji GPUs, or NVIDIA’s GM200 and GP102 GPUs, all push 250W+ in their highest performance configurations. This, apparently, is more than Apple’s thermal core can handle. In terms of total wattage, just one of these GPUs would draw less than a pair of Tahitis, but it would be 250W+ concentrated over a relatively small surface area, as opposed to roughly 400W spread over nearly twice the surface area.
Video Card Average Power Consumption (Full Load, Approximate)
AMD Tahiti (HD 7970): 200W
AMD Hawaii (R9 290X): 275W
AMD Fiji (R9 Fury X): 275W
NVIDIA GM200 (GTX Titan X): 250W
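To put the surface-area argument in rough numbers, here is a back-of-the-envelope sketch. The wattages come from the table above; the heatsink contact areas are purely illustrative assumptions on my part, not Apple’s actual figures:

```python
# Back-of-the-envelope heat flux comparison for the Mac Pro's thermal core.
# Wattages are approximate full-load figures from the table above; the
# contact areas are illustrative assumptions, not Apple's specifications.

def heat_flux(watts, area_cm2):
    """Heat dissipated per unit of heatsink contact area, in W/cm^2."""
    return watts / area_cm2

# Two Tahiti GPUs at ~200W each, spread across two cards' worth of
# contact area (assume ~100 cm^2 per card for illustration).
dual_tahiti = heat_flux(2 * 200, 2 * 100)

# One 250W GPU concentrated on a single card's contact area.
single_big = heat_flux(250, 100)

print(f"Dual Tahiti setup: {dual_tahiti:.1f} W/cm^2")
print(f"Single 250W GPU:   {single_big:.1f} W/cm^2")
```

Under these assumed areas, the single big GPU draws less total power (250W vs. 400W) yet pushes roughly 25% more heat through each square centimeter of the shared thermal core – which is the corner Apple says it designed itself into.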
It’s a strange day when Apple has backed themselves into a corner on GPU performance. The company has been one of the biggest advocates for more powerful GPUs, pushing the envelope on their SoCs, while pressuring partners like Intel to release Iris Pro-equipped (eDRAM-backed) CPUs. However what Apple didn’t see coming, it would seem, is that the GPU market would settle on 250W or so as the sweet spot for high-end GPUs.
Mac Pro Disassembled w/GPU Cards (Image Courtesy iFixit)
And to be clear here, GPU power consumption is somewhat arbitrary. AMD’s Fiji GPU was the heart of the 275W R9 Fury X video card, but it was also the heart of the 175W R9 Nano. There is clearly room to scale down to power levels more in line with Apple’s thermal capacity, but performance is lost in the process. Without the ability to cool a 250W video card, it’s not possible to offer GPU performance that rivals powerful PC workstations, with which Apple is still very much in competition.
Ultimately I think it’s fair to say that this was a painful experience for Apple, but hopefully one they learn a very important lesson from. The lack of explicit modularity and user-upgradable parts in the Mac Pro has always been a point of concern for some customers, and it has ultimately made the current design the first and last of its kind. Apple is indicating that the next Mac Pro will be much more modular, which would get them back on the right track.
Source: Daring Fireball
Maleorderbride - Tuesday, April 4, 2017 - link
It can't handle 2x200W GPUs.
Talk to any nMP user who routinely keeps their GPUs rendering for 8+ hours per day. You will find that many of them have blown through 2-4 warranty replacements by now.
The D700's heat output is categorically too much for the nMP when used continuously over long spans of time. D500's are the true maximum for heavy usage.
Xajel - Wednesday, April 5, 2017 - link
Maybe they're waiting for Ryzen's HEDT platform for their Mac Pro, and seeing that it's still not available, they need some time for R&D.
Meteor2 - Wednesday, April 5, 2017 - link
Hmmm? What do you think Ryzen 7 is, if not HEDT?
Xajel - Wednesday, April 5, 2017 - link
There's a rumour that AMD is preparing an HEDT platform which will be shared with their entry workstation/server platform. Quad-channel memory, 12~16 cores (24~32 threads), more PCIe lanes, 140W~180W TDP
Torrijos - Wednesday, April 5, 2017 - link
I actually like the design of the MacPro, and I would have been OK with something similar but with a choice (Nvidia or AMD etc).
GPU modularity would have been a must:
- Some kind of PCI connector on the side or bottom
- Radiator part of the GPU module
- And the size and specs available to third parties makers so GPUs upgrades would have been offered. Opening also the GPU Driver side of macOS.
Another issue is the fact that software had to be updated for both GPUs to be used properly. I think these machines would have been great if Heterogeneous System Architecture had been a reality.
Unfortunately we seem still far from it.
The CPUs are still great; there is no beating Xeon, even more so when you need all the RAM you can get.
Intel hasn't been evolving them as fast as one could hope either (USB 3, Thunderbolt 3), and Apple has been too stubborn in their choice not to use third-party controllers for ports.
BrokenCrayons - Wednesday, April 5, 2017 - link
I'll happily call it. 250W TDP is absurd. 75W TDP is pretty much over the top too. We're in 2017 and have 14/16nm transistors. There's no good reason for computer components to require cooling fans to move air over huge heatsinks for consumer workloads. The fact that GPUs have become the hottest-running, most bloated component in a computer in modern times says a lot about how inefficient we've gotten over the years.
tipoo - Wednesday, April 5, 2017 - link
Wut. GPUs are inherently parallel devices, so they can make good use of as much thermal headroom as you throw at them and scale up, to the degree that something else doesn't limit them. Keeping GPUs under 75 watts would severely limit their capabilities. If you need a silent GPU, cool; others want a high-end part, and that needs wattage.
tipoo - Wednesday, April 5, 2017 - link
It's not inefficiency either, in work done per unit energy we have more efficient GPUs than ever in history, they simply scale up to high wattages because they can and people want them to.
BrokenCrayons - Wednesday, April 5, 2017 - link
The trouble is that they don't scale down very well, as demonstrated by a lack of lower end products released in the last couple of generations.
gorbag - Saturday, April 8, 2017 - link
The "lower end products" are going to the embedded market. E.g., see here: