As someone who analyzes GPUs for a living, one of the more vexing things in my life has been NVIDIA’s Maxwell architecture. The company’s 28nm refresh offered a huge performance-per-watt increase for only a modest die size increase, essentially allowing NVIDIA to offer a full generation’s performance improvement without a corresponding manufacturing improvement. We’ve had architectural updates on the same node before, but never anything quite like Maxwell.

The vexing aspect to me has been that while NVIDIA shared some details about how they improved Maxwell’s efficiency over Kepler, they have never disclosed all of the major improvements under the hood. We know, for example, that Maxwell implemented a significantly altered SM structure that was easier to reach peak utilization on, and thanks to its partitioning wasted much less power on interconnects. We also know that NVIDIA significantly increased the L2 cache size and did a number of low-level (transistor level) optimizations to the design. But NVIDIA has also held back information – the technical advantages that are their secret sauce – so I’ve never had a complete picture of how Maxwell compares to Kepler.

For a while now, a number of people have suspected that one of the ingredients of that secret sauce was that NVIDIA had applied some mobile power efficiency technologies to Maxwell. It was, after all, NVIDIA’s first mobile-first GPU architecture, and now we have some data to back that up. Friend of AnandTech and all-around tech guru David Kanter of Real World Tech has gone digging through Maxwell/Pascal, and in an article & video published this morning, he outlines how he has uncovered very convincing evidence that NVIDIA implemented a tile based rendering system with Maxwell.

In short, by playing around with some DirectX code specifically designed to look at triangle rasterization, he has come up with some solid evidence that NVIDIA’s handling of triangles has significantly changed since Kepler, and that their current method of triangle handling is consistent with a tile based renderer.
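Kanter's actual test was written against DirectX, but the core idea can be sketched in a few lines: shade a batch of triangles while stamping each pixel with a globally incrementing counter, then inspect the resulting order map. On an immediate-mode GPU the recorded order simply follows primitive submission across the screen; on a tiled one it jumps around tile by tile. The toy rasterizer below is purely illustrative (all names and sizes are our own, not Kanter's code) and simulates only the immediate-mode case.

```python
# Toy model of the fill-order experiment: whenever a pixel is shaded,
# stamp it with the value of a global counter. On real hardware this is
# done with an atomic counter written from the pixel shader; here we
# simulate a simple immediate-mode rasterizer on a tiny framebuffer.

WIDTH, HEIGHT = 8, 8

def edge(ax, ay, bx, by, px, py):
    """Signed edge function: >= 0 when (px, py) lies on the inner side
    of the directed edge (ax, ay) -> (bx, by) for CCW screen winding."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_immediate(triangles):
    """Fill triangles one after another (submission order), recording
    the counter value at which each pixel is first shaded."""
    order = [[None] * WIDTH for _ in range(HEIGHT)]
    counter = 0  # stands in for the GPU-side atomic counter
    for v0, v1, v2 in triangles:
        for y in range(HEIGHT):
            for x in range(WIDTH):
                px, py = x + 0.5, y + 0.5  # sample at the pixel center
                inside = (edge(*v1, *v2, px, py) >= 0 and
                          edge(*v2, *v0, px, py) >= 0 and
                          edge(*v0, *v1, px, py) >= 0)
                if inside and order[y][x] is None:
                    order[y][x] = counter
                    counter += 1
    return order
```

Feeding this two screen-covering triangles yields a strict scanline order map; the telling result in Kanter's test was that the equivalent map measured on Maxwell/Pascal hardware instead advances in rectangular tile-sized blocks.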


NVIDIA Maxwell Architecture Rasterization Tiling Pattern (Image Courtesy: Real World Tech)

Tile based rendering is something we’ve seen for some time in the mobile space, with both Imagination’s PowerVR and ARM’s Mali implementing it. The significance of tiling is that by splitting a scene up into tiles, the GPU can rasterize the scene piece by piece almost entirely on die, as opposed to the more memory (and power) intensive process of rasterizing the entire frame at once via immediate mode rendering. The trade-off with tiling, and why it’s a bit surprising to see it here, is that the PC legacy is immediate mode rendering, and this is still how most applications expect PC GPUs to work. So implementing tile based rasterization on Maxwell means that NVIDIA has found a practical means to overcome the method’s drawbacks and potential compatibility issues.
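A rough sketch of the binning step that makes this possible: each triangle’s screen-space bounding box is tested against a grid of tiles, and the triangle is queued into every tile it may touch, so each tile can later be rasterized in isolation out of on-die buffers. The tile size and data structures here are illustrative assumptions, not Maxwell’s actual parameters (which NVIDIA has not disclosed).

```python
# Minimal sketch of tile binning for a tile-based rasterizer. The screen
# is divided into fixed-size tiles; each triangle is binned into every
# tile its bounding box overlaps, and tiles are then shaded one at a
# time so the working set stays in on-die memory.

TILE = 16  # tile edge length in pixels (illustrative, not Maxwell's)

def bin_triangles(triangles, width, height):
    """Return a dict mapping (tile_x, tile_y) -> list of triangle
    indices whose screen-space bounding box overlaps that tile."""
    bins = {}
    for i, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        # Clamp the bounding box to the screen, then convert to tile coords.
        tx0 = max(0, int(min(xs)) // TILE)
        ty0 = max(0, int(min(ys)) // TILE)
        tx1 = min((width - 1) // TILE, int(max(xs)) // TILE)
        ty1 = min((height - 1) // TILE, int(max(ys)) // TILE)
        for ty in range(ty0, ty1 + 1):
            for tx in range(tx0, tx1 + 1):
                bins.setdefault((tx, ty), []).append(i)
    return bins
```

A tiled renderer would then walk the bins tile by tile, rasterizing only the triangles listed for each one; bounding-box binning is conservative (a thin diagonal triangle lands in tiles it never actually covers), which is one of the overheads an immediate-mode design avoids.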

In any case, Real World Tech’s article goes into greater detail about what’s going on, so I won’t spoil it further. But with this information in hand, we now have a more complete picture of how Maxwell (and Pascal) work, and consequently how NVIDIA was able to improve over Kepler by so much. Finally, at this point in time Real World Tech believes that NVIDIA is the only PC GPU manufacturer to use tile based rasterization, which also helps to explain some of NVIDIA’s current advantages over Intel’s and AMD’s GPU architectures, and gives us an idea of what we may see them do in the future.

Source: Real World Tech

191 Comments

  • TessellatedGuy - Monday, August 1, 2016 - link

    Lol true. Electricity bills would make them care afterwards tho.
  • bigboxes - Tuesday, August 2, 2016 - link

    Nonsense. I currently have a GTX 970 in my main rig that I run 24/7. I could care less about power usage. I have four computers running 24/7. My biggest power draw is my HVAC, refrigerator, dryer and water heater. Video card is way down the list. All I care about is performance/$$. I'm looking to go 4k in 2017. I'll evaluate everything after Vega is released.
  • Remon - Monday, August 1, 2016 - link

    It was the other way around. Suddenly with Maxwell, efficiency was important...
  • Alexvrb - Monday, August 1, 2016 - link

    Yeah I was just gonna say I hardly saw any ATI/AMD guys who were really diehards over power efficiency. Now Nvidia guys, they just didn't talk about it until it became a good talking point for their team. Meanwhile I use cards from both vendors from time to time, but price/performance is the biggest factor. Electricity is fairly cheap and the GPU is idle most of the time.
  • Chaser - Tuesday, August 2, 2016 - link

    Oh please. Nvidia has both a performance edge AND an efficiency edge.
  • wumpus - Tuesday, August 2, 2016 - link

    AMD fans wanted performance and talked about performance for the price, and AMD typically delivers: it took a 1060 to really beat the 390 at its own game, and now it has to contend with the 480 (just don't expect the high end to make much sense). AMD marketing pushed PPW, especially leading up to the 480 launch and even during it. Pretty embarrassing to have to use the same power as the 1080.
  • Scali - Monday, August 1, 2016 - link

    Well, NVidia has been ahead in performance and market share quite a few times.
    In fact, prior to the Radeon 8500/9700-era, ATi never made any cards that were very competitive in terms of performance. Back then, NVidia ruled the roost with the TNT and later the GeForce.
    And when the GeForce 8800 came out, ATi again went a few years without much of an answer.
    NVidia has been ahead of the pack more often and for longer periods than anyone else in the industry.
  • JoeyJoJo123 - Monday, August 1, 2016 - link

    Nice shilling, boy.

    Do you recall the GTX 480 disaster? Woodscrew and house fire memes all around. At that time the HD5000 series cards, particularly the 5850, were _THE_ midrange cards to get, and they were even further popularized by the red team's better performance during bitcoin mining.

    Either way, both companies suck and should be providing better products for their customers.
  • Scali - Monday, August 1, 2016 - link

    "Do you recall the GTX 480 disaster? Woodscrew and house fire memes all around."

    What's that have to do with it? It still had the performance crown.
    It's obvious who the shill is here.
  • qap - Monday, August 1, 2016 - link

    It's funny someone calls the GTX 480 a "disaster". It was not a good card, but at least it was the fastest card on the market.
    For the last few years AMD has sold cards with comparable power consumption and acoustic performance that are not even close to the top (basically anything based on Hawaii/Grenada), and no one labels them that way.
