NVIDIA GeForce GTX 770 Review: The $400 Fight
by Ryan Smith on May 30, 2013 9:00 AM EST

As spring gets ready to roll over to summer, last week we saw the first phase of NVIDIA’s annual desktop product line refresh, with the launch of the GeForce GTX 780. Based on a cut-down GK110 GPU, the GTX 780 was by most metrics a Titan Mini, offering a significant performance boost for a mid-generation part, albeit a part that forwent the usual $500 price tier in the process. With the launch of GTX 780 the stage has been set for the rest of the GeForce 700 series refresh, and NVIDIA is wasting no time on getting to the next part in their lineup. So what’s up next? GeForce GTX 770, of course.
In our closing thoughts on the GTX 780, we ended on the subject of what NVIDIA would do for a GTX 770. Without a new mid/high-end GPU on the horizon, NVIDIA has instead turned to incremental adjustments for their 2013 refreshes, GTX 780 being a prime example through its use of a cut-down GK110, which has always been the most logical choice for the company. Any potential GTX 770 was far more nebulous, however, as both a 3rd tier GK110 part and a top-tier GK104 part could conceivably fill the role. With the launch of the GTX 770 now upon us we finally have the answer to that question, and the answer is that NVIDIA has taken the GK104 option.
What is GTX 770 then? GTX 770 is essentially GTX 680 on steroids. Higher core and memory clockspeeds give it performance exceeding GTX 680, while higher voltages and a higher TDP allow it to sustain those clocks long enough for them to matter. As a result GTX 770 is still very much a product cut from the same cloth as GTX 680, but as the fastest GK104 card yet it is a potent successor to the outgoing GTX 670.
|  | GTX 770 | GTX 680 | GTX 670 | GTX 570 |
| --- | --- | --- | --- | --- |
| Stream Processors | 1536 | 1536 | 1344 | 480 |
| Texture Units | 128 | 128 | 112 | 60 |
| ROPs | 32 | 32 | 32 | 40 |
| Core Clock | 1046MHz | 1006MHz | 915MHz | 732MHz |
| Shader Clock | N/A | N/A | N/A | 1464MHz |
| Boost Clock | 1085MHz | 1058MHz | 980MHz | N/A |
| Memory Clock | 7GHz GDDR5 | 6GHz GDDR5 | 6GHz GDDR5 | 3.8GHz GDDR5 |
| Memory Bus Width | 256-bit | 256-bit | 256-bit | 320-bit |
| VRAM | 2GB | 2GB | 2GB | 1.25GB |
| FP64 | 1/24 FP32 | 1/24 FP32 | 1/24 FP32 | 1/8 FP32 |
| TDP | 230W | 195W | 170W | 219W |
| Transistor Count | 3.5B | 3.5B | 3.5B | 3B |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 40nm |
| Launch Price | $399 | $499 | $399 | $349 |
With GTX 780 based on GK110, GTX 770 gets to be the flagship GK104 based video card for this generation. At the same time, to further differentiate it from the outgoing GTX 680, NVIDIA has essentially given GK104 its own version of the GHz Edition treatment. With higher clockspeeds, a new turbo boost mechanism (GPU Boost 2.0), and a higher power limit, GTX 770 is GK104 pushed to its limit.
The end result is that we’re looking at a fully enabled GK104 part – all 32 ROPs and 8 SMXes are present – clocked at some very high clockspeeds. GTX 770’s base clock is set at 1046MHz and its boost clock at 1085MHz, a 40MHz (4%) and 27MHz (3%) increase over GTX 680 respectively. This alone doesn’t amount to much, but GTX 770 is also the first desktop GK104 part to implement GPU Boost 2.0, which further min-maxes NVIDIA’s clockspeeds. The result is that GTX 770 reaches its highest clocks more often, making the effective clockspeed increase greater than 4%.
But the more significant change will be found in GTX 770’s memory configuration. With GTX 680 already shipping at 6GHz there’s only one way for NVIDIA to go – up – so that’s where they’ve gone. GTX 770 ships with 7GHz GDDR5, making it the very first product to do so. This gives GTX 770 nearly 17% more memory bandwidth than GTX 680, an important increase for the card, as the 256-bit memory bus means that NVIDIA has no memory bandwidth to spare for GTX 770’s higher GPU throughput.
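For anyone who wants to double-check that figure, peak bandwidth follows directly from the effective data rate and the bus width; the following is just a minimal sketch of that arithmetic, with the numbers taken from the spec table above:

```python
# Peak GDDR5 bandwidth = effective data rate (GT/s) x bus width (bytes per transfer)
def peak_bandwidth_gbs(effective_clock_ghz, bus_width_bits):
    """Returns peak memory bandwidth in GB/s."""
    return effective_clock_ghz * (bus_width_bits / 8)

gtx_680 = peak_bandwidth_gbs(6.0, 256)  # 192 GB/s
gtx_770 = peak_bandwidth_gbs(7.0, 256)  # 224 GB/s
print(f"GTX 680: {gtx_680:.0f} GB/s, GTX 770: {gtx_770:.0f} GB/s, "
      f"increase: {(gtx_770 / gtx_680 - 1) * 100:.1f}%")  # ~16.7%
```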
We’ve talked at length about GDDR5 memory controllers before, noting that 7GHz has always been the planned limit for GDDR5. Good GDDR5 memory can hit it easily enough, but GPU memory controllers and memory buses are another matter. After faltering with the Fermi generation NVIDIA was able to hit 6GHz on their first shot with GK104, and now with their second shot and a new PCB NVIDIA is ready to certify GK104 as 7GHz capable. Given all the teething GDDR5 has gone through on both sides of the aisle, this is a small but impressive achievement for NVIDIA.
Moving on, between the higher GPU clockspeeds, higher memory clockspeeds, and the introduction of GPU Boost 2.0, NVIDIA is also giving GTX 770 a hearty increase in TDP, with all the benefits and drawbacks that brings. GTX 770’s TDP is 230W versus GTX 680’s 195W, and with GPU Boost 2.0 the old 170W “power target” concept is going away entirely; since GTX 680 typically boosted against that 170W power target while GTX 770 is free to draw up to its full 230W TDP, in some cases the difference in effective power consumption is going to be closer to 60W. Like GTX 780, this higher TDP is a natural consequence of pushing out a faster part based on the same manufacturing process and architecture, and we expect this to be the same story across the board for all of the GeForce 700 series parts. At the same time however we’d point out that a 230W TDP is higher than usual for a sub-300mm2 GPU, reflecting the fact that NVIDIA really is pushing GK104 to its limit here.
Along with differentiating the GTX 770 from the GTX 680, these small improvements also serve to further separate the GTX 770 from the GTX 670; because both cards are based on the same GPU, this separation is to some extent necessary to provide the performance gains that justify a mid-generation refresh. As GTX 670 was a lower clocked part with only 7 of 8 SMXes enabled, the performance difference between it and the GTX 770 comes down to a combination of those two factors. With a clockspeed difference of 131MHz (14%), the theoretical performance difference between the two cards stands at about 30% for shading/texturing, 14% for ROP throughput, and of course 17% for memory bandwidth. This won’t be nearly enough to justify replacing a GTX 670 with a GTX 770, but it makes for a respectable increase from a mid-generation part, and a very enticing one for those GTX 470 and GTX 570 owners on 2-3 year upgrade cycles.
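For those keeping score at home, those theoretical deltas fall straight out of the spec table; here is a minimal sketch of the arithmetic using base clocks and unit counts only (so it ignores boost behavior, and the shading figure rounds to roughly 31% rather than 30%):

```python
# Theoretical GTX 770 vs GTX 670 gains, computed from the spec table above (base clocks)
gtx_770 = {"shaders": 1536, "rops": 32, "core_mhz": 1046, "mem_ghz": 7.0}
gtx_670 = {"shaders": 1344, "rops": 32, "core_mhz": 915,  "mem_ghz": 6.0}

def gain(new, old):
    """Percentage increase of new over old."""
    return (new / old - 1) * 100

shading = gain(gtx_770["shaders"] * gtx_770["core_mhz"], gtx_670["shaders"] * gtx_670["core_mhz"])
rops    = gain(gtx_770["rops"] * gtx_770["core_mhz"],    gtx_670["rops"] * gtx_670["core_mhz"])
memory  = gain(gtx_770["mem_ghz"],                       gtx_670["mem_ghz"])

print(f"Shading/texturing: +{shading:.0f}%  ROP throughput: +{rops:.0f}%  Memory bandwidth: +{memory:.0f}%")
# -> roughly +31%, +14%, +17%
```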
Moving on to the launch and pricing, unlike the GTX 780 last week, NVIDIA is being far more aggressive on pricing with the GTX 770, catching even us by surprise. From a performance standpoint the GTX 770 already makes the GTX 680 redundant, and if the performance doesn’t do it then the launch price of $399 will. $399 also happens to be the same price the GTX 670 launched at, so this is a fairly straightforward spec-bump in that respect.
At the same time NVIDIA is going to be phasing out the GTX 680 and GTX 670, so while these parts may see some sales to clear out inventory, there won’t be any kind of official price cut. As such, other than their lower TDPs these parts are essentially redundant at the moment.
For this reason NVIDIA’s real competition will be from AMD, with the $399 price tag putting the GTX 770 somewhere between AMD’s Radeon HD 7970 and Radeon HD 7970 GHz Edition. The price of the GTX 770 is going to be closer to the former while the performance is going to be closer to the latter, which will put AMD in a tight spot. AMD’s saving throw here will be their game bundles; NVIDIA isn’t bundling anything with the GTX 770, while the 7970 cards will come with AMD’s huge 4 game Level Up with Never Settle Reloaded bundle.
Finally, today’s launch is going to be a hard launch just like GTX 780 last week. Furthermore NVIDIA’s partners will be shipping semi-custom cards right at launch, and in fact we aren’t expecting to see any reference cards for sale in North America. This means there will be a great variety among cards, but not necessarily much in the way of consistency.
May 2013 GPU Pricing Comparison

| AMD | Price | NVIDIA |
| --- | --- | --- |
| Radeon HD 7990 | $1000 | GeForce GTX Titan / GeForce GTX 690 |
|  | $650 | GeForce GTX 780 |
| Radeon HD 7970 GHz Edition | $440 | GeForce GTX 680 |
|  | $400 | GeForce GTX 770 |
| Radeon HD 7970 | $380 |  |
|  | $350 | GeForce GTX 670 |
| Radeon HD 7950 | $300 |  |
117 Comments
Enkur - Thursday, May 30, 2013 - link
Why is there a picture of the Xbox One in the article when it's mentioned nowhere?

Razorbak86 - Thursday, May 30, 2013 - link
From "The 2GB Question & The Test":

"The wildcard in all of this will be the next-generation consoles, each of which packs 8GB of RAM, which is quite a lot of RAM for video operations even after everything else is accounted for. With most PC games being ports of console games, there’s a decent risk of 2GB cards being undersized when used with high resolutions and the highest quality art assets. The worst case scenario is only that these highest quality assets may not be usable at playable performance, but considering the high performance of every other aspect of GTX 770 that would be a distinct and unfortunate bottleneck."
kilkennycat - Thursday, May 30, 2013 - link
NONE of the release offerings (May 30) of the GTX 770 on Newegg have the Titan cooler!!!! Regardless of the pictures in this article and on the GTX 7xx main page on Newegg. And no bundled software to "ease the pain" and perhaps help mentally deaden the fan noise..... this product takes more power than the GTX 680. Early buyers beware...!!

geok1ng - Thursday, May 30, 2013 - link
"Having 2GB of RAM doesn’t impose any real problems today, but I’m left to wonder for how much longer that’s going to be true. The wildcard in all of this will be the next-generation consoles, each of which packs 8GB of RAM, which is quite a lot of RAM for video operations even after everything else is accounted for. With most PC games being ports of console games, there’s a decent risk of 2GB cards being undersized when used with high resolutions and the highest quality art assets. "Last week a noob posted something like that on the 780 review. It was decimated by a slew of tech geeks comments afterward. I am surprised to see the same kind of reasoning on a text written by an AT expert.
All AT reviewers by now know that the next consoles will be using an APU from AMD with graphics muscle (almost) comparable to a 6670 (a 5670 in the PS4's case, thanks to GDDR5). So what Mr. Ryan Smith is stating is that an "8GB" 6670 can perform better than a 2GB 770 in video operations?
I am well aware that Mr. Ryan Smith is over-qualified to help AT readers revisit this old legend of graphics memory:
How little is too little?
And please, let's not start flaming about memory usage; most modern OSes and game engines use available RAM dynamically, so seeing a game use 90%+ of available graphics memory does not imply, at all, that such a game would run faster if we doubled the graphics memory. The opposite is often true.
As soon as 4GB versions of the 770 launch, AT should pit these versions against the 2GB 770 and the 3GB 7970. Or we could go back months and re-read the tests done when the 4GB versions of the 680 came out: only at triple-screen resolutions and insane levels of AA would we see any theoretical advantage of 3-4GB over 2GB, which is largely impractical since most games can't run at these resolutions and AA levels with a single card anyway.
I think NVIDIA did it right (again): 2GB is enough for today, and we won't see next-gen consoles running triple-screen resolutions at 16xAA+. 2GB means a lower BoM, which is good for profit and price competition, and less energy consumption, which is good for card temps and max OC results.
Enkur - Thursday, May 30, 2013 - link
I can't believe AT is mixing up the unified graphics and system memory on consoles with the dedicated RAM of a graphics card. Doesn't make sense.

Egg - Thursday, May 30, 2013 - link
PS4 has 8GB of GDDR5 and a GPU somewhat close to a 7850. I don't know where you got your facts from.

geok1ng - Thursday, May 30, 2013 - link
Just to start the flaming war: the next consoles will not run on monolithic GPUs, but on twin Jaguar cores. So when you see those 768/1152 GPU core numbers, remember these are "crossfired" cores. And in both consoles the GPU is running at a mere 800MHz, hence the comparison with the 5670/6670, 480-shader cards @ 800MHz.

It is widely accepted that console games are developed using the lowest common denominator, in this case the Xbox One's DDR3 memory. Even if we take the huge assumption that dual Jaguar cores running in tandem can work similar to a 7850 (1024 cores at 860MHz) in a PS4 (which is a huge leap of faith looking back at how badly AMD fared in previous Crossfire attempts using integrated GPUs like these Jaguar cores), the question turns out to be the same:
Does an 8GB 7850 give us better graphical results than a 2GB 770, for any gaming application in the foreseeable future?
Don't 4K on me please: both consoles will be using HDMI, not DisplayPort. And no, they won't be able to drive games across 3 screens. This "next-gen consoles will have more video RAM than high-end GPUs in PCs, so their games will be better" talk is reminiscent of the old "1GB DDR2 cards are better than 256MB DDR3 cards for future games" scam.
Ryan Smith - Thursday, May 30, 2013 - link
We're aware of the difference. A good chunk of that unified memory is going to be consumed by the OS, the application, and other things that typically reside on the CPU in a PC. But we're still expecting games to be able to load 3GB+ in assets, which would be a problem for 2GB cards.iEATu - Thursday, May 30, 2013 - link
Why are you guys using FXAA in benchmarks as high end as these? Especially for games like BF3 where you have FPS over 100. 4x AA for 1080p and 2x for 1440p. No question those look better than FXAA...Ryan Smith - Thursday, May 30, 2013 - link
In BF3 we're testing both FXAA and MSAA. Otherwise most of our other tests are MSAA, except for Crysis 3 which is FXAA only for performance reasons.