As spring gets ready to roll over to summer, last week we saw the first phase of NVIDIA’s annual desktop product line refresh, with the launch of the GeForce GTX 780. Based on a cut-down GK110 GPU, the GTX 780 was by most metrics a Titan Mini, offering a significant performance boost for a mid-generation part, albeit a part that forwent the usual $500 price tier in the process. With the launch of GTX 780 the stage has been set for the rest of the GeForce 700 series refresh, and NVIDIA is wasting no time on getting to the next part in their lineup. So what’s up next? GeForce GTX 770, of course.

In our closing thoughts on the GTX 780, we ended on the subject of what NVIDIA would do for a GTX 770. Without a new mid/high-end GPU on the horizon, NVIDIA has instead opted for incremental adjustments for their 2013 refreshes, with GTX 780 being a prime example through its use of a cut-down GK110, always the most logical choice for the company. Any potential GTX 770 was far more nebulous, however, as both a 3rd-tier GK110 part and a top-tier GK104 part could conceivably fill the role. With the launch of the GTX 770 now upon us we finally have the answer to that question: NVIDIA has taken the GK104 option.

What is GTX 770 then? GTX 770 is essentially GTX 680 on steroids. Higher core and memory clockspeeds give it performance exceeding the GTX 680, while higher voltages and a higher TDP allow it to sustain those clockspeeds in practice. As a result GTX 770 is still very much a product cut from the same cloth as GTX 680, but as the fastest GK104 card yet it is a potent successor to the outgoing GTX 670.

                       GTX 770       GTX 680       GTX 670       GTX 570
Stream Processors      1536          1536          1344          480
Texture Units          128           128           112           60
ROPs                   32            32            32            40
Core Clock             1046MHz       1006MHz       915MHz        732MHz
Shader Clock           N/A           N/A           N/A           1464MHz
Boost Clock            1085MHz       1058MHz       980MHz        N/A
Memory Clock           7GHz GDDR5    6GHz GDDR5    6GHz GDDR5    3.8GHz GDDR5
Memory Bus Width       256-bit       256-bit       256-bit       320-bit
FP64                   1/24 FP32     1/24 FP32     1/24 FP32     1/8 FP32
TDP                    230W          195W          170W          219W
Transistor Count       3.5B          3.5B          3.5B          3B
Manufacturing Process  TSMC 28nm     TSMC 28nm     TSMC 28nm     TSMC 40nm
Launch Price           $399          $499          $399          $349

With GTX 780 based on GK110, GTX 770 gets to be the flagship GK104-based video card for this generation. At the same time, to further differentiate it from the outgoing GTX 680, NVIDIA has essentially given GK104 its own version of the GHz Edition treatment. With higher clockspeeds, a new turbo boost mechanism (GPU Boost 2.0), and a higher power limit, GTX 770 is GK104 pushed to its limit.

The end result is that we’re looking at a fully enabled GK104 part – all 32 ROPs and 8 SMXes are present – clocked at some very high clockspeeds. GTX 770’s base clock is set at 1046MHz and its boost clock at 1085MHz, a 40MHz (4%) and 27MHz (3%) increase over GTX 680 respectively. This alone doesn’t amount to much, but GTX 770 is also the first desktop GK104 part to implement GPU Boost 2.0, which further min-maxes NVIDIA’s clockspeeds. The result is that GTX 770 reaches its highest clocks more often, making the effective clockspeed increase greater than 4%.

But the more significant change will be found in GTX 770’s memory configuration. With GTX 680 already shipping at 6GHz there’s only one way for NVIDIA to go – up – so that’s where they’ve gone. GTX 770 ships with 7GHz GDDR5, making this the very first product to do so. This gives GTX 770 nearly 17% more memory bandwidth than GTX 680, an important increase for the card, as the 256-bit memory bus means that NVIDIA has no memory bandwidth to spare for GTX 770’s higher GPU throughput.
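As a quick sanity check on that 17% figure, peak GDDR5 bandwidth is simply the effective data rate multiplied by the bus width. A minimal sketch (our own illustration, not a vendor tool):

```python
def gddr5_bandwidth_gbps(data_rate_ghz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: effective data rate (GT/s) x bytes per transfer."""
    return data_rate_ghz * bus_width_bits / 8

gtx_680 = gddr5_bandwidth_gbps(6.0, 256)  # 192.0 GB/s
gtx_770 = gddr5_bandwidth_gbps(7.0, 256)  # 224.0 GB/s
uplift = (gtx_770 / gtx_680 - 1) * 100    # ~16.7%

print(f"GTX 680: {gtx_680} GB/s, GTX 770: {gtx_770} GB/s (+{uplift:.1f}%)")
```

Note that both cards share the same 256-bit bus, so the entire gain comes from the faster memory itself.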

We’ve talked at length about GDDR5 memory controllers before, noting that 7GHz has always been the planned limit for GDDR5. Good GDDR5 memory can hit it easily enough, but GPU memory controllers and memory buses are another matter. After faltering with the Fermi generation, NVIDIA was able to hit 6GHz on their first shot with GK104, and now, with their second shot and a new PCB, NVIDIA is ready to certify GK104 as 7GHz capable. Given all the teething GDDR5 has gone through on both sides of the aisle, this is a small but impressive achievement for NVIDIA.

Moving on, between the higher GPU clockspeeds, higher memory clockspeeds, and the introduction of GPU Boost 2.0, NVIDIA is also giving GTX 770 a hearty increase in TDP, with both the benefits and drawbacks that brings. GTX 770’s TDP is 230W versus GTX 680’s 195W, and because GPU Boost 2.0 does away with the old 170W “power target” concept entirely, in some cases the difference in effective power consumption is going to be closer to 60W. Like GTX 780, this higher TDP is a natural consequence of pushing out a faster part based on the same manufacturing process and architecture, and we expect this to be the story across the board for all of the GeForce 700 series parts. At the same time, however, we’d point out that a 230W TDP is higher than usual for a sub-300mm2 GPU, reflecting the fact that NVIDIA really is pushing GK104 to its limit here.

Along with differentiating the GTX 770 from the GTX 680, these improvements also serve to further separate the GTX 770 from the GTX 670. Because both cards are based on the same GPU, that separation is to some extent necessary to provide the performance gains that justify a mid-generation refresh. As the GTX 670 was a lower-clocked part with only 7 of 8 SMXes enabled, the performance difference between it and the GTX 770 comes down to a combination of those two factors. With a base clockspeed difference of 131MHz (14%), the theoretical performance advantage for the GTX 770 stands at about 30% for shading/texturing, 14% for ROP throughput, and of course 17% for memory bandwidth. This won’t be nearly enough to justify replacing a GTX 670 with a GTX 770, but it makes for a respectable increase for a mid-generation part, and a very enticing one for those GTX 470 and GTX 570 owners on 2-3 year upgrade cycles.
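Those theoretical percentages can be reproduced directly from the spec table. A rough sketch using base clocks only (real-world boost behavior will shift the numbers somewhat):

```python
def uplift_pct(new: float, old: float) -> float:
    """Percentage increase of new over old."""
    return (new / old - 1) * 100

# GTX 770 vs GTX 670, from the spec table above
shading = uplift_pct(1536 * 1046, 1344 * 915)  # SMX/shader count x core clock -> ~30%
rop     = uplift_pct(32 * 1046, 32 * 915)      # same ROP count, so clock only -> ~14%
memory  = uplift_pct(7.0 * 256, 6.0 * 256)     # data rate x bus width -> ~17%

print(f"shading +{shading:.0f}%, ROP +{rop:.0f}%, memory +{memory:.0f}%")
```

The ROP figure tracks the clockspeed delta alone since both cards keep all 32 ROPs, while the shading figure compounds the clockspeed gain with the extra enabled SMX.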

Moving on to the launch and pricing, unlike the GTX 780 last week, NVIDIA is being far more aggressive on pricing with the GTX 770, catching even us by surprise. From a performance standpoint the GTX 770 already makes the GTX 680 redundant, and if the performance doesn’t do it then the launch price of $399 will. $399 also happens to be the same price the GTX 670 launched at, so this is a fairly straightforward spec-bump in that respect.

At the same time NVIDIA is going to be phasing out the GTX 680 and GTX 670, so while these parts may see some sales to clear out inventory, there won’t be any kind of official price cut. As such, other than their lower TDPs, these parts are essentially redundant at the moment.

For this reason NVIDIA’s real competition will be from AMD, with the $399 price tag putting the GTX 770 somewhere between AMD’s Radeon HD 7970 and Radeon HD 7970 GHz Edition. The price of the GTX 770 is going to be closer to the former while the performance is going to be closer to the latter, which will put AMD in a tight spot. AMD’s saving throw here will be their game bundles; NVIDIA isn’t bundling anything with the GTX 770, while the 7970 cards will come with AMD’s huge 4 game Level Up with Never Settle Reloaded bundle.

Finally, today’s launch is going to be a hard launch just like GTX 780 last week. Furthermore NVIDIA’s partners will be shipping semi-custom cards right at launch, and in fact we aren’t expecting to see any reference cards for sale in North America. This means there will be a great variety among cards, but not necessarily much in the way of consistency.

May 2013 GPU Pricing Comparison

AMD                           Price    NVIDIA
Radeon HD 7990                $1000    GeForce GTX Titan/GTX 690
                              $650     GeForce GTX 780
Radeon HD 7970 GHz Edition    $440     GeForce GTX 680
                              $400     GeForce GTX 770
Radeon HD 7970                $380
                              $350     GeForce GTX 670
Radeon HD 7950                $300


Comments

  • JDG1980 - Thursday, May 30, 2013 - link

    TechPowerUp ran tests of three GTX 770s with third-party coolers (Asus DirectCU, Gigabyte WindForce, and Palit JetStream). All three beat the GTX 770 reference on thermals for both idle and load. Noise levels varied, but the DirectCU seemed to be the winner since it was quieter than the reference cooler on both idle and load. That card also was a bit faster in benchmarks than the reference.

    That said, I agree the build quality of the reference cooler is better than the aftermarket substitutes - but Asus is probably a close second. Their DirectCU series has always been very good.
  • ArmedandDangerous - Thursday, May 30, 2013 - link

    This article is in desperate need of some editing work. Spelling and comprehension errors throughout.
  • Nighyal - Thursday, May 30, 2013 - link

    I asked this on the 780 review, and it seems like it might be even more interesting for the 770 considering Nvidia basically threw more power at a 680, but a performance per watt comparison would be great. Something that clearly showed the efficiency of each card (maybe using a fixed workload) would be interesting to see, especially when compared to similar architectures or to AMD's efforts with the GHz editions.
  • ThIrD-EyE - Thursday, May 30, 2013 - link

    Since when did 70-80C temperatures become acceptable? I had been looking to upgrade my MSI Cyclone GTX 460 which would never hit higher than 62C and I got a great deal on 2 560TIs for less than half the cost of them new. I have run them in single card and SLI; I see 80C+ when I run an overclock program like MSI Afterburner. I use a custom fan profile to bring the temps down to 75C or less at higher fan speed, but still in reasonable noise levels. It's still not quite enough.

    All these video cards may be fine at these temperatures, but when you are sitting next to the case and there is 80C being pumped out, you really feel it. Especially now with Summer heat finally hitting where I live. My $25 Hyper212+ keeps my OC'ed i7 2600k at a good 45-50C when playing games. I would buy aftermarket coolers if they were not going to take up 3 slots each (I have a card that I need, but would have to be removed.) and didn't cost nearly as much as I paid for the cards.

    AMD, NVIDIA and card partners need to work on bringing temperatures down.
  • quorm - Thursday, May 30, 2013 - link

    lower temperature readings do not mean less heat produced. better cooling just moves the heat from the GPU to your room more efficiently.
  • ThIrD-EyE - Thursday, May 30, 2013 - link

    The architecture of these video cards was obviously made for performance first. That does not mean they can't also work on lowering power consumption to lower the heat produced. One thing that I've found to help my situation is to set all games to run at 60fps without vsync if possible, which thankfully is most of the games I play. Some games become unplayable or wonky with vsync and other ways of limiting fps without vsync, so I just deal with the heat from no fps limits.

    I hope that the developers of console ports from PS4 and Xbox One put in an fps limit option like Borderlands 2 if they don't allow dev console access.
  • MattM_Super - Friday, May 31, 2013 - link

    Although its not currently accessible from the driver control panel, Nvidia drivers have a built in fps limiter that I use in every game I play (never had any issues with it). You can access it with NvidiaInspector.
  • DanNeely - Thursday, May 30, 2013 - link

    Since 70-80C has always been the best a blower style cooler can do on a high power GPU without getting obscenely loud, and blowers have proven to be the best option to avoid frying the GPU in a case with horrible ventilation. IOW about when both nVidia and ATI adopted blowers for their reference designs.
  • JPForums - Thursday, May 30, 2013 - link

    70C-80C temperatures became acceptable after nVidia decided to release Fermi based cards that regularly hit the mid 90Cs. Since then, the temperatures have in fact come down. Of course, they are still high for my liking and I pay extra for cards with better coolers (i.e. MSI Twin Frozr, Asus DirectCU). That said, there is only so much you can do when pushing 3 times the TDP of an Intel Core i7-3770K while cooling it with a cooler that is both lighter and less ideally formed for the task (comparing some of the best GPU coolers to any number of heatsinks from Noctua, Thermalright, etc.). Water cooling loops work wonders, but not everyone wants the expense or hassle.
  • Rick83 - Friday, May 31, 2013 - link

    The higher the temperatures, the less fan speed you need, because you have higher delta-theta between the air entering the cooler and the cooling fins, which results in more energy transfer at less volume throughput.
    Obviously the temperature is a pure function of the fan curve under load, and has very little to do with the actual chip (unless you go so far down in energy output, that you can rely on passive convection).
