Origin PC spoiled the GTX 680M launch party a bit with their announcement of their new EON15-S and EON17-S notebooks this morning, but NVIDIA asked us to avoid discussing the particulars of the new mobile GPU juggernaut until the official NDA time. As you’ve probably guessed, that time is now (or 6PM PDT June 4, 2012 if you’re reading this later). NVIDIA also shared some information on upcoming Ultrabooks, which we’ll get to at the end.

NVIDIA has had their fair share of success with Kepler so far, and the GTX 680 desktop cards continue to sell out. Newegg for example currently lists 18 GTX 680 cards, but only one is currently in stock: the EVGA GTX 680 FTW comes with a decent overclock and a starting price $70 higher than the standard GTX 680. On the laptop side, we’ve already had a couple of Kepler-based GK107 laptops in for review, and graphics performance has shown a large improvement relative to the previous Fermi midrange cards.

In high-end notebooks, so far the only Kepler GPU has been a higher clocked GK107, the GTX 660M, but increasing the core clocks will only take you so far. NVIDIA has continued to sell their previous generation GTX 570M and 580M as the GTX 670M and 675M (with a slight increase in core clocks), but clearly there was a hole at the top just waiting for the GTX 680M, and it’s now time to plug it. Below is a rundown of three of NVIDIA’s fastest mobile GPUs to help put the GTX 680M in perspective.

NVIDIA High-End Mobile GPU Specifications

                     GeForce GTX 680M    GeForce GTX 675M    GeForce GTX 660M
GPU and Process      28nm GK104          40nm GF114          28nm GK107
CUDA Cores           1344                384                 Up to 384
GPU Clock            720MHz              620MHz              835MHz
Shader Clock         -                   1240MHz             -
Memory Eff. Clock    3.6GHz              3GHz                4GHz
Memory Bus           256-bit             256-bit             128-bit
Memory Bandwidth     115.2GB/s           96GB/s              64GB/s
Memory               Up to 4GB GDDR5     Up to 2GB GDDR5     Up to 2GB GDDR5

Just running the raw numbers here, the GTX 680M has 20% more memory bandwidth than the GTX 675M/580M, thanks to the improved memory controller and higher RAM clocks available with Kepler. The bigger improvement, however, comes in the computational area: even factoring in Fermi’s double-speed shader clocks, the GTX 680M has potentially 103% more shader performance than its predecessor. NVIDIA gives an estimated performance improvement of up to 80% over the GTX 580M, which is a huge generational jump. And while Fermi on the desktop still offers potentially better performance in several compute workloads, there’s a reasonable chance that the gap won’t be quite as large on notebooks—not to mention compute generally isn’t as big of a factor for most notebook users. (And for those that need notebooks with more compute performance, there’s always the Quadro 5010M—likely to be supplemented by a new Quadro in the near future.)
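As a quick sanity check on those percentages, here’s the arithmetic behind them using only the numbers from the spec table above:

```python
# Sanity-checking the spec-sheet math above (simple arithmetic only).
# Memory bandwidth = effective memory clock (transfers/sec) * bus width (bytes).
bw_680m = 3.6e9 * (256 // 8)   # 115.2 GB/s
bw_675m = 3.0e9 * (256 // 8)   # 96 GB/s
print(f"Bandwidth advantage: {bw_680m / bw_675m - 1:.0%}")     # 20%

# Raw shader throughput = cores * shader clock. Fermi's shaders run at a
# "hot clock" of twice the core clock; Kepler's run at the core clock.
flops_680m = 1344 * 720e6        # Kepler: 1344 cores at 720MHz
flops_675m = 384 * 1240e6        # Fermi: 384 cores at 2 x 620MHz
print(f"Shader advantage: {flops_680m / flops_675m - 1:.0%}")  # 103%
```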

Unfortunately, we’ll have to wait a bit longer to do our own in-house investigation of GeForce GTX 680M performance, as we don’t have any hardware in hand. NVIDIA did provide some performance benchmarks with a variety of games, though, and we’re going to pass along that information in the interim. As always, take such information with a grain of salt, as NVIDIA may be picking games/settings that are particularly well suited to the GTX 680M, but for many of the titles there’s a canned benchmark that should allow for “fair” comparisons.

Assuming the above chart uses the built-in benchmarks in the games that support them, we do have a few points of comparison with the Alienware M18x in GTX 580M and HD 6990M configurations. We’ll skip those, however, as the only game where we appear to run at identical settings is DiRT 3 (43.8FPS if you’re wondering). Luckily, NVIDIA has included similar performance tables in previous launches, so we do have some overlap with their GTX 580M information. First, here’s their full benchmarking page from the 580M launch, and then we’ll summarize the points of comparison.

Tentative Gaming Performance Comparison
(Using NVIDIA’s GTX 580M/680M Results)

                          GTX 680M        GTX 580M      Increase
                          (i7-3720QM)     (i7-980X)
Aliens vs. Predator       59.7            39            53%
Civilization V            65.6            48            37%
DiRT 3                    69.5            43            62%
Far Cry 2                 115.6           79            46%
Lost Planet 2             57.9            33            75%
Metro 2033                56.2            40            41%
Stalker: Call of Pripyat  96.4            50            93%
StoneGiant (DoF Off)      67              46            46%
StoneGiant (DoF On)       36              25            44%
Street Fighter IV         165.5           138           20%
Total War: Shogun 2       97.8            59            66%
Witcher 2 High            43.7            26            68%
Witcher 2 Ultra           20.1            10            101%
Average Performance       73.2            48.9          50%
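For reference, the summary row checks out against the per-game figures; a quick script recomputing NVIDIA’s averages from the table:

```python
# Per-game FPS pairs (GTX 680M, GTX 580M) from the table above.
results = {
    "Aliens vs. Predator": (59.7, 39), "Civilization V": (65.6, 48),
    "DiRT 3": (69.5, 43), "Far Cry 2": (115.6, 79),
    "Lost Planet 2": (57.9, 33), "Metro 2033": (56.2, 40),
    "Stalker: Call of Pripyat": (96.4, 50), "StoneGiant (DoF Off)": (67, 46),
    "StoneGiant (DoF On)": (36, 25), "Street Fighter IV": (165.5, 138),
    "Total War: Shogun 2": (97.8, 59), "Witcher 2 High": (43.7, 26),
    "Witcher 2 Ultra": (20.1, 10),
}
avg_680 = sum(fps_680 for fps_680, _ in results.values()) / len(results)
avg_580 = sum(fps_580 for _, fps_580 in results.values()) / len(results)
print(round(avg_680, 1), round(avg_580, 1))              # 73.2 48.9
print(f"Average increase: {avg_680 / avg_580 - 1:.0%}")  # 50%
```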

Even given the discrepancies between test notebooks (Clevo’s X7200 with an i7-980X compared to the i7-3720QM), both chips have the same maximum Turbo Boost clock (3.6GHz), and we should be largely GPU limited, so the above scores look pretty reasonable. The only games that don’t see a >40% increase are Civilization V (which has proven to be CPU limited in the past) and Street Fighter IV (which is already running at >120FPS on both GPUs). There are a few titles where we even see nearly a doubling of performance. We don’t have raw numbers, but NVIDIA is also claiming around a 15-20% average performance advantage over AMD’s Radeon HD 7970M; hopefully we’ll be able to do our own head-to-head in the near future.

Overall, using NVIDIA’s own numbers it looks like GTX 680M ought to be around 50% faster than GTX 580M. If that doesn’t seem like much, consider that the difference between GTX 480M and GTX 580M was only around 20% (according to NVIDIA and using 3DMark11). A 50% increase in mobile graphics performance within the same power envelope is a huge step; if Kepler manages to reduce power use at all then it will be an even bigger jump. Put another way, a single GTX 680M in the above games using NVIDIA’s own results ends up offering 86% of the performance of GTX 580M SLI, and it will definitely use a lot less power and have fewer headaches than mobile SLI.

As usual, NVIDIA had a wealth of other information to share about their product and software features, and with their latest drivers NVIDIA is adding a few new items. No, we’re not even talking about CUDA or PhysX here (though NVIDIA does at least list those as important features). Optimus also gets a plug, and just as with the 400M and 500M series, all 600M GPUs support Optimus. The difference is that this time around, instead of just Alienware supporting Optimus with their M17x R3, NVIDIA also has MSI and Clevo on board for GTX 680M Optimus.

Briefly covering the other features, Kepler adds TXAA support, a temporal anti-aliasing algorithm that NVIDIA touts as providing quality near the level of 8xMSAA but with a performance hit similar to that of 2xMSAA—or, alternately, even better quality for a performance hit similar to that of 4xMSAA. It sounds like TXAA will require application support for now, and NVIDIA provided the above slide showing some of the upcoming titles that will have native TXAA support built in. NVIDIA also made mention of FXAA (Fast Approximate Anti-Aliasing), a full-scene shader-based technique that can help remove jaggies with a very minor performance hit (around 4%). New with their latest drivers is the ability to force-enable FXAA in all games.

Another newer addition is Adaptive V-Sync, which sounds similar in some ways to Lucid’s Virtu MVP solution. In practice, however, it sounds like NVIDIA is simply enabling/disabling V-Sync based on the current frame rate: if a game is running at 60FPS or more, V-Sync turns on to prevent tearing, while below 60FPS V-Sync turns off to improve performance and reduce stuttering.
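As a rough illustration of that behavior, here’s a minimal sketch of the per-frame decision as described above—our guess at the logic, not NVIDIA’s actual driver code, and assuming a typical 60Hz panel:

```python
REFRESH_RATE = 60  # Hz; assumed 60Hz notebook panel

def vsync_enabled(recent_fps: float) -> bool:
    """Adaptive V-Sync as described: sync only when the GPU can keep up.
    At/above the refresh rate, V-Sync prevents tearing at no FPS cost;
    below it, disabling V-Sync avoids the stutter of snapping to 30FPS."""
    return recent_fps >= REFRESH_RATE

print(vsync_enabled(75.0))  # True: cap at 60FPS, no tearing
print(vsync_enabled(48.0))  # False: render unthrottled, less stutter
```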

Besides GTX 680M, there should be quite a few Ultrabook announcements coming out at Computex with support for NVIDIA GPUs. We’ve already looked at Acer’s TimelineU M3, and we mentioned ASUS’ UX32A/UX32VD and Lenovo’s new U410. Ultrabooks are quickly reaching the point where they’re “fast enough” for the vast majority of users; the one area where they appear deficient is in graphics performance. Ivy Bridge and HD 4000 in a ULV chip simply aren’t able to provide the same sort of performance we find in the higher TDP chips.

That’s where NVIDIA plans on getting a lot of wins with their GT 610M (48-core Fermi) and GT 620M (96-core Fermi). The GT 620M will initially be available as both a 40nm and a 28nm part, but we're still trying to find out whether the GT 610M will also have a 28nm variant. For larger laptops, the GT 610M wouldn’t make much sense, but in an Ultrabook it may be just what you need. If so, keep your eyes on our Computex 2012 and Ultrabook coverage, as there’s surely more to come.



View All Comments

  • marc1000 - Tuesday, June 5, 2012 - link

    Especially midrange PRICE. No way on earth would I spend more than $300 on any video card. I mentioned the TDP because last-gen NVIDIA cards are too power-hungry in my opinion, so I won't buy one. That makes AMD the only option.
  • iwod - Tuesday, June 5, 2012 - link

    It looks like Kepler is superior to AMD's offering this time around, something that couldn't be said for the past 2-3 years.

    But the midrange GTX 660M mobile GPU is too far away from the high-end GTX 680M.

    I would love to get the GTX 680M in the coming MacBook Pro and iMac, but I'm guessing the lower end will be stuck with the GTX 660M again; there's a staggering 2.5x EXTRA CUDA cores difference between them.

    BTW, why is there no TDP listed for the GTX 680M?
  • JarredWalton - Tuesday, June 5, 2012 - link

    NVIDIA doesn't typically disclose TDPs for mobile GPUs, but the standards are something like 30W, 45W, 60W, and 100W (give or take) -- that's for the entire MXM module. AMD basically targets the same power usages, as many OEMs like to support either AMD or NVIDIA.
  • ExarKun333 - Tuesday, June 5, 2012 - link

    I was thinking the same. After more thought, the issue might be the memory bandwidth. If they go 2x 660M cores (~768) I suspect they would need to go the route of the 256-bit memory bus, because 128-bit just wouldn't feed it enough. That probably means more TDP and a more expensive chip. That would be the 'sweet spot' though IMHO; 'enough' cores with sufficient memory bandwidth would make it a great laptop GPU, without breaking the bank or power consumption. It would also likely be just a bit slower than the current 580M, with much better power efficiency. I guess time will tell if NV decides to release such a chip.
  • Spunjji - Tuesday, June 5, 2012 - link

    Even with memory bandwidth limitations, that would still be a nice chip. Those limitations would play into their hands too and keep it away from the 680M. Unfortunately they can't release such a chip this generation because the marketing bastards have plugged that gap with Fermi. Of course, nVidia don't usually shy away from releasing a totally different card with the same name, so maybe we will see a genuine 670M!
  • iwod - Tuesday, June 5, 2012 - link

    Yes, 768 sounds good to me, a perfect balance of performance and power on laptops. Last week I asked and discovered Intel is no longer binning their CPUs and instead pretty much produces chips to spec with few disabled parts. This actually lines up with what NVIDIA has been complaining about regarding wafer costs: disabled parts are just too expensive and possibly no longer provide any financial incentive.

    So we won't be getting a disabled GK104, and GK107 doesn't offer any real performance, so we're pretty much stuck in the middle of nowhere.

    Sounds like a gap of opportunity AMD may want to exploit.
  • Spunjji - Tuesday, June 5, 2012 - link

    This gap has happened for the line-ups of both companies and it's very frustrating. They've decided to force people who like the mid-range cards to pony up more cash than usual or suffer with lower-mid-range performance. :/
  • bobburn - Tuesday, June 5, 2012 - link

    "Superior" if you like paying double the money for something that bests the equivalent AMD branded card in maybe 1/3 of the games that people usually play, ties in another 1/3, and is downright trounced in another 1/3.
  • JarredWalton - Tuesday, June 5, 2012 - link

    FUD much? What are the 1/3 of games where the GTX 680M is actually trounced by the HD 7970M? Can you even name two? If someone has actually tested the two GPUs in a notebook (e.g. not simulated with desktop hardware) and done a thorough comparison, I'd love to see it. We're still trying to get hardware, and I think the same is true of every other site.
  • Riek - Tuesday, June 5, 2012 - link

    The question can also be: can somebody name two games where the 680M will trounce the 7970M? Because at this point that cannot be found either.

    All we know at this point, IMO, is:

    The 7970M is a smaller chip than the 680M.

    The 680M has the layout of the 670 while the 7970M has the layout of the 7870, so basically we can assume the 680M will have higher performance. Although the article talks about the 680M having 50% higher performance than the 675M... the same is true for the 7970M, and that was tested by third parties.
