NVIDIA Announces GeForce MX150: Entry-Level Pascal for Laptops, Just in Time for Computex
by Ryan Smith on May 26, 2017 7:30 AM EST

This morning NVIDIA has taken the wraps off of a new video card for laptops, the GeForce MX150. Aimed at the entry-level market for discrete GPUs – that is, laptops that need performance only a bit above an integrated GPU – the MX150 is NVIDIA’s Pascal-based successor to the previous 930M/940M series of laptop adapters that have been in computers over the last couple of years. Today’s reveal is undoubtedly tied to next week’s Computex trade show, so we should expect to see a number of laptops using the new adapter announced in the coming days.
From a technical perspective, details on the GeForce MX150 are very limited. Traditionally NVIDIA does not publish much in the way of details on their low-end laptop parts, and unfortunately the MX150’s launch isn’t any different. We’re still in the process of shaking down NVIDIA for more information, but what usually happens in these cases is that these low-end products don’t have strictly defined specifications. At a minimum, OEMs are allowed to dial in clockspeeds to meet their TDP and performance needs. However, in prior generations we’ve also seen NVIDIA and OEMs use multiple GPUs under the same product name – mixing in GM107 and GM108, for example – so there’s a strong possibility that will happen here as well.
Officially, all NVIDIA says about the new video card is that it uses GDDR5 and that it offers around 33% better performance than the GeForce 940MX, a (typically) GM108-based product. Based on the market segment and NVIDIA’s recent activities in the desktop space, the “baseline” MX150 is without a doubt GP108, NVIDIA’s entry-level GPU that was just recently launched in the GeForce GT 1030 for desktops. Information about this chip is limited, but here’s my best guess for baseline MX150 specifications.
Best Guess: NVIDIA Laptop Video Card Specification Comparison

| | Typical MX150 | Typical 940MX |
|---|---|---|
| CUDA Cores | 384? | 384 |
| ROPs | 16 | 8 |
| Boost Clock | Variable | Variable |
| Memory Type | GDDR5 | GDDR5/DDR3 |
| Memory Bus Width | 64-bit? | 64-bit |
| VRAM | <=2GB | <=2GB |
| GPU | GP108? | GM108 |
| Manufacturing Process | TSMC 16nm | TSMC 28nm |
| Launch Date | 05/26/2017 | 03/2016 |
The limited 33% performance improvement over the existing 940MX comes as a bit of a surprise, but it makes sense within the context of the specifications. Relative to a GDDR5-equipped 940MX, the MX150 does not have a significant specification advantage, offering the same number of CUDA cores and similar memory bandwidth. The one stand-out here is ROP throughput, which doubles thanks to GP108’s higher ROP count.
Ultimately what this means is that most of the MX150’s performance advantage over the 940MX comes from clockspeed improvements, with a smaller uptick from architectural gains. The counterpoint is that these are entry-level laptop parts that are frequently going to be paired with 15W Intel U-series CPUs, so vendors are going to play it safe on clockspeeds in order to maximize energy efficiency. NVIDIA does advertise these GPUs as offering multiple times the performance of Intel’s HD 620 iGPU; however, given the dGPU’s higher power consumption, I’m more curious how things would compare against Intel’s 28W Iris Plus 650 configurations.
Owing to OEM configurability and general NVIDIA secrecy, NVIDIA does not publish official TDPs for these parts. But it’s interesting to note that while performance has only gone up 33%, NVIDIA is claiming that power efficiency/perf-per-watt has tripled. This strongly implies that NVIDIA’s baseline specifications for the product are favoring TDP over significant clockspeed gains, so I’m very interested to see what the real-world TDPs are going to be like. 940MX was a 20-30W part (depending on who you asked and what GPU they used), so with the jump from 28nm to 16nm, NVIDIA should have a good bit of room for drawing down TDPs. Though ultimately what this may mean is that MX150 is closer to a 930M(X) replacement than a 940M(X) replacement if we’re framing things in terms of power consumption.
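As a back-of-the-envelope check (my own arithmetic, not a figure NVIDIA has published), the two claims together imply a rough power envelope: if performance is 1.33x the 940MX and performance-per-watt is 3x, power draw works out to roughly 0.44x of the 940MX's.

```python
# Back-of-the-envelope: what do NVIDIA's "33% faster" and "3x perf-per-watt"
# claims imply about the MX150's power draw relative to the 940MX?
perf_ratio = 1.33        # MX150 performance vs. 940MX (NVIDIA's claim)
efficiency_ratio = 3.0   # perf-per-watt vs. 940MX (NVIDIA's claim)

# perf/W = perf / W, so the power ratio is perf_ratio / efficiency_ratio
power_ratio = perf_ratio / efficiency_ratio

# Apply that ratio to the 940MX's reported 20-30W envelope
implied_tdp_range = [round(w * power_ratio, 1) for w in (20, 30)]
print(round(power_ratio, 2), implied_tdp_range)  # ~0.44x, roughly 9-13W
```

If those claims hold at the baseline clocks, the implied ~9-13W envelope would indeed put the MX150 closer to 930M-class power draw than 940M-class.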
Otherwise, as a GP108 part this is the Pascal architecture we’ve all come to know and love. Relative to NVIDIA’s desktop parts, this is actually a more substantial upgrade, as the previous 930M/940M parts were based on NVIDIA’s Maxwell 1-generation GM108 GPUs, and not the newer Maxwell 2 GM2xx series. The difference being that these earlier parts lacked DirectX feature level 12_1 support, HDMI 2.0, and the low-level performance optimizations (e.g. newer color compression) that we better know the Maxwell family for. So while the MX150 isn’t meant for serious gaming laptops, it has a much richer feature set to draw from for both rendering and media tasks. CUDA developers will likely also appreciate the fact that the newer part offers CUDA compute capabilities much closer to NVIDIA’s current-generation server hardware, such as fine-grained preemption.
Finally, like its predecessor, expect to see the GeForce MX150 frequently paired up with Intel’s U-series CPUs in ultrabooks. While this SKU isn’t strictly limited to slim form factors – and someone will probably put it into a larger device for good measure – it’s definitely how NVIDIA is positioning the part, as the GTX 1050 series is for larger devices. Also expect to see most (if not all) MX150 parts running in Optimus mode, which continues to be a strong selling point for encouraging OEMs to include a dGPU.
With Computex kicking off next week, we should see a flurry of laptop announcements. Though not all of the relevant laptop announcements have gone out yet, NVIDIA’s announcement names Acer, Asus, Clevo, MSI, and HP as laptop vendors who will all be shipping MX150-equipped laptops. NVIDIA and their various partners will in turn hit the ground running here, as NVIDIA’s announcement notes that MX150-equipped laptops will begin shipping next month.
Source: NVIDIA
Comments
tipoo - Friday, May 26, 2017 - link
The best current ones are around the 940MX this replaces, so 33% better should put it on top of any Iris Plus.

Samus - Friday, May 26, 2017 - link
The real question is, will it be worth the tradeoff? It could be slightly faster, but at a steep expense to the power budget of a laptop. Intel graphics are pretty good, especially in laptops that still come with 1366x768 (please kill me) displays. This GPU will probably find its way into laptops with UHD/QHD displays where Intel graphics, even Iris, might struggle to keep things flowing, especially with Microsoft's aggressive UI update in the Fall Creators Update.

I'm running the beta and it is very GPU hungry compared to any previous Windows UI. And it's beautiful for it.
BrokenCrayons - Friday, May 26, 2017 - link
That's great news, as is the release of the GT 1030 for desktops, which I'd not yet heard about.

There are a few little typos in the article:
"...here’s my best guess for baselime MX150 specifications." - Maybe baseline instead of baselime?
"...as laptop vendors who will all be shipping MX150-eqipped laptops." - The word eqipped looks like an error as well.
eddman - Friday, May 26, 2017 - link
A bit off-topic; could you do an HTPC GT 1030 review (+ maybe some gaming numbers vs. Intel HD graphics) by any chance?

yhselp - Saturday, May 27, 2017 - link
I wonder whether we'd get Pascal replacements for the 945M (512 cores) and 920MX (256 cores). If this new MX150 is comparable to the 930MX in terms of power draw, then we could maybe get a 512-core GPU with the same TDP as the 940MX.

More interestingly, it now seems like it's possible for Nvidia to create a Pascal version of the 920MX that is passively cooled. It would be so cool to see one of these paired with a Core m/Y in a fanless design. On top of possibly adding more performance, a dGPU in a fanless design should also alleviate the CPU's power budget, which should result in higher clockrates during sustained, GPU-heavy workloads. By my utterly amateurish calculations a dGPU should be around 30% faster than an IGP in a SoC with the same TDP, which is not much, but then again when the IGP is going at full pelt CPU clockrates take a huge dive well below the base frequency, which could be avoided by using a dGPU at the expense of a higher power draw, of course.
We're at a point in time when it definitely seems like it's possible to build a more serious handheld gaming device like Switch, albeit bigger, that runs Windows.
We should be seeing more design wins nowadays; it often feels like OEMs are sleeping under a tree.