Intel Teases Ice Lake-U Integrated Graphics Performance
by Ian Cutress on May 26, 2019 11:05 AM EST
Posted in: Trade Shows, Ice Lake, Sunny Cove, Computex 2019
Another snippet of information from Intel today relates to the company’s future mobile platform CPU. We know it’s called Ice Lake-U, that it is built on Intel’s 10nm process, that it has Sunny Cove cores, and has beefy Gen11 integrated graphics. We’re still waiting on finer details about where it’s going to be headed, but today Intel is unloading some of its integrated graphics performance data for Ice Lake-U.
It should be noted that this data comes from Intel, and we have not been able to verify it in any way. Intel shared this information with a number of press outlets in order to set a level of expectations. We’ve been told that this is Intel’s first 1 TeraFLOP graphics implementation, and that it performs as such. The presentation was given by Ryan Shrout, ex-owner and editor-in-chief of PC Perspective, and the testing was performed by his team inside Intel.
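As a sanity check on the "1 TeraFLOP" claim, here is a back-of-the-envelope sketch. The EU count, per-EU throughput, and clock below are assumptions drawn from Intel's public Gen11 architecture disclosures, not from this briefing:

```python
# Back-of-the-envelope check of Intel's "first 1 TeraFLOP" claim.
# Assumed figures (not from the article): the full Gen11 configuration
# has 64 execution units (EUs), each EU retires 8 FP32 FMA ops per
# clock (two 4-wide pipes), and an FMA counts as 2 FLOPs.
eus = 64
fma_per_eu_per_clock = 8   # two 4-wide FP32 pipes per EU
flops_per_fma = 2          # multiply + add
clock_ghz = 1.0            # illustrative clock; actual clocks vary by SKU

tflops = eus * fma_per_eu_per_clock * flops_per_fma * clock_ghz / 1000
print(f"{tflops:.2f} TFLOPS")  # ~1.02 TFLOPS at 1 GHz
```

At roughly 1 GHz the arithmetic lands just over the 1 TFLOP mark, which is consistent with Intel's framing.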
Ryan first showed us a direct comparison between the Gen9 graphics found in Intel’s latest and best Whiskey Lake platform at 15W up against a 15W Ice Lake-U product. The results make for pleasant reading. In the game demo scenes that Intel showed us, we saw upwards of a 40% gain in performance in average frame rates. Percentile numbers were not shown.
When comparing to an equivalent AMD product, Intel stated that it was almost impossible to find one of AMD’s latest 15W APUs actually running at 15W in a device – they stated that every device they could find was actually running one of AMD’s higher performance modes. To make the test fair, Intel pushed one of its Ice Lake-U processors to the equivalent of a 25W TDP and did a direct comparison. This is essentially AMD’s Vega 10 vs Intel’s Gen 11.
For all the games in Intel’s test methodology, they scored anywhere from a 6% loss to a 16% gain, with the average somewhere around a 4-5% gain. The goal here is to show that Intel can focus on graphics and gaming performance in ultra-light designs, with the aim to provide a smooth 1080p experience with popular eSports titles.
Update: As our readers were quick to pick up on from Intel's full press release, Intel is using faster LPDDR4X on their Ice Lake-U system. This is something that was not disclosed directly by Intel during their pre-Computex presentation.
**Intel Test Systems Spec Comparison**

| | Ice Lake-U | Core i7-8565U | Ryzen 7 3700U |
|---|---|---|---|
| Graphics | Gen11 | UHD Graphics 620 | Vega 10 |
| Memory | LPDDR4X | LPDDR3 | DDR4 |
| Storage | Intel SSD 760P | Intel SSD 760P | SK Hynix BC501 |
For some background context, LPDDR4X support is new to Ice Lake-U, and long overdue from Intel as a consequence of Intel's 10nm & Cannon Lake woes. It offers significant density and even greater bandwidth improvements over LPDDR3. Most 7/8/9th Gen Core U systems implemented LPDDR3 for power reasons, and OEMs have been chomping at the bit for LPDDR4(X) so that they don't have to trade off between capacity and power consumption.
Because Intel used LPDDR4X in its Ice Lake-U system versus DDR4 in the AMD system, Intel had a significant memory bandwidth advantage – around 56%, on paper at least – along with a latency advantage. This sort of differential matters most in integrated graphics performance, suggesting that this is one angle Intel will readily leverage when it comes to comparisons between the two products.
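The "~56% on paper" figure is easy to reproduce. The memory speeds below are assumptions on our part (LPDDR4X-3733 for Ice Lake-U and DDR4-2400 for the Ryzen system, both on a 128-bit bus); the article itself does not state the exact transfer rates:

```python
# Rough check of the "~56% on paper" bandwidth gap. Assumed speeds
# (not stated in the article): LPDDR4X-3733 on Ice Lake-U and
# DDR4-2400 on the Ryzen system, each with a 128-bit (16-byte) bus.
bus_bytes = 16  # 128-bit interface

lpddr4x_gbps = 3733 * bus_bytes / 1000   # ~59.7 GB/s
ddr4_gbps    = 2400 * bus_bytes / 1000   # 38.4 GB/s

advantage = lpddr4x_gbps / ddr4_gbps - 1
print(f"{advantage:.0%}")  # ~56%
```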
Moving on, the last set of data comes from Intel’s implementation of Variable Rate Shading (VRS), which was recently introduced in DirectX 12. VRS is a technique that lets a game developer change the shading resolution of an area of the screen on the fly, reducing the amount of pixel shading in order to boost performance, ideally with little-to-no impact on image quality. It is newly supported on Gen11, but it requires the game to support the feature as well: the feature is game specific, and the settings are tuned by the game, not the driver or GPU.
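To illustrate where the VRS savings come from, here is a hypothetical worked example (the numbers are ours, not Intel's): at a 2x2 coarse shading rate, one pixel-shader invocation covers a 2x2 pixel block, so shading work in that region drops to roughly a quarter.

```python
# Illustrative VRS arithmetic (hypothetical numbers, not Intel's data):
# a coarse shading rate of rate_x x rate_y means one pixel-shader
# invocation covers that many pixels instead of one.
def shader_invocations(width, height, rate_x, rate_y):
    """Pixel-shader invocations for a region shaded at rate_x x rate_y."""
    # ceil-divide so partial blocks at the edges still get an invocation
    return -(-width // rate_x) * -(-height // rate_y)

full   = shader_invocations(1920, 1080, 1, 1)   # native 1x1 shading
coarse = shader_invocations(1920, 1080, 2, 2)   # 2x2 coarse shading
print(full, coarse, f"savings: {1 - coarse / full:.0%}")
```

In practice a game only applies coarse rates to regions where the loss is hard to notice, so the real-world uplift is smaller than this upper bound.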
Intel showed that in an ideal synthetic test, they scored a 40% uplift with VRS enabled, and in the synthetic test comparing VRS on and off, that extra performance put it above an equivalent AMD Ryzen system. AMD’s GPU does not support this feature at this time.
Intel is also keen to promote Ice Lake as an AI CPU due to its AVX-512 implementation: any software that can take advantage of AI can be equipped with accelerated algorithms to speed it up.
We expect to hear more about Ice Lake this week at Computex, given Intel’s keynote on Tuesday, but we also expect to see some vendors showing off their Ice Lake-U designs.
Comments
HStewart - Sunday, May 26, 2019 - link
Keep in mind, not only is there a 12 hour time difference, but tomorrow is not considered a holiday in Taiwan. So it is business as usual.
Phynaz - Sunday, May 26, 2019 - link
Oh look, the AMD fanboy is all butthurt
Ryan Smith - Sunday, May 26, 2019 - link
"May I ask why AnandTech hasn't covered or mentioned, even in the news pipeline, Zombieload or MDS?"
Backlogged on testing. I'm in the middle of something, but I ran out of time before Computex.
I don't want to put up an article without data; there are too many misconceptions and too much wishcasting on the subject, which is leading to everyone losing their minds.
ksec - Monday, May 27, 2019 - link
Thanks for the reply. I just thought that since Intel released an actual statement and CVE, it would have been great if Anandtech had mentioned it in a short post, with further details to come later. Just one or two sentences would do.
I mostly limit myself to reading Anandtech as my only tech website, so when I saw the discussion on Twitter a few days after Zombieload, I was surprised I hadn't read about it earlier.
AshlayW - Sunday, May 26, 2019 - link
Don't believe a word this scumbag company puts out when comparing products, especially to AMD. Do I need to remind everyone of the "9900K 50% faster than 2700X" 'study' they commissioned?
Intel would have done everything in its power, skirting the boundary of deceit, to make the Intel CPU have an advantage.
Klimax - Monday, May 27, 2019 - link
As if there is any difference between AMD and Intel. (AMD is forced to sort of behave, for now)
CBeddoe - Sunday, May 26, 2019 - link
So...
They are benchmarking the Intel system with high-frequency, high-capacity memory, and the AMD system with half the capacity and lower-frequency RAM.
They must have given the marketing department lots of leeway on this one.
Does Intel make allowances for their sieve-like security and the performance losses from patches?
PeachNCream - Sunday, May 26, 2019 - link
Benchmarks from the manufacturer of a product are never biased. Never. Not at all. I'll believe the numbers when I see a credible independent third party like notebookcheck post benchmarks.
With that said, that applies to the Ryzen vs Intel information. I doubt they would be as untrustworthy when comparing their own GPUs to one another. Still, grains of salt are being taken until someone gets their hands on retail hardware.
Krayzieka - Sunday, May 26, 2019 - link
Now Intel is resorting to marketing. I suggest people go support AMD all the way.
abufrejoval - Monday, May 27, 2019 - link
Looking at the numbers and the discussion, there seems to be some consensus that the new Ice Lake standard iGPU is on a similar performance level to the GT3 variants from previous generations.
I have always been fascinated by these chips, because they are oddly priced.
They are extremely hard to get outside a Mac, where their end-user price is obviously insane.
The only other form, where you can get them easily is a NUC, where they conform to a classic Intel rule: Don't charge for the iGPU no matter what size!
So even if common sense would dictate that the extra 64/128MB of eDRAM as well as the double-sized GT3 (or quad-sized GT4) iGPU should cost extra money, end-user pricing on NUCs doesn't reflect that: they are solely priced on Pentium/i3/i5/i7 or CPU power "merits", even if the GPU in these configurations takes up much more die space than the CPU.
But Intel doesn't seem to sell them to anyone but Apple.
There is one single other instance where I have ever seen an Iris Pro/Plus outside a Mac or a NUC and that was a Medion notebook sold via Aldi in Germany, based on the i5-5257U and sold at €600, quite an ordinary price for an ordinary (HD520) i5 Skylake at the time, and an obvious bargain at double GPU power for free. So I grabbed one, especially because the dGPUs at the time were all still 28nm and very clunky.
Alas, while double GPU power turned out to be true and the machine is fine and remains in good shape with great Linux compatibility, it doesn't turn the notebook into a viable gaming device, nor very likely into a viable AI inferencing monster.
At least not when you have desktops with Nvidia dGPUs running next door or somewhere in a cloud close by.
So to all the hot-headed discussion that's been going on in this thread I say: it doesn't really matter whether Intel is cheating here or has made radical improvements, because every machine with either generation (or iGPU configuration) essentially remains a 2D device. It still takes GDDR RAM and at least 50 Watts of pure GPU power to make most of my games playable at the full resolution of the screen. So an ultrabook simply isn't going to cut it if it's PC gaming you're after (Android games work, but are rarely attractive).
But what also works just as well in both configurations is Steam streaming. A GTX 1060 on the server side is good enough for 1920x1080, and it will give you performance that no APU or beefed-up iGPU in a 15 or even 10 Watt ultrabook will match for a long time to come, without the ultrabook running hot or running down its battery.
So that's what I do. I show people my ultrabook and impress them with the most demanding games running at full hilt seemingly without even breaking a sweat at battery power.
Some actually figure out that I must be cheating, but most people actually believe in both magic and advertisement.