Khronos Announces OpenGL ES 3.0, OpenGL 4.3, ASTC Texture Compression, & CLU
by Ryan Smith on August 6, 2012 9:00 AM EST

As we head into August, the technical conference season for graphics is finally reaching its apex. NVIDIA and AMD held their events in May and June respectively, and this week the two of them, along with the other major players in the GPU space, are coming together for the graphics industry’s marquee event: SIGGRAPH 2012.
Outside of individual vendor events, SIGGRAPH is typically the major venue for new technology and standards announcements. And though it isn’t really a gaming conference – the show is about professional usage in both senses of the word – a number of those announcements do end up being gaming related. We’ll see several announcements over the course of the week, and kicking things off is the Khronos Group.
The Khronos Group is the industry consortium responsible for OpenGL, OpenCL, WebGL, and other open graphics/multimedia standards and APIs. Khronos membership in turn is a who’s who of technology, and includes virtually every major GPU vendor, both desktop and mobile. As with other industry consortia, the group’s president is elected from a member company, with NVIDIA’s Neil Trevett currently holding the Khronos presidency.
OpenGL ES 3.0 Specification Released
Khronos’s first SIGGRAPH 2012 announcement – and certainly the biggest – is that OpenGL ES 3.0 has finally been ratified and the specification released. As the primary graphics API for most mobile/embedded devices, including both iOS and Android, Khronos’s work on OpenGL ES is in turn the primary conduit for mobile graphics technology development. Khronos and its members have been working on OpenGL ES 3.0 (codename: Halti) for some time now, and while the nitty-gritty details of the specification have only now been hammered out, the first OpenGL ES 3.0 hardware has long since taped out and is nearing release.
OpenGL ES 3.0 follows on the heels of OpenGL ES 2.0, Khronos’s previous OpenGL ES API, released over 5 years ago. Like past iterations of OpenGL ES, 3.0 is intended to continue the convergence of mobile and desktop graphics technologies and APIs where it makes sense to do so. As such, where OpenGL ES 2.0 was an effort to bring a suitable subset of OpenGL 2.x functionality to mobile devices, OpenGL ES 3.0 will inject a selection of OpenGL 3.x and 4.x functionality into the OpenGL ES API.
Unlike the transition from OpenGL ES 1.x to 2.0 (which saw hardware move from fixed function to fully programmable pipelines), OpenGL ES 3.0 is going to be fully backwards compatible with OpenGL ES 2.0. That makes this a much more straightforward iteration for developers, and with any luck we will see a much faster turnaround on new software taking advantage of the additional functionality. Alongside an easier iteration of the API for developers, hardware manufacturers are far more in sync with Khronos this time around, so unlike OpenGL ES 2.0, which didn’t see its first mass-market hardware until nearly 2 years later, OpenGL ES 3.0 hardware will be ready by 2013. Desktop support is also much further along; while we’ll get to the desktop side of things in-depth when we talk about OpenGL 4.3, it’s worth noting at this point that OpenGL 4.3 will offer full OpenGL ES 3.0 compatibility, allowing developers to target both platforms at the same time.
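To give a rough idea of what that cross-targeting looks like in practice, below is a minimal sketch of a GLSL ES 3.00 shader pair, written here as C string literals with made-up names (a_position, u_mvp, and so on). Since OpenGL 4.3 is slated to accept ES 3.0 shaders as part of its compatibility support, the same source should be usable against a desktop 4.3 context or a mobile ES 3.0 context.

/* Minimal GLSL ES 3.00 vertex/fragment pair as C string literals.
 * Illustrative only; all names are hypothetical. */
static const char *vertex_src =
    "#version 300 es\n"                              /* ES 3.0 shading language */
    "layout(location = 0) in vec3 a_position;\n"     /* 'in' replaces ES 2.0's 'attribute' */
    "layout(location = 1) in vec2 a_texcoord;\n"
    "out vec2 v_texcoord;\n"                         /* 'out' replaces 'varying' */
    "uniform mat4 u_mvp;\n"
    "void main() {\n"
    "    v_texcoord = a_texcoord;\n"
    "    gl_Position = u_mvp * vec4(a_position, 1.0);\n"
    "}\n";

static const char *fragment_src =
    "#version 300 es\n"
    "precision mediump float;\n"
    "in vec2 v_texcoord;\n"
    "out vec4 frag_color;\n"                         /* user-declared output replaces gl_FragColor */
    "uniform sampler2D u_tex;\n"
    "void main() {\n"
    "    frag_color = texture(u_tex, v_texcoord);\n" /* texture() replaces texture2D() */
    "}\n";

The explicit layout locations and user-declared fragment outputs are examples of the OpenGL 3.x-era conveniences that ES 3.0 picks up.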
On that note, OpenGL ES 3.0 will mark an interesting point for the graphics industry. Thanks to its use both in gaming and in professional applications, desktop OpenGL has to serve both groups, a position that sometimes leaves it caught in the middle and generating some controversy in the process. The lack of complete modernization for OpenGL 3.0 ruffled some game developers’ feathers, and while the situation has improved immensely since then, the issue never completely goes away.
OpenGL ES, on the other hand, is primarily geared towards consumer devices and has little legacy functionality to speak of, which makes it easier to implement but also allows it to be relatively more cutting edge. All things considered, for game developers OpenGL ES 3.0 is very nearly the OpenGL 3.0 they didn’t get in 2008, although geometry shaders are notably absent. Consequently, while OpenGL 4.x (or even 3.3) is still more advanced than OpenGL ES 3.0, it’s not out of the question that a bigger share of desktop OpenGL game development will end up targeting OpenGL ES rather than desktop OpenGL.
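For developers who do go the OpenGL ES route, asking for an ES 3.0 context is only a small change from the usual ES 2.0 setup. The sketch below assumes an EGL implementation that exposes the ES3 config bit (EGL_OPENGL_ES3_BIT_KHR from EGL_KHR_create_context); the helper name is made up, and error handling is pared down.

#include <EGL/egl.h>
#include <EGL/eglext.h>   /* EGL_OPENGL_ES3_BIT_KHR */

/* Sketch: request an OpenGL ES 3.0 context through EGL. Assumes dpy has
 * already been initialized; real code should check errors and fall back
 * to ES 2.0 if the ES3 bit isn't available. */
static EGLContext create_es3_context(EGLDisplay dpy)
{
    static const EGLint config_attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES3_BIT_KHR,   /* ES3-capable configs only */
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    static const EGLint context_attribs[] = {
        EGL_CONTEXT_CLIENT_VERSION, 3,                 /* 2 here would yield an ES 2.0 context */
        EGL_NONE
    };
    EGLConfig config;
    EGLint num_configs = 0;

    if (!eglChooseConfig(dpy, config_attribs, &config, 1, &num_configs) || num_configs == 0)
        return EGL_NO_CONTEXT;
    return eglCreateContext(dpy, config, EGL_NO_CONTEXT, context_attribs);
}

Because ES 3.0 is backwards compatible, existing ES 2.0 rendering code should run unchanged on the newer context.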
46 Comments
whooleo - Monday, August 6, 2012
Concerning the desktop, OpenGL is hardly being used in games due to DirectX being better and because many of the Linux and Mac OS X users aren't gamers. I mean there are games on Mac OS X but not many gamers on Macs. Plus it shows; Apple's graphics drivers aren't where they should be compared to Linux and Windows, along with the graphics cards they include with their PCs. Enough about Mac OS X, Linux also has too many issues with graphics cards and drivers to be friendly enough to the average gamer. This is just a summary of the reasons why desktop OpenGL adoption is low. Now don't get me wrong, OpenGL can be a great open source alternative to DirectX but some things need to be addressed first.

bobvodka - Tuesday, August 7, 2012
Just a minor correction; OpenGL isn't 'open source', it is an 'open standard' - you get no source code for it :)

whooleo - Tuesday, August 7, 2012
Whoops! Thanks for the correction!

beginner99 - Tuesday, August 7, 2012
Maybe I got it all wrong, but aren't the normal gaming GPUs rather lacking in OpenGL performance? I always thought that was one of the factors the workstation cards differ in (due to drivers). Wouldn't that impact game performance as well?

Cogman - Tuesday, August 7, 2012
> As OpenCL was designed to be a relatively low level language (ANSI C)

OpenCL was BASED on C99. It is not, however, C99. They are two different languages (in other words, you can't take OpenCL code and throw it into a C compiler and vice versa).
Sorry for the nitpick; however, it is important to note that OpenCL is its own language (just like CUDA is its own language).
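To illustrate what I mean (sketch only, names made up): even a trivial OpenCL C kernel leans on constructs a C99 compiler has never heard of - the __kernel and __global qualifiers, the built-in work-item functions, and so on.

/* A trivial OpenCL C kernel held as a C string; the marked constructs
 * are OpenCL-specific and are not valid C99. */
static const char *kernel_src =
    "__kernel void scale(__global float *data,\n"    /* __kernel / __global: not in C99 */
    "                    const float factor)\n"
    "{\n"
    "    size_t i = get_global_id(0);\n"             /* built-in work-item index query */
    "    data[i] = data[i] * factor;\n"
    "}\n";

The host hands that string to clCreateProgramWithSource()/clBuildProgram(), so it's the OpenCL runtime's compiler - not your C compiler - that ever sees it.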
UrQuan3 - Thursday, August 16, 2012
Maybe someone can set me straight on this. Years ago, I had a PowerVR card for my desktop (Kyro II). While this was not a high end card by any means, I seem to remember a checkbox labeled "Force S3TC Compression". The card would then load the uncompressed textures from the game and compress them using S3TC before putting them in video RAM. The FPS increase was very noticeable, although load times went up a little.

Am I confused about that? If I'm not, why isn't that more common? Seems like that would solve the problem of supporting multiple compression schemes. Of course, if a compression scheme isn't general purpose, that could cause problems.
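For what it's worth, desktop OpenGL still exposes both halves of that idea: you can hand the driver uncompressed pixels and ask it to compress them on upload, or upload blocks you compressed offline yourself. A rough sketch of the two paths, assuming GL_EXT_texture_compression_s3tc is available (helper names are made up; error checks and function loading are omitted):

#include <GL/gl.h>

/* DXT1 token from GL_EXT_texture_compression_s3tc, defined here in case
 * the system headers don't provide it. */
#ifndef GL_COMPRESSED_RGB_S3TC_DXT1_EXT
#define GL_COMPRESSED_RGB_S3TC_DXT1_EXT 0x83F0
#endif

/* Path 1: supply plain RGB pixels but request a compressed internal format,
 * so the driver performs the S3TC encode itself - roughly what the old
 * "Force S3TC Compression" checkbox did behind the application's back. */
static void upload_and_let_driver_compress(GLsizei w, GLsizei h, const unsigned char *rgb)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                 w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, rgb);
}

/* Path 2: upload data that was already DXT1-compressed offline, byte for
 * byte. DXT1 stores each 4x4 texel block in 8 bytes, hence the size math. */
static void upload_precompressed(GLsizei w, GLsizei h, const unsigned char *dxt1_blocks)
{
    GLsizei size = ((w + 3) / 4) * ((h + 3) / 4) * 8;
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                           w, h, 0, size, dxt1_blocks);
}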