Alright, so as soon as I started bitching about graphics, my coworker, let's just call him Linux Graphics Hater (warm applause everyone! ready those tomatoes!), went off on a rant about the technical reasons why open source ATI and intel drivers still suck ass. He also corrected me: nvidia might actually be making money from some of these linux drivers. Good for them, but as long as they're still kind of hiding the fact that they're only really doing it for their paying customers, I think it supports my overall point.
Anyways, without further ado, I present Linux Graphics Hater's inaugural rant...
So, everyone keeps ranting on about how wonderful it is that Intel and ATI have open drivers and specs and how nvidia needs to get with the program and how they won't buy nvidia parts anymore.
Now, hang on a second - if you are lucky enough to have an intel/ati system and an nvidia system sitting side by side, you can play along, but otherwise, read on.
Run glxinfo on each machine and compare the output - I think you'll find the results... instructive, but let's ask ourselves a series of questions.
- Which driver(s) support pbuffers?
- Which driver(s) support framebuffer objects?
- Which driver(s) support GLSL (shaders)?
- Which driver(s) support redirected Direct Rendering?
- Which driver(s) offer full OpenGL 2.1 with hardware acceleration?
- Which driver(s) offer full GLX 1.4 with hardware acceleration?
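If you want to play along programmatically, here's a quick sketch. The extension strings are the real ARB/EXT/SGIX names behind the questions above; the sample dump and the parsing are mine (in practice you'd feed it the output of `glxinfo` via `subprocess`):

```python
# Hypothetical excerpt of `glxinfo` output; on a live machine, capture it
# with subprocess.check_output(["glxinfo"], text=True) instead.
SAMPLE = """\
direct rendering: Yes
OpenGL version string: 2.1.2 NVIDIA 169.12
OpenGL extensions:
    GL_ARB_shading_language_100, GL_EXT_framebuffer_object, ...
server glx extensions:
    GLX_SGIX_pbuffer, GLX_ARB_multisample, ...
"""

# Real extension names corresponding to the questions above.
FEATURES = {
    "pbuffers": "GLX_SGIX_pbuffer",
    "framebuffer objects": "GL_EXT_framebuffer_object",
    "GLSL": "GL_ARB_shading_language_100",
}

def check_features(glxinfo_output):
    """Map each feature to whether its extension string appears in the dump."""
    return {name: ext in glxinfo_output for name, ext in FEATURES.items()}

for name, present in check_features(SAMPLE).items():
    print(f"{name}: {'yes' if present else 'NO'}")
```

Run it against both machines' dumps and compare. On the nvidia box, everything shows up; guess what happens on the other one.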
If you've spent any time trying to do any OpenGL work, you'll know the answers to these questions - and it's the same answer to all of them. The nvidia driver is the only one out there that actually has full OpenGL support. The Mesa guys will happily tell you how it supports the full 2.1 spec as well - and then mumble something about a software renderer - yes that's right, as long as you don't need any hardware acceleration, Mesa is the tool for you - or maybe we should reevaluate who the tool is...
The sad truth is that none of the open source drivers actually offer the hooks necessary to enable full OpenGL support, even when the hardware itself is capable. Publishing documentation and having paid full-time developers in house has not fixed this problem for either ATI or Intel. (Full disclosure: the closed-source ATI drivers do support some of these features, but no freetard is interested in them anymore.) Why? Because there's no infrastructure - the Linux DRI/DRM layer is broken and efforts to fix it continue at a glacial pace.
How did nvidia avoid this? They bypassed it completely - the nvidia driver may look like a regular Xorg video driver but it's actually very invasive and replaces the bottom third of the X server (most bits of X are driven through overridable function tables - glorious, eh?). They had no choice: you can have the world's most awesome hardware and developers, but if you have to be compatible with DRI/DRM, you're screwed and none of that will help you.
It's a crude approximation, but the most crucial difference between the nvidia architecture and DRI/DRM is that nvidia actually have a memory manager - and a unified one at that. Without a memory manager it's impossible to allocate offscreen buffers (hence, no pbuffers or fbos), and without a unified memory manager it's impossible to reconcile 2D and 3D operations (hence no redirected Direct Rendering). The Accelerated Indirect GLX feature that the freetards were busy raving about is an endless source of confusion - and ultimately a hack to work around their lack of a memory manager.
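To make the point concrete, here's a toy sketch (entirely mine - not nvidia's code, not DRI's) of why a single allocator matters: every buffer, whether it's a 2D pixmap or a 3D pbuffer, has to come out of the same pool of video memory, so one arbiter has to hand out the space or the two sides stomp on each other.

```python
class VideoMemoryManager:
    """Toy first-fit allocator over a single pool of 'VRAM'.

    A real driver also handles alignment, tiling, eviction to system RAM,
    and hole coalescing; this only shows the core idea: every buffer
    (scanout, pixmap, pbuffer, FBO texture) comes from one arbiter,
    so 2D and 3D rendering can coexist.
    """

    def __init__(self, size):
        self.size = size
        self.allocs = {}          # handle -> (offset, length)
        self.free = [(0, size)]   # list of (offset, length) holes
        self.next_handle = 1

    def alloc(self, length):
        for i, (off, hole) in enumerate(self.free):
            if hole >= length:
                handle = self.next_handle
                self.next_handle += 1
                self.allocs[handle] = (off, length)
                if hole == length:
                    del self.free[i]
                else:
                    self.free[i] = (off + length, hole - length)
                return handle
        raise MemoryError("VRAM exhausted")

    def free_buf(self, handle):
        off, length = self.allocs.pop(handle)
        self.free.append((off, length))  # real code would coalesce holes

# With one manager, the X server's pixmap and a GL client's pbuffer get
# allocated side by side without either needing to know about the other.
vram = VideoMemoryManager(64 * 1024 * 1024)
pixmap = vram.alloc(1024 * 768 * 4)   # 2D: a redirected window's pixmap
pbuffer = vram.alloc(512 * 512 * 4)   # 3D: an offscreen GL pbuffer
```

Take the manager away and there's simply nobody who can answer "where does this offscreen buffer live?" - which is exactly the hole DRI/DRM has been sitting in.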
Indirect rendering is when a GL application delegates 3d operations to the X server instead of talking directly to the 3D driver. This makes operations slower, but not necessarily unusably slow - as long as the X server itself is capable of talking to the 3D driver and making hardware accelerated calls. Now, in DRI/DRM land, the X server originally *could not* talk to the 3D driver because only one direct client could run at a time - so the server itself was excluded because most people wanted their 3D apps to do the talking. However, they realised that if they forced all 3D apps to use indirect rendering, they could avoid the need for a memory manager, because the X server itself acts as a single point of control over all 2D and 3D rendering - so they went and fixed things so that the server could be a 3D client and accelerate indirect rendering, and thus AIGLX was born as a feature to be shouted about from the rooftops. Never mind that 3D apps would then have to use indirect rendering and be slowed down. Never mind that nvidia's driver offered Accelerated Indirect Rendering from day one back in 2000. Never mind that nvidia don't need to use it because they can do redirected direct rendering properly.
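Here's a toy model of the difference (my sketch, not real GLX internals): a direct client writes commands straight into a hardware command buffer itself, while an indirect client serializes every call into protocol that the X server unmarshals and replays - an extra round trip on every single call, and it only goes fast at all if the server itself is an accelerated 3D client.

```python
import struct

hardware_ring = []  # stand-in for the hardware's DMA command buffer

def hw_submit(opcode, *args):
    hardware_ring.append((opcode, args))

class DirectContext:
    """Direct rendering: the client talks to the hardware itself."""
    def draw_triangle(self, x, y):
        hw_submit("DRAW_TRI", x, y)

class IndirectContext:
    """Indirect rendering: every call is marshalled into GLX-style wire
    protocol, shipped to the X server, unmarshalled, then submitted by
    the server - the single point of control over all rendering."""
    def __init__(self, server):
        self.server = server
    def draw_triangle(self, x, y):
        packet = struct.pack("!HII", 1, x, y)  # fake wire format: opcode + args
        self.server.dispatch(packet)

class XServer:
    def dispatch(self, packet):
        _opcode, x, y = struct.unpack("!HII", packet)
        hw_submit("DRAW_TRI", x, y)  # the server does the talking

DirectContext().draw_triangle(10, 20)
IndirectContext(XServer()).draw_triangle(10, 20)
# Both commands reach the ring, but the indirect path paid for a
# serialize/deserialize round trip on the way - per call.
```

Multiply that marshalling cost by every glVertex in a busy scene and you see why forcing *all* apps through the indirect path is a hack, not a feature.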
So, this is clearly a problem and they're not fools, so they've been trying to fix it - for how many years now? Even today, despite all the apparent progress, if you go out and install the latest release of your favourite distro, you will get a memory-manager-less driver and no support for any of these features. Only if you go and dig into exotic experimental branches of the drivers and mesa, and apply patches to your kernel tree, will you get something that vaguely approximates a memory-manager-equipped driver - and only for a subset of the 'supported' intel/ati hardware. That's just great.
So, why do you think nvidia doesn't give two shits about all the petitions and ranting and pleading and threats to go use someone else's hardware? Guess what - they write linux drivers because paying customers want them - and these places do serious rendering and need these full OpenGL features - otherwise nvidia wouldn't have added them in the first place! They aren't going to give you the time of day when you come to them with your shitty little open source driver that doesn't support features invented over 10 years ago (pbuffers at SGI - 1997).
And fuck, why do my nvidia boxes suspend/resume successfully while my Intel graphics one has to run an old patched driver because the latest one hangs on resume. Fucking awesome. Good work guys.