I'm still trying to get my head around the Xorg architecture and how it relates to DRM, but I went through this stuff a bit with my netbook. It's an Acer Aspire One that had the misfortune of pairing a PowerVR GPU with the Intel Atom processor (AKA the GMA500). I have accelerated 2D and 3D in X on it, so things are fairly snappy, and I wanted to help with any efforts to bring more of the same to the Pi.
As I understand it, the kernel DRM module provides the kernel-space support for the libdrm library that sits in user space, and handles all of the drawing commands that come from X (or other applications?). I'm a graphics/game developer professionally, but I'm not super experienced with embedded systems.
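To make that layering a bit more concrete: user space talks to the kernel DRM module through device nodes under /dev/dri, and you can see which kernel driver is bound to each card without touching libdrm at all, just via sysfs. A minimal Python sketch (the paths are the usual Linux conventions, and it degrades gracefully on a box with no DRM device):

```python
import os

def list_drm_devices(dri_dir="/dev/dri", sys_drm="/sys/class/drm"):
    """Enumerate DRM device nodes and, where sysfs exposes it,
    the name of the kernel driver bound to each node."""
    devices = []
    if not os.path.isdir(dri_dir):
        return devices  # kernel exposes no DRM support at all
    for node in sorted(os.listdir(dri_dir)):
        driver = None
        # /sys/class/drm/<node>/device/driver is a symlink whose
        # basename is the driver name (e.g. i915, or a Pi DRM driver)
        link = os.path.join(sys_drm, node, "device", "driver")
        if os.path.islink(link):
            driver = os.path.basename(os.readlink(link))
        devices.append((node, driver))
    return devices

if __name__ == "__main__":
    for node, driver in list_drm_devices():
        print(node, driver or "(unknown driver)")
```

Running that on my netbook is how I confirmed which DRM driver the GMA500 stack was actually using.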
On my netbook there's an open DRM driver that talks to a bunch of closed PowerVR stuff, and then there's also, I believe, a DRI2 driver and GLX driver that collectively provide X with the 2D and 3D acceleration support it requires.
But ultimately, my take is that any solution that isn't taking advantage of the GPU is probably going to sell itself short. In my head I envision a system where the X server translates all of the protocol requests into GL commands and hands them off to the hardware accelerator. Each window gets its own render surface, which provides pixel-level read/write support, and then it's just a matter of texture-mapping those surfaces onto primitives for compositing. Maybe that doesn't require any kernel-space implementation? Is that what Glamor is supposed to do?
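To pin down what I mean by representing the desktop in an orthographic 3D space: the compositor just needs a projection that maps window pixel coordinates straight to clip space, i.e. the matrix glOrtho(0, width, height, 0, -1, 1) would give you. A rough Python sketch of that mapping (the 800x480 screen size is just a made-up example):

```python
def ortho(left, right, bottom, top, near=-1.0, far=1.0):
    """Row-major 4x4 orthographic projection matrix,
    following the classic glOrtho definition."""
    return [
        [2.0 / (right - left), 0.0, 0.0, -(right + left) / (right - left)],
        [0.0, 2.0 / (top - bottom), 0.0, -(top + bottom) / (top - bottom)],
        [0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)],
        [0.0, 0.0, 0.0, 1.0],
    ]

def project(m, x, y):
    """Map a 2D pixel coordinate through the matrix to
    normalized device coordinates (x', y', z')."""
    v = (x, y, 0.0, 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))

# Hypothetical 800x480 screen: pixel (0, 0) is the top-left corner,
# so it should land at NDC (-1, 1) and the bottom-right at (1, -1).
m = ortho(0.0, 800.0, 480.0, 0.0)
```

With that matrix in place, every window is just a textured quad at its pixel rectangle, and the GPU does the rest.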
Or is the correct path to do something kernel-level that makes use of the OpenMAX IL API that's documented in the firmware repo?
At any rate, I'd like to help you get a git repo set up for your own work if you haven't done so already, just so that I, and others like me, can see where you're at, get up to speed, and assist. Otherwise I just run the risk of playing catch-up to your efforts and duplicating work.
But basically, the stage I'm at right now is trying to understand how the parts of the video stack fit together, and whether there's enough API access to the closed sections of the VideoCore to make GL- and VG-assisted X rendering possible and/or practical. It sounds like Dom has a lot of insight into what is and isn't possible, and what might become possible in the future. I'm assuming that with all of the Broadcom employees who are part of the Pi Foundation we essentially have 'vendor support': people with both access to the Broadcom specs and tools, and the inclination to support the needs of the open portions of the Pi, so that we can all get the most from this wonderful device?
Anyway, tl;dr version of the above is:
- What's the best way to get 2D and 3D acceleration for X running on the GPU?
- Is there enough exposed API that we non-NDA developers can even pull this off?
- If not, is there anyone we can talk to about getting those APIs made available so that we can move forward?
- Generally speaking, any 2D windowing environment can be represented in an orthographic 3D space, so in theory OpenGL ES should be able to do all the heavy lifting. There's also OpenVG to consider. Is there a sane way to hack that into X?
- I am an experienced 3D game developer, with some kernel hacking experience. How can I assist in these efforts directly?
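For what it's worth, the 'heavy lifting' in the OpenGL ES bullet above mostly boils down to textured quads plus per-pixel blending, and the blend itself is just the standard 'over' operator that glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) gives you. A toy Python version of that operator, purely to pin down the math (no real window contents, obviously):

```python
def blend_over(src_rgb, src_alpha, dst_rgb):
    """Standard 'over' blend: out = src*a + dst*(1-a), i.e. what
    GL's (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) computes per pixel."""
    return tuple(
        int(round(s * src_alpha + d * (1.0 - src_alpha)))
        for s, d in zip(src_rgb, dst_rgb)
    )

def composite(windows, background_rgb):
    """Paint windows back-to-front onto one background pixel.
    Each window is (rgb, alpha); a compositor does exactly this
    for every pixel, which is what texture-mapped quads buy you."""
    out = background_rgb
    for rgb, alpha in windows:
        out = blend_over(rgb, alpha, out)
    return out
```

So the per-pixel math is trivial; the whole point is getting the GPU, not the ARM, to do it for every pixel of every window.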
Looking forward to pitching in...