
LIBV Intentionally Breaks Videodrivers
Luc Verhaegen


The linux desktop is dead! [Sep. 17th, 2010|03:41 pm]
[Current Location | Toulouse, France, XDS2010]
[Mood | blah]

Or so it will be, soon, if these guys get their way.

Apparently (and this has been the hot new idea for the last year or two), for Xserver 1.10 people want to get rid of one of the greatest things that XFree86 brought us, and one of the better changes that happened after the X.org fork: modular graphics drivers.

While the current proposal is simply to undo the modularization work of the mid-naughties (thanks jezza!), it immediately sparked the imagination of others to go even further (to which Alanc answered rather strikingly). But merging drivers back is in itself already a very damaging move.

So what is the goal behind merging drivers?

The official reason for this is "cleaning up the API", but I fail to see any logical link between being able to clean up APIs and mashing everything together.

There is simply nothing that stops APIs from being improved when drivers are not a full and whole part of the xserver build tree.

A mashed-together tree has no more advantage than a build system like our tinderbox.

And having modular drivers does not mean that one has to have a fully static API and ABI: you just need dependable ABI bumping and, for the sake of overhead, sane and forward-looking API changes. Free software drivers are of course best able to keep in sync with API changes, but this is no different whether they are external or internal to the server build tree.
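
To make this concrete: a minimal sketch, using the types and macros from xorg-server's xf86Module.h, of the version record that every modular xf86-video driver already exports today. The "example" driver is hypothetical; the point is the two ABI fields, which the server checks at module load time.

    /* Minimal sketch of a modular driver's version record; types and
     * macros come from xorg-server's xf86Module.h, the "example"
     * driver itself is hypothetical. */
    #include "xf86Module.h"

    static pointer
    exampleSetup(pointer module, pointer opts, int *errmaj, int *errmin)
    {
        /* driver registration would happen here; non-NULL on success */
        return (pointer) 1;
    }

    static XF86ModuleVersionInfo exampleVersRec = {
        "example",
        MODULEVENDORSTRING,
        MODINFOSTRING1,
        MODINFOSTRING2,
        XORG_VERSION_CURRENT,
        0, 1, 0,                /* this driver's own version */
        ABI_CLASS_VIDEODRV,     /* the ABI class this module implements */
        ABI_VIDEODRV_VERSION,   /* the ABI version it was built against */
        MOD_CLASS_VIDEODRV,
        {0, 0, 0, 0}
    };

    /* The server compares the ABI version above with its own at load
     * time: bump the ABI on every incompatible change, and a mismatch
     * is refused cleanly instead of crashing at runtime. */
    _X_EXPORT XF86ModuleData exampleModuleData = {
        &exampleVersRec,
        exampleSetup,
        NULL
    };

This is what dependable ABI bumping buys: the check is mechanical, and nothing about it requires the driver to live in the server tree.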

However, there is a difference in how one approaches API cleanups in a modular world, as one needs to think a bit more about how to do such API changes. This often leads to a cleaner design and a better structure, and it often means that people spend time trying to understand the existing code and how best to adjust it to fit the new needs, without throwing out the baby with the bathwater. By moving the drivers into the xserver tree, and doing away with the driver API, we will only open the door to libpciaccess-style breakage every month.

So maybe this is the real wish behind wanting to merge back the drivers: being able to pull crazy stunts with half-arsed, badly structured and untested code, without implications and without accountability.

Apart from APIs degrading further, there are other, more fundamental issues with this, with far-reaching consequences.

With the graphics drivers tied into the X server, the only way to get driver updates (bugfixes, new features or new hardware support) is to install a new Xserver.

This will probably be claimed as a benefit, as people want more testing of upstream code. But a slight increase in the usage of upstream code will mean a much bigger decrease in the userbase of released code, and people will be even more afraid of updating anything in their system than they are today.

But this is how the kernel does it!

We've all heard this from our mothers: "If some other kid jumps off a cliff, is that a reason to jump off that cliff as well?"

Basically, while it might be a good idea for the often much simpler devices with rather complete drivers (at least compared to graphics drivers :)) to be a full and whole part of the kernel, it does not and will not work well for graphics drivers.

The complexity of graphics drivers and the amount of movement in them, especially with the many parts living in userspace behind very unstable interfaces, make this rather messy. Such an approach is only feasible when the drivers are rather stable, and they definitely need a very stable ABI to userspace.

No-one will be able to maintain such a level of stability for graphics drivers, and I am sure that no-one will stand up to defend going that route once this requirement is mixed into the discussion.

How to sneak in a 1 to 1 version dependency between xserver, mesa and the linux kernel... Pt. 1.

In January this year, in the run-up to xserver 1.8, there was a commit to the xserver, labelled "xserver: require libdri 7.8.0 to build", where an autoconf rule was added to depend on this version of "libdri". I believe that this was mainly because of DRI2 changes.

When I say depend here, it is not a complete dependency on a given version of libdri: one can always build the xserver without any DRI support whatsoever. But who, on the desktop, really wants that today?

So while this all-or-nothing decision is in itself questionable, there is another question to be asked here: what is this libdri?

There is a dri.pc on most systems today, and there is a libdri.so on most systems today. The former is a pkg-config file coming from the mesa tree; the latter is an xserver-internal convenience library (hence the lack of .so versioning). Smells fishy, doesn't it?

Now, while you might want to spend time looking high and low for the libdri from mesa, you will not find it. Mesa comes with 10 or more different libdris, one for each driver it supports, each with the whole of mesa linked in statically, in the form of a driver_dri.so...
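
To see how thin the interface to those enormous binaries actually is, here is a hedged sketch of how a loader gets at one of them; the /usr/lib/dri path and the __driDriverExtensions entry point are assumptions based on how mesa shipped its DRI drivers around this time.

    /* Sketch: dlopen()ing one of mesa's per-driver binaries, each of
     * which carries its own statically linked copy of the mesa core.
     * Path and symbol name are assumptions from the mesa of this era. */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        void *handle = dlopen("/usr/lib/dri/i915_dri.so",
                              RTLD_NOW | RTLD_GLOBAL);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* the exported entry point: a NULL-terminated list of
         * extension structs describing what this binary implements */
        void *extensions = dlsym(handle, "__driDriverExtensions");
        printf("__driDriverExtensions at %p\n", extensions);

        dlclose(handle);
        return 0;
    }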

Urgh, how broken is that?

So, the xserver now depends on 10 or more different, driver-specific, enormous binaries, all because its dri support now depends on a given version of the dri protocol. Or, restating that: the xserver depends on a very specific version of the monolithic, 80s-style mesa tree.

Expanding the logic for the xserver and the drivers: why not just mash the mesa and xserver trees together then? :)

More parts come into play... (or dependency Pt. 2)

The xserver depends on the standard drm infrastructure, and it is compatible all the way back to a 4+ year old release of libdrm, namely version 2.3.0, as the basic libdrm code has barely changed since.

Mesa, however, is a different story altogether. It depends, hard, on the latest version of libdrm, and this has been so since October 2008, when intel introduced libdrm_intel in libdrm 2.4.0.

In essence, this libdrm_intel is nothing more than a driver-stack-internal convenience library. It only contains code that is specific to intel hardware, and its only users are other parts of the intel driver stack (if those parts were living separately already). There are no direct dependencies from anything else.
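
For illustration, this is the kind of call that lives in libdrm_intel, sketched from the intel_bufmgr.h API of the time; only the intel X and mesa drivers have any business making it.

    /* Sketch: allocating a GEM buffer through libdrm_intel's buffer
     * manager (intel_bufmgr.h).  Nothing outside the intel driver
     * stack calls this, yet it rides along in libdrm and drags the
     * whole library's version requirement forward with it. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <intel_bufmgr.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0)
            return 1;

        drm_intel_bufmgr *bufmgr = drm_intel_bufmgr_gem_init(fd, 4096);
        if (!bufmgr)
            return 1;

        drm_intel_bo *bo = drm_intel_bo_alloc(bufmgr, "scratch",
                                              64 * 1024, 4096);
        printf("intel bo allocated, gem handle %u\n", bo->handle);

        drm_intel_bo_unreference(bo);
        drm_intel_bufmgr_destroy(bufmgr);
        close(fd);
        return 0;
    }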

But ever since October 2008, both the intel X driver and the intel mesa driver have depended on the latest libdrm version, and since then both radeon and nouveau have joined the frenzy.

So, while there might be some backwards compatibility between dri drivers and libdrm drivers, the reality is that intel, radeon and nouveau are today playing hopscotch. Because mesa is monolithic, and at least one of its drivers is going to depend on the latest libdrm version, the whole of monolithic mesa simply depends on the latest libdrm version.

Since mesa has been depending on the latest libdrm for a few years now, and the xserver has been depending on the latest mesa version since the start of 2010, in turn, the xserver now depends on the latest libdrm version.

Nice!

How does this tie in with the kernel? (dependency Pt. 3).

Well, since libdrm has those driver-specific sublibraries, they of course call drm driver-specific ioctls, and of course these ioctls change all the time. While some people claim that they try to abstract at this layer (and that this strategy is good enough for everyone...), and claim to try to keep the kernel-to-userspace interface stable, this is only true for a very limited range of kernel and userspace combinations. And now we have intel, radeon _and_ nouveau playing at this level, dividing whatever median compatibility range there is by three.
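
The split is easy to see from userspace: the generic core of libdrm really is stable. A sketch, using the long-standing drmGetVersion() call; it is the driver-specific ioctls layered next to this that churn.

    /* Sketch: the stable, generic sliver of the kernel interface.
     * drmGetVersion() wraps the generic version ioctl and has been
     * compatible across years of kernels; the intel, radeon and
     * nouveau specific ioctls beside it are the moving parts. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <xf86drm.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0)
            return 1;

        drmVersionPtr v = drmGetVersion(fd);
        if (v) {
            printf("kernel drm driver: %s %d.%d.%d\n",
                   v->name, v->version_major, v->version_minor,
                   v->version_patchlevel);
            drmFreeVersion(v);
        }
        return 0;
    }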

The result is that libdrm can pretty much only be backwards compatible to the kernel by accident.

So, continuing our logic from earlier, the latest xserver depends on the latest mesa, the latest libdrm and the latest kernel.

Smashing lads! Well done! And all of this on a set of connections and foundations that make a house of cards look like a block of granite.

The root of the problem.

Graphics hardware is horribly complex. Several years ago, a single graphics card already broke the teraflop barrier, managing what a huge IBM supercomputer had only managed a good decade earlier. Single graphics cards come with many hundreds of shader cores running at frequencies above 1GHz, have multiple gigabytes of RAM, eat 200+ Watts, and can drive up to 6 displays today. There is no other single piece of hardware which is this complex.

And this complexity is of course also there in software.

You cannot count the different parts of a modern free software graphics driver stack on one hand anymore. There is the kernel drm part, the firmware, the libdrm part, the X driver, a pair of mesa drivers, an xvmc driver and possibly another media acceleration driver. A graphics driver stack can be made up of up to 8 parts today.

All of those parts are scattered over the system. Two parts are shipped with the kernel, one with libdrm, two with mesa, and the remainder can be found in an xf86-video tree.

Naturally, in order to work optimally, these different parts have very direct and acute dependencies on each other. Bugs, new features and new hardware support incur changes to the interfaces between those different parts all the time.

The way those different parts are spread all over the place today makes it almost impossible to have an optimal setup. Most of the time, one is glad if it works at all. What's more, this spread is the core reason for the de-facto 1-1 version tie between kernel, libdrm, xserver and mesa.

The consequences of a 1-1 version tie between kernel, xserver and mesa.

With graphics hardware and graphics drivers being this complex, there is simply no way to have them in a bug-free or constantly "useful" state.

We just _have_ to live with the fact that graphics drivers will be buggy, and we should try to handle this as gracefully as possible.

This means that we should be able to replace all or parts of the graphics driver stack at any time, without negatively affecting other parts of the system.

This is what our audience, our customers as it were, expect from us.

But, by having kernel, libdrm, xserver and mesa tied together, and the different parts of the driver stack spread over them, it is impossible to exchange 1 part of the graphics driver stack, or to exchange just the graphics driver stack, without changing the whole.

By forcing our users to update all this infrastructure each time, we will usually trigger a cascade of updates that reaches far up the software stack, to the point where trying to fix some small issue in the graphics driver might mess up OpenOffice or another program that your average linux desktop user depends on.

Also, what is the chance of getting wireless, suspend/resume and your graphics driver all working to an acceptable level at the same time? It becomes very, very small, and when everything does work, you had better not run into issues somewhere else, as an update might ruin that very precarious balance.

Killing the desktop for everyone.

No normal person will then be able to run a free software desktop system and actually use it, because an arbitrary mix of hardware cannot possibly work together acceptably, at least not for any measurable amount of time.

What will be left over are preloads and embedded systems.

Preloads are when some OEM, either itself or through a linux distributor, spends many, many man-years making all the parts work together properly. In the end, images are produced which install on one very specific system and cannot be updated or maintained, except by a specialised team of people. Embedded systems basically work the same way: one combination of hardware, one image, no updates for average users except those provided by the manufacturer or its partners.

So while people might buy a free software based system in a shop around the corner, and be somewhat happy with it for a while, normal desktop users will be left out in the cold.

Looking further, by shutting out our own users, we will take away the breeding ground that free software is based on.

What solution is there?

By now, that should be pretty obvious.

Bring the different parts of the graphics driver stack together, and make its parts independent of the infrastructure they depend on.

This allows driver developers to change internal structure and API at will, while at the same time providing the infrastructure compatibility that users, hardware and distribution vendors require.

All it takes is a little care in designing infrastructure APIs, and a little care in keeping driver stacks compatible, even if that compatibility comes at the cost of disabling some features for some combinations of the infrastructure.
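
As a sketch of what "disabling some features for some combinations" looks like in driver code: guard the newer facility behind the API version that the build system detected, and compile it out otherwise. DRI2INFOREC_VERSION is the server's real dri2.h macro; HAVE_DRI2 and the driver macro here stand in for whatever the driver's configure script would define.

    /* Sketch: build against whatever DRI2 API the detected server
     * offers; on older servers the feature is simply compiled out,
     * and the driver still builds and runs. */
    #ifdef HAVE_DRI2
    #include "dri2.h"

    #if defined(DRI2INFOREC_VERSION) && (DRI2INFOREC_VERSION >= 4)
    /* newer servers: swap scheduling is available, hook it up */
    #define DRIVER_HAS_SWAP_SCHEDULING 1
    #else
    /* older servers: fall back to plain blits */
    #define DRIVER_HAS_SWAP_SCHEDULING 0
    #endif

    #endif /* HAVE_DRI2 */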

This is not hard to do, and it is done in multiple places.

Why the Nvidia binary driver is that popular.

In a recent phoronix survey, the number of users on Nvidia hardware and drivers was larger than the number of users on any other combination.


This has a reason, and it has nothing to do with Nvidia being a completely closed-source shop: Nvidia gives users the ability to install any graphics driver stack, and it should mostly be compatible with the environment it is installed in. This is simply what our users need.

What is affected by Nvidia being binary-only is that Nvidia has to put in a lot of work to make things compatible. Free software drivers have a much, much easier task, or at least they would, if they, and the infrastructure they depend on, were developed in a different fashion than is the case today.

An open proof of concept.

My talk at FOSDEM of course mentions my unichrome driver a lot, as it pretty much is my playground these days.

Even though the feature list of this driver is very limited, it now integrates X, DRM and DRI drivers in one massively backwards-compatible build system, with autotools detecting all the API changes across all currently used versions of the necessary infrastructure. What one can see there is that, when some care is taken in structuring the driver, it is not that hard to achieve this: it basically just takes the will to do it.

When I talked at FOSDEM, some people stated that, while this might be possible for DRM and the Xserver, it would be totally impossible for Mesa/DRI, though for Mesa/gallium it should be easy.

Over the next month or so, I took all Mesa versions that were out in the wild, split off the main libraries from the actual DRI drivers, created a set of headers as required by the drivers, created pkg-config files, and then moved the drivers out to their own git repositories. Basically, a DRI SDK was created, and the drivers were now building and running externally to this SDK, across 3 years of DRI development.

When I took that back to the Mesa community, what I of course got was indifference, and, suddenly, claims that while this SDK might be possible for Mesa/DRI, it would definitely not be possible for Mesa/gallium!

The future?

The proposed future direction for graphics drivers is to create graphics driver stacks. If not, we, the developers, might just as well stop working on free software graphics drivers altogether.

And while the current situation is bad, it is not impossible to fix. The problems are known and clear, and a path to the solution should by now be clear as well, but the willingness to put in that bit of extra thought is simply lacking.

So guys, if you really want to move in the wrong direction, please state the real reasons for doing so, state the consequences for your users, and know what the end result will be.

Comments:
From: https://www.google.com/accounts/o8/id?id=AItOawmMdc7BkMiILCRGT9Gtk1LaQfmUfWAHXDw
2010-09-17 03:16 pm (UTC)

Code reuse vs dependency hell

I think that the key factor that determined NVidia's blob success is that they reimplemented everything on their own instead of reusing and/or adapting the existing infrastructure. This is a completely reasonable move for closed-source driver authors, but if one tries to do the same in the open-source world, he will immediately get bashed for code/functionality duplication. Look what happened with reiser4 plugins that were written exactly because the existing vfs infrastructure happened to be inadequate.

Refactoring the common and universal infrastructure is always harder than rolling one's own, so the open-source drivers are in a disadvantageous position right from the outset because of that requirement.

I think this blog post is also relevant to what I already expressed: http://www.yosefk.com/blog/redundancy-vs-dependencies-which-is-worse.html
From: (Anonymous)
2010-09-17 03:49 pm (UTC)

Re: Code reuse vs dependency hell

>I think that the key factor that determined NVidia's blob success is that they reimplemented everything on their own instead of reusing and/or adapting the existing infrastructure.

They did that because there was no (appropriate) infrastructure at the time, you know.
From: (Anonymous)
2010-09-20 12:16 pm (UTC)

Re: Code reuse vs dependency hell

On top of that, NVIDIA drivers just work and are fast. On the development side, they support the latest OpenGL API and give developers and Linux-based graphics/compute-intensive companies access to the whole NVIDIA ecosystem (Cg, OpenCL, CUDA, VDPAU, bindless graphics, etc). Those are certainly the main reasons most people use them.
From: (Anonymous)
2010-09-17 03:26 pm (UTC)

Muuh

You can start a new sentence without starting a new paragraph you know.
From: (Anonymous)
2010-09-17 06:54 pm (UTC)

Re: Muuh

That's the new modular approach to blogging :oD
From: (Anonymous)
2010-10-02 09:03 pm (UTC)

Re: Muuh

You got beaten up a lot in school, didn't you?
From: (Anonymous)
2010-09-17 05:12 pm (UTC)

:)

:)
From: (Anonymous)
2010-09-17 06:00 pm (UTC)

For what it's worth

Check Phoronix; they had a stat that showed who used which graphics driver. Outside of Intel graphics, most users load the proprietary drivers (ATI/nVidia) instead of the FOSS drivers, so how does this affect xorg? Seems pointless...
From: (Anonymous)
2010-09-20 07:05 am (UTC)

Re: For what it's worth

http://www.zdnet.com/blog/computers/ati-pulls-ahead-of-nvidia-in-discrete-graphics-sales-for-second-quarter/3447
From: (Anonymous)
2010-09-17 06:15 pm (UTC)

I regularly test the intel drivers, including git or the release candidates.

I won't test any new driver releases if the drivers are moved back into Xorg.
It would be way too much pain for me: I am using a distribution-compiled Xorg, and I am not willing to compile all that stuff just to report some bugs on a video driver.

- Clemens Eisserer
From: (Anonymous)
2010-09-17 06:39 pm (UTC)

Same here for radeon.

And it's not like there are a lot of testers... I personally reported at least a couple of bugs during development which, had they gone unnoticed before the next release, could have meant a lot of hardlocked systems, or crashes while simply browsing the web with firefox.

From: soig
2010-09-17 07:58 pm (UTC)

Mandriva experience with modularized xorg

As a MandrakeSoft/Mandriva employee from 1999 to 2010, I've seen how much the X11 modularization helped us distributors.
Before, it took a huge, long time to build X11 from the 45mb tarball. This made us hold back fixes, since we'd better accumulate them.
Since xorg 6.9/7.0, packaging X11, upgrading it and pushing fixes became a much more pleasant experience. It got way faster to push/upload a bug fix by just releasing some small updated package.

What's more, it's easy (for now) to test a new driver, or to test gallium, by just building a git snapshot, ...

It makes it easier to work as a packager group on x11.

So I certainly sympathize with your views and with what you propose.
My 2 cents
From: libv
2010-09-18 09:17 am (UTC)

Thanks.

Thanks guys, this is very constructive and good feedback, confirming that this is what is needed.

But is this widespread enough to be able to stop this march towards disaster? Some people have been rather hard of hearing on such topics for a long, long time ;)
From: (Anonymous)
2010-09-18 12:12 pm (UTC)

Luc, have you tried raising this with the Xorg folk *reasonably*? And I emphasize "reasonably", because while I tend to agree with what you say (on this post and past ones), you're also *extremely* antagonistic. Angry tirades questioning their competence aren't going to encourage the Xorg developers to listen to you - quite the opposite, in fact.
From: libv
2010-09-18 02:01 pm (UTC)

Assertiveness is very much needed in this environment, sadly. I had to learn this the very hard way pretty early on in my X career, specifically with my modesetting work.

Another part of this is the rather direct way in which I tend to communicate, which works better in real life, where facial expressions and body language are available too.

Then there is the native speaker advantage, which I lack: I speak Dutch, English, German and some French, and that limits the experience and the richness of each language; most notably, it shows in the way in which I construct sentences.

And then, especially in this environment, I actually am antagonistic, as I know what the same people will state, despite solid logic and proofs of concept. The anticipation of such a response influences the formulation of the message.

For a sample of how that works, look at the video or listen to the audio of my talk at FOSDEM. I was being naysaid, rather baselessly, and my response to that was very rough and hard. My own mindset at the time was very relaxed, as I was where I am used to being, and these people were doing exactly what I was expecting them to do. I was actually thinking "there we go" and "it took a while, but here it is indeed" while listening to the different responses from those two people.
From: skierpage [skierpage.com]
2010-09-20 12:00 am (UTC)

as I know what the same people will state, despite solid logic and proofs of concept

Despite your excellent English skills, you're confusing "antagonistic" with "asshole".
From: (Anonymous)
2010-09-20 07:10 am (UTC)

Sorry that Eric and I questioned you; I'll do my best to keep quiet and not ask a single question next time you speak. -daniels
From: libv
2010-09-20 11:12 am (UTC)

I did not really see that as simply "asking questions", but I guess that might be my anticipation colouring my interpretation of such questions. And I definitely did not see you being as unreasonably negative as Eric.

I do believe that your statements about pvr were incorrect, but that doesn't matter, as the way pvr is dealt with today means that it was not valid for this discussion anyway: it still lives in the embedded world and not in the desktop world.
From: (Anonymous)
2010-09-23 12:12 am (UTC)

to quote Anais Nin

... "We don't see things as they are, we see them as we are."
From: (Anonymous)
2010-09-18 12:52 pm (UTC)

If your position is incompatible with theirs and they don't want to hear you, why not fork? Many devs and users are willing to support a forking initiative.
From: (Anonymous)
2010-09-18 05:45 pm (UTC)

LINUX DESKTOP CAN NEVER DIE!

Yes I agree!
Fork and ask for support in the wider circle of programmers!
We can live with this just as we live with having both KDE and GNOME, ALSA and OSS.
So no big deal.

Btw Luc!
LINUX DESKTOP CAN NEVER DIE!
It is supported by millions of dedicated users, the Debian family, the Ubuntu community, the awesome KDE and the functional GNOME, the almighty Linux kernel, and thousands of native apps that continually evolve and mature, among other factors...

So you have your points, and you do well to address the community, but do not exaggerate about the imminent catastrophe of the linux desktop.
Linux Desktop will find its way to evolve and become better.
From: (Anonymous)
2010-09-18 07:45 pm (UTC)

I agree completely

When I read about this, I thought along the same lines you did. How on earth do they intend to get the graphics drivers into a state comparable to, say, in-kernel NIC drivers? All the work that went into modular X, undone.

Very good arguments in your post, hopefully it'll be read by the right people. (Actually, I have no doubt that it will be, but hopefully they'll listen ;) ).
From: ext_260791
2010-09-18 09:53 pm (UTC)

Re: I agree completely

One of the things I have learned from my online experience is that people are protective of their ideas when they have invested time or prestige in them. When such a move is decided, there must have been at least some discussion, and the winning side surely took its measures and made its commitments in order to win support. Backing down now would spoil the leadership of those involved, at least to some degree.

For this reason, simply arguing will not cause any change of mind, because any change would curtail these people's future ability to lead. When a debate comes to the point this one has come to, it is not about which side is right or wrong any more, but about which is the most capable of imposing its view.

A fork is the only solution; let the user community choose (that is, the distributions which choose for it). That happened in the past, when Xorg was forked from XFree86; it has happened more than once, and that's the strength of free software: if you don't like the way your favourite software is being directed (or "diverted", if you feel that way), you can try an alternative path.

BTW, how many distros out there use XFree86?
From: krc [clowersnet.net]
2010-09-20 01:38 am (UTC)

Re: I agree completely

>how many distros out there use XFree86?
I am pretty sure the answer is zero. Even OpenBSD finally made the move.
From: (Anonymous)
2010-09-19 02:35 am (UTC)

Experience of a Linux user (not developer)

My experience with the Linux desktop is exactly what Luc Verhaegen mentions: I have to hold back updates for the video drivers/xorg or else my setup breaks every few weeks! I have an old IBM T43 that has been crashing randomly for months because of shoddy Intel drivers.

Updates scare me greatly.

It was supposed that once Intel went open source with their drivers, all would be bliss with intel chipsets... well, quite the contrary.

The next laptop I pick up will be an Nvidia one, or I will put Windows on it.

I do not care whose fault the botched state of the linux graphics stack is; the only thing I care about is doing my job so I can get paid, and a randomly crashing laptop won't help me. It frustrates me and drives me away from using Linux on my desktop machines. Why bother with Linux if I can SSH from Windows to the servers I admin?
From: (Anonymous)
2010-09-19 07:50 pm (UTC)

Don't fork it, scratch it..

Don't fork it, just start fresh.

Use Xorg as a reference for both the good and the bad design elements, then build a new, modern X (think X12R1).

Dump all the legacy cruft and focus on the primary architectures and graphics tech no more than 5 years old; target only GNU/Linux (FreeBSD and the others can add their own ports once the base is stable).

Don't try to just rewrite the existing X11; re-engineer the whole thing, starting with an XDMCP that meets modern needs.

But do make it modular, the more so the better.
From: (Anonymous)
2010-09-19 08:11 pm (UTC)

Please just FORK X.Org!

Hi,

I stopped reading after "How to sneak in a 1 to 1 version dependency between xserver, mesa and the linux kernel... Pt. 1." ... not because it wasn't nice, but because I just believe it all anyway, and because the overuse of enters makes it horrible to read.

Anyway, you have so many objections to the current X.Org development, and you even have a working proof of concept for better drivers, so I really urge you to stop just saying that the current X.Org development sucks, and fork it! If you fork it, you can once and for all do a massive X.Org code cleanup to remove the crappy parts and start clean.

I know forking a project like X.Org isn't easy to do, but I somehow suspect that if you do it, it might just work. Distributions certainly won't pick up your fork in the coming releases, and if you break the API, those proprietary drivers from ati and nvidia will likely break; but if yours is simply better, then it will become the new "X.Org" in time.

Again, you have the knowledge to fork this and make it a success. Please just DO IT!
From: libv
2010-09-20 11:30 am (UTC)

Fork or scrap is not relevant.

As a response to everyone calling for forking or for getting rid of X.org completely.

This is no solution to the problems I described, and it is in fact rather irrelevant.

Graphics drivers live all over the place, not just in the X.org side.

A more reasonable course of action would be to create repositories with integrated graphics driver stacks, pulling the dispersed code in. But this too is doomed to fail, for a few reasons:
First off, this is a Sisyphean task.
Next to that, people will rather deliberately add new features and make changes in ways that make this harder. We saw this in the radeonhd versus radeon struggle: due to good design with a lot of separation, code from radeonhd was a lot easier to bring over to radeon than the other way around.
Thirdly, if this sort of "fork" happens, the immediate response will be to pull code into the xserver, and then break APIs all the time, in order to make the task of the "forkers" huge.
Lastly, look at who is on which side of this rift. One can pretty much state that Intel Portland and Red Hat are driving this split stack. The first is actually tasked, to some extent, with writing the intel graphics driver stack; the second sees the ati graphics driver stack as a perfect marketing instrument for its server business. This leaves us with the nouveau guys, and the only person who is able to develop there freely, full time, also works at Red Hat.

And besides, I dislike forks. I've noticed that they tend to waste a lot of time, that the wrong people often get attracted to join the shouting, and that some of them tend to claim important positions for themselves solely based on that shouting. If a fork does succeed, then after a while one finds that good ideas and solid directions are less important than political affiliation based on the earlier shouting matches.
From: (Anonymous)
2010-09-21 06:54 pm (UTC)

The State of Linux Graphics 2

So what has changed since the http://sites.google.com/site/jonsmirl/graphics.html post? Could you elaborate more on the current state of Linux graphics?

This is how I understand everything so far:

It seems that the whole idea was to get rid of the DDX in Xorg, and later to get rid of Xorg or parts of it (Wayland, etc...). How does (non-)merging of drivers help to achieve that?

Currently, 2D is accelerated separately from 3D (DDX drivers in X and their counterparts in MESA or Gallium). Why not follow Nvidia and support both 2D and 3D acceleration in one stack? Anyway, it could easily be implemented on top of the 3D rendering model (compare the current case of Direct2D over Direct3D).

2DApp -> CompositingManager -> X -> DDX -> libdrm -> DRM(kernel) //2D
3DApp -> CompositingManager -> X -> MESA/Gallium -> libdrm -> DRM(kernel) //3D

Can't those 2D and 3D counterparts be merged? I've always struggled to comprehend why there is this seemingly artificial separation between 2D and 3D rendering paths.

Thank you.
From: (Anonymous)
2010-09-25 02:29 pm (UTC)

A project fork would be better

You'll get support from the Xorg development community. They supported Keith back when he forked XFree86.
http://www.xfree86.org/pipermail/forum/2003-March/000128.html

It's time for the leadership to change anyway. People like fresh talent with clear direction.

The fact is that Intel's developers created an intrusive and huge instability. They could have created a new driver, i950, and left i915 alone. The TTM version was stable. Ubuntu 8.10 was the last great representation of the graphics stack on Intel video. Only an amateur programmer throws away his fallback. You've got to have a backup plan.
The developers didn't even half test what they released. If all the features weren't there yet, why did they release? I'm pretty sure Tungsten Graphics didn't release their drivers until they had quality-tested them. Even Intel's Windows division has to test their drivers before Microsoft will certify them. Half-assed.

Distributions aren't going to maintain two versions of the graphics stack. Red Hat may, given that their customers are high-rollers; Suse, Ubuntu and Mandriva I don't see supporting this concept.

At any rate, Keith should step down from his leadership role. He's senile and making bad management decisions. Xorg seriously needs new management. I thought there was ATI and NVIDIA representation sitting on that board. Alan Cox works at Intel now; maybe he could take over. Companies sitting on free software and open source boards represent a conflict of interest. I'd even consider Bob Young or Ian Murdock. Those guys got it right.

good luck
sorry I have to post anonymously
From: (Anonymous)
2010-09-29 04:53 am (UTC)

The Linux community needs to let X.Org die.