
FOSDEM14 lima driver talk.

The FOSDEM organizers have captured more than 16h of video for 22 DevRooms and they are still working hard on getting all of these videos cut and transcoded and put on their site. This is a mammoth task, and some talks in the graphics devroom are still up in the air because of this.

Luckily, some people from the sunxi community took the livestream of sunxi related talks, and produced some quick dumps of it. I have now copied the Mali driver talk over to my fd.o account so that the sunxi server doesn't become too overloaded. Slides are also available.

This talk covers where I am with my mesa driver, why I am taking the route I am taking, and the importance of SoC communities and how the lack thereof hampers driver development. It also talks about the company ARM and its stance towards the open driver.


FOSDEM, and the SuSE bus.

When I was still at SuSE, Localhorst would rent a Ford S-Max, stuff it to the brim with openSuSE kit and swag, and drive to FOSDEM. I was usually tolerated on board as well, with my sportsbag full of DevRoom kit, provided I sang along to the radio and didn't mention Burger King. Everyone else at SuSE either made their own arrangements or was stuck on a flight from Nuremberg to Brussels (which was quickly dubbed "The SuSE-Bomber").

After the massive and rather counterproductive layoffs that followed FOSDEM 2009, SuSE tended to organize a bus for its own employees. And from what I heard, it was a pretty good solution. Imagine a load of happy geeks, from a place in the world with the best beers, stuck on a bus. It made the whole event seem like a school trip, but one where some beers were actually allowed. And, since there usually was tons of extra space on the bus, a load of ex-SuSE guys got to hitch a ride as well. The result was that a lot of SuSE employees visited FOSDEM, got to catch up on things with some ex-SuSE guys, and generally started doing what conferences are for from the second the bus left Nuremberg. Since buses are cheap, this really was a perfect solution, and everyone was happy.

I never took the bus. For FOSDEM, I want to arrive in Brussels around suppertime on Friday, and leave Monday around midday (when the alcohol of the previous night has worn off a bit). The bus tended to arrive around suppertime as well, but would leave again around 19:00 on Sunday. Also, I tend to run a devroom and have at least one talk, and I need those 5 hours of peace on the train to prepare my talk. But I heard good things about the bus, and that it all was great fun and a bit of a community event before (and after) the big community event.

This year, however, things were different. For a long time, apparently, nobody from the openSuSE team could really be bothered with FOSDEM. This I find truly amazing, and a really bad sign with respect to where SuSE and openSuSE are apparently heading. From what I have heard, there was always a bit of a plan to get a bus, but it was unclear where the budget would come from, and no-one took any action. I also heard that two weeks before the event, SuSE employees were asked whether they wanted to go to FOSDEM. Now if you do this 2 weeks before the event, with people who often have a wife and kids these days, most will already have made other plans. Then, if you also state that those people who might be interested also need to have some travel budget left over, and need to get approval in a few days' time, you of course only get a handful of people who end up going to the biggest open source event on the planet. I heard the number 8.

Eight people from SuSE went to FOSDEM. An absolute disgrace for what once was the leading European Linux distribution.

Here is an idea: why not make the bus a community service from the start? Why doesn't openSuSE sponsor a bus, one which starts in Nuremberg, perhaps stops at Frankfurt-Flughafen (so some people can grab a smoke and empty their beer-filled bladders), and then continues on to Brussels? Give the SuSE employees 4 weeks advance notice to get their seats reserved (which gives them an incentive to think about FOSDEM early on), and then open seat reservation to anyone who wants to visit FOSDEM and lives near Nuremberg or Frankfurt. You can even hand the community members a bag of SuSE swag and a Franconian beer.

SuSE would not only do something good for their own employees and make it easier for them to visit FOSDEM; they would actually sponsor FOSDEM and help boost their openSuSE community.

I am actually surprised that this has to be said, and that this idea hasn't been spawned from within the openSuSE team itself. But here you go. Now make it happen for next year.

FOSDEM, the best conference... In the world.

Now that my body is well on its road to recovery, it's time to talk about the great success that FOSDEM was, once again.

We had a really nice devroom and pretty good crowds, but the most amazing thing was the recordings and the livestream. The FOSDEM organizers really outdid themselves there.

After the initial announcement of the graphics devroom went out, Martin Peres contacted me and we talked about getting proper recordings from the DevRoom, and briefly looked into what this would end up costing if we were to buy the equipment for it. Our words were barely cold when the FOSDEM organizers announced what can only be seen as absolutely insane: they would try to provide for full recording of everything.

In the end, FOSDEM provided recording in 22 rooms. They had to get 22 mixing desks, 22 sets of microphones, 22 cameras, 44 laptops... This was apparently mostly rented, and apparently, surprise, surprise, there aren't many rental companies which have such an insane amount of kit available.

Apart from some minor issues (like a broken FireWire cable in the Wine devroom), things worked out amazingly. Only the FOSDEM guys could pull something this insane off. We had all our talks in the Graphics DevRoom streamed out live, with no issues at all.

I would like to thank all the speakers in the graphics devroom, but I particularly would like to thank Martin Peres, who took full control of and responsibility for the video equipment. Then there are Marcus Meissner and Thierry Reding, who willingly sat in the back of the devroom and handled the recordings themselves, directing the streams, for only meagre rations of water and cookies. Without people stepping up and doing their bit, a devroom like this would not be possible. And the same goes for the whole of FOSDEM.

At the end of the final talk, after I had talked about sunxi_kms, I tried to thank the FOSDEM organizers and get the remaining audience to clap for them. But I mostly stood there and babbled, at a loss for words, because what the FOSDEM organizers had achieved with this insane goal is simply amazing. And thinking about it now, I still get quite emotional...

How on earth did they manage to pull this off, on top of organizing the rest of FOSDEM, a FOSDEM which caters for something like 8000 people as it is... It's just not possible!

It's not as if these guys get paid for what they are doing; FOSDEM is a low-budget organization, purely based on volunteers. The absolute core of the organization is just a handful of people who have very busy jobs. And yet, they have succeeded where any other organization would have failed. There's no politics or powerplay, there is no money or commerce. There is just the absolute drive to make FOSDEM the best event on the planet, by making small changes every year...

This was my 7th DevRoom this year, and if I can help it there will be another one next year. I am really proud that I am allowed to do my, comparatively little, part as well. Every Sunday evening after FOSDEM, after we sit down in the restaurant with the remainder of the graphics devroom, I am physically broken, but I am also one of the happiest people on the planet...

Each year, no matter what happened in the year before, no matter what nasty open source politics or corporate nonsense took place over that year... Each year, the FOSDEM organizers prove that something amazing can happen if only people do their bit, if only people work towards the same selfless goal. Each year, FOSDEM reminds me of why I do what I do, and why I need to keep on doing it.

Thank you.

Graphics DevRoom at FOSDEM2014.

Yes, there is going to be another exciting DevRoom about graphics on the upcoming FOSDEM.

It's not called the X.org DevRoom this time round, but a hopefully more general Graphics DevRoom. As was the case with the DevRooms before, anything related to graphics drivers and windowing systems goes. While the new name should make it clearer that this DevRoom is about more than just X, it doesn't fully cover the load either, as the room explicitly includes input drivers as well.

Some people have already started wondering why I haven't been whining at them before. Well, my trusted system of blackmailing people into holding talks early on failed this year. The FOSDEM deadline was too early and XDC was too late, so I decided to take a chance, and request a devroom again, in the hope that enough people will make it over to the fantastic madness that is FOSDEM.

After endless begging and grovelling, the FOSDEM organizers got so fed up that they gave us two full days again. This means that we will be able to group things better, and avoid a scheduling clash like the one with the ARM talks last year (where ARM system guys were talking in one room exactly when ARM graphics guys were talking in another). All of this doesn't mean that first come, first served doesn't apply, and if you do not want to hold a talk with a hangover in an empty DevRoom, you'd better move quickly :)

The FOSDEM organizers have a system called pentabarf. This is where everything is tracked and the schedules are created, and, almost magically, at the other end, all sorts of interesting things fall out, like the unbelievably busy but clear website that you see every year. This year though, speakers are expected to manage their own details, with the DevRoom organizers overseeing this, so we will no longer use the trusted wiki pages we used before. While I am not 100% certain yet, I think it is best that people who have spoken at the DevRoom in the past few years (most of whom I will be poking personally anyway) talk to me first before working with pentabarf, as otherwise there will be duplicate accounts, which will mean more overhead for everyone. More on that in the actual call for speakers email which will hit the relevant mailing lists soon.

FOSDEM futures for ARM

Connor Abbott and I have both had Chromebooks for a long, long time. Connor bought his when it first came out, which was even before the last FOSDEM. I bought mine at a time when I thought that Samsung was never going to sell it in Germany, and the .uk version arrived on my doorstep 3 days before the announcement for Europe went out. These things have been burning great big holes in our souls ever since, as I had stated that we would first get the older Mali models supported properly with our Lima driver, and deliver a solid graphics driver before we lose ourselves again in the next big thing. So while both of us have had this hardware for quite a while, we really couldn't touch these nice toys with an interesting GPU at all.

Now, naturally, this sort of thing is a bit tough to impose on teenagers, as they are hormonally programmed to break rules. So when Connor got bored during the summer (as teenagers do), he of course went and broke the rules. He did the unspeakable, grabbed ARM's standalone shader compiler, and started REing the Mali Midgard ISA. When his father is at FOSDEM this year, the two of us will have a bit of 'A Talk' about Connor's wild behaviour, and Connor will be punished. Probably by being forced to finish the beers he ordered :)

Luckily, adults are much better at obeying the rules. Much, much better.

Adults, for instance, would never go off and write a command stream tracer for this out of bounds future RE project. They would never ever dare to replay captured command streams on the chromebook. And they definitely would not spend days sifting through a binary to expose the shader compiler of the Mali Midgard. Such a thing would show weakness in character and would just undermine authority, and I would never stoop so low.

If I had done such an awful thing, then I would definitely not be talking about how much harder capture and replay were, err, would be, on this Mali, and how the lessons learned on the Mali Utgard will be really useful... In future? I would also not be mentioning how nice it would be to work on a proper Linux from the get-go. And I would never be boasting about how much faster Connor and I will be at turning our RE work on the T6xx into a useful driver.

It looks like Connor and I will have some very interesting things to own up to at FOSDEM :)

Intel & Mir: The point of view of a graphics-driver-developing bystander.

Only a few days ago did I write about how open source software is not about "code or design or doing The Right Thing". "Open source software is about power, politics, corporate affiliation, and loads and loads of noise." I would like to thank Intel for so succinctly underlining that now with their current action.

Before I go any further: this seems not to be Chris Wilson's decision or his preferred solution. Chris wrote the patch which he was told, by an unnamed party or parties at Intel, to back out. Also, I personally do not condone the actions taken by Canonical, but, as a graphics driver developer, I find Intel's actions far worse. I rather doubt that Intel thought this one through properly.

What's the problem?

As a graphics driver developer, I fail to see the big problem with Mir.

So what if Canonical has decided to reinvent Wayland? Apart from the weird contribution agreement (which will only limit contributions), Mir is fully free software, isn't it? Who are they hurting apart from their own resources and their own users? It's not that I am applauding Canonical for their decision, but I really don't see the massive problem here.

Why is Canonical not allowed to do this?

Reinvention galore

I personally really hate things being reinvented all the time. It is the disease that plagues open source software, and it is what makes sure that we don't have a growing Linux market.

How often have we heard that something is outdated and broken and doesn't fit modern demands anymore? We are then invariably being told that something new is being built afresh, from the lessons learned of what was done "wrong" before, and that in a few months time, everything is going to be fantastic. Sadly, such timeframes never pan out, and while the known "errors" are fixed, everything else gets broken, which then has to be reinvented or ported as well (or which simply remains broken). And then several years down the line, things are still not perfect, and then someone else (or sometimes even the same person) goes off and implements the next great thing from scratch, again.

We never have something that just works; we just go from broken state to broken state. And nobody learns from this, nobody apparently ever states: "Hang on, isn't that pretty much the same story we heard 3 years ago?"

To me, as a stupid, shortsighted driver developer, Wayland seems like X reinvented: a server/client display architecture with the new lessons learned implemented, but with everything else broken. We've been waiting for all those little niggles to get worked out ever since 2009, and at one point networking was added to Wayland, making it even more of an X replacement.

So then Mir was announced... And suddenly the world was ablaze. Huge flamewars broke out everywhere and effigies of Mark Shuttleworth were getting burned in the forums. I found the Mir move quite ironic, at first, and thought that the outrage was quite out of proportion, but then I read this article. It is a who's who of reinventers, complaining about Canonical reinventing Wayland. I was appalled.

What exactly gives these people the sole monopoly on reinvention?

What is Intel afraid of?

How could Mir possibly threaten Wayland?

Intel is a pretty big company, and it probably has the largest contingent of open source developers devoted to graphics. It employs some of the brightest and most influential people in the business. On top of that, Wayland was there first, has had more time to mature, has had more applications and toolkits ported, and has a much larger mindshare. Most people would think that Wayland's future is pretty secure.

So what could possibly be so much better about Mir that makes it such a big threat to Wayland that Intel's graphics driver developers have to be told not to support XMir at all? Honestly, in the above constellation, how vastly superior technically does Mir have to be to justify such an action? If Intel really feels that it has to react like this, well, then it might as well just throw in the towel and go Mir immediately, as Wayland clearly must be completely useless.

What a way to expose your own insecurity.

Software Fascism

Intel finds it necessary to play games with their graphics driver, instead of having Wayland battle it out directly with Mir.

This kind of powerplay is quite insidious, and far more damaging than most people would expect. It completely skews the ability of software to compete on fair and equal grounds, and it hurts us all, as it is mostly applied by those who are not able to compete properly, or those who feel as if they shouldn't need to bother to compete properly. It tends to favour the least technically advanced and the least morally acceptable.

The best example which I have come across so far is the RadeonHD versus Radeon battle. RadeonHD beat ATI by actually providing a solid open source driver in September 2007, and we at SuSE had a stated goal of being able to ship a solid open source driver on enterprise desktop rollouts. 3 months later, Radeon came around with support for the same hardware. It was technically inferior, and "borrowed" much of the hard work of RadeonHD, with some noise added on top. What was worse was how the so-called community used software fascism to artificially boost the Radeon driver. This started out with the refusal of a mailing list at the usual place, hit a low point with RadeonHD being dropped from the build script for the xserver, and sank to whole new levels when, 2 years after the obvious death of the RadeonHD driver, the RadeonHD repository got vandalized (and the whistleblower got tarred and feathered while the perpetrators were commended for their "quick" confession).

So who won?

Well, it definitely was not RadeonHD, as that died in early 2009 with Novell laying off a large portion of SuSE developers in Nuremberg. As luck had it, at the same time, AMD experienced serious financial difficulties and did not continue the RadeonHD project with SuSE. But although Radeon did survive, it did not win either. ATI won, AMD (which wanted a proper open source driver, whereas ATI seriously didn't) lost, and we all lost with it. Fglrx still rules supreme today, but now it does not get as much flak as it did before, as the figleaf driver provides some sort of an alternative for those who are unhappy with fglrx. But it goes beyond that: the radeon driver consistently applies or applied the solutions ATI fglrx developers recommended, instead of the empirical solutions we at RadeonHD usually chose, and the radeon driver is not as good as it could be.

Software fascism goes further than just badly skewing competition, and it always is a negative influence on software. Who knows what other bad decisions will make their way into the Intel driver now?

The responsibility of a graphics driver

The main responsibility of a graphics driver is to support the users of your graphics hardware. If you are actually employed by the vendor, your users are those who bought your hardware and who will buy your hardware again if they are satisfied. This is the business case for providing optimal support for your hardware for a given operating system or infrastructure. On top of that, in open source software, the users are more than just the customers, they are also the testers.

Canonical's plan and marketing seem to have worked out quite well over the years, to the extent that half the planet thinks that Linux equals Ubuntu, and Ubuntu probably has the larger part of the Linux desktop market. This means that Ubuntu users are a sizable portion of Intel's userbase, and as a hardware vendor (and only secondarily a maker of display servers), Intel simply cannot afford to refuse to support or even alienate these users. Canonical has decided that Mir will be the primary display server on future Ubuntu releases, and this in turn means that Intel has an obligation to support Mir.

The XMir patch to the Intel graphics driver seems rather minimal and not very invasive. There also seems, or seemed, as the case may be now, to be direct communication between Intel's graphics driver developers and Ubuntu's developers. As Mir will ship on the next Ubuntu versions, there will be a large number of users who will test the XMir code in the Intel graphics driver. There is no chance that the XMir code will bitrot for the foreseeable future, and Intel's own investment in this code will be minimal.

The real art of writing good drivers is to provide for quick and painless debugging. Graphics hardware is complex, the drivers for this hardware are also complex, and neither is ever perfect, so one has to work hard to maximize the chance for bug resolution. This means easy communication with users, and giving the user an easy route to test changes so that proper feedback can be provided quickly. If you fail to make it easy enough for users, you will simply not get your bugs fixed, and the higher the resolution threshold becomes, the worse your driver will become.

By not carrying this patch, Intel forces Ubuntu users to report bugs only to Ubuntu, which then means that only a few bug reports will filter through to the actual driver developers. At the same time, Ubuntu users cannot simply test upstream code which contains extra debugging or potential fixes. Even worse, if this madness continues, you can imagine Intel stating to its customers that it refuses to fix bugs which only appear under Mir, even though there is a very, very high chance of these bugs being real driver bugs which are just exposed by Mir.

The reality of the matter is, Intel is hurting its own graphics driver more than it could potentially hurt Mir or Canonical.

The androidization of Linux

The biggest installed base of Linux is Android, and it is bigger by many orders of magnitude. Sadly, the Linux which we call Android is little more than the Linux kernel and some new-ish (mostly) open source infrastructure on top. While this, to some extent, is quite the boon for open source software, it also holds a major threat. If we are not careful, we get fully locked hardware. We are only sporadically able to enforce the GPL on the kernel, and we have no chance at all to get open source userspace drivers. This limits the usefulness of the now ubiquitous Linux hardware out there, and with the way the desktop and mobile are evolving, this will soon limit the availability of hardware for which more-or-less complete open source is available. On top of that, all those electronics companies that are churning out hardware at an amazing rate are either unable to see the advantages of actively contributing to open source, or are having a very hard time learning how to do so.

This is exactly why I created the Lima driver, and why some other brave souls created their respective GPU reverse engineering projects. We recognized this danger, and are sacrificing a large portion of our lives trying to prevent catastrophe. And even though things are not going as fast or as smoothly as we expected, we have come a very long way.

Things took a wrong turn a while back though. In an effort to create a stopgap solution, Jolla developer Munk created libhybris, a wrapper library which allows the usage of Android drivers on top of glibc, and thus on a normal Linux installation. I find this hack pretty dangerous, as it makes all vendors complacent, cements the Android way of working, and makes binary drivers the default. Our biggest open source hopes for mobile (Sailfish, Firefox OS and Ubuntu Phone's Mir) readily embraced this way of working.

I have, so far, not seen anything from Jolla, the Mozilla Foundation or Canonical along the lines of active support for the route we have chosen with open ARM GPU drivers, and we've been at it for quite some time now. Those companies are more dependent on open source software than your average Android vendor, and know how to do things the open source way, but they have fully embraced the binary drivers built for Android only, with no signs of them wanting to change this.

The only reason why I favour Wayland over Mir is that Canonical immediately chose the libhybris route with Mir. Wayland currently has patches for libhybris, so soon Wayland will sadly have sunk to Mir's level as well, from a graphics driver point of view.

Intel employs a small army for their open source software, and specifically for their open source graphics driver. But Intel also has other teams working on graphics drivers, and while I am not certain, I do think that Intel ships binary-only drivers on their Android devices.

Canonical is happy with using libhybris, but currently would prefer to use a proper graphics driver for their future products. This preference has now been significantly reduced. Intel has now potentially driven one of the last big users of open source graphics drivers to exclusively using Android binaries as well, seriously reducing the relevance of its own OSTC driver developer team in the process.

The low road

Up until now, Intel had the moral high ground in the Wayland versus Mir situation. With the simple decision to revert the XMir patch, this situation has now been reversed.

Well done.

The lima mesa driver runs es2gears.


Progress is slow but steady on the lima mesa driver, mostly because I am not giving lima as much time as I should. I now have working attributes, uniforms and vertex buffers, and even some state is being set correctly. Enough to run es2gears. Here is the video on youtube; there is an older capture with a rotating smoothed cube as well.

This lima mesa driver uses the old-school (but, contrary to popular belief, definitely not deprecated) mesa infrastructure, which, with surprisingly little work, allows me to run the Mali binary driver's compiler, and allows me to build the lima driver externally. Using the binary compiler would not be possible from gallium, and with some improvements to the (Intel-developed, and thus Intel-focused) mesa GLSL compiler, it seems that we might have a potent compiler for the Mali ISAs as well. This way, the task of bringing up mesa on the Mali is nicely split, and we will be able to debug the command stream work separately from the shader compiler. I believe that we did lose the ability to run GLES1 programs until Connor's open-gpu-tools is hooked up to the mesa GLSL compiler. This is just a small price to pay, in comparison to the size of the hurdle to take when doing everything in one go.

We are running es2gears at 130fps on the A10, and 310fps on the Exynos, for a 300x300 render. Resizing is currently broken, or rather not implemented. I will need to split up the PLBU command stream to be able to do that properly under DRI2 (where resizes happen behind the mesa driver's back). The way DRI2, mesa and the sunxifb X driver now work together also means that I have to wait for jobs to finish (and then usleep for good measure), so there is a lot of potential for speed-up as well. I am not sure yet how things will have to be hacked to keep X from copying the region before rendering is absolutely done; I guess that we will have to hack something into UMP and the sunxifb driver. But a solution will be found, and we should see around a 50% increase in framerate from that, and even much, much more if we manage to use overlays. Since we are in control of all the code now, we should theoretically be able to squeeze every last bit of performance out of the GPU, a luxury not offered to the users of the binary X11 driver.

I will now continue implementing textures, so that I can run all the limare EGL tests. After that I will clean up the code and push it out. This will include patches to common mesa versions (and mesa packages) to allow building lima against them. Resizing and job interleaving will have to wait until after that, so keep your eyes peeled on this space :)

10 years in.

A few weeks ago, it was the 10th anniversary of my first contribution to X (a small display fix to the via driver). I cannot state that this anniversary was a happy event.

Looking back, I cannot believe that I once thought that in open source software, code, design and doing The Right Thing, both technically and morally, were paramount. Open source software is about power, politics, corporate affiliation, and loads and loads of noise. Noise and misinformation always win over code, no matter how good this code is or how hard you work at it. I have had to learn this several times over.

This is especially true in the case of forks. Not the git clone variation, but the loud, aggressive and very detrimental community kind. While often technical reasons are claimed to be the cause, this is never ever the case. It always is about politics and power. And code always suffers as a result, and this suffering is never a short term thing, especially on big forks. See, a fork always means a big stink, a lot of noise, and a power vacuum, and this attracts a certain kind of individual. These personalities form the basis of the new community together with those who instigated or accelerated the fork. Good code simply has no chance in such an environment. Bad code and bad design tends to hang around for a long long time, and tends to influence (read, limit) the thinking of any new blood that turns up. On top of that, the bad mentality also tends to linger for many many years. Politics and noise continues to take precedence over code and design, for the foreseeable future.

The only thing to do in case of a fork, is to go play somewhere else, somewhere where a major fork hasn't taken place. If you don't, or if you do not go far enough, you will see your work impacted, especially if you are not willing to let yourself be limited by the existing mentality and powerbalance.

Now if only i wasn't such a stubborn bastard ;)

Old and new limare code, and management overhead...

I just pushed updated limare code and a fix to ioquake3.

In almost 160 patches, loads of things change:

  • clean FOSDEM code supporting Q3A timedemo on a limare ioquake3.

  • support for the r3p2 kernel and binary userspace as found on the Odroid-X series.

  • multiple PP support, allowing for the full power of the Mali-400MP4 to be used.

  • fully threaded job handling, so new frames can be set up while the first is getting rendered.

  • multiple textures, in RGB888, RGBA8888 and RGB565, with mipmapping.

  • multiple programs.

  • attribute and element buffer support.

  • loads of GL state is now also handled limare-style.

  • memory-access-optimized scan pattern (Hilbert) for the PP (fragment shader).

  • direct MBS (Mali Binary Shader) loading for pre-compiled shaders (and OGT shaders!!!).

  • support for UMP (ARM's in-kernel external memory handler).

  • Properly centered companion cube (now it is finally spinning in place :))

  • X11 egl support for tests.

  • ...

Some of this code was already published to allow the immediate use of the OGT enabled ioquake3. But that branch is now going to be removed, as the new code replaces it fully.

As for performance, this is no better or worse than the FOSDEM code. 47fps on timedemo on the Allwinner A10 at 1024x600. But now on the Exynos 4, there are some new numbers... With the CPU clocked to 2GHz and the Mali clocked to 800MHz (!!!) we hit 145fps at 720p and 127fps at 1080p. But more on that a bit further in this post.

Upcoming: Userspace memory management.

Shortly after FOSDEM, I blogged about the 2% performance advantage over the binary driver when running Q3A.

As you might remember, we are using ARM's kernel driver, and despite all the pain that this is causing us due to shifting IOCTL numbers (whoever at ARM decided that IOCTL numbers should be defined as enums should be laid off immediately), I still think that this is a useful strategy. It allows us to immediately throw in the binary driver, immediately compare Lima to the binary, and either help hard reverse engineering, or just make performance comparisons. Rewriting this kernel driver, or turning it into a fully fledged DRM driver, is currently more than just a waste of time; it is actually counterproductive right now.
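
To make that ioctl pain concrete, here is a hypothetical sketch (made-up names, not ARM's actual header) of what happens when ioctl numbers are enum values: insert one new command in the middle and every later command silently shifts, so a userspace built against one header version talks nonsense to a kernel built against another.

    /* Hypothetical example, not ARM's actual header: ioctl commands as
     * consecutive enum values. */
    enum mali_ioctl_nr {
        MALI_IOC_GET_API_VERSION = 0x01,
        MALI_IOC_MEM_ALLOC,          /* 0x02 */
        MALI_IOC_GP_JOB_START,       /* 0x03 */
        MALI_IOC_PP_JOB_START,       /* 0x04 */
    };

    /* A later driver release inserts a new command in the middle, and
     * everything after it shifts by one, without any of the names that
     * userspace uses ever changing: */
    enum mali_ioctl_nr_r3p2 {
        MALI_IOC_R3P2_GET_API_VERSION = 0x01,
        MALI_IOC_R3P2_MEM_BIND,      /* 0x02: newly inserted */
        MALI_IOC_R3P2_MEM_ALLOC,     /* 0x03: was 0x02 before */
        MALI_IOC_R3P2_GP_JOB_START,  /* 0x04: was 0x03 before */
        MALI_IOC_R3P2_PP_JOB_START,  /* 0x05: was 0x04 before */
    };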

But now, while bringing up a basic mesa driver, it became clear that I needed to work on some form of memory management. Usually, you have the DRM driver handling all of that (even for small allocations, I think - not that I have checked). We do not have a DRM driver, and I do not intend to write one in the very near future either, and all I have is the big block mapping that the mali kernel driver offers (which is not bad in itself).

So on the train on the way back from LinuxTag this year, I wrote up a small binary allocator to divide up the 2GB of address space that the Mali MMU gives us. On top of that, I now have 2 types of memory, sequential and persistent (next to UMP and external, for mapping the destination buffer into Mali memory), and limare can now allocate and map blocks of either at will.

The sequential memory is meant for per-frame data, holding things like draws and varyings and such, stuff that gets thrown away after the frame has been rendered. This simply tracks the amount of memory used, adds the newly requested memory at the end, and returns an address and a pointer. No tracking whatsoever. Very lightweight.

The persistent memory is the standard linked list type, with the overhead that that incurs. But this is ok, as this memory is meant for shaders, textures and attribute and element buffers. You do not create these _every_ draw, and you tend to reuse them, so it's acceptable if their management is a bit less optimized.
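
To make the sequential side a bit more tangible, here is a minimal sketch of such a per-frame bump allocator (hypothetical names and layout, not the actual limare code): allocating is just bumping an aligned offset, and "freeing" is resetting that offset once the frame has been rendered.

    #include <stddef.h>
    #include <stdint.h>

    /* One block of Mali address space, mapped once, and handed out
     * piecemeal for per-frame data (draws, varyings, ...). */
    struct limare_frame_mem {
        void     *cpu_base;   /* CPU-visible mapping of the block */
        uint32_t  mali_base;  /* the same block as the Mali MMU sees it */
        size_t    size;       /* total size of the block */
        size_t    used;       /* bump pointer: bytes handed out so far */
    };

    /* Returns a CPU pointer and fills in the matching Mali address, or
     * NULL when the frame block is exhausted. */
    static void *
    frame_mem_alloc(struct limare_frame_mem *mem, size_t size,
                    uint32_t *mali_addr)
    {
        size_t offset = (mem->used + 0x3F) & ~(size_t)0x3F; /* 64 byte align */

        if (offset + size > mem->size)
            return NULL;

        mem->used = offset + size;
        *mali_addr = mem->mali_base + offset;
        return (uint8_t *)mem->cpu_base + offset;
    }

    /* Called once the GP/PP jobs for this frame have finished; everything
     * handed out above is thrown away in one go. */
    static void
    frame_mem_reset(struct limare_frame_mem *mem)
    {
        mem->used = 0;
    }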

Normally, more management makes things worse, but this memory tracking allowed me to sanitize away some frame-specific state tracking. Suddenly, Q3A at 720p, which originally ran at 145fps on the Exynos, ran at 176fps. A full 21% faster. Quite some difference.

I now have a board with a Samsung Exynos 4412 Prime. This device has the quad A9s clocked at 1.7GHz, 2GB of LP-DDR2 memory at 880MHz, and a quad PP Mali-400MP4 at 440MHz. This is quite the powerhouse compared to the 1GHz single A8 and single PP Mali-400 at 320MHz. On top of that, this Exynos chip I got actually clocks the A9s to 2GHz and the Mali to a whopping 800MHz (81% faster than the base clock). Simply insane.

The trouble with the Exynos device, though, is that there are only X11 binaries. This involves a copy of the rendered buffer to the framebuffer, which totally kills performance, so I cannot properly compare these X11 binaries with my limare code. So I took my new memory management code to the A10 again, and at 1024x600 it ran the timedemo at 49.5fps. That is about a 6% margin over the binary framebuffer driver, or triple my 2% lead at FOSDEM. Not too bad for increased management, right?

Anyway, with the overclocking headroom of the Exynos, it was time for a proper round of benchmarking with limare on the Exynos.

Benchmark, with a pretty picture!

Limare Q3A benchmark results on exynos4412

The above picture, which I quickly threw together manually, maps it out nicely.

Remember, this is an Exynos 4412 Prime, with 4 A9s clocked from 1.7-2.0GHz, 2GB of LP-DDR2 at 880MHz, and a Mali-400MP4 which clocks from 440MHz to an insane 800MHz. The test is the Quake 3 Arena timedemo, running on top of limare. Quake 3 Arena is single-threaded, so apart from the limare job handling, the other 3 A9 cores simply sit idle. It's sadly the only good test I have; if someone wants to finish the work to port Doom 3 to GLES, I am sure that many people will really appreciate it.

At 720p, we are fully CPU limited. At some points in the timedemo (as not all scenes put the same load on the CPU and/or GPU), the difference in Mali clock makes us slightly faster if the CPU can keep up, but this levels out slightly above 533MHz. Everything else simply scales linearly with the CPU clock: every change in CPU clock is an 80% change in framerate. We end up hitting 176.4fps.

At 1080p, it is a different story. 1080p is 2.25 times the screen real estate of 720p (if that number rings a bell, 2.25MB equals two banks of Tseng ET6x00 MDRAM :p), so 2.25 times the number of pixels need to be pushed out. Here the CPU is clearly not the limiting factor. Scaling linearly from the original 91fps at 440MHz is a bit pointless, as the Q3A benchmark is not always stressing CPU and GPU equally over the whole run. I've drawn the continuation of the 440-533MHz increase, and that would lead to 150fps, but instead we run into 135.1fps. I think that we might be stressing the memory subsystem too much. At 135fps, we are pushing over 1GBps out to the framebuffer, this while the display is refreshing at 60fps, so reading in half a gigabyte. And all of this before doing a single texture lookup (of which we have loads).
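
For those who want to check that claim, here is the back-of-the-envelope version, assuming a 32bpp framebuffer format (an assumption on top of the numbers above, the format is not spelled out there):

    #include <stdio.h>

    int main(void)
    {
        /* bytes per 1080p frame, at 4 bytes per pixel: ~8.3MB */
        double frame_bytes = 1920.0 * 1080.0 * 4.0;

        /* ~1.12 GB/s written out to the framebuffer at 135.1fps */
        printf("written: %.2f GB/s\n", frame_bytes * 135.1 / 1e9);
        /* ~0.50 GB/s read back out for the 60Hz display refresh */
        printf("scanout: %.2f GB/s\n", frame_bytes * 60.0 / 1e9);

        return 0;
    }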

It is interesting to see the CPU become measurably relevant towards 800MHz. There must be a few frames where the GPU load is such that the faster CPU makes a distinguishable difference. Maybe there is more going on than just memory overload... Maybe in future I will get bored enough to properly implement the Mali profiling support of the kernel, so that we can get some actual GP and PP usage information, and not just the time we spent waiting for the kernel job to return.

ARM Management and the Lima driver

I have recently learned, from a very reliable source, that ARM management seriously dislikes the Lima driver project.

To put it nicely, they see no advantage in an open source driver for the Mali, and believe that the Lima driver is already revealing way too much of the internals of the Mali hardware. Plus, their stance is that if they really wanted an open source driver, they could simply open up their own codebase, and be done.


We can debate endlessly about not seeing an advantage to an open source driver for the Mali. In the end, ARM's direct customers will decide on that one. I believe that there is already 'a slight bit of' traction for the general concept of open source software; I actually think that a large part of ARM's high-margin products depend on that concept right now, and this situation is not going to get any better with ARMv8. Silicon vendors and device makers are also becoming more and more aware of the pain of having to deal with badly integrated code and binary blobs. As Lima becomes more complete, ARM's customers will more and more demand support for the Lima driver from ARM, and ARM gets to repeat that mantra: "We simply do not see the advantage"...

About revealing the internals of the Mali, why would this be an issue? Or, let me rephrase that, what is ARM afraid of?

If they are afraid of IP issues, then the damage was done the second the Mali was poured into silicon and sold. Then the simple fact that ARM is that apprehensive should get IP trolls' mouths watering. Hey IP Trolls! ARM management believes that there are IP issues with the Mali! Here is the rainbow! Start searching for your pot of gold now!

Maybe they are afraid that what is being revealed by the Lima driver is going to help the competition. If that is the case, then it shows that ARM today has very little confidence in the strength of their Mali product or in their own market position. And even if Nvidia or Qualcomm could learn something today, they will only be able to make use of that two years or even further down the line. How exactly is that going to hurt the Mali in the market it is in, where 2 years is an eternity?

If ARM really believes in their Mali product, both in the Mali's competitiveness and in the originality of its implementation, then they have no tangible reason to be afraid of revealing anything about its internals.

Then there is the view that ARM could just open source their own driver. Perhaps they could; it really could be that they have very strict agreements with their partners, and that ARM is free to do what they want with the current Mali codebases. I personally think it is rather unlikely that everything is as watertight as ARM management imagines. And even then, given that they are afraid of IP issues... How certain are ARM's lawyers that nothing contentious slipped into the code over the years? How long will it take ARM's legal department to fully review this code and assess that risk?

The only really feasible solution tends to be a freshly written driver, with a full development history available publicly. And if ARM wants to occupy their legal department, then they could try to match Intel (AMD started so well, but ATI threw in the towel so quickly; luckily the AMD GPGPU guys continued part of it), and provide the Technical Reference Manual and other documents for the Mali. That would be much more productive, especially as it will already be more legal overhead than ARM management would be willing to spare, when they do finally end up seeing the light.

So. ARM management hates us. But guess what. Apart from telling us to change our name (there was apparently the "fear" of a trademark issue with us using Remali, so we ended up calling it Lima instead), there was nothing that they could do to stop us a year and a half ago. And there is even less that ARM can do to stop us today :)

A full 6.0%...

Q3A with open source generated shaders!

The combination of limare and open-gpu-tools can now run the Quake 3 Arena timedemo without depending on the binary driver for the shader compiler!

Connor Abbott has been his amazing (16-year-old!) self again in the weeks after his talk at FOSDEM, and he pushed his compiler work in his open-gpu-tools tree far enough to handle basic vertex shaders. Remember that our vertex shader is a rather insane one, where the compiler has to work really hard on getting scheduling absolutely right. This is why an assembler for our vertex shader was not too useful, and the better part of a compiler had to be written for it to generate useful results. A mammoth task, and Connor's vertex shader code is now larger than the code I have in my limare library.

So it was high time that we brought limare and OGT together to see what they were capable of with some basic shaders. Luckily, the Q3A GLES1 emulation has basic shaders, what a nice coincidence :)

So Connor turned my simple vertex shader ESSL into the high-level language used by the OGT vertex shader compiler, and, through steps described at this wiki page, turned it into MBS files (Mali Binary Shader - the file type output by the standalone compiler, and also by newer binary-driver-integrated compilers). Limare can then load and parse those MBS files, and run the shaders. No need to involve the ARM binary anymore when we have OGT-generated MBS files :)

The result was quite impressive. We had a few issues where the limare driver (which has mostly taken its cues from the output of the binary driver) and OGT disagreed over symbol layout, but apart from that, bringing up the shaders Connor produced was pretty painless. Amazingly effortless, for such a big step.

Connor then spent another day playing with the fragment shader assembler, fixed some bugs, and produced 3 fragment shaders for us. One for the clear shader used by limare directly, and 2 for Q3A. After some more symbol layout issues, these also just worked! We even seem to be error-margin faster with the MBS files (due to texture coordinate varyings being laid out differently).

So this is a really big milestone for the lima driver project. Even with our insane pre-optimized architecture, we now are able to run Quake 3 Arena without any external dependencies, and we are beating the ARM binary while doing so.

For generating your own shader MBS files, check out Connor's OGT, and then you can head straight to Connor's wiki page. My Q3A tree now has the MBS code included directly. And I pushed a dirty version of my FOSDEM limare code.

As for this new limare code, this fosdem_2013_pile branch will vanish soon, as I still need to properly pry things apart. This is run-for-the-prize code, and often includes many unrelated fixes in the same commit. It's better to do archeology on it now than 3 years from now, so this needs to be split. But in the meantime, you all can go and give Q3A on a fully free driver stack on Mali hardware a go :)

I will not post a video, as there really is nothing new to see. It is the exact same timedemo, running a few per mille faster. Build things, and then run it yourself on your sunxi hardware (I am still working on porting it to the new kernel of a more powerful platform). That's the best proof there is!

For building limare, check out the fosdem2013_pile branch and then just run make/make install.

For building Q3A, all you need to do is run:
    make ARCH=arm USE_LIMARE=1

And, when you have the full Quake 3 Arena data installed in ~/ioquake3/baseq3, you can create a file called ~/ioquake3/baseq3/demofour.cfg with the following content:
    cg_drawfps 1
    timedemo 1
    set demodone  "quit"
    set demoloop1 "demo four; set nextdemo vstr demodone"
    vstr demoloop1

You can then run the ioquake3 binary with "+exec demofour.cfg" added to the command line, and you will have the demo running on top of fully free software!

Now we really have covered all the basics, time to find out how Mesa will play with our plans :)

Hey ARM!

Quake 3 Arena code.

I pushed out the Quake 3 Arena code used for demoing limare and for benchmarking.

You can build it on your linux-sunxi with
    make ARCH=arm

or, for the limare version (which will not build without the matching limare driver, which i haven't pushed out yet :))
    make ARCH=arm USE_LIMARE=1

for the GLESv2 version (with the broken lamps due to missing alphatest):
    make ARCH=arm USE_GLES2=1

Get a full Quake 3 Arena version first though, and stick all the paks in ~/ioquake3/baseq3. Add this to demofour.cfg in the same directory:
    cg_drawfps 1
    timedemo 1
    set demodone  "quit"
    set demoloop1 "demo four; set nextdemo vstr demodone"
    vstr demoloop1

To run the timedemo, run the quake binary with +exec demofour.cfg added to the command line.

For your own reverse engineering purposes, to build the GLESv1 version with logging included, edit code/egl/egl_glimp.c, and remove the // before:
    //#define QGL_LOG_GL_CALLS 1
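
For the curious, the mechanism behind that define looks roughly like the following (an illustrative sketch only, not the actual egl_glimp.c contents): each qgl entry point gets wrapped so that the call and its arguments are written out before being handed to the real GLES1 implementation, which is what produces the dumps used for replaying individual frames.

    #include <stdio.h>
    #include <GLES/gl.h>

    #ifdef QGL_LOG_GL_CALLS
    /* log the call and its argument, then perform it */
    #define qglEnable(cap) \
        do { \
            fprintf(stderr, "glEnable(0x%04X);\n", (unsigned int)(cap)); \
            glEnable(cap); \
        } while (0)
    #else
    /* pass straight through when logging is disabled */
    #define qglEnable glEnable
    #endif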

But be aware, you are not free to spread that dumped data. That is ID Software data, albeit in a raw form.

I'd be much obliged if anyone hacks up input support, or re-adds sound. Or even adds the missing GLES2 shaders (color as a uniform for instance). That would make this code playable from the console, and should make it easier for me to provide playable limare code.

As you all can see, we have nothing to hide. The relevant differences between GLES1 and limare are in the GL implementation layer. I did shortcut vertex counting to ease logging, but this only has a limited effect on CPU usage. The lower CPU usage of limare is not significant or interesting, as we do less checking than a full driver anyway. In simpler tests (rotating cubes), our scheduling results in a much higher CPU usage though (like 50% more, from 10% to 15% :)), even if we are not significantly faster. As said in my previous post, I am not sure yet whether to keep this, or to find some improvements. Further tests, on much more powerful hardware, will tell.

Connor's Compiler Work.

Connor had a massive motivation boost from FOSDEM (and did not suffer from the bug that was going round and hit so many of us in the last week). Earl from zaReason is sending him an A13 tablet, which should spur him on even further.

He has been coding like a madman, and he is now close to being able to compile the relatively simple shaders used in the Quake 3 Arena demo. He still has to manually convert the ESSL of the vertex shader to his own GP_IR first though, but that's already massive progress which gets us very close to our goals.

I am going to add MBS loading (Mali Binary Shader format) to limare to be able to forgo the binary compiler and load pre-compiled shaders into our programs. Since MBS is also spit out by the open-gpu-tools, we can then distribute our own compiled MBS files directly, and provide a fully open source Q3A implementation on the Mali.

How cool is that!

The very near future.

With my post purely about Q3A and its numbers, the reactions were rather strange. It seems like a lot of people were hung up exclusively on us being only 2% faster, or on us using this "ancient" game. The blog entry itself explained fully why this ancient game was actually a very good choice, yet only very few read it. Very few realized what a massive milestone an almost pixel-perfect Quake 3 Arena is for a reverse engineered driver.

As for performance... When I started investigating the Mali, I had postulated that we would be happy to have only 75% of the performance of the binary driver. I assumed that, even with performance per watt being mighty important in the mobile space, 75% was the threshold at which the advantages of an open source driver would outweigh the loss of performance. Only ARM's big partners would then end up shipping ARM's own binaries, and for projects like CyanogenMod and proper Linux distributions there would be no question about what to ship.

With Q3A, and with the various rotating cubes, we have now proven that we can have a 100% match in performance. Sometimes we can even beat it. All of this is general; there are no Q3A-specific tricks here!

This is absolutely unique, and is beyond even the wildest dreams of a developer of any reverse engineered driver.

Absolutely nothing stops us now from delivering an open source driver that broadly matches the binary driver in performance! And this is exactly what we will be doing next!

Hey ARM!

We are not going away, we are here to stay. We cannot be silenced or stopped anymore, and we are becoming harder and harder to ignore.

It is only a matter of time before we produce an open source graphics driver stack which rivals your binary in performance. And that time is measured in weeks and months now. The requests from your own customers, for support for this open source stack, will only grow louder and louder.

So please, stop fighting us. Embrace us. Work with us. Your customers and shareholders will love you for it.

-- libv.

Quake 3 Arena timedemo on top of the lima driver!

At FOSDEM, I had a mainline talk about "Open ARM GPU Drivers", going over all the projects and GPUs, talking about the developers doing the hard reverse engineering work and the progress that they have made so far. I will write up a blog entry summarizing this talk soon, but for now I will just talk about the Lima demo I showed at the end of the talk.

Let me get straight to the main point before delving into details: we now have a limare (our proto/research driver) port of Quake 3 Arena which runs the Q3A timedemo 2% faster than the binary driver. With 3% less CPU overhead than the binary driver to boot!

Here is the timedemo video up on youtube. It is almost pixel-perfect, with just a few rounding errors introduced due to us being forced to use a slightly different vertex shader (ESSL, pulled through the binary compiler instead of a hand coded shader). We have the exact same tearing as the binary drivers, which are also not synced to display on the linux-sunxi kernel (but ever so slightly more tearing than the original ;)).

This Q3A port is not playable, for a few reasons. One is that I threw out the touchscreen input support but never hacked in the standard SDL-based input, so we have no input today. It should be easy to add though. Secondly, I only include the shaders that are needed for running the timedemo. The full game (especially its cut scenes) requires a few more shaders, which are even simpler than the ones currently included. I also need to implement the equivalent of glTexSubImage2D, as that is used by the cut scenes. So, yes, it is not playable today, but it should be easy to change that :)

We are also not fully open source yet, as we are still using the binary shader compiler. Even after begging extensively, Connor was not willing to "waste time" on hand-coding the few shaders needed. He has the necessary knowledge to do so though. So who knows, maybe when I push the code out (the q3a tree is a breeze to clean, but the lima code is a mess, again), he might still give us the few shaders that we need, and we might even gain a few per mille performance points still :)

I will first be pushing out the q3a code, so that others can use the dumping code from it for their own GPU reverse engineering projects. The limare code is another hackish mess again (but not as bad as last time round), so cleaning that up will take a bit longer than cleaning up q3a.

Why frag like it is 1999?

Until now, I was mostly grabbing, replaying, and then porting EGL/GLES2 programs that were specifically written for reverse engineering the Mali. These were written by an absolute OpenGL/OpenGLES newbie, someone called libv. These tests ended up targeting very specific but far too limited things, and had very little in common with real world usage of the GPU. As most of the basic things were known for the Mali, it was high time to step things up a level.

So what real world OpenGL(ES) application does one pick then?

Quake 3 Arena of course. The demo four timedemo was the perfect next step for reverse engineering our GPU.

This 1999 first-person shooter was very kindly open sourced by ID Software in 2005. Oliver McFadden later provided an OpenGLES1 port of ioquake3 for the Nokia N900. With the Mali binary providing an OpenGLES1 emulation library, it was relatively easy to get a version going which runs on the Mali binary drivers. Thank you Oliver, you will be missed.

The Q3A engine was written for fixed-function 3D pipelines, and this has some very profound consequences. First, it limited the dependency on the shader compiler and allowed me to focus almost purely on the command stream. This completely fits with the main strategy of our reverse engineering project, namely it being 2 almost completely separate projects in one (command stream versus shader compilers). Secondly, and this was a nice surprise when I started looking at captures, the Mali OpenGLES1 implementation has some very hardware-specific optimizations that one could never expose through OpenGLES2 directly. Q3A ended up being vastly more educational than I had expected it to be.

With Q3A we also have a good benchmark, allowing us to get a better insight into performance for the first time. And on top of all of that, we get a good visual experience and it is a dead-certain crowdpleaser (and it was, thanks for the cheers guys :))

The only downside is that the data needed to run demo four is not available with the Q3A demo release and is therefore not freely downloadable. Luckily you can still find Q3A CDs on eBay, and I have heard that Steam users can easily download it from there.

The long story

After LinuxTag, where I demoed the rotating companion cube, I assumed that my knowledge about the Mali was advanced enough that bringing up Q3A would take only a given number of weeks. But as these things usually go, with work and real life getting in the way, it never pans out like that. January 17th is when I first had Q3A working correctly, time enough to still worry about some optimization before FOSDEM, but only just.

I started with an Android device and the kwaak3 "app", which is just Oliver's port with some androidiness added. I captured some frames to find out what I still missed with limare. When I finally had some time available, I first spent it cleaning up the LinuxTag code, which I pushed out early December. I had already brought up Q3A on linux-sunxi with the Mali binary drivers, as can be seen from the video I then published on youtube.

One thing about the YouTube video though... Oliver had a tiny error in his code, one that possibly never showed up on the N900. In his version of the texture loading code, the lightmaps' source format would end up being RGB whereas the destination format was RGBA. This difference in formats, and the in-driver conversion it implies, is not supported by the openGLES standard. This made the mali driver refuse to load the texture, which later on had the driver use only the primary texture, even though a second set of texture coordinates was attached to the command stream. The vertex shader did not reflect this, and in my openGL newbieness i assumed that Ben and Connor had a bug in their vertex shader disassembler. You can clearly see the flat walls in the video i posted. Once i fixed the bug though, q3a suddenly looked a lot more appealing.
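
For the record, this is the kind of thing that tripped the driver up. The function below is just an illustration, not Oliver's actual texture loading code: desktop openGL happily converts between formats on upload, but openGLES requires the internal format and the data format of glTexImage2D() to be identical, so the mismatch gets rejected outright.

    #include <GLES/gl.h>

    static void lightmap_upload(const GLubyte *pixels, GLsizei width, GLsizei height)
    {
        /*
         * Broken: an RGBA internal format with RGB pixel data asks the driver
         * to convert during the upload, which openGLES forbids, so the mali
         * driver (correctly) refuses the texture:
         *
         * glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
         *              GL_RGB, GL_UNSIGNED_BYTE, pixels);
         */

        /* Fixed: keep both sides GL_RGB (or convert the data to RGBA first). */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels);
    }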

I then started turning the openGLES1 support code in Quake's GLimp layer into a dumper of openGLES1 commands and data, in a way that made it easy to replay individual frames. Then i chose some interesting frames, replayed them, turned them into a GLES2 equivalent (which is not always fully possible, alphaFunc comes to mind), and then improved limare until it ran the given frames through nicely (the mali has hw alphaFunc, so limare is able to do this directly too). Rinse and repeat, over several interesting frames.
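
To give an idea of what that dumper looks like, here is a heavily simplified sketch. The helper names (dump_call(), dump_data(), dump_frame) are made up for this example; the real GLimp code wraps a lot more entry points and also stores vertex arrays and texture data.

    #include <GLES/gl.h>
    #include <EGL/egl.h>

    /* Hypothetical dumper state and helpers. */
    static int dump_frame = 100;  /* the frame we want to capture */
    static int current_frame;
    void dump_call(const char *fmt, ...);        /* appends replayable C to a file */
    void dump_data(const void *data, int size);  /* stores the referenced data */

    /* The engine calls wrappers like this instead of the GL entry points. */
    static void dump_glDrawElements(GLenum mode, GLsizei count, GLenum type,
                                    const GLvoid *indices)
    {
        if (current_frame == dump_frame) {
            dump_data(indices, count * ((type == GL_UNSIGNED_SHORT) ? 2 : 1));
            dump_call("glDrawElements(0x%04X, %d, 0x%04X, data_%p);\n",
                      mode, count, type, indices);
        }

        glDrawElements(mode, count, type, indices);
    }

    /* The buffer swap marks the frame boundary, which is what makes replaying
     * individual frames easy. */
    static void dump_eglSwapBuffers(EGLDisplay display, EGLSurface surface)
    {
        if (current_frame == dump_frame)
            dump_call("/* end of frame %d */\n", current_frame);
        current_frame++;

        eglSwapBuffers(display, surface);
    }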

By the evening of January the 16th, i felt that i knew enough to attempt to write a GLimp for limare. This is exactly when my father decided to give me a call. Some have met him at Le Paon last Friday, when he, to my surprise, joined us for a beer after work as his office is not far away. He remarked that i seemed "a bit on edge" when he called on the 16th. Yes, i indeed was, and how could i be anything else at a time like this :) I hacked all night, as at the time i was living purely at night anyway, and minutes before my girlfriend woke up i gave it my first shot. Crash, a stupid bug in my code. I told my girlfriend that i wouldn't join her for "breakfast" before i went to bed, as i was simply way too close. By the time she left for work, i was able to run it until the first few in-game frames, when the rendering would hang, with the mali only coming back several seconds later. After a bit of poking around, i gave the GP (vertex shader) a bit more space for its tile heap. This time it ran for about 800 frames before the same thing happened. I doubled the tile heap again, and it ran all the way through!

The evening before, i had hoped that i would get about 20fps out of this hardware. That already was a pretty cocky and arrogant guess, as the binary driver ran this demo at about 47.3fps, but i felt confident that the hardware had little to hide. And then the demo ran through, and produced a number.


Way beyond my wildest dreams. Almost 65% of the performance of the binary driver. Un-be-liev-ab-le. And this was with plain sequential job handling. Start a GP job, wait for it to finish, then start the PP job, wait for it to finish, then flip. 30.5fps still! Madness!
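
In driver terms, that first pass did nothing smarter than the following. The names are illustrative only, not the actual limare entry points, but the structure is exactly the "start, wait, start, wait, flip" described above.

    struct limare_state;
    struct limare_frame;

    /* Illustrative prototypes only. */
    void gp_job_start(struct limare_state *state, struct limare_frame *frame);
    void gp_job_wait(struct limare_state *state, struct limare_frame *frame);
    void pp_job_start(struct limare_state *state, struct limare_frame *frame);
    void pp_job_wait(struct limare_state *state, struct limare_frame *frame);
    void buffer_flip(struct limare_state *state, struct limare_frame *frame);

    /* Plain sequential job handling: everything waits on everything. */
    static void frame_render_sequential(struct limare_state *state,
                                        struct limare_frame *frame)
    {
        gp_job_start(state, frame);  /* vertex shading and tiling */
        gp_job_wait(state, frame);

        pp_job_start(state, frame);  /* fragment shading into the render target */
        pp_job_wait(state, frame);

        buffer_flip(state, frame);   /* only now show the frame */
    }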

I had two weeks left until FOSDEM, so i had a choice: either add input support and invite someone from the public to come and play in front of the audience, or optimize until we beat the binary driver. The framerate of the first pass decided it: optimization it was. I had a good benchmark, only about a third of the performance still needed to be found, and most of the corners where that extra performance was hiding were known.

My first optimization was to tackle the PP polygon list block access pattern. During my previous talk at FOSDEM, i explained that this was the only bit I found that might be IP encumbered. In the meantime, over the weekly beers with Michael Matz, the SuSE Labs toolchain lead, i had learned that there is a thing called the "Hilbert space-filling curve". Thanks Matz, that was worth about 2.2fps. I benchmarked a few other patterns as well: a two-level Hilbert (inside the PLB block, and out), and the non-rotated Hilbert pattern that is used for the textures. Neither gave us the same performance as the plain Hilbert curve.
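
For the curious: the widely published xy-to-index formulation of the Hilbert curve is all that is needed here. How exactly limare walks the PLB blocks with it is glossed over in this sketch, which simply maps a block coordinate to its distance along the curve, so that blocks can be ordered by that distance.

    /* Rotate/flip a quadrant so the lower-order bits are walked in the right
     * orientation; this is the textbook formulation of the curve. */
    static void hilbert_rotate(int n, int *x, int *y, int rx, int ry)
    {
        int temp;

        if (ry)
            return;

        if (rx) {
            *x = n - 1 - *x;
            *y = n - 1 - *y;
        }

        temp = *x;
        *x = *y;
        *y = temp;
    }

    /* Distance of block (x, y) along the Hilbert curve covering an n by n grid
     * of blocks, with n a power of two. */
    static int hilbert_index(int n, int x, int y)
    {
        int rx, ry, s, d = 0;

        for (s = n / 2; s > 0; s /= 2) {
            rx = (x & s) > 0;
            ry = (y & s) > 0;
            d += s * s * ((3 * rx) ^ ry);
            hilbert_rotate(n, &x, &y, rx, ry);
        }

        return d;
    }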

Building with -O3 then gave us another 1.5fps. Passing vec2s between the shaders gave us 0.3fps. It was time to put in proper interleaved job handling. With the help of Marcus Meissner (the SuSE Security lead), an ioctl struct sizing issue was found for the job wait thread. This fixed the reliability issues with threading on the r3p0 kernel of linux-sunxi. (ARM! Stable kernel interfaces now!) But thanks Marcus: proper threading and interleaved job handling put me at 40.7fps!
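
The interleaved version then looks roughly like the sketch below, with the same illustrative names as in the sequential sketch above, plus a made-up pp_wait_args struct. The wait for the fragment job, and the flip after it, move into the job wait thread, so the caller can immediately start building and running the next frame's GP job while the previous frame is still being fragment shaded.

    #include <pthread.h>
    #include <stdlib.h>

    struct limare_state;
    struct limare_frame;

    /* Illustrative prototypes only. */
    void gp_job_start(struct limare_state *state, struct limare_frame *frame);
    void gp_job_wait(struct limare_state *state, struct limare_frame *frame);
    void pp_job_start(struct limare_state *state, struct limare_frame *frame);
    void pp_job_wait(struct limare_state *state, struct limare_frame *frame);
    void buffer_flip(struct limare_state *state, struct limare_frame *frame);

    struct pp_wait_args {
        struct limare_state *state;
        struct limare_frame *frame;
    };

    /* The job wait thread: block on the fragment (PP) job, then flip. */
    static void *pp_wait_thread(void *data)
    {
        struct pp_wait_args *args = data;

        pp_job_wait(args->state, args->frame);
        buffer_flip(args->state, args->frame);

        free(args);
        return NULL;
    }

    /* Kick off one frame and return: the GP job still runs synchronously, but
     * the PP wait is handed off, so the next frame can be started right away.
     * The caller joins the returned thread before reusing the frame's buffers. */
    static pthread_t frame_kickoff(struct limare_state *state,
                                   struct limare_frame *frame)
    {
        struct pp_wait_args *args = malloc(sizeof(struct pp_wait_args));
        pthread_t thread;

        args->state = state;
        args->frame = frame;

        gp_job_start(state, frame);
        gp_job_wait(state, frame);
        pp_job_start(state, frame);

        pthread_create(&thread, NULL, pp_wait_thread, args);

        return thread;
    }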

And then i got stuck. I was at 40.7fps and knew of nothing that could account for such a big gap in performance. I tried a few things left and right, but nothing... I then decided to port q3a to GLES2 (with the loss of alphafunc and buggered up lamps as a result) to see whether our issue was with the compiled versus the hand-coded shader. But I quickly ran into an issue with multi-texture program state tracking, which was curious, as the lima code was logically the same. Once this was fixed, the GLES2 port ran at about 47.6fps, slightly faster than GLES1, which i think might be down to the lack of alphafunc.
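
The alphafunc loss is inherent to GLES2: glAlphaFunc() no longer exists, and the usual workaround is to discard fragments in the shader, which this quick port simply did without (hence the buggered up lamps). As generic GLSL, not a dump of the shaders actually used, a GL_GREATER style alpha test looks roughly like this:

    /* A GL_GREATER style alpha test, emulated in a GLES2 fragment shader. */
    static const char *alpha_test_fragment_source =
        "precision mediump float;\n"
        "\n"
        "uniform sampler2D tex;\n"
        "uniform float alpha_ref;\n"
        "varying vec2 tex_coord;\n"
        "\n"
        "void main()\n"
        "{\n"
        "    vec4 color = texture2D(tex, tex_coord);\n"
        "\n"
        "    if (color.a <= alpha_ref)\n"
        "        discard;\n"
        "\n"
        "    gl_FragColor = color;\n"
        "}\n";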

Immediately after that i ported the multi-texture state tracking fix to the limare GLimp, but sadly got no change in framerate out of it. Strangely, it seemed like there was no multitexturing going on at all, as my debugging printfs were not being triggered. I then noticed the flag for telling Q3A that our GL implementation supports multitexturing. Bang. 46.7fps. I simply couldn't believe how stupid that was. If that flag had been correct on the first run, i would've been above 75% of the binary driver's framerate right away; how insane would that have been :)
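
For those who do not know the Q3A code: the renderer only takes its multitexture path when the GL support layer has both provided the ARB multitexture entry points and advertised more than one texture unit. Below is a rough GLES1 style sketch of that setup, with names approximated from memory (the actual glconfig_t field and qgl pointer names may differ, and the limare GLimp of course fills these in with limare entry points instead):

    #include <GLES/gl.h>

    /* Approximations of the engine-side declarations. */
    void (*qglActiveTextureARB)(GLenum texture);
    void (*qglClientActiveTextureARB)(GLenum texture);

    static struct {
        int maxActiveTextures;
    } glConfig;

    static void glimp_multitexture_init(void)
    {
        /* GLES1 has these in its core API. */
        qglActiveTextureARB = glActiveTexture;
        qglClientActiveTextureARB = glClientActiveTexture;

        /* The forgotten "flag": leave this at 0 or 1 and Q3A silently draws
         * everything with the primary texture only. */
        glConfig.maxActiveTextures = 2;
    }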

For the final 1.5fps, which put us at 48.2fps, i added a third frame in flight, while still only rendering out to two framebuffers. Job done!

Adding a fourth frame did not improve the numbers, and i left some minute CPU and memory usage optimizations untouched. We are faster than the binary driver, while employing no tricks. We know what we need to know about this chip, and there is nothing left to prove with Q3A performance.

The numbers.

The fact that we are slightly faster is actually normal. We do not have to adhere to the OpenGLES standard, so we can do without a lot of the checking that a proper driver normally needs to do. This is why the goal was not to match the binary driver's performance, but to beat it, which is exactly what we achieved. From some less PP- and CPU-bound programs, like the spinning cubes, it does seem that we are more aggressive with scheduling, though.

Now let's look at some numbers. Here is the end of the timedemo log for the binary driver, on an Allwinner A10 (a single Cortex-A8 at 1GHz) with a Mali-400MP1 at 320MHz, rendering to a 1024x600 LCD, with framerate printing enabled:
    THEINDIGO^7 hit the fraglimit.
    marty^7 was melted by THEINDIGO^7's plasmagun
    1260 frames 27.3 seconds 46.2 fps 10.0/21.6/50.0/5.6 ms
    ----- CL_Shutdown -----
    RE_Shutdown( 1 )

And here is the end of the timedemo log for the limare port:
    THEINDIGO^7 hit the fraglimit.
    marty^7 was melted by THEINDIGO^7's plasmagun
    ]64f in 1.313632s: 48.719887 fps (1280 at 39.473158 fps)
    1260 frames 26.7 seconds 47.2 fps 9.0/21.2/74.0/5.6 ms
    ----- CL_Shutdown -----
    RE_Shutdown( 1 )
    ]Max frame memory used: 2731/4096kB
    Auxiliary memory used: 13846/16384kB
    Total jobs time: 32.723190 seconds
       GP job  time: 2.075425 seconds
       PP job  time: 39.921429 seconds

Looking at the numbers from the limare driver, my two render threads are seriously overcommitted on the fragment shader (PP): they spent 39.9 seconds waiting on PP jobs during a demo that took only 26.7 seconds of wall time. We really are fully fragment shader bound, which is not surprising, as we only have a single fragment shader core. Our GP, with just 2.1 seconds of job time, is sitting idle most of the time.

It does seem promising for a quad-core mali though. I will now get myself a quad-core A9 SoC, and put that one through its paces. My feeling is that there we will either hit a wall with memory bandwidth or with the CPU, as q3a is single threaded. And since limare does not yet support multiple fragment shader cores, bringing those up will solve the last remaining big unknown too.

Another interesting number is the maximum frame time: 50.0ms for the binary driver, versus 74.0ms for limare. My theory is that i am scheduling differently from the original driver, and that we get hit by overcommitting the fragment shader. We will have to wait and see whether this difference in scheduling improves or worsens the numbers on the potentially 4 times faster SoC, where we will no longer be context switching between our render threads, and no longer be limited by the fragment shader. That should then decide whether another scheme needs to be picked or not.

Once we fix up the Allwinner A10 display engine, and can reliably sync to refresh rate, this difference in job scheduling should become mostly irrelevant.

The star: the mali by Falanx.

In the previous section i was mostly talking about the strategy of scheduling GP and PP jobs, of which there tends to be one of each per frame. Performance optimization is a very high level problem on the mali, which is a luxury. On mali we do not need to bother with highly specific command queue patterns that make the most optimal use of the available resources, and which end up being SoC and board specific. We are as fast as the original driver without any trickery, and this has absolutely nothing to do with my supposed ability as a hacker. The credit fully goes to the design of the mali. There is simply no random madness with the mali. This chip makes sense.

The mali is the correct mix of the sane and the insane. All the real optimization is baked into the hardware design. The vertex shader is that insane for a reason. There is none of that "We can fix it in software" bullshit going on. The mali just is this fast. And after 20 months of throwing things at the mali, i still have not succeeded in getting it to hard or soft lock up the machine. Absolutely amazing.

When i was pretty much the only open source graphics developer pushing display support and modesetting forward, I often had to hear that modesetting was easy, and that 3D is insane. The mali proves this absolutely wrong. Modesetting is a very complex problem to solve correctly, with an almost endless set of combinations, and it requires very good insight and the ability to properly structure things. If you fail to structure correctly, you have absolutely no chance of satisfying 99.9% of your users; you'll be lucky if you satisfy 60%. Compared to modesetting, 3D is clearly delineated, and it is a vastly more tractable and manageable problem... Provided that your hardware is sane.

The end of the 90s was an absolute bloodbath for graphics hardware vendors, with just a few, suddenly big, companies surviving. That's exactly when a few Norwegian demo-sceners at the university in Trondheim decided that they would do 3D vastly better than those survivors, and they formed a company to do so, called Falanx. It must've seemed like suicide, and I am very certain that pretty much everybody declared them absolutely insane (like Engadget did). Now, 12 years later, seeing what came out of that, I must say that I have to agree. Falanx was indeed insane, but it was that special kind of insanity that we call pure genius.

You crazy demo-sceners. You rock, and let this Q3A port be my salute to you.