Giving Google Earth a native look & feel

Although Google Earth is a native Linux application and even uses Qt, it still looks like a Windows 95 application – even in KDE. The problem is that Google Earth ships its own Qt libraries instead of using the ones installed on the system and thus also does not respect the system settings.

But there is help! If you want to make it look native, run

# assuming you are using the medibuntu packages
cd /usr/lib/googleearth/
sudo rm libQt*

This will force Google Earth to use the system libraries instead, which include a GTK Qt style. So if you launch googleearth as

googleearth -style GTK+

it will look like this:

Native Google Earth

But this will only work on Jaunty, since the GTK Qt style was only added in Qt 4.5. If you are still on Intrepid you can use

googleearth -style cleanlooks

which will at least look familiar next to the default Human theme.

Finally, free voice/video chat is here

Just three years after Google first released their Jingle extension for Jabber, which they use in their GTalk application, it is finally possible to use it on the open source desktop too. And this is just two years after Nokia released their Linux/Gnome based N800 internet tablet, which also supported it.

But now Jaunty got an update to the Empathy messenger which brought voice/video support. So you can finally make calls between two Empathy clients, between Empathy and GTalk on Windows, and even between Empathy and your now actually useful N800 – regardless of whether you are on 32 bit or 64 bit. All it requires is a Jabber account – no need for Skype any more.

And here is the proof screenshot:

Empathy and N800 chat

No fglrx for Jaunty?

Jaunty will have the new Xserver 1.6, which brings a lot of 2D acceleration improvements and bug fixes, but also an API/ABI change, so binary-only graphics drivers must be updated in order to work with it. This makes all currently released fglrx drivers (Catalyst 9.2) incompatible with Jaunty.

There was a similar situation with the release of Intrepid, but back then AMD released a special version for Intrepid. This time, however, AMD just announced that they will drop support for chips <= R600 (HDxxxx series) from fglrx starting with Catalyst 9.3 and recommend the open source -ati driver for the no longer supported cards. So the question is: will Catalyst 9.3 support Xserver 1.6?

It does not seem very likely, since even the leaked Catalyst 9.4 does not yet support Xserver 1.6, and the Ubuntu developers are instead trying to get R600/R700 support into the open source driver.

The good news is that the open source -ati driver in Jaunty will support all currently available ATI chipsets (yes, this includes R600/R700), which will probably give you the fastest 2D acceleration among all Linux drivers (Intel just broke their driver by merging UXA) and also much better Xvideo acceleration than fglrx (no tearing).

The downside is that there will be a regression regarding 3D acceleration. The open source 3D driver stack is currently much slower than the one in fglrx, and the work to fix this will not be ready until Ubuntu 9.10 (or probably later). This will not affect anyone who just uses Compiz – they will even get a major performance improvement compared with fglrx. But in games or more sophisticated 3D applications there will be a noticeable slowdown.

And R600/R700 chips will not get any 3D acceleration at all – not even Compiz. But for these there is still hope for Catalyst 9.5.

C++ becoming usable

Recently I had to write a small project in C++, which involved a lot of cursing, since C++ has many awkward corners and no sane way to get around them. Then I discovered that the installed GCC already has C++0x support if you compile with “-std=c++0x”, and suddenly C++ became a usable language for me.
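Just to give the idea, here is a minimal sketch of what that looks like in practice (file name and contents chosen arbitrarily):

// cxx0x_test.cpp – build with: g++ -std=c++0x cxx0x_test.cpp
#include <iostream>

int main() {
    auto answer = 42;                 // 'auto' type deduction is a C++0x feature
    std::cout << answer << std::endl; // prints 42
}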

These are the core problems that currently exist in C++ and that C++0x solves:

Broken array data type

Currently one has only C arrays available, which are nice if you want raw access to memory, but which already fail if you just want to know the size of the array:

// ok, you can do
int a[3];
cout << sizeof(a)/sizeof(int) << endl; // 3

// but this one fails
int b[3][2];
cout << sizeof(b)/sizeof(int) << endl; // 6 – and no easy way to get the size of the first dimension

But with C++0x one has a proper array data type:

array<int, 3> a;
cout << a.size() << endl; // 3 – already more readable

array<array<int, 2>, 3> b;
cout << b.size() << endl; // 3 – the size of the first dimension

Manual Memory Management

Currently one has only auto_ptr for memory management, but it is quite limited, since there can be only one auto_ptr pointing to an object at any time. With C++0x there will be a shared_ptr which can be copied around and assigned just like an ordinary pointer, but which automatically releases the memory.

// so if you always do
shared_ptr<int> p(new int);

// instead of
int* p = new int;

you get reference counted garbage collection in C++. If you are coming from Java, you might wrinkle your nose, since this is not as efficient as the “proper” garbage collector in Java, but it is what GTK and Python use all the time…
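To illustrate the reference counting, here is a minimal sketch (the concrete values are just for illustration); use_count() reports how many shared_ptrs currently own the object:

#include <memory>
#include <iostream>
using namespace std;

int main() {
    shared_ptr<int> p(new int(42));
    cout << p.use_count() << endl;     // 1 – a single owner

    {
        shared_ptr<int> q = p;         // copying only increments the reference counter
        cout << p.use_count() << endl; // 2
    }                                  // q goes out of scope, the counter drops back to 1

    cout << p.use_count() << endl;     // 1
}                                      // last owner gone – the int is released automatically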

Bad Collection Integration

Although C++ already offers some collection types besides plain C arrays, they are badly integrated into the language:

// for instance this works
int a[3] = {1, 2, 3};

// but this does not
vector<int> a = {1, 2, 3};

Well, with C++0x the latter example also works 🙂
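As far as I can tell, the same brace syntax also works for nested containers and maps, for example:

#include <vector>
#include <map>
#include <string>
using namespace std;

// nested containers can be filled with the same brace syntax
vector<vector<int>> nested = { {1, 2}, {3, 4, 5} };
map<string, int> ages = { {"alice", 30}, {"bob", 25} };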

Then there is also better integration with iterators:

list<int> l;
// while you currently have to write
for (list<int>::iterator it = l.begin(); it != l.end(); it++) {
    cout << *it << endl;
}

// you will be able to write
for(int& i: l) {
    cout << i << endl;
}

Sugar on top

Do you know those occasions where you have to write

list<vector<int>>* l = new list<vector<int>>();

// well now you can write
auto l = new list<vector<int>>();

This also helps in cases where you just do not care about the type or do not know it, like in

auto f = bind(&Class::method, inst, "hello", _1, _2);
f(12, 13); // calls inst.method("hello", 12, 13)
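For the record, here is a self-contained sketch of the bind example above – the Class and method names are of course just made up for illustration:

#include <functional>
#include <iostream>
#include <string>
using namespace std;
using namespace std::placeholders; // provides _1, _2, ...

struct Class {
    void method(const string& s, int a, int b) {
        cout << s << ": " << a + b << endl;
    }
};

int main() {
    Class inst;
    // "hello" is bound immediately, the two placeholders are filled in at call time
    auto f = bind(&Class::method, inst, "hello", _1, _2);
    f(12, 13); // calls method("hello", 12, 13) on (a copy of) inst and prints "hello: 25"
}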

Conclusion

These were only a few of the changes, but they already bring a new level of abstraction to C++, which allows expressing algorithms at a higher and more understandable level. This probably will not convert any Java programmers, but it will certainly make life harder for C++ alternatives like D, since they now look a lot less attractive.

Nokia LGPLed Qt

This is another example of why Nokia is currently the most OSS friendly company when it comes to desktop applications.

First they sponsored most of the work on Telepathy, which they needed for their Maemo platform and which is now set to be used as a desktop-wide presence framework in Gnome.

Then they sponsored the development of Tracker, which will be used in Maemo Fremantle and will probably be the most advanced desktop search engine with the coming 0.7 release.

These two projects are already great examples of working with the OSS community; instead of just using the kernel and building their own closed user-land, they chose to use available open source projects and supported them to bring them into shape – so the whole Gnome desktop profited.

But now they have made probably the most important contribution to the whole OSS desktop: they relicensed Qt under the LGPL. Up to now the Linux desktop was split between Qt applications and GTK applications. This split was really deep; it went from the different APIs down to the X11 level and how the primitives were drawn. It was like that because the licensing of Qt and GTK was incompatible and absolutely no code could be shared.

Now both Qt and GTK are LGPL, but expecting that either Qt or GTK will be discontinued is unrealistic, since many applications would still use the dropped toolkit, and developers will not just adapt to a new API if they already have one that they like.

But what I think we can expect is that Qt and GTK will share a lot of code. Right now GTK uses Cairo for drawing, while Qt has its own implementation which does basically the same. There are many other such areas where code is unnecessarily duplicated.

So what I hope is that the difference between Qt and GTK will become just the API – with a mostly identical implementation.

Going back to Intrepid

Well, Jaunty is really great; the new Xorg with the improved EXA just flies here on the radeon driver. Compiz is now perfectly smooth – no more jerky scrolling like with fglrx.

And then there is already ext4 with support in GRUB, so I promptly converted all my partitions to ext4, since I wanted to get rid of the long drawn-out fsck. That turned out to be not so cool, since ext4 needs an inode size of 256 bytes in order to store the additional checksums for the fast fsck, but if you convert from ext3 you most likely have only 128 byte inodes – so no fast fsck for me… 🙁

Then I realized that – although otherwise great – the OSS drivers just suck at 3D, and fglrx did a far better job here. And since I need OpenGL for work, I decided to go back to Intrepid.

Well, I just have to wipe root and reinstall Intrepid there, since I was clever enough to create a separate /home – but wait, my /home is ext4 now! And Intrepid is not yet able to handle ext4. Ouch!

In case you were saying “just like me” while reading up to this point, here is the solution: you can take just the kernel out of the Jaunty repository and use it with Intrepid to mount /home. Everything plays nicely with the foreign kernel 🙂

You could even do this on purpose to get ext4 support in Intrepid, but well, just don't.

Still no OSS R600 & R700 support in Jaunty

Although AMD released the example code several hours ago already, there are still no attempts to update the corresponding packages in Jaunty in order to finally enable the R600 and R700 to use open source acceleration techniques like EXA or tear-free textured video.

There are also no attempts to make kernel mode setting and GEM work on these chips, which finally leads me to the question of whether AMD is really serious about Linux support. The way I see things, there is no chance that this will be ready by – let's say – tomorrow. I am really considering getting an NVidia card now. 🙁

Gnome Online Desktop

There has been a lot of talk recently about how Gnome should embrace online services in order to keep up with Web 2.0 development. But sadly most of the ideas were along the lines of “let's integrate better with web service <foo>” – the problem is that I do not want to start using Google Calendar from now on; I like the way Lightning handles my appointments and I like that I have them available offline. What I would like is to be able to synchronise them automatically when an online connection is available.

But we are already too focused on the data layer right now, so let's take a step back and see where we are.

We are here

What we currently have are web pages – or rather web services – with interfaces described in XHTML and styled with CSS, and we have local applications with interfaces described in XML and (possibly) styled with CSS. So the UI is already done pretty similarly – although it still looks quite different, since web pages come with their own standard widgets and since most web pages are drawn inside the browser, while local applications are drawn as separate windows on the desktop.

The difference

One might think that the difference is the place where the computation happens, but actually the computation happens on your local machine in both cases – it is just that JavaScript is the only language you can use for web services, and it is pretty limited right now, although things like TraceMonkey and Google Gears are creating a platform for computation intensive applications delivered over HTTP.

And that is also the main difference: the way applications are deployed. Because web services can be updated every time you reload the page, you can easily keep your customers up to date, while with local applications you often have to deal with out-of-date installations and updates. This is especially a problem on non-Linux platforms, where you do not have a central package (application) manager. But on Linux we have advanced systems for update delivery, so this is not really a problem.

The real difference

What everyone is doing right now is rewriting existing and working code in JavaScript in order to solve that one delivery problem – which is really not that big on Linux, while the rewriting task is quite huge. And everyone starts using these new and shiny web services – but the question is why. The real benefit which web services offer over current local applications is centralized and therefore synchronised storage; you can log into Google Calendar from every PC you have access to and always see your up-to-date appointments.

So we need

So basically what is missing from Gnome is that centralized storage, but the problem is that an open source community cannot easily offer that, because someone has to pay a lot of money for a lot of servers. So we still have to integrate with the existing web services. Nothing new here. But that is not a problem either, since there are already web services for everything one might imagine.

What we have to consider here is that we must be able to switch easily between different services – like, say, Picasa Web Albums and Flickr. We could even define an open API for web service interaction which the services could then implement.

The master plan

To me it makes no sense to re-implement everything in JavaScript just for the sake of being able to run it inside the browser. We already have a variety of better languages available when running on the local machine, and much of the infrastructure is ready as well.

So instead we should make the existing applications more web-aware. Luckily there is already a framework made exactly for this: Telepathy. Currently Telepathy only manages your web presence status and allows local applications to work with it. But it should also be able to manage your web pictures and web calendar and make them accessible to local applications in the same way.

Then F-Spot could easily show you which of your pictures are currently published and which are only available locally, without distinguishing between Picasa Web Albums and Flickr – you should only have to care whether your pictures are synchronised or not.

The next step would be to actually take advantage of such a framework. If you currently want to write a text, you have to care about whether you are writing it for offline or online use. If you write it for offline use, things are pretty easy; since Word is not available on Ubuntu, you use OpenOffice. But if you write it for online use, you have to deal with a bunch of different interfaces – depending on your web software you have to deal with the WordPress text editor, the Joomla text editor and so on. And what they all have in common is that you cannot easily save your text to your local computer.

The initial task was just to write a text – why do I have to care about such things at all? Would it not be great if I could just open a locally running text editor, write my stuff and then decide whether I want to save it to a file or publish it to a web service of my choice?

A locally running, web-aware application could do just that, while offering a consistent interface for text editing and doing a lot of things which you cannot do inside a browser – like launching Gimp to edit that picture you need in your article and then uploading it automatically.

So basically we have to move things offline in order to get an online desktop – and then we can also use browsers for what they were meant for: reading text.

OpenOffice is on the right track

After my last post about Blender and how Open Source projects lack communication when it comes to UI Toolkits, I wanted to write another post about OpenOffice.

The UI of OpenOffice is inherently broken because of (similarly to Blender) the age of the project. OpenOffice, too, started as a closed source project, developed primarily for Windows and with its own toolkit. It therefore looked quite out of place when it was first ported to Linux – hence one of the main features of OpenOffice 2 compared to OpenOffice 1 was the layer put over the UI, which made it look a bit more native.

But the problem still persists, since it was not fixed but just covered by that layer – you can still see the true ugly face of OpenOffice by removing the “openoffice.org-gtk” package in Ubuntu. But even with the cover installed you will see a lot of wrongly drawn widgets and odd behaviour.

But it seems like the OpenOffice developers are finally aware of that and have started fixing the problem by abandoning their custom written toolkit and using an abstract XML description of the UI which, on Linux, maps directly to GTK calls. This is quite a nice approach, similar to where Mozilla ended up on Linux (using XUL) and where Blender should head if they really want to fix their UI problems.

Well done Ubuntu

I just started my Intrepid desktop without having xorg-input-kbd and xorg-input-mouse installed, which means that X.org in Intrepid now uses evdev for all input handling. This is nice, since it moves all input drivers out of X and uses the kernel input event interface instead – less code duplication, yay!

By the way: evdev was one of the many things which made the EEE PC boot in 5 seconds at Linux Plumbers.