Category Archives: Articles

How to manually update a deb package from source

Probably everyone has encountered a package in Ubuntu that was not the newest released version while, for some reason, the newest one was needed. The first step is to search for a PPA with the desired version. But what if there is no such PPA, or you want to build the version yourself? This is where this guide comes in. Note however that it is not aimed at ordinary users – you need some experience with programming/compiling to successfully build a package.

Before you start

Before you start, make sure that you have source packages enabled in your software sources.
Next you obviously need the upstream source tarball of the new program, which should look something like <packagename>-<version>.tar.gz.
Download this tarball to a new directory <somedir> and extract it there.
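For example, with a hypothetical package hello at new upstream version 2.9 (the package name, version and URL are placeholders), this could look like

mkdir ~/src && cd ~/src                    # this will be <somedir>
wget https://example.org/hello-2.9.tar.gz  # hypothetical upstream URL
tar xf hello-2.9.tar.gz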

Updating Package info

For the following commands I assume you are in the previously created directory <somedir>.

First we need to get the old version of the source package

apt-get source <packagename>

This will download and extract the old source package into <packagename>-<oldversion>.

Now we need some helper scripts to perform the upgrade, as well as the build-time dependencies of the package

sudo apt-get install dpkg-dev devscripts fakeroot
sudo apt-get build-dep <packagename>

Next change into the extracted sources of the old package and update the packaging

cd <packagename>-<oldversion>
uupdate -v <newversion> ../<packagename>-<newversion>.tar.gz

# change into the extracted new package
cd ../<packagename>-<newversion>

# update version info
dch -l ~ppa -D $(lsb_release -sc)
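Putting it all together, a complete run for the hypothetical package hello (old version 2.8, new version 2.9) would look roughly like

apt-get source hello                  # creates hello-2.8/
cd hello-2.8
uupdate -v 2.9 ../hello-2.9.tar.gz    # merges the packaging with the new source
cd ../hello-2.9
dch -l ~ppa -D $(lsb_release -sc)     # sets the version to e.g. 2.9-0ubuntu1~ppa1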

For more information see the Debian New Maintainers' Guide.

Building the program

To trigger a rebuild of the program simply execute

dpkg-buildpackage
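If you just want to test the build locally, you can skip the signing of the source and changes files; the resulting .deb files land one directory above and can be installed directly (file names will vary)

dpkg-buildpackage -us -uc
sudo dpkg -i ../<packagename>_<newversion>*.deb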

Uploading your version to a PPA

To upload a package to a PPA you first need to sign it to prove that you are the author. To do this you have to execute the following in the <packagename>-<newversion> directory

debuild -S

Furthermore you need the upload tool dput to actually perform the upload

sudo apt-get install dput

Now change to <somedir> and execute

dput ppa:<your_username>/<repository> <source.changes>
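For example, with the hypothetical Launchpad user john and a PPA named testing, the call would be

dput ppa:john/testing hello_2.9-0ubuntu1~ppa1_source.changes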

You can find more information at Launchpad.

Secure Owncloud setup

While the Owncloud Manual suggests enabling SSL, it unfortunately does not go into detail on how to get a secure setup. The core problem is that the default SSL settings of Apache are not sane, in the sense that they do not enforce strong encryption. Furthermore, the default certificate will not match your server name and will produce errors in the browser.

In the following, a short guide on how to set up a secure Apache 2.4 server for Owncloud is presented.

Generating a secure Certificate

A secure TLS connection starts with the server authenticating itself to the client using the server certificate. Therefore we will start the setup of our server by generating that certificate.
The purpose of the certificate is to ensure that when you type in “your.website.net”, you are indeed talking to your server and not to a man-in-the-middle who intercepted your connection. To this end, the certificate contains the server name and the public key of the server.

As mentioned above, the default certificate will not match your server name, so you will have to generate a matching one.

Unfortunately, following the Apache SSL FAQ results in a certificate using the possibly vulnerable SHA-1 hashing function. A better alternative is SHA-256, but it has to be explicitly requested during certificate creation. The corresponding openssl call for certificate creation is

openssl req -new -sha256 -x509 -nodes -days 365 -out your.website.net.pem -keyout your.website.net.key
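You can verify that the resulting certificate indeed uses SHA-256 and check its validity period with

openssl x509 -in your.website.net.pem -noout -text | grep "Signature Algorithm"
openssl x509 -in your.website.net.pem -noout -dates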

The resulting certificate and private key have to be referenced in Apache as follows

SSLCertificateFile    /path/to/your.website.net.pem
SSLCertificateKeyFile /path/to/your.website.net.key

Note that this results in a so-called self-signed certificate. Usually certificates on the web are approved by a Certificate Authority (a digital notary) which confirms your identity. By trusting the CA you can also trust websites that are otherwise unknown to you, but which were approved by the CA.
While this makes sense for public websites, you probably already trust your own server, so there is no need for a CA-signed certificate.
Just add your self-signed certificate to the trusted list of your browser on first visit.
If you fear a man-in-the-middle attack during the initial connection, you can also manually copy the generated pem file to a USB drive and import it into the browser from there.

Using secure ciphers

With the certificate in place, we only know that we are indeed talking to the server we want to talk to. Next we actually want to start sending encrypted messages. In theory we could encrypt data with the public key of the server using asymmetric encryption like RSA. However, asymmetric encryption is slow and therefore not suitable for large amounts of data. Furthermore, our communication could be decrypted if somebody recorded it and at some point in the future got access to the private key of the server. Therefore we want to use a one-time symmetric key. This way we achieve forward secrecy. The symmetric encryption should also be secure in the sense that even when large amounts of data are collected, it must not be possible to reconstruct the key and decode the data.
Last but not least, the chosen cipher should be supported by our clients. Surprisingly it is the Owncloud desktop client which does not support modern ciphers, while current browsers and even the Android app do.

Instead of discussing all available ciphers with regard to the above requirements, I would rather refer to the excellent TLS server configuration guide by Mozilla.
Yet we can still improve the suggested configuration. Mozilla has to consider compatibility with old web browsers, which we do not. So without further ado, this is the recommended cipher configuration

SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite EECDH+AESGCM:EECDH+AES:EDH+AES

The rationale behind this suggestion is

  • allow TLS 1.0 for compatibility with mobile apps
  • disable SSL compression to mitigate the CRIME attack
  • always use Diffie-Hellman (DH) key exchange (Kx) for forward secrecy
  • prefer Elliptic Curve Diffie-Hellman (ECDH) for performance
  • always use AES for symmetric encryption
  • prefer AES-GCM mode for security and performance
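To check which protocol and cipher a client actually negotiates with this configuration, you can connect with the openssl test client (assuming the server is reachable on the standard HTTPS port) and look for the Protocol and Cipher lines in the output

openssl s_client -connect your.website.net:443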

Notes

Actually only the Owncloud desktop client forces us to enable ordinary DH key exchange. Besides DH being much slower than ECDH, a weak modulus is used for the DH key exchange up to and including Apache 2.4.6. While there are no practical attacks exploiting this yet, we can only be completely on the safe side by updating to Apache 2.4.7 or fixing the root issue in the desktop client. (bug status)

Also, we still allow TLS 1.0 for compatibility with the mobile apps, as OpenSSL on Android does not yet support TLS 1.1+. This circumstance is not really critical, as BEAST is not applicable to the sync apps, and browsers will either connect to the server with TLS 1.1+ or work around the vulnerability.

Enforcing HTTPS

At this point the secure connection to your server is ready, but we still have to ensure that it is the only way data is exchanged with the clients.
While you could make Apache listen on port 443 only, you would then always have to remember to type https:// into the browser. A better way is to automatically redirect from port 80 to port 443, like

<VirtualHost *:80>
        ServerName your.website.net
        Redirect permanent / https://your.website.net/
</VirtualHost>

Note that you do not have to use mod_rewrite here, as Owncloud sets the HSTS header, so browsers will automatically prefix all requests with https after the first visit.
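If you want Apache to send the HSTS header itself for everything served from that host – not just Owncloud – a minimal sketch, assuming mod_headers is enabled, would be

Header always set Strict-Transport-Security "max-age=15768000"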


How to root Android using Ubuntu

The Big Picture

Android consists of three parts relevant to rooting

  1. the bootloader
  2. the recovery system
  3. the main system

Typically only the main system is running – that is, the Linux kernel, the launcher, the phone app, etc. If we talk about rooting, we mean that we want to add an additional app to the main system which may access secured parts of that system and also acts as a gatekeeper for other apps that want to get access too.

The problem is that we need access to the secured parts of the system in order to do so, which means that we cannot simply install that app (e.g. an apk) from within the main system.

This means we have to go one level down, to the recovery system. Typically you do not see it, as it is only active when the main system cannot run – either because a system update is being installed or because you do a factory reset.
As the recovery system can perform a full system update, it also has access to the secured parts of the main system – exactly what we need. Unfortunately the stock recovery system does not allow installing apps, so we have to replace it.
But before that we have to talk about the bootloader.

The bootloader is a tiny piece of software which decides whether to start the recovery or the main system (or another main system, like Ubuntu Phone). But in the default configuration it only starts systems that it knows and trusts. In this configuration the bootloader is called locked. While it prevents malicious software from changing the phone and spying on us, it also prevents us from replacing the recovery system. This concept is also coming to the PC, by the way, where it is called Secure Boot.

Here is a graphical overview of the Android components:

[Figure: the bootloader starts either the recovery system or the main system]

So what we need to do in order to get root access is

  1. unlock the bootloader
  2. replace the recovery system
  3. install a superuser app

Note that unlocking the bootloader also allows attackers to circumvent any of the Android security features – it is possible to directly access all the files on the phone from the bootloader.
Therefore Android will wipe all user data when the bootloader is unlocked.

Preparations

First you need to install the fastboot binary to be able to perform low-level communication with the device

sudo apt-get install android-tools-fastboot

Next you have to allow non-root users to execute commands over USB, so you do not have to run fastboot as root. For this create the file

/etc/udev/rules.d/51-android.rules

with the following content

SUBSYSTEM=="usb", ATTR{idVendor}=="<VENDOR>", MODE="0666", GROUP="plugdev"

You can find the value for <VENDOR> on the page linked here.
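For example, for a Google device such as a Nexus the USB vendor ID is 18d1, so the rule would read

SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", MODE="0666", GROUP="plugdev"

Afterwards reload the udev rules so the change takes effect

sudo udevadm control --reload-rules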

Finally you have to reboot into fastboot mode. Usually there is a key combination you have to press during startup.

Remember this key combination, as you will need it a few more times.
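Once the device is in fastboot mode and connected over USB, you can verify that it is detected by running the following command, which should print the serial number of your device

fastboot devices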

Samsung devices however, like the Galaxy S3, do not support fastboot mode – instead they have a download mode, which uses a proprietary Samsung protocol. To flash those you have to use the Heimdall tool. While this article does not cover the Heimdall CLI calls, the general discussion still applies.

Unlocking the Bootloader

For Google devices, like a Nexus 4 or Nexus 7, it is just

fastboot oem unlock

If you have a Sony Xperia device, like an Xperia Z, you additionally have to request an unlock key and then do

fastboot oem unlock 0x<KEY>

where <KEY> is the key you obtained.

Replacing the Recovery System

There are two prominent alternative recovery systems with the ability to install apps

ClockworkMod (CWM) is probably the best known, so we will use that one. From the website linked above, download the recovery image which fits your phone.
Here you have the choice between the ordinary recovery, which uses the volume buttons of your device for navigation, and the touch recovery, which supports the touch screen. Then flash the image with

fastboot flash recovery <RECOVERY>.img

where <RECOVERY> is the name of the file you downloaded. For instance for a Nexus 5 and CWM 6.0.4.5 it would be

fastboot flash recovery recovery-clockwork-6.0.4.5-hammerhead.img

Installing the superuser app

Again we have several choices here.

Although SuperSU is the most prominent one, I would recommend getting Superuser by CWM, as it is open source and also nag-free, since there is no “pro” version of it.

To install it, we first need to get this zip archive and copy it to the device (one way to copy it is sketched after the menu steps below). Then reboot into fastboot mode and select “Recovery Mode” to get to the recovery system. Once in recovery mode select

install zip -> choose zip from /sdcard

then browse and select the “superuser.zip” you just copied.
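To get the zip archive onto the device in the first place, one option is adb while the main system is still running – a sketch, assuming the downloaded archive is named superuser.zip

sudo apt-get install android-tools-adb
adb push superuser.zip /sdcard/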

Once installed select

Go Back -> reboot system now

Once the system has started you should have a “Superuser” App on your device. Congratulations, you are done.

Optional: flash stock recovery

As the recovery system is responsible for installing system updates, it is a good idea to revert to the stock version after you have installed root, so the system can auto-update itself again. However, a system update will also remove your superuser app, so you will have to repeat the above procedure afterwards.

If you have a Google Nexus device, you can grab the factory images here. There you will find an image of the stock recovery, which you can restore with

fastboot flash recovery recovery.img

Debugging native code with ndk-gdb using standalone CMake toolchain

I recently ran into this problem and could not find any good solution on the Internet. So next comes a small summary of the problem with hopefully enough buzzwords, so Google can lead you here.

If you want to do C++ development on Android, you need the NDK for cross-compilation. It comes by default with its own build system called ndk-build, which basically is a bunch of custom makefiles. But if you are sharing code between the Android platform and, let's say, plain Linux, you likely already have a build system in place. For C/C++, CMake is quite popular, as it supports different platforms and compilers. Fortunately there is already a project which adds Android support to CMake. I will not cover that – instead I assume you are using it already.

Unfortunately you can't use the ndk-gdb script supplied with the NDK to debug your application, as it relies on the behaviour of ndk-build. But as said earlier, ndk-build is no wizardry, just a bunch of scripts. So it is possible to emulate its behaviour using CMake, as follows.

Add the following macro to your CMakeLists.txt file

macro(ndk_gdb_debuggable TARGET_NAME)
    get_property(TARGET_LOCATION TARGET ${TARGET_NAME} PROPERTY LOCATION)
    
    # create custom target that depends on the real target so it gets executed afterwards
    add_custom_target(NDK_GDB ALL) 
    add_dependencies(NDK_GDB ${TARGET_NAME})
    
    set(GDB_SOLIB_PATH ${PROJECT_SOURCE_DIR}/obj/local/${ANDROID_NDK_ABI_NAME}/)
    
    # 1. generate essential Android Makefiles
    file(WRITE ${PROJECT_SOURCE_DIR}/jni/Android.mk "APP_ABI := ${ANDROID_NDK_ABI_NAME}\n")
    file(WRITE ${PROJECT_SOURCE_DIR}/jni/Application.mk "APP_ABI := ${ANDROID_NDK_ABI_NAME}\n")

    # 2. generate gdb.setup
    get_directory_property(PROJECT_INCLUDES DIRECTORY ${PROJECT_SOURCE_DIR} INCLUDE_DIRECTORIES)
    string(REGEX REPLACE ";" " " PROJECT_INCLUDES "${PROJECT_INCLUDES}")
    file(WRITE ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/gdb.setup "set solib-search-path ${GDB_SOLIB_PATH}\n")
    file(APPEND ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/gdb.setup "directory ${PROJECT_INCLUDES}\n")

    # 3. copy gdbserver executable
    file(COPY ${ANDROID_NDK}/prebuilt/android-arm/gdbserver/gdbserver DESTINATION ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/)

    # 4. copy lib to obj
    add_custom_command(TARGET NDK_GDB POST_BUILD COMMAND mkdir -p ${GDB_SOLIB_PATH})
    add_custom_command(TARGET NDK_GDB POST_BUILD COMMAND cp ${TARGET_LOCATION} ${GDB_SOLIB_PATH})

    # 5. strip symbols
    add_custom_command(TARGET NDK_GDB POST_BUILD COMMAND ${CMAKE_STRIP} ${TARGET_LOCATION})
endmacro()

Then use it like

add_library(YourTarget ...)
ndk_gdb_debuggable(YourTarget)

You should now be able to use ndk-gdb with CMake, just as if you had used ndk-build.
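After building and installing the app on the device you can then, for instance, launch it with the debugger attached by running the following from the project root (where AndroidManifest.xml resides)

ndk-gdb --start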

Note that steps 4 and 5 are optional for debugging. They just reduce the size of the library that has to be transferred to the device. If you don't care, you can just leave them out. But then the solib search path from step 2 must be set to:

file(WRITE ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/gdb.setup "set solib-search-path ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}\n")

Ideally someone should integrate that in the Android toolchain linked above.

Update: merged upstream.

GNOME Project suffering the NIH disease

When I first read about GNOME dropping support for BSD and Solaris, my impression was that this was a good idea, aiming to unify limited resources and get the work done. I was also excited about the idea of the GNOME OS. I think it is necessary to keep the big picture in mind when developing the different components. Previously Ubuntu was the only project that did this, and it was also the reason why I started using Ubuntu: because it made the different parts of Linux work together to achieve the big goal of a great overall system.

But then things started to go wrong. Instead of picking existing components and giving them the final polish like Ubuntu did before, the GNOME project started developing things from scratch without any apparent reason to do so. And even worse: incompatible with existing solutions. It started with the rejection of the appindicator specification implemented by Ubuntu and KDE. At that point it was not clear to me whether the specification was broken or whether the responsible people at GNOME were just ignorant.

Then came systemd, and it became apparent that it unfortunately was the latter. To my knowledge Ubuntu is the biggest deployment of GNOME, and it is based around the Linux ecosystem. So dropping support for Ubuntu has nothing to do with unifying limited resources. Ubuntu is your target audience; if there is one project you should collaborate with, it is Ubuntu. My opinion is that some Fedora developers were pissed that the Unity interface was exclusive to Ubuntu, and instead of packaging it for Fedora they started making GNOME Shell exclusive to Fedora.

Next I read about the overlay scrollbars being re-developed for GNOME. While the first reaction might be that the developers simply do not want to use Ubuntu technology, I think the reason is different. The developer does not seem to have any antipathy towards Ubuntu, and if we look at the project he developed the scrollbars for, another explanation becomes visible.

But first let's take a step back and look at the core of GNOME. By this I mean the programming language it is written in. It is C/GObject: plain C extended with naming conventions and libraries to allow modern paradigms such as object-oriented programming and the events/observer pattern. From today's perspective one might wonder why one would choose this over C++, which integrates most of these features at the language level. But back when the GNOME project started, C++ was not mature yet, which meant that your program might break with the next compiler update or even the next STL update.

Therefore basing the project on plain C was a good idea. But a few years back it became obvious that programming in C/GObject seriously lagged behind more modern programming languages like C++, Java and C# for application development.

Unfortunately, instead of taking the straightforward route from C to C++, which most C developers took when C++ matured (about 10 years ago), Vala was born.

So instead of using a proven and mature foundation, a new layer of indirection was created to provide essentially the same feature set. Commonly this is referred to as the “not invented here” syndrome. A more derogatory phrase would be reinventing the wheel.

What is sad here is that, being an open source project, GNOME disregards the biggest advantage of open source software, namely standing on the shoulders of giants. With open source software you can take an existing solution and improve upon it. This way you get the base functionality, as well as the bug fixes that went into it, for free. If you develop it from scratch instead, you most likely have to fix the same bugs all over again yourself.

To sum up, here is what GNOME is losing right now

  • 30 years of language and library experience by using Vala instead of C++
  • 5 years of deployment and bug fixing by using systemd instead of extending upstart
  • 1 year of development, testing and design if they reimplement overlay scrollbars
  • 8 years of foundation development that went into Eclipse, by developing Gnome Builder from scratch
  • but most importantly: the synergy effects by collaborating with others

Do not get me wrong, I am not saying that the GNOME solutions could be replaced by existing solutions – I am saying that by extending existing solutions the GNOME project and the free software landscape would be better off as a whole.

Tablet PCs – a chance for Maemo?

I just watched the Apple iPad announcement and I have to say that I am quite impressed by the Apple marketing team. Before the presentation I could not think of a good use case for an oversized iPod, but afterwards I have to say that Apple greatly refined the netbook's use case as a second PC.

Instead of putting an ordinary OS into a differently shaped device, like Microsoft is seemingly doing with the Slate, Apple adjusted the OS to the new use case.

If you have a much smaller screen and a much smaller keyboard, as you do on netbooks, you don't want to write long articles or aim for the tiny buttons of ordinary user interfaces. Instead one should think of a netbook as a playback device which only requires rudimentary interaction.

As Apple is great at streamlining stuff, they simply left out the keyboard and used a modified version of the iPhone OS, which is optimized for easy usage, and – voilà – here comes the computer you actually want to use in your living room to quickly peek at Facebook or your mail inbox.

But there are two big disadvantages that come with using the iPhone OS. First, it is stripped down too much; there is no multitasking and no system clipboard, which takes away a lot of the convenience you have when using a real OS.

And second, you are again locked in by Apple. If you use the iPad, you are more or less forced to use iTunes for your music, the iBook Store for your eBooks and the App Store if you want new software.

Of course you might be able to jailbreak the device and use third-party software, but this will be nowhere near as convenient as using the defaults. This is Apple's Achilles' heel, and where Maemo can triumph.

With Maemo you basically have a full-fledged Linux with an easy-to-use UI. You have multitasking, you have a system clipboard and, most importantly, you have an open software repository – and all of this very well integrated in the UI.

You can freely choose your email provider, your music player and even the format you save your music in. And even though Nokia does not support OGG by default, the open nature of the OS allows it to be just as well integrated as everything else.

Actually, Nokia only has to build an Internet Tablet the size of the iPad…

4 Years Later

Nearly 4 years ago Jon Smirl, the author of the experimental Xegl server, wrote a nice summary about the state of Linux graphics. He did that after realizing that fixing Linux graphics would take much more time than he could put into Xegl, so it is a realistic summary.

The interesting thing is that in his vision he described an X server which would run without root privileges and use OpenGL for all its drawing. Number one on his todo list, though, was a memory manager for DRM, which would enable all the nice stuff.

Well, it seems that with Karmic we will finally see that first step done; most users (Intel and AMD) should get a nice in-kernel memory manager, which will become visible through Kernel Mode Setting. So what can we expect next?

[Diagram: Linux graphics dependencies]

This diagram shows the dependencies of the various features. (Source) Green is what we should get with Karmic. The graphics memory manager in the kernel allows moving mode setting there too, so the resolution is set only once during bootup (flicker-free). Since the graphics card is now controlled by a single component, it also allows dynamic power management and removes the need for root privileges for X, which results in better security. Memory management also allows giving 3D applications their own private front buffers, so they don't conflict when rendering (RDR, Redirected Direct Rendering) – which is basically the core of DRI2. It also allows supporting memory-related OpenGL features like VBOs and FBOs (Vertex/Frame Buffer Objects), which brings the compliance level of Mesa up to OpenGL 1.5.

Theoretically everything up to OpenGL 3 could be implemented now, but because of the old design of Mesa everything would have to be implemented for each driver over and over again. That is why Gallium3D was created. It is an intermediate-level API which abstracts from the hardware by offering just a state-tracking interface. On top of that it is possible to implement more sophisticated APIs like OpenGL, OpenCL or a video decoding API like VDPAU. These implementations can then be shared across all drivers which are built on the Gallium3D architecture.

Well, there is still a way to go until we get those grey boxes, which Windows and Mac basically already had 4 years ago…

Gnome Online Desktop

There was a lot of talk recently about how Gnome should embrace online services in order to keep up with the Web 2.0 development. But sadly most of the ideas were along the lines of “let's integrate better with web service <foo>”. The problem is that I do not want to start using Google Calendar from now on; I like the way Lightning handles my appointments and I like that I have them available offline. What I would like is to be able to automatically synchronise them when an online connection is available.

But we are already too focused on the data layer right now, so let's take a step back and see where we are.

We are here

What we currently have are web pages, or better web services, with interfaces described in XHTML and styled with CSS, and we have local applications with interfaces described in XML and (eventually) styled with CSS. So the UI is done pretty similarly already – although it still looks quite different, since there are standard web-page widgets, and since most web pages are drawn inside the browser, while local applications are drawn as separate windows on the desktop.

The difference

One might think that the difference is the place where computation happens, but actually the computation happens in both cases on your local machine – it is just that JavaScript is the only language you can use for web services, and it is pretty limited right now. But things like TraceMonkey and Google Gears are creating a platform for computation-intensive applications delivered over HTTP.

And that is also the main difference: the way applications are deployed. Because web services can be updated every time you reload the page, you can easily keep your customers up to date, while for local applications you often have to deal with out-of-date installations and updates. This is especially a problem on non-Linux platforms, where you do not have a central package (application) manager. But on Linux we have advanced systems for update delivery, so this is not really a problem.

The real difference

What everyone is doing right now is rewriting existing and working code in JavaScript in order to solve that one delivery problem, which is really not that big on Linux, while the rewriting task is quite huge. And everyone starts using those new and shiny web services – but the question is why. The real benefit which web services offer over current local applications is centralized and therefore synchronised memory; you can log into Google Calendar from every PC you have access to and always see your up-to-date appointments.

So we need

So basically what is missing from Gnome is that centralized memory. The problem is that an open source community cannot easily offer that, because someone has to pay a lot of money for a lot of servers. So we still have to integrate with the existing web services. Nothing new here. But that is not a problem either, since there are already web services for everything one might imagine.

What we have to consider here is that we have to be able to switch easily between different services, like, let's say, Picasa Web Albums and Flickr. We could even define an open API for web service interaction, which the services could then implement.

The master plan

To me it makes no sense to re-implement everything in JavaScript just for the sake of being able to run it inside the browser. We already have a variety of better languages available when running on the local machine, and much of the infrastructure is ready as well.

So instead we should make the existing applications more web-aware. Luckily there is already a framework made exactly for this: Telepathy. Currently Telepathy only manages your web presence status and allows local applications to work with it. But it should also be able to manage your web pictures and web calendar and make them accessible to local applications in a similar way.

Then F-Spot could easily show you which of your pictures are currently published and which are only available locally, without making a difference between Picasa Web Albums and Flickr – you should only care whether your pictures are synchronised or not.

The next step would be to actually take advantage of such a framework. If you currently want to write a text, you have to care whether you write it for offline or online usage. If you write it for offline use, things are pretty easy; since Word is not available on Ubuntu, you use OpenOffice. But if you write it for online use, you have to deal with a bunch of different interfaces – depending on your web software you have to deal with the WordPress text editor, the Joomla text editor and so on. And they all have in common that you can't easily save your text to your local computer.

The initial task was just to write a text – why do I have to care about such things at all? Would it not be great if I could just open a locally running text editor, write my stuff and then decide whether I want to save it to a file or publish it to a web service of my choice?

A locally running web-aware application could do that, while offering a consistent interface for text editing and doing a lot of things which you cannot do inside a browser, like launching Gimp to edit the picture you need in your article and then automatically uploading it.

So basically we have to move things offline in order to get an online desktop – and then we can also use browsers for what they were meant for: reading text.