Category Archives: Planet Maemo

How to manually update a deb package from source

Probably everyone has encountered a package in Ubuntu that was not the latest released version although, for some reason, the latest one was needed. The first step is to search for a PPA with the desired version. But what if there is no such PPA, or you want to build the version yourself? This is where this guide comes in. Note, however, that it is not aimed at ordinary users – you need some experience with programming and compiling to successfully build a package.

Before you start

Before you start make sure that you have source packages enabled in your software sources.
Next you need the upstream source tar-ball of the new version, which should be named something like <packagename>-<version>.tar.gz.
Download this tar-ball to a new directory <somedir> and extract it there.
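On Ubuntu, enabling source packages usually means activating the deb-src entries in /etc/apt/sources.list. A minimal sketch – the sed pattern assumes the entries are merely commented out, and the download URL is a placeholder:

# enable the deb-src entries and refresh the package index
sudo sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list
sudo apt-get update

# fetch and extract the upstream tar-ball
mkdir <somedir> && cd <somedir>
wget http://example.org/<packagename>-<version>.tar.gz
tar xzf <packagename>-<version>.tar.gz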

Updating Package info

For the following commands I assume you are in the previously created directory <somedir>.

First we need to get the old version of the source package

apt-get source <packagename>

This will download and extract the old source package into <packagename>-<oldversion>.

Now we need some helper scripts to perform the upgrading as well as the build-time dependencies of the package

sudo apt-get install dpkg-dev devscripts fakeroot
sudo apt-get build-dep <packagename>

Next change into the extracted sources of the old package and update the packaging

cd <packagename>-<oldversion>
uupdate -v <newversion> ../<packagename>-<newversion>.tar.gz

# change into the extracted new package
cd ../<packagename>-<newversion>

# update version info
dch -l ~ppa -D $(lsb_release -sc)

For more information see the Debian New Maintainers' Guide.

Building the program

To trigger a rebuild of the program simply execute

dpkg-buildpackage
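By default dpkg-buildpackage will try to sign the resulting files with your GPG key; for a purely local test build you can skip the signing (these are standard dpkg-buildpackage flags):

# -us/-uc: do not sign the source package and the .changes file
dpkg-buildpackage -us -uc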

Uploading your version to a PPA

To upload a package to a PPA you first need to sign it to prove that you are the author. To do this you have to execute the following in the <packagename>-<newversion> directory

debuild -S

Furthermore you need the upload tool dput to actually perform the upload

sudo apt-get install dput

Now change to <somedir> and execute

dput ppa:<your_username>/<repository> <source.changes>
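A worked example with hypothetical names – Launchpad user "john", repository "testing" and package "foo" versioned with the ~ppa suffix from above:

dput ppa:john/testing foo_2.0-0ubuntu1~ppa1_source.changes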

You can find more information at Launchpad.

Secure Owncloud setup

While the Owncloud Manual suggests enabling SSL, it unfortunately does not go into detail on how to get a secure setup. The core problem is that the default SSL settings of Apache are not sane: they do not enforce strong encryption. Furthermore, the default certificate will not match your server name and will produce errors in the browser.

In the following, a short guide on how to set up a secure Apache 2.4 server for Owncloud is presented.

Generating a secure Certificate

A secure TLS connection starts with the server authenticating itself to the client using the server certificate. Therefore we will start the setup of our server by generating that certificate.
The purpose of the certificate is to ensure that if you type in “your.website.net” you are indeed talking to your server and not to a man-in-the-middle who intercepted your connection. Therefore the certificate contains the server name and the public key of the server.

As mentioned above the default certificate will not match your server name and therefore you will have to generate a matching one.

Unfortunately, following the Apache SSL FAQ will result in a certificate using the possibly vulnerable SHA-1 hashing function. A better alternative is SHA-256, but it has to be explicitly requested during certificate creation. The corresponding call to openssl for certificate creation is

openssl req -new -sha256 -x509 -nodes -days 365 -out your.website.net.pem -keyout your.website.net.key

The resulting certificate and private key have to be referenced in Apache as follows

SSLCertificateFile    /path/to/your.website.net.pem
SSLCertificateKeyFile /path/to/your.website.net.key
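These directives belong inside the SSL virtual host of your site. A minimal sketch – the server name, paths and DocumentRoot are placeholders you have to adjust:

<VirtualHost *:443>
        ServerName your.website.net
        DocumentRoot /var/www/owncloud

        SSLEngine on
        SSLCertificateFile    /path/to/your.website.net.pem
        SSLCertificateKeyFile /path/to/your.website.net.key
</VirtualHost>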

Note that this results in a so-called self-signed certificate. Usually certificates on the web are approved by a Certificate Authority (a digital notary) which confirms your identity. By trusting the CA you can also trust websites that are otherwise unknown to you, but which were approved by the CA.
While this makes sense for public websites, you probably already trust your own server, so there is no need for a CA-signed certificate.
Just add your self-signed certificate to the trusted list of your browser on first visit.
If you fear a man-in-the-middle attack during the initial connection, you can also manually copy the generated pem file onto a USB drive and import it into the browser from there.

Using secure ciphers

Using the secure certificate, we now know that we are indeed talking to the server we want to talk to. Next we actually want to start sending encrypted messages. In theory we could encrypt the data with the public key of the server using asymmetric encryption like RSA. However, asymmetric encryption is slow and therefore not suitable for large amounts of data. Furthermore, our communication could be decrypted if somebody recorded it and at some point in the future got access to the private key of the server. Therefore we want to use a one-time symmetric key; this way we achieve forward secrecy. The symmetric encryption should also be secure in the sense that even when large amounts of data are collected, it must not be possible to reconstruct the key and decode the data.
Last but not least, the chosen cipher should be supported by our clients. Surprisingly, it is the Owncloud desktop client that does not support modern ciphers, while current browsers and even the Android app do.

Instead of discussing all available ciphers with regard to the above requirements, I would rather refer to the excellent TLS server configuration guide by Mozilla.
Yet we can still improve the suggested configuration: Mozilla has to consider compatibility with old web browsers, which we do not. So without further ado, this is the recommended cipher configuration

SSLProtocol all -SSLv2 -SSLv3
SSLCompression off
SSLHonorCipherOrder On
SSLCipherSuite EECDH+AESGCM:EECDH+AES:EDH+AES

The rationale behind this suggestion is

  • allow TLS 1.0 for compatibility with mobile apps
  • disable SSL compression to mitigate the CRIME attack
  • always use Diffie-Hellman (DH) key exchange (Kx) for forward secrecy
  • prefer Elliptic Curve Diffie-Hellman (ECDH) for performance
  • always use AES for symmetric encryption
  • prefer AES-GCM mode for security and performance
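To verify which cipher a particular client actually negotiates with this configuration, the test client built into openssl is handy (adjust the host name to your server):

openssl s_client -connect your.website.net:443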

Notes

Actually it is only the Owncloud desktop client that forces us to enable ordinary DH key exchange. Besides being much slower than ECDH, a weak modulus is used for DH Kx up to and including Apache 2.4.6. While there are no practical attacks exploiting this yet, we can only be completely on the safe side by updating to Apache 2.4.7 or fixing the root issue in the desktop client. (bug status)

Also, we only allow TLS 1.0 for compatibility with the mobile apps, as OpenSSL on Android does not yet support TLS 1.1+. This circumstance is not really critical, as BEAST is not applicable to the sync apps, and browsers will connect to the server with TLS 1.1+ or work around the vulnerability.

Enforcing HTTPS

At this point the secure connection to your server is ready, but we still have to ensure that it is the only way data is exchanged with the clients.
While you could enable Apache only on port 443, you would then always have to remember to type https:// in the browser. A better way is to automatically redirect from port 80 to port 443, like

<VirtualHost *:80>
        ServerName your.website.net
        Redirect permanent / https://your.website.net/
</VirtualHost>

Note that you do not have to use mod_rewrite here as Owncloud sets the HSTS header, so browsers will automatically prefix all requests with https after the first visit.
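If you want the HSTS header for content served outside of Owncloud as well, you can set it yourself via mod_headers – a sketch, assuming the module has been enabled with a2enmod headers and using a six-month max-age:

Header always set Strict-Transport-Security "max-age=15768000"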


How to root Android using Ubuntu

The Big Picture

Android consists of three parts relevant to rooting

  1. the bootloader
  2. recovery system
  3. main system

Typically only the main system is running – that is, the Linux kernel, the launcher, the phone app and so on. When we talk about rooting, we mean adding an app to the main system which may access the secured parts of the main system and which also acts as a gatekeeper for other apps that want such access.

The problem is that we need access to the secured parts of the system in order to do so, which means that we can't simply install that app (e.g. an apk) from within the main system.

This means we have to go one level down: to the recovery system. Typically you do not see it, as it is only active when the main system cannot run – either because a system update is being installed or because you do a factory reset.
As the recovery system can perform a full system update, it also has access to the secured parts of the main system – exactly what we need. Unfortunately the stock recovery system does not allow installing apps, so we have to replace it.
But before that we have to talk about the bootloader.

The bootloader is a tiny piece of software which decides whether to start the recovery or the main system (or another main system, like Ubuntu Phone). But in the default configuration it only starts systems that it knows and trusts. In this configuration the bootloader is called locked. Although this prevents malicious software from changing the phone and spying on us, it also prevents us from replacing the recovery system. This concept is also coming to the PC, by the way, where it is called Secure Boot.

Here is a graphical overview of the Android components:

[Figure: the bootloader starts either the recovery system or the main system]

So what we need to do in order to get root access is

  1. unlock the bootloader
  2. replace the recovery system
  3. install a superuser app

Note that unlocking the bootloader also allows attackers to circumvent any of the Android security features, as it is possible to directly access all the files on the phone from the bootloader.
Therefore Android will wipe all user data when the bootloader is unlocked.

Preparations

First you need to install the fastboot binary to be able to perform low-level communication with the device

sudo apt-get install android-tools-fastboot

Next you have to allow non-root users to execute commands over USB, so you do not have to run fastboot as root. For this create the file

/etc/udev/rules.d/51-android.rules

with the following content

SUBSYSTEM=="usb", ATTR{idVendor}=="<VENDOR>", MODE="0666", GROUP="plugdev"

You can find the value for <VENDOR> on the page linked here.
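As an example, for a Google device – whose USB vendor ID is 18d1 – the rule would look as follows; reload the udev rules afterwards so the change takes effect without a reboot.

SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", MODE="0666", GROUP="plugdev"

sudo udevadm control --reload-rules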

Finally you have to reboot into fastboot mode. Usually there is a key combination you have to press on startup.

Remember this key combination, as you will need it a few more times.
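Alternatively, if USB debugging is enabled on the device, you can reboot into fastboot mode from the running system using adb:

sudo apt-get install android-tools-adb
adb reboot bootloader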

Samsung devices however, like the Galaxy S3, do not support fastboot mode – instead they have a download mode, which uses a proprietary Samsung protocol. To flash those you have to use the Heimdall tool. While this article does not cover the Heimdall CLI calls, the general discussion still applies.

Unlocking the Bootloader

For Google devices, like a Nexus 4 or Nexus 7, it is just

fastboot oem unlock

If you have a Sony Xperia device, like an Xperia Z, you additionally have to request an unlock key and then do

fastboot oem unlock 0x<KEY>

where <KEY> is the key you obtained.

Replacing the Recovery System

There are two prominent alternative recovery systems with the ability to install apps.

ClockworkMod (CWM) is probably the best known, so we will use that one. From the website linked above, download the recovery image which fits your phone.
Here you have the choice between the ordinary recovery, which uses the volume buttons of your device for navigation, and the touch recovery, which supports the touch screen. Flash it with

fastboot flash recovery <RECOVERY>.img

where <RECOVERY> is the name of the file you downloaded. For instance for a Nexus 5 and CWM 6.0.4.5 it would be

fastboot flash recovery recovery-clockwork-6.0.4.5-hammerhead.img

Installing the superuser app

Again we have several choices here.

Although SuperSU is the most prominent one, I would recommend Superuser by CWM, as it is open source and also nag-free, since there is no “pro” version of it.

To install we need to get this zip archive and copy it to the device (a small adb sketch for the copying step follows below). Then reboot into fastboot mode and select “Recovery Mode” to get to the recovery system. Once in recovery mode, select

install zip -> choose zip from /sdcard

then browse and select the “superuser.zip” you just copied.
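As for copying the archive to the device: with USB debugging enabled, adb does the job – a sketch, assuming the file is called superuser.zip:

adb push superuser.zip /sdcard/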

Once installed select

Go Back -> reboot system now

Once the system has started you should have a “Superuser” App on your device. Congratulations, you are done.

Optional: flash stock recovery

As the recovery is responsible for installing system updates, it is a good idea to revert to the stock version after you have installed root, so the system can auto-update itself again. However, a system update will also remove your superuser app, so you will have to repeat the above procedure afterwards.

If you have a Google Nexus device, you can grab the factory images here. There you will find an image of the stock recovery, which you can restore with

fastboot flash recovery recovery.img

Debugging native code with ndk-gdb using standalone CMake toolchain

I recently ran into this problem and could not find any good solution on the Internet. So next comes a small summary of the problem with hopefully enough buzzwords, so Google can lead you here.

If you want to do C++ development on Android, you need the NDK for cross compilation. It comes by default with its own build system called ndk-build, which basically is a bunch of custom makefiles. But if you are sharing code between the Android platform and, let's say, plain Linux, you likely already have a build system in place. For C/C++, CMake is quite popular as it supports different platforms and compilers. Fortunately there is already a project which adds Android support to CMake. I will not cover that – instead I assume you are using it already.

Unfortunately you can't use the ndk-gdb script supplied with the NDK to debug your application, as it relies on the behaviour of ndk-build. But as said earlier, ndk-build is no wizardry, just a bunch of scripts. So it is possible to emulate its behaviour using CMake, as follows:

Add the following macro to your CMakeLists.txt file

macro(ndk_gdb_debuggable TARGET_NAME)
    get_property(TARGET_LOCATION TARGET ${TARGET_NAME} PROPERTY LOCATION)
    
    # create custom target that depends on the real target so it gets executed afterwards
    add_custom_target(NDK_GDB ALL) 
    add_dependencies(NDK_GDB ${TARGET_NAME})
    
    set(GDB_SOLIB_PATH ${PROJECT_SOURCE_DIR}/obj/local/${ANDROID_NDK_ABI_NAME}/)
    
    # 1. generate essential Android Makefiles
    file(WRITE ${PROJECT_SOURCE_DIR}/jni/Android.mk "APP_ABI := ${ANDROID_NDK_ABI_NAME}\n")
    file(WRITE ${PROJECT_SOURCE_DIR}/jni/Application.mk "APP_ABI := ${ANDROID_NDK_ABI_NAME}\n")

    # 2. generate gdb.setup
    get_directory_property(PROJECT_INCLUDES DIRECTORY ${PROJECT_SOURCE_DIR} INCLUDE_DIRECTORIES)
    string(REGEX REPLACE ";" " " PROJECT_INCLUDES "${PROJECT_INCLUDES}")
    file(WRITE ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/gdb.setup "set solib-search-path ${GDB_SOLIB_PATH}\n")
    file(APPEND ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/gdb.setup "directory ${PROJECT_INCLUDES}\n")

    # 3. copy gdbserver executable
    file(COPY ${ANDROID_NDK}/prebuilt/android-arm/gdbserver/gdbserver DESTINATION ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/)

    # 4. copy lib to obj
    add_custom_command(TARGET NDK_GDB POST_BUILD COMMAND mkdir -p ${GDB_SOLIB_PATH})
    add_custom_command(TARGET NDK_GDB POST_BUILD COMMAND cp ${TARGET_LOCATION} ${GDB_SOLIB_PATH})

    # 5. strip symbols
    add_custom_command(TARGET NDK_GDB POST_BUILD COMMAND ${CMAKE_STRIP} ${TARGET_LOCATION})
endmacro()

Then use it like

add_library(YourTarget ...)
ndk_gdb_debuggable(YourTarget)

You should now be able to use ndk-gdb with CMake, just as if you would have used ndk-build.
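For example – assuming the usual project layout with AndroidManifest.xml, jni/ and libs/ at the project root and the debuggable APK already installed – you can launch the app and attach the debugger in one go:

ndk-gdb --start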

Note that steps 4 and 5 are optional for debugging. They just reduce the size of the library that has to be transferred to the device. If you don't care, you can just leave them out. But then the solib search path from step 2 must be set to:

file(WRITE ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/gdb.setup "set solib-search-path ./libs/${ANDROID_NDK_ABI_NAME}\n")

Ideally someone should integrate that in the Android toolchain linked above.

Update: Merged upstream.

GNOME Project suffering the NIH disease

When I first read about GNOME dropping support for BSD and Solaris, my impression was that this was a good idea, aiming to unify limited resources and get the work done. I was also excited about the idea of the GNOME OS. I think it is necessary to keep the big picture in mind when developing the different components. Previously Ubuntu was the only project that did this, and it was also the reason why I started using Ubuntu: because it made the different parts of Linux work together to achieve the big goal of a great overall system.

But then things started to go wrong. Instead of picking existing components and giving them the final polish like Ubuntu did before, the GNOME project started developing things from scratch without any apparent reason to do so. And even worse: incompatible with existing solutions. It started with the rejection of the appindicator specification implemented by Ubuntu and KDE. At that point it was not clear to me whether the specification was broken or whether the responsible people at GNOME were just ignorant.

Then came systemd, and it became apparent that unfortunately it was the latter. To my knowledge Ubuntu is the biggest deployment of GNOME, and it is based around the Linux ecosystem. So dropping support for it has nothing to do with unifying limited resources. Ubuntu is your target audience; if you should collaborate with any project, it should be Ubuntu. My opinion is that some Fedora developers were pissed that the Unity interface was exclusive to Ubuntu, and instead of packaging it for Fedora they started making GNOME Shell exclusive to Fedora.

Next I read about the overlay scrollbars being re-developed for GNOME. While the first reaction might be that the developers simply do not want to use Ubuntu technology, I think the reason is different. The developer does not seem to have any antipathy towards Ubuntu, and if we look at the project he developed the scrollbars for, another explanation becomes visible.

But first let's take a step back and look at the core of GNOME. By this I mean the programming language it is written in: C/GObject – plain C extended with naming conventions and libraries to allow modern paradigms such as object-oriented programming and the event/observer pattern. From today's perspective one might wonder why one would choose this over C++, which integrates most of these features at the language level. But back when the GNOME project started, C++ was not mature yet, which meant that your program might break with the next compiler update or even the next STL update.

Therefore basing your project on plain C was a good idea. But a few years back it became obvious that programming in C/GObject seriously lagged behind more modern programming languages like C++, Java and C# for application development.

Unfortunately, instead of taking the straightforward route from C to C++, which most C developers took when C++ matured (that was about 10 years ago), Vala was born.

So instead of using a proven and mature foundation, a new layer of indirection was created to essentially provide the same feature set. Commonly this is referred to as the “not invented here” syndrome. A more derogatory phrase would be reinventing the wheel.

What is sad here is that, being an open source project, GNOME disregards the biggest advantage of open source software, namely standing on the shoulders of giants. With open source software you can take an existing solution and improve upon it. This way you get the base functionality as well as the bug fixes that went into it for free. If you develop from scratch, you will most likely have to fix the same bugs again yourself.

To sum up, here is what GNOME is losing right now

  • 30 years of language and library experience by using Vala instead of C++
  • 5 years of deployment and bug fixing by using systemd instead of extending upstart
  • 1 year of development testing and design if they reimplement overlay-scrollbars
  • 8 years of foundation development that went into Eclipse, by developing Gnome Builder from scratch
  • but most importantly: the synergy effects by collaborating with others

Do not get me wrong, I am not saying that the GNOME solutions could be replaced by existing solutions – I am saying that by extending existing solutions the GNOME project and the free software landscape would be better off as a whole.

Doing the right thing

Canonical is doing the right thing. Yes, morally as well. By choosing the MIT/X11 license instead of the GPL, the Banshee developers explicitly allowed using Banshee in a closed-source, for-profit project without giving anything back.

To start whining about morality now that someone actually takes advantage of this right is somewhat misplaced – in the end you had the choice of how to license it, right? If you do not like what happens, change the license! Maybe a proprietary one this time, as open source obviously is not restrictive enough for you and you have to resort to “morality”.

As for me, I would be perfectly happy if Canonical simply kept 100% of the Amazon revenue – after all it's their product (yes, putting together the pieces makes it something new).

As a user I care most about whether the product works, and I use Ubuntu as it works best for me. And since Canonical has done a great job so far providing what I want, I think the decision should be up to them whether to spend the money on shiny new icons or to give something back to the Banshee developers.

For reference: this and this.

Augmented Reality on the N900

Finally I reached a stage where I could upload my small augmented reality app to extras-devel, so all those who asked for it can now play with it. But be aware that it is in extras-devel for a reason. In case you are wondering what I am writing about, here is a video of the demo:

In order to make it work, you will have to print the ARToolKitPlus markers. Furthermore, there are these controls:

  • scale the objects using the volume buttons
  • select one of the objects for scaling by tapping on it
  • tapping on the palette symbol triggers annotating by drawing on the screen
  • tapping on the sun symbol fixes the sun to the current device position
  • once fixed, the shadows can be rotated using the arrow keys

Handheld based interaction using AR

It is time for the next demo of my project, as it has reached beta status (feature complete). I think you can see quite clearly now where this is going and what kind of interactions will be possible using this technique. The concrete features are described in the annotations.

If you use more advanced tracking methods and add some physics to this, you could easily port Numpty Physics to this kind of interaction or create an easy-to-use level editor. In case you missed my last video, here is the link.

It will still take some weeks until this hits maemo-extras, as there are still some bugs left and I still want to get rid of keyboard use for interaction.

Thoughts about MeeGo

In a country with freedom of speech, one has to say something about every happening, right? So here is my try:

Basically the merge of Maemo and Moblin is logical and consistent, as Nokia and Intel already collaborated on oFono, and merging helps join the efforts. This is quite necessary, as neither Maemo nor Moblin could survive on their own in a world where everybody else uses Android and will soon start using Chrome OS. There was also a need to make Moblin more like Maemo to be able to compete with the iPad.

That sounds great so far, right? Joint efforts, bigger community, open to everybody… But the problem is the way the merge is going to happen. Moblin is more or less a huge tech demo so far – everybody I know uses Ubuntu Netbook Remix on their netbook, as it is more production-ready and end-user oriented. The same also applies to Maemo.

It is a bit sad that the next Maemo/MeeGo Harmattan will be Qt-based though, as it means that all currently working applications have to be rewritten without gaining an immediate benefit. But considering that Qt is technically more advanced than GTK and that it allows deploying your application on the different OSes Nokia uses, this is understandable.

What is less understandable is that MeeGo will be based on RPM/Moblin/Fedora. And at least for Fedora, the official motto is merging new features as fast as possible, which is nice for developers but less nice for end users, as the distribution is less stable. So while it is logical to base a tech demo like Moblin on Fedora, I would not base anything that is supposed to be stable on it.

But this is exactly what shall happen with MeeGo. This means Maemo has to abandon its Debian roots and rebase everything to RPM. By everything I mean the huge amount of packaging experience gained during the last 5 years, the build infrastructure and of course the core package management applications. This also has an impact on the community infrastructure: downloads and karma are coupled to the DEB format too.

So what do we gain by rebasing to RPM? Maybe the Moblin interface, which is indeed nice? Actually no, as it is Clutter/GTK-based while MeeGo will use Qt – besides, the Moblin interface was packaged by Ubuntu as a deb too. OK, the Moblin community will not need to change its infrastructure, but is the Moblin community actually that big?

As I really wonder why we are switching to RPM, I started a wiki page to collect the arguments and, as it does not look too good for RPM, also a Brainstorm vote.

AR shadowmapping demo

While my last video already contained augmented reality and shadow mapping, it did not really show the potential of shadow mapping, so I created another video.

Other news: the tracking of multiple individual markers (quite obvious) and the camera-relative light source. The latter is necessary to cast the shadows of all objects in the same direction.

As some of you wondered what the sense behind all of this is: I am writing the program as project work at university. The aim is to create an easy way to create and modify 3D scenes.