Converting an Ubuntu and Windows dual-boot installation to UEFI

UEFI is the successor to the BIOS as the firmware interface on your mainboard.
While the first BIOS was released with the IBM PC in 1981, the first UEFI version (UEFI 2.0) was released 25 years later in 2006, building upon the lessons learned in that timespan. So UEFI is without any doubt the more modern solution.

The most user-visible advantages of using UEFI instead of BIOS are Secure Boot and the GPT partitioning scheme, both of which we will set up in the course of this guide.

You could reinstall both Windows and Ubuntu to get UEFI. However, it is also possible to convert existing installations of both on the fly – without the backup/restore cycle. You should still make a backup in case something goes wrong, though.


Only the 64-bit versions of Windows support UEFI. Therefore this guide assumes that you run the 64-bit versions of both Windows and Ubuntu.

Furthermore, verify the following items before you continue – otherwise you will not be able to finish the conversion. Use GParted in case you do not have enough space before the first or after the last partition.

  • 250MB of space in front of the first partition
    Typically Windows 8 creates a 350MB System Partition upon installation. This space can be reclaimed for a 100MB EFI partition and a new 100MB Windows System partition.
  • 1-2MB behind the last partition (for the GPT backup)
  • a UEFI-bootable Ubuntu USB drive.
    You can use the Startup Disk Creator on Ubuntu with an Ubuntu 14.04+ ISO (or write the ISO with dd, as shown below).
  • a UEFI-bootable Windows USB drive.
    You can use the Microsoft Media Creation Tool for Windows 10 to get one.
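
If you prefer the command line over the Startup Disk Creator, writing the hybrid Ubuntu ISO directly to the stick also yields a UEFI-bootable drive. A minimal sketch, assuming the ISO file name of a 14.04 download and /dev/sdZ as the USB drive:

# ATTENTION: dd overwrites the target completely – double-check the device name
sudo dd if=ubuntu-14.04-desktop-amd64.iso of=/dev/sdZ bs=4M
sync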

To test that the sticks are indeed UEFI compatible, try booting them with CSM mode disabled in your BIOS setup.

Convert the drive to GPT

UEFI requires a GUID Partition Table (GPT), so first we need to convert the drive from MBR to GPT.

After this step you will not be able to boot your system any more, so make sure you have the Ubuntu USB drive ready.

We will use gdisk to perform the conversion as follows:

sudo gdisk /dev/sdX
Command (? for help): w

where sdX is your system drive (e.g. sda). When run on an MBR disk, gdisk automatically converts the partition table to GPT in memory; the w command then writes the new table to disk.
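
To double-check the conversion before rebooting, you can print the partition table scan again (a read-only operation):

sudo gdisk -l /dev/sdX
# the output should contain:
#   Partition table scan:
#     MBR: protective
#     GPT: present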

Convert Windows to UEFI

Now boot your Windows USB drive and enter the command prompt as described in this Microsoft TechNet article at step 6.

Continue with the following steps from the article. Note that we have skipped steps 1-4, as we used Ubuntu to convert the disk to GPT.
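
For reference, those steps boil down to something like the following sketch; the disk number, the 100MB size and the S: drive letter are assumptions, so follow the article for your actual layout:

diskpart
rem select the system disk and create the EFI and MSR partitions in the freed space
select disk 0
create partition efi size=100
format quick fs=fat32
assign letter=S
create partition msr size=128
exit
rem install the Windows bootloader to the new EFI partition
bcdboot C:\Windows /s S: /f UEFI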

We have now created an EFI partition and a new EFI-compatible Windows System partition, and we have installed the Windows bootloader to the EFI partition. Your Windows installation should now start again.
At this point you could also perform an upgrade to Windows 10, as the upgrade would erase grub from the EFI partition anyway.

Next we are going to install grub to the EFI partition and put it in charge of booting.

Enter an Ubuntu chroot

As we cannot directly boot our Ubuntu installation yet, we will instead boot from the Ubuntu USB drive and then switch to the installed Ubuntu.
To do the switch we have to set up and enter a chroot as follows:

sudo mount /dev/sdXY /mnt
# the mount point for the EFI partition may not exist on a former BIOS install
sudo mkdir -p /mnt/boot/efi
sudo mount /dev/sdX1 /mnt/boot/efi
sudo mount -o bind /dev /mnt/dev
sudo mount -o bind /sys /mnt/sys
sudo mount -t proc /proc /mnt/proc
sudo cp /proc/mounts /mnt/etc/mtab
# make DNS resolution work inside the chroot
sudo cp /etc/resolv.conf /mnt/etc/resolv.conf
sudo chroot /mnt

where sdXY is the partition where your Ubuntu system is installed (e.g. sda5) and sdX1 is the EFI partition created during the Windows conversion.
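
If you are unsure about the device names, lsblk gives a quick overview. The output below is merely an example of what a converted disk could look like:

lsblk -f
# NAME    FSTYPE LABEL MOUNTPOINT
# sda
# |-sda1  vfat          <- the new EFI partition (sdX1)
# |-sda2  ntfs          <- Windows
# `-sda5  ext4          <- Ubuntu (sdXY)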

Convert Ubuntu to UEFI

Inside your Ubuntu installation we have to replace grub for BIOS (aka grub-pc) with grub for UEFI (aka grub-efi):

sudo apt-get --reinstall install grub-common grub-efi-amd64 os-prober

This would be enough to get the system booting again. However, we also aim for Secure Boot, so we need to install the following as well:

sudo apt-get install shim-signed grub-efi-amd64-signed linux-signed-generic

This installs signatures for grub and the kernel, which are used to verify their integrity at boot. Furthermore we install shim, which is a passthrough bootloader that translates from the Microsoft signatures stored on your mainboard to the signatures by Canonical used to sign grub and the kernel (see this for details).

Next we finally install grub to the EFI partition:

sudo grub-install --uefi-secure-boot /dev/sdX
sudo update-grub

where sdX is again your system drive (e.g. sda).

Now you can enable Secure Boot in your BIOS setup and benefit. Note that some BIOS implementations additionally require you to select the trusted signatures. Look out for an option called “Install Default Secure Boot keys” or similar to select the Microsoft signatures.
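
To verify the result from the running system, you can list the EFI boot entries and query the Secure Boot state (the efibootmgr and mokutil packages provide these tools):

# an "ubuntu" boot entry should show up here
sudo efibootmgr -v
# reports whether Secure Boot is currently enabled
mokutil --sb-state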

Updating Crucial MX100 Firmware with Ubuntu

There has been a firmware update for the Crucial MX100 to version MU02. In case you are running Ubuntu, there is an easy way to perform the update without using a CD or USB stick.

As the firmware comes in the form of an ISO image containing Tiny Core Linux, we can instruct grub2 to boot directly from it. Here is how:

  1. append the following to /etc/grub.d/40_custom:
    menuentry "MX100 FW Update" {
     set isofile="/home/<USERNAME>/Downloads/MX100_MU02_BOOTABLE_ALL_CAP.iso"
     # ATTENTION: (hd0,msdos3) assumes your home is on /dev/sda3 – change this so it matches your setup
     loopback loop (hd0,msdos3)$isofile
     linux (loop)/boot/vmlinuz libata.allow_tpm=1 quiet base loglevel=3 waitusb=10 superuser rssd-fw-update rssd-fwdir=/opt/firmware rssd-model=MX100
     initrd (loop)/boot/core.gz
    }

    Read this for details of the file format.

  2. run sudo update-grub
  3. reboot and select “MX100 FW Update”
  4. Now you can delete the menuentry created in step 1
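
To confirm that the drive actually runs the new firmware afterwards, you can query it with smartmontools (assuming the MX100 is /dev/sda):

sudo apt-get install smartmontools
# the "Firmware Version" line should now read MU02
sudo smartctl -i /dev/sda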

Note that this is actually much “cleaner” than using Windows, where you have to download 150MB of the Crucial Storage Executive software, which is actually a local webserver written in Java (urgh!). And all it can do is display some SMART monitoring information and automatically perform the above steps on Windows.


Introducing Sensors Unity

Sensors-Unity is a new lm-sensors GUI for the Unity desktop. It allows monitoring the output of the sensors CLI utility while integrating with the Unity desktop. As it only exposes what lm-sensors provides, there is no GPU/HDD support and no plotting.
If you need those, you are probably better served with psensor. However if you just need an overview of the sensor readings and if you appreciate a clean UI, you should give it a shot.

Sensors Unity is available from this PPA.

It is written in Python3 / GTK3 and uses libsensors to read the data. You can contribute code or help translating via Launchpad.


In contrast to other applications, the interface is designed around being a standalone application. Instead of getting another indicator in the top right, you get an icon in the launcher:

The user interface

The idea is that you do not need the sensor information all the time. Instead you launch the app when you do. If you want to passively monitor some value you can minimize the app while selecting the value to display in the launcher icon.

To get the data, libsensors is used, which means that you need to get lm-sensors running before you will see anything.

However once the sensors command line utility works, you will see the same results in Sensors-Unity, as it shares the configuration in /etc/sensors3.conf.


Unfortunately, configuring lm-sensors via /etc/sensors3.conf is quite poorly documented, so let's quickly recap the usage.

  • /etc/sensors3.conf contains the configuration for all sensors known to lm-sensors
  • however, every mainboard can use each chip in a slightly different way
  • therefore you can override /etc/sensors3.conf by placing your specific configuration in /etc/sensors.d/ (see this for details)
  • you can find a list of these board-specific configurations in the lm-sensors wiki
  • to disable a sensor, use the ignore statement:

    # ignore everything from this chip
    chip "acpitz-virtual-0"
     ignore temp1
     ignore temp2

  • to change the label, use the label statement:

    chip "coretemp-*"
     label temp1 "CPU Package"
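
As a minimal sketch, a board-specific override could be put into its own file below /etc/sensors.d/ like this (the file name is arbitrary; the statements are the examples from above):

sudo tee /etc/sensors.d/my-board.conf <<'EOF'
chip "acpitz-virtual-0"
    ignore temp1
chip "coretemp-*"
    label temp1 "CPU Package"
EOF
# verify that the configuration still parses
sensors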

Sensors-Unity Specific Configuration

Sensors-Unity allows using the Pango Markup Language for sensor labels. For instance if you want “VAXG” instead of “CPU Graphics” to be displayed, you would write:

label in4 "V<sub>AXG</sub>"

In order not to interfere with other utilities and to allow per-user configuration of the labels/sensors, Sensors-Unity first tries to read ~/.config/sensors3.conf before continuing with the lm-sensors config lookup described above.
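
For example, to apply the markup label from above for your user only, you could write it to that file (the chip name is hypothetical; take the real one from the sensors output):

mkdir -p ~/.config
cat > ~/.config/sensors3.conf <<'EOF'
chip "it8728-*"
    label in4 "V<sub>AXG</sub>"
EOF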

Replacing your desktop laptop with an ITX workstation

If you use your laptop as a desktop replacement, you will at some point get an external display and a mouse/keyboard for more convenient usage.
At this point the laptop becomes merely a small case of non-upgradable components.

Now you could just as well replace your laptop with a real case of comparable size. This will make your PC not only easily upgradable, but also allow higher-end components while being more silent at the same time.

Continue reading Replacing your desktop laptop with an ITX workstation

Secure Owncloud Server

This article is about how to securely configure the machine on which your Owncloud instance will be running.
Even if you set up your connection to Owncloud in a secure way, your data can still be compromised by exploiting security flaws in the underlying architecture.

In the following we will specifically cover the underlying software stack and brute-force password attacks.

Continue reading Secure Owncloud Server

How to manually update a deb package from source

Probably everyone has encountered a package in Ubuntu that was not the newest released version when, for some reason, the newest one was needed. The first step is to search for a PPA with the desired version. But what if there is no such PPA, or you want to build the version yourself? This is where this guide comes in. Note however that it is not aimed at ordinary users – you need some experience with programming/compiling to successfully build a package.

Before you start

Before you start, make sure that you have source packages enabled in your software sources.
Next you obviously need the upstream source tarball of the new program, which should look something like <packagename>-<newversion>.tar.gz.
Download this tarball to a new directory <somedir> and extract it there.

Updating Package info

For the following commands I assume you are in the previously created directory <somedir>.

First we need to get the old version of the source package

apt-get source <packagename>

This will download and extract the old source package into <packagename>-<oldversion>.

Now we need some helper scripts to perform the upgrade, as well as the build-time dependencies of the package:

sudo apt-get install dpkg-dev devscripts fakeroot
sudo apt-get build-dep <packagename>

Next change into the extracted sources of the old package and update the packaging:

cd <packagename>-<oldversion>
uupdate -v <newversion> ../<packagename>-<newversion>.tar.gz

# change into the extracted new package
cd ../<packagename>-<newversion>

# update version info
dch -l ~ppa -D $(lsb_release -sc)

For more information see the Debian New Maintainers' Guide.

Building the program

To trigger a rebuild of the program, simply execute the following in the <packagename>-<newversion> directory (-us -uc skip the signing step, which is only needed for uploading):

dpkg-buildpackage -us -uc

Uploading your version to a PPA

To upload a package to a PPA you first need to sign it to prove that you are the author. To do this you have to execute the following in the <packagename>-<newversion> directory:

debuild -S

Furthermore you need the upload tool dput to actually perform the upload:

sudo apt-get install dput

Now change to <somedir> and execute

dput ppa:<your_username>/<repository> <source.changes>
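
For example, with a hypothetical Launchpad username jdoe and a package foo, the call could look like:

dput ppa:jdoe/testing foo_1.2.3-1ubuntu1~ppa1_source.changes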

You can find more information at Launchpad.

Secure Owncloud setup

While the Owncloud Manual suggests enabling SSL, it unfortunately does not go into detail on how to get a secure setup. The core problem is that the default SSL settings of Apache are not sane, in that they do not enforce strong encryption. Furthermore the default certificate will not match your server name and will produce errors in the browser.

In the following, a short guide on how to set up a secure Apache 2.4 server for Owncloud will be presented.

Continue reading Secure Owncloud setup

How to root Android using Ubuntu

The Big Picture

Android consists of three parts relevant to rooting:

  1. the bootloader
  2. recovery system
  3. main system

Typically only the main system is running; that is the Linux kernel, the launcher, the phone app and so on. If we talk about rooting, we mean that we want to add an additional app to the main system which may access secured parts of the main system and which also acts as a gatekeeper for other apps that want to get such access too.

The problem is that we need access to the secured parts of the system in order to do so, which means that we cannot simply install that app (e.g. an apk) from within the main system.

This means we have to go one level down, to the recovery system. Typically you do not see it, as it is only active when the main system cannot run – either because a system update is being installed or because you do a factory reset.
As the recovery system can perform a full system update, it also has access to the secured parts of the main system – exactly what we need. Unfortunately the stock recovery system does not allow installing apps, so we have to replace it.
But before that we have to talk about the bootloader.

The bootloader is a tiny piece of software which decides whether to start the recovery or the main system (or another main system, like Ubuntu Phone). But in the default configuration it only starts systems that it knows and trusts. In this configuration the bootloader is called locked. While this prevents malicious software from changing the phone and spying on us, it also prevents us from replacing the recovery system. This concept is, by the way, also coming to the PC, where it is called Secure Boot.

Here is a graphical overview of the Android components:


So what we need to do in order to get root access is:

  1. unlock the bootloader
  2. replace the recovery system
  3. install a superuser app

Note that unlocking the bootloader also allows attackers to circumvent any of the Android security features; it is possible to directly access all the files on the phone from the bootloader.
Therefore Android will wipe all user data when the bootloader is unlocked.


First you need to install the fastboot binary to be able to perform low-level communication with the device:

sudo apt-get install android-tools-fastboot

Next you have to allow non-root users to execute commands over USB, so you do not have to run fastboot as root. For this create a udev rule file such as /etc/udev/rules.d/51-android.rules (any name ending in .rules works) with the following content:

SUBSYSTEM=="usb", ATTR{idVendor}=="<VENDOR>", MODE="0666", GROUP="plugdev"

You can find the value for <VENDOR> on the page linked here.

Finally you have to reboot into fastboot mode. Usually there is a key combination you have to press on startup.

Remember this key combination, as you will need it a few more times.
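
Alternatively, if USB debugging is enabled in the Android developer settings, you can reboot into fastboot mode from the running system using adb (assuming the android-tools-adb package):

sudo apt-get install android-tools-adb
adb reboot bootloader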

Samsung devices however, like the Galaxy S3, do not support fastboot mode – instead they have a download mode, which uses a proprietary Samsung protocol. To flash those you have to use the Heimdall tool. While this article does not cover the heimdall CLI calls, the general discussion still applies.

Unlocking the Bootloader

For Google devices, like the Nexus 4 or Nexus 7, it is just

fastboot oem unlock

If you have a Sony Xperia device, like the Xperia Z, you additionally have to request an unlock key and then do

fastboot oem unlock 0x<KEY>

where <KEY> is the key you obtained.
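
In both cases you can check beforehand that the device is properly detected in fastboot mode:

fastboot devices
# prints the serial number of the connected device; an empty output
# usually indicates a USB permission problem (see the udev rule above)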

Replacing the Recovery System

There are two prominent alternative recovery systems with the ability to install apps: Clock Work Mod (CWM) and Team Win Recovery Project (TWRP).

CWM is probably the best known, so we will use that one. From the website linked above, download the recovery image which fits your phone.
Here you have the choice between the ordinary recovery, which uses the volume buttons of your device for navigation, and the touch recovery, which supports the touch screen.

fastboot flash recovery <RECOVERY>.img

where <RECOVERY> is the name of the file you downloaded. For instance, for a Nexus 5 (code name hammerhead) and CWM it would be something like

fastboot flash recovery recovery-clockwork-<version>-hammerhead.img

Installing the superuser app

To my knowledge the only superuser app that works with Android 5.0 Lollipop is SuperSU, so we will have to use that. However there is also a pull request for Superuser by CWM which might make it work with Android 5.0 in the future as well.

The installation procedure is a little bit different here. In the appropriate package for your device you will find a boot image which we just have to start once with

fastboot boot image/CF-Auto-Root-hammerhead-hammerhead-nexus5.img

Note that there is no flash in the line above. The booted image contains a script which installs SuperSU and modifies the startup procedure so it can get around SELinux.

Pre Android 5.0 lollipop

If your device runs an Android version older than 5.0 Lollipop, you have several choices here.

Although SuperSU is the most prominent one, I would recommend getting Superuser by CWM, as it is open source and also nag-free, since there is no “pro” version of it.

To install it we need to get this zip archive and copy it to the device. Then reboot into fastboot mode and select “Recovery Mode” to get to the recovery system. Once in recovery mode select

install zip -> choose zip from /sdcard

then browse and select the zip archive you just copied.

Once installed select

Go Back -> reboot system now

Once the system has started you should have a “Superuser” App on your device. Congratulations, you are done.

Optional: flash stock recovery

As the recovery is responsible for installing system updates, it is a good idea to revert to the stock version after you have installed root, so the system can auto-update itself again. However, a system update will also remove your superuser app, so you will have to repeat the above procedure afterwards.

If you have a Google Nexus device, you can grab the factory images here. There you will find an image of the stock recovery, which you can restore by

fastboot flash recovery recovery.img

Debugging native code with ndk-gdb using standalone CMake toolchain

I recently ran into this problem and could not find any good solution on the Internet. So here comes a small summary of the problem, with hopefully enough buzzwords that Google can lead you here.

If you want to do C++ development on Android, you need the NDK for cross compilation. It comes by default with its own build system called ndk-build, which basically is a bunch of custom makefiles. But if you are sharing code between the Android platform and, let's say, plain Linux, you likely already have a build system in place. For C/C++, CMake is quite popular as it supports different platforms and compilers. Fortunately there is already a project which adds Android support to CMake. I will not cover that – instead I assume you are using it already.

Unfortunately you can't use the ndk-gdb script supplied with the NDK to debug your application, as it relies on the behaviour of ndk-build. But as said earlier, ndk-build is no wizardry, just a bunch of scripts. So it is possible to emulate its behaviour using CMake, as follows.

Add the following macro to your CMakeLists.txt file (a sketch following ndk-build's conventions; adapt the paths if your project layout differs):

macro(ndk_gdb_debuggable TARGET_NAME)
    # create custom target that depends on the real target so it gets executed afterwards
    add_custom_target(NDK_GDB ALL)
    add_dependencies(NDK_GDB ${TARGET_NAME})

    # ndk-gdb searches for the unstripped libraries here
    set(GDB_SOLIB_PATH ${PROJECT_SOURCE_DIR}/obj/local/${ANDROID_NDK_ABI_NAME}/)

    # 1. generate essential Android Makefiles
    # (ndk-gdb expects them to exist and reads APP_ABI from Application.mk)
    file(WRITE ${PROJECT_SOURCE_DIR}/jni/Android.mk "# dummy file, generated for ndk-gdb\n")
    file(WRITE ${PROJECT_SOURCE_DIR}/jni/Application.mk "APP_ABI := ${ANDROID_NDK_ABI_NAME}\n")

    # 2. generate gdb.setup
    get_directory_property(PROJECT_INCLUDES DIRECTORY ${PROJECT_SOURCE_DIR} INCLUDE_DIRECTORIES)
    string(REPLACE ";" " " PROJECT_INCLUDES "${PROJECT_INCLUDES}")
    file(WRITE ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/gdb.setup "set solib-search-path ${GDB_SOLIB_PATH}\n")
    file(APPEND ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/gdb.setup "directory ${PROJECT_INCLUDES}\n")

    # 3. copy gdbserver executable
    file(COPY ${ANDROID_NDK}/prebuilt/android-arm/gdbserver/gdbserver DESTINATION ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/)

    # 4. copy the unstripped lib to obj so gdb can find the symbols
    add_custom_command(TARGET NDK_GDB POST_BUILD COMMAND mkdir -p ${GDB_SOLIB_PATH})
    add_custom_command(TARGET NDK_GDB POST_BUILD COMMAND cp $<TARGET_FILE:${TARGET_NAME}> ${GDB_SOLIB_PATH})

    # 5. strip symbols from the lib that gets deployed to the device
    add_custom_command(TARGET NDK_GDB POST_BUILD COMMAND ${CMAKE_STRIP} $<TARGET_FILE:${TARGET_NAME}>)
endmacro()
Then use it like:

add_library(YourTarget ...)
ndk_gdb_debuggable(YourTarget)

You should now be able to use ndk-gdb with CMake, just as if you had used ndk-build.

Note that steps 4 and 5 are optional for debugging. They just reduce the size of the library that has to be transferred to the device. If you don't care, you can just leave them out. But then the solib search path from step 2 must be set to:

file(WRITE ./libs/${ANDROID_NDK_ABI_NAME}/gdb.setup "set solib-search-path ./libs/${ANDROID_NDK_ABI_NAME}\n")

Ideally someone should integrate that into the Android toolchain linked above.

Update: this has been merged upstream.

GNOME Project suffering from the NIH disease

When I first read about GNOME dropping support for BSD and Solaris, my impression was that this was a good idea, aiming to unify limited resources and get the work done. I was also excited about the idea of the GNOME OS. I think it is necessary to keep the big picture in mind when developing the different components. Previously Ubuntu was the only project that did this, and it was also the reason why I started using Ubuntu: it made the different parts of Linux work together to achieve the big goal of a great overall system.

But then things started to go wrong. Instead of picking existing components and giving them the final polish like Ubuntu did before, the GNOME project started developing things from scratch without any apparent reason to do so. And even worse: incompatible with existing solutions. It started with the rejection of the appindicator specification implemented by Ubuntu and KDE. At that point it was not clear to me whether the specification was broken or whether the responsible people at GNOME were just ignorant.

Then came systemd, and it became apparent that unfortunately it was the latter. To my knowledge Ubuntu is the biggest deployment of GNOME, and it is based around the Linux ecosystem. So dropping support for Ubuntu has nothing to do with unifying limited resources. Ubuntu is your target audience; if you should collaborate with any project, it should be Ubuntu. My opinion on that is that some Fedora developers were pissed that the Unity interface was exclusive to Ubuntu, and instead of packaging it for Fedora they started making GNOME Shell exclusive to Fedora.

Next I read about the overlay scrollbars being re-developed for GNOME. While the first reaction might be that the developers simply do not want to use Ubuntu technology, I think the reason is different. The developer does not seem to have any antipathy towards Ubuntu, and if we look at the project he developed the scrollbars for, another explanation becomes visible.

But first let's take a step back and look at the core of GNOME. By this I mean the programming language it is written in: C/GObject, plain C extended with naming conventions and libraries to allow modern paradigms such as object-oriented programming and the events/observer pattern. From today's perspective one might wonder why one should choose this over C++, which integrates most of these features at the language level. But back when the GNOME project started, C++ was not mature yet, which meant that your program might break with the next compiler update or even the next STL update.

Therefore basing the project on plain C was a good idea. But a few years back it became obvious that programming in C/GObject seriously lagged behind more modern programming languages like C++, Java and C# for application development.

Unfortunately, instead of moving the straightforward route from C to C++, which most C developers took when C++ matured (about 10 years ago), Vala was born.

So instead of using a proven and mature foundation, a new layer of indirection was created to essentially provide the same feature set. Commonly this is referred to as the “not invented here” syndrome. A more derogatory phrase would be “reinventing the wheel”.

What is sad here is that, being an open source project, GNOME disregards the biggest advantage of open source software, namely standing on the shoulders of giants. With open source software you can take an existing solution and improve upon it. This way you get the base functionality as well as the bug fixes that went into it for free. If you develop it from scratch, you most likely have to fix the same bugs again yourself.

To sum up, here is what GNOME is losing right now:

  • 30 years of language and library experience by using Vala instead of C++
  • 5 years of deployment and bug fixing by using systemd instead of extending upstart
  • 1 year of development, testing and design if they reimplement overlay-scrollbars
  • 8 years of foundation development that went into Eclipse, by developing Gnome Builder from scratch
  • but most importantly: the synergy effects by collaborating with others

Do not get me wrong, I am not saying that the GNOME solutions could be replaced by existing solutions – I am saying that by extending existing solutions the GNOME project and the free software landscape would be better off as a whole.