Odroid U3 in the Nextcloud Box

Until now I have used a microSD card to store my Owncloud setup. The drawback of doing so is that microSD cards only allow for so many writes before they die and go into read-only mode.

Therefore the Nextcloud Box is an attractive upgrade: it allows using a more failure-proof HDD while still keeping everything inside the same housing.

The housing

The first thing to note is that the housing is much larger than one might think from the photos. See the comparison photo with the Odroid housing. Actually this should not be a surprise as the 2.5″ HDD alone is larger than the Odroid board.


The Odroid U3 has to be mounted upside-down to be able to attach a network cable. At first I thought that this was a limitation because the housing was not designed with the U3 in mind – however, all officially supported boards have the RJ45 jack and the USB ports on the same side, which makes attaching a network cable even harder.

Furthermore, the opening at the top left of the housing seems to be designed for an RJ45 jack, but none of the supported boards has its jack at that position. This leaves me wondering which board the housing was originally designed for.

Yet the major weakness of the box is its openness (pun intended); the noise of the HDD is not dampened in any way and is therefore quite noticeable if you stand nearby. Due to this I would not recommend placing the box in your bedroom.

Inside the Box

To fit inside the box, the tops of the plastic pins holding the Odroid cooler have to be clipped off. This does not impact their functionality, though.

The USB ports of the U3 are located on the side, so the board only fits inside the box if exactly one of the horizontal ports is used. However, the supplied USB3 cable requires a second USB port or the USB charger for power delivery.

This choice is quite unfortunate, as the drive works just fine in USB2 mode when powered over a single USB port, and no officially supported board actually has a USB3 port. Luckily I had a spare 15 cm USB2 cable around, which did the job.

The performance of the HDD is good considering the price. In most cases it will be limited by the USB 2.0 transfer rate (roughly 35–40 MB/s in practice) anyway.


Summing up, I am quite satisfied with the package. However, compatibility could easily be extended if the supplied cable delivered power when connected over a single USB port. Furthermore, the openings of the housing should match at least one of the officially supported boards.

WordPress, AMP and Ads

Delivering your content not only as HTML but also in the AMP-HTML subset reduces loading times for your readers and improves your site's ranking in the Google results. On top of that, your site will be proxied by the Google AMP Cache, lowering your server load.

If you are using WordPress, adding AMP support is as easy as installing the AMP plugin, which is developed by Automattic, the company behind WordPress.com.

Doing so will likely get you more visitors, but unfortunately there is a drawback: the plugin does not support advertisements out of the box.

So if you – like me – rely on advertisements to cover the server costs, you have to apply some tweaks to get ads on AMP pages as well. This is what this post will be about.

Extending the AMP Plugin

Basically you have to modify your current theme. If you are using an off-the-shelf theme, you should create a child theme – otherwise just extend the functions.php of your custom theme.

AMP uses the amp-ad tag for displaying ads, which requires an additional script to work. Currently it still works without the script, but it already generates a warning. Let's be safe here and add the required script to the list:

add_action( 'amp_post_template_data', 'xyz_amp_post_template_add_ad_script' );
function xyz_amp_post_template_add_ad_script( $data ) {
    $data['amp_component_scripts']['amp-ad'] = 'https://cdn.ampproject.org/v0/amp-ad-0.1.js';
    return $data;
}

Next we create a filter that injects the actual amp-ad tags into our content (the AdSense client and slot IDs in the snippet are placeholders – replace them with your own):

add_action( 'pre_amp_render_post', 'xyz_amp_add_custom_actions' );
function xyz_amp_add_custom_actions() {
    add_filter( 'the_content', 'xyz_amp_add_ad' );
}

function xyz_amp_add_ad( $content ) {
    // hack for skipping the featured-image
    if ( false !== strpos( $content, 'wp-post-image' ) ) {
        // as it unfortunately uses get_post too
        // fixed in AMP 0.4.1
        return $content;
    }

    // placeholder AdSense IDs - replace data-ad-client/ data-ad-slot with your own
    $ad = '<amp-ad width=300 height=250
        type="adsense"
        data-ad-client="ca-pub-XXXXXXXXXXXXXXXX"
        data-ad-slot="XXXXXXXXXX"></amp-ad>';

    // one ad before and one after the post content
    return $ad . $content . $ad;
}

The snippet above assumes you are using Google AdSense. If you want to integrate a different ad network, look here for the specific syntax.

On OGRE versions

Currently one can choose between the following OGRE versions: 1.9, 1.10, 2.0 and 2.1.

However the versioning scheme has become completely arbitrary while still resembling semantic versioning.
As a consequence somebody even had to put a “What version to choose?” guide on the OGRE homepage.

Unfortunately the guide confuses more than it helps:

  • First of all, forget about 1.9. It lacks most of the bugfixes that are in 1.10 and is therefore less stable. Even if you rely on third-party add-ons, 1.10 is the way to go. The pointless change that broke backward compatibility was reverted in the GitHub fork.
  • If you do not like tracking a moving target and want stability, go with 1.10. It contains most bugfixes and the existing API does not change. This is actually your only choice if you want to rely on third-party add-ons or have existing code using OGRE. Furthermore, you get the largest platform support, including HTML5 and mobile.
  • You can mostly forget about 2.0. It is neither backward compatible with 1.10 nor forward compatible with 2.1. One should rather think of it as a 2.1 beta (a 2.0 beta when using semantic versioning).
    Feature-wise it boosts node-update performance by roughly a factor of 5 by exploiting cache locality (see the 1.9 review by Mike Acton). However, it lacks many fixes that are in 1.10, and it is unmaintained, so things will not improve.
  • Finally, 2.1 is where all the new and shiny things are. However, it is backwards incompatible with 1.10 and 2.0, so calling it 3.0 would be more appropriate.
    Feature-wise you get the high node-update performance of 2.0 plus automatic batching for rendering. The batching results in higher GPU utilization, so with 2.1 rendering speed is bound by GPU performance. With 2.0 and 1.x, on the other hand, the GPU was often stalled by inefficient material and geometry changes. However, even 2.1 lacks some bugfixes that are in 1.10 and, more importantly, it is a moving target as the API still evolves. Also, it only works on desktop platforms.

You might wonder why 2.1, while having the latest features, lacks some bugfixes of older releases. Read on…

Lack of branching concept

Usually projects have a default master or trunk branch where all development eventually ends up, and optionally some maintenance and bleeding-edge branches. See this for a small overview of the corresponding workflows.

Now OGRE has the following branches: v1-9, v1-10, v2-0, v2-1 and default. By common sense, default would contain the most recent code, while the other branches would be maintenance branches of the respective releases. But not here! Actually, all of these branches are under active development, with 2.1 containing the latest features and default representing the current stable version (in fact, default and v1-10 are pretty much the same).

As there is no clear code flow between the branches, they are diverging: fixes for issues in one branch do not propagate to the others. Examples:

As you can see, we have fixes going in all possible directions here. The situation cannot be easily resolved by successively merging the branches into each other.

With the GitHub fork, I therefore manually compared the different branches and cherry-picked all fixes into master (v1-10).

This is why I initially called it the stable fork. Even though it contains new features, as discussed in my last post, the API is backwards compatible with existing code and add-ons, and it aggregates all bugfixes in one branch.

Creating PyGTK app snaps with snapcraft

Snap is a new packaging format introduced by Ubuntu as a successor to dpkg, aka the Debian package. It offers sandboxing and transactional updates, and thus is a competitor to the flatpak format and resembles Docker images.

As with every new technology, the weakest point of working with snaps is the documentation. Your best bet so far is the snappy-playpen repository.

There are also some rough edges regarding desktop integration and Python interoperability, so that is what this post will be about.

I will introduce some quirks that were needed to get teatime running, which is written in Python3 and uses Unity and GTK3 via GObject introspection.

The most important thing to be aware of is that snaps are similar to containers in that each snap has its own rootfs and only restricted access outside of it. This is basically what the sandboxing is about.
However a typical desktop application needs to know quite a lot about the outside world:

  • It must know which theme the user currently uses, and after that it also needs to access the theme files.
  • For saving anything it needs access to /home
  • If it is supposed to access the internet, it needs system-level access as well, e.g. for querying whether there actually is an active internet connection

To declare that we want to write to home, play back sound and use Unity features, we use the plugs keyword like this:

         # ...
         plugs: [unity7, home, pulseaudio]

However, we must also tell our app to look for the corresponding libraries inside its snap instead of in the system paths. For this, one would have to change quite a few environment variables manually. Fortunately, Ubuntu provides wrapper scripts that take care of this for us; they are called desktop launchers.

To use the launcher that configures the GTK3 environment, we have to extend the teatime part like this:

        command: desktop-launch $SNAP/usr/share/teatime/teatime.py
        # ...
        after: [desktop/gtk3]

The desktop-launch script takes care of telling PyGTK where the GI repository files are.
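
For reference, a PyGObject application like teatime typically starts with imports along the following lines (a generic sketch, not teatime's actual code); the gi module can only resolve them if GI_TYPELIB_PATH points at the typelibs bundled inside the snap:

    # Generic PyGObject (PyGTK) startup; gi looks up the Gtk typelib via
    # GI_TYPELIB_PATH, which desktop-launch points at the snap's own copies.
    import gi
    gi.require_version('Gtk', '3.0')
    from gi.repository import Gtk

    win = Gtk.Window(title="Hello from inside a snap")
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()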

You can see the full snapcraft.yml here.



Before my fix, one had to use this rather lengthy startup command

env GI_TYPELIB_PATH=$SNAP/usr/lib/girepository-1.0:$SNAP/usr/lib/x86_64-linux-gnu/girepository-1.0 desktop-launch $SNAP/usr/share/teatime/teatime.py

which hard-coded the architecture.


After this, teatime will start, but the paths still have to be fixed. Inside a snap, “/” still refers to the system root, so all absolute paths must be prefixed with $SNAP.
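
In Python this boils down to something like the following minimal sketch (the resource path is made up for illustration):

    import os

    # Inside the snap, data files live below $SNAP rather than the system root.
    SNAP = os.environ.get("SNAP", "")
    ICON_PATH = os.path.join(SNAP, "usr/share/teatime/icon.svg")  # hypothetical file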

Actually, I think the design of flatpak is more elegant in this regard: “/” points to the local rootfs, so one does not have to change absolute paths. To bring in the system parts, flatpak uses bind mounts.


Once you get the hang of how snaps work, packaging becomes quite straightforward. However, currently there are still some drawbacks:

  • The snap package weighs in at 120 MB, compared to a 12 KB deb. This is actually a feature, as shipping all the dependencies makes the snap installable on every Linux distribution. However, I hope that we can get this down by introducing shared frameworks (like GTK3 and Python) that do not have to be included in the app package.
  • Due to another issue, your snap has only the C locale available and thus will not be localized.
  • [Update: fixed] Unity desktop notifications do not work. You will get a DBus exception at the corresponding call.
  • [Update: fixed] The shipped .desktop file is not hooked up with the system, so you can only launch the app via the command line.

Introducing the OGRE fork on GitHub

In this post I want to introduce the OGRE fork on GitHub. The goal of the fork is to provide a stable and reliable OGRE 1.x series while at the same time modernizing parts under the hood.

The idea behind this is that there are many existing 1.x codebases, actually a whole 1.x ecosystem, that can be modernized that way.
The last release of the 1.x series was over 2 years ago, so using the current 1.10 branch already gives a lot of improvements.

However, the 1.10 branch contains some unnecessary changes that make it incompatible with the 1.9 release. These were reverted in the fork, so old code should compile again.

Additionally, there are modernizing changes compared with the upstream at Bitbucket, which are discussed in more detail in the following.

This will still be a high-level overview though. If you need even more details, take a look at the git changelog.

Replace legacy renderers

With 1.9, the only stable renderers were GL (2.0) and DX9, which are outdated by today's standards and produce different results than the GLES2 renderer used for web and mobile, which makes porting applications harder.

Therefore the goal is to get the GL3Plus renderer into a shape where it can act as a drop-in replacement for the old GL renderer, and then drop the latter.

You can see the current status here – while it says that only 41/86 tests pass, most of the failing tests actually differ in only 1 or 2 pixels, which is pretty good considering we are comparing fixed-function rendering against dynamically generated shaders (RTSS).

But the fork does not stop here; the GL renderers now also share the context creation code. This allows using the GLES2 renderer on the desktop or creating a GL3+ context using EGL – which is a prerequisite for headless rendering and for running on Wayland/MIR. Overall this makes your applications much more portable.

Improved regression testing

Changing the renderer requires being able to continuously monitor whether the rendering is still correct and to immediately detect regressions.

For this the Testing and VisualTesting frameworks were fixed and now correctly run on Linux and OSX. They run on each pull request to catch errors before the code even touches master.

Furthermore the tests can now be built without input handling (OIS) which hopefully will lead to wider adoption.

The unit tests still use cppunit which is a pain to use when compared to gtest, but changing that would require a large rewrite.

Batteries Included

The build now automatically handles the dependencies by downloading and building them as needed, on all platforms, so you no longer have to care about mismatching compile settings.

Furthermore, the SampleBrowser can now be built without input handling, which eases quick testing. And in case you do want input, SDL2 is now used instead of the esoteric and outdated OIS.

Next, bringing your applications to the web is now easier: OGRE has been able to run in the browser for a long time already, but the process was badly documented. The fork tracks the Emscripten sample code inside the repository and also documents how to use it.

Bindings for Scripting Languages

For scripting languages there are traditionally MOGRE (C#), Ogre4j and Python-Ogre, which are by now basically all outdated and unmaintained.

To put all these efforts on a solid foundation I took some inspiration from ruby-ogre and added SWIG support.

At the time of writing there are already working Python bindings (see sample here) and a proof-of-concept for C#.

By using SWIG it is now easy to add further bindings for Ruby, Java and JavaScript (Node.js) as well. But as I care mostly about Python, this is left as an exercise for the reader.
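
To give an idea what using the generated bindings looks like, here is a hypothetical minimal sketch; the module name and the assumption that the C++ API is mirrored one-to-one are mine, so consult the Python sample in the repository for the real thing:

    # Hypothetical usage of the SWIG-generated OGRE Python bindings.
    import Ogre

    root = Ogre.Root("plugins.cfg")    # mirrors Ogre::Root from C++
    if root.showConfigDialog():        # let the user pick a render system
        window = root.initialise(True, "SWIG sample")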

A word on OGRE 2.1

You might wonder why one should care about 1.x at all and not go for 2.1 directly. The obvious reason is that you have an existing codebase, and OGRE 2.1 drastically changes the API and even the material file format.

Then, while faster than 1.x, OGRE 2.x is still far from feature parity with 1.x – it still completely lacks web and mobile support.
“Faster” also depends on your scene and material usage; for instance, if you only render a single object (product showcase), 2.x will not offer you any advantage.

Finally, the main functional advances of 2.1, namely the new material system (HLMS) and with it physically based shading, were backported to 1.x.

Yet if you only care about desktop and want to render large immersive worlds (lots of nodes, lots of materials), 2.1 is the way to go.



Learning Modern 3D Graphics Programming

One of the best resources to learn modern OpenGL, and the one which helped me quite a lot, is the book at www.arcsynthesis.org/gltut/ – or rather, was. Unfortunately the domain expired, so the content is no longer reachable.

Luckily, the book was designed as an open-source project and the code to generate the website is still available on Bitbucket. Unfortunately, this repository does not seem to be actively maintained any more.

Therefore I set out to make the book available again using GitHub Pages. You can find the result here:


However, I did not simply mirror the pages, but also improved them in several places. So what has changed so far?

Continue reading Learning Modern 3D Graphics Programming

meCoffee PID controller Review for the Rancilio Silvia

The Rancilio Silvia espresso machine has one major weakness: the high temperature fluctuation of the default thermostat.
The taste of the espresso already varies with temperature deviations as small as 1°C, but the thermostat of the Silvia V3/V4 has a range of ~20°C (~30°C for the Silvia V1/V2).

a thermostat having a hard time keeping the target temperature of 110°C

So to get a decent tasting espresso, one needs to predict the heating phase of the boiler, a technique called temperature surfing. However, this involves wasting water, and one also needs the right timing, which is especially difficult for beginners.

PID Controlled Temperature

A better solution is to replace the default thermostat with a digital one. This means adding a microcontroller that regulates the temperature using the PID algorithm. This way you always have the right temperature – without surfing. Furthermore, the microcontroller can be used to add some fancy features to the machine, like preinfusion or a shot timer.
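
To illustrate the idea (this is not the actual meCoffee firmware, which is not public), a discrete PID loop only takes a few lines; the gains below are made-up values and the target matches the thermostat target mentioned above:

    # Minimal sketch of a discrete PID temperature loop (illustrative only).
    KP, KI, KD = 4.0, 0.1, 1.0   # assumed tuning constants
    TARGET = 110.0               # target boiler temperature in deg C
    DT = 1.0                     # control interval in seconds

    integral = 0.0
    prev_error = 0.0

    def pid_step(measured_temp):
        """Return a heater duty cycle between 0 and 1 for the measured boiler temperature."""
        global integral, prev_error
        error = TARGET - measured_temp
        integral += error * DT
        derivative = (error - prev_error) / DT
        prev_error = error
        output = KP * error + KI * integral + KD * derivative
        return min(max(output, 0.0), 1.0)  # clamp to a valid duty cycle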

Continue reading meCoffee PID controller Review for the Rancilio Silvia

Rust prevention on Rancilio Silvia

The main point one can criticise about the build of the Rancilio Silvia is that it is not all stainless steel. While many important parts, like the drip tray and the cup holder, are, the main frame (black) is just powder-coated steel. This means it can rust, and according to reports on the internet, it actually does.
However, one can make some small and cheap modifications to drastically reduce the risk.

Rust is the product of the reaction of steel and oxygen in the presence of water, so to prevent rust one has to keep away either the oxygen or the water.
The powder coating of the frame normally does both, but over time scratches penetrate the coating and the frame can rust.

Prevent Scratches

The first step to prevent rust is therefore to prevent scratches on the steel frame. To this end, one can add small rubber knobs (8 mm diameter) at the places where movable parts come in contact with the frame.

Prevent Water Splashes

The other thing one can do is to ensure that less water comes in contact with the frame. Unfortunately, the drain on the Silvia is quite short, so water splashes in all directions and can run behind the drip tray – especially if it is not pushed all the way back. See the following video:

An easy fix to improve the situation is to extend the drain with a silicone pipe (19 mm inner diameter) by about 4 cm. After the modification, the water hardly splashes any more (again, this is the worst-case scenario with the drip tray all the way at the front).

Header image CC BY-SA Darkone

Changing the Fan on the Corsair CS550M

While the Corsair CS550M is a good power supply unit (PSU), the fan, or more specifically the fan bearing, is quite poor. Even though the fan is temperature controlled, this results in an annoying clicking noise. Therefore one might want to replace it with a better-quality, quieter fan.

In order to do so, you will need the following tools:

  • soldering iron
  • shrink-on tube
  • diagonal cutter
  • flat nose pliers
  • convenience: cable stripper

However, note that the high-power capacitors used inside store enough charge to shock you even several minutes after disconnecting the PSU. So you should wait about 10 minutes after disconnecting the PSU, and even then watch out not to touch the pins of the capacitors.

Once you have set everything up, follow these steps to silence your Corsair:

Now you can re-assemble the PSU housing and optionally attach the tacho cable to a free fan header on your mainboard to monitor the PSU fan.

This post was inspired by this YouTube video. The PC fan connector diagram was made by pavouk.org.

Converting a Ubuntu and Windows dual-boot installation to UEFI

UEFI is the successor to the BIOS for communicating with the firmware on your mainboard.
While the first BIOS was released with the IBM PC in 1981, the first UEFI version (UEFI 2.0) was released 25 years later in 2006, building upon the lessons learned in that timespan. So UEFI is without any doubt the more modern solution.

The user-visible advantages of using UEFI instead of BIOS are

You could reinstall both Windows and Ubuntu to get UEFI. However, it is also possible to convert existing installations of both on the fly – without a backup/restore cycle. You should still make a backup in case something goes wrong, though.


Only the 64-bit versions of Windows support UEFI. Therefore this guide assumes that you are running the 64-bit versions of both Windows and Ubuntu.

Furthermore, verify the following items before you continue – otherwise you will not be able to finish the conversion. Use GParted in case you do not have enough space before the first or after the last partition.

  • 250 MB of space in front of the first partition
    Typically Windows 8 creates a 350 MB system partition upon installation. This space can be reclaimed for a 100 MB EFI partition and a new 100 MB Windows system partition.
  • 1-2 MB behind the last partition (for the GPT backup)
  • UEFI bootable Ubuntu USB drive.
    You can use the Startup Disk Creator on Ubuntu with an Ubuntu 14.04+ ISO.
  • UEFI bootable Windows USB drive.
    You can use the Microsoft Media Creation tool for Windows 10 to get one.

To test that the sticks are indeed UEFI-compatible, try booting them with CSM mode disabled in your BIOS.

Convert the drive to GPT

UEFI requires a GUID Partition Table (GPT), so first we need to convert from MBR to GPT.

After this step you will not be able to boot your system any more. So make sure you have the Ubuntu USB drive ready.

We will use gdisk to perform the conversion: gdisk reads the existing MBR, converts it to GPT in memory, and the w command then writes the new partition table to disk:

sudo gdisk /dev/sdX
Command (? for help): w

where sdX is your system drive (e.g. sda)

Convert Windows to UEFI

Now boot your Windows USB drive and enter the command prompt as described in this Microsoft Technet article at step 6.

Continue with the following steps from the article. Note that we skipped steps 1-4, as we used Ubuntu to convert the disk to GPT.

We have now created an EFI partition and a new EFI-compatible Windows system partition, and we have installed the Windows bootloader to the EFI partition. Your Windows installation should now start again.
At this point you could also perform an upgrade to Windows 10, as the upgrade would erase grub from the EFI partition anyway.

Next we are going to install grub to the EFI partition and make it manage the boot.

Enter a Ubuntu chroot

As we cannot directly boot our Ubuntu installation, we will instead boot from the Ubuntu USB drive and then switch to the installed Ubuntu.
To do the switch, we have to set up and enter a chroot as follows:

sudo mount /dev/sdXY /mnt 
sudo mount /dev/sdX1 /mnt/boot/efi 
sudo mount -o bind /dev /mnt/dev
sudo mount -o bind /sys /mnt/sys
sudo mount -t proc /proc /mnt/proc
sudo cp /proc/mounts /mnt/etc/mtab
sudo cp /etc/resolv.conf /mnt/etc/resolv.conf
sudo chroot /mnt

where sdXY is the partition where your Ubuntu system is installed (e.g. sda5) and sdX1 is the EFI partition created earlier

Convert Ubuntu to UEFI

Inside your Ubuntu installation, we have to replace grub for BIOS (aka grub-pc) with grub for UEFI (aka grub-efi):

sudo apt-get --reinstall install grub-common grub-efi-amd64 os-prober

This would be enough to get the system booting again; however, we also aim for Secure Boot, so we additionally need to install the following:

sudo apt-get install shim-signed grub-efi-amd64-signed linux-signed-generic

This installs signed versions of grub and the kernel, whose signatures are verified at boot to ensure their integrity. Furthermore, we install shim, which is a pass-through bootloader that bridges from the Microsoft keys stored on your mainboard to the Canonical keys used to sign grub and the kernel (see this for details).

Next we finally install grub to the EFI partition by:

sudo grub-install --uefi-secure-boot /dev/sdX
sudo update-grub

where sdX is again your system drive (e.g. sda).

Now you can enable Secure Boot in your BIOS and benefit from it. Note that some BIOS implementations additionally require you to select the trusted signatures. Look out for an option called “Install Default Secure Boot keys” or similar to select the Microsoft signatures.