Introducing the OGRE fork on GitHub

In this post I want to introduce the OGRE fork on GitHub. The goal of the fork is to provide a stable and reliable OGRE 1.x series while modernizing it under the hood at the same time.

The idea behind this is that there are many existing 1.x codebases – actually a whole 1.x ecosystem – that can be modernized this way.
The last release of the 1.x series was over 2 years ago, so using the current 1.10 branch already brings a lot of improvements.

However, the 1.10 branch contains some unnecessary changes that make it incompatible with the 1.9 release. These were reverted, so old code should compile again.

Additionally, there are modernizing changes compared to the upstream repository at Bitbucket, which are discussed in more detail in the following.

This will still be a high-level overview though. If you need even more details, take a look at the git changelog.

Replace legacy renderers

With 1.9, the only stable renderers were GL (2.0) and DX9. By today these are outdated and produce different results than the GLES2 renderer used for web and mobile, which makes porting applications harder.

Therefore the goal is to get the GL3Plus renderer into a shape where it can act as a drop-in replacement for the old GL renderer, and then drop the latter.

You can see the current status here – while it says that only 41/86 tests pass, most of the failed tests actually differ in only 1 or 2 pixels, which is pretty good considering we are comparing fixed-function rendering against dynamically generated shaders (RTSS).
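
To give an idea of what such a pixel-level comparison means: counting the differing pixels between two renders can be sketched as follows (an illustration using OpenCV and NumPy, not the actual OGRE test code; the file names are placeholders):

# illustration only – not the actual OGRE VisualTests code
import cv2
import numpy as np

ref = cv2.imread("reference.png")  # e.g. fixed-function reference render
out = cv2.imread("output.png")     # e.g. RTSS-generated shader render

# count pixels that differ in at least one channel
diff = np.any(ref != out, axis=2)
print("differing pixels:", int(diff.sum()))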

But the fork does not stop here; the GL renderers now also share the context creation code. This allows using the GLES2 renderer on the desktop or creating a GL3+ context using EGL – which is a prerequisite for headless rendering and for running on Wayland/Mir. Overall this makes your applications much more portable.

Improved regression testing

Changing the renderer requires being able to continuously monitor whether the rendering is still correct and to immediately detect regressions.

For this, the Testing and VisualTesting frameworks were fixed and now run correctly on Linux and OSX. They run on each pull request to catch errors before the code even touches master.

Furthermore the tests can now be built without input handling (OIS) which hopefully will lead to wider adoption.

The unit tests still use cppunit, which is a pain to use compared to gtest, but changing that would require a large rewrite.

Batteries Included

The build now automatically handles the dependencies by downloading and building them as needed – on all platforms. So you no longer have to worry about mismatched compile settings.

Furthermore, the SampleBrowser can now be built without input handling, which eases quick testing. And in case you do want input, SDL2 is now used instead of the esoteric and outdated OIS.

Next, bringing your applications to the web is now easier. OGRE has been able to run in the browser for a long time already – but the process was only badly documented. The fork tracks the Emscripten sample code inside the repository and also has documentation on how to use it.

Bindings for Scripting Languages

For using scripting languages there are traditionally MOGRE (C#), ogre4j and Python-Ogre, which are by now basically all outdated and unmaintained.

To put all these efforts on a solid foundation I took some inspiration from ruby-ogre and added SWIG support.

At the time of writing there are already working Python bindings (see sample here) and a proof-of-concept for C#.

By using SWIG it is now easy to add further bindings for Ruby, Java and JavaScript (Node.js) as well. But as I mostly care about Python, this is left as an exercise for the reader.
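
To give a flavor of the result: the SWIG wrappers mirror the C++ API, so a Python snippet might look roughly like the sketch below (the module name and the exact calls are illustrative; see the sample in the repository for the real code):

# rough sketch of the SWIG-generated Python bindings – names mirror the C++ API
import Ogre

root = Ogre.Root()                  # wraps Ogre::Root
if root.showConfigDialog():         # same methods as in C++
    root.initialise(False)
    window = root.createRenderWindow("SWIG demo", 640, 480, False)
    scn_mgr = root.createSceneManager(Ogre.ST_GENERIC)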

A word on OGRE 2.1

You might wonder why one should care about 1.x at all and not go for 2.1 directly. The obvious reason is that you have an existing codebase, and OGRE 2.1 drastically changes the API and even the material file format.

Then, while faster than 1.x, OGRE 2.x is still far from feature parity with 1.x – it still completely lacks web and mobile support.
Whether it is actually faster also depends on your scene and material usage; for instance, if you only render a single object (a product showcase), 2.x will not offer you any advantages.

Finally, the main functionality advance in 2.1 – namely the new material system (HLMS) and with it physically based shading – has been backported to 1.x.

Yet if you only care about desktop and want to render large immersive worlds (lots of nodes, lots of materials), 2.1 is the way to go.


Learning Modern 3D Graphics Programming

One of the best resources to learn modern OpenGL – and the one that helped me quite a lot – is the book at www.arcsynthesis.org/gltut/. Or rather, was: unfortunately the domain expired, so the content is no longer reachable.

Luckily, the book was designed as an open source project and the code to generate the website is still available at Bitbucket. Unfortunately, this repository does not seem to be actively maintained any more.

Therefore I set out to make the book available again using GitHub Pages. You can find the result here:

https://paroj.github.io/gltut/

However, I did not simply mirror the pages, but also improved them in several places. So what has changed so far?

Continue reading Learning Modern 3D Graphics Programming

Converting a Ubuntu and Windows dual-boot installation to UEFI

UEFI is the successor to BIOS as the interface to the firmware on your mainboard.
While the first BIOS was released with the IBM PC in 1981, the first UEFI version (UEFI 2.0) was released 25 years later in 2006, building upon the lessons learned in that timespan. So UEFI is without any doubt the more modern solution.

The user-visible advantages of using UEFI instead of BIOS are a faster boot, Secure Boot support, and support for disks larger than 2TB (via GPT).

You could reinstall both Windows and Ubuntu to get UEFI. However, it is also possible to convert existing installations of both on the fly – without a backup/restore cycle. You should still make a backup in case something goes wrong, though.

Prerequisites

Only the 64-bit versions of Windows support UEFI. Therefore this guide assumes that you are running the 64-bit versions of both Windows and Ubuntu.

Furthermore, verify the following items before you continue – otherwise you will not be able to finish the conversion. Use GParted in case you do not have enough space before the first or after the last partition.

  • 250MB of space in front of the first partition
    Typically Windows 8 creates a 350MB System Partition upon installation. This space can be reclaimed for a 100MB EFI partition and a new 100MB Windows System partition.
  • 1-2MB behind the last partition (for the GPT backup)
  • UEFI bootable Ubuntu USB drive.
    You can use the Startup Disk Creator on Ubuntu with an Ubuntu 14.04+ ISO.
  • UEFI bootable Windows USB drive.
    You can use the Microsoft Media Creation tool for Windows 10 to get one.

To test that the sticks are indeed UEFI-compatible, try booting them with CSM mode disabled in your BIOS.

Convert the drive to GPT

UEFI requires a GUID Partition Table (GPT), so first we need to convert from MBR to GPT.

After this step you will not be able to boot your system any more. So make sure you have the Ubuntu USB drive ready.

We will use gdisk to perform the conversion as follows:

sudo gdisk /dev/sdX
Command (? for help): w

where sdX is your system drive (e.g. sda). gdisk converts the partition table to GPT in memory as soon as it loads the drive; the w command then writes the converted table to disk.

Convert Windows to UEFI

Now boot your Windows USB drive and enter the command prompt as described in this Microsoft TechNet article at step 6.

Continue with the following steps from the article. Note that we have skipped steps 1-4, as we used Ubuntu to convert the disk to GPT.

We have now created an EFI partition and a new EFI-compatible Windows System Partition, and we have installed the Windows bootloader to the EFI partition. Your Windows installation should now start again.
At this point you could also perform an upgrade to Windows 10, as the upgrade would erase grub from the EFI partition anyway.

Next we are going to install grub to the EFI partition and make it manage the boot.

Enter an Ubuntu chroot

As we can not directly boot our Ubuntu installation yet, we will instead boot from the Ubuntu USB drive and then switch to the installed Ubuntu.
To do the switch we have to set up and enter a chroot as follows:

sudo mount /dev/sdXY /mnt                     # root filesystem of the installed Ubuntu
sudo mount /dev/sdX1 /mnt/boot/efi            # the EFI partition created above
sudo mount -o bind /dev /mnt/dev
sudo mount -o bind /sys /mnt/sys
sudo mount -t proc /proc /mnt/proc
sudo cp /proc/mounts /mnt/etc/mtab
sudo cp /etc/resolv.conf /mnt/etc/resolv.conf # make DNS resolution work inside the chroot
sudo chroot /mnt

where sdXY is the partition where your Ubuntu system is installed (e.g. sda5) and sdX1 is the EFI partition created in the previous step.

Convert Ubuntu to UEFI

Inside your Ubuntu installation, we have to replace grub for BIOS (aka grub-pc) with grub for UEFI (aka grub-efi):

sudo apt-get --reinstall install grub-common grub-efi-amd64 os-prober

This would be enough to get the system booting again. However, we also aim for Secure Boot, so we need to install the following as well:

sudo apt-get install shim-signed grub-efi-amd64-signed linux-signed-generic

This installs signatures for grub and the kernel, which are used to verify their integrity at boot. Furthermore, we install shim, which is a passthrough bootloader that translates from the Microsoft signatures stored on your mainboard to the signatures used by Canonical to sign grub and the kernel (see this for details).

Next we finally install grub to the EFI partition:

sudo grub-install --uefi-secure-boot /dev/sdX
sudo update-grub

where sdX is again your system drive (e.g. sda).

Now you can enable Secure Boot in your BIOS and benefit. Note that some BIOS implementations additionally require you to select the trusted signatures. Look out for an option called “Install Default Secure Boot keys” or similar to select the Microsoft signatures.

Updating Crucial MX100 Firmware with Ubuntu

There has been a firmware update for the Crucial MX100 to version MU02. In case you are running Ubuntu, there is an easy way to perform the update without using a CD or USB stick.

As the firmware comes in the form of an ISO image containing Tiny Core Linux, we can instruct grub2 to boot directly from it. Here is how:

  1. append the following to /etc/grub.d/40_custom:
    menuentry "MX100 FW Update" {
     set isofile="/home/<USERNAME>/Downloads/MX100_MU02_BOOTABLE_ALL_CAP.iso"
     # assuming your home is on /dev/sda3 ATTENTION: change this so it matches your setup
     loopback loop (hd0,msdos3)$isofile
     linux (loop)/boot/vmlinuz libata.allow_tpm=1 quiet base loglevel=3 waitusb=10 superuser rssd-fw-update rssd-fwdir=/opt/firmware rssd-model=MX100
     initrd (loop)/boot/core.gz
    }

    read this for details of the file format.

  2. run sudo update-grub
  3. reboot and select “MX100 FW Update”
  4. Now you can delete the menuentry created in step 1

Note that this is actually much “cleaner” than using Windows, where you have to download 150MB of the Crucial Storage Executive software – which is actually a local webserver written in Java (urgh!). All it can do is display some SMART monitoring information and automatically perform the above steps on Windows.

Header Image CC-by MiNe

Using the XBox Controller with Ubuntu (the modern way)

If you want to get your Xbox One/Xbox 360 controller running on Ubuntu, you basically have the choice between the in-kernel xpad driver and the userspace xboxdrv driver.

Most of the guides recommend using xboxdrv, as xpad has been stagnating. However, using xboxdrv has some disadvantages: as it runs as a daemon in userspace, you have to manually take care of starting/stopping it and of giving your user access to the virtual devices it creates.
Xpad on the other hand just works like any other Linux driver directly inside the kernel, which is more efficient and hassle-free.

Fortunately, while pushing SteamOS, Valve updated the xpad driver, bringing it on par with xboxdrv:

  • they added support for the Xbox One controller
  • they fixed the communication protocol – no more blinking controller light

Update July 22, 2015

Unfortunately there are still several issues with the SteamOS driver. This follow-up post discusses them and the solutions in detail.

The bottom line is that I updated the official Linux driver with chunks found in the SteamOS driver, as well as in several patches floating around the internet. Code and install instructions are available on GitHub.

How to draw a line interpolating 2 colors with opencv

The built-in OpenCV line drawing function allows drawing a variety of lines. Unfortunately, it does not allow drawing a gradient line interpolating the colors at its start and end.

However implementing this on our own is quite easy:


#include <opencv2/imgproc.hpp>

using namespace cv;

void line2(Mat& img, const Point& start, const Point& end, 
                     const Scalar& c1,   const Scalar& c2) {
    LineIterator iter(img, start, end, LINE_8);

    for (int i = 0; i < iter.count; i++, iter++) {
       double alpha = double(i) / iter.count;
       // note: using img.at<T>(iter.pos()) is faster, but 
       // then you have to deal with mat type and channel number yourself
       img(Rect(iter.pos(), Size(1, 1))) = c1 * (1.0 - alpha) + c2 * alpha;
    }
}
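
If you are using OpenCV from Python, where LineIterator is not readily available, the same technique can be sketched with NumPy by sampling the line point by point (a sketch of the same idea, not a drop-in equivalent of the C++ version):

# Python sketch of the same idea, using NumPy instead of cv::LineIterator
import numpy as np

def line2(img, start, end, c1, c2):
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    # sample one point per pixel along the longer axis
    n = max(abs(end[0] - start[0]), abs(end[1] - start[1])) + 1
    xs = np.linspace(start[0], end[0], n).round().astype(int)
    ys = np.linspace(start[1], end[1], n).round().astype(int)
    for i, (x, y) in enumerate(zip(xs, ys)):
        alpha = i / n
        img[y, x] = c1 * (1.0 - alpha) + c2 * alpha  # linear color blend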

Introducing Sensors Unity

Sensors-Unity is a new lm-sensors GUI for the Unity desktop. It allows monitoring the output of the sensors CLI utility while integrating with the Unity desktop. This means there is no GPU/HDD support and no plotting.
If you need those, you are probably better served by psensor. However, if you just need an overview of the sensor readings and appreciate a clean UI, you should give it a shot.

Sensors Unity is available from this PPA.

It is written in Python3/GTK3 and uses sensors.py. You can contribute code or help translating via Launchpad.

Overview

In contrast to other applications, the interface is designed around being an application: instead of getting yet another indicator in the top-right, you get an icon in the launcher:

The user interface

The idea is that you do not need the sensor information all the time. Instead, you launch the app when you do. If you want to passively monitor some value, you can minimize the app while selecting the value to display in the launcher icon.

To get the data, libsensors is used, which means that you need to get lm-sensors running before you will see anything.

However, once the sensors command line utility works, you will see the same results in Sensors-Unity, as it shares the configuration in /etc/sensors3.conf.

Configuration

Unfortunately, configuring lm-sensors via /etc/sensors3.conf is quite poorly documented, so let's quickly recap the usage.

  • /etc/sensors3.conf contains the configuration for all sensors known to lm-sensors
  • however, every mainboard can use each chip in a slightly different way
  • therefore you can override /etc/sensors3.conf by placing your specific configuration in /etc/sensors.d/ (see this for details)
  • you can find a list of these board-specific configurations in the lm-sensors wiki
  • to disable a sensor, use the ignore statement:

    # ignore everything from this chip
    chip "acpitz-virtual-0"
     ignore temp1
     ignore temp2

  • to change a label, use the label statement:

    chip "coretemp-*"
     label temp1 "CPU Package"

Sensors-Unity Specific Configuration

Sensors-Unity allows using the Pango Markup Language for sensor labels. For instance, if you want “VAXG” instead of “CPU Graphics” to be displayed, you would write:

label in4 "V<sub>AXG</sub>"

In order not to interfere with other utilities and to allow per-user configuration of the labels/sensors, Sensors-Unity first tries to read ~/.config/sensors3.conf before continuing with the lm-sensors config lookup described above.
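
The resulting lookup order is roughly the following (a simplified sketch for illustration, not the actual Sensors-Unity code):

# simplified sketch of the config lookup order – not the actual Sensors-Unity code
import os.path

def find_sensors_config():
    candidates = [
        os.path.expanduser("~/.config/sensors3.conf"),  # per-user override
        "/etc/sensors3.conf",                           # system-wide lm-sensors config
    ]
    for path in candidates:
        if os.path.isfile(path):
            return path
    return None  # let libsensors fall back to its built-in defaults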

Introducing sensors.py

sensors.py is a new Python wrapper for libsensors of the lm-sensors project. libsensors is what you want to use to programmatically read the sensor values of your Linux PC instead of parsing the output of the sensors utility.

sensors.py is not the first wrapper – there are two alternatives, confusingly both named PySensors.

PySensors (ctypes) follows a similar approach to sensors.py by using ctypes. However, instead of exposing the C API, it tries to be an object-oriented (OO) abstraction, which unfortunately lacks many features and makes the mapping to the underlying API hard. Furthermore, it does not support Python3.

PySensors (extension module) does not use ctypes and thus is more efficient, but if you are writing a Python script, having to compile an extension module is probably worse than losing some performance when reading the values.
On the plus side, it supports Python3 and also offers some OO abstraction. The latter sits somewhere between the C API and proper OO: sensors_get_label(chip_name, feature) becomes ChipName.get_label(feature) instead of feature.get_label().

So what makes sensors.py immediately different is that it does not try to do any OO abstraction, but instead gives you access to the raw C API. It only adds minor pythonification: you don't need to mess with pointers, errors are converted to exceptions, and strings are correctly converted from/to UTF-8 for you.

However, working with the C API directly is tiresome at times – therefore there is also an optional iterator API, which is best shown by a demo:

import sensors

sensors.init()

for chip in sensors.ChipIterator("coretemp-*"):
    print(sensors.chip_snprintf_name(chip)+" ("+sensors.get_adapter_name(chip.bus)+")")
    
    for feature in sensors.FeatureIterator(chip):
        sfi = sensors.SubFeatureIterator(chip, feature)
        vals = [sensors.get_value(chip, sf.number) for sf in sfi]
        label = sensors.get_label(chip, feature)
        
        print("\t"+label+": "+str(vals))

sensors.cleanup()

result:

coretemp-isa-0000 (ISA adapter)
	Physical id 0: [38.0, 80.0, 100.0, 0.0]
	Core 0: [37.0, 80.0, 100.0, 0.0]
	Core 1: [35.0, 80.0, 100.0, 0.0]
	Core 2: [38.0, 80.0, 100.0, 0.0]
	Core 3: [36.0, 80.0, 100.0, 0.0]

For a more sophisticated example, see example.py in the repository.

Replacing your desktop laptop with an ITX workstation

If you use your laptop as a desktop replacement, you will at some point get an external display and a mouse/keyboard for more convenient usage.
At this point the laptop becomes merely a small case of non-upgradable components.

You could just as well replace your laptop with a real case of comparable size. This will make your PC not only easily upgradable, but also allow higher-end components while being quieter at the same time.

Continue reading Replacing your desktop laptop with an ITX workstation

Streaming the Screen on Android

In this post I want to discuss a way of getting the screen content of your Android device onto a TV or monitor. If you wonder why one might want to do such a thing – just think about playing some Android games with a Bluetooth gamepad, or watching a movie when your PC is not available.

Specifically, I want to introduce SlimPort, a feature of Nexus devices that is unfortunately not covered much in reviews.
Basically, SlimPort is DisplayPort over the Micro-USB connector of your device, allowing you to mirror its display.

But the future has arrived: we got Miracast!

One might wonder why one should go through the hassle of using an old-school HDMI cable.
After all, you can get a Chromecast stick for $35, and nowadays it also supports Miracast, so you can simply stream the image over WiFi.

Well, Miracast is all nice if all you need is to put up some slides without carrying every possible adapter with you. But as soon as you try to stream a movie or a game, you will hit its limitations.

Remember that Miracast works by grabbing the framebuffer and compressing it with H.264. While the encoding happens in hardware, it still takes some time and inevitably introduces compression artifacts. This means:

  • in games you get a noticeable lag – especially in FullHD
  • in movies you get noticeable artifacts – especially in FullHD
  • in both cases your battery will get drained by heavy WiFi and encoder usage

Going old-school

Going with the old-school cable, on the other hand, you get HDMI 1.4 transfer rates – up to 1080p at 60Hz – while saving the battery.

Configuring the second screen is quite straightforward in Android. As mirroring is your only option, there is actually nothing to configure. Once you connect the adapter, Android will set up your monitor based on its EDID information and transfer image and audio over HDMI.
In case you only want the image over HDMI, simply attach your speakers to the phone and Android will re-route the audio.
The days where you had to manually set up everything are over.

Furthermore, most adapters have a Micro-USB port, allowing you to charge your phone while using SlimPort.

Device Support

The downside is that most devices do not support SlimPort. The device list more or less boils down to:

  • Google Nexus 4/ 5
  • Google Nexus 7 (2013)
  • LG G2/ G3

Samsung devices go with the alternative MHL. Comparing the two, SlimPort has the bandwidth advantage – 5Gb/s vs. 3Gb/s for MHL – so it does not have to compress as much. However, both are clearly better than going wireless.