Recently the Meson Build System gained some momentum. It is time to stop that. Not that Meson is a bad piece of software – on the contrary, it is quite well designed. Still, it makes building C/C++ applications worse by (quoting xkcd) basically creating this:
It sets out to create a cross-platform, more readable and faster alternative to autotools. But CMake already solves this.
You might say that CMake is ugly, but note that the CMake 2.x you might have tried is not the same CMake 3.x that is available today. Many patterns have improved and are now both more logical and more readable.
Nowadays the difference between Meson and CMake is just a matter of syntactic preference. The Meson authors seem to agree here.
The actual criteria for selecting a build system, however, should be tooling support and community adoption. CMake easily wins here:
After the introduction of the server mode, it got native support in QtCreator, CLion, Android Studio (NDK) and even Microsoft's Visual Studio. Native means that you do not have to generate any intermediate project files; the CMakeLists.txt is used directly by the IDE.
On the adoption side we have e.g. KDE, OpenCV, zlib, libpng, freetype and, as of recently, Boost. These projects using CMake not only guarantees that you can easily use them, but also that you can include them in your build via add_subdirectory so that they become part of your project. This is especially useful if you are cross-compiling – for instance to a Raspberry Pi.
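For illustration, vendoring a dependency this way might look roughly like the following (a minimal sketch – the directory layout, project and target names are made up, and the exact target to link against depends on the dependency's own CMakeLists.txt):

cmake_minimum_required(VERSION 3.5)
project(myapp C)

# build the vendored dependency as part of this project
add_subdirectory(external/zlib)

add_executable(myapp main.c)
# link against the target defined by the dependency's CMakeLists.txt
target_link_libraries(myapp zlib)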
On the other hand, reinventing a wheel that is tailored to the needs of a specific community (GNOME) means that it will fall behind and eventually die. This is what is currently happening to the Vala language, which had a similar genesis to Meson.
The Meson devs might object that Meson generates build files that run faster on a Raspberry Pi. However, if your cross-compiling setup works, you do not need that. And honestly, that particular improvement could also have been achieved by providing a patch to the CMake Ninja generator.
Addendum 15-06-2018: A new guide for CMake called CGold can be found here; it is of comparable quality to the Meson docs.
Addendum 4-1-2018: Some comments (rightfully) note that Meson generally has better documentation and avoids some of CMake's pitfalls. However, this is mostly because Meson has not been around long enough for the recommended way of doing things to change, nor has it seen such widespread use as CMake yet (think of corner cases).
But even if you argue that this is precisely the point why you should use Meson, I would argue that improving the existing documentation in CMake and adding more educational warnings is easier than writing something from scratch.
Addendum 29-02-2019: Part of the perceived superiority of Meson was that it simply has not been in use long enough for its flaws to be noticed – contrary to the long-lasting legacy of CMake. With its adoption, things like having only one global namespace start to get attention – things that are already solved in CMake.
Snap is a new packaging format introduced by Ubuntu as a successor to dpkg, aka the Debian package format. It offers sandboxing and transactional updates, and thus is a competitor to the Flatpak format and resembles Docker images.
As with every new technology the weakest point of working with snaps is the documentation. Your best bet so far is the snappy-playpen repository.
There are also some rough edges regarding desktop integration and Python interoperability, so this is what this post is about.
I will introduce some quirks that were needed to get teatime running, which is written in Python3 and uses Unity and GTK3 via GObject introspection.
The most important thing to be aware of is that snaps are similar to containers in that each snap has its own rootfs and only restricted access outside of it. This is basically what the sandboxing is about. However a typical desktop application needs to know quite a lot about the outside world:
It must know which theme the user currently uses, and after that it also needs to access the theme files.
For saving anything it needs access to /home
If it is to access the internet, it needs system-level access as well, e.g. to query whether there actually is an active internet connection
To declare that we want to write to home, play back sound and use Unity features, we use the plugs keyword.
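A sketch of the relevant snapcraft.yaml excerpt (illustrative – the exact interface names, especially for sound, depend on the snapcraft/snapd version, and teatime's actual packaging may differ):

apps:
  teatime:
    command: teatime
    plugs: [home, unity7, pulseaudio]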
However, we must also tell our app to look for the corresponding libraries inside its snap instead of in the system paths. For this, one must change quite a few environment variables manually. Fortunately, Ubuntu provides wrapper scripts that take care of this for us. They are called desktop-launchers.
To use the launcher that configures the GTK3 environment, we have to extend the teatime part.
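A sketch using the desktop-gtk3 helper part and its desktop-launch wrapper from Ubuntu's snapcraft desktop-helpers (part and command names as used at the time – teatime's actual snapcraft.yaml may differ):

apps:
  teatime:
    command: desktop-launch teatime
    plugs: [home, unity7, pulseaudio]

parts:
  teatime:
    after: [desktop-gtk3]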
After this teatime will start, but the paths still have to be fixed. Inside a snap “/” still refers to the system root, so all absolute paths must be prefixed with $SNAP.
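For example (a hypothetical file path, just to illustrate the prefixing):

# wrong inside the snap – this points at the host's root filesystem
ICON=/usr/share/teatime/icon.svg
# correct – resolve the (hypothetical) file inside the snap's own rootfs
ICON="$SNAP/usr/share/teatime/icon.svg"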
Actually, I think the design of Flatpak is more elegant in this regard: “/” points to the local rootfs and one does not have to change absolute paths. To bring in the system parts, Flatpak uses bind mounts.
Conclusions
Once you get the hang of how snaps work, packaging becomes quite straightforward. However, there are currently still some drawbacks:
the snap package results in 120MB compared to a 12KB deb. This is actually a feature, as shipping all the dependencies makes the snap installable on every Linux distribution. However, I hope that we can get this down by introducing shared frameworks (like GTK3, Python) that do not have to be included in the app package.
[Update: fixed] Due to another issue, your snap has only the C locale available and thus will not be localized.
[Update: fixed] Unity desktop notifications do not work. You will get a DBus exception at the corresponding call.
[Update: fixed] The shipped .desktop file is not hooked up with the system, so you can only launch the app via the command line.
UEFI is the successor to BIOS as the interface for communicating with the firmware on your mainboard.
While the first BIOS was released with the IBM-PC in 1981, the first UEFI version (EFI 2.0) was released 25 years later in 2006 building upon the lessons learned in that timespan. So UEFI is without any doubt the more modern solution.
The user-visible advantages of using UEFI instead of BIOS are
you can reboot to the firmware (BIOS) setup directly from the grub screen – without the need to press buttons during startup
You could reinstall both Windows and Ubuntu to get UEFI. However, it is also possible to convert existing installations of both on the fly – without a backup/restore cycle. You should still do a backup in case something goes wrong, though.
Prerequisites
Only the 64-bit versions of Windows support UEFI. Therefore this guide assumes that you run the 64-bit versions of both Windows and Ubuntu.
Furthermore, verify the following items before you continue – otherwise you will not be able to finish the conversion. Use GParted in case you do not have enough space before the first or after the last partition.
250MB of free space in front of the first partition
Typically Windows 8 creates a 350MB system partition upon installation. This space can be reclaimed for a 100MB EFI partition and a new 100MB Windows system partition.
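The MBR to GPT conversion itself can be done from an Ubuntu live session with gdisk (a sketch – replace sdX with your disk and double-check the result before writing; this is the conversion step referred to below):

sudo apt-get install gdisk
sudo gdisk /dev/sdX
# gdisk reads the existing MBR and builds an equivalent GPT in memory;
# review the table with 'p', then write it to disk with 'w'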
Continue with the following steps from the article. Note that we have skipped steps 1-4, as we used Ubuntu to convert the disk to GPT.
We have now created an EFI partition and a new EFI-compatible Windows system partition, and we have installed the Windows bootloader to the EFI partition. Your Windows installation should now start again.
At this point you could also perform an upgrade to Windows 10, as the upgrade would erase grub from the EFI partition anyway.
Next we are going to install grub to the EFI partition and make it manage the boot.
Enter an Ubuntu chroot
As we cannot directly boot our Ubuntu installation, we will instead boot from the Ubuntu USB drive and then switch to the installed Ubuntu.
To do the switch, we have to set up and enter a chroot as follows:
# mount the Ubuntu root partition and the EFI partition into it
sudo mount /dev/sdXY /mnt
sudo mount /dev/sdX1 /mnt/boot/efi
# make the device, sysfs and procfs trees available inside the chroot
sudo mount -o bind /dev /mnt/dev
sudo mount -o bind /sys /mnt/sys
sudo mount -t proc /proc /mnt/proc
# provide the current mount table and DNS configuration
sudo cp /proc/mounts /mnt/etc/mtab
sudo cp /etc/resolv.conf /mnt/etc/resolv.conf
# finally, enter the chroot
sudo chroot /mnt
where sdXY is the partition holding your Ubuntu system (e.g. sda5) and sdX1 is the EFI partition created earlier.
Convert Ubuntu to UEFI
Inside your Ubuntu installation we have to replace grub for BIOS (aka grub-pc) with grub for UEFI (aka grub-efi).
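On the Ubuntu releases of that era this meant installing the signed UEFI packages, roughly as follows (a sketch – the exact package names may differ between releases):

# signed grub, signed kernel and the shim bootloader (assumed package set)
sudo apt-get install grub-efi-amd64-signed linux-signed-generic shim-signed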
This installs signatures for grub and the kernel, which are used to verify their integrity at boot. Furthermore, we install shim, which is a passthrough bootloader that translates from the Microsoft signatures on your mainboard to the signatures by Canonical used to sign grub and the kernel (see this for details).
Next we finally install grub to the EFI partition.
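A sketch of the commands (run inside the chroot; with the signed grub package this also registers the UEFI boot entry):

sudo grub-install --target=x86_64-efi
sudo update-grub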
Now you can enable Secure Boot in your BIOS and benefit from it. Note that some BIOS implementations additionally require you to select the trusted signatures. Look out for an option called “Install Default Secure Boot keys” or similar to select the Microsoft signatures.
There has been a firmware update for the Crucial MX100 to MU02. In case you are running Ubuntu, there is an easy way to perform the update without using a CD or USB stick.
As the firmware comes in the form of an ISO image containing Tiny Core Linux, we can instruct grub2 to boot directly from it. Here is how:
append the following to /etc/grub.d/40_custom:
menuentry "MX100 FW Update" {
set isofile="/home/<USERNAME>/Downloads/MX100_MU02_BOOTABLE_ALL_CAP.iso"
# assuming your home is on /dev/sda3 ATTENTION: change this so it matches your setup
loopback loop (hd0,msdos3)$isofile
linux (loop)/boot/vmlinuz libata.allow_tpm=1 quiet base loglevel=3 waitusb=10 superuser rssd-fw-update rssd-fwdir=/opt/firmware rssd-model=MX100
initrd (loop)/boot/core.gz
}
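After saving the file, regenerate the grub configuration so that the new entry shows up in the boot menu, then reboot and select “MX100 FW Update”:

sudo update-grub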
Note that this is actually much “cleaner” than using Windows, where you have to download 150MB of the Crucial Storage Executive software, which is actually a local webserver written in Java (urgh!). But all it can do is display some SMART monitoring information and automatically perform the above steps on Windows.
Sensors-Unity is a new lm-sensors GUI for the Unity desktop. It allows monitoring the output of the sensors CLI utility while integrating with the Unity desktop. This means there is no GPU/HDD support and no plotting. If you need those, you are probably better served by psensor. However, if you just need an overview of the sensor readings and appreciate a clean UI, you should give it a shot.
In contrast to other applications, the interface is designed around being an application: instead of getting another indicator in the top-right, you get an icon in the launcher:
The user interface
The idea is that you do not need the sensor information all the time. Instead you launch the app when you do. If you want to passively monitor some value you can minimize the app while selecting the value to display in the launcher icon.
To get the data, libsensors is used, which means that you need to get lm-sensors running before you will see anything.
However, once the sensors command-line utility works, you will see the same results in Sensors-Unity, as it shares the configuration in /etc/sensors3.conf.
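If lm-sensors is not set up yet, the usual way to get there is (a sketch):

sudo apt-get install lm-sensors
sudo sensors-detect   # probe for sensor chips, answering the interactive questions
sensors               # verify that readings show up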
Configuration
Unfortunately, configuring lm-sensors via /etc/sensors3.conf is quite poorly documented, so let's quickly recap the usage:
/etc/sensors3.conf contains the configuration for all sensors known by lm-sensors
however every mainboard can use each chip in a slightly different way
therefore you can override /etc/sensors3.conf by placing your specific configuration in /etc/sensors.d/ (see this for details)
you can find a list of these board specific configurations in the lm-sensors repository
to disable a sensor use the ignore statement
#ignore everything from this chip
chip "acpitz-virtual-0"
ignore temp1
ignore temp2
to change the label use the label statement
chip "coretemp-*"
label temp1 "CPU Package"
Sensors-Unity Specific Configuration
Sensors-Unity allows using the Pango Markup Language for sensor labels. For instance if you want “VAXG” instead of “CPU Graphics” to be displayed, you would write:
label in4 "V<sub>AXG</sub>"
In order not to interfere with other utilities and to allow per-user configuration of the labels/sensors, Sensors-Unity first tries to read ~/.config/sensors3.conf before continuing with the lm-sensors config lookup described above.
If you use your laptop as a desktop replacement, you will at some point get an external display and a mouse/keyboard for more convenient usage.
At this point the laptop becomes merely a small case of non-upgradable components.
Now you could just as well replace your laptop with a real case of comparable size. This will make your PC not only easily upgradable, but also allow higher-end components while being quieter at the same time.
This article is about how to securely configure the machine where your Nextcloud/Owncloud instance will be running. Even if you set up your connection to Owncloud in a secure way, your data can still be compromised by exploiting security flaws in the underlying architecture.
In the following, we will specifically cover the underlying software stack and brute-force password attacks.
Probably everyone has encountered a package in Ubuntu that was not the newest released version, although for some reason one needed the newest one. The first step is to search for a PPA with the desired version. But what if there is no such PPA, or you want to build the version yourself? This is where this guide comes in. Note, however, that this is not aimed at ordinary users – you need some experience with programming/compiling to successfully build a package.
Before you start
Before you start make sure that you have source packages enabled in your software sources.
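On the command line this amounts to uncommenting the deb-src lines (a sketch – the Software & Updates GUI achieves the same):

sudo sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list
sudo apt-get update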
Next you obviously need the upstream source tar-ball of the new program, which should look something like <packagename>-<version>.tar.gz.
Download this tar-ball to a new directory <somedir> and extract it there.
Updating Package info
For the following commands I assume you are in the previously created directory <somedir>.
First we need to get the old version of the source package
apt-get source <packagename>
This will download and extract the old source package into <packagename>-<oldversion>.
Now we need some helper scripts to perform the upgrading as well as the build-time dependencies of the package
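On Ubuntu that amounts to roughly (a sketch of the usual packages):

sudo apt-get install dpkg-dev devscripts   # provides dpkg-buildpackage, uupdate, dch, debuild
sudo apt-get build-dep <packagename>       # install the build-time dependencies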
Next change into the extracted sources of the old package and update the packaging
cd <packagename>-<oldversion>
uupdate -v <newversion> ../<packagename>-<newversion>.tar.gz
# change into the extracted new package
cd ../<packagename>-<newversion>
# update version info
dch -l ~ppa -D $(lsb_release -sc)
To trigger a rebuild of the program simply execute
dpkg-buildpackage
Uploading your version to a PPA
To upload a package to a PPA, you first need to sign it to prove that you are the author. To do this, execute the following in the <packagename>-<newversion> directory:
debuild -S
Furthermore you need the upload tool dput to actually perform the uploading
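A sketch of installing dput and uploading (replace the PPA name; the exact _source.changes file name is printed by debuild -S):

sudo apt-get install dput
dput ppa:<user>/<ppa> <packagename>_<newversion>~ppa1_source.changes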
update 24.04.2017 – include Subject Alternative Name field
update 20.12.2017 – discuss Certbot as an alternative
While the Nextcloud manual suggests enabling SSL, it unfortunately does not go into detail on how to get a secure setup. The core problem is that the default SSL settings of Apache are not sane, in the sense that they do not enforce strong encryption. Furthermore, the default certificate will not match your server name and will produce errors in the browser.
In the following, a short guide on how to manually set up a secure Apache 2.4 server for Nextcloud is presented.
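To give an idea of what is meant, a hardened vhost might look roughly like this (a hedged sketch only, not the guide itself – the host name and certificate paths are placeholders, and the cipher/protocol choices age quickly and should be checked against current recommendations):

<VirtualHost *:443>
    ServerName cloud.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/cloud.example.com.pem
    SSLCertificateKeyFile /etc/ssl/private/cloud.example.com.key
    # disable legacy protocols and prefer strong ciphers
    SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
    SSLHonorCipherOrder on
    SSLCipherSuite HIGH:!aNULL:!MD5:!3DES
</VirtualHost>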
update 27.10.2018 – use TWRP instead of CWM (discontinued)
update 14.10.2017 – new instructions to set up udev rules
update 26.02.2016 – instructions for Android 6 Marshmallow
The Big Picture
Android consists of three parts relevant to rooting
the bootloader
recovery system
main system
Typically only the main system is running, that is the Linux kernel, the launcher, the phone app etc. When we talk about rooting, it means we want to add an additional app to the main system which has access to secured parts of the system and acts as a gatekeeper for other apps that also want to get access.
The problem is that the secured parts of the system are locked down – otherwise they would not be secure. This means that we cannot simply install that app (e.g. an apk) from within the main system.
Therefore we have to go one level down. This is where the recovery system is. Typically you do not see it, as it is only active when the main system can not run – either because a system update is installed or because you do a factory reset.
As the recovery system can do a full system update, it also has access to the secured parts of the main system – exactly what we need.
The stock recovery system obviously does not allow altering the main system – otherwise everybody could get your private data if you lose your phone.
So we need to replace it as well. But before that we have to talk about the bootloader.
The bootloader is a tiny piece of software which decides whether to start the recovery or the main system (or another main system, like Ubuntu Phone).
In the default configuration it only starts systems that it knows and trusts. In this configuration the bootloader is called locked.
Although this prevents malicious software from changing the phone and spying on us, it also prevents us from replacing the recovery system. By the way, this concept is also coming to the PC, where it is called UEFI Secure Boot.
Here is a graphical overview of the Android components:
So what we need to do in order to get root access is
unlock the bootloader
replace the recovery system
install a superuser app
Note that unlocking the bootloader also allows attackers to circumvent any of the Android security features (PIN etc.): it becomes possible to access all the files on the device using a different recovery system (unless userdata is encrypted). Therefore Android will wipe all userdata when the bootloader state is changed from locked to unlocked.
So if you lose your unlocked device or it gets stolen, you better hope the thief is not tech savvy.
Preparations
First you need to install the fastboot binary to be able to perform low-level communication with the device
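On Ubuntu this is roughly (a sketch – package names have changed between releases; older ones shipped android-tools-fastboot instead):

sudo apt-get install fastboot android-sdk-platform-tools-common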
The android-sdk-platform-tools-common package most importantly contains a whitelist (/lib/udev/rules.d/51-android.rules) with devices to which users can send commands over USB, so you do not have to run fastboot as root.
Now you have to reboot into fastboot mode. Usually there is a key combination you have to press on startup.
Remember this key combination, as you will need it a few more times.
Samsung devices, however, like the Galaxy S3, do not support fastboot mode – instead they have a download mode, which uses a proprietary Samsung protocol. To flash those you have to use the Heimdall tool. While this article does not cover the heimdall CLI calls, the general discussion still applies.
Unlocking the Bootloader
last warning: this will wipe all user data on the device
For Google devices, like a Nexus 4 or Nexus 7, you just do
fastboot oem unlock
If you have a Sony Xperia device, like an Xperia Z, you additionally have to request an unlock key and then do
fastboot oem unlock 0x<KEY>
where <KEY> is the key you obtained.
Using AutoRoot to install SuperSU
There are several superuser apps to choose from for Android 4 and below. However, the only superuser app working on Android 5/Lollipop and above is SuperSU by Chainfire.
As there are devices like the Nexus 5X shipping with Android 6/Marshmallow, I will describe this method first.
Chainfire created an “installer” called AutoRoot that includes the fastboot utility and will perform the unlocking step described above. However if you have read this far, you probably also want to understand the rest of the process.
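The essential step AutoRoot performs is roughly (a sketch – the image path and name depend on the device-specific download):

fastboot boot image/CF-Auto-Root-<device>.img   # device-specific image from the AutoRoot download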
The command above will not flash anything on your device; it just uploads the image and immediately starts it. The image contains a script that modifies the main system (changing the startup to get around SELinux) and installs the superuser app.
If everything goes well, you can now just reboot your phone and you are done.
You could lock your bootloader again now to make your device more secure. However, the next Android update will remove root again, and repeating the rooting procedure will wipe userdata – so you have to balance easy security updates against the risk of your device being stolen. For the latter case you still have the option of enabling encryption of userdata, though.
Installing OTA updates
Android over-the-air (OTA) updates contain only the changes to the current system. In order to verify that the update succeeded, Android computes a checksum of the patched system and reverts to the old state if it does not match.
As SuperSU has changed the boot image to start itself, the update will obviously fail. So to install an OTA update you will have to grab a factory image and restore the boot partition using the included boot.img:
fastboot flash boot boot.img
After this you will have to patch the boot partition again using the procedure described above.
Also note that if you use apps that change the system partition (like AdAway that changes the hosts file), you will have to revert those changes as well in order for the OTA update to succeed.
Optional: Replacing the Recovery System
If you want some advanced features, like backing up all your installed apks, you can permanently replace the recovery image on your device. However this will most likely prevent you from installing OTA updates.
There are two prominent alternative recovery systems with the ability to install apps
I would recommend getting Superuser by CWM, as it is open source and also nag-free, since there is no “pro” version of it. There is even a pull request which might make it work with Android 5 in the future.
To install the app, we need to get this zip archive and copy it to the device. Then we reboot into fastboot mode and select “Recovery Mode” to get to the recovery system. Once in recovery mode, select
install zip -> choose zip from /sdcard
then browse and select the “superuser.zip” you just copied.
Once installed select
Go Back -> reboot system now
Once the system has started you should have a “Superuser” App on your device. Congratulations, you are done.