Fixing Random Freezes with Ubuntu 16.04 LTS, Intel Skylake and an Nvidia GPU

My Lenovo ThinkCentre m900 (10FHCTO1WW) with an Intel i7-6700 showed weird and random freezes from day 1 when I tried to install Mint 18 / Ubuntu 16.04 with any kernel newer than the 3.x series. After investigating for quite some hours, I gave up and installed Ubuntu 14.04 LTS on it. The device is certified for it, but the old version did not support all features and even some basic things such as audio did not work. At least the random freezes were gone and I could work with the machine. Now that 14.04 will stop receiving updates soon, I gave it another try and set up Mint 18.2 (Sonya). Unfortunately, the Lenovo machine froze again after a few minutes, filling up the log with error messages once more. 

I started the investigation again and found a different trace, which pointed to the graphics card. The important hint and solution came from Stack Overflow. Following a few other forum posts, it became clear that the Nvidia drivers do not play nicely with recent kernels for some specific Nvidia cards. So I followed the proposed steps and disabled the card completely. Just disabling the card in the BIOS and uninstalling the drivers was not enough; I also had to blacklist the modules of the nouveau kernel driver.

  1. Disable the Nvidia card in the BIOS and use the Intel onchip GPU
  2. Remove all Nvidia packages: 
    sudo apt-get remove "nvidia*" && sudo apt autoremove
  3. Blacklist the nouveau module (see the example entries after this list):  
    sudo vim /etc/modprobe.d/blacklist.conf
  4. Reboot
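
The concrete entries depend on your card and setup; as a sketch, blacklisting the nouveau driver in /etc/modprobe.d/blacklist.conf typically means adding lines like these:

    blacklist nouveau
    options nouveau modeset=0

Depending on the setup, the initramfs may also need to be regenerated with sudo update-initramfs -u so that the blacklist is honoured at boot.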

The card is not used any more and the freezes stopped.

I hope I do not have to remove this article again and that the system remains as stable as it has been for the last six hours.



Deploying MySQL in a Local Development Environment

Installing MySQL via apt-get is a simple task, but migrating between different MySQL versions requires planning and testing. A single central instance of the database system might therefore not be suitable when the MySQL version or project specific settings need to be switched quickly without interfering with other applications; such a setup can quickly become cumbersome. In this article, I will describe how any number of MySQL instances can be stored and executed from within a user's home directory.

Adapting MySQL Data and Log File Locations

Some scenarios might require running several MySQL instances at once; other scenarios involve sensitive data, where we do not want MySQL to write any data to non-encrypted partitions. This is especially true for devices which can easily get stolen, for instance laptops. If you use a laptop for developing your applications from time to time, chances are good that you need to store sensitive data in a database and have to make sure that this data is encrypted at rest.

This can be solved with full disk encryption, but this technique has several disadvantages. First of all, full disk encryption only utilises one password. This entails that several users who share a device also need to share this one password, which weakens the approach. Full disk encryption can also become an obstacle when the system needs to be rebooted, because the password has to be entered before the system comes up again, which increases the complexity further.

Transparent home directory encryption is much easier to use and can be selected out of the box during the setup procedure of many modern Linux distributions. We will use this encryption type for this article, as it is reasonably secure and easy to set up. Our goal is to store all MySQL related data in the home directory and run MySQL with normal user privileges.

Creating the Directory Structure

The first step is creating a directory structure for storing the data. In this example, the user name is stefan; please adapt it to your needs.
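
A minimal sketch of such a structure, assuming the layout used throughout this article (all directory names apart from MySQL-5.6-Conf are my assumption):

    mkdir -p /home/stefan/MySQL-5.6-Local/MySQL-5.6-Conf
    mkdir -p /home/stefan/MySQL-5.6-Local/MySQL-5.6-Data
    mkdir -p /home/stefan/MySQL-5.6-Local/MySQL-5.6-Logs
    mkdir -p /home/stefan/MySQL-5.6-Local/MySQL-5.6-Scripts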

Create a Configuration File

Make sure to use absolute paths and to utilise the directories we created before. Store this file as MySQL-5.6-Local/MySQL-5.6-Conf/my-5.6.cnf. The configuration is pretty self-explanatory.
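
As a rough sketch (the concrete paths and the port are assumptions and need to be adapted), the configuration file could look like this:

    [mysqld]
    user      = stefan
    port      = 3356
    basedir   = /usr
    datadir   = /home/stefan/MySQL-5.6-Local/MySQL-5.6-Data
    socket    = /home/stefan/MySQL-5.6-Local/MySQL-5.6-Data/mysql-5.6.sock
    pid-file  = /home/stefan/MySQL-5.6-Local/MySQL-5.6-Data/mysql-5.6.pid
    log_error = /home/stefan/MySQL-5.6-Local/MySQL-5.6-Logs/error.log

Choosing a non-standard port such as 3356 avoids collisions with a possibly installed system-wide MySQL instance on 3306.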

Stop the Running MySQL Instance

If you already have a running MySQL instance, make sure to shut it down. You can also disable MySQL from starting automatically.
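
On a systemd based system such as Ubuntu 16.04, and assuming the service is called mysql, this would roughly be:

    sudo systemctl stop mysql
    sudo systemctl disable mysql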

Setting up Apparmor

Apparmor protects sensitive applications by defining the directories they may write to. We need to update this configuration to suit our needs and make sure that the global profile for the central MySQL instance also includes the additional local configuration file. Edit the file /etc/apparmor.d/usr.sbin.mysqld first and make sure that the reference to the local file is not commented out.
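
On Ubuntu, the relevant lines near the end of the profile should look roughly like this (note that the leading # is part of the AppArmor #include directive, not a comment):

    # Site-specific additions and overrides. See local/README for details.
    #include <local/usr.sbin.mysqld>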

Now we need to add the directories in stefan's home directory to the local file by editing /etc/apparmor.d/local/usr.sbin.mysqld.
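
A sketch of the local snippet, assuming the directory layout created above (the paths are my assumption):

    /home/stefan/MySQL-5.6-Local/ r,
    /home/stefan/MySQL-5.6-Local/** rwk,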

An incorrect Apparmor configuration is often the cause of permission errors, which can be a pain. Make sure to reload the Apparmor service and, if you struggle with it, consider disabling it temporarily and check whether the rest works. Do not forget to turn it on again.
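
Reloading just the updated MySQL profile can be done for instance like this:

    # re-parse and replace the MySQL profile
    sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld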

Initialize the Local MySQL Instance

Now it is time to initialize the MySQL instance. In this step, MySQL creates all the files it needs in the data directory. It is important that the data directory is empty when you run the following command.
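
A sketch of the initialization with mysql_install_db, using the configuration file and data directory defined above (the paths are assumptions):

    mysql_install_db --defaults-file=/home/stefan/MySQL-5.6-Local/MySQL-5.6-Conf/my-5.6.cnf \
      --datadir=/home/stefan/MySQL-5.6-Local/MySQL-5.6-Data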

Note that this command is marked as deprecated. It works with MySQL 5.6 and MySQL 5.7, but may be removed in future versions.

Start and Stop the Instance

You can now start the MySQL instance with the following command:
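
A sketch using mysqld_safe with the user-defined configuration file (the path is an assumption):

    mysqld_safe --defaults-file=/home/stefan/MySQL-5.6-Local/MySQL-5.6-Conf/my-5.6.cnf &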

For your convenience, add a custom client configuration in your $HOME/.my.cnf and point it to the user defined socket.
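
A minimal client configuration in $HOME/.my.cnf could look like this (socket path and port as assumed above):

    [client]
    port   = 3356
    socket = /home/stefan/MySQL-5.6-Local/MySQL-5.6-Data/mysql-5.6.sock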

In addition, startup and shutdown scripts are useful as well. Place both scripts in the directory we created before and add execution permissions with chmod +x.
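
As a sketch, a start script (for instance MySQL-5.6-Scripts/start-mysql-5.6.sh, the name is an assumption) could look like this:

    #!/bin/bash
    # Start the local MySQL 5.6 instance with the user-defined configuration
    mysqld_safe --defaults-file=/home/stefan/MySQL-5.6-Local/MySQL-5.6-Conf/my-5.6.cnf &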

The stop script is similar.
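
Again a sketch, shutting the instance down via the socket defined above:

    #!/bin/bash
    # Shut down the local MySQL 5.6 instance via its socket
    mysqladmin --socket=/home/stefan/MySQL-5.6-Local/MySQL-5.6-Data/mysql-5.6.sock -u root -p shutdown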

Conclusion

The technique described above allows installing and running multiple MySQL instances from within the user's home directory. The MySQL instances run with user privileges and can utilise dedicated data and log file directories. As all the data is stored within the $HOME directory, we can easily apply transparent encryption to protect the data at rest.



Flashing a NanoPc T3 with DietPi

The NanoPC T3 is a 64-bit octa-core single board computer, quite similar to the famous Raspberry Pi boards. It is often referred to as the NanoPi T3 as well.

Hardware Specification

The single board computer has eight cores clocked at up to 1.4 GHz and 1 GB of DDR3 RAM. It offers a lot of nice interfaces; the specification below is taken from here.

Overview

The device offers quite a lot considering its small dimensions. The overview picture below is taken from here.

The device with the heat sink and attached cables is shown below.

Comparison with the Raspberry Pi Model 3B

It costs about twice as much as the Raspberry Pi 3, but comes with eight cores at 1.4 GHz instead of four cores at 1.2 GHz, Gigabit Ethernet instead of just 100 MBit and several additional interfaces. It has a dedicated power switch, supports soft poweroff and provides reset and boot buttons. It comes with an SD card slot instead of micro SD and has only two standard USB ports, but also one micro USB port. This port, however, is not for powering the device, but only for data.

Some Remarks First

The board can get quite warm, so I would recommend also buying the heat sink that fits directly onto the board. The wifi signal is rather weak as well, so I would recommend investing in the external antenna if the device is located in an area with low signal reception. Also, it requires an external 5V power source and does not provide a micro USB port for power like similar boards do.

Buying and Additional Information

The board can be obtained for $60 from here and there also exists a wiki page dedicated to the T3. The images are hosted on a one-click share hoster and the download is very slow. The files are also not that well organized and can easily be confused with those for other platforms offered by the same company.

  • Nano PC T3 ($60)
  • Heat sink ($1.99)
  • Power supply ($20)
  • SD card (~ $10)

Additionally there is shipping ($20 to Europe) and very likely also some customs duty to pay.

Initial Setup

The NanoPC T3 has an internal eMMC storage with 8 GB capacity. It comes pre-installed with Android, which is not really useful for my applications. Instead, there exist different OS images which can be obtained here. The wiki page documents how to create bootable SD cards with Windows and Linux, and there are also scripts offered which automate the process. Unfortunately, the scripts are not well documented and some of the links are already broken, which reduces the usability of the provided information. Also, as the images have to be downloaded from a share hoster, there is no way of verifying what kind of image you actually obtained. This is a security risk and not acceptable in many scenarios. Fortunately, there also exist alternative images which are more transparent to use.

By default, the device boots from the eMMC flash storage. By pressing the boot button in the lower right corner, we can also boot from the SD card. This is a nice feature, but if you want to reboot the system unattended, you need to replace the default operating system. In the course of this article, we are going to write an alternative Debian image to the flash memory and boot this OS automatically.

DietPi

DietPi is a Debian based distribution which claims to be an optimized and lightweight alternative for single board PCs. The number of supported devices is impressive and luckily the NanoPC T3 is also on the list. It comes with a set of nice features for the configuration and the backup of the system. DietPi can be downloaded here and the documentation is available here.

The following steps are required:

  1. Download the DietPi Image
  2. Write the image to the SD card
  3. Mount the SD card on your desktop and copy the DietPi image to the card
  4. Boot the NanoPC T3 from the card
  5. Flash the DietPi image to the eMMC
  6. Reboot
  7. Configure

Creating a Bootable SD Card

The first step involves creating a bootable SD card by writing the DietPi image to the card with dd. To do so, download the DietPi image to your local desktop and then write the file with dd. The process does not differ from other single board machines and is described here. The next step might seem a bit odd: after you have finished writing the SD card, mount it on your local desktop and copy the DietPi image to the tmp directory of the SD card. The reason we do this is that we need a running Linux system in order to flash the integrated eMMC of the T3. We then use the DietPi Linux to actually flash the eMMC of the T3, also with the DietPi image. By copying the image we save the time for downloading it again and have it right at hand in the next step.
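
As a sketch (the image file name, the device name /dev/sdX and the partition number are placeholders that must be adapted to your download and your card reader):

    # write the image to the SD card
    sudo dd if=DietPi_NanoPCT3.img of=/dev/sdX bs=4M && sync
    # mount the card's root partition and copy the image onto it for the next step
    sudo mount /dev/sdX2 /mnt
    sudo cp DietPi_NanoPCT3.img /mnt/tmp/
    sudo umount /mnt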

Boot the SD Card

Make sure the T3 is powered off and insert the SD card into the board. Hold down the boot button and flip the power switch. The T3 should then boot into the DietPi system. It is easier if you attach a monitor and a keyboard to the system for the further configuration. Alternatively, you can also configure the network settings in advance by mounting the SD card on the desktop and editing the configuration files there, but as we simply use this system for installing the actual operating system, this might be a bit too much effort. Press CTRL+ALT+F2 to switch to a new TTY and log in. The standard login for the DietPi system is the user root with the password dietpi.

First, create a backup of the original eMMC content, just in case anything goes wrong. Use fdisk to see the available drives.
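
Listing the drives works for instance like this (we are logged in as root on the DietPi system):

    fdisk -l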

On my device, the 16GB SD card shows up as /dev/mmcblk0 and the internal eMMC as /dev/mmcblk1. Use dd to create a backup of the whole eMMC like this:
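
A sketch of the backup, writing the image to root's home directory on the SD card (the output path is an assumption):

    dd if=/dev/mmcblk1 of=/root/emmc-backup.img bs=4M
    sync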

We now have a backup of the original content at the SD card and can proceed with the actual flashing.

Flashing the DietPi Image to the eMMC

In this step, we flash the DietPi image we copied to the SD card before to the eMMC and overwrite its default Android system. To do so, we again use dd:
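
Assuming the image was copied to the tmp directory of the SD card as described above (the file name is a placeholder; adapt the path to wherever you copied the image):

    dd if=/tmp/DietPi_NanoPCT3.img of=/dev/mmcblk1 bs=4M
    sync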

This may take a while, so be patient. After the image has been written, power off the T3 and remove the SD card. Now hold the boot button again and flip the power switch. This causes the T3 to initialize the new DietPi installation, this time from the internal flash memory. After this step, the system automatically boots from the eMMC flash without having to press the button.

As a result, we now only utilise the internal flash memory for running the OS, which is not only faster than the SD card, but also allows using the SD card as additional storage. The tool fdisk now shows the eMMC with the two partitions created by DietPi.


Configuration

DietPi comes with a few nice setup tools which make the installation process rather easy. After logging in, DietPi will guide you through the installation, but it expects a working Internet connection. You can add the SSID, the pre-shared key and additional information to the file /DietPi/dietpi.txt. The following steps are basic, and the setup needs to be completed, otherwise you will be bugged with the same menu after every reboot:

  1. The first step is to configure wireless networking. Add your network information to the file mentioned above and reboot. To do so, you need to abort the setup process, because following the menu did not work for me, as it only allows setting up a hotspot instead of connecting to an existing network.
  2. When you log in the next time, you will be greeted by the dietpi-software dialogue. You can install basic software components such as editors, build essentials etc.
  3. The system will now reboot and you are ready to go.
  4. Log in again, change the root password with passwd, add a new user with adduser $USER and add this user to the sudo group with adduser $USER sudo (see the commands below).
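
The commands for the last step look like this (replace stefan with your user name):

    passwd
    adduser stefan
    adduser stefan sudo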

You can re-open the configuration menus later by using dietpi-config for the basic setup and dietpi-software for adding new software. Of course for the latter you can always use apt.

Conclusions

The NanoPC T3 is an affordable single board computer with eight cores and 1 GB of DDR3 RAM, integrated wireless and Bluetooth interfaces, camera interfaces and many more features. Its small size renders it an ideal candidate for hardware and software projects based on Linux. The documentation could be improved, especially regarding the transparency of the images and the details of the installation procedure. Also, more RAM (as always) would be nice.



Switch the Git Clone Protocol from HTTPS to SSH

Gitlab offers several options for interacting with remote repositories: git, http, https and ssh. The first option – git – is the native transport protocol and does not encrypt the traffic. The same applies to http, rendering https and ssh the only feasible protocols if you commit and retrieve data via insecure networks. SSH and HTTPS are both available via the web interfaces of Github and Gitlab. In both systems you can simply copy and paste the clone URLs including the protocol. The following screenshot shows the Github version.

HTTPS

The simplest way to fetch the repository is to just copy the default HTTPS URL and clone it to the local drive. Git will ask you for the Github credentials.
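
For example (the Github user name is a placeholder; the repository name test-project is used throughout this article):

    git clone https://github.com/stefan/test-project.git ~/Projects/test-project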

You will be asked for the credentials every time you interact with the Github remote repository. With the credential cache helper enabled, git stores the credentials for 15 minutes per default. Instead of waiting that long, we can simply drop the cached credentials and proceed with an empty cache again.
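
Dropping the cached credentials works by terminating the credential cache daemon:

    git credential-cache exit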

To make our lives a little easier, we can store the username. In this example, we store this information locally, valid only for this cloned repository. The same settings can also be applied globally.
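
A sketch, executed inside the cloned repository (the user name is a placeholder):

    cd ~/Projects/test-project
    git config credential.username "stefan"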

Git will store that information locally (i.e. inside the repository) in the config file:
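
The relevant part of ~/Projects/test-project/.git/config then looks roughly like this (the user name is a placeholder):

    [credential]
            username = stefan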

For storing the password temporarily, you can re-activate the cache again and set a timeout.
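
Re-activating the cache with a timeout of one hour (3600 seconds) works like this:

    git config credential.helper 'cache --timeout=3600'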

Git will now store the password for your Github account for one hour. Although this is convenient, it is not an optimal solution. SSH keys are more secure and more convenient, as they do not expose your personal password and can be set up individually per repository. In addition, you can protect your keys with a passphrase and add a second factor.

SSH

The Github documentation is great; you can find details on how to create SSH keys here. All you need to do is associate your public key with your remote repository on Github or Gitlab, as explained for instance here. Some general tips for working with keys in a secure way can be found here. As git stores the information about how you access your repositories in the local repository config file, you can easily modify this information to fit your needs. For automating SSH access to specific repositories, you can also modify the SSH configuration of your local user account in ~/.ssh/config.

For instance, if we cloned the repository using the HTTPS method and would rather switch to SSH for the reasons mentioned above, two steps are necessary:

  1. Add a SSH configuration for the host
  2. Adapt the git config

So first, we add a new entry for the SSH authentication with Github in the file ~/.ssh/config.
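
A sketch of such an entry (the key file name is an assumption):

    Host github-test-project
        HostName github.com
        User git
        IdentityFile ~/.ssh/id_rsa_test_project
        IdentitiesOnly yes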

We define a host alias github-test-project for this individual repository, specify which SSH key to use and state that we only want to authenticate with the key. Now that this is settled, we need to tell git to use this connection information. This is done in the local git repository configuration ~/Projects/test-project/.git/config. The file initially looks like this:
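
A sketch of the initial file, assuming the repository was cloned via HTTPS as above (user name and URL are placeholders):

    [core]
            repositoryformatversion = 0
            filemode = true
            bare = false
            logallrefupdates = true
    [remote "origin"]
            url = https://github.com/stefan/test-project.git
            fetch = +refs/heads/*:refs/remotes/origin/*
    [branch "master"]
            remote = origin
            merge = refs/heads/master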

All we need to do is change the remote repository data to incorporate the SSH connection we defined in the SSH config. Just replace the url target with the SSH connection definition:
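
The changed remote section then uses the host alias defined above (user and repository name are placeholders):

    [remote "origin"]
            url = github-test-project:stefan/test-project.git
            fetch = +refs/heads/*:refs/remotes/origin/*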

Note the colon and that we omitted the username before the SSH host; this information will be read from the SSH config. Also note that the repository needs to be initialized so that we have a master branch.



Switching Kernels: Using Python 2.7 and Python 3.5 in Jupyter Notebooks

Jupyter Notebooks are a great way of working with Python interactively. The integration of Python code into documents is very useful for reports or for writing executable documentation of algorithms and functions. The text can be structured and exported in various formats. With the ever increasing popularity of Python, driven by the data science hype, more and more libraries become available. Although Python 3 is considered to be the future of Python, consensus on the question of Python 2.7 vs Python 3.5 has not been reached yet. There are quite a few differences, and as Python 3 is not backwards compatible, code cannot be executed with both versions without modification. When you install Jupyter Notebooks via Anaconda, Python 3 is recommended, but Python 2.7 packages also exist.

As there is a large number of libraries which have not yet been ported to Python 3, it can be useful to switch between the language versions within a Jupyter Notebook. The following example assumes that you have both Python versions already installed.

Installing a new Kernel

In Jupyter Notebooks, the kernel is responsible for executing Python code. When you install the Anaconda distribution for Python 3, this version also becomes the default for the notebooks. In order to enable Python 2.7 in your notebooks, you need to install a new kernel like this:
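
With Anaconda, a sketch of this is creating a Python 2.7 environment and registering it as a kernel (the environment name py27 and the display name are my choice):

    conda create -n py27 python=2.7 ipykernel
    source activate py27
    python -m ipykernel install --user --name python2 --display-name "Python 2.7"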

Restart Jupyter to activate the new Python 2.7 kernel.

Switching Kernels

After restarting Jupyter, you can easily select the kernel, and thereby the Python version used to run the code, from the Kernel menu.

