Flashing a OnePlus One from CM to LineageOS

As Cyanogen Inc. closed down its operations in December 2016, CyanogenMod was affected too and my OnePlus One (OPO) did not receive updates anymore. This is not ideal, as old bugs remain and new ones will not be fixed. For instance, my phone often did not reconnect to the 4G network when the wifi connection was lost, which was very annoying. For this reason, it was about time to upgrade to a new OS: LineageOS.

LineageOS

LineageOS is a fork of CyanogenMod and continues the quite successful project for our benefit. LineageOS 14 is compatible with Android 7.1 and very easy to install. These are the steps I had to follow.

How to Upgrade

  • Create a backup with Helium. Use the Chrome Helium app if the app on your mobile phone refuses to start.
  • Download and install adb
  • Download the TWRP custom recovery: https://twrp.me/Devices/ 
  • Download LineageOS. The codename for the OPO is bacon.
  • Download the Google apps mini distribution (stock is too large) here.
  • Enable developer tools and connect the phone with a USB cable
  • Reboot the device with adb:  ./adb reboot bootloader 
  • Check if the device is recognized:  ./fastboot devices 
  • Enable OEM unlock:  fastboot oem unlock 
  • Flash the custom recovery:  fastboot flash recovery twrp-x.x.x-x-bacon.img 
  • Reboot into the new recovery: with the device powered down, hold the Volume Down and Power buttons.

Copy the files to the device
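
If the zip files are not yet on the phone, they can be pushed with adb while the device is booted into TWRP. The file names below are placeholders; use the versions you downloaded:

    ./adb push lineage-14.1-xxxxxxxx-nightly-bacon-signed.zip /sdcard/
    ./adb push open_gapps-arm-7.1-mini-xxxxxxxx.zip /sdcard/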

Install both zip files by selecting first the LineageOS and then the Google Apps zip file.

That's it. Reboot and begin with the setup or restore the backup.

Update 28.09.2017

The problem that the phone would not reconnect to 3G/4G after losing the wifi signal still persisted with LineageOS. A friend recommended flashing the firmware of the device. Installing the version 2016_1-25_.4.0.1.c7-00011 downloaded from here solved the issue for now. No more connection problems so far.



Verifying Replication Consistency with Percona’s pt-table-checksum

Replication is an important concept for improving database performance and security. In this blog post, I would like to demonstrate how the consistency between a MySQL master and a slave can be verified. We will create two Docker containers, one for the master and one for the slave.

Installing the Percona Toolkit

The Percona Toolkit is a collection of useful utilities, which can be obtained for free from the company’s portal. The following commands install the prerequisites, download the package and eventually install it.
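
The following sketch is for a Debian based system; the prerequisite package list is an assumption and may vary:

    sudo apt-get install -y libdbi-perl libdbd-mysql-perl libio-socket-ssl-perl libterm-readkey-perl
    wget percona.com/get/percona-toolkit.deb
    sudo dpkg -i percona-toolkit.deb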

Setting up a Test Environment with Docker

The following command creates and starts a Docker container for the master. Note that these are minimal examples and are not suitable for a production environment.
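
A minimal sketch; the container name, image tag and the empty root password are assumptions chosen to match the commands used later:

    docker run -d --name mysql-master -p 3306:3306 \
      -e MYSQL_ALLOW_EMPTY_PASSWORD=yes \
      mysql:5.7 --server-id=1 --log-bin=mysql-bin --binlog-format=ROW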

Get the IP address from the master container:
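
Assuming the container name from above:

    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mysql-master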

You can connect to this container like this and verify the server id:
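
Assuming the master got the address 172.17.0.2:

    mysql -u root -h 172.17.0.2 -e "SHOW VARIABLES LIKE 'server_id';"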

We repeat the command for the slave, but use a different id, port and name:
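
Again a sketch with assumed names:

    docker run -d --name mysql-slave -p 3307:3306 \
      -e MYSQL_ALLOW_EMPTY_PASSWORD=yes \
      mysql:5.7 --server-id=2 --log-bin=mysql-bin --binlog-format=ROW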

For simplicity, we did not use Docker links, but will rather use IP addresses assigned by Docker directly.

Replication Setup

First, we need to set up a user with replication privileges. This user will connect from the slave to the master.
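
A minimal sketch, to be executed on the master; user name and password are placeholders:

    CREATE USER 'replication'@'%' IDENTIFIED BY 'replication';
    GRANT REPLICATION SLAVE ON *.* TO 'replication'@'%';
    FLUSH PRIVILEGES;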

Now log into the slave container and add the connection details for the master:
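
A sketch of the slave configuration; the log file name and position are examples and must be taken from SHOW MASTER STATUS on the master:

    CHANGE MASTER TO
      MASTER_HOST='172.17.0.2',
      MASTER_USER='replication',
      MASTER_PASSWORD='replication',
      MASTER_LOG_FILE='mysql-bin.000003',
      MASTER_LOG_POS=154;
    START SLAVE;
    SHOW SLAVE STATUS\G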

Now our simple slave setup is running.

Get some test data

Let's download the Sakila test database and import it into the master. It will be replicated immediately.
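
A sketch of the import; the download location is the one documented by MySQL:

    wget https://downloads.mysql.com/docs/sakila-db.tar.gz
    tar xzf sakila-db.tar.gz
    mysql -u root -h 172.17.0.2 < sakila-db/sakila-schema.sql
    mysql -u root -h 172.17.0.2 < sakila-db/sakila-data.sql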

Verify that the data is on the slave as well:
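
For instance by counting the films on the slave:

    mysql -u root -h 172.17.0.3 -e "SELECT COUNT(*) FROM sakila.film;"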

After our setup is completed, we can proceed with Percona pt-table-checksum.

Percona pt-table-checksum

The Percona pt-table-checksum tool requires the connection information of the master and the slave in a specific format. This is called the DSN (data source name), which is a comma-separated string. We can store this information in a dedicated database called percona in a table called dsns. We create this table on the master. Note that the data gets replicated to the slave within the blink of an eye.
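
A sketch of the table, following the layout suggested by the Percona documentation:

    CREATE DATABASE percona;
    CREATE TABLE percona.dsns (
      id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
      parent_id INT,
      dsn VARCHAR(255) NOT NULL
    );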

The next step involves granting permissions on both the slave and the master.

The percona user is needed to run the script. Note that the IP address is this time from the (Docker) host, having the IP 172.17.0.1 by default. In real world scenarios, this script would either be run on the master or on the slave directly.
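
A minimal sketch, to be executed on both instances; the password is a placeholder and the privileges are far too generous for anything but a local test:

    CREATE USER 'percona'@'172.17.0.1' IDENTIFIED BY 'percona';
    GRANT ALL PRIVILEGES ON *.* TO 'percona'@'172.17.0.1';
    FLUSH PRIVILEGES;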

Now we need to add the information about the slave to the table we created. The Percona tool could also read this from the process list, but it is more reliable if we add the information ourselves. To do so, we add a record to the table we just created, which describes the slave DSN:
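
The record for the slave could look like this, using the credentials from above:

    INSERT INTO percona.dsns (dsn) VALUES ('h=172.17.0.3,P=3306,u=percona,p=percona');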

The pt-table-checksum tool then connects to the master instance and to the slave. It computes checksums of all databases and tables and compares the results. You can use the tool like this:
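
A sketch of the invocation, run from the Docker host with the DSN table from above:

    pt-table-checksum h=172.17.0.2,u=percona,p=percona \
      --recursion-method=dsn=h=172.17.0.2,D=percona,t=dsns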

The result shows a difference in the MySQL internal table for permissions. This is obviously not what we are interested in, as permissions are individual to a host. So we rather exclude the MySQL internal database and also the percona database, because neither is what we are interested in. Also, in order to test that the tool works, we delete the last five category assignments from the sakila.film_category table with  mysql -u root -h 172.17.0.3 -e "DELETE FROM sakila.film_category WHERE film_id > 995;"  and update a row in the city table.
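
The update could look like this; the city rename matches the Innsbruck/Yuncheng example used below:

    mysql -u root -h 172.17.0.3 -e "UPDATE sakila.city SET city='Innsbruck' WHERE city='Yuncheng';"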

Now execute the command again:
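
This is the same invocation as before, now excluding the mysql and percona databases:

    pt-table-checksum h=172.17.0.2,u=percona,p=percona \
      --ignore-databases=mysql,percona \
      --recursion-method=dsn=h=172.17.0.2,D=percona,t=dsns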

You see that there is a difference in the tables sakila.city and sakila.film_category. The tool does not report the actual number of differences, but rather the number of differing chunks. To get the actual differences, we need to use a different tool, which utilises the checksum table that the previous step created.

Show the differences with pt-table-sync

The pt-table-sync tool is the counterpart to the pt-table-checksum utility. It can print or even replay the SQL statements that would bring the slave back in sync with the master. We can run a dry-run first, as the tool is potentially dangerous.
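
A sketch of the dry-run; percona.checksums is the default table written by pt-table-checksum:

    pt-table-sync --dry-run \
      --replicate percona.checksums \
      h=172.17.0.2,u=percona,p=percona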

With --dry-run, you only see affected tables, but not the actual data, because it does not really access the tables in question. Use --print additionally or instead of --dry-run to get a list:
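
For example, with the same connection details as before:

    pt-table-sync --print \
      --replicate percona.checksums \
      h=172.17.0.2,u=percona,p=percona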

The command shows how we can rename back from Innsbruck to Yuncheng again and also provides the INSERT statements to get the deleted records back. When we replace --print with --execute, the data gets written to the master and replicated to the slave. To allow this, we need to set the permissions on the master.

This error indicates that updating the city table has consequences, because it is referenced as a foreign key by child tables. In this example, we are bold and ignore this warning. This is absolutely not recommended for real world scenarios.

The option --no-check-child-tables ignores child tables and the option --no-foreign-key-checks ignores foreign keys.
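
The full command could then look like this; again, this is dangerous and only acceptable in a test setup:

    pt-table-sync --execute \
      --no-check-child-tables --no-foreign-key-checks \
      --replicate percona.checksums \
      h=172.17.0.2,u=percona,p=percona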

Run the checksum command again to verify that the data has been restored:

0 DIFFS, we are done!



Fixing Random Freezes with Ubuntu 16.04 LTS, Intel Skylake and an Nvidia GPU

My Lenovo ThinkCentre m900 (10FHCTO1WW) with an Intel i7-6700 showed weird and random freezes from day 1 when I tried to install Mint 18 / Ubuntu 16 with any kernel newer than 3.x. After investigating for quite some hours, I gave up and installed Ubuntu 14.04 LTS on it. The device is certified for it, but the old version did not support all features and even some basic things such as audio did not work. At least the random freezes were gone and I could work with that machine. Now that the system will stop receiving updates soon, I gave it another try and set up Mint 18.2 (Sonya). Unfortunately, the Lenovo machine froze again after a few minutes, filling up the log again with the following error messages.

I started the investigation again and found a different trace, which pointed to the graphics card. The important hint and solution came from SO. Following a few other forum posts, it became clear that the Nvidia drivers do not play nicely with recent kernels for some specific Nvidia cards. So I followed the proposed steps and disabled the card completely. Just removing the card in the BIOS and uninstalling the drivers was not enough; I also had to blacklist the modules for the nouveau kernel driver.

  1. Disable the Nvidia card in the BIOS and use the Intel onchip GPU
  2. Remove all Nvidia packages: 
    sudo apt-get remove nvidia* && sudo apt autoremove
  3. Blacklist the module (entries shown after this list):  
    sudo vim /etc/modprobe.d/blacklist.conf


  4. Reboot
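
The blacklist entries for the nouveau driver look like this (a commonly used set; depending on the setup, regenerating the initramfs with  sudo update-initramfs -u  may be required before rebooting):

    # /etc/modprobe.d/blacklist.conf -- disable the open source nouveau driver
    blacklist nouveau
    blacklist lbm-nouveau
    options nouveau modeset=0
    alias nouveau off
    alias lbm-nouveau off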

The card is not used any more and the freezes stopped.

I hope I do not have to take this article back again and that the system remains as stable as it has been for the last six hours.



Using Hibernate Search with Spring Boot

Spring Boot is a framework that makes it much easier to develop Spring based applications by following a convention over configuration principle (while in contrast, Spring critics claim that the framework’s principle is rather configuration over everything). In this article, I am going to provide an example of how to achieve the following:

  • Create a simple Web application based on Spring Boot
  • Persist and access data with Hibernate
  • Make it searchable with Hibernate Search (Lucene)

I use Eclipse with a Gradle plugin for convenience. MySQL will be our back-end for storing the data. The full example can be obtained from my Github Repository.

Bootstrapping: Create a Simple Spring Boot Webapp

The easiest way to start with Spring Boot is heading over to start.spring.io and creating a new project. In this example, I will use Gradle for building the application and handling the dependencies, and I add the Web and JPA starters.

Download the archive to your local drive and extract it to a folder. I called the project SearchaRoo.

Import the Project with Eclipse

Import it as an existing Gradle Project in Eclipse by using the default settings. You will end up with a nice little project structure as shown below:

We have a central application starter class denoted SearchaRooApplication.java, package definitions, application properties and even test classes. The great thing about Spring Boot is that it is very simple to start and that you can debug it like every other local Java application. There is no need for remote debugging or complex application server setups.

Prepare the Database

We need a few permissions on our MySQL instance before we can start.
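
A minimal sketch; user, password and the schema name spring_hibernate (which reappears later in this post) are assumptions:

    CREATE DATABASE spring_hibernate;
    CREATE USER 'spring'@'localhost' IDENTIFIED BY 'secret';
    GRANT ALL PRIVILEGES ON spring_hibernate.* TO 'spring'@'localhost';
    FLUSH PRIVILEGES;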

We can then add the connection details into the application.properties file. We will edit this file several times when the complexity of this project increases.
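
For example, with the credentials from above:

    spring.datasource.url=jdbc:mysql://localhost:3306/spring_hibernate
    spring.datasource.username=spring
    spring.datasource.password=secret
    spring.jpa.hibernate.ddl-auto=create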

Now the basic database setup is done. We can then start adding model classes.

Getting some Employees on Board

MySQL offers a rather small but well documented sample database called employees, which is hosted on Github.  Obtain and import the data as follows:
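
A sketch of the import, using the official repository:

    git clone https://github.com/datacharmer/test_db.git
    cd test_db
    mysql -u root -p < employees.sql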

The script creates a new schema called employees and you will end up with a schema like this:

In the course of this article, we are going to model this schema with Java POJOs by annotating the entities and the appropriate fields with JPA.

Dependencies

Before we can start modelling the entities in Java, have a look at the Gradle build file. We include additional dependencies for the MySQL connector and Apache commons.
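
The dependency block could look like this; the versions are examples for a Spring Boot 1.x Gradle build:

    dependencies {
        compile('org.springframework.boot:spring-boot-starter-data-jpa')
        compile('org.springframework.boot:spring-boot-starter-web')
        compile('org.apache.commons:commons-lang3:3.5')
        runtime('mysql:mysql-connector-java')
        testCompile('org.springframework.boot:spring-boot-starter-test')
    }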

Modelling Reality

The next step covers modelling the data which we imported with Java POJOs. Obviously this is not the most natural way, because in general you would create the model first and then add data to it, but as we already had the data, we decided to go in this direction. In the application.properties file, set the database to the imported employees database and set the Hibernate create property to validate. With this setting, we can confirm that we modelled the Java classes in accordance with the database model defined by the MySQL employees database.

An example of such a class is shown below, the other classes can be found in the Github repository.
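
A sketch of such a class; the column names follow the employees schema and getters and setters are omitted:

    import java.util.Date;
    import javax.persistence.*;

    @Entity
    @Table(name = "employees")
    public class Employee {

        @Id
        @Column(name = "emp_no")
        private Integer employeeNumber;

        @Column(name = "first_name")
        private String firstName;

        @Column(name = "last_name")
        private String lastName;

        @Temporal(TemporalType.DATE)
        @Column(name = "birth_date")
        private Date birthDate;

        @Temporal(TemporalType.DATE)
        @Column(name = "hire_date")
        private Date hireDate;
    }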

Now that we have prepared the data model, our schema is fixed and does not change any more. We can deactivate the Hibernate based dynamic generation of the database tables and use the Spring database initialization instead. To see if we modelled the data correctly, we take the MySQL employee data dump we obtained before and import it into our newly created schema, which maps the Java POJOs.

Importing the Initial Data

In the next step, we import the data from the MySQL employee database into our schema spring_hibernate. This schema contains the tables that Hibernate created for us. The following script copies the data between the two schemata. If you see an error, then there is an issue with your model.
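
A sketch of such a copy script; the table names come from the employees database and the target schema is the one defined above:

    SET FOREIGN_KEY_CHECKS=0;
    INSERT INTO spring_hibernate.employees SELECT * FROM employees.employees;
    INSERT INTO spring_hibernate.departments SELECT * FROM employees.departments;
    INSERT INTO spring_hibernate.dept_emp SELECT * FROM employees.dept_emp;
    INSERT INTO spring_hibernate.dept_manager SELECT * FROM employees.dept_manager;
    INSERT INTO spring_hibernate.titles SELECT * FROM employees.titles;
    INSERT INTO spring_hibernate.salaries SELECT * FROM employees.salaries;
    SET FOREIGN_KEY_CHECKS=1;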

We now imported the data in the database schema that we defined for our project. Spring can load schema and initial data during start-up. So we provide two files, one containing the schema and the other one containing the data. To do that, we create two dumps of the database. One containing the schema only, the other one containing the data only.
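
For example with mysqldump:

    mysqldump -u root -p --no-data spring_hibernate > schema.sql
    mysqldump -u root -p --no-create-info spring_hibernate > data.sql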

By deactivating the Hibernate data generation and activating the Spring way, the database gets initialized every time the application starts. Change and edit the following lines in the application.properties
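
A sketch of the relevant properties (Spring Boot 1.x property names):

    spring.jpa.hibernate.ddl-auto=none
    spring.datasource.schema=classpath:schema.sql
    spring.datasource.data=classpath:data.sql
    spring.datasource.initialize=true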

Before we can import the data with the scripts, make sure to drop the tables and disable foreign key checks in the schema file and enable them again at the end. Spring ignores MySQL's conditional comments, so the foreign key statements have to be added as plain SQL. Your schema file should therefore contain this:
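
A sketch of the framing; the statements in between are the ones generated by mysqldump:

    SET FOREIGN_KEY_CHECKS=0;
    -- DROP TABLE and CREATE TABLE statements from the schema dump go here
    SET FOREIGN_KEY_CHECKS=1;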

Also insert the two foreign key statements into the data file. Note that the import can take a while. If you are happy with the initialized data, you can deactivate the initialization by setting the variable to false: spring.datasource.initialize=false

The application.properties file meanwhile looks like this:

Adding Hibernate Search

Hibernate Search offers full-text search capabilities by using a dedicated index. We need to add the dependencies to the build file.
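
Added to the dependencies block; the version is an example and must match the Hibernate ORM version in use:

    compile('org.hibernate:hibernate-search-orm:5.5.5.Final')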

Refresh the Gradle project after including the search dependencies.

Adding Hibernate Search Dependencies

In this step, we annotate the model POJO classes and introduce the full-text search index. Hibernate Search utilises just a few basic settings to get started. Add the following variables to the application properties file.
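
A sketch of the settings; the index location matches the tmp directory discussed next:

    spring.jpa.properties.hibernate.search.default.directory_provider=filesystem
    spring.jpa.properties.hibernate.search.default.indexBase=/tmp/lucene/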

Please note that storing the Lucene index in the tmp directory is not the best idea, but for testing we can use this volatile location. We also use the filesystem to store the index, as this is the simplest approach.

Create a Service

In order to facilitate Hibernate Search on our data, we add a service class, which offers methods for searching. The service uses a configuration, which is injected by Spring during run time. The configuration is very simple.
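
A sketch of such a configuration; class and bean names are assumptions:

    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class HibernateSearchConfiguration {

        @PersistenceContext
        private EntityManager entityManager;

        @Bean
        public HibernateSearchService hibernateSearchService() {
            // expose the search service as a bean so it can be injected elsewhere
            return new HibernateSearchService(entityManager);
        }
    }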

The @Configuration is loaded when Spring builds the application context. It provides a bean of our service, which can then be injected into the application. The service itself provides methods for creating and searching the index. In this example, the search method is very simple: it only searches on the first and the last name of an employee and it allows users to make one mistake (distance 1).
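
A sketch of the search method inside the service; it uses the Hibernate Search query DSL (imports from org.hibernate.search.jpa and org.hibernate.search.query.dsl) and the field names of the Employee entity shown above:

    public List<Employee> search(String term) {
        FullTextEntityManager fullTextEm = Search.getFullTextEntityManager(entityManager);
        QueryBuilder qb = fullTextEm.getSearchFactory()
                .buildQueryBuilder().forEntity(Employee.class).get();
        org.apache.lucene.search.Query luceneQuery = qb.keyword()
                .fuzzy()
                .withEditDistanceUpTo(1) // allow one mistake
                .onFields("firstName", "lastName")
                .matching(term)
                .createQuery();
        return fullTextEm.createFullTextQuery(luceneQuery, Employee.class).getResultList();
    }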

The service implementation currently only contains an initialization method, which is used for creating the Lucene index on the filesystem. Before we can test the index, we need to have at least one indexed entity. This can be achieved by simply adding the annotation @Indexed to the POJO.
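
A sketch of the initialization method, using the Hibernate Search mass indexer:

    public void initializeHibernateSearch() throws InterruptedException {
        FullTextEntityManager fullTextEm = Search.getFullTextEntityManager(entityManager);
        // build the Lucene index from the existing database content
        fullTextEm.createIndexer().startAndWait();
    }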

When we start the application now, we can see that Hibernate creates the index and a short check on disk shows that it worked:

So far, we did not tell Hibernate Search which fields we want to add to the index and thus make them full-text searchable. The following listing shows the annotated @Fields.
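
A sketch of the annotated fields; the annotations come from org.hibernate.search.annotations:

    @Indexed
    @Entity
    @Table(name = "employees")
    public class Employee {

        @Id
        @Column(name = "emp_no")
        private Integer employeeNumber;

        @Field(index = Index.YES, analyze = Analyze.YES, store = Store.NO)
        @Column(name = "first_name")
        private String firstName;

        @Field(index = Index.YES, analyze = Analyze.YES, store = Store.NO)
        @Column(name = "last_name")
        private String lastName;

        // remaining columns as before
    }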

Starting the application again re-creates the index. Time for some basic searching.

Searching Fulltext

Hibernate Search offers many features, which are not offered in a similar quality by native databases. One interesting feature is for instance fuzzy search, which allows finding terms within an edit distance of up to two letters. The method for searching on two fields was already shown above. We can use this method in a small JUnit test:
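
A minimal test sketch; class and method names are assumptions:

    @RunWith(SpringRunner.class)
    @SpringBootTest
    public class SearchTest {

        @Autowired
        private HibernateSearchService searchService;

        @Test
        public void testFuzzySearch() {
            // "Chrisu" contains a typo, the fuzzy query should still match
            List<Employee> result = searchService.search("Chrisu");
            Assert.assertFalse(result.isEmpty());
        }
    }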

The user made a small typo by entering Chrisu instead of Chris. As we allowed two mistakes, we receive a list of similar names and the test passes. Some possible results are shown below.

Conclusions

Hibernate Search is a great tool and can be easily integrated into Spring Boot applications. In this post, I gave a minimalistic example of how fulltext fuzzy search can be added to existing databases, allowing a flexible and powerful search. A few more advanced thoughts on Hibernate Search are given in this blog post here. The Hibernate Search documentation contains a lot of useful and more elaborate examples. The full example can be obtained on Github.



Hibernate Search and Spring Boot: Building Bridges

Hibernate Search is a very convenient way of storing database content in a Lucene index and adding fulltext search capabilities to data driven projects simply by annotating classes. It can be easily integrated into Spring Boot applications and as long as only the basic features are used, it works out of the box. The fun starts when the autoconfiguration cannot find out how to configure things properly; then it gets tricky quite quickly. Of course this is natural behaviour, but one gets spoiled nevertheless.

Using the latest Features: Hibernate ORM, Hibernate Search and Spring Boot

The current version of Spring Boot is 1.5.2. This version uses Hibernate ORM 5.0. The latest stable Hibernate Search versions are 5.6.1.Final and 5.7.0.Final, which in contrast require Hibernate ORM 5.1 and 5.2 respectively. You also need Java 8 now. For this reason, if you need the latest Hibernate Search features in combination with Spring Boot, you need to adapt the dependencies as follows:
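
A sketch of the adapted Gradle dependencies; the exact versions are examples and the exclusion is explained below:

    compile('org.springframework.boot:spring-boot-starter-data-jpa') {
        exclude group: 'org.hibernate', module: 'hibernate-entitymanager'
    }
    compile('org.hibernate:hibernate-core:5.2.8.Final')
    compile('org.hibernate:hibernate-search-orm:5.7.0.Final')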

Note that the Hibernate Entity Manager needs to be excluded, because it has been integrated into the core in the new Hibernate version. Details are given in the Spring Boot documentation.

Enforcing the Dependencies to be Loaded in the Correct Sequence 

As written earlier, Spring Boot takes care of a lot of configuration for us. Most of the time, this works perfectly and reduces the pain of configuring a new application manually. In some particular cases, however, Spring cannot figure out that there exists a dependency between different services which needs to be resolved in a specified order. A typical use case is the implementation of FieldBridges for Hibernate Search. FieldBridges translate between the actual object from the Java world and the representation of such an object in the Lucene index. Typically, an EnumBridge is used for indexing enums, which are often used for realizing internationalization (I18n).

When the Lucene index is created, Hibernate checks if enum fields need to be indexed and if there exists a Bridge that converts between the object and the actual record in the index. The problem here is that Hibernate JPA is loaded at a very early stage of the Spring Boot startup process. The problem only arises if the Bridge class utilises @Autowired fields which get injected. Typically, these fields would get injected when the AnnotationBeanConfigurerAspect bean is loaded. Hibernate creates the session with the session factory auto configuration before the Spring configurer aspect bean is loaded. So the FieldBridge used by Hibernate during the initialization of the index does not have the service injected yet, causing a nasty NullPointerException.

Example EnumBridge

The following EnumBridge example utilises an injected Service, which needs to be available before Hibernate starts. If not taken care of, this causes a Null Pointer Exception.
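
A sketch of such a bridge; the translation service and the class name are assumptions:

    import org.hibernate.search.bridge.builtin.EnumBridge;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.beans.factory.annotation.Configurable;

    @Configurable
    public class LocalizedEnumBridge extends EnumBridge {

        @Autowired
        private TranslationService translationService;

        @Override
        public String objectToString(Object object) {
            // throws a NullPointerException if the service has not been injected yet
            return translationService.translate((Enum<?>) object);
        }
    }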

Enforce Loading the Aspect Configurer Before the Session Factory

In order to enforce that the AnnotationBeanConfigurerAspect is created before the Hibernate session factory, we simply implement our own HibernateJpaAutoConfiguration by extension and add the AnnotationBeanConfigurerAspect to the constructor. Spring Boot then knows that it needs to instantiate the AnnotationBeanConfigurerAspect before it can instantiate the HibernateJpaAutoConfiguration, and we have wired beans ready for consumption by the bridge. I found the correct hint here and here.
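
A sketch of the extension; the constructor signature is an assumption for Spring Boot 1.5, and the otherwise unused aspect parameter is what enforces the ordering:

    import javax.sql.DataSource;
    import org.springframework.beans.factory.ObjectProvider;
    import org.springframework.beans.factory.aspectj.AnnotationBeanConfigurerAspect;
    import org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration;
    import org.springframework.boot.autoconfigure.orm.jpa.JpaProperties;
    import org.springframework.boot.autoconfigure.transaction.TransactionManagerCustomizers;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.transaction.jta.JtaTransactionManager;

    @Configuration
    public class HibernateSearchJpaAutoConfiguration extends HibernateJpaAutoConfiguration {

        public HibernateSearchJpaAutoConfiguration(DataSource dataSource,
                JpaProperties jpaProperties,
                AnnotationBeanConfigurerAspect beanConfigurerAspect,
                ObjectProvider<JtaTransactionManager> jtaTransactionManager,
                ObjectProvider<TransactionManagerCustomizers> transactionManagerCustomizers) {
            // requesting the aspect here forces Spring to create the aspect bean first
            super(dataSource, jpaProperties, jtaTransactionManager, transactionManagerCustomizers);
        }
    }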

As it turned out, using @DependsOn annotations did not work, and ordering the precedence of the beans with @Order was not sufficient either. With this little hack, we can ensure the correct sequence of initialization.
