Lightning fast integration tests with Docker, MySQL and tmpfs

Integration tests that involve database operations require tearing down and re-initializing a database multiple times. Although most developer machines offer an SSD and plenty of RAM, database initialization can still consume a considerable amount of time.

Docker allows defining volumes that are mounted directly into memory by using tmpfs. We can use this feature to speed up disk-heavy operations such as database imports by moving the data from the disk into memory.

The following example measures the time for writing 1GB worth of data to an SSD:

dd if=/dev/zero of=/tmp/output bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.22033 s, 484 MB/s

For comparison, the following steps create a RAM disk and write the data to memory.

$ sudo mkdir -p /tmp/tmpfs
$ sudo mount -o size=1G -t tmpfs none /tmp/tmpfs

$ dd if=/dev/zero of=/tmp/tmpfs/output bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.309017 s, 3.5 GB/s

As you can see, writing 1 GB to memory is about 7x faster. With the following docker run command, you can spin up a default MySQL container whose data directory resides in a tmpfs.

docker run -d \
    --rm \
    --name mysql-56-in-memory \
    -p 3307:3306 \
    --tmpfs /var/lib/mysql:rw,noexec,nosuid,size=1024m \
    -e MYSQL_ALLOW_EMPTY_PASSWORD=TRUE \
    -e MYSQL_DATABASE=dev \
    mysql:5.6

The arguments of the docker run command mean the following:

  • --rm: delete the container once it is stopped
  • --name: a name for the container
  • -p: map the host’s port 3307 to port 3306 inside the container. This allows running multiple MySQL containers in parallel and connecting to each of them from the host via the specified port
  • --tmpfs: mounts the data directory of the container to a RAM disk. It should be writable (rw). noexec prevents the execution of binaries, nosuid prevents changing the permission flags, and size specifies the space occupied by the tmpfs partition in memory. Adapt this to your use case. The minimum for MySQL is around 200 MB; add the space needed for your data, indices etc.
  • MYSQL_ALLOW_EMPTY_PASSWORD does what it implies
  • MYSQL_DATABASE defines the name of a database to be created

If you run this command, you can connect to the container like this: mysql -u root -h 127.0.0.1 -P 3307

The container behaves like a normal MySQL database, except that the data is not persisted on a hard disk but only stored in ephemeral memory. If you stop the container, it will be removed by Docker, and if you reboot the machine, the data will be gone. For obvious reasons this is only a good idea for test data that can be re-created at any time.
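For integration tests this means the schema can be dropped and re-imported cheaply between test runs. A minimal sketch, assuming a dump file named dump.sql and the container started above:

# Drop and re-create the test schema on the tmpfs-backed instance
mysql -u root -h 127.0.0.1 -P 3307 -e "DROP DATABASE IF EXISTS dev; CREATE DATABASE dev;"
# Re-import the test data
mysql -u root -h 127.0.0.1 -P 3307 dev < dump.sql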

You can also achieve the same with Docker Compose if you would like to orchestrate multiple containers.

version: '3'
services:
  mysql-56-integration:
    container_name: mysql-56-in-memory
    restart: unless-stopped
    image: mysql:5.6
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD='true'
      - MYSQL_HOST=''
    volumes:
      - data:/var/lib/mysql
    ports:
      - "3307:3306"
 
volumes:
  data:
    driver_opts:
      type: tmpfs
      device: tmpfs
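
Start the stack with docker-compose up -d; connecting from the host then works the same way as with the plain docker run command above:

docker-compose up -d
mysql -u root -h 127.0.0.1 -P 3307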



Install Innotop from Source

Innotop is a great tool but not included in the current Ubuntu repositories. Here is how you install it manually:

# Install perl database interface
sudo apt-get install libdbi-perl
# Install MySQL and Terminal perl modules
sudo cpan Term::ReadKey DBD::mysql
# Clone innotop
git clone https://github.com/innotop/innotop.git
# Enter directory
cd innotop
# Make
perl Makefile.PL
# Install
sudo make install

Then you can run innotop like this:

innotop --user $ADMIN_USER --password $ADMIN_PASSWORD --host $HOST
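
Innotop also accepts a port parameter, so you can point it at a MySQL instance running in a Docker container via the mapped port (verify the exact option names with innotop --help):

innotop --user root --password $ADMIN_PASSWORD --host 127.0.0.1 --port 3307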



Grafana and InfluxDB with SSL inside a Docker Container

Self-signed SSL certificates

On the host, create a directory for storing the self-signed SSL certificates. This directory will be mounted into both the Grafana and the InfluxDB container at /var/ssl. Create the self-signed SSL certificates as follows:

mkdir -p /docker/ssl
cd /docker/ssl/
# Generate a private key
openssl genrsa -des3 -out server.key 1024
# Generate CSR
openssl req -new -key server.key -out server.csr
# Remove password
openssl rsa -in server.key -out server.key
# Generate self signed cert
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
# Set permissions
sudo chmod 644 server.crt
sudo chmod 600 server.key
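
Optionally, you can verify the generated certificate before mounting it into the containers:

# Show the subject and validity period of the self-signed certificate
openssl x509 -in server.crt -noout -subject -dates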

Next, create a config directory and individual configuration files for Grafana and InfluxDB: mkdir conf
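The layout assumed in the remainder of this post looks as follows; the absolute paths below /docker mirror the volume mounts used in the first compose file further down:

mkdir -p /docker/conf/grafana /docker/conf/influxdb
touch /docker/conf/grafana/defaults.ini
touch /docker/conf/influxdb/influxdb.conf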

Grafana

In the file ./conf/grafana/defaults.ini, set the protocol to https and provide the paths to the mounted SSL directory inside the container.

#################################### Server ##############################
[server]
# Protocol (http, https, socket)
protocol = https
...
...
# https certs & key file
cert_file = /var/ssl/server.crt
cert_key = /var/ssl/server.key
...

InfluxDB

The file ./conf/influxdb/influxdb.conf is also pretty simple. Add an [http] section with the following settings:

[meta]
  dir = "/var/lib/influxdb/meta"
[data]
  dir = "/var/lib/influxdb/data"
  engine = "tsm1"
  wal-dir = "/var/lib/influxdb/wal"
[http]
  https-enabled = true
  https-certificate ="/var/ssl/server.crt"
  https-private-key ="/var/ssl/server.key"

Environment

You can set environment variables in env files for the services.

env.grafana

GF_INSTALL_PLUGINS=grafana-clock-panel,briangann-gauge-panel,natel-plotly-panel,grafana-simple-json-datasource

env.influxdb

INFLUXDB_REPORTING_DISABLED=true
INFLUXDB_DB=
INFLUXDB_HTTP_AUTH_ENABLED=true
INFLUXDB_ADMIN_USER=admin
INFLUXDB_ADMIN_PASSWORD=
INFLUXDB_USER=
INFLUXDB_USER_PASSWORD=
INFLUXDB_WRITE_USER=
INFLUXDB_WRITE_USER_PASSWORD=

Docker Compose

Now you can launch the services by running docker-compose up with the following file.

version: '2'

services:
    influxdb:
        image: influxdb:latest
        container_name: influxdb
        ports:
            - "8083:8083"
            - "8086:8086"
            - "8090:8090"
        env_file:
            - 'env.influxdb'
        volumes:
            - data-influxdb:/var/lib/influxdb
            - /docker/ssl:/var/ssl
            - /docker/conf/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf


    grafana:
        image: grafana/grafana:latest
        container_name: grafana
        ports:
            - "3000:3000"
        links:
            - influxdb
        env_file:
            - 'env.grafana'
        volumes:
            - data-grafana:/var/lib/grafana
            - /docker/ssl:/var/ssl
            - /docker/conf/grafana/defaults.ini:/usr/share/grafana/conf/defaults.ini
volumes:
  data-influxdb:
  data-grafana:
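
Once both containers are running, you can verify that TLS is active; the -k flag tells curl to accept the self-signed certificate:

# InfluxDB answers the ping endpoint with HTTP 204
curl -k -i https://localhost:8086/ping
# Grafana reports the health of its internal database
curl -k https://localhost:3000/api/health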

Let's Encrypt Setup

If you require valid certificates, you can also use certificates from Let's Encrypt.

First, create the certificates on the host:

certbot certonly --standalone --preferred-challenges http --renew-by-default -d iot.example.org
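
Let's Encrypt certificates expire after 90 days, so they need to be renewed regularly. Since the containers only read the mounted certificate files at startup, a post-hook restart (a sketch, adjust the service names to your compose file) picks up the renewed files:

certbot renew --post-hook "docker-compose restart influxdb grafana"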

Then use this docker-compose file.

version: '2'

services:
    influxdb:
        image: influxdb:latest
        container_name: influxdb
        ports:
            - "8083:8083"
            - "8086:8086"
            - "8090:8090"
        env_file:
            - 'env.influxdb'
        volumes:
            - data-influxdb:/var/lib/influxdb
            - /etc/letsencrypt/live/iot.example.org/fullchain.pem:/var/ssl/server.crt
            - /etc/letsencrypt/live/iot.example.org/privkey.pem:/var/ssl/server.key
            - /docker/conf/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf


    grafana:
        image: grafana/grafana:latest
        container_name: grafana
        ports:
            - "3000:3000"
        links:
            - influxdb
        env_file:
            - 'env.grafana'
        volumes:
            - data-grafana:/var/lib/grafana
            - /etc/letsencrypt/live/iot.example.org/fullchain.pem:/var/ssl/server.crt
            - /etc/letsencrypt/live/iot.example.org/privkey.pem:/var/ssl/server.key
            - /docker/conf/defaults.ini:/usr/share/grafana/conf/defaults.ini
volumes:
  data-influxdb:
  data-grafana:



Compile Percona Query Playback

Install the prerequisites and clone the repository.

sudo apt-get install libtbb-dev libmysqlclient-dev libboost-program-options-dev libboost-thread-dev libboost-regex-dev libboost-system-dev libboost-chrono-dev pkg-config cmake libssl-dev
git clone https://github.com/Percona-Lab/query-playback.git
cd query-playback/
mkdir build_dir
cd build_dir
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..

You might see this error:

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
MYSQL_LIB
    linked by target "mysql_client" in directory /home/sproell/git/query-playback/percona_playback/mysql_client

-- Configuring incomplete, errors occurred!
See also "/home/sproell/git/query-playback/build_dir/CMakeFiles/CMakeOutput.log".
See also "/home/sproell/git/query-playback/build_dir/CMakeFiles/CMakeError.log".

I found this issue on GitHub, and after editing the file ~/git/query-playback/percona_playback/mysql_client/CMakeLists.txt as suggested, the tool compiles. You need to replace find_library(MYSQL_LIB "mysqlclient_r" PATH_SUFFIXES "mysql") with find_library(MYSQL_LIB "mysqlclient" PATH_SUFFIXES "mysql") (remove the _r suffix).

Then you can compile the project as documented here

cd ~/git/query-playback/build_dir
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
make
sudo make install
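
Afterwards you can replay a recorded slow query log against a test server. The option names in the sketch below are written from memory of the Percona documentation and should be verified with percona-playback --help before use:

percona-playback --mysql-host=127.0.0.1 --mysql-port=3306 \
  --mysql-username=root --mysql-password=secret \
  --mysql-schema=dev --query-log-file=slow-queries.log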



Illegal mix of collations: IntelliJ and UTF8mb4

When using variables inside SQL scripts within IntelliJ products (e.g. DataGrip), certain queries will not work because the encodings of the IntelliJ client and the server mismatch. This occurs, for instance, when you compare variables. A typical error message looks like this:

[HY000][1267] Illegal mix of collations (utf8mb4_unicode_520_ci,IMPLICIT) \
   and (utf8mb4_general_ci,IMPLICIT) for operation 'like'

IntelliJ products do not yet support MySQL’s utf8mb4 character set encoding. The problem occurs when using variables in queries. By default, IntelliJ uses a UTF-8 encoding for the connection. When you use utf8mb4 as the database default character set, variables will be encoded in UTF-8 while the database content remains in utf8mb4. It is not possible to pass the character set encoding in the IntelliJ connection settings, as the client will refuse to connect.

Check your server settings using the MySQL client:

MySQL [cropster_research]> show variables like '%char%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8mb4                    |
| character_set_connection | utf8mb4                    |
| character_set_database   | utf8mb4                    |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | utf8mb4                    |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+

This seems correct, but when you connect with the IntelliJ client, you will get wrong results when you use variables. Until the products support utf8mb4, you need to add the following statements to your script in order to force the correct connection settings.

SET character_set_connection=utf8mb4;
SET collation_connection=utf8mb4_unicode_520_ci;
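
With these two statements at the top of the script, comparisons against user-defined variables work as expected. A minimal sketch with a made-up table:

SET character_set_connection=utf8mb4;
SET collation_connection=utf8mb4_unicode_520_ci;

SET @pattern = '%coffee%';
-- Without the two SET statements above, this comparison fails with error 1267
SELECT id, name FROM products WHERE name LIKE @pattern;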
