Percona Xtrabackup – Some General Tips

When you stream an Xtrabackup backup to a gzipped tar file, the archive also contains the binary log position the server was at when the backup was started. This information is needed to start the slave at the same position again.

time nice -n 15 xtrabackup --backup --slave-info --safe-slave-backup  --stream=tar | gzip -c > /var/backup/percona.dump.tar.gz

To get that position without extracting the archive, you can use the following command:

tar -xOf /var/backup/percona.dump.tar.gz xtrabackup_binlog_info
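
The file contains the name of the binary log file and the position, plus the GTID set if GTIDs are enabled. The output looks similar to this (the values are made up):

mysql-bin.000042	1337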

You can list all files in the archive like this:

 tar -ztvf percona.dump.tar.gz 

Note that both operations can take quite some time and consume considerable resources.



Lightning fast integration tests with Docker, MySQL and tmpfs

Integration tests that involve database operations require tearing down and re-initializing the database multiple times. Although most developer machines offer an SSD and plenty of RAM, database initialization can nevertheless consume a considerable amount of time.

Docker allows defining volumes that are mounted directly into memory by using tmpfs. We can utilize this feature to speed up disk-heavy operations such as database imports, by moving the data from disk into memory.

The following example measures the time for writing 1GB worth of data to an SSD:

dd if=/dev/zero of=/tmp/output bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.22033 s, 484 MB/s

For comparison, the following steps create a RAM disk and write the data to memory.

$ sudo mkdir -p /tmp/tmpfs
$ sudo mount -o size=1G -t tmpfs none /tmp/tmpfs

$ dd if=/dev/zero of=/tmp/tmpfs/output bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.309017 s, 3.5 GB/s

As you can see, writing 1GB to memory is about 7x faster. With the following docker run command, you can spin up a default MySQL container whose data directory resides in a tmpfs.

docker run -d \
    --rm \
    --name mysql-56-in-memory \
    -p 3307:3306 \
    --tmpfs /var/lib/mysql:rw,noexec,nosuid,size=1024m \
    -e MYSQL_ALLOW_EMPTY_PASSWORD=TRUE \
    -e MYSQL_DATABASE=dev \
    mysql:5.6

The arguments of docker run mean the following:

  • --rm: delete the container once it is stopped
  • --name: a name for the container
  • -p: map the host's port 3307 to port 3306 inside the container. This allows running multiple MySQL containers in parallel and connecting to them from the host via the specified ports
  • --tmpfs: mounts the container's data directory onto a RAM disk. It should be writeable (rw); noexec prevents the execution of binaries, nosuid prevents changing the permission flags, and size specifies how much memory the tmpfs partition may occupy. Adapt this to your use case: the minimum for MySQL is around 200MB, plus the space needed for your data, indices etc.
  • MYSQL_ALLOW_EMPTY_PASSWORD does what it implies
  • MYSQL_DATABASE defines the name of a database to be created

If you run this command, you can connect to the container like this: mysql -u root -h 127.0.0.1 -P 3307

The container behaves like a normal MySQL database, except that the data is not persisted on a hard disk, but only stored in ephemeral memory. If you stop the container, it will be removed by Docker, and if you reboot the machine, the data will be gone. For obvious reasons this is only a good idea for test data that can be re-created at any time.
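
To verify that the data directory really lives in memory, you can check the mount inside the running container; the filesystem column should report tmpfs:

docker exec mysql-56-in-memory df -h /var/lib/mysql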

You can achieve the same with Docker Compose if you would like to orchestrate multiple containers.

version: '3'
services:
  mysql-56-integration:
    container_name: mysql-56-in-memory
    restart: unless-stopped
    image: mysql:5.6
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
      - MYSQL_DATABASE=dev
    volumes:
      - data:/var/lib/mysql
    ports:
      - "3307:3306"
 
volumes:
  data:
    driver_opts:
      type: tmpfs
      device: tmpfs
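
A quick way to bring the stack up and check that the database answers (assuming the file is saved as docker-compose.yml):

docker-compose up -d
mysql -u root -h 127.0.0.1 -P 3307 -e 'SELECT 1'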



Measure Replication Delay with Percona Toolkit Heartbeat (pt-heartbeat)

Replication lag occurs when the slave cannot keep up with the changes being made on the master instance. Reasons for a lag can be network congestion, poor performance on the slave, or service interruptions of the replication mechanism, leading to a large pile of work the slave has to catch up on. When you use SHOW SLAVE STATUS you can monitor the current delay, which is indicated in the Seconds_Behind_Master column. Ideally, this value is zero, but the smallest granularity you get for this delay counter is one second.
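
For a quick look at the built-in counter you can filter the slave status output, for example:

mysql -e 'SHOW SLAVE STATUS\G' | grep Seconds_Behind_Master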

Percona offers a tool that allows measuring the delay more accurately, by writing timestamps into the master database and calculating the difference when they arrive at the slave: pt-heartbeat

On the Master

Create a dedicated database schema called percona and add the following table. 

CREATE TABLE IF NOT EXISTS `percona`.`heartbeat` (
    ts                    varchar(26) NOT NULL,
    server_id             int unsigned NOT NULL PRIMARY KEY,
    file                  varchar(255) DEFAULT NULL,    -- SHOW MASTER STATUS
    position              bigint unsigned DEFAULT NULL, -- SHOW MASTER STATUS
    relay_master_log_file varchar(255) DEFAULT NULL,    -- SHOW SLAVE STATUS
    exec_master_log_pos   bigint unsigned DEFAULT NULL  -- SHOW SLAVE STATUS
);
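
As an alternative to creating the table manually, pt-heartbeat can also create it for you if you pass the --create-table flag (a sketch, using the same DSN as below):

pt-heartbeat -D percona --update --create-table h=127.0.0.1,u=root,p=SECRET,P=3306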

Then you can launch the script on the master, where it will write new timestamps periodically as long as the script is running.

pt-heartbeat -D percona --update --utc h=127.0.0.1,u=root,p=SECRET,P=3306

The -D flag specifies the database schema where the table for the heartbeat data is stored. The --update flag is needed on the master for updating the table, and the last part is the DSN, which specifies the host address, user name and password. Very important is the --utc flag, which ensures that the timestamps are interpreted as UTC, regardless of the timezone setting.

On the Slave

Create a user for reading the replicated heartbeat table like this:

GRANT REPLICATION SLAVE,PROCESS,SELECT ON percona.* TO `percona_checksum`@'10.20.30.40' IDENTIFIED BY 'SECRET';

Then you can run the script and point it to the slave. It will output precise delay counts in fractions of a second:

pt-heartbeat h=127.0.0.1,u=percona_checksum,p=SECRET,P=3306 -D percona --monitor --utc --master-server-id 1

Notice the different DSN, the --monitor flag and the --master-server-id option, which needs to be the server ID of your master. You need this because the tool supports hierarchies of masters and therefore needs to know which master to measure against.

Results

The results will look similar to this:

0.09s [  0.00s,  0.00s,  0.00s ]
0.02s [  0.20s,  0.00s,  0.00s ]
0.09s [  0.00s,  0.00s,  0.00s ]
0.03s [  0.02s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.10s [  0.01s,  0.00s,  0.00s ]
0.12s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.11s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.02s,  0.00s,  0.00s ]
0.09s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.03s,  0.01s,  0.00s ]

The output is the current delay followed by moving averages over 1m, 5m and 15m, just like the load averages you might know from your favorite OS already.

Have a look at the official documentation; there are a lot of options available.




Setup AWS MySQL 5.6 Aurora as a Slave for an external Master with SSL

Setting up Aurora as a slave for an external MySQL server that acts as the master is a bit tricky. Of course we want a secured connection, so we need to create client certificates to be used by AWS Aurora. The steps below should work for plain RDS as well.

Generate and Sign the Certificates

The process is actually simple, but AWS is picky about how you generate the certificates. I was using a SHA flag that was accepted by a regular MySQL 5.6 instance, but caused a cryptic (pun intended) MySQL 2026 Generic SSL error, and it was quite hard to find the source. Also note that you need different common names (CN) for all three certificate pairs. They do not necessarily need to match the actual domain name, but they need to be different.

First we need to create the certificate authority that can sign the keys:

# Generate a certificate authority key pair
openssl genrsa 2048 > ca-key.pem
# Notice the CN. It needs to be different for each of the three key pairs that we create!
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=ca.mysql"

Then create the server key pair:

# Generate a server key. Note again the different CN
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=server.mysql"
# Convert the format
openssl rsa -in server-key.pem -out server-key.pem
# Sign it
openssl x509 -req -in server-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

Finally we generate a client certificate and its key. You can repeat these steps to generate multiple certificates for clients:

# Again, note the CN
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=client.mysql"
# Convert
openssl rsa -in client-key.pem -out client-key.pem
# Sign
openssl x509 -req -in client-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem
# Verify
openssl verify -CAfile ca.pem server-cert.pem client-cert.pem

Now we have all the certs we need.
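
To double-check that the three common names really differ, you can print the subject of each certificate:

for cert in ca.pem server-cert.pem client-cert.pem; do openssl x509 -in "$cert" -noout -subject; done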

Master Setup

The setup is pretty standard. Add the server certificates to the MySQL configuration of your master and restart.

# SSL Server Certificate
ssl-ca=/etc/mysql/ssl/ca.pem
ssl-cert=/etc/mysql/ssl/server-cert.pem
ssl-key=/etc/mysql/ssl/server-key.pem
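
After the restart you can verify on the master that SSL is enabled; have_ssl should report YES:

mysql -e "SHOW VARIABLES LIKE '%ssl%'"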

Then create a user for the slave:

CREATE USER 'aws'@'%' IDENTIFIED BY 'SECRET';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'aws'@'%' IDENTIFIED BY 'SECRET' REQUIRE SSL;
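
Before involving Aurora, you can test the user and the client certificate from any machine that has the client files (master.example.org is a placeholder for your master's address); a non-empty cipher confirms an encrypted connection:

mysql -h master.example.org -P 3306 -u aws -pSECRET --ssl-ca=ca.pem --ssl-cert=client-cert.pem --ssl-key=client-key.pem -e "SHOW STATUS LIKE 'Ssl_cipher'"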

Slave Setup

On AWS you do not have SUPER privileges, but you can use stored procedures provided by Amazon to set up the slave.

Start fresh by removing old records. If there was no previous setup, there might be an error.

CALL mysql.rds_remove_binlog_ssl_material;
CALL mysql.rds_reset_external_master;

Now you need to pass the client certificate data as JSON to AWS Aurora.

CALL mysql.rds_import_binlog_ssl_material('{"ssl_ca":"-----BEGIN CERTIFICATE-----
MIIBAgMBVRpcm9sMRIw...
...
-----END CERTIFICATE-----\n","ssl_cert":"-----BEGIN CERTIFICATE-----
KAoIBAQCzn28awhyN8V56Z2bskCiMhJt4
...
-----END CERTIFICATE-----\n","ssl_key":"-----BEGIN RSA PRIVATE KEY-----
SbeLNsRzrPoCVGGqwqR6gE6AZu
...
-----END RSA PRIVATE KEY-----"}');

A message that the SSL data was accepted will appear if you pasted the certificate, the key and the CA certificate correctly.
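
Getting the escaping right can be fiddly, because the JSON values must contain \n sequences instead of raw line breaks. One way is to convert each PEM file into a single escaped line first, for instance with awk (a sketch):

awk 'NF {printf "%s\\n", $0}' ca.pem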

Finally, start the replication and check the status:

CALL mysql.rds_start_replication;
SHOW SLAVE STATUS\G

Tests and Troubleshooting

On the master, you can check whether the slave even tries to connect, for instance with tcpdump. In the example below, the IP 1.2.3.4 would be the AWS gateway address as seen by your firewall.

sudo tcpdump src host 1.2.3.4 -vv
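
You can also test the TLS handshake directly against the master; recent OpenSSL versions (1.1.1 and newer) speak the MySQL STARTTLS protocol (master.example.org is again a placeholder):

openssl s_client -connect master.example.org:3306 -starttls mysql -CAfile ca.pem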




Grafana and InfluxDB with SSL inside a Docker Container

Self-signed SSL certificates

On the host, create a directory for storing the self-signed SSL certificates. This directory will be mounted into the Grafana container as well as into the InfluxDB container at /var/ssl. Create the self-signed SSL certificates as follows:

mkdir -p /docker/ssl
cd /docker/ssl/
# Generate a private key
openssl genrsa -des3 -out server.key 1024
# Generate CSR
openssl req -new -key server.key -out server.csr
# Remove password
openssl rsa -in server.key -out server.key
# Generate self signed cert
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
# Set permissions
sudo chmod 644 server.crt
sudo chmod 600 server.key
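
You can inspect the generated certificate, for instance to confirm the subject and the validity period:

openssl x509 -in server.crt -noout -subject -dates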

Next, create a config directory and individual configuration files for Grafana and InfluxDB: mkdir -p conf/grafana conf/influxdb

Grafana

In the file ./conf/grafana/defaults.ini, set the protocol to https and provide the paths to the mounted SSL directory in the container.

#################################### Server ##############################
[server]
# Protocol (http, https, socket)
protocol = https
...
...
# https certs & key file
cert_file = /var/ssl/server.crt
cert_key = /var/ssl/server.key
...

InfluxDB

The file ./conf/influxdb/influxdb.conf is also pretty simple. Add an [http] section with the following settings:

[meta]
  dir = "/var/lib/influxdb/meta"
[data]
  dir = "/var/lib/influxdb/data"
  engine = "tsm1"
  wal-dir = "/var/lib/influxdb/wal"
[http]
  https-enabled = true
  https-certificate = "/var/ssl/server.crt"
  https-private-key = "/var/ssl/server.key"

Environment

You can set environment variables in env files for the services.

env.grafana

GF_INSTALL_PLUGINS=grafana-clock-panel,briangann-gauge-panel,natel-plotly-panel,grafana-simple-json-datasource

env.influxdb

INFLUXDB_REPORTING_DISABLED=true
INFLUXDB_DB=
INFLUXDB_HTTP_AUTH_ENABLED=true
INFLUXDB_ADMIN_USER=admin
INFLUXDB_ADMIN_PASSWORD=
INFLUXDB_USER=
INFLUXDB_USER_PASSWORD=
INFLUXDB_WRITE_USER=
INFLUXDB_WRITE_USER_PASSWORD=

Docker Compose

Now you can launch the services by running docker-compose up with the following file. Note that the SSL directory and the configuration files are mounted into both containers.

version: '2'

services:
    influxdb:
        image: influxdb:latest
        container_name: influxdb
        ports:
            - "8083:8083"
            - "8086:8086"
            - "8090:8090"
        env_file:
            - 'env.influxdb'
        volumes:
            - data-influxdb:/var/lib/influxdb
            - /docker/ssl:/var/ssl
            - /docker/conf/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf


    grafana:
        image: grafana/grafana:latest
        container_name: grafana
        ports:
            - "3000:3000"
        links:
            - influxdb
        env_file:
            - 'env.grafana'
        volumes:
            - data-grafana:/var/lib/grafana
            - /docker/ssl:/var/ssl
            - /docker/conf/grafana/defaults.ini:/usr/share/grafana/conf/defaults.ini
volumes:
  data-influxdb:
  data-grafana:
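
To bring everything up and smoke-test both endpoints, something like the following should work; -k is needed because the certificates are self-signed, and the InfluxDB /ping endpoint should answer with a 204:

docker-compose up -d
curl -sk -o /dev/null -w "%{http_code}\n" https://localhost:8086/ping
curl -sk -o /dev/null -w "%{http_code}\n" https://localhost:3000/login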

Let's Encrypt Setup

If you require valid certificates, you can also use certificates from Let's Encrypt.

First, create the certificates on the host:

certbot certonly --standalone --preferred-challenges http --renew-by-default -d iot.example.org

Then use this docker-compose file.

version: '2'

services:
    influxdb:
        image: influxdb:latest
        container_name: influxdb
        ports:
            - "8083:8083"
            - "8086:8086"
            - "8090:8090"
        env_file:
            - 'env.influxdb'
        volumes:
            - data-influxdb:/var/lib/influxdb
            - /etc/letsencrypt/live/iot.example.org/fullchain.pem:/var/ssl/server.crt
            - /etc/letsencrypt/live/iot.example.org/privkey.pem:/var/ssl/server.key
            - /docker/conf/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf


    grafana:
        image: grafana/grafana:latest
        container_name: grafana
        ports:
            - "3000:3000"
        links:
            - influxdb
        env_file:
            - 'env.grafana'
        volumes:
            - data-grafana:/var/lib/grafana
            - /etc/letsencrypt/live/iot.example.org/fullchain.pem:/var/ssl/server.crt
            - /etc/letsencrypt/live/iot.example.org/privkey.pem:/var/ssl/server.key
            - /docker/conf/grafana/defaults.ini:/usr/share/grafana/conf/defaults.ini
volumes:
  data-influxdb:
  data-grafana:
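
Note that Let's Encrypt certificates expire after 90 days. A renewal sketch; the containers need a restart to pick up the renewed files:

certbot renew
docker-compose restart influxdb grafana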
