Set up AWS MySQL 5.6 Aurora as a Slave for an External Master with SSL

Setting up Aurora as a slave for an external MySQL server that acts as the master is a bit tricky. Of course we want a secure connection. For this reason we need to create client certificates to be used by AWS RDS. The steps below should work for regular RDS MySQL instances as well.

Generate and Sign the Certificates

The process is actually simple, but AWS is picky about how you generate the certificates. I was using a SHA flag that was accepted by a regular MySQL 5.6 instance, but caused a cryptic (pun intended) MySQL 2026 Generic SSL error, and it was quite hard to find the source. Also note that you need to have different common names (CN) for all three certificate pairs. They do not necessarily need to match the actual domain name, but they need to be different.

First, we need to create the certificate authority that will sign the keys:

# Generate a certificate authority key pair
openssl genrsa 2048 > ca-key.pem
# Notice the CN name. It needs to be different for all of the three key pairs that we create!
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=ca.mysql"

Then create the server key pair

# Generate a server key. Note again the different CN
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=server.mysql"
# Convert the format
openssl rsa -in server-key.pem -out server-key.pem
# Sign it
openssl x509 -req -in server-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

Finally we generate a client certificate and its key. You can repeat these steps to generate multiple certificates for clients

# Again, note the CN
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=client.mysql"
# Convert
openssl rsa -in client-key.pem -out client-key.pem
# Sign
openssl x509 -req -in client-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem
# Verify
openssl verify -CAfile ca.pem server-cert.pem client-cert.pem

Now we have all the certs we need.

Master Setup

The setup is pretty standard. Add the server certificates to the MySQL configuration of your master and restart the service. Adjust the paths to wherever you placed the generated files.

# SSL Server Certificate
ssl-ca=/etc/mysql/ssl/ca.pem
ssl-cert=/etc/mysql/ssl/server-cert.pem
ssl-key=/etc/mysql/ssl/server-key.pem

Then create a replication user for the slave.
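A sketch of such a user, with placeholder name, host pattern and password. In MySQL 5.6 the SSL requirement is attached to the grant; REQUIRE X509 forces the slave to present a valid client certificate:

```sql
-- User name, host pattern and password are placeholders; adjust them to your environment
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'SECRET' REQUIRE X509;
FLUSH PRIVILEGES;
```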


Slave Setup

On AWS you do not have SUPER privileges, but you can use stored procedures provided by Amazon to set up the slave.

Start fresh by removing old records. If there was no previous setup, there might be an error.

CALL mysql.rds_remove_binlog_ssl_material;
CALL mysql.rds_reset_external_master;

Now you need to pass the client certificate data as JSON to AWS Aurora.

CALL mysql.rds_import_binlog_ssl_material('{"ssl_ca":"-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n","ssl_cert":"-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n","ssl_key":"-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----"}');

If you pasted the CA certificate, the client certificate and the client key correctly, a message will confirm that the SSL data was accepted.
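Note that inside the JSON each PEM block must be a single line, with the original line breaks written as literal \n sequences. A small shell helper can produce that format from the files generated earlier; this is just one possible sketch:

```shell
# Collapse a PEM file into a single line with literal \n escapes,
# ready to be pasted into the rds_import_binlog_ssl_material JSON
pem_oneline() {
  awk 'NF {sub(/\r/, ""); printf "%s\\n", $0}' "$1"
}

# Demo with a stand-in file; use ca.pem, client-cert.pem and client-key.pem for real
printf -- '-----BEGIN CERTIFICATE-----\nABCDEF\n-----END CERTIFICATE-----\n' > demo.pem
pem_oneline demo.pem
```

Run it once per file and paste the three outputs into the ssl_ca, ssl_cert and ssl_key fields.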

Finally, start the replication and check the status

CALL mysql.rds_start_replication;
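The replication status can then be checked the usual way:

```sql
SHOW SLAVE STATUS\G
-- Slave_IO_Running and Slave_SQL_Running should both read Yes
```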

Tests and Troubleshooting

On the master, you can check whether the slave even tries to connect, for instance with tcpdump. In the example below, the IP would be the AWS gateway address as seen by your firewall.

sudo tcpdump -vv src host <aws-gateway-ip>



Verifying Replication Consistency with Percona’s pt-table-checksum

Replication is an important concept for improving database performance and security. In this blog post, I would like to demonstrate how the consistency between a MySQL master and a slave can be verified. We will create two Docker containers, one for the master and one for the slave.

Installing the Percona Toolkit

The Percona Toolkit is a collection of useful utilities, which can be obtained for free from the company’s portal. The following commands install the prerequisites, download the package and eventually install it.

sudo apt-get install -y wget libdbi-perl libdbd-mysql-perl libterm-readkey-perl libio-socket-ssl-perl
sudo dpkg -i percona-toolkit_3.0.4-1.xenial_amd64.deb 

Setting up a Test Environment with Docker

The following command creates and starts a docker container. Note that these are minimal examples and are not suitable for a serious environment.

docker run --name mysql_master -e MYSQL_ALLOW_EMPTY_PASSWORD=true -d mysql:5.6 --log-bin \
   --binlog-format=ROW --server-id=1

Get the IP address from the master container:

# Get the IP of the master 
docker inspect mysql_master | grep IPAddress 

"SecondaryIPAddresses": null, 
    "IPAddress": ""

You can connect to this container like this and verify the server id:

stefan@Lenovo ~/Docker-Projects $ mysql -u root -h
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.35-log MySQL Community Server (GPL)

Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show variables like 'server_id';
| Variable_name | Value |
| server_id     | 1     |
1 row in set (0,00 sec)

We repeat the command for the slave, but use a different server id and container name:

docker run --name mysql_slave -e MYSQL_ALLOW_EMPTY_PASSWORD=true -d mysql:5.6 --server-id=2

For simplicity, we did not use Docker links, but will rather use IP addresses assigned by Docker directly.

Replication Setup

First, we need to set up a user with replication privileges. This user will connect from the slave to the master.

# On the host, interact with the master container
## Get the IP address of the slave container
$ docker inspect mysql_slave | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
                    "IPAddress": "",

## Login to the MySQL console of the master
### Grant permissions (the host pattern '%' is used here for simplicity)
mysql> GRANT REPLICATION SLAVE ON *.* TO 'percona'@'%' IDENTIFIED BY 'SECRET';
### Get the current binlog position
mysql> SHOW MASTER STATUS;
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
| mysqld-bin.000002 | 346 | | | |
1 row in set (0,00 sec)

Now log into the slave container and add the connection details for the master:

## Connect to the MySQL Slave instance
$ mysql -u root -h

### Setup the slave (replace <master-container-ip> with the IP obtained above)
mysql> CHANGE MASTER TO MASTER_HOST='<master-container-ip>', MASTER_USER='percona',
    -> MASTER_PASSWORD='SECRET', MASTER_LOG_FILE='mysqld-bin.000002', MASTER_LOG_POS=346;
Query OK, 0 rows affected, 2 warnings (0,05 sec)

### Start and check
mysql>   start slave;
Query OK, 0 rows affected (0,01 sec)

mysql> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_User: percona
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysqld-bin.000002
          Read_Master_Log_Pos: 346
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 284
        Relay_Master_Log_File: mysqld-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

Now our simple slave setup is running.

Get some test data

Let’s download the Sakila test database and import it into the master. It will be replicated to the slave immediately.

~/Docker-Projects $ tar xvfz sakila-db.tar.gz 

mysql -u root -h < sakila-db/sakila-schema.sql 
mysql -u root -h < sakila-db/sakila-data.sql

Verify that the data is on the slave as well:

mysql -u root -h -e "USE sakila;SHOW TABLES;"
| Tables_in_sakila           |
| actor                      |
| actor_info                 |
| address                    |
| category                   |
| city                       |
| country                    |
| customer                   |
| store                      |

After our setup is complete, we can proceed with Percona pt-table-checksum.

Percona pt-table-checksum

The Percona pt-table-checksum tool requires the connection information of the master and the slave in a specific format. This is called the DSN (data source name), which is a comma-separated string. We can store this information in a dedicated database called percona in a table called DSN_Table. We create this table on the master. Note that the data gets replicated to the slave within the blink of an eye.

CREATE DATABASE percona;
USE percona;
CREATE TABLE `DSN_Table` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `dsn` varchar(255) NOT NULL,
  PRIMARY KEY (`id`)
);

The next step involves creating permissions on both the slave and the master:

GRANT ALL PRIVILEGES ON percona.* TO 'percona'@'';

The percona user is needed to run the script. Note that this time the IP address is the one of the (Docker) host. In real world scenarios, this script would either be run on the master or on the slave directly.

Now we need to add the information about the slave to the table we created. The Percona tool could also read this from the process list, but it is more reliable if we add the information ourselves. To do so, we add a record to the table we just created, which describes the slave DSN:

INSERT INTO percona.DSN_Table VALUES (1,'h=,u=percona,p=SECRET,P=3306');

The pt-table-checksum tool then connects to the master instance and to the slave. It computes checksums of all databases and tables and compares the results. You can use the tool like this:

pt-table-checksum --replicate=percona.checksums --create-replicate-table --empty-replicate-table \
  --recursion-method=dsn=t=percona.DSN_Table -h -P 3306 -u percona -pSECRET
09-10T10:13:11      0      0        0       1       0   0.020 mysql.columns_priv
09-10T10:13:11      0      0        3       1       0   0.016 mysql.db
09-10T10:13:11      0      0        0       1       0   0.024 mysql.event
09-10T10:13:11      0      0        0       1       0   0.014 mysql.func
09-10T10:13:11      0      0       40       1       0   0.026 mysql.help_category
09-10T10:13:11      0      0      614       1       0   0.023 mysql.help_keyword
09-10T10:13:11      0      0     1224       1       0   0.022 mysql.help_relation
09-10T10:13:12      0      0      585       1       0   0.266 mysql.help_topic
09-10T10:13:12      0      0        0       1       0   0.031 mysql.ndb_binlog_index
09-10T10:13:12      0      0        0       1       0   0.024 mysql.plugin
09-10T10:13:12      0      0        6       1       0   0.287 mysql.proc
09-10T10:13:12      0      0        0       1       0   0.031 mysql.procs_priv
09-10T10:13:12      0      1        2       1       0   0.020 mysql.proxies_priv
09-10T10:13:12      0      0        0       1       0   0.024 mysql.servers
09-10T10:13:12      0      0        0       1       0   0.017 mysql.tables_priv
09-10T10:13:12      0      0     1820       1       0   0.019 mysql.time_zone
09-10T10:13:12      0      0        0       1       0   0.015 mysql.time_zone_leap_second
09-10T10:13:12      0      0     1820       1       0   0.267 mysql.time_zone_name
09-10T10:13:13      0      0   122530       1       0   0.326 mysql.time_zone_transition
09-10T10:13:13      0      0     8843       1       0   0.289 mysql.time_zone_transition_type
09-10T10:13:13      0      1        4       1       0   0.031 mysql.user
09-10T10:13:13      0      0        1       1       0   0.018 percona.DSN_Table
09-10T10:13:13      0      0      200       1       0   0.028
09-10T10:13:13      0      0      603       1       0   0.023 sakila.address
09-10T10:13:13      0      0       16       1       0   0.033 sakila.category
09-10T10:13:13      0      0      600       1       0   0.023
09-10T10:13:13      0      0      109       1       0   0.029
09-10T10:13:14      0      0      599       1       0   0.279 sakila.customer
09-10T10:13:14      0      0     1000       1       0   0.287
09-10T10:13:14      0      0     5462       1       0   0.299 sakila.film_actor
09-10T10:13:14      0      0     1000       1       0   0.027 sakila.film_category
09-10T10:13:14      0      0     1000       1       0   0.032 sakila.film_text
09-10T10:13:14      0      0     4581       1       0   0.276 sakila.inventory
09-10T10:13:15      0      0        6       1       0   0.030 sakila.language
09-10T10:13:15      0      0    16049       1       0   0.303 sakila.payment
09-10T10:13:15      0      0    16044       1       0   0.310 sakila.rental
09-10T10:13:15      0      0        2       1       0   0.029 sakila.staff
09-10T10:13:15      0      0        2       1       0   0.020

The result shows a difference in the MySQL internal tables for permissions. This is obviously not what we are interested in, as permissions are individual to a host. So we rather exclude the MySQL internal database and also the percona database, because neither is what we are interested in. In order to test that the tool works, we delete the last five category assignments from the film_category table and update a row in the city table:

mysql -u root -h -e "DELETE FROM sakila.film_category WHERE film_id > 995;"
mysql -u root -h -e "UPDATE sakila.city SET city='Innsbruck' WHERE city_id=590;"

Now execute the command again:

pt-table-checksum --replicate=percona.checksums --create-replicate-table --empty-replicate-table \
   --recursion-method=dsn=t=percona.DSN_Table --ignore-databases mysql,percona -h -P 3306 -u percona -pSECRET
09-10T10:46:33      0      0      200       1       0   0.017
09-10T10:46:34      0      0      603       1       0   0.282 sakila.address
09-10T10:46:34      0      0       16       1       0   0.034 sakila.category
09-10T10:46:34      0      1      600       1       0   0.269
09-10T10:46:34      0      0      109       1       0   0.028
09-10T10:46:34      0      0      599       1       0   0.285 sakila.customer
09-10T10:46:35      0      0     1000       1       0   0.297
09-10T10:46:35      0      0     5462       1       0   0.294 sakila.film_actor
09-10T10:46:35      0      1     1000       1       0   0.025 sakila.film_category
09-10T10:46:35      0      0     1000       1       0   0.031 sakila.film_text
09-10T10:46:35      0      0     4581       1       0   0.287 sakila.inventory
09-10T10:46:35      0      0        6       1       0   0.035 sakila.language
09-10T10:46:36      0      0    16049       1       0   0.312 sakila.payment
09-10T10:46:36      0      0    16044       1       0   0.320 sakila.rental
09-10T10:46:36      0      0        2       1       0   0.030 sakila.staff
09-10T10:46:36      0      0        2       1       0   0.027

You see that there is a difference in the sakila.city table and in the table sakila.film_category. The tool does not report the actual number of differences, but rather the number of different chunks. To get the actual differences, we need to use a different tool, which utilises the checksum table that the previous step created.

Show the differences with pt-table-sync

The pt-table-sync tool is the counterpart to the pt-table-checksum utility. It can print or even execute the SQL statements that would bring the slave back in sync with the master. We run a dry run first, as the tool is potentially dangerous.

pt-table-sync --dry-run  --replicate=percona.checksums --sync-to-master h= -P 3306 \
   -u percona -pSECRET --ignore-databases mysql,percona
# NOTE: --dry-run does not show if data needs to be synced because it
#       does not access, compare or sync data.  --dry-run only shows
#       the work that would be done.
# Syncing via replication P=3306,h=,p=...,u=percona in dry-run mode, without accessing or comparing data
#      0       0      0      0 Chunk     08:57:51 08:57:51 0
#      0       0      0      0 Nibble    08:57:51 08:57:51 0    sakila.film_category

With --dry-run, you only see the affected tables, but not the actual data, because the tool does not really access the tables in question. Use --print additionally or instead of --dry-run to get a list:

pt-table-sync --print --replicate=percona.checksums --sync-to-master h= -P 3306 \
  -u percona -pSECRET --ignore-databases mysql,percona
REPLACE INTO `sakila`.`city`(`city_id`, `city`, `country_id`, `last_update`) VALUES \
   ('590', 'Yuncheng', '23', '2006-02-15 04:45:25') 
  \ /*percona-toolkit src_db:sakila src_tbl:city  ...
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ... 
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ('997',... 
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ('998', '11 ...
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ('999', '3', ...
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ('1000', '5', ... 

The output shows how we can change the city back from Innsbruck to Yuncheng and also provides the REPLACE statements to restore the deleted records. When we replace --print with --execute, the data gets written to the master and replicated to the slave. To allow this, we need to set the permissions on the master:

GRANT INSERT, UPDATE, DELETE ON sakila.* TO 'percona'@'';
pt-table-sync --execute  --replicate=percona.checksums --check-child-tables \ 
  --sync-to-master h= -P 3306 -u percona -pSECRET --ignore-databases mysql,percona
REPLACE statements on `sakila`.`city` can adversely affect child table `sakila`.`address`
   because it has an ON UPDATE CASCADE foreign key constraint.
   See --[no]check-child-tables in the documentation for more information.
   --check-child-tables error while doing `sakila`.`city` on

This error indicates that updating the city table has consequences, because child tables reference it via foreign keys. In this example, we are bold and ignore this warning. This is absolutely not recommended for real world scenarios.

pt-table-sync --execute  --replicate=percona.checksums --no-check-child-tables \
   --no-foreign-key-checks --sync-to-master h= -P 3306 -u percona -pSECRET \ 
   --ignore-databases mysql,percona

The option --no-check-child-tables skips the child table check and --no-foreign-key-checks disables foreign key checks.

Run the checksum command again to verify that the data has been restored:

pt-table-checksum --replicate=percona.checksums --create-replicate-table --empty-replicate-table \ 
   --recursion-method=dsn=t=percona.DSN_Table --ignore-databases mysql,percona 
   -h -P 3306 -u percona -pSECRET

09-10T11:24:42      0      0      200       1       0   0.268
09-10T11:24:42      0      0      603       1       0   0.033 sakila.address
09-10T11:24:42      0      0       16       1       0   0.029 sakila.category
09-10T11:24:42      0      0      600       1       0   0.275
09-10T11:24:42      0      0      109       1       0   0.023
09-10T11:24:43      0      0      599       1       0   0.282 sakila.customer
09-10T11:24:43      0      0     1000       1       0   0.046
09-10T11:24:43      0      0     5462       1       0   0.284 sakila.film_actor
09-10T11:24:43      0      0     1000       1       0   0.036 sakila.film_category
09-10T11:24:43      0      0     1000       1       0   0.279 sakila.film_text
09-10T11:24:44      0      0     4581       1       0   0.292 sakila.inventory
09-10T11:24:44      0      0        6       1       0   0.031 sakila.language
09-10T11:24:44      0      0    16049       1       0   0.309 sakila.payment
09-10T11:24:44      0      0    16044       1       0   0.325 sakila.rental
09-10T11:24:44      0      0        2       1       0   0.029 sakila.staff
09-10T11:24:44      0      0        2       1       0   0.028

0 DIFFS, we are done!


A MySQL 5.7 Cluster Based on Ubuntu 16.04 LTS – Part 1

A Cluster Scenario

In this example we create the smallest possible MySQL cluster based on four nodes running on three machines. Node 1 will run the cluster management software, Node 2 and Node 3 will serve as data nodes, and Node 4 is the MySQL API node, which runs on the same VM as Node 1.

  +------------------+ +------------------+ +-------------------+ 
  |    Node 1 & 4    | |      Node 2      | |       Node 3      | 
  | IP: | | IP: | | IP:  | 
  |  ndb_mgm; MySQL  | |       ndbd       | |       ndbd        |
  +------------------+ +------------------+ +-------------------+

|                         VirtualBox Host                          |

What is a Cluster and When to Use it?

When applications grow, there are several possibilities to improve the performance of the database layer. The two technologies used in most scenarios are replication and clustering.

Replication is used for moving the data from a master database to one or more slave databases. By default, this data exchange is asynchronous. Semi-synchronous data replication can be implemented, but if you need synchronous data replication on several machines, clusters are the way to go.

A MySQL Cluster implements a shared nothing architecture and supports synchronous replication. You can very easily add and remove nodes to your cluster and scale out horizontally, which is especially helpful if you need to scale database writes. Clusters use a special storage engine called NDB. This technology offers high availability and high redundancy. The documentation for a MySQL 5.7 cluster can be found here.


In this example, we will use VirtualBox as the virtualization platform and run three nodes as the smallest possible cluster. In the first step, we create a template virtual machine, which comes with the basic packages, passwordless key-based authentication and a manually configured IP address. We will clone this machine and adapt it to our needs. Parts of this tutorial are based on the Digital Ocean howto.

The Template

We create the first VM based on Ubuntu 16.04 LTS and install the basics. MySQL requires the libaio1 package, which we install in the template like this:

sudo apt-get install libaio1

Then it is time to download the MySQL Cluster software, which can be obtained for free from the MySQL developer page. These are the two steps necessary:

# Download
# Install
sudo dpkg -i mysql-cluster-gpl-7.5.4-debian7-x86_64.deb

Assign a static IP address in the file /etc/network/interfaces.

# The loopback network interface
auto lo
iface lo inet loopback

## The primary network interface
auto enp0s3
iface enp0s3 inet static

Make sure to install the guest additions, add a MySQL user, copy the MySQL startup script etc. Now the template machine is ready. Shut it down and clone it twice. Make sure to assign a new MAC address to each clone and then a different, static IP address.

Setting Up the Machines

In the following steps, we are going to set up the virtual machines. We run three machines which provide four services.

Install the Manager on Node 1

The cluster will be controlled via Node 1. First, create a data directory, where we will store the configuration files.

mkdir /opt/mysql/server-5.7/mysql-cluster/cluster-data
chown -R mysql:mysql /opt/mysql/server-5.7/mysql-cluster/cluster-data

Then create the configuration file. This file defines the cluster and will be read during the startup of the cluster. Each node needs to have a unique id. Usually, the ID starts with 0, but in this example we defined the node IDs starting from 1 in order to align them with the IP addresses.

root@node1:~# cat /opt/mysql/server-5.7/mysql-cluster/config.ini
# Management process options:
[ndb_mgmd]
hostname= #node1  # Hostname of the manager
datadir=/opt/mysql/server-5.7/mysql-cluster/cluster-data  # Directory for the log files

# Data node options:
[ndbd]
hostname= #node2    # Hostname of the first data node
datadir=/usr/local/mysql/data   # Remote directory for the data files

[ndbd]
hostname= #node3    # Hostname of the second data node
datadir=/usr/local/mysql/data   # Remote directory for the data files

# SQL node options:
[mysqld]
hostname= #node4, runs on the same VM as the manager

Note that if you make changes to the config.ini file, they might not be loaded directly, because the configuration is read from a cache file. So if you change for instance the data directory, make sure to delete all cached files, e.g. ndb_1_config.bin.1*.

Install the MySQL Server on Node1

The installation of the MySQL service is pretty straightforward and follows a similar pattern as a regular MySQL setup. You just need to ensure that you use the cluster version, which we already installed in the template machine, and create symlinks accordingly. The configuration of the MySQL server is done in /etc/my.cnf, as usual. In the following you can see an example configuration.

root@node1:~# cat /etc/my.cnf
[mysqld_safe]
socket          = /tmp/mysql.sock
nice            = 0

[mysqld]
# * Basic Settings
user            = mysql
pid-file        = /tmp/
socket          = /tmp/mysql.sock
port            = 3306
basedir         = /opt/mysql/server-5.7
datadir         = /opt/mysql/server-5.7/data
tmpdir          = /tmp

key_buffer_size         = 16M
max_allowed_packet      = 16M

query_cache_limit       = 1M
query_cache_size        = 16M

log_error = /var/log/mysql/error.log

server-id              = 1
expire_logs_days       = 10
max_binlog_size   = 100M

ndbcluster # run NDB storage engine

# Options for MySQL Cluster processes:
[mysql_cluster]
ndb-connectstring=  # location of management server

The most important bits are at the bottom of the file. The option ndbcluster enables the NDB storage engine and the section [mysql_cluster] points to the cluster management node. In our case, the MySQL server node and the cluster management node are on the same machine. In larger settings it can be beneficial to have a dedicated VM for each task.

Make sure to set the permissions for the data directory correctly and to place the startup script in /etc/init.d/mysql. Then start the MySQL server on Node 1 and check the error.log file for any strange messages.

Setting up the Data Nodes

In the next step, we need to configure the data nodes. Launch Node 2 and login.

# Create the data directory
sudo mkdir -p /var/mysql/data
# permissions
sudo chown -R mysql:mysql /var/mysql

The basic configuration is very simple. Just edit the /etc/my.cnf file and provide the address of the management node.

stefan@node2:~$ cat /etc/my.cnf
# Options for mysqld process:
[mysqld]
ndbcluster # run NDB storage engine

# Location of the cluster management node:
[mysql_cluster]
ndb-connectstring=

Then we can start the data node with the following command:

sudo /opt/mysql/server-5.7/bin/ndbd  --initial

You should see something like this:

stefan@node3:~$ sudo /opt/mysql/server-5.7/bin/ndbd 
2016-11-26 16:39:29 [ndbd] INFO     -- Angel connected to ''
2016-11-26 16:39:29 [ndbd] INFO     -- Angel allocated nodeid: 3

If something goes wrong, check the error log file called ndb_X_out.log, where X is the node id. So in this example the actual file name is ndb_2_out.log. Proceed the same way with the second data node called Node 3.

Starting the Cluster

Now comes the great moment. We start the cluster. Please note that you cannot start the cluster before you have configured at least two data nodes and a MySQL API node. All components need to be running, otherwise there will be error messages.

/opt/mysql/server-5.7/bin/ndb_mgmd --reload -f /opt/mysql/server-5.7/mysql-cluster/config.ini

If there are no errors, we can log into the management console for the cluster with the ndb_mgm command. You can then get an overview of your cluster with the show command. In the ideal case, the output reads as follows:

root@node1:~# /opt/mysql/server-5.7/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at:
Cluster Configuration
[ndbd(NDB)]	2 node(s)
id=2	@  (mysql-5.7.16 ndb-7.5.4, Nodegroup: 0, *)
id=3	@  (mysql-5.7.16 ndb-7.5.4, Nodegroup: 0)

[ndb_mgmd(MGM)]	1 node(s)
id=1	@  (mysql-5.7.16 ndb-7.5.4)

[mysqld(API)]	1 node(s)
id=4	@  (mysql-5.7.16 ndb-7.5.4)

Connecting to MySQL

Now that our cluster is up and running, we can interact with MySQL by connecting to the MySQL server running on Node 1. In order to distribute the data automatically over the nodes, we need to utilise the NDB engine when we create our tables. This engine is used in place of InnoDB and comes with a different set of features. A comparison is given here. Connect to the MySQL instance on Node 1 as you always would.

root@node1:~# mysql -u root -p
Enter password:

Then we can create a test table to see if the system works properly. In contrast to replication, each node does not store the complete data set; instead, the data is distributed over at least two data nodes. This is also the reason why we need at least two of them.
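A minimal sketch of such a test (database and table names are made up); the important part is ENGINE=NDBCLUSTER, which stores the table in the data nodes:

```sql
CREATE DATABASE IF NOT EXISTS clustertest;
USE clustertest;
-- The NDB engine distributes this table over the data nodes
CREATE TABLE pings (
  id INT NOT NULL PRIMARY KEY,
  note VARCHAR(32)
) ENGINE=NDBCLUSTER;
INSERT INTO pings VALUES (1, 'hello cluster');
SELECT * FROM pings;
```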

Sakila in the Cluster

We can use the famous Sakila database that is available from the MySQL web site to test our Cluster briefly. The following series of commands retrieves the database and imports it into our cluster.

# Fetch the data
# Extract
tar xvfz sakila-db.tar.gz 
cd sakila-db/
# Change the default engine to the cluster engine
sed -i 's/ENGINE=InnoDB/ENGINE=ndbcluster/g' sakila-schema.sql
# Import schema and data
mysql -u root -p  < sakila-schema.sql 
mysql -u root -p sakila < sakila-data.sql 

Now the data resides in our cluster and is distributed via our two nodes. We can run the following sample query and the system transparently gets the data from the two nodes.

USE sakila;
SELECT CONCAT(customer.last_name, ', ', customer.first_name) AS customer, address.phone, film.title
    FROM rental INNER JOIN customer ON rental.customer_id = customer.customer_id
    INNER JOIN address ON customer.address_id = address.address_id
    INNER JOIN inventory ON rental.inventory_id = inventory.inventory_id
    INNER JOIN film ON inventory.film_id = film.film_id
    WHERE rental.return_date IS NULL
    AND rental_date + INTERVAL film.rental_duration DAY < CURRENT_DATE();
183 rows in set (0.00 sec)


If you try to start the cluster before all components are up and ready, finding the error can be a bit tricky. In the following example, we did not start the data nodes. The MySQL API node is running, but it does not show up as connected.

root@node1:~# /opt/mysql/server-5.7/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at:
Cluster Configuration
[ndbd(NDB)]	2 node(s)
id=2 (not connected, accepting connect from
id=3 (not connected, accepting connect from

[ndb_mgmd(MGM)]	1 node(s)
id=1	@  (mysql-5.7.16 ndb-7.5.4)

[mysqld(API)]	1 node(s)
id=4 (not connected, accepting connect from

Check the error log of the management node in that case. As long as the cluster does not consist of at least two data nodes and a MySQL API node, it will not start.
