Enterprise Monitor: “Add Bulk MySQL Instances”, 50 in one click.

Carrying on with my MySQL 5.7 Labs Multi Source Replication scenario, I wanted to evaluate performance impact via MySQL Enterprise Monitor.

While opening my environment, I remembered that I had generated lots of different skeleton scripts that allowed me to deploy the 50 servers quickly, and I didn’t want to add each of my targets one by one in MEM. So I used one of the many features available, “Add Bulk MySQL Instances”.

So, I’ve got 50 (3001-3050) masters but only 1 slave (3100).

By default, MEM monitors its own repository, i.e. the 1/1 server being monitored in the All group.

I want to add my slave in first, because that’s how I’m organizing things, and I’ll take the opportunity to create the monitoring group I want to keep them all in.

Click on “Add MySQL Instance” and after entering the server details for 3100, I go to Group Settings, and add a special group for all my servers:

Adding a New Group

As no group existed before, I type the name I want to give the new group, and the comment “new group” appears, confirming that it is indeed a new group.

In fact, in the greyed-out part, you can see that 45 of the 50 are unmonitored. I’ve still got 5 to start, but I’m almost there.

Now to add them in bulk mode.

I’ve got a few I haven’t started, and MEM tells me that there are 44 instances up and running and eager to be monitored:

44 of the 50 new instances are unmonitored.

Click on the Add Bulk MySQL Instances button.

The following window appears:

One window, 50 servers added.

So, with some copying & pasting, I’ve got all 50 servers added into the Connection Endpoint window, made sure each port is correct, and entered the root user & password, which is the same for all servers here. I also chose not to create the general & limited users, but could have done so instead of monitoring everything with the root user.

And that’s it. All the servers that are up and running are added. As I haven’t started them all, I will see those 6 as Bad Configurations, i.e. unreachable, because it’s hard to connect to something that just isn’t there…

Now I’m going to go away and look at the performance graphs for the different load tests.

A multi-source replication scenario: 10 masters, 1 slave.

A customer asked whether they could have 10 or more masters consolidating all their data into a single central slave. After getting a bit more information from them and seeing the application functionality, it was clear that MySQL Labs 5.7.5 Multi Source Replication could be a good candidate. Why?:
– Each master is independent from the rest of the masters.
– One-way traffic: there is only one way to update a row, and that’s from the master.
– All the masters use the same schema and table, but no single master will ever need to, nor be able to update a row from another master.
– The primary key is determined by the application and each master’s environment.

Multi-source replication is still a MySQL Labs feature (http://labs.mysql.com), but here’s what I did to test it out.

First, I read:
http://on-mysql-replication.blogspot.se/2013/09/mysql-labs-multi-source-replication.html

Then, downloaded what I needed from labs.mysql.com:

mysql-5.7.5-labs-msr-preview-linux-el6-x86_64.tar.gz

And tried for myself:

cd /opt/mysql/msr
mkdir -p 3100/data 3001/data 3002/data 3003/data 3004/data 3005/data 3006/data 3007/data 3008/data 3009/data 3010/data
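
Since I’ll be scaling this out later, a quick loop does the same and saves typing (a rough sketch, using the same paths as above):

# Create the datadirs for the slave (3100) and the masters (3001-3010).
cd /opt/mysql/msr
mkdir -p 3100/data
for port in $(seq 3001 3010); do
    mkdir -p ${port}/data
done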

How many masters do I want to test? Well.. for now 10, but I’ll see later how far I can go.

My setup will look like the following, all on the same host:

Each instance gets its own datadir, port, socket location and server-id:

3100               the single slave

3001 – 3010   the 10 masters

and a my.cnf that will let me use mysqld_multi and make my life a whole lot easier, as all this is being done on a single machine:

[mysqld_safe]
log-error                       =/opt/mysql/msr/mysqld.log

[mysqld_multi]
#no-log
log                             =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/msr/mysqld_multi.log
mysqld                          =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/bin/mysqld_safe
mysqladmin                      =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/bin/mysqladmin
user                            =root
password                        =pass

[mysqld]
basedir                         =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64
performance-schema-instrument   ='%=ON'

[mysqld3100]
server-id                       =3100
mysqld                          =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/bin/mysqld_safe
ledir                           =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/bin
port                            =3100
pid-file                        =/opt/mysql/msr/3100/pchost_3100.pid
socket                          =/opt/mysql/msr/3100/mysql.sock
datadir                         =/opt/mysql/msr/3100/data
log-error                       =/opt/mysql/msr/3100/msr.err
innodb_buffer_pool_size         =40M
innodb_file_per_table           =1
innodb_log_buffer_size          =2M
innodb_log_file_size            =12M
innodb_flush_log_at_trx_commit  =2
innodb_data_file_path           =ibdata1:12M;ibdata2:12M:autoextend
innodb_open_files               =50
language                        =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/share/english
table_open_cache                =80
open_files_limit                =100
query_cache_size                =0
max_connections                 =10
user                            =mysql
# Replication
#log-bin                        =3100
binlog-row-image                =minimal
binlog-rows-query-log-events    =1
log-slave-updates               =TRUE
gtid-mode                       =ON
enforce-gtid-consistency        =TRUE
master-info-repository          =TABLE
relay-log-info-repository       =TABLE
sync_binlog                     =1
sync_master_info                =1
slave-parallel-workers          =2
slave_transaction_retries       =0
binlog-checksum                 =CRC32
master-verify-checksum          =1
slave-sql-verify-checksum       =1
binlog_format                   =ROW
report-host                     =pchost
report-port                     =3100
replicate-ignore-db             =nexus

[mysqld3001]
server-id                       =3001
mysqld                          =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/bin/mysqld_safe
ledir                           =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/bin
port                            =3001
pid-file                        =/opt/mysql/msr/3001/pchost_3001.pid
...
[mysqld3002]
...
[mysqld3003]
...
..
[mysqld3010]
...

All entries were the same for each [mysqld300x] section, except for the obvious, i.e. port, datadir, socket, pid file, etc.
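
Rather than typing ten near-identical sections by hand, a small shell loop can append them to the shared my.cnf (a sketch, using the same paths as above; add or adjust whatever per-instance options you need):

# Append one [mysqld300x] section per master to the shared my.cnf.
BASEDIR=/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64
CNF=/usr/local/mysql/msr/my.cnf
for port in $(seq 3001 3010); do
cat >> ${CNF} <<EOF

[mysqld${port}]
server-id                       =${port}
mysqld                          =${BASEDIR}/bin/mysqld_safe
ledir                           =${BASEDIR}/bin
port                            =${port}
pid-file                        =/opt/mysql/msr/${port}/pchost_${port}.pid
socket                          =/opt/mysql/msr/${port}/mysql.sock
datadir                         =/opt/mysql/msr/${port}/data
log-error                       =/opt/mysql/msr/${port}/msr.err
EOF
done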

And once that’s done, install each server’s datadir, from 3100, 3001 to 3010:

bin/mysql_install_db --user=mysql --datadir=/opt/mysql/msr/3100/data --basedir=/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64
bin/mysqld_multi --defaults-file=/usr/local/mysql/msr/my.cnf  start 3100
cat ~/.mysql_secret
bin/mysqladmin -uroot -p -S/opt/mysql/msr/3100/mysql.sock password 'pass'

Remember, this is a 5.7.5 Labs build, i.e. be aware of some changes occurring in this area (bin/mysql_install_db).
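
To repeat that for all eleven instances, something along these lines works (a rough sketch; it assumes the per-instance sockets follow the same pattern as 3100 and that each install drops its random root password into ~/.mysql_secret):

# Initialise and start every instance: the slave (3100) plus the masters (3001-3010).
cd /usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64
for port in 3100 $(seq 3001 3010); do
    bin/mysql_install_db --user=mysql --datadir=/opt/mysql/msr/${port}/data --basedir=${PWD}
    bin/mysqld_multi --defaults-file=/usr/local/mysql/msr/my.cnf start ${port}
done
# Then, per instance, read the random root password from ~/.mysql_secret and replace it:
# bin/mysqladmin -uroot -p -S /opt/mysql/msr/${port}/mysql.sock password 'pass'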

Let’s comment out the log-bin entry on all 10 masters first, as I don’t want what I’m about to do to be logged and break replication before I’ve even started it all up:

bin/mysqld_multi --defaults-file=/usr/local/mysql/msr/my.cnf  stop 3001-3010
bin/mysqld_multi --defaults-file=/usr/local/mysql/msr/my.cnf  start 3001-3010

Now, on all the masters, 3001-3010 and on the slave, 3100, create the table:

 create database coredb;
 use coredb;
 create table recording_data (
   centre_code SET('A','AB','AC','AL','B','BI','BU','C','CA','CE','CO','CU') NOT NULL,
   case_id INT NOT NULL,
   user_email VARCHAR(55) NOT NULL,
   start_info DATETIME NOT NULL,
   end_info DATETIME NOT NULL,
   INDEX (user_email),
   PRIMARY KEY (centre_code,case_id)
 );

The app at each master has programming logic and configuration that force each user to use the default value of centre_code for that specific centre, with no way to adjust it.

Also, case_id could be an auto-increment, but for simplicity’s sake I will force a common id just to prove my point.

These two factors mean that when multi-source replication happens on the slave, every row arriving from each master will be unique, i.e. no need for conflict resolution or headaches along those lines.
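
Just to illustrate the idea (in the real setup it’s the application config, not the database, that enforces this), you could pin a per-master default for centre_code, e.g. on master 3001:

# Hypothetical illustration only: give master 3001 its own centre_code default.
bin/mysql -uroot -ppass -S /opt/mysql/msr/3001/mysql.sock -e "
    ALTER TABLE coredb.recording_data ALTER centre_code SET DEFAULT 'A';"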

Now, for the masters, uncomment log-bin again, as we’ll obviously need it, and restart masters 3001-3010.

Create the replication user on each master (there’s only one slave connecting, so it can be the same user on all the masters):

create user 'rpl'@'localhost' identified by '';
grant replication slave on *.*   to 'rpl'@'localhost';
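
To avoid doing that ten times over, a loop like this creates the user on every master (a sketch, assuming the per-instance sockets follow the same pattern as 3100):

# Create the replication user on each master via its local socket.
for port in $(seq 3001 3010); do
    bin/mysql -uroot -ppass -S /opt/mysql/msr/${port}/mysql.sock -e "
        CREATE USER 'rpl'@'localhost' IDENTIFIED BY '';
        GRANT REPLICATION SLAVE ON *.* TO 'rpl'@'localhost';"
done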

Let’s get replicating: go to the slave and create a channel for each individual master:

change master to master_host='localhost', master_user='rpl', master_port=3001,  master_auto_position=1 for channel "CHANNEL1";
change master to master_host='localhost', master_user='rpl', master_port=3002,  master_auto_position=1 for channel "CHANNEL2";
change master to master_host='localhost', master_user='rpl', master_port=3003,  master_auto_position=1 for channel "CHANNEL3";
...
change master to master_host='localhost', master_user='rpl', master_port=3010,  master_auto_position=1 for channel "CHANNEL10";

And start each channel:

start slave for channel "CHANNEL1";
start slave for channel "CHANNEL2";
start slave for channel "CHANNEL3";
...
start slave for channel "CHANNEL10";
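
And again, rather than typing out ten channels, the same thing in one loop against the slave (a sketch; adjust the credentials and socket to taste):

# On the slave (3100): define and start one channel per master.
for i in $(seq 1 10); do
    bin/mysql -uroot -ppass -S /opt/mysql/msr/3100/mysql.sock -e "
        CHANGE MASTER TO master_host='localhost', master_user='rpl', master_port=$((3000 + i)),
            master_auto_position=1 FOR CHANNEL 'CHANNEL${i}';
        START SLAVE FOR CHANNEL 'CHANNEL${i}';"
done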

Right.. all’s well, but now for some data.

We insert some data, unique to each master, and see what happens:

On master 3001:
 use coredb
 insert into recording_data values
  ('A',1234567890,'prueba1@test.com',sysdate()-1,sysdate()+1),
  ('A',1234567891,'prueba1@test.com',sysdate()-1,sysdate()+1),
  ('A',1234567893,'prueba1@test.com',sysdate()-1,sysdate()+1)
 ;
On master 3002:
 use coredb
 insert into recording_data values
  ('AB',1234567890,'prueba1@test.com',sysdate()-1,sysdate()+1),
  ('AB',1234567891,'prueba1@test.com',sysdate()-1,sysdate()+1),
  ('AB',1234567893,'prueba1@test.com',sysdate()-1,sysdate()+1)
 ;
...
On master 3010:
 use coredb
 insert into recording_data values
  ('CE',1234567890,'prueba1@test.com',sysdate()-1,sysdate()+1),
  ('CE',1234567891,'prueba1@test.com',sysdate()-1,sysdate()+1),
  ('CE',1234567893,'prueba1@test.com',sysdate()-1,sysdate()+1)
 ;

And on the slave 3100, what do we see:

bin/mysql -uroot -p -S/opt/mysql/msr/3100/mysql.sock

mysql> use coredb;
Database changed
mysql> select * from recording_data ;
+-------------+------------+------------------+---------------------+---------------------+
| centre_code | case_id    | user_email       | start_info          | end_info            |
+-------------+------------+------------------+---------------------+---------------------+
| A           | 1234567890 | prueba1@test.com | 2015-01-26 19:02:08 | 2015-01-26 19:02:10 |
| A           | 1234567891 | prueba1@test.com | 2015-01-26 19:02:08 | 2015-01-26 19:02:10 |
| A           | 1234567893 | prueba1@test.com | 2015-01-26 19:02:08 | 2015-01-26 19:02:10 |
| AB          | 1234567890 | prueba1@test.com | 2015-01-26 19:02:25 | 2015-01-26 19:02:27 |
| AB          | 1234567891 | prueba1@test.com | 2015-01-26 19:02:25 | 2015-01-26 19:02:27 |
| AB          | 1234567893 | prueba1@test.com | 2015-01-26 19:02:25 | 2015-01-26 19:02:27 |
| AC          | 1234567890 | prueba1@test.com | 2015-01-27 12:59:48 | 2015-01-27 12:59:50 |
| AC          | 1234567891 | prueba1@test.com | 2015-01-27 12:59:48 | 2015-01-27 12:59:50 |
| AC          | 1234567893 | prueba1@test.com | 2015-01-27 12:59:48 | 2015-01-27 12:59:50 |
| AL          | 1234567890 | prueba1@test.com | 2015-01-27 13:13:35 | 2015-01-27 13:13:37 |
| AL          | 1234567891 | prueba1@test.com | 2015-01-27 13:13:35 | 2015-01-27 13:13:37 |
| AL          | 1234567893 | prueba1@test.com | 2015-01-27 13:13:35 | 2015-01-27 13:13:37 |
| B           | 1234567890 | prueba1@test.com | 2015-01-27 13:13:56 | 2015-01-27 13:13:58 |
| B           | 1234567891 | prueba1@test.com | 2015-01-27 13:13:56 | 2015-01-27 13:13:58 |
| B           | 1234567893 | prueba1@test.com | 2015-01-27 13:13:56 | 2015-01-27 13:13:58 |
| BI          | 1234567890 | prueba1@test.com | 2015-01-27 13:15:16 | 2015-01-27 13:15:18 |
| BI          | 1234567891 | prueba1@test.com | 2015-01-27 13:15:16 | 2015-01-27 13:15:18 |
| BI          | 1234567893 | prueba1@test.com | 2015-01-27 13:15:16 | 2015-01-27 13:15:18 |
| BU          | 1234567890 | prueba1@test.com | 2015-01-27 13:15:34 | 2015-01-27 13:15:36 |
| BU          | 1234567891 | prueba1@test.com | 2015-01-27 13:15:34 | 2015-01-27 13:15:36 |
| BU          | 1234567893 | prueba1@test.com | 2015-01-27 13:15:34 | 2015-01-27 13:15:36 |
| C           | 1234567890 | prueba1@test.com | 2015-01-27 13:15:52 | 2015-01-27 13:15:54 |
| C           | 1234567891 | prueba1@test.com | 2015-01-27 13:15:52 | 2015-01-27 13:15:54 |
| C           | 1234567893 | prueba1@test.com | 2015-01-27 13:15:52 | 2015-01-27 13:15:54 |
| CA          | 1234567890 | prueba1@test.com | 2015-01-27 13:16:11 | 2015-01-27 13:16:13 |
| CA          | 1234567891 | prueba1@test.com | 2015-01-27 13:16:11 | 2015-01-27 13:16:13 |
| CA          | 1234567893 | prueba1@test.com | 2015-01-27 13:16:11 | 2015-01-27 13:16:13 |
| CE          | 1234567890 | prueba1@test.com | 2015-01-27 13:16:30 | 2015-01-27 13:16:32 |
| CE          | 1234567891 | prueba1@test.com | 2015-01-27 13:16:30 | 2015-01-27 13:16:32 |
| CE          | 1234567893 | prueba1@test.com | 2015-01-27 13:16:30 | 2015-01-27 13:16:32 |
+-------------+------------+------------------+---------------------+---------------------+
30 rows in set (0,00 sec)

And, as mentioned in “MySQL Labs: Multi Source Replication – examples”, now to use Performance Schema to see all my masters:

mysql> select * from performance_schema.replication_execute_status_by_worker;
+--------------+-----------+-----------+---------------+----------------------------------------+-------------------+--------------------+----------------------+
| CHANNEL_NAME | WORKER_ID | THREAD_ID | SERVICE_STATE | LAST_SEEN_TRANSACTION                  | LAST_ERROR_NUMBER | LAST_ERROR_MESSAGE | LAST_ERROR_TIMESTAMP |
+--------------+-----------+-----------+---------------+----------------------------------------+-------------------+--------------------+----------------------+
| channel1     |         1 |        27 | ON            | c1b91daa-a55a-11e4-be70-b86b23917877:6 |                 0 |                    | 0000-00-00 00:00:00  |
| channel1     |         2 |        28 | ON            | c1b91daa-a55a-11e4-be70-b86b23917877:7 |                 0 |                    | 0000-00-00 00:00:00  |
| channel2     |         1 |        44 | ON            | fd35c073-a55e-11e4-be8c-b86b23917877:7 |                 0 |                    | 0000-00-00 00:00:00  |
| channel2     |         2 |        45 | ON            | fd35c073-a55e-11e4-be8c-b86b23917877:8 |                 0 |                    | 0000-00-00 00:00:00  |
| channel3     |         1 |        50 | ON            | 3183d919-a569-11e4-bece-b86b23917877:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel3     |         2 |        51 | ON            | 3183d919-a569-11e4-bece-b86b23917877:2 |                 0 |                    | 0000-00-00 00:00:00  |
| channel4     |         1 |        54 | ON            | 2205d704-a56c-11e4-bee1-b86b23917877:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel4     |         2 |        55 | ON            | 2205d704-a56c-11e4-bee1-b86b23917877:2 |                 0 |                    | 0000-00-00 00:00:00  |
| channel5     |         1 |        58 | ON            | 3e007b4e-a56c-11e4-bee2-b86b23917877:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel5     |         2 |        59 | ON            | 3e007b4e-a56c-11e4-bee2-b86b23917877:4 |                 0 |                    | 0000-00-00 00:00:00  |
| channel6     |         1 |        62 | ON            | fecf246b-a616-11e4-833c-7c7a91bc3176:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel6     |         2 |        63 | ON            | fecf246b-a616-11e4-833c-7c7a91bc3176:2 |                 0 |                    | 0000-00-00 00:00:00  |
| channel7     |         1 |        66 | ON            | 6ccfd319-a617-11e4-833f-7c7a91bc3176:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel7     |         2 |        67 | ON            | 6ccfd319-a617-11e4-833f-7c7a91bc3176:2 |                 0 |                    | 0000-00-00 00:00:00  |
| channel8     |         1 |        70 | ON            | a800f433-a617-11e4-8341-7c7a91bc3176:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel8     |         2 |        71 | ON            | a800f433-a617-11e4-8341-7c7a91bc3176:2 |                 0 |                    | 0000-00-00 00:00:00  |
| channel9     |         1 |        74 | ON            | bd6fac63-a617-11e4-8341-7c7a91bc3176:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel9     |         2 |        75 | ON            | bd6fac63-a617-11e4-8341-7c7a91bc3176:2 |                 0 |                    | 0000-00-00 00:00:00  |
| channel10    |         1 |        78 | ON            | d1d47dec-a618-11e4-8348-7c7a91bc3176:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel10    |         2 |        79 | ON            | d1d47dec-a618-11e4-8348-7c7a91bc3176:2 |                 0 |                    | 0000-00-00 00:00:00  |
+--------------+-----------+-----------+---------------+----------------------------------------+-------------------+--------------------+----------------------+
20 rows in set (0,00 sec)
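
For the receiver (IO thread) side of each channel, the connection status table in the same labs build can be queried too (a sketch; I’m assuming it carries CHANNEL_NAME just like the worker table above):

# Per-channel IO thread state on the slave.
bin/mysql -uroot -ppass -S /opt/mysql/msr/3100/mysql.sock -e "
    SELECT CHANNEL_NAME, SERVICE_STATE, LAST_ERROR_MESSAGE
    FROM performance_schema.replication_connection_status;"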

So what’s next? In my “to do” list I’ve got:

– Evaluate performance impact on the slave as a new master is added. Today: 10, tomorrow: 50, next week: the www.

– See how the impact on the slave increases and hence server resources. CPU will be key I expect, due to the number of channels, but it will also depend on how frequently each master sends its own load of binlogs and how big they are. Network will be hard for me to look into here, as it’s all self-contained…

BRB.

MySQL Enterprise Monitor 3.0: viewing Query Analyzer for 5.5.x servers.

So, the good thing about MEM 3.0 is that it’s agentless, i.e. you don’t need an agent to use Query Analyzer data, see when performance is at its worst, and dive into the offending SQL statements and explain plans to see what’s happening.

That’s great; however, migrating to 5.6 isn’t always an easy road, and even if you’re doing so, there’s nearly always a time when you want to keep viewing things in 5.5.x and compare performance between the two.

The thing is, in order to see the Explain Plans we need 5.6.14 or later (and to set "UPDATE performance_schema.setup_consumers SET enabled = 'YES' WHERE name = 'events_statements_history_long';").
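
For reference, on a 5.6.14+ instance that’s just the following, run as a user with UPDATE on performance_schema (I believe there’s also a matching performance-schema-consumer startup option if you want it to survive restarts):

# Enable the long statement history consumer so Query Analyzer can pull example statements.
mysql -uroot -p -e "UPDATE performance_schema.setup_consumers
                    SET enabled = 'YES'
                    WHERE name = 'events_statements_history_long';"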

So, here’s how to do it:

Use the MEM 2.3 Agent & proxy.

It’s really that simple. How simple? (Ref.Man: Using the 2.3 Agent Proxy)

./mysqlmonitoragent-2.3.17.2217-linux-glibc2.3-x86-64bit-installer.bin

– Make sure you reply “yes” to “Proxy Enabled:” (default port 6446), and that “Backend Port:” is your MySQL Server’s port, 3306.

– Next it’s going to ask you for a user that can connect to the server on behalf of the agent, with the appropriate privileges, in order to retrieve all the info for the MEM 3 repository. I used “memagent23”. I pre-created this user as follows:

GRANT PROCESS ON *.* TO 'memagent23'@'127.0.0.1' IDENTIFIED BY 'password';
GRANT SELECT, INSERT, UPDATE ON `sakila`.* TO 'memagent23'@'127.0.0.1';
GRANT SELECT ON `mysql`.`inventory` TO 'memagent23'@'127.0.0.1';

BUT, I bet you’ve looked at MEM 3 before installing any agent, and if you have another instance on the same server already being monitored, you’ll have seen that it automatically detected the 5.5.x instance.

Did you click on ‘monitor unmonitored instance’? It doesn’t matter. If you did, no problem: at most it will have created the required mysql.inventory table, which we’d need to create anyway for earlier versions like 5.1.x.

Just delete this instance before we start up the proxied agent.

Now, let’s start the agent up, with the standard:

service mysql-monitor-agent start

In mysql-monitor-agent.log you will see what’s happening, and if there are any privilege errors, they’ll appear here.

If you’re on another server, you might want to test connectivity first, just in case firewalls are enforcing rules you’re unaware of.
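
A quick way to do that is to connect through the proxy port itself (a sketch; user and ports as set up above). If it answers, the agent proxy is up and forwarding to the backend:

# @@port should come back as the backend's port (3306), proving the proxy forwarded the connection.
mysql -umemagent23 -p -h127.0.0.1 -P6446 -e "SELECT @@hostname, @@port, @@version;"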

And depending on your setup, you should see your 5.6.x instances using the built-in 3.0 agent, and the 5.5.x using the 2.3 agent.

Time to show off:

– Execute your app (or, in my case, use Workbench and run some slow queries against the sakila tables) and make sure you’re connecting to the proxy port, 6446, so that you’re feeding the MEM repository.

– Go to the Query Analyzer tab, click on the 5.5.x server, and you should start to see all the SQL queries. Click on one, choose the Explain Plan tab, and hey presto, we’re focused on SQL tuning now.

I hope this has helped someone else.

 

MySQL Enterprise Backup 3.10: Teasing compression.

Ok, so I wanted to look into the new compression options of MEB 3.10.

And I would like to share my tests with you. Remember, they’re just that, tests, so please feel free to copy and paste, and obtain your own results, conclusions and, dare I say it, baselines, in order to compare future behaviour on your own system.

The environment: an Oracle Linux 6.3 virtual machine with 3 GB RAM and 2 virtual threads, on a quad-core Windows laptop. Not pretty, but hey.

So, these tests are solely about backup. I’ll do restore when I get some *more* time.

 

First up, let’s compare like with like, i.e. MEB versions 3.9 & 3.10:

Let’s make this interesting and use as many resources as possible: read, write and process threads, and the number of buffers.

mysqlbackup --user=root --password=oracle --socket=/tmp/mysql5614.sock \
--backup-dir=/home/mysql/MEB/test --with-timestamp --read-threads=4 --write-threads=4 \
--process-threads=4 --limit-memory=300 backup

              empty DB    1.2Gb    2.2Gb    2.6Gb
v3.9          0:14 min    2:37     4:26     6:04
v3.10         0:07        2:34     4:03     5:52

So, with a few more adjustments to use as much as possible, without compression, here’s testing 3.10:

mysqlbackup --user=root --password=oracle --socket=/tmp/mysql5614.sock \
--backup-dir=/home/mysql/MEB/test --with-timestamp --read-threads=2 --write-threads=2 \
--process-threads=8 --limit-memory=1280 --number-of-buffers=80 backup

This takes 5:20 mins. 2685708 Kb backup size.

Now onto compression.

And just to double-check, let’s see how compression behaves in v3.9:

mysqlbackup --user=root --password=oracle --socket=/tmp/mysql5614.sock \
--backup-dir=/home/mysql/MEB/test --backup-image=full_$BKUP_DATE.img --with-timestamp \
--read-threads=2 --write-threads=2 --process-threads=8 --limit-memory=1280 --number-of-buffers=80 \
--compress backup-to-image

6:31 min    867264 Kb (32% of original size, 68% compression)

 

Time for v3.10. Same options, just using the mysqlbackup binary in another location, /usr/local/meb310/bin/mysqlbackup:

4:28    LZ4 (the default) compression. 1271720 Kb (47% of original size, 53% compression)

Ok, now let’s use 3.10 properly:

mysqlbackup --user=root --password=oracle --socket=/tmp/mysql5614.sock \
--backup-dir=/home/mysql/MEB/test --backup-image=full_$BKUP_DATE.img --with-timestamp \
--read-threads=2 --write-threads=2 --process-threads=8 --limit-memory=1280 --number-of-buffers=80 \
--compress --compress-level=1 --compress-method=lzma backup-to-image

3:43    469916 Kb (82.6% compression).

changing it slightly, with:

--compress-level=9 --compress-method=lzma

4:35    2512022 Kb (compression 6.33%)

Ok, so now I feel I have something useful: compress-level=1 with lzma is the best option for my environment, taking ~83% less disk and finishing 1:37 min faster. This will obviously depend on the memory (number-of-buffers) available, and of course on the processing power and cores available. Not to mention that I expect a lot of you to have much better IO rates than me, so now it’s up to you to tease MEB compression…

Try it.. just for me.

--use-tts backup & restore

In addition to my recent post, I just had to go into using --use-tts for specific tables and selective backup sets.

As all my schemas were named employeesn, I thought it would be a good idea to run:

mysqlbackup -uroot --socket=/tmp/mysql.sock --backup-dir=/home/mysql/MEB/restore \
--with-timestamp --use-tts --include=employees* backup

as I want all the tables. If I only wanted a specific table, say ‘salaries’, I could have done:

mysqlbackup -uroot --socket=/tmp/mysql.sock --backup-dir=/home/mysql/MEB/restore \
--with-timestamp --use-tts --include=employees*\.*salaries backup

and back up just the ‘salaries’ tables, but from all the employeesn databases.

Then… be a mean little DBA:

drop database employees;
drop database employees10;
drop database employees11;
..
..
drop database employees2;
drop database employees20;
drop database employees3;
..
drop database employees9;

And now prepare my backup to be applied:

mysqlbackup --backup-dir=/home/mysql/MEB/restore/2014-02-04_23-09-42/ apply-log

(Note: there’s a little hint in the log output that says:

"mysqlbackup: INFO: Backup was originally taken with the --include regexp option".)

And now, to restore:

mysqlbackup --defaults-file=/home/mysql/MEB/restore/2014-02-04_23-09-42/backup-my.cnf -uroot -poracle \
--socket=/tmp/mysql.sock --backup-dir=/home/mysql/MEB/restore/2014-02-04_23-09-42 \
--datadir=/opt/mysql/5615/data copy-back

Sit back and watch all the schemas and tables be restored:

 mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
 mysqlbackup: INFO: Creating 14 buffers each of size 16777216.
140204 23:43:02 mysqlbackup: INFO: Copy-back operation starts with following threads
                1 read-threads    1 write-threads
140204 23:43:02 mysqlbackup: INFO: Creating table: employees.departments.
140204 23:43:03 mysqlbackup: INFO: Creating table: employees.dept_emp.
140204 23:43:04 mysqlbackup: INFO: Creating table: employees.dept_manager.
140204 23:43:04 mysqlbackup: INFO: Creating table: employees.employees.
140204 23:43:05 mysqlbackup: INFO: Creating table: employees.salaries.
140204 23:43:06 mysqlbackup: INFO: Creating table: employees.titles.
140204 23:43:06 mysqlbackup: INFO: Creating table: employees1.departments.
140204 23:43:07 mysqlbackup: INFO: Creating table: employees1.dept_emp.
140204 23:43:08 mysqlbackup: INFO: Creating table: employees1.dept_manager.
...
140205 00:01:04 mysqlbackup: INFO: Importing table: employees9.titles.
140205 00:01:09 mysqlbackup: INFO: Copy-back operation completed successfully.
140205 00:01:09 mysqlbackup: INFO: Finished copying backup files to '/opt/mysql/5615/data/'

mysqlbackup completed OK!

And that’s it! All restored.

And if we want to get creative with the backup options, not only can we get all the ‘salaries’ tables from all the employeesn databases, but we can also back up as many databases as we want, using the regexp notation, such that:

mysqlbackup -uroot -poracle --socket=/tmp/mysql.sock --backup-dir=/home/mysql/MEB/test/ \
--with-timestamp --use-tts --include=employees*\|sakila* backup

will work too.

Note: When taking these types of backups, please take into consideration the requirements.

MySQL Enterprise Backup: parallel config & backup/restore results.

In this post I go into some performance metrics and the time spent using MySQL Enterprise Backup instead of mysqldump, and see how far I could go with some parallel configuration.

Setup:

It’s on an old laptop:

–Ubuntu 12.04 LTS, 32bit Intel Pentium M 1.86Ghz, 2Gb
–Source disk:  internal 80Gb ATA ST9808211A
–Destination:  external 1Tb SAMSUNG HD103SI
–MySQL Enterprise Edition 5.6.15
–MySQL Enterprise Backup 3.9.0
–Employees sample database duplicated via MySQL Utilities 1.3.6 (on a Win7 PC) to generate a ~5Gb MySQL Server.
To simulate data size, I used mysqldbcopy:
mysqldbcopy --source=root:pass@host:3356 --destination=root:pass@host:3356 employees:employees1 \
employees:employees2 employees:employees3 employees:employees4 ... employees:employees18 \
employees:employees19 employees:employees20
But before we jump right in, how fast can I expect to go anyway? Simple, nothing fancy:
local disk:
dd if=/dev/zero of=/root/ddIOtest1 bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 41.0825 s, 26.1 MB/s
external HD disk:
dd if=/dev/zero of=/media/1T_iomega/ddIOtest1 bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 41.214 s, 26.1 MB/s
To external HD disk via symbolic link:
dd if=/dev/zero of=/home/mysql/MEB/test/ddIOtest1 bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 41.3047 s, 26.0 MB/s

Other dd tests were run: bs=300M count=3 @ 26.8 MB/s, bs=200M count=5 @ 26.7 MB/s, bs=100M count=10 @ 26.7 MB/s, bs=300M count=5 @ 26.1 MB/s.
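
If you want to reproduce those quickly, a small loop does it (a sketch; point it at your own target path and remove the test file afterwards):

# Repeat the raw-write baseline for a few block size / count combinations (1-3 GB written each time).
for spec in "1G 1" "300M 3" "200M 5" "100M 10" "300M 5"; do
    set -- ${spec}
    dd if=/dev/zero of=/media/1T_iomega/ddIOtest1 bs=$1 count=$2 2>&1 | tail -1
    rm -f /media/1T_iomega/ddIOtest1
done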

Ubuntu helped a little with some nice pretty graphs for the internal & external disk:

mysqldump:

So the times we’re looking at before we go Enterprise are:

Backup:

mysqldump -uroot -ppass -S /tmp/mysql.sock --all-databases --add-drop-database -r mysqldump_full.sql
start: Fri Jan  3 13:30:36 CET 2014
end:   Fri Jan  3 13:41:17 CET 2014
10:41 min.
$ du -sk     3453640 mysqldump_full.sql
5387 Kb/sec.

Restoring:

$ cp /usr/local/mysql/my.cnf mysqldump_full_test
$ cd mysqldump_full_test
$ vi my.cnf  
    (change all file paths, port numbers, change server-id (just in case) and add a new socket dir.)
$ cd /usr/local/mysql
$ scripts/mysql_install_db --defaults-file=/home/mysql/MEB/test/mysqldump_full_test/my.cnf \
--user=mysql --datadir=/home/mysql/MEB/test/mysqldump_full_test/datadir
$ mysqld_safe --defaults-file=/home/mysql/MEB/test/mysqldump_full_test/my.cnf --user=mysql \
--datadir=/home/mysql/MEB/test/mysqldump_full_test/datadir --socket=/tmp/mysql_mysqldump_restore.sock \
--port=13356
$ mysqladmin -uroot password 'pass' -S /tmp/mysql_mysqldump_restore.sock
$ mysql -uroot -ppass -S /tmp/mysql_mysqldump_restore.sock
mysql> show databases;
(To avoid the GTID error, you might have to execute "reset master;" before trying the restore.)

mysql -uroot -ppass -S /tmp/mysql_mysqldump_restore.sock < mysqldump_full.sql
start: Fri Jan  3 15:16:45 CET 2014
end:   Fri Jan  3 16:26:26 CET 2014
1:09:41 min. (1hr 9 mins 41 sec.) 4181 sec.
$ du -sk   4974081 (21x employee schema dirs)
1189.68 Kb / sec.

MySQL Enterprise Backup:

Now to use ‘mysqlbackup’:

mysqlbackup --user=root --socket=/tmp/mysql.sock --backup-dir=/home/mysql/MEB/test --with-timestamp \
 backup
And the results:
start: Tue Dec 31 13:09:05 CET 2013
end:   Tue Dec 31 13:13:56 CET 2013
5054864 2013-12-31_13-09-05
4:51 min.
17370.66666666667 Kb/sec.

Configuration and Performance Times:

Parallel Backup
By using the parallelization configuration we can improve backup performance significantly. The following are the default settings for MEB 3.9 (for more information, see the parallel backup description):
--read-threads=1 --process-threads=6 --write-threads=1 --limit-memory=300
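
To keep the timings comparable between runs, something like this works as a wrapper (a sketch; credentials, socket and backup dir as above, with the thread counts varied per row of the table below):

# Time one backup run with a given thread/buffer combination.
START=$(date +%s)
mysqlbackup --user=root --socket=/tmp/mysql.sock --backup-dir=/home/mysql/MEB/test --with-timestamp \
    --read-threads=4 --write-threads=4 --process-threads=4 --limit-memory=320 backup
END=$(date +%s)
echo "Elapsed: $((END - START)) sec"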

And again, the results:

read     process  write    limit    number of buffers  Backup size  Duration     Kb/sec
threads  threads  threads  memory   (16M each)         (Kb)         (hh:mm:ss)

1        6        1        300      14                 5054864      00:04:50     17430.57
2        3        2        300      10                 5054872      00:04:32     18584.09
3        6        3        600      18                 5054872      00:04:11     20138.93
3        3        3        300      12                 5054868      00:03:57     21328.56
4        4        4        300      16                 5054872      00:03:51     21882.56
4        4        4        320      16                 5054876      00:03:37     23294.35
4        4        4        640      16                 5054876      00:03:37     23294.35
4        8        4        640      20                 5054876      00:03:37     23294.35

Restoring:

Scenario specifics:

We want to restore a single table, Partial Recovery, not the whole server.

  • The table to be restored is on a running server and can’t be stopped.
  • No compression is being used.

What about having specified --use-tts at backup time?

  • Backups taken with this option are meant to transport the table to another, new environment, i.e. tables can’t be restored selectively from these types of backups.
  • The tables to be restored can’t exist on the destination server.
  • mysqlbackup is already flexible enough to be able to single out a table and restore it via the transportable tablespace definition, i.e. discard / import tablespace.

Converting the on-disk backup to image.

First convert to image, so that we can then extract the table, or apply-logs from incremental backups taken post-Full backup:

mysqlbackup --user=root --socket=/tmp/mysql.sock \ 
--backup-image=/home/mysql/MEB/test/2014-01-02_13-57-17.mbi \
--backup-dir=/home/mysql/MEB/test/2014-01-02_13-57-17 --read-threads=4 \
--write-threads=4 --process-threads=8 --limit-memory=320 backup-dir-to-image
Start: 140102 15:57:34
End:   140102 16:04:50
7:16 min.
5054052 /home/mysql/MEB/test/2014-01-02_13-57-17.mbi
11591.86 Kb/sec.

Or… we could have backed up directly to the single image file:

mysqlbackup --user=root --socket=/tmp/mysql.sock \
--backup-image=/home/mysql/MEB/test/full.mbi --backup-dir=/home/mysql/MEB/test/full \
--with-timestamp --read-threads=4 --write-threads=4 --process-threads=8 --limit-memory=320 \
backup-to-image
Start: 140102 14:43:01
End:   140102 14:47:43
4:42 min.
5054124 full.mbi
16735.50 Kb/sec

Now to restore…

Native Transportable tablespace

Knowing we’re going to use a transportable tablespace from the image backup file, and that the table has just lost some rows (if it had actually been dropped, we’d have to recover in another environment and then mysqldump it out).

First check the image file and list the table location:

mysqlbackup --user=root --socket=/tmp/mysql.sock \
--backup-image=/home/mysql/MEB/test/full.mbi --read-threads=4 --write-threads=4 \
--process-threads=8 --limit-memory=320 --sleep=0 list-image

Restore:

mysql> lock tables salaries write;
mysql> alter table salaries discard tablespace;
mysqlbackup --user=root --socket=/tmp/mysql.sock --backup-image=/home/mysql/MEB/test/full.mbi \
--read-threads=4 --write-threads=4 --process-threads=8 --limit-memory=320 --sleep=0 \
--src-entry=datadir/employees/salaries.ibd --dst-entry=/opt/mysql/5615/data/employees/salaries.ibd \
extract
Start: 140102 16:48:04
End:   140102 16:48:09
0:05 min.
143360 /opt/mysql/5615/data/employees1/salaries.ibd
28672.00 Kb/sec.
mysql> alter table salaries import tablespace;
mysql> desc salaries;
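
A quick sanity check once the tablespace is imported (the row count should now match what was in the backup):

# Verify the imported table is consistent and fully populated.
mysql -uroot -p -S /tmp/mysql.sock -e "SELECT COUNT(*) FROM employees.salaries; CHECK TABLE employees.salaries;"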

A Full Restore

Consider that the table has been deleted and we need to restore the full backup.

  • Restore the image backup to a new backup / datadir.
  • Start the restored environment with a modified my.cnf.
  • Export & import the desired data for the affected table into the working environment.
  • Carry on working with some minor disruption to the affected table.

The details:

mysqlbackup --user=root --password=oracle --socket=/tmp/mysql.sock \
--backup-image=/home/mysql/MEB/test/full.mbi --backup-dir=/home/mysql/MEB/test/full-img-restore \
--read-threads=4 --write-threads=4 --process-threads=8 --limit-memory=320 --sleep=0 \
image-to-backup-dir
start: Thu Jan 2 16:58:28 CET 2014
end:   Thu Jan 2 17:04:46 CET 2014
6:18 min.
5054972 full-img-restore
13372.94 Kb/sec.

Now, we can start up the restored env to use and abuse the salaries table we’re worried about:

$ cd /home/mysql/MEB/test/full-img-restore
$ vi server-my.cnf
  (change all paths: datadir, socket, pid_file, log_bin, log_bin_index, log_error, port, and report
  port if using replication.)
$ mysqld_safe --defaults-file=/home/mysql/MEB/test/full-img-restore/server-my.cnf --user=mysql \
--datadir=/home/mysql/MEB/test/full-img-restore/datadir -P13356 --socket=/tmp/mysql_restore.sock &
$ mysql -uroot -ppass -S /tmp/mysql_restore.sock
mysql> use employees
mysql> show tables like 'salaries';

Now to double check the instance where we’re missing the salaries table:

$ mysql -uroot -ppass -S /tmp/mysql.sock
mysql> use employees
mysql> show table status;
mysql> show create table salaries;
ERROR 1146 (42S02): Table 'employees.salaries' doesn't exist

So now to import / recreate the previously dropped ‘employees.salaries’ table, and check timings; after all, we are interested:

mysqldump -uroot -ppass -S /tmp/mysql_restore.sock --tables employees salaries | mysql -uroot -ppass \
-S /tmp/mysql.sock employees
Start: Fri Jan 3 13:16:38 CET 2014
End:   Fri Jan 3 13:19:37 CET 2014
2:59 min
143364 /opt/mysql/5615/data/employees/salaries.ibd
800.91 Kb/sec.

Checking & Alternatives

MySQL Utilities 1.3.6

Once the backup has been restored in another environment, use the MySQL Utilities ‘mysqldiff’ to see if anything is missing:

mysqldiff --server1=root:oracle@141.144.12.45:3356 --server2=root:oracle@141.144.12.45:13356  \
employees.salaries:employees.salaries

If the row count is different, mysqldiff doesn’t detect it; it only checks for object and structure differences.

# server1 on 141.144.12.45: ... connected.
# server2 on 141.144.12.45: ... connected.
# Comparing employees.salaries to employees.salaries         [PASS]
Success. All objects are the same.

Use the Utilities’ mysqldbcompare to get the exact changes and check data consistency. It saves having to restore the whole table:

mysqldbcompare --server1=root:oracle@141.144.12.45:3356 --server2=root:oracle@141.144.12.45:13356 \
employees:employees -a -d differ -vvv

mysqlfrm could also be used; however, be wary of auto-increment sequences & foreign keys.
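
For completeness, reading the table definition straight out of the restored backup files would look something like this (a sketch; the .frm path comes from the image-to-backup-dir restore above, and --port is a spare port for the temporary server mysqlfrm spawns):

# Recover the CREATE TABLE statement from the backed-up .frm file.
mysqlfrm --server=root:oracle@141.144.12.45:3356 --port=3310 \
    /home/mysql/MEB/test/full-img-restore/datadir/employees/salaries.frm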

Another option is to restore the .frm files from the backup image fileset into a separate datadir for that specific tablespace, and then use mysqldump | mysql to import the row data.

 Conclusion

mysqlbackup restores the single transportable tablespace at 28672.00 Kb/sec, whereas mysqldump does it at 800.91 Kb/sec: 35.79 times faster.

(In this scenario, a 4 column table with 2844047 rows).