MySQL TDE: Online key store migration

So, if we’re applying GDPR to our systems, and we’re already making use of MySQL Transparent Data Encryption (TDE) with the keyring, then here’s an example of how to migrate from the file-based keyring to the encrypted keyring. Online.

If you’re looking to go deeper into TDE, then I suggest reading the MySQL Server Team’s InnoDB Transparent Tablespace Encryption blog.

You’d already have your environment running, whereas I have to create one. Give me a minute, please: 8.0.12 here we come:

mysqld --defaults-file=my_okv.cnf --initialize-insecure --user=khollman
mysqld --defaults-file=my_okv.cnf --user=khollman &

mysql --defaults-file=my_okv.cnf -uroot
show plugins;
show variables like 'keyring%';
alter user 'root'@'localhost' identified by 'oracle';

create database nexus;
create table nexus.replicant (id INT(11) NOT NULL AUTO_INCREMENT ,
`First name` varchar(40) not null default '',
`Last name` varchar(40) not null default '',
`Replicant` enum('Yes','No') not null default 'Yes',
PRIMARY KEY (id)) engine=InnoDB row_format=COMPACT ENCRYPTION = 'Y';

INSERT INTO nexus.`replicant` (`First name`,`Last name`,`Replicant`) VALUES
('Roy','Hauer','Yes'), ('Rutger','Batty','Yes'), ('Voight','Kampff','Yes'), ('Pris','Hannah','Yes'), ('Daryl','Stratton','Yes'), ('Rachael','Young','Yes'), ('Sean','Tyrell','Yes'), ('Rick','Ford','No'), ('Harrison','Deckard','Yes');

select * from nexus.replicant;

Now we have an environment using the keyring file-based TDE.
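Before doing anything to it, it’s worth confirming the file-based keyring actually exists and is non-empty. A minimal shell sketch; the real path in this post is /opt/mysql/okv/keyring, simulated here with a temp file so the snippet is self-contained:

```shell
# Sanity check: the file-based keyring should exist and be non-empty
# before we attempt any migration. A temp file stands in for the real
# /opt/mysql/okv/keyring; its content here is simulated.
keyring=$(mktemp)
printf 'simulated keyring content' > "$keyring"
if [ -s "$keyring" ]; then state=present; else state=missing; fi
echo "$state"
rm -f "$keyring"
```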

Before migrating the key store, there are a few things we need to be aware of, as well as reading the manual on this topic:

  • mysqld. Yes, we start up another mysqld process, but it’s not a fully functioning server, far from it. It is just a means to migrate the keys from the old file-based keyring to the new encrypted file. So don’t worry about the defaults-file, the innodb_xxxx params nor anything else. We actually need to reuse the existing datadir.
  • datadir. As just mentioned, don’t try and use another datadir as it won’t find any files there to encrypt with the new key and the process won’t be successful. Use the existing online server datadir. (of course, I recommend this process be run in a non-production test environment first!)
  • --keyring-migration-source & --keyring-migration-destination. I think these are quite obvious: the plugin we’re migrating from, and the one we’re migrating to.
  • keyring_file_data is the existing file-based keyring being used.
  • keyring_encrypted_file_data & keyring_encrypted_file_password specify the location of the new encrypted keyring file and the password protecting it.
  • keyring-migration-* params. We need to connect to the existing instance with super-user privileges. As we’re local to the instance, we can use --keyring-migration-socket.


mysqld --basedir=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64 \
 --plugin-dir=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/lib/plugin \
 --lc_messages_dir=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/share \
 --datadir=/opt/mysql/okv/data \
 --keyring-migration-source=keyring_file.so \
 --keyring-migration-destination=keyring_encrypted_file.so \
 --keyring_file_data=/opt/mysql/okv/keyring \
 --keyring_encrypted_file_data=/opt/mysql/okv/keyring_enc \
 --keyring_encrypted_file_password=oracle2018 \
 --keyring-migration-socket=/opt/mysql/okv/mysql.socket \
 --keyring-migration-user=root \
 --keyring-migration-password=oracle

If, and only if, the migration is successful, you should see output like the following. Anything else (no output at all, or some of these lines missing in your scenario) means you should double-check the parameters in the previous command, as one of them is more than likely impeding a successful key migration:

2018-10-08T11:26:22.227161Z 0 [Note] [MY-010098] [Server] --secure-file-priv is set to NULL. Operations related to importing and exporting data are disabled
2018-10-08T11:26:22.227219Z 0 [Note] [MY-010949] [Server] Basedir set to /usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/.
2018-10-08T11:26:22.227226Z 0 [System] [MY-010116] [Server] mysqld (mysqld 8.0.12-commercial) starting as process 13758
2018-10-08T11:26:22.254234Z 0 [Note] [MY-011085] [Server] Keyring migration successful.
2018-10-08T11:26:22.254381Z 0 [Note] [MY-010120] [Server] Binlog end
2018-10-08T11:26:22.254465Z 0 [Note] [MY-010733] [Server] Shutting down plugin 'keyring_encrypted_file'
2018-10-08T11:26:22.254642Z 0 [Note] [MY-010733] [Server] Shutting down plugin 'keyring_file'
2018-10-08T11:26:22.255757Z 0 [System] [MY-010910] [Server] mysqld: Shutdown complete (mysqld 8.0.12-commercial) MySQL Enterprise Server - Commercial.
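Rather than eyeballing the output, a script can check the migration run for the MY-011085 “Keyring migration successful” note. A small sketch; the sample log line is taken from the output above, and in practice you would capture mysqld’s stderr:

```shell
# Check a key-migration run's output for the success note (MY-011085).
# The sample line below is copied from the log above; in real use,
# capture the migration mysqld's output into $log instead.
log='2018-10-08T11:26:22.254234Z 0 [Note] [MY-011085] [Server] Keyring migration successful.'
if printf '%s\n' "$log" | grep -q 'Keyring migration successful'; then
  status=ok
else
  status=failed
fi
echo "$status"
```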



To make sure the instance has the new parameters in the defaults file, and before any risk of restarting the instance, we’ll need to add the new ‘encrypted’ params to the my.cnf:

plugin_dir =/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/lib/plugin
#keyring_file_data =/opt/mysql/okv/keyring
keyring_encrypted_file_data =/opt/mysql/okv/keyring_enc
keyring_encrypted_file_password =oracle2018
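The same my.cnf edit can be done non-interactively: comment out the old file-based keyring line and append the encrypted-keyring parameters. A sketch, with a temp file standing in for the real my.cnf:

```shell
# Comment out the old file-based keyring parameter and append the new
# encrypted-keyring parameters, as in the my.cnf fragment above.
# A temp file stands in for the real defaults file.
cnf=$(mktemp)
printf 'keyring_file_data =/opt/mysql/okv/keyring\n' > "$cnf"
sed -i 's/^keyring_file_data/#keyring_file_data/' "$cnf"   # keep old line for rollback
{
  printf 'keyring_encrypted_file_data =/opt/mysql/okv/keyring_enc\n'
  printf 'keyring_encrypted_file_password =oracle2018\n'
} >> "$cnf"
result=$(cat "$cnf")
echo "$result"
rm -f "$cnf"
```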


And upon the next most convenient / least inconvenient moment, restart the instance:

mysqladmin --defaults-file=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/my_okv.cnf -uroot -poracle shutdown
mysqld --defaults-file=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/my_okv.cnf --user=khollman &

And let’s double check which keyring plugin we’re using:

select * from information_schema.plugins where plugin_name like '%keyring%' \G
*************************** 1. row ***************************
PLUGIN_NAME: keyring_encrypted_file
PLUGIN_AUTHOR: Oracle Corporation
PLUGIN_DESCRIPTION: store/fetch authentication data to/from an encrypted file
1 row in set (0,00 sec)

And also that we can select the data from the encrypted tablespace:

select * from nexus.replicant;
| id | First name | Last name | Replicant |
| 1 | Roy         | Hauer     | Yes       |
| 2 | Rutger      | Batty     | Yes       |
| 3 | Voight      | Kampff    | Yes       |
| 4 | Pris        | Hannah    | Yes       |
| 5 | Daryl       | Stratton  | Yes       |
| 6 | Rachael     | Young     | Yes       |
| 7 | Sean        | Tyrell    | Yes       |
| 8 | Rick        | Ford      | No        |
| 9 | Harrison    | Deckard   | Yes       |
9 rows in set (0,00 sec)


Seems quite straightforward.

Well, how about, in a test environment, changing the keyring_encrypted_file_password value to something different, restarting the instance, and running the same select on the same table?

Hey presto:

select * from nexus.replicant;
ERROR 3185 (HY000): Can't find master key from keyring, please check in the server log if a keyring plugin is loaded and initialized successfully.
Error (Code 3185): Can't find master key from keyring, please check in the server log if a keyring plugin is loaded and initialized successfully.
Error (Code 1877): Operation cannot be performed. The table 'nexus.replicant' is missing, corrupt or contains bad data.


Hope this helps someone out there. Enjoy encrypting!

Now we can run encrypted backups safely, without worrying about moving those files around between different systems.


Upgrading MySQL to 8.0.12 with Audit plugin.

As a spin-off from the previous post, I thought it would be good to see how well the Audit plugin upgrades to MySQL 8. The big change in auditing is that the tables change from MyISAM to InnoDB, so keep your eyes open.

I’m reusing the instance from the previous post, running version 5.7.18.


Before we do anything, let’s make sure auditing will be in place when we restart the instance with 8.0.12:

Uncomment the plugin-load & audit-log params we had originally commented out. After all, this is something we should have done in the last post (apologies!):

vi my_audit.cnf:

Restart the 5.7 instance so we upgrade from a rebooted / ‘as real as can be expected’ scenario:

bin/mysqladmin --defaults-file=my_audit.cnf -uroot shutdown
bin/mysqld --defaults-file=my_audit.cnf &

With the tail of the audit.log still running, login again as the audit_test_user:

bin/mysql -uaudit_test_user -paudittest123 -S /opt/mysql/audit/mysql_audit.sock

Observe that the login is being audited in our audit.log tail:

SELECT * from mysql.audit_log_user;
SELECT * from mysql.audit_log_filter;
SELECT @@audit_log_filter_id;

Let’s check the instance with the upgrade checks:

Run the check-upgrade script:

mysqlcheck -S /opt/mysql/audit/mysql_audit.sock -uroot --all-databases --check-upgrade

Check for partitioned tables:

SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE ENGINE NOT IN ('innodb', 'ndbcluster')
AND CREATE_OPTIONS LIKE '%partitioned%';

Check for table names:


If you find anything, then now’s the moment to change the names, partitions, etc.

Shutdown the instance:

bin/mysqladmin --defaults-file=my_audit.cnf -uroot shutdown



Change to 8.0.12 binaries.

Copy & edit the my.cnf.

cp /usr/local/mysql/mysql-advanced-5.7.18-linux-glibc2.5-x86_64/my_audit.cnf /usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/my_audit80.cnf

Get rid of query_cache & sql_mode params from the my.cnf, and adjust the audit log params:

vi my_audit80.cnf
   basedir =/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64
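The removal of the 5.7-only parameters can also be scripted: the query cache is gone in 8.0, so any query_cache* lines (and, here, the sql_mode line) must go before starting the new binaries. A sketch; a temp file stands in for my_audit80.cnf and its contents are illustrative:

```shell
# Strip 5.7-only parameters from the copied defaults file before the
# 8.0.12 binaries read it. The file and its lines are illustrative
# stand-ins for my_audit80.cnf.
cnf=$(mktemp)
cat > "$cnf" <<'EOF'
query_cache_size =0
sql_mode =NO_ENGINE_SUBSTITUTION
basedir =/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64
EOF
sed -i '/^query_cache/d; /^sql_mode/d' "$cnf"
result=$(cat "$cnf")
echo "$result"
rm -f "$cnf"
```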


Start the instance with the 8.0.12 binaries:

bin/mysqld --defaults-file=my_audit80.cnf &

And time to upgrade:

bin/mysql_upgrade -S /opt/mysql/audit/mysql_audit.sock -uroot
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Upgrading system table data.
Checking system database.
mysql.audit_log_filter OK
mysql.audit_log_user OK
mysql.columns_priv OK
mysql.component OK
mysql.db OK
mysql.default_roles OK
mysql.engine_cost OK
mysql.func OK
mysql.general_log OK
mysql.global_grants OK
mysql.gtid_executed OK
mysql.help_category OK
mysql.help_keyword OK
mysql.help_relation OK
mysql.help_topic OK
mysql.innodb_index_stats OK
mysql.innodb_table_stats OK
mysql.ndb_binlog_index OK
mysql.password_history OK
mysql.plugin OK
mysql.procs_priv OK
mysql.proxies_priv OK
mysql.role_edges OK
mysql.server_cost OK
mysql.servers OK
mysql.slave_master_info OK
mysql.slave_relay_log_info OK
mysql.slave_worker_info OK
mysql.slow_log OK
mysql.tables_priv OK
mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.user OK
Found outdated sys schema version 1.5.1.
Upgrading the sys schema.
Checking databases.
nexus.replicant OK
sys.sys_config OK
Upgrade process completed successfully.
Checking if update is needed.

Ok, all ok.

Double checking:

ls -lrt /opt/mysql/audit/data/*upg*
-rw-rw-r-- 1 khollman khollman 6 ago 22 19:58 /opt/mysql/audit/data/mysql_upgrade_info

cat /opt/mysql/audit/data/mysql_upgrade_info


Let’s check the audit plugin status:

bin/mysql -uroot -S /opt/mysql/audit/mysql_audit.sock

| audit_log | ACTIVE |

SELECT @@audit_log_filter_id;

SELECT * from mysql.audit_log_user;
| USER            | HOST      | FILTERNAME     |
| %               |           | log_connection |
| audit_test_user | localhost | log_connIUD    |
| root            | localhost | log_nothing    |

SELECT * from mysql.audit_log_filter;

Here we see the table is now InnoDB:

show create table mysql.audit_log_user;

Now that we see everything is as it was, we suddenly remember seeing something about upgrading in the install section of the manual:


As of MySQL 8.0.12, for new MySQL installations, the USER and HOST columns in the audit_log_user table used by MySQL Enterprise Audit have definitions that better correspond to the definitions of the User and Host columns in the mysql.user system table. For upgrades to an installation for which MySQL Enterprise Audit is already installed, it is recommended that you alter the table definitions as follows:

So we run the following commands:

ALTER TABLE mysql.audit_log_user
DROP FOREIGN KEY audit_log_user_ibfk_1;
ALTER TABLE mysql.audit_log_filter
CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_as_ci;
ALTER TABLE mysql.audit_log_user
CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_as_ci;
ALTER TABLE mysql.audit_log_user
MODIFY COLUMN USER VARCHAR(32);
ALTER TABLE mysql.audit_log_user
ADD FOREIGN KEY (FILTERNAME) REFERENCES mysql.audit_log_filter (NAME);

Let’s tail the audit.log, which hasn’t had anything written to it since we started the upgrade.

tail -100f /opt/mysql/audit/data/audit.log

Now to login as our audit_test_user user.

bin/mysql -uaudit_test_user -paudittest123 -S /opt/mysql/audit/mysql_audit.sock nexus

And run some I/U/D:

DELETE FROM replicant where `First name`='Rick';
UPDATE replicant set `Replicant` = 'No' where `First name` = 'Harrison';
INSERT INTO replicant (`First name`,`Last name`,`Replicant`) VALUES ('Rick','Ford','No');
UPDATE replicant set `Replicant` = 'Yes' where `First name` = 'Harrison';

exit, and watch it all being logged, except the SELECTs.

A “status;” shows us:

bin/mysql Ver 8.0.12-commercial for linux-glibc2.12 on x86_64 (MySQL Enterprise Server - Commercial)

Connection id: 18
Current database: 
Current user: audit_test_user@localhost
SSL: Not in use
Current pager: stdout
Using outfile: ''
Using delimiter: ;
Server version: 8.0.12-commercial MySQL Enterprise Server - Commercial


Upgraded! With Auditing in place.


Get the Auditors in: MySQL Enterprise Audit.

Here I have been looking into using the MySQL Enterprise Edition Audit Log plugin for 5.7. We have many options to audit (filters, encryption, compression, Workbench, rotation & purging, viewing the log, etc.) and, when active, it’s quite clear-cut what we are and aren’t auditing.

If you’re looking to go deep into the Audit Plugin, as part of the Enterprise Edition, you’ll want to look at the following Support note:

Master Note for MySQL Enterprise Audit Log Plugin (Doc ID 2299419.1)

And if you’re looking for other Audit Plugin examples, I’d recommend Tony Darnell’s blog post:



Venturing onwards, have a read of the install (or upgrade) steps:

and then what a “filter” is:

That said, although I started with a new install, it’s more than likely you won’t. So let’s install the plugin accordingly.

Remember, the Audit Plugin is only available with the Enterprise Edition binaries. So we will need to download the MySQL Server from the Oracle Software Delivery Cloud, or from My Oracle Support, “Patches & Updates”.

Prepare the env

mkdir -p /opt/mysql/audit

ls -lrt /usr/local/mysql/mysql-advanced-5.7.18-linux-glibc2.5-x86_64
ls -lrt /usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64

(we’ll use the 8.0.12 binaries later)

cd /usr/local/mysql/mysql-advanced-5.7.18-linux-glibc2.5-x86_64

Edit the my.cnf commenting out the audit log params which we will use later.

vi my_audit.cnf
basedir =/usr/local/mysql/mysql-advanced-5.7.18-linux-glibc2.5-x86_64

Initialize & startup

bin/mysqld --defaults-file=my_audit.cnf --initialize-insecure

Yes, using --initialize-insecure defeats the whole point of auditing, but here we’re testing. I expect the environment you’ll be using already has some minimum security in place.

bin/mysqld --defaults-file=my_audit.cnf &
bin/mysql --defaults-file=my_audit.cnf -uroot


SELECT @@audit_log_filter_id;

To install the audit log plugin, we have to run:

bin/mysql -uroot -S /opt/mysql/audit/mysql_audit.sock < /usr/local/mysql/mysql-advanced-5.7.18-linux-glibc2.5-x86_64/share/audit_log_filter_linux_install.sql

Now in another window:

tail -100f /opt/mysql/audit/data/audit.log


Confirm which version has the audit_log tables in InnoDB or MyISAM (the latter won’t work with Group Replication / InnoDB Cluster for obvious reasons):
– 5.7.18 -> MyISAM
– 8.0.12 -> InnoDB

bin/mysql -uroot -S /opt/mysql/audit/mysql_audit.sock
show create table mysql.audit_log_user;


Auditing time has come

Check if any user account is being audited:

SELECT * from mysql.audit_log_user;

And what filter, if any, is active:

SELECT @@audit_log_filter_id;

As we haven’t created anything yet, it’s all empty.

Now to create a user to audit (Thanks Tony Darnell!):

CREATE USER 'audit_test_user'@'localhost' IDENTIFIED BY 'audittest123';
GRANT ALL PRIVILEGES ON *.* TO 'audit_test_user'@'localhost';

Create an audit filter to log only the connections of the previously created user:

SELECT audit_log_filter_set_filter('log_connection', '{ "filter": { "class": { "name": "connection" } } }');


SELECT audit_log_filter_set_filter('log_connection', '{ "filter": { "log": false ,"class": { "log": true, "name": "connection" } }}');

And assign the filter just created to the user account we want to audit:

SELECT audit_log_filter_set_user('audit_test_user@localhost', 'log_connection');

Make sure that all auditing changes have been committed and set in proverbial stone:

SELECT audit_log_filter_flush()\G

So what filter did we create or do we have?

SELECT * from mysql.audit_log_filter;

Now login with that user and run some SQL, whilst another window has a tail -100f running on the audit.log:

bin/mysql -uaudit_test_user -paudittest123 -S /opt/mysql/audit/mysql_audit.sock

SELECT @@audit_log_filter_id;
SELECT * from mysql.audit_log_user;

Now exit and reconnect to see in the tail of the audit.log the disconnect & connect.
In the window with a “tail -100f audit.log” we will only see:

<TIMESTAMP>2018-08-21T14:58:34 UTC</TIMESTAMP>

but no sql being audited.
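The audit.log records are XML, so a tail can be boiled down to just the timestamps with a little sed. A sketch, using the sample record shown above:

```shell
# Extract the <TIMESTAMP> values from audit.log records. The sample
# record below is the one shown above; in practice pipe in
# `tail -100f /opt/mysql/audit/data/audit.log` instead.
sample='<TIMESTAMP>2018-08-21T14:58:34 UTC</TIMESTAMP>'
ts=$(printf '%s\n' "$sample" | sed -n 's:.*<TIMESTAMP>\(.*\)</TIMESTAMP>.*:\1:p')
echo "$ts"
```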

Let’s create a filter just for SQL queries, without logging connections, as we already know how to create a filter for that:

SELECT audit_log_filter_set_filter('log_sql', '{ "filter": { "log": true ,"class": { "log": false, "name": "connection" } }}');
SELECT audit_log_filter_set_user('audit_test_user@localhost', 'log_sql');
SELECT audit_log_filter_flush()\G

Run some selects on any table to view the result in the audit.log.

Now activate logging of I/U/D but not for Selects / Reads:

SELECT audit_log_filter_set_filter('log_IUD', '{
  "filter": {
    "class": {
      "name": "table_access",
      "event": {
        "name": [ "insert", "update", "delete" ]
      }
    }
  }
}');
Let’s apply it to the user:

SELECT audit_log_filter_set_user('audit_test_user@localhost', 'log_IUD');
SELECT audit_log_filter_flush()\G

Let’s confirm the user has the new filter applied:

SELECT * from mysql.audit_log_user;

Let’s create a table and test some I/U/D:

create database nexus;
use nexus;
create table replicant (
`First name` varchar(40) not null default '',
`Last name` varchar(40) not null default '',
`Replicant` enum('Yes','No') not null default 'Yes'
) engine=InnoDB row_format=COMPACT;
INSERT INTO `replicant` (`First name`,`Last name`,`Replicant`) VALUES
('Roy','Hauer','Yes'), ('Rutger','Batty','Yes'), ('Voight','Kampff','Yes'), ('Pris','Hannah','Yes'), ('Daryl','Stratton','Yes'), ('Rachael','Young','Yes'), ('Sean','Tyrell','Yes'), ('Rick','Ford','No'), ('Harrison','Deckard','Yes');
DELETE FROM replicant where `First name`='Rick';
UPDATE replicant set `Replicant` = 'No' where `First name` = 'Harrison';
INSERT INTO replicant (`First name`,`Last name`,`Replicant`) VALUES ('Rick','Ford','No');
UPDATE replicant set `Replicant` = 'Yes' where `First name` = 'Harrison';

Create a filter for both login connections & I/U/D actions:

SELECT audit_log_filter_set_filter('log_connIUD', '{
  "filter": {
    "class": [
      { "name": "connection" },
      { "name": "table_access",
        "event": {
          "name": [ "insert", "update", "delete" ]
        }
      }
    ]
  }
}');
Apply it / make it stick and then confirm:

SELECT audit_log_filter_set_user('audit_test_user@localhost', 'log_connIUD');
SELECT audit_log_filter_flush()\G
SELECT * from mysql.audit_log_user;

Now re-run the I/U/D & exit/connect and view the audit.log.

Upon assigning a filter to a specific account (user+host), the previous filter is automatically replaced.
So let’s apply the log_connection filter to all users, i.e. “%”:

SELECT audit_log_filter_set_filter('log_connection', '{ "filter": { "class": { "name": "connection" } } }');
SELECT audit_log_filter_set_user('%', 'log_connection');
SELECT audit_log_filter_flush()\G
SELECT * from mysql.audit_log_user;

Although we have just assigned the log_connection filter to all users, the audit_test_user has the log_IUD filter assigned specifically, which means that no logins for this user are being recorded in the audit.log. We would have to use the log_connIUD filter for that.
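The precedence rule just described can be sketched as a hypothetical lookup: a specific row in mysql.audit_log_user wins, and only accounts without one fall back to the “%” filter. This mimics the behaviour; it is not the server’s actual implementation:

```shell
# Hypothetical sketch of audit filter precedence: an account-specific
# assignment beats the "%" default. Mimics the behaviour described
# above; not the server's real lookup code.
resolve_filter() {
  case "$1" in
    audit_test_user@localhost) echo log_IUD ;;         # specific assignment wins
    *)                         echo log_connection ;;  # falls back to "%"
  esac
}
resolve_filter audit_test_user@localhost
resolve_filter some_other_user@localhost
```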

Maybe we don’t want to log anything for the root user, so we can remove root from the log_connection filter:

SELECT audit_log_filter_remove_user('root@localhost');
SELECT audit_log_filter_flush()\G
SELECT * from mysql.audit_log_user;

If we log off and log back on, we’ll observe that the root user’s removal only lasts for that session: once root logs on again, logging is re-enabled via the generic filter.

Given that the previous change is only per session, we’ll now create a “log_nothing” filter and apply it to the user accounts for which we don’t want anything logged:

SELECT audit_log_filter_set_filter('log_nothing', '{ "filter": { "log": false } }');
SELECT audit_log_filter_set_user('root@localhost', 'log_nothing');
SELECT audit_log_filter_flush()\G
SELECT * from mysql.audit_log_user;

Try logging on:

mysql -uroot -S /opt/mysql/audit/mysql_audit.sock

and view the audit.log tail output. “root” is no longer being logged.


I hope this has helped give an insight into some examples of how to audit MySQL. There are many, many more possibilities, e.g. no logging for DDL, or logging just specific tables and/or schemas. It is entirely up to you what you log… or not.

Happy auditing!


Going to Oracle MySQL Cloud via Enterprise Backup.

First of all, if you want a guide on how to deploy MySQL in Oracle Cloud, then the best place to start is here.

So, what are we going to do here?

  • Create an Oracle Storage container to keep our backups safe.
  • Run a MySQL Enterprise Backup from on-premise 5.7 MySQL (GPL or Enterprise) streaming to our Oracle Storage container.
  • Create an Oracle Cloud MySQL instance initiated from the previously backed up instance.

What data do we have:

  • An e-mail address that you signed up to Oracle MySQL Cloud Service with.
  • An IdentityDomain.
  • You might also have a Data Centre assigned, as this can be chosen when you sign up.

And now, let’s go through how to get our data into the cloud and into a running MySQL instance:

To make things simpler, let’s use curl to create the Oracle Storage container (all this can be done via the Storage console page if needed):

We need to get authenticated first, for security reasons (the Auth token lasts 30 minutes, so expect to revisit this part if we take too long between steps):

 $ curl -v -X GET \
  -H 'X-Storage-User:' \
  -H 'X-Storage-Pass: password' \
 < HTTP/1.1 200 OK
 < X-Auth-Token: AUTH_tk16d300f6b511956328c6a66dc2921267
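Since later calls reuse that token, it’s handy to capture it from the response headers rather than copy-pasting it. A sketch; the header line is the sample from above (real tokens differ and expire in about 30 minutes):

```shell
# Capture the X-Auth-Token value from the auth response headers so the
# later curl calls can reuse it. The header line is the sample token
# from this post; in real use, pipe in `curl -si ...` output.
headers='X-Auth-Token: AUTH_tk16d300f6b511956328c6a66dc2921267'
token=$(printf '%s\n' "$headers" | sed -n 's/^X-Auth-Token: //p')
echo "$token"
```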

Create a container called “5718_upload”:

curl -v -X PUT \
 -H "X-Auth-Token: AUTH_tk16d300f6b511956328c6a66dc2921267" \
 < HTTP/1.1 201 Created

List contents:

$ curl -v -s -X GET \
  -H 'X-Auth-Token: AUTH_tk16d300f6b511956328c6a66dc2921267' \
 < HTTP/1.1 204 No Content


Run an online backup (MEB version 4) of our 5.7.18 instance to cloud storage:

mysqlbackup --user=root --socket=/opt/mysql/5718/mysql_5718.sock \
 --cloud-service=openstack --cloud-container=5718_upload \
 --cloud-ca-info=/etc/ssl/certs/ca-certificates.crt \
 --cloud-object=5718_full_meb_backup.mbi \
 --cloud-password=password \
 --cloud-tempauth-url= \
 --backup-dir=/tmp/5718backup/cloud \
 --compress=true --backup-image=- backup-to-image

(depending on your distro, the cloud-ca-info path might be different)

List contents again:

curl -v -s -X GET \
 -H 'X-Auth-Token: AUTH_tk16d300f6b511956328c6a66dc2921267' \


Now to create the cloud instance: from the Oracle Cloud home page -> “Sign In”.

My account is what’s known as an Oracle Cloud Infrastructure Classic (OCI Classic) account. But it all depends on when you signed up, what other cloud services you may have, etc.
All the following info you will get in an e-mail when you sign up for Oracle Cloud:

select account type -> “Traditional Cloud Account”
select Data Center -> “US Commercial 2 (us2)”
click on “My Services” button.

Now to enter some specific info:

Enter your Identity Domain -> “mysqltest”
click on “Go”

User Name -> “
Password -> “password”
click on “Sign in”

Now you’ll see the Dashboard with all your cloud resources available.

There are some helpful rectangles / buttons “Guided Journey“, “Create Instance“, “Account Management” & “Customize Dashboard“.
Below that, you’ll see a summary of your “Cloud Services“.
Then, beneath that, you’ll see the services themselves, i.e. “MySQL“, “Compute Classic“, etc.
If you click on the “MySQL” wording, it will take you to an overview of all your information. This can be useful, eg. for the Public IP Address needed later on for ssh.
So, either on that screen we can click on “Open Service Console”, or, on the previous screen, click on the “Action Menu” (the little button of 4 horizontal lines) and then choose the same: “Open Service Console”.


Now, we’ll see the different options available for our “Oracle MySQL Cloud Service“.
First you’ll see the “Instances“, “Activity“, “SSH Access” & “IP Reservations“. I’ll let you look into each one of those at your own pace, as what we want to do now is go direct to the “Create Instance” button on the right.

The “Create New Instance” page.

                             Instance – Details – Confirm

Here we have to specify the Instance Name and Region. The others should be automatically selected or are optional:
Instance Name -> “MySQL57”
Notification Email
Region -> “uscom-east-1” (chosen from the options in the dropdown)
IP Network -> “No Preference” (this option appears once we’ve chosen the Region.)

click on “Next”

“Service Details”

                             Instance – Details – Confirm

Here we have to specify how to create our instance from the backup we previously did.

First, remember how large the original environment was and this will help us to choose the “Compute Shape” that we need.

Then, we need a “SSH Public Key”.

Edit -> (a window opens up: “SSH Public Key for VM Access”. Here we have different options; I’m choosing “Create New Key”.)
click on “Enter”
Once the key pair is created we can now download the resulting zip file with the private & public keys. KEEP THESE SAFE. We will be needing these.
click on “Download”
save them somewhere safe.
click on “Done”
This closes the small pop up window, and now we can carry on sizing the environment according to our needs.
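One gotcha with the downloaded key pair: ssh refuses a private key that other users can read. So, once unzipped, restrict its permissions. A sketch, with a temp file standing in for the real privateKey file:

```shell
# ssh requires the private key to be readable only by its owner, so
# restrict permissions after unzipping the downloaded key pair.
# A temp file stands in for the real privateKey file.
key=$(mktemp)
chmod 600 "$key"
perms=$(stat -c '%a' "$key" 2>/dev/null || stat -f '%Lp' "$key")  # Linux / BSD stat
echo "$perms"
rm -f "$key"
```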

MySQL Configuration
Usable Database Storage (GB) -> Set the size of the storage needed for the database instance.
Password -> (We need to provide a good password, eg. “Oracle20!8” but it doesn’t matter as we’ll be using the existing passwords for our instance)

Backup and Recovery Configuration
Cloud Storage Container -> (this should be filled out automatically; we will want to change it to point to our “5718_upload” container)
Username -> “
Password -> “password”
Create Cloud Storage Container -> UNCHECK THIS! We’ve already created ours.

Initialize Data From Backup
Create Instance from Existing Backup -> “Yes”
Backup File Cloud Storage Path -> “
Username -> “
Password -> “password”

click on “Next”


                             Instance – Details – Confirm

Here we get a summary of what we’ve entered.
It has all been validated, i.e. if there’s an error somewhere, i.e. the password isn’t strong enough, the backup path is wrong, or a username doesn’t exist, then you’ll know so.

click on “Create”

You might get a window saying “Create Service Request” with just one option saying “OK”.
This is ok.

If you try to click on “Create” again, it will fail saying that the service already exists.
You can click on “Cancel” and it will take us back to the Summary page.
We can see that for the instance we’ve just created it will say “Status: Creating service …”
Then click on the name of the instance you gave, in my case “MySQL57”.
On the next page we’ll see the “Instance Overview” page.
Now click on the small arrow in the box under “In-Progress Operation Messages” to view the creation log.

Now, we have to get the IP address from the instance overview.

Connecting & confirming the instance has been restored

Let’s see what’s in the cloud instance:

$ ssh -i /path/2/private/key/previously/downloaded opc@123.456.78.90

Once connected:

$ sudo su - oracle
$ mysql -uroot -p (use the password from your backed up instance)

And now for our data:

mysql> show databases;
mysql> select user, host from mysql.user;

It should be self-explanatory, me thinks.

You could also connect in with MySQL Workbench. Here are the instructions on how to connect and remotely administer your cloud instance.

Hope this has helped!



InnoDB Cluster: setting up Production… for disaster! (2/2)

Ok, so now we’ve got our InnoDB Cluster a-clustering, MySQL Router a-routing, and now we need some disaster to be a-disaster-recovering…

A foreword first.

If you’re looking to use Enterprise Backup to recover a single node and restore that node back into an existing InnoDB Cluster, LeFred takes you through that one nicely here.

Preparing for backup

On our single primary server, the one that allows write, which was ic2/ in my case:

mysql -uroot -poracle << EOF
SET sql_log_bin = OFF;
create user 'backup'@'%' identified by 'oracle';
grant all on *.* to 'backup'@'%';
SET sql_log_bin = ON;
EOF

Let’s create something to backup (if you haven’t already done so of course):

mysqlsh --uri root:oracle@localhost:3306 --sql

create database nexus;

Let’s run a full backup to start with:

mysqlbackup -ubackup -poracle --socket=/var/lib/mysql/mysql.sock \
--backup-dir=/home/vagrant/backup/full \
--backup-image=full_backup.img  backup-to-image

ls /home/vagrant/backup/full/full_backup.img

mysqlbackup --backup-image=/home/vagrant/backup/full/full_backup.img validate

Ok, everything ok.

Enterprise Backup needs to create 2 tables, in CSV format, which is why we commented out the disabled_storage_engine parameter in the my.cnf before. If we hadn’t commented it out, there would be warnings saying that these tables can’t be created, and hence, we’d need to use end_lsn in the incremental backup instead of “history:last_backup”. A more manual process.
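If the backup_history table isn’t available, the fallback mentioned above is anchoring the incremental on an explicit LSN instead of “history:last_backup”. That end LSN can be read from the previous backup’s meta files. A sketch; the file name and value here are illustrative stand-ins:

```shell
# Read the previous backup's end LSN so the next incremental can be
# anchored on it instead of history:last_backup. The line below is an
# illustrative stand-in for a line from the backup's meta files.
meta=$(mktemp)
printf 'end_lsn=29887493\n' > "$meta"
end_lsn=$(sed -n 's/^end_lsn=//p' "$meta")
echo "$end_lsn"
rm -f "$meta"
```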

Let’s create something to use our incremental backup:

create table nexus.replicant (
`First name` varchar(40) not null default '',
`Last name` varchar(40) not null default '',
`Replicant` enum('Yes','No') not null default 'Yes'
) engine=InnoDB row_format=COMPACT;

Let’s back it up:

mysqlbackup --user=backup -poracle --socket=/var/lib/mysql/mysql.sock \
 --incremental=optimistic \
--incremental-base=history:last_backup \
 --backup-dir=/home/vagrant/backup/incre \
 --backup-image=incr_backup1.mbi  backup-to-image

And validate it too:

mysqlbackup --backup-image=/home/vagrant/backup/incre/incr_backup1.mbi validate


Disaster Time

On all 3 nodes, it’s time to create a disaster:

sudo -i
systemctl stop mysqld.service
rm -rf /var/lib/mysql/*
rm -rf /var/log/mysqld.log


Now, on ic2, where we took the backup (from the master), restore the full backup:

mysqlbackup --defaults-file=/etc/my.cnf \
 --backup-image=/home/vagrant/backup/full/full_backup.img \
 --backup-dir=/tmp/mysql_backup_pitr --datadir=/var/lib/mysql \
 --with-timestamp copy-back-and-apply-log

And then, restore the incremental:

mysqlbackup --defaults-file=/etc/my.cnf \
 --backup-image=/home/vagrant/backup/incre/incr_backup1.mbi \
 --incremental-backup-dir=/tmp/mysql_backup_pitr_incr \
 --datadir=/var/lib/mysql --incremental --with-timestamp \
 copy-back-and-apply-log

Still on ic2 (from where the backup was run), and as the o.s. root user:

chown -R mysql:mysql /var/lib/mysql/*
systemctl start mysqld.service

mysql -uroot -poracle << EOF
SET sql_log_bin = OFF;
reset master;
# How To Manually Remove InnoDB Cluster Metadata Directory (Doc ID 2274320.1)
drop database mysql_innodb_cluster_metadata;
SET sql_log_bin = ON;
EOF

We do the previous step because, when we start the instance, it still holds the InnoDB Cluster metadata, but everything else has been lost; so, to return to a safe InnoDB Cluster environment, we drop the metadata schema and rebuild. The slight difference here is that we still have the /etc/my.cnf file.

Now to configure our InnoDB Cluster again:

mysqlsh --uri ic:oracle@ic2:3306
var cluster = dba.createCluster( 'myCluster');

Now we have 1 instance, we can reuse the same backup (full & incremental) to restore the other 2 nodes. We reuse the same commands as before, on both ic1 & ic3:

mysqlbackup --defaults-file=/etc/my.cnf \
 --backup-image=/home/vagrant/backup/full/full_backup.img \
 --backup-dir=/tmp/mysql_backup_pitr --datadir=/var/lib/mysql \
 --with-timestamp copy-back-and-apply-log

mysqlbackup --defaults-file=/etc/my.cnf \
 --backup-image=/home/vagrant/backup/incre/incr_backup1.mbi \
 --incremental-backup-dir=/tmp/mysql_backup_pitr_incr \
 --datadir=/var/lib/mysql --incremental --with-timestamp \
 copy-back-and-apply-log

Now for some node-specific steps.

On ic1:

cd /var/lib/mysql
rm ic2-bin*

chown -R mysql:mysql /var/lib/mysql/*
systemctl start mysqld.service

mysql -uroot -poracle << EOF
SET sql_log_bin = OFF;
reset master;
drop database mysql_innodb_cluster_metadata;
SET sql_log_bin = ON;
EOF

mysqlsh --uri ic:oracle@ic1:3306

And on ic3:

cd /var/lib/mysql
rm ic2-bin*

chown -R mysql:mysql /var/lib/mysql/*
systemctl start mysqld.service

mysql -uroot -poracle << EOF
SET sql_log_bin = OFF;
reset master;
drop database mysql_innodb_cluster_metadata;
SET sql_log_bin = ON;
EOF

mysqlsh --uri ic:oracle@ic3:3306

Now, of course, this seems like a lot of manual process. We could just as easily have taken a fresh full backup from the restored & re-established ic2, and then used “full/meta/backup_gtid_executed.sql” to seed the instances joining the cluster. The way shown here simply reuses the original full and incremental backups.
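For reference, that alternative would look roughly like this on each joining node. A sketch only, and the path is illustrative: mysqlbackup writes a meta/backup_gtid_executed.sql containing the SET @@GLOBAL.gtid_purged statement for the backup (in some versions the statement ships commented out, so check the file first):

```sql
-- On a freshly restored node, before joining it to the cluster:
RESET MASTER;
-- Replay the GTID set recorded at backup time (illustrative path):
SOURCE /tmp/mysql_backup_pitr/meta/backup_gtid_executed.sql;
```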

Back on ic2:

mysqlsh --uri ic:oracle@ic2:3306


We’re back in business.



InnoDB Cluster: setting up Production… for disaster! (1/2)

Want to setup InnoDB Cluster and be prepared for a Disaster Recovery scenario? Get ready:

Here’s a way to set up InnoDB Cluster across 3 servers, on Oracle Linux 7.2, with 5.7.19 MySQL Commercial Server, MySQL Shell 8.0.3 DMR and MySQL Router. As this is the first blog post of a complete disaster recovery scenario for InnoDB Cluster, we’ll also be installing MySQL Enterprise Backup.

If you’re new to InnoDB Cluster then I’d highly recommend reading up on how it works and on what Group Replication, Shell & Router each do.

So, to start installing, we’ll need to include a few things first:

  • Python is a must, if it’s not already installed, so:
sudo yum install python
  • And numa awareness is needed for 5.7 MySQL Server:
sudo yum install numactl

As I’m installing the Enterprise software (I’ll need Enterprise Backup), I’ve previously downloaded the RPMs for Oracle Linux 7.2 and left them in a directory on each of the 3 servers I’m installing on.

Therefore I can run:

sudo yum install -y mysql-commercial-common-5.7.19-1.1.el7.x86_64.rpm \
mysql-commercial-client-5.7.19-1.1.el7.x86_64.rpm \
mysql-commercial-libs-5.7.19-1.1.el7.x86_64.rpm \
mysql-commercial-server-5.7.19-1.1.el7.x86_64.rpm

And, as I also downloaded MySQL Shell:

sudo yum install -y mysql-shell-8.0.3-0.1.dmr.el7.x86_64.rpm

Remember, this is done on all 3 servers (in my case ic1, ic2 & ic3).

Start the mysql servers. As this is the first start, the data directory is initialized at the same time (which means the randomly generated root password will be needed).

sudo systemctl start mysqld.service

Enable the service to automatically start on server reboot:

sudo systemctl enable mysqld.service

And, in my scenario, as you will see, it’s really not a Production environment, so I’m going to take it easy with my password & security convention. I would NOT recommend this for your production environment.

Insert the following at the end of the config file, to avoid having to create complex passwords:

sudo vi + /etc/my.cnf

Then restart mysql so the parameter is taken:

sudo systemctl restart mysqld.service

Change the root password (if you really want to):

sudo grep 'A temporary password is generated for root@localhost' /var/log/mysqld.log |tail -1
mysql -uroot -p
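That grep step can be scripted too. A hypothetical sketch: the log line below is a 5.7-format sample with a made-up password, standing in for the real one in /var/log/mysqld.log:

```shell
# Sample 5.7-style log line (the password here is invented):
cat > /tmp/mysqld.log <<'LOG'
2017-10-04T10:00:00.000000Z 1 [Note] A temporary password is generated for root@localhost: oJ5uE.sDm,aB
LOG

# The password is the last field of the most recent matching line:
TMP_PASS=$(grep 'A temporary password is generated' /tmp/mysqld.log | tail -1 | awk '{print $NF}')
echo "$TMP_PASS"
```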

SET sql_log_bin = OFF; 
alter user 'root'@'localhost' identified by 'oracle';
SET sql_log_bin = ON; 

I don’t want these changes being replicated across the nodes, because I’m actually doing this on all servers, hence I use “SET sql_log_bin = OFF” so the changes aren’t reflected in the bin logs.

Now, change to root and run MySQL Shell to start configuring the InnoDB Cluster environment:

sudo -i
mysqlsh
MySQL JS > dba.configureLocalInstance();
MySQL JS > \q

This is where we get asked a couple of questions:

Detecting the configuration file...
Found configuration file at standard location: /etc/my.cnf

Do you want to modify this file? [Y|n]: 
MySQL user 'root' cannot be verified to have access to other hosts in the network.

1) Create root@% with necessary grants
2) Create account with different name
3) Continue without creating account
4) Cancel

Please select an option [1]: 2
Please provide an account name (e.g: icroot@%) to have it created with the necessary
privileges or leave empty and press Enter to cancel.

Account Name: ic
Password for new account: ******
Confirm password: ******
Validating instance...

As you can see, I chose option 2, to create the user ic/oracle for my cluster. You can choose the name you wish here.

It’s quite normal for the output to tell you to restart the mysql server for the config changes to take effect. We’ll do it after the next step.

Now, MEB needs a couple of CSV-type tables (mysql.backup_history & mysql.backup_progress) to run the full and incremental backups. We want these tables so that MEB can run the incremental automatically via “history:last_backup” (instead of scripting it with end_lsn), so that Enterprise Monitor can keep an eye on backups through them, and to reduce the number of warnings produced on each backup run. So we’ll have to comment out the “disabled_storage_engines” entry in the config file:

sudo vi + /etc/my.cnf
 #disabled_storage_engines = MyISAM,BLACKHOLE,FEDERATED,CSV,ARCHIVE

And restart (remember, we’re still doing this on all 3 nodes):

systemctl restart mysqld.service

We should get an “OK” here; older versions of Shell might give you something different.

Once the mysql server has restarted, run the following. Specific privileges are required for our “ic” user:

mysql -uroot -poracle << EOF
SET sql_log_bin = OFF;
SET sql_log_bin = ON;
EOF

So, everything we’ve done up until now, has been done on all 3 OL servers.

Now, on just one server, doesn’t matter which, let’s create our InnoDB Cluster:

mysqlsh --uri ic:oracle@ic2:3306

var cluster = dba.createCluster( 'myCluster');


Now we’ve got 1 instance in our cluster, let’s add the other 2 that we’ve configured at the same time as this one.


Remember, by default we’re in single-primary mode, i.e. there will only be 1 RW server and the other 2, the ones we just added, will be in SUPER READ ONLY mode, i.e. no writes will be allowed. If you want to go to multi-primary mode, then that’s a different createCluster option.
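For reference, the multi-primary variant looks something like the snippet below. Treat the option name as an assumption to check against your Shell version: in the 8.0.3 DMR era it was multiMaster (with force required), while later Shell versions renamed it multiPrimary.

```js
// Hypothetical sketch: create the cluster in multi-primary mode.
// Option name depends on your Shell version (multiMaster vs multiPrimary).
var cluster = dba.createCluster('myCluster', {multiMaster: true, force: true});
```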

Now to check that all 3 are in the same cluster:

cluster.status();
Ok, a slight word of warning, and if you haven’t paid attention to the other referenced documentation mentioned at the beginning, if you come out of the mysqlsh session, and want to go back in and get the cluster status, you’ll need to run this first:

var cluster = dba.getCluster("myCluster")

Otherwise shell will just complain saying that it doesn’t understand what “cluster.” is.

And so, after that, we should see something like the following:

{
    "clusterName": "myCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "ic2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "ic1:3306": {
                "address": "ic1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "ic2:3306": {
                "address": "ic2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "ic3:3306": {
                "address": "ic3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }
}

Looking good! Before we jump for joy, we have to make the changes persistent in the my.cnf. This has to be done locally on each machine:

On ic2:

mysqlsh --uri ic:oracle@ic2:3306
dba.configureLocalInstance();
\q

On ic1 (say Y when it asks you to override SUPER READ ONLY mode, otherwise no changes will occur):

mysqlsh --uri ic:oracle@ic1:3306
dba.configureLocalInstance();
\q

On ic3 (again, say Y when asked to override SUPER READ ONLY mode):

mysqlsh --uri ic:oracle@ic3:3306
dba.configureLocalInstance();
\q


And, now, just to make sure, back on the primary server, or indeed from any of the 3 nodes:

mysqlsh --uri ic:oracle@ic2:3306
var cluster = dba.getCluster( 'myCluster');
cluster.status();

With our InnoDB Cluster up and running, all we need to do now is to let the application connect seamlessly using MySQL Router.

I’m installing on the same servers as the instances but the normal operation would be to install each mysqlrouter process on the app server.

And to make things clearer, I’ll only install 2 mysqlrouters, on ic1 & ic3:

On ic1:

sudo mkdir -p /opt/mysql/router
sudo chown -R mysql:mysql /opt/mysql/router
sudo mysqlrouter --bootstrap ic:oracle@ic2:3306 --directory /opt/mysql/router --user=mysql

On ic3:

sudo mkdir -p /opt/mysql/router
sudo chown -R mysql:mysql /opt/mysql/router
sudo mysqlrouter --bootstrap ic:oracle@ic2:3306 --directory /opt/mysql/router --user=mysql --force

As we’re on the same servers as the MySQL instance, we could even forget tcp/ip access and use sockets:

mysqlrouter --bootstrap ic:oracle@ic2:3306 --directory /opt/mysql/router --conf-use-sockets

In fact, maybe installing a 3rd mysqlrouter process away from the app servers isn’t a bad idea.. and the connect string from the app could choose the hostnames & sockets of the first 2 mysqlrouters, and if they fail, there’s a fall back mysqlrouter connection.
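As an aside, and purely as an illustration (nothing above configures this), the application-side fallback can live in the connector itself; Connector/J, for instance, accepts a comma-separated host list and walks down it on failure:

```
jdbc:mysql://ic1:6446,ic3:6446/nexus
```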

So, now they’re configured, let’s test them. Try this on both ic3 (or ic1):

mysql -uic -poracle -P6446 -hic1 -e "select @@hostname"
mysql -uic -poracle -P6446 -hic1 -e "select @@hostname"
mysql -uic -poracle -P6446 -hic1 -e "select @@hostname"

Remember, by default, port 6446 is for RW traffic and port 6447 is for read-only:

mysql -uic -poracle -P6447 -hic1 -e "select @@hostname"
mysql -uic -poracle -P6447 -hic1 -e "select @@hostname"
mysql -uic -poracle -P6447 -hic1 -e "select @@hostname"

# And on ic1 or ic3:

mysql -uic -poracle -P6446 -hic3 -e "select @@hostname"
mysql -uic -poracle -P6446 -hic3 -e "select @@hostname"
mysql -uic -poracle -P6446 -hic3 -e "select @@hostname"

mysql -uic -poracle -P6447 -hic3 -e "select @@hostname"
mysql -uic -poracle -P6447 -hic3 -e "select @@hostname"
mysql -uic -poracle -P6447 -hic3 -e "select @@hostname"

Obviously, you can personalize mysqlrouter. Let’s read from all 3 instances, independently of which is primary or secondary in the innodbcluster metadata cache:

vi /opt/mysql/router/mysqlrouter.conf


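The bootstrap-generated mysqlrouter.conf contains one routing section per port. The fragment below is a sketch only: section names, defaults and the accepted role values vary by Router version, so treat every line as an assumption to check against your own generated file. The idea is to point the read-only port at primaries and secondaries alike via the metadata cache:

```
[routing:myCluster_default_ro]
bind_address=0.0.0.0
bind_port=6447
destinations=metadata-cache://myCluster/default?role=PRIMARY_AND_SECONDARY
mode=read-only
protocol=classic
```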
Thanks for reading. Hopefully this has helped someone…

…. Next up, a complete disaster recovery situation …


MySQL Cluster in Docker: quick step by step

I just wanted to share the ‘how to’ of getting to know MySQL Cluster a bit more, via Docker, as mentioned in the Docker Store install guide: all I’m doing here is replicating what’s mentioned there. Hopefully someone might find it interesting if they’re considering getting a MySQL Cluster up and running to see how it works.

For more info on MySQL in Docker, I’d highly recommend looking at this blog:

(Thanks to Giuseppe here!)

So, what’s needed? Here we go.

Get the cluster image from docker:

$ docker pull mysql/mysql-cluster

Create the private network we’ll need:

$ docker network create cluster --subnet=

Start the management node:

$ docker run -d --net=cluster --name=management1 --ip= mysql/mysql-cluster ndb_mgmd

Start the datanodes:

$ docker run -d --net=cluster --name=ndb1 --ip= mysql/mysql-cluster ndbd
$ docker run -d --net=cluster --name=ndb2 --ip= mysql/mysql-cluster ndbd

And finally the MySQL server node:

$ docker run -d --net=cluster --name=mysql1 --ip= -e MYSQL_RANDOM_ROOT_PASSWORD=true mysql/mysql-cluster mysqld

Change the root password on the mysqld node:

$ docker logs mysql1 2>&1 | grep PASSWORD
$ docker exec -it mysql1 mysql -uroot -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass';

And last but not least, see how the cluster is running from the management node:

$ docker run -it --net=cluster mysql/mysql-cluster ndb_mgm

ndb_mgm> show

Obviously, now we can connect to the SQL node, create a database using the NDB storage engine, enter some rows in the tables and take it from there:

$ docker exec -it mysql1 mysql -uroot -p

mysql> create database nexus; use nexus;

mysql> create table replicant ... engine=NDBCLUSTER;
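Fleshing out that last line, here’s a sketch that reuses the replicant table from my earlier posts with the engine switched to NDBCLUSTER (sample schema and data only):

```sql
CREATE TABLE nexus.replicant (
  id INT(11) NOT NULL AUTO_INCREMENT,
  `First name` varchar(40) NOT NULL DEFAULT '',
  `Last name` varchar(40) NOT NULL DEFAULT '',
  `Replicant` enum('Yes','No') NOT NULL DEFAULT 'Yes',
  PRIMARY KEY (id)
) ENGINE=NDBCLUSTER;

INSERT INTO nexus.replicant (`First name`,`Last name`,`Replicant`)
VALUES ('Roy','Hauer','Yes'), ('Rutger','Batty','Yes'), ('Harrison','Deckard','Yes');
```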

Hope this helps someone! Thanks for reading.
