And continuing with our Clustercontrol welcome, here we take a look at how to deploy a two-node PostgreSQL cluster when, as many of you might know, the author has no idea how to do this!
Now this is the beauty of Clustercontrol: No prior knowledge required!
For any of us getting used to a new technology, a fair amount of time is invested in reading the install guide and getting to know the architecture and layout of each component of the new platform to deploy. Trial and error takes time. We make mistakes and recover from them, learning of course, but then, how many times will we do this in the future… really? It's a huge cost for questionable benefits. Clustercontrol frees this time up for us, so we're not just talking about automation here, as obviously that's key. It's taking the fear away and replacing it with time to focus on the more valuable tasks, i.e. enablement / empowerment. In the end, we're talking data sovereignty of our platforms, applications and processes.
So, charged with controlling our data sources and deployments for whatever reason (new environments, migration from one RDBMS to another, from NoSQL to relational, etc.), let's see how simple it is to get PostgreSQL high availability up and running in just a couple of steps.
Ok, maybe I've jumped into the nitty-gritty too fast, so I should mention that the documentation is the best starting place: Clustercontrol User Guide – Deploying PostgreSQL. But didn't I just say that it's easier than having the RTFM solution applied? Here goes:
Once we’ve chosen the option to Deploy, we see the list of Clusters to choose from. Select PostgreSQL:
And like we’ve done before, fill in the vagrant box VM details.
We now get to add the more specific PG info: port, user & pass, version of PostgreSQL you want to deploy and whether you want to personalize the datadir.
Now we have to add the hosts that we want to use:
2 nodes added, next step: specify whether we want synchronous replication or not.
And hit “Deploy”.
Job created, we should see something like the following start growing in front of us:
All the lovely details of getting two PostgreSQL nodes replicating without having to look at a single manual.
Now we should see something like the following:
The nodes:
Have a look at the Topology tab:
Looking good. So, maybe you’re more of a PostgreSQL expert and want to have a look at the logs:
All done. I got a PostgreSQL 2 node cluster up and running in about 10 minutes without any prior experience. Who else wants to try? Please go for it!
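And if you prefer a terminal over clicking, the same deployment can be driven with the s9s CLI that ships alongside Clustercontrol. A rough sketch, with hypothetical IPs and credentials, so double-check the flags against the s9s documentation for your version:
s9s cluster --create \
  --cluster-type=postgresql \
  --nodes="10.0.0.11;10.0.0.12" \
  --provider-version=15 \
  --db-admin=postgres \
  --db-admin-passwd='Secret123' \
  --os-user=vagrant \
  --os-key-file=/home/vagrant/.ssh/id_rsa \
  --cluster-name='PostgreSQL-HA' \
  --wait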
In line with maintaining sovereignty of our data and architecture, without sacrificing security for control or vice versa, let's see what else Clustercontrol can do to help us down that path.
There's a "new" kid on the block, so to speak, by the name of StoneDB. Well, after installing the "Open Source", "100% Compatible with MySQL 5.7", "Integration of OLTP and OLAP" and other values that each of you can consider in your own time, I thought to myself: "If it's 100% compatible, then Clustercontrol shouldn't have any issue with it either, right?" So let's take a look.
And once it was all up and running, I went to Clustercontrol to get it monitored:
Here the only noticeable thing is the name I give the Cluster.
Again, here I’ve ensured the ol7ic2 node is reachable, ssh-able, etc.
And, after making sure the mysql client binaries were installed (they get left out when using the manual tar install, of course, so adapting is the name of the game), hey presto, we're a-monitoring!:
Now, it's MySQL compatible, which means that the engineering team at Severalnines hasn't been able to include, certify or ensure any StoneDB-specific features in the Clustercontrol GUI. Buuuut, if that's something you're looking for to complete the manageability of all of your instances and platforms, please reach out to us! Or even Slack the team!
So, as you have seen in my previous post, you can get Clustercontrol up and running extremely fast in order to deploy new instances, bring existing environments closer and take control of them all, in one fell swoop.
You're highly unlikely to have installed Clustercontrol and only now want to start letting your MySQL, PostgreSQL, Mongo, Redis, etc. go forth and multiply. I'd wager it's the other way around: you have lots of various environments out there, wild and untamed. Here's Clustercontrol to take charge of them, manage them, monitor them, back them up too, and even let you go beyond and create replicas, clusters and so forth.
So, let’s see an example of how to use Clustercontrol and make a wild MySQL instance tame.
FYI: All I did here was use my vagrant box with Clustercontrol installed, download the MySQL 8.0.32 Community rpm bundles, manually install them, open it up and change the password, making it look like a pre-existing environment.
In future posts I’ll look at doing an automatic deployment (the “Deploy” button) so you don’t have to do this anymore.
With this empty console, we want to “import” an environment into Clustercontrol and thus take control of it.
You might see the following which allows you to directly go to “Import”:
If not, go over to the top right and hit “Import“.
You’ll see a screen like the following:
As we want to "manage" the environment, we have to use a secure shell connection with the Clustercontrol server certificate and user, so this is a must.
As you can see I’m using vagrant boxes, ccdemo is my Clustercontrol box and ol7ic1 is an Oracle Linux 7 box:
Note: I’ve used a specific user, ic, that has access both from remote as well as from localhost as I don’t want root to be that user nor open up that security risk. Please choose wisely here. My “ic” user has “with grant option” as we’ll need this for more things I’ll highlight later on.
When you add the node name it needs you to hit enter as it checks the ssh permissions and makes sure it is reachable before going any further. A wise check:
Once that’s confirmed … :
.. we then hit the yellow “import” button at the bottom, and the job is submitted:
Top right we can see the job.
Under the Activity tab, you’ll find the Alarms, Jobs, CMON Log & Audit Log sections. We want the job detail now to see how it’s progressing.
The following screenshots show the detail we get and all the different tasks that have been automated.
More:
So I’m human; cut me and I’ll bleed.
I used the “ic” user and initially I forgot to create it for both remote and localhost access, hence the error. I did this afterwards, manually, and then added the “GRANT PROXY .. WITH GRANT OPTION” privilege too.
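For reference, the sort of thing that fixes it; a sketch only, 'passwd' is a placeholder and you'll want to narrow these privileges down for production:
CREATE USER 'ic'@'localhost' IDENTIFIED BY 'passwd';
CREATE USER 'ic'@'%' IDENTIFIED BY 'passwd';
GRANT ALL PRIVILEGES ON *.* TO 'ic'@'localhost' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'ic'@'%' WITH GRANT OPTION;
-- and the proxy privilege mentioned above
GRANT PROXY ON ''@'' TO 'ic'@'localhost' WITH GRANT OPTION;
GRANT PROXY ON ''@'' TO 'ic'@'%' WITH GRANT OPTION;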
Once the job has finished we should see something similar to:
We can then click on the Clusters option on the left, and there we have it!
Click on the cluster and we’ll see:
And now it’s time for you to move around and play.
I’ll leave you some screen shots here so you can see what each of the tabs within this newly detected instance looks like, as I’m sure some of them will make you want to have this for more than just MySQL… but that’s another story / post.
As it's been a while since I wrote an entry, I think it's fair to share that I'm now dedicating my time to Severalnines as Product Marketing Manager, where MySQL is still MySQL and open source is still important.
So, first things first, what do we do at Severalnines? I'm sure you all know, but just in case, I think a picture paints a thousand words, so:
Make database ops easy by automating the boring stuff and, I'd say more importantly, help you deploy all those databases such as MySQL (official & Percona), NDB, Galera, TimeScale, PostgreSQL, Mongo, Redis and even SQL Server (on Linux)!
And one of the requirements, whether you "import" existing DBs or "deploy" new ones, is SSH access from the ClusterControl / CMon server to each database host.
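In practice that means passwordless SSH from the ClusterControl host to each database node, and then running the installer script; a minimal sketch, assuming a 'vagrant' OS user (check the docs for the current install-cc URL):
ssh-keygen -t rsa
ssh-copy-id vagrant@ol7ic1          # repeat for each node ClusterControl will manage
ssh vagrant@ol7ic1 "hostname"       # confirm it logs in without a password
wget https://severalnines.com/downloads/cmon/install-cc
chmod +x install-cc
sudo ./install-cc
The installer walks through its setup with plenty of output.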
But you’ll end up with a line saying:
2023-02-11 11:49:47 CET -- ClusterControl installation completed!
Open your web browser to https://<the-IP-of-your-server>/clustercontrol and create a default Admin User.
If you want to uninstall ClusterControl then please follow the instructions here, https://severalnines.com/docs/administration.html#uninstall
If anything crops up and you need help, check out the Supporting docs.
We have never had to be so careful about keeping safe, with respect to our health, as we do now.
I don’t have to go into details.
It's the same for the health and safe-keeping of our IT infrastructure: we only remember our weaknesses when we suffer. It's probably human nature.
So let’s be proactive where we can.
IT security has a lot to do with reducing the exposure of the system, controlling the risk and minimizing the attacker's options.
Assess, Prevent, Detect and Recover: MySQL Enterprise Security can help you fulfill all of your security needs. Whether it's meeting compliance requirements and the applicable regulations, addressing specifics like data encryption or SQL injection, or quite simply securing your data, there are many options.
Then what is “MySQL Enterprise Security”?
Come along and join us in the MySQL Security webinars that Syone and MySQL are holding in February and find out, over the two sessions, what can be done and how to keep your data safe; you can even involve us afterwards in determining which specific solution may help your organisation and what this entails for the application. Each session will only be 30 minutes long, so we'll try to be as concise as possible:
Based in Portugal since 1999, Syone works with some of the world’s leading companies in Telco, Banking & Insurance, Government, Utilities, Industry, Retail, Hospitality and others by implementing mission-critical projects and helping organisations to increase agility to drive Digital Transformation.
Just recently, I have been asked to look into what a Disaster Recovery site for InnoDB Cluster would look like.
If you’re reading this, then I assume you’re familiar with what MySQL InnoDB Cluster is, and how it is configured, components, etc.
Reminder: InnoDB Cluster (Group Replication, Shell & Router) in version 8.0 has had serious improvements from 5.7. Please try it out.
So, given that, and given that we want to consider how best to fulfill the need, i.e. create a DR site for our InnoDB Cluster, let’s get started.
Basically I’ll be looking at the following scenario:
InnoDB Cluster Source site with a Group Replication Disaster Recovery Site.
Now, just before we get into the nitty-gritty, here’s the scope.
Life is already hard enough, so we want as much automated as possible. Yes, InnoDB Cluster gets some of that done, but there are other parts we will still have to assume control over, so here's the idea:
Source Site:
– InnoDB Cluster x3
– Router – accesses Source (dynamic) & DR (static)
– Full automation
DR Site:
– Group Replication x2
– Router – accesses Source (dynamic) & DR (static)
– Configured asynchronous replication from the primary node
External / LBR 3rd Site:
– MySQL Router (static) routing connections to the Source Router
– Allows for 2-instance automatic failovers at the Source site
– Manually reroute connections to the DR site once the Source site outage is confirmed, or use a different Router port
– Redundancy recommended to reduce SPOF
Let’s get it all started then.
First things first, if you’re not quite sure what you’re doing, then I highly suggest looking at the following:
(And thanks to Andrew & Tony for their continuous help!)
Let’s set up our Source Site
3x CentOS 7 servers.
8.0.20: Server, Shell & Router.
:: All servers have had their SELinux, firewalls, ports, /etc/hosts and so on checked and validated, haven't they? We will want to be using a private IP between all hosts, for obvious reasons.
I downloaded all the MySQL Enterprise rpm's from https://edelivery.oracle.com and ran the following on all 3 servers (getting mystic here: "centos01", "centos02" & "centos03"):
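(The rpm install and service start are essentially the same steps as for the DR nodes further down; roughly the following, with the rpm glob being my assumption:)
sudo yum install -y mysql-commercial-*.rpm
sudo systemctl start mysqld.service
sudo systemctl enable mysqld.service
Then grab the temporary root password and log in: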
sudo grep 'A temporary password is generated for root@localhost' /var/log/mysqld.log | tail -1
mysql -uroot -p
Once we're in, we want to control which commands get into the binlogs; the 'alter user' and 'create user' statements will cause us issues later on, hence "sql_log_bin=OFF" here:
SET sql_log_bin = OFF;
alter user 'root'@'localhost' identified by 'passwd';
create user 'ic'@'%' identified by 'passwd';
grant all on *.* to 'ic'@'%' with grant option;
flush privileges;
SET sql_log_bin = ON;
This “ic@%” user will be used throughout, as we just don’t know when and which instance will end up being the source primary and/or a clone so by using this single user setup, I keep things easier for myself. For your production setup, please look deeper into specific privs ‘n’ perms.
Now, you will see that, although we have installed the binaries and started up the instances on centos02 & centos03, “addInstance” goes and clones our primary. This makes life soooo much easier.
HINT: Before cloning, if you need to provision data, do it prior and we kill a couple of proverbial birdies here.
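For reference, creating the cluster and adding the instances is only a handful of AdminAPI calls in MySQL Shell; a sketch (the cluster name is my own, the hosts and 'ic' user are the ones from above):
mysqlsh
// check and fix each instance's configuration for Group Replication first
dba.configureInstance('ic@centos01:3306')
dba.configureInstance('ic@centos02:3306')
dba.configureInstance('ic@centos03:3306')
// create the cluster on the first node, then add the others; addInstance clones from the primary
\connect ic@centos01:3306
var cluster = dba.createCluster('myCluster')
cluster.addInstance('ic@centos02:3306')
cluster.addInstance('ic@centos03:3306')
cluster.status()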
Now we have a 3 instance Single Primary InnoDB Cluster.
Setting up the local “InnoDB Cluster aware” Router:
mkdir -p /opt/mysql/myrouter
chown -R mysql:mysql /opt/mysql/myrouter
cd /opt/mysql
mysqlrouter --bootstrap ic@centos02:3306 -d /opt/mysql/myrouter -u mysql
./myrouter/start.sh
Now Router is up and running.
Let’s set up our Group Replication Disaster Recovery site now.
Here, I’m doing a naughty, and just setting up a 2 node Group Replication group. If you want to see how you should be doing it, then let me reference Tony again:
:: Again, ports, firewalls, SELinux, /etc/hosts and similar have all been validated, haven't they? Please?
The environment:
2x Oracle Linux 7 servers.
Version 8.0.20: MySQL Server, Router & Shell
Do the following on both replica servers (olrep01 & olrep02):
sudo yum install -y mysql-*8.0.20*rpm
sudo systemctl start mysqld.service
sudo systemctl enable mysqld.service
sudo grep 'A temporary password is generated for root@localhost' /var/log/mysqld.log | tail -1
mysql -uroot -p
SET sql_log_bin = OFF;
alter user 'root'@'localhost' identified by 'passwd';
create user 'ic'@'%' identified by 'passwd';
grant all on *.* to 'ic'@'%' with grant option;
flush privileges;
SET sql_log_bin = ON;
Now that we have the basic servers created, we need to clone the data from the primary instance in our InnoDB Cluster. First, on the donor (centos02, the clone source referenced below):
SET sql_log_bin = OFF;
INSTALL PLUGIN CLONE SONAME "mysql_clone.so";
GRANT BACKUP_ADMIN ON *.* TO 'ic'@'%';
GRANT SELECT ON performance_schema.* TO 'ic'@'%';
GRANT EXECUTE ON *.* TO 'ic'@'%';
SET sql_log_bin = ON;
Now on each replica:
INSTALL PLUGIN CLONE SONAME "mysql_clone.so";
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SET GLOBAL clone_valid_donor_list = 'centos02:3306';
GRANT CLONE_ADMIN ON *.* TO 'ic'@'%';
# Keep this, if we ever want to clone from any of the replicas
GRANT BACKUP_ADMIN ON *.* TO 'ic'@'%';
GRANT SELECT ON performance_schema.* TO 'ic'@'%';
GRANT EXECUTE ON *.* TO 'ic'@'%';
Now to execute the clone operation:
set global log_error_verbosity=3;
CLONE INSTANCE FROM 'ic'@'centos02':3306 IDENTIFIED BY 'passwd';
View how the clone process is going:
select STATE, ERROR_NO, BINLOG_FILE, BINLOG_POSITION, GTID_EXECUTED,
       CAST(BEGIN_TIME AS DATETIME) as "START TIME",
       CAST(END_TIME AS DATETIME) as "FINISH TIME",
       sys.format_time(POWER(10,12) * (UNIX_TIMESTAMP(END_TIME) - UNIX_TIMESTAMP(BEGIN_TIME))) as DURATION
from performance_schema.clone_status \G
As we’re cloning, we might run into the duplicate UUID issue, so, on both of the replicas, force the server to have a new UUID:
vi /etc/my.cnf
# GR setup
server-id =11
log-bin =mysql-bin
gtid-mode =ON
enforce-gtid-consistency =TRUE
log_slave_updates =ON
binlog_checksum =NONE
master_info_repository =TABLE
relay_log_info_repository =TABLE
transaction_write_set_extraction=XXHASH64
plugin_load_add ="group_replication.so"
group_replication = FORCE_PLUS_PERMANENT
group_replication_bootstrap_group = OFF
#group_replication_start_on_boot = ON
group_replication_group_name = 8E2F4761-C55C-422F-8684-D086F6A1DB0E
group_replication_local_address = '10.0.0.41:33061'
# Adjust the following according to IP's and numbers of hosts in group:
group_replication_group_seeds = '10.0.0.41:33061,10.0.0.42:33061'
On 2nd node:
server-id =22
log-bin =mysql-bin
gtid-mode =ON
enforce-gtid-consistency =TRUE
log_slave_updates =ON
binlog_checksum =NONE
master_info_repository =TABLE
relay_log_info_repository =TABLE
transaction_write_set_extraction=XXHASH64
plugin_load_add ="group_replication.so"
group_replication = FORCE_PLUS_PERMANENT
group_replication_bootstrap_group = OFF
#group_replication_start_on_boot = ON
group_replication_group_name = 8E2F4761-C55C-422F-8684-D086F6A1DB0E
group_replication_local_address = '10.0.0.42:33061'
# Adjust the following according to IP's and numbers of hosts in group:
group_replication_group_seeds = '10.0.0.41:33061,10.0.0.42:33061'
Restart both servers:
systemctl restart mysqld
Check they’re in a GR group and the plugin is ok:
mysql -uroot
SELECT * FROM performance_schema.replication_group_members;
SELECT * FROM performance_schema.replication_group_members\G
select * from information_schema.plugins where PLUGIN_NAME = 'group_replication'\G
Now to create the recovery replication channel on all servers (although this is for single primary setup, the source could fail and then come back as a Read-Only replica, so we need to set this up):
CHANGE MASTER TO MASTER_USER='ic', MASTER_PASSWORD='passwd' FOR CHANNEL 'group_replication_recovery';
On Server 1:
SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;
On Server 2:
START GROUP_REPLICATION;
Check all the servers super_read_only mode and if they’re in a group:
select @@super_read_only;
SELECT * FROM performance_schema.replication_group_members;
Set up Router on one of the replicas bootstrapping against the InnoDB Cluster source:
mkdir -p /opt/mysql/myrouter
chown -R mysql:mysql /opt/mysql/myrouter
mysqlrouter --bootstrap ic@centos02:3306 -d /opt/mysql/myrouter -u mysql
cd /opt/mysql/myrouter
./start.sh
Check connectivity:
mysql -uic -P6446 -h olrep01 -N -e "select @@hostname, @@port"
mysql -uic -P6446 -h centos02 -N -e "select @@hostname, @@port"
Replication config for replicas
Setting up replication on the replica from the primary means that we MUST be prepared to script the following for when the primary fails in the Group Replication setup, or at least keep the command at hand; it isn't automatic, so we'll need to control this ourselves.
Also, albeit a Group Replication group, when the primary fails in the DR site we will also need to check all connections and sessions to the GR DR site, to make sure we can set this up again safely.
So, from the PRIMARY instance only, execute the following:
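Something along these lines, given the channel name used in the status check below and the Router we bootstrapped on olrep01 (port 6446); a sketch, so adjust host, port and credentials to your own setup:
CHANGE MASTER TO
  MASTER_HOST = 'olrep01',
  MASTER_PORT = 6446,
  MASTER_USER = 'ic',
  MASTER_PASSWORD = 'passwd',
  MASTER_AUTO_POSITION = 1
FOR CHANNEL 'idc_gr_replication';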
As you can see, this is replicating via the GR DR site Router that is bootstrapped against the InnoDB Cluster. Hence when the primary on the cluster moves, Router will take us to the new primary without having to re-create replication or similar.
Now let’s start it all up:
start slave ;
And have a look at our InnoDB Cluster source site replicating data to our DR Group Replication site:
show slave status for channel 'idc_gr_replication'\G
Application Connectivity
Now that we have an HA solution for our data, what about connecting to one site and the other?
We have 2 routers, each local to each data-saving site, fine. But, if we’re being realistic, we have 1 data entry point, the InnoDB Cluster Primary instance on centos01, and then 4 copies of the data, syncing nicely between a (dedicated? please?) local network to ensure we’re as safe as possible.
As you’ll probably have noticed, upon bootstrapping Router on both sides, it chooses the routing_strategy=first_available on the default port 6446 which is fine. And yes, this port can be changed if we want to, so feel free to adjust as required.
But, depending on where the application is, and the user entry point, you'll have a VIP, floating IP or something similar, perhaps helping to load balance. But here is where we start to ask more questions:
What happens when the Source site goes down?
eg:
Well, there are a number of things to keep in mind. The local Routers will take care of instance failures, and here I feel obliged to forewarn you all about what happens when we lose 2 out of 3 instances on the source site. In that case, the last instance could be configured with group_replication_exit_state_action='OFFLINE_MODE' or 'ABORT_SERVER', for example. That way, it doesn't get used as a valid node via Router.
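For example (just an illustration of the setting; pick the value that suits your failover procedure):
SET PERSIST group_replication_exit_state_action = 'OFFLINE_MODE';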
Now, once that has happened, i.e. the source site goes down, we need to manually intervene, and make sure that everything is really down, and kick into effect the DR site. Now both Routers on Source & DR site that were bootstrapped to the InnoDB Cluster will fail of course.
We are left with the Group Replication DR site.
So what do we do here? Well, create an additional Router, configured with a static (non-dynamic) setup, as we don’t have the metadata that the IdC-aware Router had via the mysql_innodb_cluster_metadata schema that gets stored in “dynamic_state=/opt/mysql/myrouter/data/state.json”.
What I did was create a new Router process on one of the other servers (instead of centos02 & olrep01, on centos01 & olrep02). Now, this isn't something you'd do in production, as we want Router close to the app, not the MySQL instances. So in effect you could just use a new path, e.g.:
mkdir -p /opt/mysql/myrouter7001
chown -R mysql:mysql /opt/mysql/myrouter7001
cd /opt/mysql/myrouter7001
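Inside that directory goes a small, hand-written configuration, no bootstrap involved; a sketch of what mine looked like, with the section name and log/runtime paths being my own choices:
[DEFAULT]
logging_folder = /opt/mysql/myrouter7001/log
runtime_folder = /opt/mysql/myrouter7001/run
[logger]
level = INFO
[routing:gr_dr_7001]
bind_address = 0.0.0.0
bind_port = 7001
destinations = olrep01:3306,olrep02:3306
routing_strategy = first-available
protocol = classic
Drop the same file onto both centos01 and olrep02 and start each one up, e.g.:
sudo -u mysql mysqlrouter -c /opt/mysql/myrouter7001/mysqlrouter.conf &
Then check connectivity: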
mysql -uic -P7001 -h olrep02 -N -e "select @@hostname, @@port"
mysql -uic -P7001 -h centos01 -N -e "select @@hostname, @@port"
And this will give us back olrep01, then when that fails, olrep02.
When the InnoDB Cluster-bootstrapped-Router that listens on 6446 fails, we know to redirect all requests to port 7001 on our production IP’s (in my case, olrep02 & centos01).
Summing up, we’ve now got 4 routers running, 2 are InnoDB Cluster aware, listening on 6446, and the static GR DR Routers are listening on 7001.
That’s a lot of ports and a long connect string. Maybe we could make this simpler?
Ok, well I added yet another Router process, at a higher level. This Router process, again, has a hard-coded static configuration, because we don't have InnoDB Cluster-aware Routers on both sides; that would mean having a mysql_innodb_cluster_metadata schema on both sides, fully aware of both sites, and also fully aware that metadata changes on the source are NOT to be replicated across. Who knows, maybe in the future…
So in my case I created another Router process configuration on centos03:
mkdir -p /opt/mysql/myrouter
chown -R mysql:mysql /opt/mysql/myrouter
cd /opt/mysql/myrouter
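And the configuration for it, again static; a sketch along the same lines (section names are mine, the ports and destinations are the ones described below):
[DEFAULT]
logging_folder = /opt/mysql/myrouter/log
runtime_folder = /opt/mysql/myrouter/run
[routing:source_site_8008]
bind_address = 0.0.0.0
bind_port = 8008
destinations = centos02:6446,olrep01:6446
routing_strategy = first-available
protocol = classic
[routing:dr_site_8009]
bind_address = 0.0.0.0
bind_port = 8009
destinations = centos01:7001,olrep02:7001
routing_strategy = first-available
protocol = classic
Start it up the same way as before and the app now has its two entry points.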
(You might want to use something different, but I did this. I could have all of these on the same server, using different paths. There isn't any limit to how many Router processes we have running on the same server; it's more a "port availability", or should I say "network security", concern.)
Just in case I've lost you a little, what I'm doing here is providing a Router access point via centos03:8008 to the InnoDB-Cluster-aware Routers on centos02:6446 and olrep01:6446, and then an additional / backup access point via centos03:8009 to the static Routers on centos01:7001 and olrep02:7001. The app would then have a connect string similar to "centos03:8008,centos03:8009".
Now, I know to send Source site connections to centos03:8008 always. And when I get errors and notifications from monitoring that this is giving problems, I can use my back up connection to centos03:8009.
Side note: I originally set up a single Router process on each side, mixing the InnoDB Cluster-aware entry with an additional static configuration. This didn't work well because, having bootstrapped against the IdC, the dynamic_state entry only allows for the members taken from the mysql_innodb_cluster_metadata schema. So it won't work. Hence the additional Router processes. But if you think about it, making each Router more modular and dedicated makes for a more atomic solution, albeit a little more to administer.
However, maybe instead of this high-level Router process listening on 8008 & 8009, you want to reuse your keepalived solution. Have a look at Lefred’s “MySQL Router HA with Keepalived” solution for that.
And there you have it!
This was all done on Oracle Cloud Infrastructure compute instances, opening ports internally within the Security List and not much else.
In case it hasn't been seen or promoted enough, I wanted to share the list of webcasts in Spanish that are already available under On Demand Webinars, in the News & Events section of mysql.com:
Ok, so if you're reading this, then I can guess you've got a MySQL InnoDB Cluster in an awkward shape, i.e. you need to restore a backup and add the instance back into the cluster, so we have all our instances again.
It might be logical to think: "ah, but I've only lost one instance, a read-only instance, so all I have to do is back up and restore the other read-only instance and I'm home free". Well, I want to make it a little harder. So in this scenario, assume that we've lost both READ-ONLY instances, meaning I need to back up my primary READ-WRITE instance.
I've got an 8.0.16 instance, on Oracle Linux 7.4. We'll be looking at two hosts, ic1 & ic3.
We’ll be using the MySQL Enterprise Edition Server, that bundles MySQL Enterprise Backup with the rpm’s so we don’t need to install anything else.
I’ll assume you’ve got access to Oracle Support and hence, you might like to have a quick look at the following note:
Backup and Restore Of Group Replication Node ( MySQL 8.0 ) (Doc ID 2449881.1)
So, assuming that you’re scratching your head at the moment, asking me to “get to it” here goes:
On the READ-WRITE server, ic1, run an online hot backup with MySQL Enterprise Backup, MEB:
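Since the Enterprise rpm's bundle MEB, mysqlbackup is already on the box; a sketch of the sort of command used, where the backup directory matches the path we cat further down and the rest is a placeholder:
mysqlbackup --user=root --password \
  --backup-dir=/home/vagrant/backup/mysql --with-timestamp \
  backup-and-apply-log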
And to avoid anyone trying to access the environment ‘by accident’ and make sure nothing else will start trying to write:
vi /etc/my.cnf
..
[mysqld]
..
super_read_only=1
event_scheduler=off
..
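With the backup directory copied over to ic3, mysqld stopped there and the datadir emptied, the restore itself is a copy-back (the backup was already prepared by backup-and-apply-log); roughly, with paths assumed:
mysqlbackup --defaults-file=/etc/my.cnf \
  --backup-dir=/home/vagrant/backup/mysql/2019-05-30_10-39-45 \
  copy-back
chown -R mysql:mysql /var/lib/mysql    # datadir path assumed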
Because we’ve restored a backup from the master, we’ll need to know what GTID was active when we ran the backup, as this will be the starting point for the node we’re restoring. So:
$ cat ~vagrant/backup/mysql/2019-05-30_10-39-45/meta/backup_gtid_executed.sql
# On a new slave, issue the following command if GTIDs are enabled:
SET @@GLOBAL.GTID_PURGED='155d01e3-5b85-11e9-b5b5-080027b46af9:1-21395756,bc780881-5b85-11e9-ba66-080027b46af9:1-356';
# Use the following command if you want to use the GTID handshake protocol:
# CHANGE MASTER TO MASTER_AUTO_POSITION=1;
Let’s start up the restored instance:
systemctl start mysqld.service
And let’s reset the “ic1” instance restored onto the “ic3” server MAKING SURE WE USE “set sql_log_bin = OFF”:
mysqlsh --uri ic@ic3:3306 --sql
SET sql_log_bin = OFF;
RESET SLAVE;
RESET MASTER;
SET @@GLOBAL.GTID_PURGED='155d01e3-5b85-11e9-b5b5-080027b46af9:1-21395756,bc780881-5b85-11e9-ba66-080027b46af9:1-356';
SET sql_log_bin = ON;
So, if we're applying GDPR to our system, and we're already making use of MySQL Transparent Data Encryption / keyring, then here's an example of how to migrate from the file-based keyring to the encrypted keyring. Online.
mysqld. Yes, we start up another mysqld process, but it’s not a fully functioning server, far from it. It is just a means to migrate the keys from the old file-based to the new encrypted file. So don’t worry about the defaults-file, the innodb_xxxx params nor anything else. We actually need to reuse the existing datadir.
datadir. As just mentioned, don’t try and use another datadir as it won’t find any files there to encrypt with the new key and the process won’t be successful. Use the existing online server datadir. (of course, I recommend this process be run in a non-production test environment first!)
-source & -destination. I think this is quite obvious. The plugin we’re coming from, and going to.
keyring_file_data is the existing file-based keyring being used.
keyring_encrypted_file_data & _password is the new encrypted password being stored in its file in this location.
keyring-migration-* params. We need to connect to the existing instance with super user privs. As it's local to the instance, we can use the socket.
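Putting those parameters together, the migration run looks roughly like this; all paths and passwords are placeholders, the basedir is the one from the log output below:
mysqld --basedir=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64 \
  --datadir=/path/to/existing/datadir \
  --keyring-migration-source=keyring_file.so \
  --keyring-migration-destination=keyring_encrypted_file.so \
  --keyring_file_data=/path/to/keyring/keyring \
  --keyring_encrypted_file_data=/path/to/keyring/keyring_encrypted \
  --keyring_encrypted_file_password=secret \
  --keyring-migration-socket=/path/to/mysql.sock \
  --keyring-migration-user=root \
  --keyring-migration-password=rootpasswd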
And if, only if, the migration is successful, you should see output like the following. Anything else, i.e. if no output comes back, or some of the lines don’t appear in your scenario, double check the parameters in the previous command as it’s more than likely impeding a successful key migration somewhere:
2018-10-08T11:26:22.227161Z 0 [Note] [MY-010098] [Server] --secure-file-priv is set to NULL. Operations related to importing and exporting data are disabled
2018-10-08T11:26:22.227219Z 0 [Note] [MY-010949] [Server] Basedir set to /usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/.
2018-10-08T11:26:22.227226Z 0 [System] [MY-010116] [Server] mysqld (mysqld 8.0.12-commercial) starting as process 13758
2018-10-08T11:26:22.254234Z 0 [Note] [MY-011085] [Server] Keyring migration successful.
2018-10-08T11:26:22.254381Z 0 [Note] [MY-010120] [Server] Binlog end
2018-10-08T11:26:22.254465Z 0 [Note] [MY-010733] [Server] Shutting down plugin 'keyring_encrypted_file'
2018-10-08T11:26:22.254642Z 0 [Note] [MY-010733] [Server] Shutting down plugin 'keyring_file'
2018-10-08T11:26:22.255757Z 0 [System] [MY-010910] [Server] mysqld: Shutdown complete (mysqld 8.0.12-commercial) MySQL Enterprise Server - Commercial.
Migrated.
To make sure the instance has the new parameters in the defaults file, and before any risk of restarting the instance, we’ll need to add the new ‘encrypted’ params to the my.cnf:
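Something like the following in the [mysqld] section; the paths mirror whatever was used in the migration command, and swapping the early-plugin-load entry over is the key part:
[mysqld]
early-plugin-load = keyring_encrypted_file.so
keyring_encrypted_file_data = /path/to/keyring/keyring_encrypted
keyring_encrypted_file_password = secret
# and remove (or comment out) the old keyring_file.so / keyring_file_data entries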
And let’s double check which keyring plugin we’re using:
select * from information_schema.plugins where plugin_name like '%keyring%' \G
*************************** 1. row ***************************
PLUGIN_NAME: keyring_encrypted_file
PLUGIN_VERSION: 1.0
PLUGIN_STATUS: ACTIVE
PLUGIN_TYPE: KEYRING
PLUGIN_TYPE_VERSION: 1.1
PLUGIN_LIBRARY: keyring_encrypted_file.so
PLUGIN_LIBRARY_VERSION: 1.9
PLUGIN_AUTHOR: Oracle Corporation
PLUGIN_DESCRIPTION: store/fetch authentication data to/from an encrypted file
PLUGIN_LICENSE: PROPRIETARY
LOAD_OPTION: ON
1 row in set (0,00 sec)
And also that we can select the data from the encrypted tablespace:
select * from nexus.replicant;
+----+------------+-----------+-----------+
| id | First name | Last name | Replicant |
+----+------------+-----------+-----------+
| 1 | Roy | Hauer | Yes |
| 2 | Rutger | Batty | Yes |
| 3 | Voight | Kampff | Yes |
| 4 | Pris | Hannah | Yes |
| 5 | Daryl | Stratton | Yes |
| 6 | Rachael | Young | Yes |
| 7 | Sean | Tyrell | Yes |
| 8 | Rick | Ford | No |
| 9 | Harrison | Deckard | Yes |
+----+------------+-----------+-----------+
9 rows in set (0,00 sec)
Seems quite straightforward.
Well, how about, in a test environment, changing the keyring_encrypted_file_password value to something different, restarting the instance, and running the same select on the same table?
Hey presto:
select * from nexus.replicant;
ERROR 3185 (HY000): Can't find master key from keyring, please check in the server log if a keyring plugin is loaded and initialized successfully.
Error (Code 3185): Can't find master key from keyring, please check in the server log if a keyring plugin is loaded and initialized successfully.
Error (Code 1877): Operation cannot be performed. The table 'nexus.replicant' is missing, corrupt or contains bad data.
Hope this helps someone out there. Enjoy encrypting!
Now we can run encrypted backups safely and not worry about moving those files around different systems.
As a spin-off from the previous post, https://mysqlmed.wordpress.com/2018/08/23/get-the-auditors-in/, I thought that it would be good to see how well the Audit plugin upgrades to MySQL 8. The big change in auditing is that the tables change from MyISAM to InnoDB, so keep your eyes open.
I’m using the previously used instance in version 5.7.18.
Preparation
Before we do anything, let’s make sure auditing will be in place when we restart the instance with 8.0.12:
Uncomment the plugin-load & audit-log params we had originally commented out. After all, this is something we should have done in the last post (apologies!):
vi my_audit.cnf:
..
[mysqld]
plugin-load =audit_log.so
audit-log =FORCE_PLUS_PERMANENT
..
Restart the 5.7 instance so we upgrade from a rebooted / ‘as real as can be expected’ scenario:
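(A sketch of that restart-and-upgrade dance in this tar-based setup; the basedirs are my assumption, the socket and defaults file are the ones used throughout this post.)
# bounce it under 5.7 first so the audit plugin comes in from the config file
bin/mysqladmin -uroot -S /opt/mysql/audit/mysql_audit.sock shutdown    # from the 5.7 basedir
bin/mysqld --defaults-file=my_audit.cnf --daemonize
# then shut down again and start the same datadir from the 8.0.12 basedir
bin/mysqladmin -uroot -S /opt/mysql/audit/mysql_audit.sock shutdown
bin/mysqld --defaults-file=my_audit.cnf --daemonize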
bin/mysql_upgrade -S /opt/mysql/audit/mysql_audit.sock -uroot
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Upgrading system table data.
Checking system database.
mysql.audit_log_filter OK
mysql.audit_log_user OK
mysql.columns_priv OK
mysql.component OK
mysql.db OK
mysql.default_roles OK
mysql.engine_cost OK
mysql.func OK
mysql.general_log OK
mysql.global_grants OK
mysql.gtid_executed OK
mysql.help_category OK
mysql.help_keyword OK
mysql.help_relation OK
mysql.help_topic OK
mysql.innodb_index_stats OK
mysql.innodb_table_stats OK
mysql.ndb_binlog_index OK
mysql.password_history OK
mysql.plugin OK
mysql.procs_priv OK
mysql.proxies_priv OK
mysql.role_edges OK
mysql.server_cost OK
mysql.servers OK
mysql.slave_master_info OK
mysql.slave_relay_log_info OK
mysql.slave_worker_info OK
mysql.slow_log OK
mysql.tables_priv OK
mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.user OK
Found outdated sys schema version 1.5.1.
Upgrading the sys schema.
Checking databases.
nexus.replicant OK
sys.sys_config OK
Upgrade process completed successfully.
Checking if update is needed.
Ok, all ok.
Double checking:
ls -lrt /opt/mysql/audit/data/*upg*
-rw-rw-r-- 1 khollman khollman 6 ago 22 19:58 /opt/mysql/audit/data/mysql_upgrade_info
cat /opt/mysql/audit/data/mysql_upgrade_info
8.0.12
Let’s check the audit plugin status:
bin/mysql -uroot -S /opt/mysql/audit/mysql_audit.sock
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM INFORMATION_SCHEMA.PLUGINS
WHERE PLUGIN_NAME LIKE 'audit%';
+-------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+-------------+---------------+
| audit_log | ACTIVE |
+-------------+---------------+
SELECT @@audit_log_filter_id;
SELECT * from mysql.audit_log_user;
+-----------------+-----------+----------------+
| USER | HOST | FILTERNAME |
+-----------------+-----------+----------------+
| % | | log_connection |
| audit_test_user | localhost | log_connIUD |
| root | localhost | log_nothing |
+-----------------+-----------+----------------+
SELECT * from mysql.audit_log_filter;
Here we see the table is now InnoDB:
show create table mysql.audit_log_user;
So, now that we see everything is as it was, we suddenly remember something about upgrading in the install section of the manual:
As of MySQL 8.0.12, for new MySQL installations, the USER and HOST columns in the audit_log_user table used by MySQL Enterprise Audit have definitions that better correspond to the definitions of the User and Host columns in the mysql.user system table. For upgrades to an installation for which MySQL Enterprise Audit is already installed, it is recommended that you alter the table definitions as follows:
So we run the following commands:
ALTER TABLE mysql.audit_log_user
DROP FOREIGN KEY audit_log_user_ibfk_1;
ALTER TABLE mysql.audit_log_filter
CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_as_ci;
ALTER TABLE mysql.audit_log_user
CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_as_ci;
ALTER TABLE mysql.audit_log_user
MODIFY COLUMN USER VARCHAR(32);
ALTER TABLE mysql.audit_log_user
ADD FOREIGN KEY (FILTERNAME) REFERENCES mysql.audit_log_filter(NAME);
Let's tail the audit.log, which hasn't had anything written to it since we started the upgrade, reconnect as the audit_test_user and run some DML:
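Assuming the audit log is in its default location in the datadir, a tail in one terminal and the test user's session in another:
tail -100f /opt/mysql/audit/data/audit.log
bin/mysql -uaudit_test_user -p -S /opt/mysql/audit/mysql_audit.sock nexus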
DELETE FROM replicant where `First name`='Rick';
UPDATE replicant set `Replicant` = 'No' where `First name` = 'Harrison';
INSERT INTO replicant (`First name`,`Last name`,`Replicant`) VALUES ('Rick','Ford','No');
UPDATE replicant set `Replicant` = 'Yes' where `First name` = 'Harrison';
exit, and watch it all being logged, except the SELECT’s.
A “status;” shows us:
--------------
bin/mysql Ver 8.0.12-commercial for linux-glibc2.12 on x86_64 (MySQL Enterprise Server - Commercial)
Connection id: 18
Current database:
Current user: audit_test_user@localhost
SSL: Not in use
Current pager: stdout
Using outfile: ''
Using delimiter: ;
Server version: 8.0.12-commercial MySQL Enterprise Server - Commercial
...