Ok, so if you’re reading this, then I can guess you’ve got a MySQL InnoDB Cluster in an awkward shape, i.e. you need to restore a backup and add the instance back into the cluster, so we have all our instances again.
It might be logical to think “ah, but I’ve only lost 1 instance, a read-only instance, so all I have to do is backup & restore the other read-only instance and I’m home free.” Well, I want to make it a little harder. So in this scenario, assume that we’ve lost both of the READ-ONLY instances, and I need to back up my primary READ-WRITE instance.
I’ve got an 8.0.16 instance on Oracle Linux 7.4. We’ll be looking at 2 hosts, ic1 & ic3.
We’ll be using MySQL Enterprise Edition Server, which bundles MySQL Enterprise Backup with the RPMs, so we don’t need to install anything else.
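If you want to double-check that MEB is actually available on the host before going any further, something like this will do (a quick sanity check of my own, not from the Oracle note; the binary location may vary with your install):

# Quick sanity check: is the MEB client installed and on the PATH?
which mysqlbackup
mysqlbackup --version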
I’ll assume you’ve got access to Oracle Support and hence, you might like to have a quick look at the following note:
Backup and Restore Of Group Replication Node ( MySQL 8.0 ) (Doc ID 2449881.1)
So, assuming that you’re scratching your head at the moment and asking me to “get to it”, here goes:
On the READ-WRITE server, ic1, run an online hot backup with MySQL Enterprise Backup, MEB:
mysqlbackup -uic -p --socket=/var/lib/mysql/mysql.sock \
  --backup-dir=/home/vagrant/backup/mysql --with-timestamp \
  --compress backup-and-apply-log
Now copy or transfer the resulting backup-dir to host “ic3”.
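How you get the backup across is up to you; a simple scp of the timestamped backup directory is enough. For example (hypothetical user and destination path, matching the backup-dir used above):

# Copy the timestamped backup directory from ic1 over to ic3 (adjust user/paths to taste)
scp -r /home/vagrant/backup/mysql/2019-05-30_10-39-45 vagrant@ic3:/home/vagrant/backup/mysql/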
On the server to be restored, ic3:
$ systemctl stop mysqld.service
$ rm -rf /var/lib/mysql/*
And run the MEB restore:
mysqlbackup --defaults-file=/etc/my.cnf -uroot \
  --backup-dir=/home/vagrant/backup/mysql/2019-05-30_10-39-45/ \
  --datadir=/var/lib/mysql --uncompress copy-back
Once that’s all restored, make sure the ‘mysql’ o.s. user owns the data directory, and also do some housekeeping (we don’t need any relay logs now, do we?):
chown -R mysql:mysql /var/lib/mysql
rm -rf /var/lib/mysql/*-bin*
And, to avoid anyone accessing the environment ‘by accident’ and to make sure nothing else starts trying to write:
vi /etc/my.cnf
..
[mysqld]
..
super_read_only=1
event_scheduler=off
..
Because we’ve restored a backup from the master, we’ll need to know which GTIDs had been executed when we ran the backup, as this will be the starting point for the node we’re restoring. So:
$ cat ~vagrant/backup/mysql/2019-05-30_10-39-45/meta/backup_gtid_executed.sql
# On a new slave, issue the following command if GTIDs are enabled:
SET @@GLOBAL.GTID_PURGED='155d01e3-5b85-11e9-b5b5-080027b46af9:1-21395756,bc780881-5b85-11e9-ba66-080027b46af9:1-356';
# Use the following command if you want to use the GTID handshake protocol:
# CHANGE MASTER TO MASTER_AUTO_POSITION=1;
Let’s start up the restored instance:
systemctl start mysqld.service
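If you want a quick sanity check at this point (my own addition, assuming the root credentials from the restored backup still work locally), confirm that the safety settings from /etc/my.cnf took effect:

# Should report super_read_only = 1 and event_scheduler = OFF, as set in /etc/my.cnf above
mysql -uroot -p -e "SELECT @@GLOBAL.super_read_only, @@GLOBAL.event_scheduler;"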
And let’s reset the “ic1” instance we restored onto the “ic3” server, MAKING SURE WE USE “SET sql_log_bin = OFF”:
mysqlsh --uri ic@ic3:3306 --sql

SET sql_log_bin = OFF;
RESET SLAVE;
RESET MASTER;
SET @@GLOBAL.GTID_PURGED='155d01e3-5b85-11e9-b5b5-080027b46af9:1-21395756,bc780881-5b85-11e9-ba66-080027b46af9:1-356';
SET sql_log_bin = ON;
Without exiting from our MySQL session:
\js
dba.checkInstanceConfiguration('ic@ic3:3306')
dba.configureInstance('ic@ic3:3306')
We answer “y” when it asks us to reconfigure and restart the instance.
Now, back on the primary instance server, ic1:
mysqlsh --uri ic@ic1:3306

cluster = dba.getCluster();
cluster.status();
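The status output should contain something like the following entry for ic3 (heavily abbreviated and purely illustrative; your hostnames and the rest of the JSON will differ):

    "ic3:3306": {
        "mode": "n/a",
        "role": "HA",
        "status": "(MISSING)"
    }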
We should see that the ic3 instance is MISSING, so let’s remove it:
cluster.rescan()
“y” to remove ic3.
And now to add the restored instance back into the cluster. From ic1:
cluster.addInstance('ic@ic3:3306')
And just to make sure:
cluster = dba.getCluster();
cluster.status();
Back on ic3:
vi /etc/my.cnf

[mysqld]
#super_read_only=1
#event_scheduler=off
or, to apply the change immediately without a restart:
mysqlsh --uri ic@ic3:3306 --sql

SET sql_log_bin = OFF;
SET GLOBAL super_read_only = OFF;
SET GLOBAL event_scheduler = ON;
SET sql_log_bin = ON;
Well, it worked for me! Hope it helps someone else.
Happy Clustering!