Ok, so now we’ve got our InnoDB Cluster a-clustering, MySQL Router a-routing, and now we need some disaster to be a-disaster-recovering…
A foreword first.
If you’re looking to use Enterprise Backup to recover a single node and restore that node back into an existing InnoDB Cluster, LeFred takes you through that one nicely here.
Preparing for backup
On our single primary server, the one that allows writes, which was ic2/10.0.0.12 in my case:
mysql -uroot -poracle << EOF
SET sql_log_bin = OFF;
create user 'backup'@'%' identified by 'oracle';
grant all on *.* to 'backup'@'%';
SET sql_log_bin = ON;
EOF
Let’s create something to backup (if you haven’t already done so of course):
mysqlsh --uri root:oracle@localhost:3306 --sql

create database nexus;
Let’s run a full backup to start with:
mysqlbackup -ubackup -poracle --socket=/var/lib/mysql/mysql.sock \
 --backup-dir=/home/vagrant/backup/full \
 --backup-image=full_backup.img backup-to-image

ls /home/vagrant/backup/full/full_backup.img

mysqlbackup --backup-image=/home/vagrant/backup/full/full_backup.img validate
Ok, everything checks out.
Enterprise Backup needs to create 2 tables in CSV format, which is why we commented out the disabled_storage_engines parameter in my.cnf earlier. Had we not commented it out, mysqlbackup would warn that these tables can’t be created, and we’d have to use the end_lsn value in the incremental backup instead of “history:last_backup”. A more manual process.
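For reference, that more manual route could look something like this. This is a sketch only: the LSN value below is a placeholder, and in practice you would read the real end_lsn from the previous backup’s meta/backup_variables.txt:

```shell
# Sketch: pass the incremental base as an explicit LSN instead of relying
# on the backup_history table. 2654255089 is a placeholder value; use the
# end_lsn recorded by your previous backup.
mysqlbackup -ubackup -poracle --socket=/var/lib/mysql/mysql.sock \
 --incremental=optimistic \
 --incremental-base=lsn:2654255089 \
 --backup-dir=/home/vagrant/backup/incre \
 --backup-image=incr_backup1.mbi backup-to-image
```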
Let’s create something to use our incremental backup:
create table nexus.replicant (
  id INT(11) NOT NULL AUTO_INCREMENT,
  `First name` varchar(40) not null default '',
  `Last name` varchar(40) not null default '',
  `Replicant` enum('Yes','No') not null default 'Yes',
  PRIMARY KEY (id)
) engine=InnoDB row_format=COMPACT;
Let’s back it up:
mysqlbackup --user=backup -poracle --socket=/var/lib/mysql/mysql.sock \
 --incremental=optimistic \
 --incremental-base=history:last_backup \
 --backup-dir=/home/vagrant/backup/incre \
 --backup-image=incr_backup1.mbi backup-to-image
And validate it too:
mysqlbackup --backup-image=/home/vagrant/backup/incre/incr_backup1.mbi validate
Disaster Time
On all 3 nodes, it’s time to create a disaster:
sudo -i
systemctl stop mysqld.service
rm -rf /var/lib/mysql/*
rm -rf /var/log/mysqld.log
Now, on ic2, where we took the backup (from the primary), restore the full backup:
mysqlbackup --defaults-file=/etc/my.cnf \
 --backup-image=/home/vagrant/backup/full/full_backup.img \
 --backup-dir=/tmp/mysql_backup_pitr --datadir=/var/lib/mysql \
 --with-timestamp copy-back-and-apply-log
And then, restore the incremental:
mysqlbackup --defaults-file=/etc/my.cnf \
 --backup-image=/home/vagrant/backup/incre/incr_backup1.mbi \
 --incremental-backup-dir=/tmp/mysql_backup_pitr_incr \
 --datadir=/var/lib/mysql --incremental --with-timestamp \
 copy-back-and-apply-log
Still on ic2 (from where the backup was run), and as the OS root user:
chown -R mysql:mysql /var/lib/mysql/*
systemctl start mysqld.service

mysql -uroot -poracle << EOF
SET sql_log_bin = OFF;
reset master;
# How To Manually Remove InnoDB Cluster Metadata Directory (Doc ID 2274320.1)
drop database mysql_innodb_cluster_metadata;
SET sql_log_bin = ON;
EOF
We do the previous step because, when we start the instance, it still holds the old cluster metadata even though everything else has been lost. To return to a safe InnoDB Cluster state, we drop the metadata schema and rebuild. The slight difference here is that we still have the /etc/my.cnf file.
Now to configure our InnoDB Cluster again:
mysqlsh --uri ic:oracle@ic2:3306

dba.configureLocalInstance();
dba.checkInstanceConfiguration('ic:oracle@ic2:3306')
var cluster = dba.createCluster('myCluster');
cluster.status();
Now that we have 1 instance back, we can reuse the same backups (full & incremental) to restore the other 2 nodes. We run the same commands as before, on both ic1 & ic3:
mysqlbackup --defaults-file=/etc/my.cnf \
 --backup-image=/home/vagrant/backup/full/full_backup.img \
 --backup-dir=/tmp/mysql_backup_pitr --datadir=/var/lib/mysql \
 --with-timestamp copy-back-and-apply-log

mysqlbackup --defaults-file=/etc/my.cnf \
 --backup-image=/home/vagrant/backup/incre/incr_backup1.mbi \
 --incremental-backup-dir=/tmp/mysql_backup_pitr_incr \
 --datadir=/var/lib/mysql --incremental --with-timestamp \
 copy-back-and-apply-log
Now for some node-specific steps.
On ic1:
cd /var/lib/mysql
rm ic2-bin*
chown -R mysql:mysql /var/lib/mysql/*
systemctl start mysqld.service

mysql -uroot -poracle << EOF
SET sql_log_bin = OFF;
reset master;
drop database mysql_innodb_cluster_metadata;
SET sql_log_bin = ON;
EOF

mysqlsh --uri ic:oracle@ic1:3306

dba.configureLocalInstance();
dba.checkInstanceConfiguration('ic:oracle@ic1:3306')
And on ic3:
cd /var/lib/mysql
rm ic2-bin*
chown -R mysql:mysql /var/lib/mysql/*
systemctl start mysqld.service

mysql -uroot -poracle << EOF
SET sql_log_bin = OFF;
reset master;
drop database mysql_innodb_cluster_metadata;
SET sql_log_bin = ON;
EOF

mysqlsh --uri ic:oracle@ic3:3306

dba.configureLocalInstance();
dba.checkInstanceConfiguration('ic:oracle@ic3:3306')
Now, of course, this seems like a lot of manual work. We could also, just as simply, have taken a full backup from the restored & reestablished ic2, and then used “full/meta/backup_gtid_executed.sql” to prime the other instances before joining them to the cluster. The way shown here lets us reuse the original full and incremental backups.
Back on ic2:
mysqlsh --uri ic:oracle@ic2:3306

var cluster = dba.getCluster('myCluster');
cluster.addInstance('ic@ic1:3306')
cluster.addInstance('ic@ic3:3306')
cluster.status()
We’re back in business.
Hi,
First of all, thanks very much for sharing this. What I am unclear about, and what you can hopefully help me with, is why you are creating the backup user only on ic2? (SET sql_log_bin = OFF; ..create user.. SET sql_log_bin = ON;).
Would you not want to have that backup user across all the cluster members in the event that ic2 is not the master (auto switch due to ic2 reboot etc)? Also, what is your take on running the mysqlbackup against the router address on the R/W port rather than on a fixed R/W node that could very well not be R/W at any point in the future?
Thanks again and cheers!
Alex
The backup user was created only on ic2 purely for demo purposes. You’re right, it would be simpler to just allow that user to exist cluster-wide.
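For example, creating the user with binary logging left on would let Group Replication propagate it to every member. A sketch of that variant:

```shell
# Sketch: no "SET sql_log_bin = OFF" here, so the CREATE USER and GRANT
# are written to the binary log and replicated to all cluster members.
mysql -uroot -poracle << EOF
CREATE USER 'backup'@'%' IDENTIFIED BY 'oracle';
GRANT ALL ON *.* TO 'backup'@'%';
EOF
```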
Regarding running the backup against Router, it would be ideal, but MEB makes a local connection to the instance, so I’m afraid this is a limitation of MEB.
Thx
K.