InnoDB Cluster: setting up Production… for disaster! (1/2)

Want to set up InnoDB Cluster and be prepared for a Disaster Recovery scenario? Get ready:

Here’s a way to set up InnoDB Cluster using 3 environments, on Oracle Linux 7.2, with MySQL Commercial Server 5.7.19, MySQL Shell 8.0.3 DMR and MySQL Router. As this is the first blog post in a complete disaster recovery scenario for InnoDB Cluster, we’ll also be installing MySQL Enterprise Backup.

If you’re new to InnoDB Cluster then I’d highly recommend looking at the following to understand how it works and what Group Replication, Shell & Router are:

So, to start installing, we’ll need to include a few things first:

  • Python is a must, if it’s not already installed, so:
sudo yum install python
  • And NUMA awareness is needed for MySQL Server 5.7:
sudo yum install numactl

As I’m installing the Enterprise software (I’ll be needing Enterprise Backup), I’ve previously downloaded the RPMs for Oracle Linux 7.2 from http://edelivery.oracle.com and left them in a directory on each of the 3 servers I’m installing on.

Therefore I can run:

sudo yum install -y mysql-commercial-common-5.7.19-1.1.el7.x86_64.rpm \
mysql-commercial-client-5.7.19-1.1.el7.x86_64.rpm \
mysql-commercial-libs-5.7.19-1.1.el7.x86_64.rpm \
mysql-commercial-server-5.7.19-1.1.el7.x86_64.rpm \
mysql-router-commercial-2.1.4-1.1.el7.x86_64.rpm

And, as I also downloaded the following from http://www.mysql.com/downloads:

sudo yum install -y mysql-shell-8.0.3-0.1.dmr.el7.x86_64.rpm \
meb-4.1.0-el7.x86_64.rpm

Remember, this is done on all 3 servers (in my case ic1, ic2 & ic3 / 10.0.0.11, 10.0.0.12 & 10.0.0.13).
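
If you want a quick sanity check that all the packages landed on each node, querying the rpm database is enough (just a hedged example; adjust the pattern to the package names you downloaded):

rpm -qa | grep -E 'mysql-commercial|mysql-shell|mysql-router|meb'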

Start the mysql servers. As this is their first start, the data directory is initialized at the same time (i.e. a randomly generated root password is created, which we’ll need shortly).

sudo systemctl start mysqld.service

Enable the service to automatically start on server reboot:

sudo systemctl enable mysqld.service

And, in my scenario, as you will see, it’s really not a Production environment, so I’m going to take it easy with my password & security convention. I would NOT recommend this for your production environment.

Insert the following at the end of the config file, to avoid having to create complex passwords:

sudo vi + /etc/my.cnf
 ..
 ..
 validate-password=OFF

Then restart mysql so the parameter takes effect:

sudo systemctl restart mysqld.service

Change the root password (if you really want to):

sudo grep 'A temporary password is generated for root@localhost' /var/log/mysqld.log |tail -1
mysql -uroot -p

SET sql_log_bin = OFF; 
ALTER USER 'root'@'localhost' IDENTIFIED BY 'oracle';
SET sql_log_bin = ON; 
exit

I don’t want these changes replicated across the nodes, because I’m actually doing this on all servers, hence the “SET sql_log_bin = OFF” so the changes aren’t written to the binary logs.

Now, change to root and run MySQL Shell to start configuring the InnoDB Cluster environment:

sudo -i
mysqlsh
MySQL JS > dba.configureLocalInstance();
MySQL JS > \q

This is where we get asked a couple of questions:

Detecting the configuration file...
Found configuration file at standard location: /etc/my.cnf

Do you want to modify this file? [Y|n]:
MySQL user 'root' cannot be verified to have access to other hosts in the network.

1) Create root@% with necessary grants
2) Create account with different name
3) Continue without creating account
4) Cancel

Please select an option [1]: 2
Please provide an account name (e.g: icroot@%) to have it created with the necessary
privileges or leave empty and press Enter to cancel.

Account Name: ic
Password for new account: ******
Confirm password: ******
Validating instance...

As you can see, I chose option 2, to create the user ic/oracle for my cluster. You can choose the name you wish here.

It’s quite normal for the output to tell you to restart the mysql server for the config changes to take effect. We’ll do that after the next step.

Now, MEB needs a couple of CSV-type tables (mysql.backup_history & mysql.backup_progress) to run the full and incremental backups. We want MEB to run automatically without having to script the incremental backup (using end_lsn), we’ll also want these tables so we can monitor backups from Enterprise Monitor, and it reduces the amount of warnings produced on each backup run. So we’ll have to comment out the “disabled_storage_engines” entry in the config file:

sudo vi + /etc/my.cnf
 ..
 ..
 #disabled_storage_engines = MyISAM,BLACKHOLE,FEDERATED,CSV,ARCHIVE
 ..

And restart (remember, we’re still doing this on all 3 nodes, so adjust the hostname in the check below for each one):

systemctl restart mysqld.service
mysqlsh
 dba.checkInstanceConfiguration('ic:oracle@ic2:3306');

We should get an “OK” here. Older versions of Shell might tell you otherwise.
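
If you want to double-check that the CSV engine really is available again (MEB will use it to create mysql.backup_history & mysql.backup_progress on its first run), a quick query against information_schema does it; a hedged example:

mysql -uroot -poracle -e "SELECT ENGINE, SUPPORT FROM information_schema.ENGINES WHERE ENGINE='CSV';"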

Once the mysql server has restarted, run the following. Specific privileges are required for our “ic” user:

mysql -uroot -poracle << EOF 
SET sql_log_bin = OFF; 
 GRANT SELECT ON *.* TO ic@'%' WITH GRANT OPTION;
 FLUSH PRIVILEGES;
SET sql_log_bin = ON; 
EOF
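
And, as a quick sanity check (it’s just standard SQL), you can confirm the “ic” account now has the grants created by configureLocalInstance plus the SELECT we just added:

mysql -uroot -poracle -e "SHOW GRANTS FOR 'ic'@'%';"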

So, everything we’ve done up until now has been done on all 3 OL servers.

Now, on just one server (it doesn’t matter which), let’s create our InnoDB Cluster:

mysqlsh --uri ic:oracle@ic2:3306

var cluster = dba.createCluster('myCluster');

cluster.status();

Now that we’ve got 1 instance in our cluster, let’s add the other 2 that we configured at the same time as this one.

cluster.addInstance('ic:oracle@ic1:3306');
cluster.addInstance('ic:oracle@ic3:3306');

Remember, by default we’re in single-primary mode, i.e. there will only be 1 RW server and the other 2, the ones we just added, will be in SUPER READ ONLY mode, i.e. no writes allowed. If you want to go to multi-primary mode, that’s a different createCluster option (see the sketch below).
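
Just for reference, and not to be run on top of the cluster we’ve just created: a sketch of that option, assuming the option names used by this Shell 8.0.3 DMR release (multiMaster, which also requires force; later Shell versions renamed it multiPrimary):

var cluster = dba.createCluster('myCluster', {multiMaster: true, force: true});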

Now to check that all 3 are in the same cluster:

cluster.status();

Ok, a slight word of warning (especially if you haven’t paid attention to the documentation referenced at the beginning): if you come out of the mysqlsh session and want to go back in and get the cluster status, you’ll need to run this first:

var cluster = dba.getCluster("myCluster")

Otherwise Shell will just complain that it doesn’t know what “cluster” is.

And so, after that, we should see something like the following:

{
   "clusterName": "myCluster", 
   "defaultReplicaSet": {
       "name": "default", 
       "primary": "ic2:3306", 
       "ssl": "REQUIRED", 
       "status": "OK", 
       "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
       "topology": {
           "ic1:3306": {
               "address": "ic1:3306", 
               "mode": "R/O", 
               "readReplicas": {}, 
               "role": "HA", 
               "status": "ONLINE"
           }, 
           "ic2:3306": {
               "address": "ic2:3306", 
               "mode": "R/W", 
               "readReplicas": {}, 
               "role": "HA", 
               "status": "ONLINE"
           }, 
           "ic3:3306": {
               "address": "ic3:3306", 
               "mode": "R/O", 
               "readReplicas": {}, 
               "role": "HA", 
               "status": "ONLINE"
           }
        }
    }
}

Looking good! Before we jump for joy, we have to make the changes persistent in the my.cnf. This has to be done locally on each machine:

On ic2:

mysqlsh
dba.configureLocalInstance('ic:oracle@ic2:3306');

On ic1:
Say Y when it asks you to override SUPER READ ONLY mode, otherwise no changes will be made.

mysqlsh
dba.configureLocalInstance('ic:oracle@ic1:3306');

On ic3:
Again, say Y when it asks you to override SUPER READ ONLY mode, otherwise no changes will be made.

mysqlsh
dba.configureLocalInstance('ic:oracle@ic3:3306');
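
If you’re curious about what dba.configureLocalInstance() actually persisted, a quick look at the config on each node (a hedged example) shows the group_replication settings it wrote:

sudo grep -i group_replication /etc/my.cnf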

And now, just to make sure, back on the primary server, or indeed from any of the 3 nodes:

mysqlsh --uri ic:oracle@ic2:3306
var cluster = dba.getCluster('myCluster');
cluster.status();

With our InnoDB Cluster up and running, all we need to do now is to let the application connect seamlessly using MySQL Router.

I’m installing on the same servers as the instances, but normal practice would be to install each mysqlrouter process on the app servers.

And to make things clearer, I’ll only install 2 mysqlrouters, on ic1 & ic3:

On ic1:

sudo mkdir /opt/mysql/router
sudo chown -R mysql:mysql /opt/mysql/router
mysqlrouter --bootstrap ic:oracle@ic2:3306 --directory /opt/mysql/router --user=mysql 
/opt/mysql/router/start.sh

On ic3:

sudo mkdir /opt/mysql/router
sudo chown -R mysql:mysql /opt/mysql/router
mysqlrouter --bootstrap ic:oracle@ic2:3306 --directory /opt/mysql/router --user=mysql --force
/opt/mysql/router/start.sh

As we’re on the same servers as the MySQL instances, we could even forgo TCP/IP access and use sockets:

mysqlrouter --bootstrap ic:oracle@ic2:3306 --directory /opt/mysql/router --conf-use-sockets
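
The socket paths end up in the generated config, so (as a hedged example, substituting whichever path the grep returns, mysql.sock in the router directory being the usual RW one) you can look them up and point the client straight at one:

grep socket /opt/mysql/router/mysqlrouter.conf
mysql -uic -poracle -S /opt/mysql/router/mysql.sock -e "select @@hostname"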

In fact, maybe installing a 3rd mysqlrouter process away from the app servers isn’t a bad idea: the connect string from the app could list the hostnames & sockets of the first 2 mysqlrouters, and if they fail, there’s a fallback mysqlrouter connection.

So, now they’re configured, let’s test them. Try this from ic3 (or ic1):

mysql -uic -poracle -P6446 -hic1 -e "select @@hostname"
mysql -uic -poracle -P6446 -hic1 -e "select @@hostname"
mysql -uic -poracle -P6446 -hic1 -e "select @@hostname"

Remember, by default, port 6446 is for RW connections and port 6447 is for RO, i.e. read-only, no writes allowed:

mysql -uic -poracle -P6447 -hic1 -e "select @@hostname"
mysql -uic -poracle -P6447 -hic1 -e "select @@hostname"
mysql -uic -poracle -P6447 -hic1 -e "select @@hostname"

# And on ic1 or ic3:

mysql -uic -poracle -P6446 -hic3 -e "select @@hostname"
mysql -uic -poracle -P6446 -hic3 -e "select @@hostname"
mysql -uic -poracle -P6446 -hic3 -e "select @@hostname"

mysql -uic -poracle -P6447 -hic3 -e "select @@hostname"
mysql -uic -poracle -P6447 -hic3 -e "select @@hostname"
mysql -uic -poracle -P6447 -hic3 -e "select @@hostname"

Obviously, you can personalize mysqlrouter. Let’s read from all 3 instances regardless of which one is primary or secondary in the InnoDB Cluster metadata cache:

vi /opt/mysql/router/mysqlrouter.conf

..
[metadata_cache:myCluster]
router_id=2
bootstrap_server_addresses=mysql://ic2:3306,mysql://ic1:3306,mysql://ic3:3306
user=mysql_router2_r7wlgz3p4fct
metadata_cluster=myCluster
ttl=300
#ttl=30
..
[routing:myCluster_default_ro]
bind_address=0.0.0.0
bind_port=6447
#destinations=metadata-cache://myCluster/default?role=SECONDARY
destinations=ic1,ic2,ic3
mode=read-only
protocol=classic
..
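
After editing, the router needs to re-read its config. The bootstrap directory contains start.sh (used above) and a matching stop.sh, so (assuming those default bootstrap scripts are in place) a simple stop/start does it:

/opt/mysql/router/stop.sh
/opt/mysql/router/start.sh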

Thanks for reading. Hopefully this has helped someone…

…. Next up, a complete disaster recovery situation …


