Taming a ‘wild’ NDB 7.3 with Cluster Manager 1.4.3 & direct upgrade to 7.5.

Well, after working with outdated clusters and upgrade paths that quickly become obsolete, as in my last post, Migrating/importing NDB to Cluster Manager w/ version upgrade, I wanted to share that we can also use Cluster Manager, mcm, to upgrade NDB Cluster directly from 7.3 to 7.5. That way we can start using new mcm features like autotune, which helps guide us towards some cluster tuning, or new 7.5 features like READ_BACKUP or FULLY_REPLICATED tables. Sometimes table comments can be so important…

So, as with the last scenario, a 7.3.8 ‘wild’ cluster. A reminder of what we had:

sn1:
 ndb_mgmd --configdir=/usr/local/mysql-cluster-gpl-7.3.8-linux-glibc2.5-x86_64/conf -f /usr/local/mysql-cluster-gpl-7.3.8-linux-glibc2.5-x86_64/conf/config.ini --config-cache=false --ndb-nodeid=1
dn1 & dn2:
 ndbmtd --ndb-nodeid=3 -c 10.0.0.10
 ndbmtd --ndb-nodeid=4 -c 10.0.0.10
sn1 & sn2:
 mysqld --defaults-file=/usr/local/mysql/conf/my.cnf --user=mysql --ndb-nodeid=10 &
 mysqld --defaults-file=/usr/local/mysql/conf/my.cnf --user=mysql --ndb-nodeid=11 &

Double-checking everything is up and running (as the ‘mysql’ os user… not root):

ndb_mgm -e show
 Connected to Management Server at: localhost:1186
 Cluster Configuration
 ---------------------
 [ndbd(NDB)] 2 node(s)
 id=3 @10.0.0.12 (mysql-5.6.22 ndb-7.3.8, Nodegroup: 0, *)
 id=4 @10.0.0.13 (mysql-5.6.22 ndb-7.3.8, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
 id=1 @10.0.0.10 (mysql-5.6.22 ndb-7.3.8)

[mysqld(API)] 5 node(s)
 id=10 @10.0.0.10 (mysql-5.6.22 ndb-7.3.8)
 id=11 @10.0.0.11 (mysql-5.6.22 ndb-7.3.8)
 id=12 (not connected, accepting connect from any host)
 id=13 (not connected, accepting connect from any host)
 id=14 (not connected, accepting connect from any host)

As before, the ‘mcmd’@’localhost’ user exists on both sqlnodes.
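
If you’re starting from scratch and still need that account, it can be created on each sqlnode along these lines (a sketch based on the MCM documentation; the ‘super’ password is just the usual documented example, use your own):

```sql
mysql> CREATE USER 'mcmd'@'localhost' IDENTIFIED BY 'super';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'localhost' WITH GRANT OPTION;
```

This is the account mcmd itself uses to manage the mysqld processes during and after the import.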

Time to sort out the PID files:

sqlnodes:

cd /opt/mysql/738/data
 more sn1.pid
 more sn2.pid

ps -ef | grep mysqld | grep -v grep | awk '{ print $2 }'

cp sn1.pid ndb_10.pid
cp sn2.pid ndb_11.pid

dnodes:

cd /opt/mysql/738/ndbd_data/
more *.pid

kill -9 6523
kill -9 2608

sed -i 's/6523/6524/' ndb_3.pid
sed -i 's/2608/2609/' ndb_4.pid

more *.pid

With our wild cluster semi-tamed, it’s time for the upgrade.
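
To spell out what those sed commands are doing: we killed the ndbmtd angel processes, so each .pid file must now point at the surviving worker process (which, in this case, happened to have PID = angel + 1 — always verify the real PIDs with ps rather than assuming). A generic, hypothetical sketch of the idea:

```shell
# Hypothetical sketch only: repoint a pid file at the surviving worker.
# Path and PIDs here are made up; check the real ones with ps first.
pidfile=/tmp/ndb_3.pid
echo 6523 > "$pidfile"              # the angel PID originally recorded

angel=$(cat "$pidfile")
worker=$((angel + 1))               # worker happened to be angel + 1 here
sed -i "s/^${angel}$/${worker}/" "$pidfile"

cat "$pidfile"                      # prints 6524
```

The point of the exercise: mcm’s import checks compare the PIDs in these files against the processes actually running, so they have to match before we go any further.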

 

Now time to create the 7.5.7 MCM cluster to import into.

On all hosts:

mkdir -p /opt/mysql/757/mcm_data
cd /opt/mysql/757/mcm_data
chown -R mysql:mysql .

Add paths to MCM binaries to the os user ‘mysql’ .bash_profile:

vi ~/.bash_profile
  export PATH=$PATH:/usr/local/mcm1.4.3/bin

Now to uncompress the latest version of cluster:

cd /usr/local
 tar zxvf mysql-cluster-advanced-7.5.7-linux-glibc2.12-x86_64.tar.gz
 cd mysql-cluster-advanced-7.5.7-linux-glibc2.12-x86_64
 chown -R mysql:mysql .

Now point mcmd at a directory where we have read-write permissions, and that is also meaningful to us:

cd ../mcm1.4.3/
 vi etc/mcmd.ini
   ...
   log-file = /opt/mysql/757/mcm_data/mcmd.log
   ...
   manager-directory = /opt/mysql/757/mcm_data

Now to get mcm running. Remember to do this on all nodes:

su - mysql
 mcmd --defaults-file=/usr/local/mcm1.4.3/etc/mcmd.ini --daemon

Creating the cluster in mcm to import our wild cluster into, as in the previous scenario:

mcm> create site --hosts=10.0.0.10,10.0.0.11,10.0.0.12,10.0.0.13 mysite;

Add cluster binaries & create cluster ready for IMPORT:

mcm> add package --basedir=/usr/local/mysql-cluster-gpl-7.3.8-linux-glibc2.5-x86_64 cluster738;
mcm> create cluster --import --package=cluster738
 --processhosts=ndb_mgmd:1@10.0.0.10,ndbmtd:3@10.0.0.12,ndbmtd:4@10.0.0.13 mycluster;

mcm> show status -r mycluster;

mcm> add process --processhosts=mysqld:10@10.0.0.10,mysqld:11@10.0.0.11 mycluster;
mcm> add process --processhosts=ndbapi:12@*,ndbapi:13@*,ndbapi:14@* mycluster;

Ok, looking good. “What about the previous config?” I hear you say:

mcm> import config --dryrun mycluster;

Ok, MCM complains about the mysqld processes running as root at the os level, so we can fix this via mcm for both mysqld’s:

mcm> set user:mysqld=mysql mycluster;
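
We can confirm the change took hold with mcm’s get command (to my understanding the filter syntax below is right, but double-check against your MCM version’s manual):

```sql
mcm> get user:mysqld mycluster;
```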

And re-run the dryrun config check:

mcm> import config --dryrun mycluster;

+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Command result                                                                                                                                                         |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Import checks passed. Please check /opt/mysql/757/mcm_data/clusters/mycluster/tmp/import_config.e40f3e52_97_3.mcm on host 10.0.0.13 for settings that will be applied. |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
 1 row in set (5.91 sec)

As you can see, this message is slightly different from the one received in MCM 1.4.0, which makes it easier to check.

So, if we’re happy, an importing we go:

mcm> import config mycluster;
 +--------------------------------------------------------------------------------------------+
 | Command result                                                                             |
 +--------------------------------------------------------------------------------------------+
 | Configuration imported successfully. Please manually verify the settings before proceeding |
 +--------------------------------------------------------------------------------------------+
 1 row in set (5.90 sec)

That went well. Now for the real McCoy:

mcm> import cluster mycluster;
 +-------------------------------+
 | Command result                |
 +-------------------------------+
 | Cluster imported successfully |
 +-------------------------------+
 1 row in set (3.32 sec)

Making sure all’s up and running:

mcm> show status -r mycluster;
 +--------+----------+-----------+---------+-----------+------------+
 | NodeId | Process  | Host      | Status  | Nodegroup | Package    |
 +--------+----------+-----------+---------+-----------+------------+
 | 1      | ndb_mgmd | 10.0.0.10 | running |           | cluster738 |
 | 3      | ndbmtd   | 10.0.0.12 | running | 0         | cluster738 |
 | 4      | ndbmtd   | 10.0.0.13 | running | 0         | cluster738 |
 | 10     | mysqld   | 10.0.0.10 | running |           | cluster738 |
 | 11     | mysqld   | 10.0.0.11 | running |           | cluster738 |
 | 12     | ndbapi   | *         | added   |           |            |
 | 13     | ndbapi   | *         | added   |           |            |
 | 14     | ndbapi   | *         | added   |           |            |
 +--------+----------+-----------+---------+-----------+------------+
 8 rows in set (0.06 sec)

I just want to set expectations here: my restart / upgrade / import times are short because my cluster’s data set is almost non-existent. How much DataMemory & IndexMemory we use, plus the cores available and disk speed, will all determine how fast each node restart is, and hence how long the import & upgrade process takes.

Upgrade time again: direct from 7.3.8 to 7.5.7.

This is how we check we’re still ok, i.e., that mcm still only knows about 1 binary package:

mcm> list packages mysite;

And now add the new 7.5 binaries:

mcm> add package --basedir=/usr/local/mysql-cluster-advanced-7.5.7-linux-glibc2.12-x86_64 cluster757;

ERROR 7018 (00MGR): Sync spawning '/usr/local/mysql-cluster-advanced-7.5.7-linux-glibc2.12-x86_64/bin/mysqld' exited with 127. Started in , with stdout='' and stderr='/usr/local/mysql-cluster-advanced-7.5.7-linux-glibc2.12-x86_64/bin/mysqld: error while loading shared libraries: libnuma.so.1: cannot open shared object file: No such file or directory'

Make sure you’ve got the right version of libnuma installed for your platform (hint: https://www.rpmfind.net/linux/rpm2html/search.php?query=libnuma.so.1()(64bit)). This is needed for MySQL Server 5.7, so it’s not really a MySQL Cluster or Cluster Manager error in itself.

I’m on CentOS 6.5 so:

sudo yum install numactl

And retry:

mcm> add package --basedir=/usr/local/mysql-cluster-advanced-7.5.7-linux-glibc2.12-x86_64 cluster757;
 +----------------------------+
 | Command result             |
 +----------------------------+
 | Package added successfully |
 +----------------------------+
 1 row in set (0.66 sec)

mcm> list packages mysite;
 +------------+----------------------------------------------------------------+-----------------------------------------+
 | Package    | Path                                                           | Hosts                                   |
 +------------+----------------------------------------------------------------+-----------------------------------------+
 | cluster738 | /usr/local/mysql-cluster-gpl-7.3.8-linux-glibc2.5-x86_64       | 10.0.0.10,10.0.0.11,10.0.0.12,10.0.0.13 |
 | cluster757 | /usr/local/mysql-cluster-advanced-7.5.7-linux-glibc2.12-x86_64 | 10.0.0.10,10.0.0.11,10.0.0.12,10.0.0.13 |
 +------------+----------------------------------------------------------------+-----------------------------------------+
 2 rows in set (0.05 sec)

Ok, so now we can upgrade:

mcm> upgrade cluster --package=cluster757 mycluster;

 +-------------------------------+
 | Command result                |
 +-------------------------------+
 | Cluster upgraded successfully |
 +-------------------------------+
 1 row in set (2 min 3.69 sec)

Double check mcm is now using the latest binaries. You can also check at o.s. level via ‘ps -ef | grep mysqld’ / ‘ps -ef | grep ndbmtd’.

mcm> show status -r mycluster;

So, now to use one of mcm’s new features:

mcm> autotune --dryrun --writeload=low realtime mycluster;

 +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
 | Command result                                                                                                                                                                |
 +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
 | Autotuning calculation complete. Please check /opt/mysql/757/mcm_data/clusters/mycluster/tmp/autotune.e40f3e52_314_3.mcm on host 10.0.0.13 for settings that will be applied. |
 +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
 1 row in set (1.28 sec)

And we cat that file to see what it contains:

# The following will be applied to the current cluster config:
 set HeartbeatIntervalDbDb:ndbmtd=1500 mycluster;
 set HeartbeatIntervalDbApi:ndbmtd=1500 mycluster;
 set RedoBuffer:ndbmtd=32M mycluster;
 set SendBufferMemory:ndbmtd+ndbmtd=2M mycluster;
 set ReceiveBufferMemory:ndbmtd+ndbmtd=2M mycluster;
 set SendBufferMemory:ndb_mgmd+ndbmtd=2M mycluster;
 set ReceiveBufferMemory:ndb_mgmd+ndbmtd=2M mycluster;
 set SendBufferMemory:mysqld+ndbmtd=2M mycluster;
 set ReceiveBufferMemory:mysqld+ndbmtd=2M mycluster;
 set SendBufferMemory:ndbapi+ndbmtd=2M mycluster;
 set ReceiveBufferMemory:ndbapi+ndbmtd=2M mycluster;
 set SharedGlobalMemory:ndbmtd=20M mycluster;
 set FragmentLogFileSize:ndbmtd=256M mycluster;
 set NoOfFragmentLogFiles:ndbmtd=3 mycluster;
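
If we’re happy with those values, the same command minus --dryrun calculates and applies them for us. To my understanding, attribute changes like these imply a rolling restart of the cluster, so on a real system plan the timing accordingly:

```sql
mcm> autotune --writeload=low realtime mycluster;
```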

Obviously now we can also look at using READ_BACKUP and other NDB 7.5 new features. An upgrade process that’s far from painful.
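
To give an idea of why those table comments matter, a quick sketch (table and column names are made up; the COMMENT syntax is the documented NDB_TABLE mechanism in 7.5):

```sql
-- Allow reads from backup replicas too, not just the primary replica:
mysql> CREATE TABLE t1 (a INT PRIMARY KEY)
       ENGINE=NDBCLUSTER COMMENT='NDB_TABLE=READ_BACKUP=1';

-- Or fully replicate a small, read-mostly table across all data nodes:
mysql> CREATE TABLE t2 (b INT PRIMARY KEY)
       ENGINE=NDBCLUSTER COMMENT='NDB_TABLE=FULLY_REPLICATED=1';
```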

Hope this helps someone.

About Keith Hollman

Focused on RDBMSs for over 25 years, both Oracle and MySQL, on *nix’s of all shapes ‘n’ sizes. Small and local, large and international, or just cloud. Whether it’s HA, DnR, virtualization, containers or just plain admin tasks, a philosophy of sharing out-and-about puts a smile on my face. Because none of us ever stops learning. Teams work better together.