You may recall from an earlier post that one of the advanced methods for database syncing between two machines for very large databases is to set up replication, or slaving. That’s all well and good, but how exactly does one do that? And, more importantly, how does one do that WITHOUT excess downtime?
When setting up a master-slave relationship, it's important to get the slave in sync with the master from the start. Traditionally, the easiest way to do that is to start a read lock (which is essentially downtime for an active database), get the master position, make an SQL dump (which can take hours or days depending on the database size), and then remove the read lock. This dump and the master position are then used to start the slave in sync with the master.
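For contrast, a rough sketch of that traditional, lock-based approach looks something like this (the database name and dump path here are just placeholders):

mysql> FLUSH TABLES WITH READ LOCK;   -- keep this session open; the lock is released if it disconnects
mysql> SHOW MASTER STATUS;            -- note the File and Position values
-- meanwhile, in a second terminal, while the lock is still held:
# mysqldump bigdb > /home/temp/bigdb.sql
mysql> UNLOCK TABLES;

That read lock sits in place for the entire dump, which is exactly the downtime we want to avoid.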
How do we avoid this downtime? If we have InnoDB as a table engine, the hard part is already done.
InnoDB supports transactions, which means that we can start a point-in-time transaction for a read operation for our dump while other live activity continues to write to the database, unaware of the ongoing migration.
This article will explain how to set up your new slave server in replication with the live master server with minimal restarts, load, or downtime.
Before we start
As I mentioned, this method is ideal to use when all of your tables are using InnoDB. If any of your tables are MyISAM, this can cause inconsistency in the dump because of the transactional method.
Secondly, if you are using this for a migration: because the target server will be a slave of the master, writes to the new machine will not be possible. Any tests you may have wanted to perform that write to the database, such as creating a new product or post, cannot be done in this configuration. Therefore, be thorough in inspecting the moving parts on both sides of the database relationship so that writes start smoothly once they become possible, or have developers on hand to investigate any issues when the target database server goes read/write.
Step the first: Set up your source
One of the important things you need for this to work, particularly on a busy database, is plenty of disk space. You will need room for InnoDB to hold onto old row versions while the long-running dump transaction is open, room for the binary logs that accumulate while the import is happening on the target server (so the slave can catch up), room for the dump itself if needed, and, if you need to convert any tables, room for temporary table sorting and indexing.
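As a rough sanity check, you can compare the on-disk size of your databases with the free space you have. The paths below are typical defaults and may differ on your system:

# approximate per-database size, in GB
mysql -e 'select table_schema,
    round(sum(data_length + index_length)/1024/1024/1024, 2) as size_gb
    from information_schema.TABLES
    group by table_schema;'
# free space where the datadir and the dump will live (adjust paths to your setup)
df -h /var/lib/mysql /home/temp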
Let’s check all of our tables first, and if needed, get everything converted to InnoDB.
WARNING: The second command listed will convert all of the tables in the live database to InnoDB. THIS CAN TAKE A VERY LONG TIME AND CAUSE A LOT OF LOAD. Check with your DBAs and developers first to make sure this is kosher; there may have been a reason that specific tables are MyISAM.
In this instance, all of my tables are MyISAM, and I don't mind biting the bullet and converting to InnoDB (I have to for the purposes of this article). They should have been InnoDB in the first place, but mysql was configured incorrectly when the database was created. Again, this conversion could cause load (which could be mitigated by adjusting the configuration). But MyISAM will not support the transactional dump we want, so converted they must be.
# mysql -e 'select table_name, engine
    from information_schema.TABLES
    where table_schema="bigdb"'
+--------------------------------+--------+
| table_name                     | engine |
+--------------------------------+--------+
| wp9a_commentmeta               | MyISAM |
| wp9a_comments                  | MyISAM |
| wp9a_itsec_distributed_storage | MyISAM |
| wp9a_itsec_lockouts            | MyISAM |
| wp9a_itsec_logs                | MyISAM |
| wp9a_itsec_temp                | MyISAM |
| wp9a_links                     | MyISAM |
| wp9a_options                   | MyISAM |
| wp9a_postmeta                  | MyISAM |
| wp9a_posts                     | MyISAM |
| wp9a_smush_dir_images          | MyISAM |
| wp9a_term_relationships        | MyISAM |
| wp9a_term_taxonomy             | MyISAM |
| wp9a_termmeta                  | MyISAM |
| wp9a_terms                     | MyISAM |
| wp9a_usermeta                  | MyISAM |
| wp9a_users                     | MyISAM |
| wp9a_yoast_seo_links           | MyISAM |
| wp9a_yoast_seo_meta            | MyISAM |
+--------------------------------+--------+

# for table in `mysql bigdb -Nse 'show tables;'`
do
  echo $table
  mysql bigdb -e "alter table $table engine=InnoDB"
done
wp9a_commentmeta
wp9a_comments
...
Once the conversions are done, we can set up the source as a master. Make sure the following lines are set up in your my.cnf file, and add them if they are missing:
server_id=1
log_bin=mysql-bin
binlog_format=mixed
sync_binlog=1
innodb_flush_log_at_trx_commit=1
The server_id line gives each server in the replication setup a unique identity, even if the hosts happen to share a hostname. The next few lines turn on mixed-format binary logging. Restart mysql after changing the configuration so the new settings take effect.
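Once mysql is back up, a quick sanity check that binary logging is actually on:

# log_bin should report ON, and the master status should list a mysql-bin.* file
mysql -e 'show variables like "log_bin"'
mysql -e 'show master status'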
Now, let's add a user that is allowed to replicate from the source server. Assuming both database servers are on a private network and the new database server is at 192.168.0.2:
# mysql -e 'grant replication slave on *.*
    to "repluser"@"192.168.0.2"
    identified by "reallygoodpassword";'
Change the password, of course. Even your temporary passwords should be really, really, really ridiculously good looking. However, replication users are only allowed passwords up to 32 characters. So, not too ridiculous.
If you have to make any firewall changes to allow mysql to communicate, do so now as well.
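For example, if the source host happens to use plain iptables (adjust for firewalld, ufw, or whatever your machines actually run), a rule like this lets the slave at 192.168.0.2 reach mysql:

# allow the slave to connect to mysqld on port 3306
iptables -I INPUT -p tcp -s 192.168.0.2 --dport 3306 -j ACCEPT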
Step the second: Make your database dump
If you recall, we need to know the position of the master in the binary logs as well as the contents of the database at that log position. There is a dump flag that can extract both of these at once: --master-data.
In this example, I'll be streaming the database dump directly to a file on the target database server, since that is where my free space is, and where the dump will need to be in order to import it.
# mysqldump \
    --skip-lock-tables \
    --single-transaction \
    --flush-logs \
    --hex-blob \
    --master-data=2 \
    bigdb | ssh 192.168.0.2 "cat - > /home/temp/bigdb.sql"
Break it down now.
First, keep in mind that --opt is set by default, which locks tables. The flags for --skip-lock-tables and --single-transaction are the ones that will allow reads and writes to continue on the database transactionally, and still give us a nice point-in-time view of the database. This ONLY works if ALL tables are InnoDB. Otherwise, you will get inconsistent data from your MyISAM tables.
--flush-logs makes sure all transactions are recorded to the logs before we start the dump. --hex-blob is important if you have BLOBs in your tables that might contain special characters. Finally, --master-data=2 writes the master's binlog file and position right into the head of the dump.
Step the third: Configure the target server
Let's get the server configuration squared away first:
server_id=2
binlog_format=mixed
log_bin=mysql-bin
relay_log=mysql-relay-bin
log_slave_updates=1
read_only=1
We set a unique server_id for the slave, and the same binlogging options. We also turn on a relay log, which the slave needs in order to store transactions received from the master before applying them. Finally, we set the server to read-only, which prevents anything other than the replication threads (and users with the SUPER privilege, such as root) from changing database contents. This, of course, is turned off after go-live. Restart mysql on the target as well so these settings take effect.
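After the restart, it's worth confirming the target really is read-only:

# read_only should report ON
mysql -e 'show variables like "read_only"'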
Now, we read the master's position from the database dump we made earlier:
# head -80 /home/temp/bigdb.sql | grep "MASTER_LOG_"
We’ll need to know the MASTER_LOG_FILE and MASTER_LOG_POS for enabling slaving.
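The grep should turn up a commented-out CHANGE MASTER line near the top of the dump, something like the following (the file name and position here are only illustrative; use whatever your dump actually contains):

-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=1054;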
Finally, import the database dump:
# mysqladmin create bigdb
# mysql bigdb < /home/temp/bigdb.sql
Step the fourth: Turn on slaving
Remember all of the information we saved earlier? This is what we were saving it for!
My example assumes that the master is on the private network with 192.168.0.1 as its IP, that port 3306 on the master is already open to the slave in your firewall, and that our log file and position within it are as entered.
# mysql -e 'change master to master_host="192.168.0.1",
    master_user="repluser",
    master_password="reallygoodpassword",
    master_log_file="mysql-bin.000002",
    master_log_pos=1054;
    START SLAVE;'
This gets our connection to the main database server going, and replication begins. Let’s check the status of syncing:
# mysql -e 'show slave status\G'
One of the important fields in this output is Seconds_Behind_Master, an estimate of how far the slave is lagging behind the master's transaction position. If you see a numeric value here (0 or higher, rather than NULL), along with healthy values for the other indicators of a connection to the master (such as Slave_IO_State and Read_Master_Log_Pos), you are replicating!
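For reference, a healthy slave's status includes lines along these lines (the values shown are only illustrative):

Slave_IO_State: Waiting for master to send event
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 1054
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0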
Conclusion and Caveats
If all is well, your slave should now be “waiting for master to send event”, and your new database server is in line with the master!
Let's recall the important caveat from the beginning of this article: the target server is read-only, and testing writes to the database will not be possible. The trade-off here is that the final sync will be much quicker than other sync options for very large databases (in essence, there is no final sync; the database is continuously kept in step with the master).
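The cutover itself is then very short. A minimal sketch of the go-live on the target, run after application writes to the old master have stopped (the details of your own migration plan may add steps around this):

# stop pulling changes from the old master and allow writes
mysql -e 'stop slave;'
mysql -e 'set global read_only=0;'
# also remove read_only=1 from my.cnf so the change survives the next restart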
What if there's a disk space issue, such as not having enough room to store a full copy of the database dump? There are tricksy tricks we can try to get the data directly into mysql on the slave and read the master position along the way. This potential command is run from the target slave machine, after the proper grants have been added for remote root access:
# mysqldump -h 192.168.0.1 -p -u root \
    --skip-lock-tables \
    --single-transaction \
    --flush-logs \
    --hex-blob \
    --master-data=2 \
    bigdb | tee >(mysql bigdb) | head -n80 | grep "MASTER_LOG_"
This type of shell redirection, as showcased with tee and the >() process substitution, works in bash but not in a plain POSIX sh, so make sure you invoke bash as your shell before you start. Then, take the log info from stdout and use it to start your slave.
If this doesn’t work out, you can always try piping your dumps and imports through gzip. Database dumps are just text, which compresses very nicely.
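For instance, a compressed variant of the earlier dump-over-ssh might look something like this (same host and path assumptions as before):

# on the source
mysqldump \
    --skip-lock-tables \
    --single-transaction \
    --flush-logs \
    --hex-blob \
    --master-data=2 \
    bigdb | gzip | ssh 192.168.0.2 "cat - > /home/temp/bigdb.sql.gz"
# on the target, decompress while importing
zcat /home/temp/bigdb.sql.gz | mysql bigdb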
Replication also syncs every database on the master server by default; if you only need to sync one database, add replicate-do-db=bigdb to your my.cnf on the slave.
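That is, the slave's my.cnf from step three gains one line (a mysql restart is needed for it to take effect):

replicate-do-db=bigdb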
Finally, there is a security implication: you are sending your entire database over the network. Keep in mind that you would be sending this info over the wire anyway with a regular dump, but there it is encrypted by your SSH session. By default, replication connections are not encrypted, though you can set up communication over SSL if you so desire. In our example, both machines are internal to our intranet, so this is not a concern.
Ready for the next step? Check out part 2!