Consistent backup – snapshot


Hi guys, today let’s learn how to take a consistent backup (snapshot).
First of all, in what situations do we use a snapshot?

1. Let’s say your production server is about to get a replica. How do you do the first load of data on this slave? What was the master binary log position when you started the backup? Did anyone write any query to the database during the backup process?

2. In case you want to implement an incremental backup strategy, you can take a snapshot once a week, and if you need to restore your server, you just restore the snapshot and apply the binary logs (see the sketch right after this list).
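
As a rough illustration of point 2, the restore would look something like the following. This is only a minimal sketch, not the exact commands from this post; the file names (all_dbs.sql, mysql-bin.000002, mysql-bin.000003) are placeholders:

# restore the weekly snapshot
mysql -u user -p < all_dbs.sql

# apply the binary logs written after the snapshot, up to the desired point in time
mysqlbinlog mysql-bin.000002 mysql-bin.000003 | mysql -u user -p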

Now, let’s start.

To guarantee data integrity, we will need two sessions open on the master: the first one to lock all databases, and the second one to do the copy.

Session 1:

FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

You will receive an output like this:

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      107 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

IMPORTANT: LEAVE THIS SESSION OPEN DURING THE ENTIRE PROCESS. IF YOU CLOSE THIS SESSION, THE DATABASE LOCK WILL BE RELEASED.

To do the backup, we will use mysqldump:
Session 2:

mysqldump -u user -p --all-databases > all_dbs.sql

After the backup has completed, make sure the dump file ends with:
– Dump completed on xxxx-xx-xx xx:xx:xx
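
A quick way to check that (assuming the same all_dbs.sql file name used above) is to look at the last line of the dump:

tail -n 1 all_dbs.sql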

Done, we can now release the lock on session 1:

UNLOCK TABLES;

That’s it, we have a snapshot of our database.
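
Going back to situation 1, this snapshot plus the output of SHOW MASTER STATUS is all you need to start a new slave. As a rough sketch, after importing all_dbs.sql on the slave, you would point it at the master with something like the following (the host, user and password are placeholders; the file and position come from the example output above):

CHANGE MASTER TO
  MASTER_HOST='master_host',
  MASTER_USER='repl_user',
  MASTER_PASSWORD='repl_password',
  MASTER_LOG_FILE='mysql-bin.000001', -- File from SHOW MASTER STATUS
  MASTER_LOG_POS=107;                 -- Position from SHOW MASTER STATUS
START SLAVE;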


2 Responses to Consistent backup – snapshot

  1. James says:

    This technique has the disadvantage of disabling writes to the database during the backup, which could be a significant problem for some sites. I would suggest using the --single-transaction option to ensure a consistent snapshot while allowing writes to continue to the database.

    Also, we can avoid using an additional session by using the --master-data option to write the binary log file name and position to the dump file.


    mysqldump -u USER -p --single-transaction --quick --lock-tables=false --master-data DATABASE > output

    • marceloaltmann says:

      Hi James, thanks for your comment.
      This technique is not meant for daily backups; it is for when you need a consistent copy of your master and you don’t want any other session writing to it.
      Also, if you only use --single-transaction, it won’t prevent non-transactional storage engines from receiving writes.
