In our daily IT business we often have to clone databases, either to port the current state of a production system into a development environment or to create backups. For relational databases, the clean and common method is to use the tools provided by the database vendors, for example mysqldump for MySQL or pg_dump for PostgreSQL. While they do a good job, they take quite some time on bigger databases and also consume a lot of resources. This is especially true for mysqldump with the --single-transaction option, which you probably want to use on InnoDB tables to get consistent states.
If you skip these tools and simply copy the file base of your database, you will run into trouble if the database is live, at least if it is very active. The problem with this approach is obvious: while you are copying files, plenty of queries write or delete entries, which leads to a very inconsistent copy. If you are lucky, the disaster recovery of your DBMS can fix this inconsistency when you start your clone for the first time. However, it may also fail to start at all, and in that case you would have to spend a lot of time fixing it manually to get it back into a startable state, while still suffering some data loss.
Meanwhile, we (probably just like many other IT service providers) have moved to Docker, and all important client databases run in isolated Docker containers, which makes a lot of things easier. However, if you want to clone the data volume of such a container while it is running, the problem described above still appears.
Luckily, we use Linux with LVM, and years ago Benjamin Schweizer already published the great tool dsnapshot on GitHub, which lets you create snapshots of specific directories within seconds (and, being real snapshots, they are absolutely consistent). In our Docker-influenced time, this tool is more useful than ever. Almost every bigger virtualization technology comes with the ability to create snapshots of live systems (VMware, Hyper-V, VirtualBox...), but these snapshots always capture the whole system. Docker has a similar built-in function, commit, which lets you create snapshots of containers. However, due to the architecture of Docker, these container snapshots do not include data volumes (basically, this is a good thing, but I will not go into further detail on it in this post). Maybe there will be a built-in way for data volume snapshots some day, too. As of now, however, there is none. Available Docker extensions like Convoy or Flocker are quite new and unstable, as well as much too complex for a simple task such as creating a consistent copy of a live volume. So dsnapshot is perfect.
If you want to use dsnapshot, all you need is a Linux system with Python installed (which will probably be the case by default) that uses LVM and has a couple of free PEs (we suggest 32 GB or more). To check whether you meet this requirement, run vgdisplay. In the output, look at the line "Free PE / Size" and see whether it is enough (by default, dsnapshot requires 8 GB of free PEs for each snapshot).
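To illustrate, a tiny filter can pull that figure out of the vgdisplay output. The function name and the sample line below are my own, and the column layout is assumed to match a typical vgdisplay report:

```shell
# free_gib: print the free-space figure (in GiB) from vgdisplay output.
# Assumes the usual "Free  PE / Size   <PEs> / <N> GiB" line layout.
free_gib() {
  awk '/Free  PE \/ Size/ {print $(NF-1)}'
}

# Illustrative excerpt of vgdisplay output (your values will differ):
printf '%s\n' '  Free  PE / Size       8192 / 32.00 GiB' | free_gib
# prints 32.00
```

On a real system you would simply run `vgdisplay | free_gib` and compare the result against the space you need.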
If you have no or too few free PEs, we suggest plugging in a dedicated physical hard drive (or adding another one in your hypervisor if you are in a VM and not on bare metal) and adding it to the volume group in question. If your volume group is named server-one-vg and your newly added drive is /dev/sdb, this is done with the command vgextend server-one-vg /dev/sdb.
Next, find out the location of the directory you want to clone. If, for example, it is a volume of a Docker MySQL container, you can do this by running docker inspect mysql_db_container_name | grep Source | grep _data.
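Wrapped into a small function, the same grep chain can be reused in scripts. The function name is my own, and the excerpt below is a made-up fragment of docker inspect output ("hash" stands in for the real volume ID):

```shell
# volume_source: filter the volume's host path out of `docker inspect` output.
volume_source() {
  grep '"Source"' | grep '_data'
}

# Illustrative fragment of docker inspect output ("hash" is a placeholder):
printf '%s\n' '    "Source": "/var/lib/docker/volumes/hash/_data",' | volume_source
```

In practice you would pipe the real command into it: `docker inspect mysql_db_container_name | volume_source`.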
When you know the directory, create a snapshot of it with dsnapshot --create /var/lib/docker/volumes/hash/_data/.
dsnapshot will return the location of the snapshot.
Now create a copy of the snapshot for whatever you need it for. In the case of our MySQL container, you could mount this copy in a new MySQL container, which you can then use for testing and development. You could also use the snapshot itself for that, but in order to free our PEs again, we work on a copy and destroy the snapshot directly afterwards with dsnapshot --remove /path/to/the/snapshot/not/the/original/directory/.
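The whole snapshot-copy-remove cycle can be sketched as one small helper. The function name and destination path are mine, the rsync flags are one sensible choice rather than the only one, and the sketch assumes dsnapshot prints the snapshot location on stdout, as described above:

```shell
# clone_dir: snapshot a directory, copy the snapshot, then remove the
# snapshot to free the PEs again. Assumes dsnapshot prints the snapshot
# path on stdout (as noted above); paths in the usage line are examples.
clone_dir() {
  src="$1"; dest="$2"
  snap="$(dsnapshot --create "$src")" || return 1
  rsync -a "$snap/" "$dest/"     # copy while the database keeps running
  dsnapshot --remove "$snap"     # destroy the snapshot right afterwards
}

# Usage (example paths):
# clone_dir /var/lib/docker/volumes/hash/_data /srv/db-clone
```

Since the snapshot only has to live for the duration of the rsync, the 8 GB of PEs it occupies are returned within minutes even for large volumes.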
That is it! Obviously, you can use this to build your own clone and backup scripts. We currently only use it to create debugging and development environments on the fly, but this approach is probably even stable enough to be used in serious backup strategies. However, you still have to keep in mind that the database is not shut down properly with this approach. Because of this, the clone will boot up in recovery mode on its first start, and you may experience some minor data loss for queries that were already in memory but had not yet been written to the file system. This should only affect a very short period of time, though, and thanks to the consistent file state, the database's recovery mode is almost guaranteed to succeed.