Tag: MySQL

Dump and back up a database on shutdown

I’m using Multipass as the virtualisation tool for quickly setting up virtual development machines on my Mac Studio, with cloud-init to configure everything. This works great and has saved me several times: when stuff crashed and burned, it was really easy to just tear everything down and re-run the setup scripts. (You can read more on my setup in the repository I use for this.) This works fine as my development work is mostly stored in Git and the data lives in a shared MySQL virtual server, but as I recently found out this is not always the case. Sometimes there is local data on the virtual server that you would like to keep.

The solution I came up with to prevent the loss of data is to trigger a script on shutdown of the server that copies the relevant data to a safe location, in my case an S3 bucket. It took some digging, searching and testing, but I got it working. So if you are looking for something similar, here is how I did it:

We use a systemd service that runs at the start of the shutdown process, so that the other services we rely on are still running. I’ve named mine S3shutdown.service, which is also the name of the file you need to create in /etc/systemd/system/ with the following content:

[Unit]
Description=Save database to S3
Before=shutdown.target reboot.target halt.target

[Service]
Type=oneshot
RemainAfterExit=true
ExecStop=/home/ubuntu/projects/dumpandstore.sh

[Install]
WantedBy=multi-user.target

The Description line is a descriptive title which you will see in syslog when the service is executed. The WantedBy line ties the service to multi-user mode, so the stop action runs before multi-user mode ends. ExecStop references the shell script that should be run at the moment the server is going down.

My dumpandstore.sh script looks like this:

#! /bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Dump the database to the "latest" copy
/usr/bin/mysqldump -uuser -ppassword databasename > /home/ubuntu/projects/databasedump.sql

# Make a dated copy for the historic archive and compress it
today=$(date +%Y%m%d)
cp /home/ubuntu/projects/databasedump.sql /home/ubuntu/projects/databasedump$today.sql
/usr/bin/gzip /home/ubuntu/projects/databasedump$today.sql

# Upload both the dated archive and the latest dump to S3
/usr/local/bin/aws s3 cp /home/ubuntu/projects/databasedump$today.sql.gz s3://mybucketname/
/usr/local/bin/aws s3 cp /home/ubuntu/projects/databasedump.sql s3://mybucketname/

I use the dump with a date in its name to build up some historic perspective; the other file, without a date, is so to speak the last copy and is also referenced in the build script of the server, so that when I rebuild the server the database is filled with the last used dataset.
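The restore step in that build script boils down to something like the following sketch (bucket name, database name and credentials are placeholders, just like above):

# Fetch the latest dump from S3 and load it into the (re)created database
/usr/local/bin/aws s3 cp s3://mybucketname/databasedump.sql /home/ubuntu/projects/databasedump.sql
mysql -uuser -ppassword -e "CREATE DATABASE IF NOT EXISTS databasename"
mysql -uuser -ppassword databasename < /home/ubuntu/projects/databasedump.sql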

To activate the service you’ll need to run the command sudo systemctl enable S3shutdown.service. Reboot the machine and everything should be working as intended. One problem I struggled with was the AWS configuration. I had set up the AWS configuration, including credentials, as a normal user, but the shutdown service runs as root and therefore the aws command could not locate the proper credentials. This was quickly solved by copying the ~/.aws directory to /root. Not ideal, but it made it work for the moment; I need to do more research for a more elegant and safer solution.
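A more elegant option would probably be to leave the credentials where they are and point the AWS CLI at them from within the script; the CLI honours these environment variables (the paths below assume the credentials live under /home/ubuntu/.aws):

# At the top of dumpandstore.sh: use the ubuntu user's AWS configuration instead of /root/.aws
export AWS_CONFIG_FILE=/home/ubuntu/.aws/config
export AWS_SHARED_CREDENTIALS_FILE=/home/ubuntu/.aws/credentials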

Using a central virtual MySQL server

For all my projects I’ve been using dedicated virtual machines which I manage and configure using Vagrant. This way it is easy to maintain a dedicated environment without conflicting settings or libraries, one that can easily be recreated on the fly. Every project, with its own virtual machine, gets all the components it needs installed. With at least 5 or 6 virtual machines running on my personal iMac (which is an older model from 2013) it was getting a bit busy. One common component installed on all my machines is MySQL, which is still my go-to database for simple projects. So I’ve been toying with the idea of creating a single virtual machine that only runs MySQL for all my projects. I could even host this virtual database server on an even older Mac Mini (from 2010) which I still keep around; it used to be my generic media machine until an Apple TV took over its role.

At first everything looked great: it all went well when running on the same host (the iMac). But when I hosted the virtual database server on the Mac Mini things started to go wrong and I couldn’t make a connection to the database. While locally everything worked, going over the network was the problem. Several things to check. Was my virtual machine accepting remote connections? Yes, I had enabled the option config.vm.network "public_network" in the Vagrantfile.
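To narrow down whether the network or MySQL itself was the problem, a quick check from one of the project machines helps (the IP address below is just an example for the database server):

# Is the MySQL port reachable over the network at all?
nc -zv 192.168.1.1 3306

# If the port is open, does MySQL accept a remote login?
mysql -h 192.168.1.1 -u user -p -e "SELECT 1"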

Next was connectivity to MySQL. I learned that the skip-networking option, which one usually used to shield the server from the outside world, has been deprecated. Instead, network connectivity is controlled by binding MySQL to a network interface of your (virtual) server. There are three options:

  • Only access from the local host
  • Access from all networks
  • Access only from one network

Only access from the local host
Here, the bind-address takes a value of 127.0.0.1, the loopback IP address. MySQL can only be accessed by applications running on the same host.

Access from all networks
To have MySQL listen on all networks, set the bind-address to 0.0.0.0. With this setting MySQL accepts connections from all networks. Furthermore, to permit both IPv4 and IPv6 connections on all server interfaces, you can set the bind-address to :: instead.

Access only from one network
MySQL listens only on a specific network interface. The value, in this case, is the IP address of that specific network interface, for instance 192.168.1.1.

So I adjusted the settings for mysqld in /etc/mysql/mysql.conf.d/mysqld.cnf, changed bind-address=127.0.0.1 into bind-address=0.0.0.0 and restarted mysqld, and everything connected and started working properly!
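For reference, assuming an Ubuntu-based box (which the /etc/mysql/mysql.conf.d path suggests), the change and restart come down to something like this; the sed pattern assumes the single default bind-address line shipped with the package:

# Point mysqld at all network interfaces and restart it
sudo sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/mysql.conf.d/mysqld.cnf
sudo systemctl restart mysql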

The next step is migrating all active projects to the central virtual MySQL server and seeing if there are any performance benefits.

Sidenote: I’ve learned that to make sure you can rebuild your database server on the fly, you’ll have to make a backup of your data before you halt or destroy the virtual server. I’ve done this via a trigger configuration in my Vagrantfile that dumps the database to a shared folder just before the machine is halted or destroyed. Just add the following lines to your Vagrantfile:

config.trigger.before [:halt, :destroy] do |trigger|
  trigger.warn = "Dumping database to /vagrant/Code/dbserveroutfile.sql"
  trigger.run_remote = {inline: "mysqldump -u<username> -p<password> --all-databases --single-transaction --events > /vagrant/Code/dbserveroutfile.sql"}
end