Tag: Multipass

Dropping Multipass for OrbStack

I’ve had it with Multipass. Once again, after upgrading to a newer version of macOS (Tahoe in this instance), Multipass wouldn’t start, failing with the dreaded “can’t connect to socket” message. I could not resolve it; even restarting the daemon and re-installing didn’t fix the issue. Meanwhile, right next to it, OrbStack was running an x86 Ubuntu VM and a local-stack Docker image for my Chalice project, both still working as expected.

This made me rethink Multipass. After some deliberation and a small experiment I decided to drop Multipass completely and switch all my VM projects over to OrbStack. This was easier than expected, as OrbStack also supports cloud-init, which I was already using for creating VMs. The only change needed was to my bash setup scripts, as the command line for OrbStack is different.

The OrbStack command line has some peculiarities which I will document here, as I had to discover them by trial and error and by digging through forums.

Orb will create a user in your VM that is identical to your Mac username. So to use the regular ubuntu user as your main user, you have to specify this when you create your VM:

orb create -a $ARCH -c $CLOUDINIT.yaml -u $USERNAME ubuntu:$VERSION $VMNAME

where

  • $ARCH is either arm64 (for native Apple Silicon) or amd64 (for x86 compatible VMs)
  • $CLOUDINIT is your cloud-init config file
  • $USERNAME is the primary username to be created on this VM, likely ubuntu
  • $VERSION is the version of Ubuntu to install; it is optional, and if left out the latest version will be installed
  • $VMNAME is the name of your VM; it will be accessible as $VMNAME.orb.local on your network
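
For example, a concrete invocation with hypothetical values (adjust the names and paths to your own setup) could look like this:

ARCH=arm64
CLOUDINIT=devbox        # expects devbox.yaml next to the script
USERNAME=ubuntu
VERSION=24.04
VMNAME=devbox

orb create -a $ARCH -c $CLOUDINIT.yaml -u $USERNAME ubuntu:$VERSION $VMNAME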

It is not possible to set the amount of CPU, memory or disk space per VM. You can only set the maximum amount of memory and CPU for the complete OrbStack environment.

Copying files to and from the VM is simple, using the orb push and orb pull commands:

orb push -m $VMNAME source destination
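
For instance (with hypothetical file names), copying a SQL dump into the VM and fetching a result file back could look like this:

orb push -m $VMNAME ./dump.sql /home/ubuntu/dump.sql
orb pull -m $VMNAME /home/ubuntu/results.csv ./results.csv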

Executing commands is simple but has some intricacies, as where they are executed is not always clear-cut, especially if your command is longer and uses piping, for instance. You might end up piping the output of a command on your VM to a file on your Mac. For instance, orb -m $VMNAME sudo service mysql restart is pretty straightforward, but:

orb -m $VMNAME mysql -uroot -psecret dbname < /home/ubuntu/projects/outfile.sql

will let you know that the file isn’t found: the redirection is interpreted by the shell on your Mac, which looks for the file locally. To solve this you have to use quotes and the -s option, so that the whole command line is executed by a shell inside the VM:

orb -s -m $VMNAME 'mysql -uroot -psecret dbname < /home/ubuntu/projects/outfile.sql'

Something I haven’t solved yet: using cloud-init to set the hostname or FQDN does not work.
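
For reference, these are the standard cloud-init keys involved (shown with placeholder values); on OrbStack they appear to be ignored:

hostname: devbox
fqdn: devbox.example.com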

I’m abandoning my multipasssetup project as I won’t be using it anymore. I will create something similar for OrbStack.

Some small updates

Last weekend I updated my Mail-in-a-Box server, because my old server was still running Ubuntu 18.04 and the current version requires Ubuntu 22.04. I was several versions behind because I dreaded the upgrade process. In the end it went almost smoothly; I had some struggles with restoring my data, but those were caused by my inexperience. I’m still in awe of how simple the process has been made.

By re-installing my box I also had to go through my own instructions for running AWStats for the static websites that are hosted on the mail server. I noticed some small inconsistencies and made some of the steps a bit clearer.

The other change I made is the addition of another configuration file to my multipass setup scripts. I’ve been playing with AWS Chalice, a Python framework for building applications on top of AWS Lambda, which I use together with DynamoDB. The installation includes a Docker image of a local DynamoDB server to use for local development. The file is called chalicedev.yaml. Have fun.
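
As an illustration, the relevant part of such a cloud-init file can look like the sketch below (assuming Docker is installed by an earlier step; the actual chalicedev.yaml may differ):

runcmd:
  - docker run -d --name dynamodb-local -p 8000:8000 amazon/dynamodb-local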

Multipass update, using config file

I’ve been quite busy at my regular job and didn’t have time for personal projects or blogging, sorry for that. As you might remember from earlier posts, I don’t like to develop on my machine itself; I like to create purpose-built virtual machines to develop specific projects. This helps me separate requirements and keeps conflicting libraries and software versions from breaking projects. I use Multipass as the virtualisation tool for managing virtual development machines on my Mac Studio, with cloud-init for automation.

Up until now I was happy using just one configuration file for each VM, as my personal projects didn’t differ that much from a programming-tools perspective. On my latest project I found out that they do, and I needed a way to create a different configuration per VM. So I’ve added a command line option that makes the configuration of the VM flexible: it selects the YAML file that cloud-init uses to install all the required packages and set the proper configuration items.
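
A minimal sketch of the idea (with hypothetical script and VM names; the actual multipasssetup scripts are more elaborate) is a wrapper whose first argument selects the cloud-init file and is passed straight to multipass launch:

#!/bin/bash
# Use the config file given as the first argument, or fall back to the default
CONFIG=${1:-devconfig.yaml}
multipass launch --name devbox --cloud-init "$CONFIG" 22.04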

It required me to adjust the documentation and to update the remote multipasssetup repository on Bitbucket. I quickly ran into issues with Git, as I had made a correction (a typo fix) remotely without merging, so my local and remote repos were out of sync. It took some effort to get it resolved, as I’m no git expert ;-).
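
For anyone hitting the same situation, the generic remedy is to pull the remote change into the local clone before pushing again; for example (assuming the default branch is called main; a plain git pull, which merges, works too):

git pull --rebase origin main
git push origin main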

Dump and backup a database on shutdown

I’m using Multipass as the virtualisation tool for quickly setting up virtual development machines on my Mac Studio, using cloud-init for configuration and setting everything up. This really works great and has saved me several times when stuff crashed and burned: it was really easy to just tear everything down and re-run the setup scripts. (You can read more on my setup in the repository I use for this.) This works fine as my development stuff is mostly stored in Git and the data in a shared MySQL virtual server, but as I recently found out, this is not always the case. Sometimes there is local data on the virtual server that you would like to keep.

The solution I came up with to prevent the loss of data is to trigger a script on shutdown of the server that copies the relevant data to a safe location, in my case an S3 bucket. It took some digging, searching and testing, but I got it working. So if you are looking for something similar, here’s how I did it:

We use a systemd service that runs at the start of the shutdown process, so that the other services we rely on are still running. I’ve named mine S3shutdown.service, which is also the name of the file you need to create in /etc/systemd/system/ with the following content:

[Unit]
Description=Save database to S3
Before=shutdown.target reboot.target halt.target

[Service]
Type=oneshot
RemainAfterExit=true
ExecStop=/home/ubuntu/projects/dumpandstore.sh

[Install]
WantedBy=multi-user.target

The Description line is a descriptive title which you will see in syslog when the service is executed. The WantedBy line ties the service to multi-user mode; combined with Type=oneshot and RemainAfterExit=true, systemd considers the service active after boot, which is exactly what makes ExecStop fire when the server goes down. ExecStop references the shell script that should be run at that moment.
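
After the next boot you can verify that the dump actually ran during the previous shutdown by checking the journal with standard systemd tooling (seeing previous boots requires a persistent journal, which Ubuntu has by default):

journalctl -b -1 -u S3shutdown.service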

My dumpandstore.sh script looks like:

#! /bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Dump the database to the fixed "latest" file
/usr/bin/mysqldump -uuser -ppassword databasename > /home/ubuntu/projects/databasedump.sql

# Make a dated copy for historic perspective and compress it
today=$(date +%Y%m%d)
cp /home/ubuntu/projects/databasedump.sql /home/ubuntu/projects/databasedump$today.sql
/usr/bin/gzip /home/ubuntu/projects/databasedump$today.sql

# Upload the dated archive and the latest dump to S3
/usr/local/bin/aws s3 cp /home/ubuntu/projects/databasedump$today.sql.gz s3://mybucketname/
/usr/local/bin/aws s3 cp /home/ubuntu/projects/databasedump.sql s3://mybucketname/

I use the dated dump to build up some historic perspective; the file without the date is, so to speak, the latest copy and is also referenced in the build script of the server, so that when I rebuild the server the database is filled with the last used dataset.
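
The restore step in such a build script could look like this (a sketch using the same hypothetical names, credentials and bucket as above):

/usr/local/bin/aws s3 cp s3://mybucketname/databasedump.sql /home/ubuntu/projects/databasedump.sql
mysql -uuser -ppassword databasename < /home/ubuntu/projects/databasedump.sql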

To activate the service you’ll need to run the command sudo systemctl enable S3shutdown.service. Reboot the machine and everything should work as intended. One problem I struggled with was the AWS configuration: I had set up the AWS configuration, including credentials, as a normal user, but the shutdown service runs as root, and therefore the aws command could not locate the proper credentials. This was quickly solved by copying the ~/.aws directory to /root. Not ideal, but it made it work for the moment; I need to do more research for a more elegant and safer solution.
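
In other words, the workaround boils down to these two commands (assuming the credentials live under the ubuntu user’s home directory):

sudo systemctl enable S3shutdown.service
sudo cp -r /home/ubuntu/.aws /root/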