Deploying Hugo site using Bitbucket pipelines
I have a spare Mac mini running all sorts of projects. One of them was a hook into the git repository of a Hugo website: on every commit to master it would build the site and copy it over to my webserver. It was tedious and it sometimes broke. Last week I had enough of it and looked for a solution that is stable and needs less maintenance. The solution I ended up with is Bitbucket Pipelines.
The setup was quite easy in the end; I only had to gather the required examples, configuration and settings from several different places. So for posterity, and for other people in the same situation, I’m describing my solution here.
First, log in to Bitbucket and go to the repository where you store your Hugo website (source and such). At the bottom left, go to “Repository settings”, then at the bottom of the new item list select “Settings”, where you can “Enable Pipelines”.
Then go to “Repository variables”, where you can define variables that can be used in your later scripts. I use variables for the username, the server address and the required Hugo version:
USER my username on the destination machine
SERVER name of the destination server.
HUGO_VERSION the version of Hugo, in my case 0.108.0
Then go to “SSH Keys”, where you generate a key pair and copy the public key for later use. At the bottom, add your destination server to the list of “Known hosts”.
Add the public key to the destination server: log in with the user you set in your variable earlier and append the public key to the file ~/.ssh/authorized_keys. This will enable the pipeline to access the destination server.
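On the destination server this comes down to a few commands; a minimal sketch, where the quoted key is a placeholder for the public key you copied from Bitbucket:

```shell
# Run on the destination server, logged in as the deploy user.
# The key string below is a placeholder -- paste the public key
# you copied from the Bitbucket "SSH Keys" page instead.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo 'ssh-rsa AAAA...pipeline-key... bitbucket-pipelines' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

The strict permissions matter: sshd refuses keys in a world-readable authorized_keys file.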
The last item is to add the file bitbucket-pipelines.yml to the root of your repository. The content will look like:
bitbucket-pipelines.yml
image: atlassian/default-image:3
options:
  # run the script for a maximum of 5 minutes
  max-time: 5
pipelines:
  default:
    - step:
        name: Build Hugo
        script:
          - apt-get update -y && apt-get install wget
          - apt-get -y install git
          - echo Hugo version is $HUGO_VERSION
          - export HUGO_ENV=production
          - wget https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_Linux-amd64.deb
          - dpkg -i hugo*.deb
          - git submodule update --init --remote
          - hugo --minify
        artifacts:
          - public/**
    - step:
        name: Deploy artifacts using SCP to PROD
        deployment: production
        script:
          - pipe: atlassian/scp-deploy:1.2.1
            variables:
              USER: $USER
              SERVER: $SERVER
              REMOTE_PATH: '/destination'
              LOCAL_PATH: 'public/*'
That’s it. Commit these files to your repository and things should start moving.
Installing AWStats on MIAB
Having only static pages makes it harder to integrate tracking solutions to analyse website visitors. In line with my privacy concerns, I’ve chosen a simple solution that runs on the server itself and isn’t very intrusive to the users either: AWStats.
The setup generates static HTML reports on the usage of the websites you host by analysing the logfiles generated by nginx. The static reports are hosted on the same box, in a separate directory or on a subdomain. Optionally you can restrict access to the statistics; I’ve included a basic-authentication configuration for that. Feel free to use it, leave it out, or substitute a better solution. I have not included any GeoIP tracking, which is possible with additional configuration and packages.
As far as I can tell, this setup barely interferes with the MIAB configuration. The only affected area is the logrotate configuration, which could be overwritten by an nginx update.
Install AWStats
This is the simplest part of the setup, just run: sudo apt install awstats
Configure Nginx
To process the logfile for each site we need to split them out. MIAB configures nginx to log everything to a single file, which does not work for AWStats. You only need to configure the domains you want to include in your AWStats reporting.
Create an example.com.conf
file (where example.com should be replaced by the domain name you would like to include) in the location /home/user-data/www
with the following content:
access_log /var/log/nginx/example.com.access.log;
(again, replace example.com with your own domain name). Repeat the previous for all domains you would like to monitor. To check your configuration run the following commands:
/root/mailinabox/tools/web_update
sudo nginx -s reload
This should run without problems if you haven’t made any mistakes. You should see logfiles appear for each configured domain in /var/log/nginx.
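Creating the per-domain snippets can also be scripted; a sketch, where the DOMAINS list is an example and CONF_DIR stands in for /home/user-data/www on the real box (a temp dir is used here so the sketch runs anywhere):

```shell
# Write one nginx log snippet per domain to monitor.
# On MIAB, set CONF_DIR=/home/user-data/www instead.
CONF_DIR=$(mktemp -d)
DOMAINS="example.com other.com"   # your own domains here
for d in $DOMAINS; do
  printf 'access_log /var/log/nginx/%s.access.log;\n' "$d" \
    > "$CONF_DIR/$d.conf"
done
ls "$CONF_DIR"
```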
Configure AWStats
As in the nginx configuration, you need to create a separate configuration file for each domain. AWStats uses this file instead of the generic one for the static-generation process, so we need to include the display options as well.
Create a file named /etc/awstats/awstats.example.com.conf
with the following content:
LogFile="/var/log/nginx/example.com.access.log"
SiteDomain="example.com"
DirData="/var/lib/awstats/"
HostAliases="www.example.com"
LogFormat = 1
ShowSummary=UVPHB
ShowMonthStats=UVPHB
ShowDaysOfMonthStats=VPHB
ShowDaysOfWeekStats=PHB
ShowHoursStats=PHB
ShowDomainsStats=PHB
ShowHostsStats=PHBL
ShowRobotsStats=HBL
ShowSessionsStats=1
ShowPagesStats=PBEX
ShowFileTypesStats=HB
ShowOSStats=1
ShowBrowsersStats=1
ShowOriginStats=PH
ShowKeyphrasesStats=1
ShowKeywordsStats=1
ShowMiscStats=a
ShowHTTPErrorsStats=1
ShowFlagLinks=""
ShowLinksOnUrl=1
Repeat this for all the domains you have configured in nginx and want to actively monitor.
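These per-domain files can be generated in one pass; a sketch, assuming a hypothetical DOMAINS list and using a temp dir in place of /etc/awstats so it runs anywhere (the template is abbreviated; append the remaining Show* options from the article):

```shell
# Generate one AWStats config per domain from a template.
# On the real box, set AWSTATS_DIR=/etc/awstats instead.
AWSTATS_DIR=$(mktemp -d)
DOMAINS="example.com other.com"   # your own domains here
for d in $DOMAINS; do
  cat > "$AWSTATS_DIR/awstats.$d.conf" <<EOF
LogFile="/var/log/nginx/$d.access.log"
SiteDomain="$d"
DirData="/var/lib/awstats/"
HostAliases="www.$d"
LogFormat = 1
EOF
done
```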
Create a location for publication
I’ve chosen to host the stats on a subdomain of my MIAB box. Create the domain stats.example.com (by creating a dummy email user in the MIAB admin page). Next, in the web section of the MIAB admin site, change the directory for the static site to /home/user-data/www/stats.example.com. In the TLS/SSL section, provision a certificate for this new domain.
Copy all the image files from the AWStats package using the following commands:
cd /home/user-data/www/stats.example.com
cp -R /usr/share/awstats/icon .
Automation
To generate everything automatically I’ve chosen to hook into the log-rotation moment and added everything to the nginx script. Do this by editing /etc/logrotate.d/nginx and changing it so it looks like the following example.
/var/log/nginx/*.log {
        daily
        missingok
        rotate 14
        compress
        delaycompress
        notifempty
        create 0640 www-data adm
        sharedscripts
        prerotate
                /usr/share/doc/awstats/examples/awstats_updateall.pl now -awstatsprog=/usr/lib/cgi-bin/awstats.pl
                if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
                        run-parts /etc/logrotate.d/httpd-prerotate; \
                fi \
        endscript
        postrotate
                invoke-rc.d nginx rotate >/dev/null 2>&1
                /usr/share/awstats/tools/awstats_buildstaticpages.pl -config=example.com -dir=/home/user-data/www/stats.example.com
        endscript
}
You’ll see the changes in the prerotate and postrotate scripts. Configure each domain separately in the postrotate script by copying the line and changing the domain name. To test your configuration and setup, you can run logrotate manually with the command sudo logrotate -f /etc/logrotate.d/nginx
This should run with lots of output and you should see files appearing in /home/user-data/www/stats.example.com. Point your browser to stats.example.com/awstats.example.com.html to see how it looks.
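Instead of repeating the buildstaticpages line once per site, the postrotate section can loop over the domains; a sketch of that fragment, where the domain list is an example:

```
        postrotate
                invoke-rc.d nginx rotate >/dev/null 2>&1
                for d in example.com other.com; do \
                        /usr/share/awstats/tools/awstats_buildstaticpages.pl -config="$d" -dir=/home/user-data/www/stats.example.com; \
                done \
        endscript
```

Logrotate hands this block to the shell, so a plain shell loop works here.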
Please remove the file awstats that might be installed in /etc/cron.d; it runs far too often and not in the way we prefer.
To make access easier, so you don’t have to remember all the links, you can create a simple index.html file in /home/user-data/www/stats.example.com with links to all the configured domains, like:
<a href="http://stats.example.com/awstats.example.com.html">Example</a>
<a href="http://stats.example.com/awstats.other.com.html">Other</a>
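Such an index page can also be generated from the domain list; a sketch, with OUT standing in for /home/user-data/www/stats.example.com and an example domain list:

```shell
# Build a simple index page linking to each AWStats report.
# On the box, set OUT=/home/user-data/www/stats.example.com.
OUT=$(mktemp -d)
for d in example.com other.com; do
  printf '<a href="http://stats.example.com/awstats.%s.html">%s</a><br>\n' "$d" "$d"
done > "$OUT/index.html"
```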
And point your browser to http://stats.example.com
Security
If you don’t want to make the information publicly available, we can introduce a simple security measure: a username/password combination using basic authentication. As we don’t have apache or httpd-tools installed, I used an online tool to generate the hashed password: https://wtools.io/generate-htpasswd-online
Use this site to enter a username/password combination; for instance, for admin/admin something similar to this should appear: admin:$apr1$y3uha0wx$EgVwp9d2c24zAJdU5bVK1/
Copy the result into a new file: /etc/nginx/htpasswd
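If you’d rather not paste a password into a website, the same apr1 hash (the Apache MD5 scheme that nginx accepts) can be generated locally with openssl; a sketch with example credentials:

```shell
# Generate an htpasswd line locally; "admin" and "S3cret!" are
# example credentials -- substitute your own.
HASH=$(openssl passwd -apr1 'S3cret!')
printf 'admin:%s\n' "$HASH"
# Copy the printed line into /etc/nginx/htpasswd
```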
To configure nginx create a stats.example.com.conf
file (where stats.example.com should be replaced by the domain name you use) in the location /home/user-data/www
with the following content:
location / {
        auth_basic "Administrator's Area";
        auth_basic_user_file /etc/nginx/htpasswd;
}
To enable this run the following commands:
/root/mailinabox/tools/web_update
sudo nginx -s reload
Next time you go to your statistics page you’ll need to enter the username and password to gain access.
Using a central virtual MySQL server
For all my projects I’ve been using dedicated virtual machines which I manage and configure using Vagrant. This made it easy to maintain a dedicated environment, without conflicting settings or libraries, that could easily be recreated on the fly. Every project gets its own virtual machine with all the components it needs. With at least 5 or 6 virtual machines running on my personal iMac (an older model from 2013) it was getting a bit busy. One common component installed on all my machines was MySQL, which is still my go-to database for simple projects. So I’ve been toying with the idea of creating a single virtual machine that only runs MySQL, for all my projects. I could even host this virtual database server on an even older Mac mini (from 2010) which I still keep around. It used to be my generic media machine until an Apple TV took over its role.
At first everything looked great: it all went well when running on the same host (the iMac). But when I hosted the virtual database server on the Mac mini, things started to go wrong and I couldn’t make a connection to the database. Locally everything worked; going over the network was the problem. Several things to check. Was my virtual machine accepting remote connections? Yes, I had enabled the option: config.vm.network "public_network"
Next was connectivity to MySQL. I learned that the skip-networking option, which was usually used to shield your database from the outside world, has been deprecated. Instead, network connectivity is bound to a network interface of your (virtual) server. There are three options:
- Only access from the local host
- Access from all networks
- Access only from one network
Only access from the local host
Here, the bind-address takes a value of 127.0.0.1, the loopback IP address. MySQL can only be accessed by applications running on the same host.
Access from all networks
To listen on all networks, set bind-address to 0.0.0.0. With this setting MySQL accepts connections from any network. To permit both IPv4 and IPv6 connections on all server interfaces, use bind-address = :: instead.
Access only from one network
MySQL listens only on a specific network interface. The value in this case is the IP address of that interface, for instance 192.168.1.1.
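In /etc/mysql/mysql.conf.d/mysqld.cnf these three options come down to a single bind-address line; a sketch of that section (the addresses are examples, and only one bind-address should be active):

```ini
[mysqld]
# 1. Only access from the local host (the default)
# bind-address = 127.0.0.1

# 2. Access from all networks (use "::" to also accept IPv6)
bind-address = 0.0.0.0

# 3. Access only via one specific interface
# bind-address = 192.168.1.1
```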
So I adjusted the settings for mysqld in /etc/mysql/mysql.conf.d/mysqld.cnf, changed bind-address=127.0.0.1 into bind-address=0.0.0.0, restarted mysqld, and everything connected and started working properly!
Next step is migrating all active projects to the virtual central MySQL server and see if there are any performance benefits.
Side note: I’ve learned that, to make sure you can rebuild your database server on the fly, you have to back up your data before you halt or destroy the virtual server. I’ve done this via a trigger in my Vagrantfile that dumps the database on demand to a shared folder. Just add the following lines to your Vagrantfile:
config.trigger.before [:halt, :destroy] do |trigger|
  trigger.warn = "Dumping database to /vagrant/Code/dbserveroutfile.sql"
  trigger.run_remote = {inline: "mysqldump -u<username> -p<password> --all-databases --single-transaction --events > /vagrant/Code/dbserveroutfile.sql"}
end