Setting Up Ghost on a VPS

If you find any bugs in this howto, please email me at rob@gigafable.com.

I ran my personal blog on WordPress for a short while. It didn't look great, and code syntax highlighting was impossible. Since Ghost is reportedly (according to AI) up to 1,900% faster than WordPress, I decided to self-host that instead.

Before I began this I designed a personal theme locally, originally based on the Ghost Starter Theme. If you're reading this, you're looking at it. How you might do that is beyond the scope of this article; the best place to get started is here.

Steps Involved

  1. Setup the VPS
  2. Security Upgrades
  3. sshd Configuration
  4. Changing the Hostname
  5. Changing the Timezone
  6. Setup Livepatch
  7. Setup a Non-Root User
  8. Setup Fail2ban
  9. Configure Postfix as an SMTP Relay
  10. Install Logwatch
  11. Setup Prometheus
  12. Install Docker
  13. Install MySQL
  14. Bonus: Docker Uptime Monitoring
  15. Have Systemd Manage Docker Containers
  16. Installing Ghost
  17. Setup Nginx and Get an SSL Certificate
  18. Install Matomo for Analytics (Optional)
  19. Get Notifications When Docker Packages Have Updates
  20. Setup Backups

Setup the VPS

First set up your SSH key(s) in Hetzner, then purchase a VPS with at least 4GB of RAM. Install a custom image (Ubuntu 24.04 LTS if you want to follow this guide without any tweaking) with LVM, leaving enough space in your volume group (10% perhaps, depending on your workload) for taking snapshots.

You do this by launching the VPS into rescue mode and running the installimage tool. Select your distro, and you will be presented with an editor where you can define LVM volume groups and logical volumes with the following syntax (it's documented in the editor too):

PART /boot ext4 512M
PART lvm vg0 all
LV vg0 root / ext4 33G

That will create a 33G volume mounted at the root directory /. 33G is a guess at requirements if you have 40GB available; adjust it to suit your workload. /boot has to be on ext4.
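That reserved space is what makes snapshots possible later. A hypothetical sketch (run as root; the vg0/root names match the installimage layout above, and the 3G copy-on-write size is an assumption — size it to how much churn you expect during the snapshot's lifetime):

```shell
# Snapshot the root LV into the free space in vg0 before a risky change
lvcreate --snapshot --name root-presnap --size 3G /dev/vg0/root

# When you're happy, drop the snapshot...
lvremove -y /dev/vg0/root-presnap

# ...or roll back instead (the merge completes when the LV is next
# activated, i.e. on reboot for the root filesystem):
# lvconvert --merge /dev/vg0/root-presnap
```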

Log in as root using your SSH key(s).

Security Upgrades

Run unattended-upgrades to apply any pending security updates:

apt-get update
unattended-upgrades

sshd Configuration

Create a local sshd config override and restart sshd. Change 41235 to a port you can use instead.

cat <<HERE >/etc/ssh/sshd_config.d/local_config.conf
Port 41235
PasswordAuthentication no
HERE
systemctl daemon-reload
systemctl restart ssh.socket

At this point I logged out and reconnected to check the new options were in effect. You can also confirm them with sudo sshd -T | grep -iE '^(port|passwordauthentication)'.

Changing the Hostname

Change your system's hostname (replace blog.example.com):

sudo hostnamectl set-hostname blog.example.com

Edit the hosts file /etc/hosts:

# Use nano if you don't know how to use vi
sudo nano /etc/hosts
# sudo vi /etc/hosts

Change this to reflect your own hostname and domain name by replacing blog and blog.example.com:

# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.debian.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
#     /etc/cloud/cloud.cfg or cloud-config from user-data
#
127.0.1.1 blog.example.com blog
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Log back out and log back in again to see the change.

Changing the Timezone

For a list of possible timezones:

timedatectl list-timezones

Change your timezone with:

timedatectl set-timezone <Region/City>

Setup Livepatch

Ubuntu offers kernel patching that can harden the kernel against security weaknesses without you having to reboot immediately.

  1. Sign up for an Ubuntu One account if you don't already have one.
  2. Visit the Ubuntu Pro subscription page. It's free for up to 5 machines.
  3. Attach your token:
pro attach <token>

Setup a Non-Root User

If you have set up Hetzner with your SSH keys from their console, you can now create a non-root user and add them to sudoers.

useradd -m -s /bin/bash yourusername
install -d -m 700 -o yourusername -g yourusername /home/yourusername/.ssh
install -m 600 -o yourusername -g yourusername ~/.ssh/authorized_keys /home/yourusername/.ssh

At this point I configured sudo to work with an alternative verification mechanism; I suggest you create a sudo user now too. Unless you also have a different authentication method, you need to set a password on your user account:

passwd yourusername

Enter a password when prompted, then add the user to sudoers:

echo "yourusername ALL=(ALL:ALL) ALL" >/etc/sudoers.d/yourusername
chmod 440 /etc/sudoers.d/yourusername
visudo -cf /etc/sudoers.d/yourusername

Now log out of root and log in as your sudo user. Optionally, stop root logging in via SSH; it's good practice, but I'm not going into that here.

Setup Fail2ban

Install fail2ban:

sudo apt install fail2ban

Make a file called /etc/fail2ban/jail.local with the following in it (you can modify these options if you want, but I found they help keep the number of failed logins on my sshd port down):

[DEFAULT]
# "bantime.increment" allows to use database for searching of previously banned ip's to increase a
# default ban time using special formula, default it is banTime * 1, 2, 4, 8, 16, 32...
bantime.increment = true

# "bantime.rndtime" is the max number of seconds using for mixing with random time
# to prevent "clever" botnets calculate exact time IP can be unbanned again:
bantime.rndtime = 27

# "bantime.factor" is a coefficient to calculate exponent growing of the formula or common multiplier,
# default value of factor is 1 and with default value of formula, the ban time
# grows by 1, 2, 4, 8, 16 ...
bantime.factor = 1

# "bantime.formula" used by default to calculate next value of ban time, default value below,
# the same ban time growing will be reached by multipliers 1, 2, 4, 8, 16, 32...
bantime.formula = ban.Time * (1<<(ban.Count if ban.Count<20 else 20)) * banFactor

# "bantime.multipliers" used to calculate next value of ban time instead of formula, corresponding
# previously ban count and given "bantime.factor" (for multipliers default is 1);
# following example grows ban time by 1, 2, 4, 8, 16 ... and if last ban count greater as multipliers count,
# always used last multiplier (64 in example), for factor '1' and original ban time 600 - 10.6 hours
#bantime.multipliers = 1 2 4 8 16 32 64
# following example can be used for small initial ban time (bantime=60) - it grows more aggressive at begin,
# for bantime=60 the multipliers are minutes and equal: 1 min, 5 min, 30 min, 1 hour, 5 hour, 12 hour, 1 day, 2 day
bantime.multipliers = 1 5 30 60 300 720 1440 2880

# A host is banned if it has generated "maxretry" during the last "findtime"
# seconds.
findtime  = 30m

[sshd]
# To use more aggressive sshd modes set filter parameter "mode" in jail.local:
# normal (default), ddos, extra or aggressive (combines all).
# See "tests/files/logs/sshd" or "filter.d/sshd.conf" for usage example and details.
filter = sshd
enabled = true
port    = 22

Replace port = 22 with your custom sshd port.

Restart fail2ban

sudo systemctl restart fail2ban

Configure Postfix as an SMTP Relay

First install postfix.

sudo apt install postfix

You will be prompted to answer a series of questions. I chose not to configure it ("No configuration"), and as soon as setup finished I ran:

sudo systemctl stop postfix

This is because it's not set up correctly yet.

Execute the following, replacing host.example.com with your fully qualified domain name:

echo "host.example.com" | sudo tee /etc/mailname

Edit /etc/postfix/main.cf and make the following changes to it, replacing smtp.example.com with the relay you are using. I use Brevo.

# Basic settings
myorigin = /etc/mailname
myhostname = $myorigin
inet_interfaces = loopback-only
mydestination =

# Relay to external SMTP server
relayhost = [smtp.example.com]:587
smtp_tls_security_level = encrypt
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous

# Restrict incoming connections to localhost
smtpd_client_restrictions = permit_mynetworks, reject

# Enable sender address rewriting
sender_canonical_maps = regexp:/etc/postfix/sender_canonical
header_checks = regexp:/etc/postfix/header_checks
canonical_classes = envelope_sender

# Forward local email
virtual_alias_domains = $mydestination
virtual_alias_maps = regexp:/etc/postfix/virtual

Create /etc/postfix/sasl_passwd to set up the relay password, replacing username and password with your relay credentials:

[smtp.example.com]:587 username:password

Convert the file to a Postfix lookup table:

sudo postmap /etc/postfix/sasl_passwd
sudo chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db

Create /etc/postfix/sender_canonical to rewrite all outgoing sender addresses to user@example.com (replace with an actual email address of yours):

/.*/ user@example.com

Create the header rewriting file /etc/postfix/header_checks:

/^From:.*$/ REPLACE From: Friendly Name <user@example.com>
/^$/ PREPEND From: Friendly Name <user@example.com>

Create the recipient rewriting file /etc/postfix/virtual, replacing the hostnames and the address with your own:

/@(localhost|hostname|hostname\.example.com)$/ user@example.com

This will rewrite all mail addressed to your local machine to user@example.com, which is probably what you want on a private server like this.
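Once the map is in place you can query it for real with postmap -q "root@localhost" regexp:/etc/postfix/virtual. As a self-contained sketch of which recipients the pattern catches (myhost stands in for your hostname):

```shell
# Stand-in for the /etc/postfix/virtual regexp above: which recipients
# get rewritten and which pass through untouched
pattern='@(localhost|myhost|myhost\.example\.com)$'
for rcpt in root@localhost admin@myhost.example.com someone@gmail.com; do
    if echo "$rcpt" | grep -qE "$pattern"; then
        echo "$rcpt -> user@example.com"
    else
        echo "$rcpt (unchanged)"
    fi
done
```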

Bonus tip

Some email providers (including Gmail) let you send to user+anything@example.com and it will be delivered to user@example.com. This allows you to create powerful filtering rules and triggers.
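For example, splitting the tag back out of such an address with plain shell parameter expansion (addresses hypothetical):

```shell
# Plus-addressing: the part after "+" survives delivery and can drive filters
addr="user+pager@example.com"
localpart="${addr%%@*}"       # strip the domain -> user+pager
echo "mailbox=${localpart%%+*} tag=${localpart#*+}"
# prints: mailbox=user tag=pager
```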

Try it out

Restart postfix:

sudo systemctl restart postfix

Test out your configuration using sendmail, once again replacing mydomain with your domain and me with the part of your email address before the @ symbol:

echo -e "Subject: Test pager\n\nThis email should arrive with me+pager@mydomain.com " | \
/usr/sbin/sendmail me+pager@mydomain.com && \
echo -e "Subject: Test other\n\nThis email should arrive with me@mydomain.com" | \
/usr/sbin/sendmail root && sudo tail -f /var/log/mail.log

If I (or you) made a mistake replicating this process, /var/log/mail.log will show you what actually happened. I'd also recommend sending mail to a different domain, such as a Gmail address, to see if it arrives correctly.

Debugging

I had to debug this process as I worked it out. If it's not working, you can try editing /etc/postfix/master.cf and adding -v to the end of the following lines:

smtp      unix  -       -       y       -       -       smtp
pickup    unix  n       -       y       60      1       pickup
cleanup   unix  n       -       y       -       0       cleanup

So it becomes (with the other lines still present):

smtp      unix  -       -       y       -       -       smtp -v
pickup    unix  n       -       y       60      1       pickup -v
cleanup   unix  n       -       y       -       0       cleanup -v

Then do sudo postfix reload. This will make /var/log/mail.log very chatty. You can then use sendmail or mail to craft tests.

Install Logwatch

sudo apt install logwatch

Then create /etc/logwatch/conf/services/fail2ban.conf with the following in:

Detail = Medium

Setup Prometheus

Prometheus is a monitoring system; it's somewhat complicated and has a lot of options. I suggest you do some research before setting it up, but I will share my config.

First install the required packages:

sudo apt install prometheus prometheus-alertmanager

The first job is to restrict it to localhost.

Edit /etc/default/prometheus

Replace the ARGS line with this one:

ARGS="--web.listen-address=127.0.0.1:9090"

Edit /etc/default/prometheus-node-exporter

Replace the ARGS line with this one:

ARGS="--collector.disable-defaults --collector.textfile --collector.cpu --collector.filesystem --collector.meminfo --collector.diskstats --collector.netdev --web.listen-address=127.0.0.1:9100"

Edit /etc/default/prometheus-alertmanager

Replace the ARGS line with this one.

ARGS="--web.listen-address=127.0.0.1:9093 --cluster.listen-address="

Restart the services:

sudo systemctl restart prometheus prometheus-node-exporter prometheus-alertmanager

You can now test it out by opening an SSH tunnel:

ssh -L 127.0.0.1:9090:127.0.0.1:9090 user@example.com

Now you can connect to Prometheus on local port 9090 with a web browser and explore.
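With the tunnel open you can also check it from the command line; Prometheus exposes simple health and readiness endpoints (the /-/ paths are standard Prometheus, the port is as configured above):

```shell
# Liveness and readiness checks through the SSH tunnel; each prints a
# short "Healthy"/"Ready" message when all is well
curl -s http://127.0.0.1:9090/-/healthy
curl -s http://127.0.0.1:9090/-/ready
```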

My minimal config

Here's my /etc/prometheus/prometheus.yml. Replace example with the hostname of your server

# Sample config for Prometheus.

global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'example'

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets: ['localhost:9093']
      labels:
        instance: 'example'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - "alerts.yml"
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    scrape_timeout: 5s

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']
        labels:
          instance: 'example'

  - job_name: node
    # If prometheus-node-exporter is installed, grab stats about the local
    # machine by default.
    static_configs:
      - targets: ['localhost:9100']
        labels:
          instance: 'example'

Here's my /etc/prometheus/alertmanager.yml. Replace machine@example.com with a relevant sender address for your server, and user@example.com with your own address (or user+something@example.com if your email provider supports that syntax).

# Sample configuration.
# See https://prometheus.io/docs/alerting/configuration/ for documentation.

global:
  # The smarthost and SMTP sender used for mail notifications.
  smtp_smarthost: 'localhost:25'
  smtp_from: 'machine@example.com'
  smtp_require_tls: false

route:
  group_by: ['alertname', 'instance']  # group alerts by alert and instance
  group_wait: 30s
  group_interval: 5m
  receiver: email_general

  routes:
    # Security update alerts route - throttle once per day
    - match_re:
        alertname: "SecurityUpdatesHeld|RebootRequired"
      receiver: email_security
      repeat_interval: 24h

    # Other alerts route - throttle once per hour
    - receiver: email_general
      repeat_interval: 1h

receivers:
- name: email_security
  email_configs:
  - to: user@example.com

- name: email_general
  email_configs:
  - to: user@example.com

And this is my /etc/prometheus/alerts.yml. The description fields tell you what each rule is for.

groups:
- name: node_exporter_alerts
  rules:
  - alert: NodeExporterDown
    expr: up{job="node"} == 0
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "Node Exporter down"
      description: "Node Exporter has been down for more than 1 minute."
  - alert: HostOutOfMemory
    expr: ((node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100) < 10
    for: 2m
    labels:
      severity: warning
    annotations:
      summary: Host out of memory (instance {{ $labels.instance }})
      description: "Node memory is filling up (< 10% left) ({{ $value }}%)"
  - alert: HostOutOfDiskSpace
    expr: ((node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"} / node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"}) * 100) < 10
    for: 2m
    labels:
      severity: warning
    annotations:
      summary: "Host out of disk space on {{ $labels.instance }})"
      description: "Disk is almost full (< 10% left) ({{ $value }}%)"
  - alert: HighCPUUsage
    expr: 100 - (avg by (instance)(irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
    for: 2m
    labels:
      severity: warning
    annotations:
      summary: "High CPU usage on {{ $labels.instance }}"
      description: "CPU usage is above 80% ({{ $value }}%) on {{ $labels.instance }} for more than 2 minutes."
  - alert: HighDiskUtilization
    expr: irate(node_disk_io_time_seconds_total[5m]) * 100 > 80
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "High disk utilization on {{ $labels.instance }} (device {{ $labels.device }})"
      description: "Disk {{ $labels.device }} on {{ $labels.instance }} has been over 80% busy for more than 5 minutes."
  - alert: SecurityUpdatesHeld
    expr: apt_upgrades_held{origin=~".*security.*"} > 0
    for: 0s
    labels:
      severity: warning
    annotations:
      summary: "Held security updates on {{ $labels.instance }}"
      description: "{{ $labels.arch }} security updates from {{ $labels.origin }} are held back on {{ $labels.instance }}."
  - alert: RebootRequired
    expr: node_reboot_required > 0
    for: 0s
    labels:
      severity: warning
    annotations:
      summary: "Reboot required on {{ $labels.instance }}"
      description: "A reboot is required on {{ $labels.instance }}."      

That's enough to get started. After making the relevant changes you can restart the services:

sudo systemctl restart prometheus prometheus-node-exporter prometheus-alertmanager

I suggest you check journalctl for errors, e.g. sudo journalctl -u prometheus --since "10 minutes ago".

Install Docker

sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list >/dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin

Edit /etc/postfix/main.cf again and find the lines:

mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
inet_interfaces = loopback-only

Change them by adding the private IP range docker uses:

mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 172.16.0.0/12
inet_interfaces = all

But There's a Catch

This allows the whole world to connect to Postfix, which is not what you want. I tried to get it to play nice by binding to the Docker bridge IP address instead, but even with a systemd unit that brings up the bridge IP and having postfix depend on it, I could not get postfix to bind the bridge IP on reboot. So we will enable the ufw firewall and disallow global access that way.

Execute the following commands, replacing 22 in the first line with your custom sshd port. This is important or you will lock yourself out. ufw warns that enabling it may disrupt existing SSH connections (presumably it only looks for rules about port 22), but if you allowed the right sshd port, it won't:

sudo ufw allow 22/tcp
sudo ufw allow in on lo to any port 25 proto tcp
sudo ufw allow from 172.16.0.0/12 to any port 25 proto tcp
sudo ufw deny in to any port 25 proto tcp
sudo ufw enable

Reload postfix

sudo systemctl reload postfix

Install MySQL

Run the following:

sudo docker network create blog-net
sudo docker volume create blog-mysql-data
sudo mkdir -p /usr/local/etc/docker
sudo chown root:docker /usr/local/etc/docker
sudo chmod 750 /usr/local/etc/docker

Create the file /usr/local/etc/docker/blog-env-mysql, replacing the values with sensible choices:

MYSQL_ROOT_PASSWORD=mysqlrootpass
MYSQL_DATABASE=ghostdb
MYSQL_USER=ghostusername
MYSQL_PASSWORD=ghostpassword
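One hypothetical way to fill in those secrets (openssl is preinstalled on Ubuntu; any random source works):

```shell
# Generate two random passwords for blog-env-mysql; openssl rand -base64 24
# emits 24 random bytes as 32 base64 characters
root_pw=$(openssl rand -base64 24)
ghost_pw=$(openssl rand -base64 24)
printf 'MYSQL_ROOT_PASSWORD=%s\nMYSQL_PASSWORD=%s\n' "$root_pw" "$ghost_pw"
```

Paste the output into the env file; the base64 alphabet is safe in a Docker --env-file, which splits each line only on the first =.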

Spin up the MySQL instance:

sudo docker run -d \
        --name blog-mysql \
        --network blog-net \
        --env-file /usr/local/etc/docker/blog-env-mysql \
        -v blog-mysql-data:/var/lib/mysql \
        mysql:8.0

Bonus: Docker Uptime Monitoring

I intend to control Docker through systemd and already had a robust email script on another server for Docker restart/stop alerts. Rather than reuse that, I thought I'd configure Prometheus to monitor for these changes. This is how I did it.

cd to your home directory if you're somewhere else.

cd ~

Name this script docker-uptime.sh

#!/bin/bash

# Output file for Prometheus node_exporter textfile collector

echo "# HELP docker_started_epoch_seconds Unix epoch time when the Docker container was started"
echo "# TYPE docker_started_epoch_seconds gauge"
echo "# HELP docker_finished_epoch_seconds Unix epoch time when the Docker container was finished"
echo "# TYPE docker_finished_epoch_seconds gauge"

container_ids=$(docker ps -a --format json | jq -r '.ID' | awk 'NF')

if [ -n "$container_ids" ]; then
    inspect_output=$(echo "$container_ids" | xargs docker inspect)
    echo "$inspect_output" | jq -r '
          .[] |
          select(type=="object" and has("Name") and has("State") and (.State|has("StartedAt")) and (.State|has("FinishedAt"))) |
          [(.Name | ltrimstr("/")), .State.StartedAt, .State.FinishedAt] | @tsv
        ' | while IFS=$'\t' read -r name started finished; do
        start_epoch=$(date -d "$started" +%s 2>/dev/null || echo 0)
        # If FinishedAt is the Docker zero time or empty, use 0
        if [ "$finished" = "0001-01-01T00:00:00Z" ] || [ -z "$finished" ]; then
            finish_epoch=0
        else
            finish_epoch=$(date -d "$finished" +%s 2>/dev/null || echo 0)
        fi
        echo "docker_started_epoch_seconds{name=\"$name\"} $start_epoch"
        if [ "$finish_epoch" -ne 0 ]; then
            echo "docker_finished_epoch_seconds{name=\"$name\"} $finish_epoch"
        fi
    done
fi
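The fiddly part of that script is the timestamp handling. In isolation (GNU date, which Ubuntu ships; the timestamps are made up):

```shell
# GNU date parses Docker's RFC 3339 StartedAt strings directly
started="2024-01-01T00:00:00.000000000Z"
date -u -d "$started" +%s      # 1704067200

# Docker reports "never finished" as the zero time; the script maps that to 0
finished="0001-01-01T00:00:00Z"
[ "$finished" = "0001-01-01T00:00:00Z" ] && echo 0
```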

Create this file called docker-uptime.service

[Unit]
Description=Collect docker uptime metrics for prometheus-node-exporter

[Service]
Type=oneshot
Environment=TMPDIR=/var/lib/prometheus/node-exporter
ExecStart=/bin/bash -c "/usr/local/share/prometheus-node-exporter-collectors/docker-uptime.sh | sponge /var/lib/prometheus/node-exporter/docker-uptime.prom"

Create this file called docker-uptime.timer

[Unit]
Description=Run the docker uptime metrics collector every 15 seconds
After=docker.service

[Timer]
OnActiveSec=0
OnUnitActiveSec=15sec

[Install]
WantedBy=timers.target

Execute these commands (the script needs jq; sponge in the service unit comes from moreutils):

sudo apt install jq moreutils
sudo mkdir -p /usr/local/share/prometheus-node-exporter-collectors
sudo cp ./docker-uptime.sh /usr/local/share/prometheus-node-exporter-collectors
sudo cp docker-uptime.timer docker-uptime.service /etc/systemd/system
sudo systemctl daemon-reload
sudo systemctl enable --now docker-uptime.timer
sudo systemctl start docker-uptime.service # prime the timer's OnUnitActiveSec clock

Add the following to the end of your /etc/prometheus/alerts.yml (make sure you get the spacing right; YAML is very picky about it):

  - alert: DockerContainerRestarted
    expr: increase(docker_started_epoch_seconds[1m]) > 0
    for: 0s
    labels:
      severity: warning
    annotations:
      summary: "Docker container restarted"
      description: "Container {{ $labels.name }} was restarted (start time increased)."
  - alert: DockerContainerStopped
    expr: docker_finished_epoch_seconds > docker_started_epoch_seconds
    for: 30s
    labels:
      severity: critical
    annotations:
      summary: "Docker container stopped"
      description: "Container {{ $labels.name }} has stopped (finished time > start time)."

Then run:

sudo systemctl reload prometheus

Check the web interface to make sure it's picking up the new metrics after the scrape interval (15s in the config above). Stop and restart some containers to test it out; if it doesn't work, check journalctl for errors.

Have Systemd Manage Docker Containers

We're going to configure systemd to keep the Docker containers alive, using a systemd template unit. It's a bit overkill for two containers, but since I did something similar for another project I'll reuse it here.

Name this file docker-container@.service

[Unit]
StartLimitIntervalSec=30
StartLimitBurst=5
Requires=docker.service
After=docker.service

[Service]
Type=simple
RuntimeDirectory=%i
Restart=always
RestartSec=3
ExecStart=/usr/bin/docker start -a %i
ExecStop=/usr/bin/docker stop %i

[Install]
WantedBy=multi-user.target
Then install the unit and hand the MySQL container over to systemd:

sudo docker stop blog-mysql
sudo cp docker-container@.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable docker-container@blog-mysql.service
sudo systemctl start docker-container@blog-mysql.service

That's all you need to do. Now systemd will attempt to keep the containers running.

Installing Ghost

First make a file called /usr/local/etc/docker/blog-env-ghost. It should contain the following options (change the mail__from address and all of the database__connection__ options to match what you used with MySQL earlier):

NODE_ENV=development
url=http://127.0.0.1:12468
database__client=mysql
database__connection__host=blog-mysql
database__connection__user=ghostusername
database__connection__password=ghostpassword
database__connection__database=ghostdb
mail__transport=SMTP
mail__options__host=172.17.0.1
mail__options__port=25
mail__options__secure=false
mail__from=Your Blog <blog@example.com>
server__host=0.0.0.0
server__port=12468
logging__transports=["stdout"]
logging__useLocalTime=true
security__staffDeviceVerification=true
mail__options__tls__rejectUnauthorized=false

The undocumented (by Ghost) option at the end is needed because postfix is using a self-signed certificate.

I changed it to run on non-standard port 12468 so it didn't conflict with my development environment.

Now spin up the ghost instance:

sudo docker volume create blog-ghost-data
sudo docker run -d \
  --name blog-ghost \
  --network blog-net \
  -p 127.0.0.1:12468:12468 \
  -v blog-ghost-data:/var/lib/ghost/content \
  --env-file /usr/local/etc/docker/blog-env-ghost \
  ghost:latest

Open an SSH tunnel to your server to test it out (from your local machine), replacing sshuser and blog-host-name with your own values:

ssh -L 127.0.0.1:12468:127.0.0.1:12468 sshuser@blog-host-name

You can now connect to Ghost from your local machine to configure it ready for the world. Open a browser and go to http://127.0.0.1:12468/ghost. Upload your theme and make any other pre-launch changes. I don't recommend publishing posts, as Ghost won't syndicate them in development mode and they would have the wrong URL at this point.

Once you are done, disconnect your SSH tunnel and change the following in /usr/local/etc/docker/blog-env-ghost, replacing blog.example.com with whatever your hostname is:

NODE_ENV=production
url=https://blog.example.com

Now execute the following:

sudo docker stop blog-ghost
sudo docker rm blog-ghost

sudo docker run -d --name blog-ghost \
--network blog-net \
-v blog-ghost-data:/var/lib/ghost/content \
--env-file /usr/local/etc/docker/blog-env-ghost \
ghost:latest

sudo docker stop blog-ghost
sudo systemctl enable docker-container@blog-ghost.service
sudo systemctl edit docker-container@blog-ghost.service

You will be in a text editor (nano by default), add the following between the comment lines:

[Unit]
Requires=docker-container@blog-mysql.service
After=docker-container@blog-mysql.service

Restart ghost:

sudo systemctl start docker-container@blog-ghost.service

Setup Nginx and Get an SSL Certificate

Before beginning this you may need to make a CAA record on your DNS provider matching the A record of your hostname. So for blog.example.com I would set the CAA record of blog.example.com (usually just blog in a DNS server configuration as the domain is implicit) to the following:

0 issue "letsencrypt.org"

I didn't do this at first and it worked fine, but certbot gave me errors when I simulated a renewal without it. It blamed my DNS server, so perhaps it would have been fine regardless. Either way, there's no harm in adding it; you can check the record with dig +short blog.example.com CAA.

Begin

First create an initial nginx conf at /usr/local/etc/docker/nginx.conf. Replace blog.example.com with your hostname.

worker_processes auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log;

events {
        worker_connections 768;
        # multi_accept on;
}

http {

	##
	# Basic Settings
	##

	sendfile on;
	tcp_nopush on;
	types_hash_max_size 2048;

	include /etc/nginx/mime.types;
	access_log /var/log/nginx/access.log;

	gzip on;
			
			
	server {
			listen 80;
			listen [::]:80;
			server_name blog.example.com;
	
			server_tokens off;
	
			location ^~ /.well-known/acme-challenge/ {
					default_type "text/plain";
					root /var/lib/certbot;
			}
	
			location / {
					return 301 https://blog.example.com$request_uri;
			}
	
	}

}
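Before launching it for real, you can syntax-check that file with a throwaway container (same image, nothing bound to the host):

```shell
# Validate the nginx config with "nginx -t" in a disposable container
sudo docker run --rm \
  -v /usr/local/etc/docker/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx nginx -t
```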

Create the web root directory and a logs directory

sudo mkdir -p /var/lib/certbot /var/log/letsencrypt

Spin up an nginx instance and allow port 80 through the firewall:

sudo ufw allow 80/tcp
sudo mkdir -p /var/log/nginx
sudo docker run -d --name blog-nginx --network blog-net -p 80:80 -p 443:443 -v /usr/local/etc/docker/nginx.conf:/etc/nginx/nginx.conf:ro -v /var/lib/certbot:/var/lib/certbot:ro -v /var/log/nginx:/var/log/nginx  nginx

Obtain the SSL certificate

Run certbot:

sudo mkdir -p /usr/local/etc/letsencrypt
sudo docker run -it --rm --name blog-certbot -v /usr/local/etc/letsencrypt:/etc/letsencrypt:rw -v /var/lib/certbot:/var/certbot/www:rw -v /var/log/letsencrypt:/var/log/letsencrypt certbot/certbot certonly -n --agree-tos --webroot --webroot-path /var/certbot/www -d blog.example.com

It should successfully download your SSL certificate. Add the following to the http block right after the existing server entry, replacing blog.example.com with your hostname:

server {
  listen 443 ssl;
  listen [::]:443 ssl;
  http2 on;
  server_name blog.example.com;

  access_log /var/log/nginx/ghost.access.log;
  error_log /var/log/nginx/ghost.error.log;
  client_max_body_size 20m;

  ssl_protocols TLSv1.2 TLSv1.3;
  ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
  ssl_prefer_server_ciphers on;
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:10m;

  ssl_certificate     /etc/letsencrypt/live/blog.example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/blog.example.com/privkey.pem;

  location / {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://blog-ghost:12468;
  }
}

After that, allow HTTPS through the firewall and relaunch nginx with the certificate directory mounted:

sudo ufw allow 443/tcp
sudo docker stop blog-nginx
sudo docker rm blog-nginx
sudo docker run -d --name blog-nginx --network blog-net -p 80:80 -p 443:443 -v /usr/local/etc/docker/nginx.conf:/etc/nginx/nginx.conf:ro -v /var/lib/certbot:/var/lib/certbot:ro -v /var/log/nginx:/var/log/nginx -v /usr/local/etc/letsencrypt:/etc/letsencrypt  nginx

Verify auto-renew works:

sudo docker run -it --rm --name blog-certbot -v /usr/local/etc/letsencrypt:/etc/letsencrypt:rw -v /var/lib/certbot:/var/certbot/www:rw -v /var/log/letsencrypt:/var/log/letsencrypt certbot/certbot renew --dry-run

Assuming it all works okay, enable the nginx container under systemd:

sudo systemctl enable docker-container@blog-nginx.service
sudo systemctl edit docker-container@blog-nginx.service

In the file that opens, insert this between the commented lines:

[Unit]
Requires=docker-container@blog-ghost.service
After=docker-container@blog-ghost.service

Now execute:

sudo crontab -e

Add this to the file that opens (notice there is no --dry-run this time, and no -it since cron has no terminal):

7 13 * * * /usr/bin/docker run --rm --name blog-certbot -v /usr/local/etc/letsencrypt:/etc/letsencrypt:rw -v /var/lib/certbot:/var/certbot/www:rw -v /var/log/letsencrypt:/var/log/letsencrypt certbot/certbot renew >/dev/null
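Note that nginx only picks up a renewed certificate when it reloads. One simple approach (my own addition, not something certbot does for you here) is a second cron entry that reloads nginx shortly after the renewal attempt:

```
37 13 * * * /usr/bin/docker exec blog-nginx nginx -s reload >/dev/null 2>&1
```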

Optional - Reboot and test it all comes back online correctly

I'd rather find out now if this doesn't function as intended, so I rebooted while I was at the terminal. If you'd rather not reboot, at least run the following:

sudo docker stop blog-nginx
sudo systemctl start docker-container@blog-nginx.service

Reboot

sudo shutdown -r now

Have Fail2ban scan for bots on nginx

Edit /etc/fail2ban/jail.local and add these lines:

[nginx-botsearch]
enabled = true
port = http,https
logpath = /var/log/nginx/access.log
          /var/log/nginx/ghost.access.log
maxretry = 2

Reload fail2ban

sudo systemctl reload fail2ban

Install Matomo for Analytics (Optional)

As this is optional, if you don't do it, skip to the section Get Notifications When Docker Packages Have Updates.

Important: If you don't get visitor consent before enabling analytics, you may be in breach of the UK GDPR, and other countries have similar regulations. I handled consent by editing my theme, though it could also be done with code injection; I'm not going to cover that here, at least for now. If you want to implement it yourself, CookieConsent is a free library that does the work, and there are easier-to-use paid options available if you search for them.

I was curious about who is visiting my site, so I opted for Matomo Analytics. I didn't use the blog prefix here because my blog server is over-resourced and I may reuse the analytics instance for other projects. Install it as follows:

Steps:

  • Create MySQL database
  • Launch Matomo via docker
  • Finish configuring Matomo
  • Relaunch Matomo without binding localhost
  • Get SSL cert for Matomo
  • Link nginx SSL to Matomo
  • Add script header to ghost site
  • Configure archival cronjob

First create a MySQL database for Matomo:

sudo docker exec -it blog-mysql mysql -p

You will be prompted for your MySQL root password. Enter it, then execute the following SQL commands (replacing your_password_here):

CREATE DATABASE matomo;
CREATE USER 'matomo'@'%' IDENTIFIED WITH caching_sha2_password BY 'your_password_here';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, INDEX, DROP, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON matomo.* TO 'matomo'@'%';
GRANT FILE ON *.* TO 'matomo'@'%';
FLUSH PRIVILEGES;
exit

Edit the file /usr/local/etc/docker/env-matomo

Add this to it:

MATOMO_DATABASE_HOST=blog-mysql
MATOMO_DATABASE_ADAPTER=mysql
MATOMO_DATABASE_USERNAME=matomo
MATOMO_DATABASE_PASSWORD=your_matomo_db_password_here
MATOMO_DATABASE_DBNAME=matomo
PHP_MEMORY_LIMIT=2G

For small sites in a resource-constrained environment, you can lower the PHP memory limit to 512M.

Setup Matomo

sudo docker run -d -p 127.0.0.1:20800:80 --network blog-net -v matomo:/var/www/html --env-file /usr/local/etc/docker/env-matomo --name matomo matomo

Complete local setup by opening a port forward from your local machine:

ssh -L 127.0.0.1:20800:127.0.0.1:20800 sshuser@blog.example.com

Then connect to http://localhost:20800 and run through the setup wizard (setting your superuser password and so on) until you're dropped back at the login prompt.

Execute:

sudo docker stop matomo
sudo docker rm matomo
sudo docker run -d --network blog-net -v matomo:/var/www/html --env-file /usr/local/etc/docker/env-matomo --name matomo matomo

Get an SSL Certificate

In your DNS provider, use whatever interface you need to add an A record for matomo.example.com (use your domain, not example.com) pointing at the right IP. Also add a CAA record for the host with the following in it:

0 issue "letsencrypt.org"
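For reference, if your provider displays records in BIND zone-file syntax, the record would look roughly like this (the TTL of 3600 is an arbitrary example):

```
matomo.example.com.    3600    IN    CAA    0 issue "letsencrypt.org"
```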

You may have to wait a while for the changes to propagate. You can check whether your DNS provider has started serving the changes with:

dig matomo.example.com NS

This will show the authoritative name server with something like this:

;; AUTHORITY SECTION:
yourdomain.com.          300     IN      SOA     ns.example.com. hostmaster.yourdomain.com. 2025070402 16384 2048 1209600 300

Grab whatever appears after SOA and use it to replace ns.example.com in the following command (replace matomo.example.com with your hostname):

dig @ns.example.com matomo.example.com

Add the following block to /usr/local/etc/docker/nginx.conf in the http block, next to your other blocks, replacing matomo.example.com with your own hostname.

server {
  listen 80;
  listen [::]:80;
  server_name matomo.example.com;

  server_tokens off;

  location ^~ /.well-known/acme-challenge/ {
    default_type "text/plain";
    root /var/lib/certbot;
  }

  location / {
    return 301 https://matomo.example.com$request_uri;
  }
}

Execute the following to get an SSL certificate, replacing matomo.example.com with your hostname:

sudo docker exec -it blog-nginx nginx -s reload
sudo docker run -it --rm --name blog-certbot -v /usr/local/etc/letsencrypt:/etc/letsencrypt:rw -v /var/lib/certbot:/var/certbot/www:rw -v /var/log/letsencrypt:/var/log/letsencrypt certbot/certbot certonly -n --agree-tos --webroot --webroot-path /var/certbot/www -d matomo.example.com

Now add another server block to the http section of /usr/local/etc/docker/nginx.conf, replacing matomo.example.com in all places with your Matomo hostname:

server {
  listen 443 ssl;
  listen [::]:443 ssl;
  http2 on;
  server_name matomo.example.com;

  access_log /var/log/nginx/matomo.access.log;
  error_log /var/log/nginx/matomo.error.log;
  client_max_body_size 20m;

  ssl_protocols TLSv1.2 TLSv1.3;
  ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
  ssl_prefer_server_ciphers on;
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:10m;

  ssl_certificate     /etc/letsencrypt/live/matomo.example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/matomo.example.com/privkey.pem;

  location / {
    proxy_pass http://matomo:80;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Host $host;
    proxy_read_timeout 900;
    client_max_body_size 0;
  }
}

We need to update Matomo with the correct hostname now we're not using it through an SSH tunnel, as well as a few other settings. Replace matomo.example.com with your hostname:

sudo docker exec -it matomo sed -i 's/localhost:20800/matomo.example.com/' /var/www/html/config/config.ini.php
sudo docker exec -it matomo sed -i '/^\[General\]/a force_ssl = 1' /var/www/html/config/config.ini.php
sudo docker exec -it matomo sed -i '/^\[General\]/a assume_secure_protocol = 1' /var/www/html/config/config.ini.php
sudo docker exec -it matomo sed -i '/^\[General\]/a proxy_client_headers[] = HTTP_X_FORWARDED_FOR' /var/www/html/config/config.ini.php
sudo docker exec -it matomo sed -i '/^\[General\]/a proxy_host_headers[] = HTTP_X_FORWARDED_HOST' /var/www/html/config/config.ini.php
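If you'd like to see what those sed edits actually do before touching the container, you can rehearse them on a scratch copy. The ini fragment here is illustrative, not your real config:

```shell
# Rehearse the config edits on a throwaway copy.
cat > /tmp/config.ini.php <<'EOF'
[General]
trusted_hosts[] = "localhost:20800"
EOF

# Same substitutions as above, against the scratch file.
sed -i 's/localhost:20800/matomo.example.com/' /tmp/config.ini.php
sed -i '/^\[General\]/a force_ssl = 1' /tmp/config.ini.php

cat /tmp/config.ini.php
```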

Now restart nginx:

sudo systemctl restart docker-container@blog-nginx.service

Go to https://matomo.example.com and check it works.

You should be greeted by your Matomo instance!

Adding the Analytics Header

Log in and Matomo will give you the JavaScript snippet to add to your Ghost instance. Copy it to the clipboard, then:

  1. Log in to Ghost's administration dashboard.
  2. Head over to the settings area (click the cog).
  3. Go to Code injection at the bottom.
  4. Select Site header and paste in the code from Matomo.
  5. Click Save.
  6. Load your main blog page to check everything is running okay.
  7. Go back to Matomo.

If all is well you'll now be greeted by an analytics dashboard. Congratulations!

Have systemd Manage Matomo

Now we need to have systemd manage Matomo. Execute:

sudo systemctl enable docker-container@matomo.service
sudo systemctl edit docker-container@matomo.service

You will be greeted by a text editor. Insert the following:

[Unit]
Requires=docker-container@blog-mysql.service
After=docker-container@blog-mysql.service

Now:

sudo docker stop matomo && sudo systemctl start docker-container@matomo.service

systemd should now keep your process alive and start it on reboot.

Setting up archiving

First disable the browser auto-archiving:

sudo docker exec -it matomo sed -i '/^\[General\]/a browser_archiving_disabled_enforce = 1' /var/www/html/config/config.ini.php

Now check the command works for you:

sudo docker exec -it matomo su -s /bin/sh www-data -g www-data -c '/usr/local/bin/php /var/www/html/console core:archive'

Assuming it's all good, add it as an hourly cronjob:

sudo crontab -e

Add the following:

44 * * * * /usr/bin/docker exec matomo su -s /bin/sh www-data -g www-data -c '/usr/local/bin/php /var/www/html/console core:archive' >/dev/null 2>&1

Have fail2ban ban bots on your Matomo

Edit /etc/fail2ban/jail.local and make the [nginx-botsearch] section look like this:

[nginx-botsearch]
enabled = true
port = http,https
logpath = /var/log/nginx/access.log
          /var/log/nginx/ghost.access.log
          /var/log/nginx/matomo.access.log
maxretry = 2

Get Notifications When Docker Packages Have Updates

For this we will use Diun (crazymax/diun). First set it up:

sudo mkdir -p /usr/local/etc/docker/diun

Create these files:

/usr/local/etc/docker/diun/env:

TZ=Europe/London
LOG_LEVEL=info
LOG_JSON=false

/usr/local/etc/docker/diun/diun.yml:
(replace blog@example.com and you@example.com)

watch:
  workers: 10
  schedule: "0 */6 * * *"
  firstCheckNotif: false

notif:
  mail:
    host: 172.17.0.1
    port: 25
    ssl: false
    insecureSkipVerify: true
    from: blog@example.com
    to:
      - you@example.com
    templateTitle: "{{ .Entry.Image }} released"
    templateBody: |
      Docker tag {{ .Entry.Image }} has been updated.

providers:
  file:
    filename: /etc/diun/images.yml

/usr/local/etc/docker/diun/images.yml:
(Omitting matomo if you didn't install it)

- name: crazymax/diun:latest
- name: certbot/certbot:latest
- name: ghost:latest
- name: nginx:latest
- name: matomo:latest
- name: mysql:8.0
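As an alternative to maintaining images.yml by hand, Diun also ships a Docker provider that watches running containers. A sketch (check the Diun documentation for the exact keys; it also needs the Docker socket mounted into the container):

```yaml
providers:
  docker:
    watchByDefault: true
```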

Now try it out:

sudo docker run -d --name diun --env-file /usr/local/etc/docker/diun/env -v diun-data:/data -v /usr/local/etc/docker/diun:/etc/diun:ro crazymax/diun:latest

Then:

sudo docker logs diun

If everything looks okay, test the notifier:

sudo docker exec diun diun notif test

And if you're good to go:

sudo docker stop diun
sudo systemctl enable docker-container@diun.service
sudo systemctl start docker-container@diun.service

Setup Backups

I use BorgBase and borgmatic for backups.

sudo pipx install borgmatic
sudo apt install mysql-client-core-8.0

Now you can back up whatever you like. Borgmatic can use LVM snapshots so your filesystem is captured in a consistent state, and it can take MySQL dumps so your databases are too. (The other way to do it is to lock database writes while a snapshot is taken and back up the data directory.)

I like to keep all my system changes in git repositories and back those up, along with the docker volumes and my configuration files. Backing up things like Docker images and apt packages is a waste of backup space for me.
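If you don't already track those directories in git, here's a minimal sketch of putting one under version control. The track_dir function is my own helper, not part of any tool mentioned here; run it as root for system directories like /etc:

```shell
# track_dir: turn a directory into a git repository with a baseline commit.
track_dir() {
    cd "$1" || return 1
    git init -q
    git add -A
    # Inline identity so the commit works even without a global git config.
    git -c user.name=backup -c user.email=backup@localhost commit -qm "baseline"
}

# Usage (as root for system directories):
# track_dir /etc
# track_dir /usr/local
```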

Here's (roughly) my configuration to get you started. You really should read the official reference, but this config will help. Replace ssh://your_borgbase_repo_here with your BorgBase repo, your_borgbase_password_here with your BorgBase password, and mysql_root_pass with your MySQL root password.

If you back up your docker volumes like I do, exclude your MySQL data directory when you're already dumping it in SQL format (as this config does). Also exclude /var/lib/docker/volumes/backingFsBlockDev (as this config does), otherwise Borg will get stuck in a loop reading rubbish from it forever.

source_directories:
    - /root/package-selections.txt
    - /etc/.git
    - /usr/local/.git
    - /var/spool/cron/crontabs/root
    - /var/lib/docker/volumes
repositories:
    - path: ssh://your_borgbase_repo_here
      label: backups
      encryption: repokey-blake2
      append_only: true
exclude_patterns:
    - /var/lib/docker/volumes/backingFsBlockDev
    - /var/lib/docker/volumes/blog-mysql-data
user_runtime_directory: /var/lib/borgmatic-runtime
user_state_directory: /var/lib/borgmatic
encryption_passphrase: "your_borgbase_password_here"
read_special: true
commands:
    - before: everything
      when:
          - create
      run:
          - dpkg --get-selections >/root/package-selections.txt
compression: zlib,4
retries: 3
retry_wait: 300
keep_daily: 7
keep_weekly: 2
keep_monthly: 3
checks:
    - name: repository
      frequency: 2 weeks
log_file: /var/log/borgmatic/backup.log
skip_actions:
    - compact
mysql_databases:
    - name: all
      hostname: localhost
      username: root
      restore_username: root
      password: "mysql_root_pass"
      format: sql
lvm:
    snapshot_size: 4.5GB

Before launching it, though, we need to re-create the MySQL container to bind its port to localhost so borgmatic can connect to it.

sudo systemctl stop docker-container@blog-mysql.service
sudo docker run -d -p 127.0.0.1:3306:3306 \
	--name blog-mysql --network blog-net \
	--env-file /usr/local/etc/docker/blog-env-mysql \
	-v blog-mysql-data:/var/lib/mysql \
	mysql:8.0
sudo docker stop blog-mysql
sudo systemctl start docker-container@blog-ghost.service \
	docker-container@blog-nginx.service \
	docker-container@matomo.service \
	docker-container@blog-mysql.service

Now launch a dry-run:

sudo /root/.local/bin/borgmatic -c /usr/local/etc/borgmatic.yml -n --dry-run -v 1 --progress create

If everything is okay, run it for real and check that it produces no output on a successful backup:

sudo /root/.local/bin/borgmatic -c /usr/local/etc/borgmatic.yml -v 0

If you get no output (success), add it to your crontab (or systemd).

sudo crontab -e

Add:

15 1 * * * /root/.local/bin/borgmatic -c /usr/local/etc/borgmatic.yml -v 0

And your system will backup daily at 1:15 am.
