Years ago I set up a blog and a couple of other things on a shared hosting server; this was so long ago that virtualized servers were rare. Long overdue, I am moving to a virtual private server (VPS), which gives me a lot more flexibility. The first step is to set up some basic provisioning (I use Ansible) and to secure the server (using ufw and fail2ban).
First things first: I decided on a budget VPS from Contabo, at €9.99 (now €7.99!) per month, with a further €2/month discount because I bought 3 months up front. The server is in Germany, so pings are not terrible from Norway (where I live). You can check how my server compares on ServerBear.
I have chosen Debian as my server OS, so it was delivered with a pretty minimal Debian stable/jessie (initially Debian 8.3), and it can be accessed using ssh.
Everything on GitHub
My plan is to put my setup in a GitHub repo called formasjon, and you are fairly free to reuse anything and everything if you find it interesting, at your own peril, of course. (‘Formasjon’ is the Norwegian word for ‘formation’.)
Development and staging in Vagrant boxes
To reduce the risk of making changes I use two local virtual machines (VMs) running in VirtualBox. These are set up using Vagrant. Both tools are installed on my Mac using Homebrew.
The two servers:
devmain - I poke around in this one manually while developing the automation scripts.
localmain - I use this for staging to make sure my automation works as it should.
The setup for the VMs is described in a Vagrantfile.
The installations of Debian in these VMs differ a little from the VPS in the packages installed (the VMs have more); I guess I'll figure out how to handle that as I go along.
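The Vagrantfile could look roughly like this. This is a minimal sketch, not my actual file: the box name (the community debian/jessie64 box) and the private-network IPs are assumptions for illustration.

```ruby
# Hypothetical Vagrantfile sketch: two Debian VMs, devmain and localmain.
Vagrant.configure("2") do |config|
  # Assumed box; any Debian jessie box would do.
  config.vm.box = "debian/jessie64"

  config.vm.define "devmain" do |dev|
    dev.vm.hostname = "devmain"
    dev.vm.network "private_network", ip: "192.168.33.10" # example IP
  end

  config.vm.define "localmain" do |staging|
    staging.vm.hostname = "localmain"
    staging.vm.network "private_network", ip: "192.168.33.11" # example IP
  end
end
```

With this in place, `vagrant up devmain` brings up just the development VM.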
To secure communication with the server I have generated a public/private key pair using OpenSSH and copied the public key to the VMs and the VPS.
```shell
cd provision                                         # <1>
ssh-keygen -t rsa -b 4096 -C "formasjon"             # <2>
./copy-public-key-dev.sh                             # <3>
./copy-public-key-staging.sh
./copy-public-key-prod.sh <sshuser> <vps-server-ip>  # <4>
```
<1> I do provisioning operations in this folder
<2> Make sure to save the keys in the provision/ssh-folder
<3> I made a couple of helper scripts for doing the copying (yeah, I know, duplication)
<4> The user and IP are not in the “prod”-script
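The “prod” copy script might be sketched like this; the function name and the key path are my guesses, not the actual script, and `ssh-copy-id` does the real work:

```shell
#!/bin/sh
# Hypothetical sketch of copy-public-key-prod.sh: validate the arguments,
# then hand the public key to ssh-copy-id on the target server.
copy_public_key() {
  if [ "$#" -ne 2 ]; then
    echo "usage: copy-public-key-prod.sh <sshuser> <server-ip>" >&2
    return 1
  fi
  # Assumed key location, matching the provision/ssh folder from step <2>.
  ssh-copy-id -i ssh/id_rsa.pub "$1@$2"
}
```

Calling it without arguments just prints the usage line and fails, so a forgotten parameter can't half-copy a key.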
I use Ansible for automation and running commands remotely, which will be very helpful when I have a large formation of servers flying among the clouds…
I use three inventory files for Ansible: dev, staging, and an encrypted one for prod, as you can see from this commit. The inventory files contain information about the server(s) in each environment and about how to connect to them. By encrypting the prod inventory I can hide potentially sensitive information. Ansible has a feature called Ansible Vault that makes it quite easy to work with.
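For reference, an inventory file for the staging VM could look something like this sketch; the group name and the Vagrant port/user values are assumptions, not my actual file:

```ini
# staging - hypothetical inventory sketch
[main]
localmain ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant
```

The prod file has the same shape, but with the real user and IP, which is why it is the one worth encrypting.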
To edit the prod inventory:
```shell
ansible-vault edit prod
```
If you get tired of typing the vault passphrase you can add --vault-password-file ~/.vault_pass to the command and store the passphrase in that file.
The main Ansible playbook
My main playbook is called site.yml and should be possible to run at all times.
The complete ansible command to run it is:
```shell
ansible-playbook -v --vault-password-file ~/.vault_pass --key-file=ssh/id_rsa -i prod site.yml
```
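The playbook itself can stay tiny. This is only a sketch of the shape of site.yml, with role names I made up for illustration:

```yaml
---
# Hypothetical site.yml sketch: bootstrap first, then the real setup roles.
- hosts: all
  become: yes
  roles:
    - prereqs   # assumed name: gets Python etc. in place (see below)
    - base      # assumed name: firewall, fail2ban and friends
```

Keeping everything behind site.yml is what makes "should be possible to run at all times" achievable: every role has to be idempotent.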
My VPS did not have all the packages required to run most of the Ansible modules (Python being the most obvious missing piece). To fix this I started with a role for getting the prerequisites in place. You’ll find everything in this commit.
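The usual trick for this chicken-and-egg problem is the raw module, which only needs a working shell on the target, no Python. A sketch of such a bootstrap task (not necessarily what my role does):

```yaml
# Hypothetical bootstrap task using the raw module; runs over plain ssh
# without needing Python on the target.
- name: Install Python so regular Ansible modules can run
  raw: test -e /usr/bin/python || (apt-get -y update && apt-get -y install python-minimal)
```

The `test -e` guard keeps the task from reinstalling on every run. Note that a play containing only raw tasks should also set `gather_facts: no`, since fact gathering itself needs Python.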
Repeatedly typing the same arguments when invoking Ansible is a bit tedious, so I have made a couple of helper scripts:
run - to run ad hoc Ansible commands, e.g. ./run all -i dev -a "apt list --upgradable" or ./run all -i prod -m ping
play - to run Ansible playbooks, e.g. ./play -i dev site.yml
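The helpers are just thin wrappers that prepend the arguments I always pass. A sketch of the idea behind `run`; the function name is made up, and for clarity it only composes the command string instead of exec'ing it:

```shell
#!/bin/sh
# Hypothetical sketch of the `run` helper: prepend the standing arguments
# (key file, etc.) to an ansible invocation.
build_run_cmd() {
  hosts="$1"
  shift
  # The real script would exec this; here we just print it.
  printf 'ansible %s --key-file=ssh/id_rsa %s' "$hosts" "$*"
}

build_run_cmd all -i dev -m ping
# prints: ansible all --key-file=ssh/id_rsa -i dev -m ping
```

`play` is the same idea around ansible-playbook, with the vault password file added.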
A first step in making my server a bit more secure is to upgrade packages. To check if there is a need I can do:
```shell
./run all -i dev -a "apt list --upgradable"
```
To upgrade I use upgrade.yml, a very short Ansible playbook. To run it I can do:

```shell
./play -i dev upgrade.yml
```
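Such an upgrade playbook can be as small as this sketch, assuming the apt module (which the prerequisites role makes usable):

```yaml
---
# Minimal upgrade playbook sketch. "safe" maps to apt-get upgrade
# (no package removals), as opposed to dist-upgrade.
- hosts: all
  become: yes
  tasks:
    - name: Update the package cache and upgrade installed packages
      apt:
        update_cache: yes
        upgrade: safe
```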
Firewall and fail2ban
I use ufw for my firewall as it is a bit easier to use than iptables directly; Ansible also has a module for it. The role has a simple set of tasks.
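The important detail is ordering: allow (or rather rate-limit) SSH before switching the default policy to deny, or you can lock yourself out. A sketch of what such tasks look like with Ansible's ufw module; this is illustrative, not my exact role:

```yaml
# Sketch of ufw tasks: rate-limit SSH first, then deny the rest and enable.
- name: Rate-limit incoming SSH (ufw limit also allows the traffic)
  ufw:
    rule: limit
    port: "22"
    proto: tcp

- name: Deny all other incoming traffic and enable the firewall
  ufw:
    state: enabled
    direction: incoming
    policy: deny
```

`rule: limit` blocks an IP that opens too many SSH connections in a short window, which already blunts brute-force attempts before fail2ban gets involved.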
Also, to limit the chance of brute-forcing password-based SSH logins I added fail2ban. Thankfully someone has already created a role for fail2ban (pulled from Ansible Galaxy); all I did was put it in my base-setup.yml, add some host-specific variables, and press play (or rather ./play -i dev site.yml). As a side note, I suggest you use fail2ban and not denyhosts for this, as denyhosts seems to be abandonware.
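The host-specific variables depend entirely on which Galaxy role you pull in, so treat this fragment as purely illustrative values, not the role's actual interface:

```yaml
# Hypothetical host_vars sketch for fail2ban; names/values are examples only.
fail2ban_bantime: 600     # seconds an offending IP stays banned
fail2ban_maxretry: 5      # failed attempts before a ban
fail2ban_services:
  - name: ssh
    port: ssh
```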
The whole firewall/fail2ban-setup is in this commit.
The final git state can be found on the tag blog-2016-03-02.
Btw … any server with ssh exposed is continually being attacked, so you should make sure your servers are secure.
Now that my server is fairly secure I can take a few days to ponder my next steps for my VPS. Perhaps OpenVPN? Suggestions?