Author: Gergő Fándly
Date: 2024-11-22
Setting up a VPS and installing your own services on it can be a viable alternative to paying for subscription-based services. I’ve been operating my self-hosted services for about 10 years now. First I had all of them running on a Raspberry Pi B+ (the first generation), but after some time I switched to a VPS, which has been running for about 6 years now without any major modification except for software updates.
But times have changed since then and a lot of cool new stuff has come out. My primary requirement has also changed from hosting PHP-based sites to hosting various dockerized services, so I decided it’s about time to start fresh.
And since I’m rebuilding my VPS anyway, I decided it would make a great guide for setting up a VPS for yourself or for your small business.
In this guide I’m going to show you how to set up secure SSH authentication to your VPS, configure the firewall, install PostgreSQL, host some services using Docker containers, and keep the whole thing backed up.
The first step is to choose a VPS provider suitable for your needs. I’d recommend looking for a local provider, since they tend to be cheaper and the network latency is also frequently lower. In my case I’ve been going with Romarg for the past few years. Although they still don’t support IPv6, their service is reliable and there are no charges for network traffic.
For 20 euros a month (excluding VAT) I got a VPS with 4 vCores, 8 GB RAM and 100 GB SSD with unlimited traffic on an up to 1000Mbps uplink. That’s a pretty good deal in my opinion.
Before creating the VPS you will have the chance to select an operating system. I like to go with Debian. It’s robust, reliable, easy to use, and can be seamlessly upgraded between major versions, so there’s no need to reinstall it.
When first starting up your VPS you will be provided with some basic means to log in to your server. In the case of Romarg you get a randomly generated root password; in other cases you have to provide a public SSH key when creating your instance.
Log in to your instance over SSH, for example ssh root@<server-ip>.
I would also recommend setting up a DNS record for your VPS so you don’t have to use the IP address all the time. It also makes it easier for you if your IP address changes. In the case of my server, it has the hostname citadel.systest.eu.
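For reference, such a DNS entry is just an ordinary A record; in zone-file notation it would look something like this (the IP address is a placeholder):

```text
vps.example.com.    3600    IN    A    203.0.113.10
```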
Now that you’re logged in, do some basic setup.
On Debian you get a pretty ugly, colorless shell for the root user. I like to fix this as the first step:
cp /etc/skel/.bashrc ~/.bashrc
source ~/.bashrc
It’s already better, isn’t it?
Now we can set the hostname of our system to match the one we’ve configured in the DNS record:
hostnamectl hostname vps.example.com
And since we can’t trust the VPS provider’s images to be up to date, we do a system update:
apt update
apt upgrade
Now we have an updated system that has a name.
As a security best practice, you really don’t want to log in as root. In fact, no one should be able to log in as root directly. Instead, create a separate user account for everyone managing the server and give them sudo access.
Let’s create a user called john:
useradd -m -s /bin/bash john # -m creates the home directory; without -s, Debian defaults to /bin/sh
Now let’s think about how we’re going to log in. You should not use passwords for SSH login. They are brute-forceable. Instead, use PKI. If you want to keep it simple you can use a single SSH key, but I highly prefer multi-factor authentication. This is the setup I currently prefer: the first factor is an SSH key stored on your device (for example on your laptop) and the second factor is either another SSH key stored on a FIDO2 authenticator, or a TOTP code.
To set this up create an SSH key on your local device first. Run the following command then follow the instructions.
# run on your local system
ssh-keygen -b 4096
Now you have a public key stored in ~/.ssh/id_rsa.pub. Copy the contents of that file to the authorized_keys file of your user on the VPS. In the case of a user named john, that would be /home/john/.ssh/authorized_keys.
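A side note: ssh-keygen -b 4096 produces an RSA key by default. On reasonably recent clients and servers an Ed25519 key is a popular alternative (my suggestion, not part of the original setup). A non-interactive sketch that writes to a scratch path so nothing in ~/.ssh is touched:

```shell
# demo only: generate an Ed25519 key without a passphrase into /tmp
# (for real use, drop -N "" so ssh-keygen prompts for a passphrase,
# and write to ~/.ssh/id_ed25519 instead)
rm -f /tmp/demo_ed25519 /tmp/demo_ed25519.pub
ssh-keygen -t ed25519 -a 100 -N "" -f /tmp/demo_ed25519 -C "vps_ssh_demo"
```

The resulting .pub line goes into authorized_keys exactly like the RSA one.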
Now grab your FIDO2 token and create an SSH key on it:
# run on your local system
ssh-keygen -t ecdsa-sk -O resident -O verify-required -C "vps_ssh"
This command will generate an ECDSA key on your token and output the corresponding public key to a file. Copy that file’s content to the same authorized_keys file as before.
Now repeat this for all of your devices and tokens.
Once done, make sure only john has access to that file:
chown -R john:john /home/john/.ssh
chmod 700 /home/john/.ssh
chmod 600 /home/john/.ssh/authorized_keys
Now set up your backup 2FA method, an authenticator app:
# install the required package
apt install libpam-google-authenticator
# switch to the desired user
su john
# configure the authenticator
google-authenticator
You will be asked several questions. Answer yes to time-based tokens, optionally skip the code confirmation, let it update your .google_authenticator file, answer yes to disallowing reuse of the same token, no to the enlarged time window, and yes to rate-limiting. The whole process should look similar to this one:
$ google-authenticator
Do you want authentication tokens to be time-based (y/n) y
Warning: pasting the following URL into your browser exposes the OTP secret to Google:
https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/john@vps.example.com%3Fsecret%3D5YIULMN5AUTCXFP3PFLKJO6BAQ%26issuer%3Dvps.example.com
[QR code here]
Your new secret key is: 5YIULMN5AUTCXFP3PFLKJO6BAQ
Enter code from app (-1 to skip): -1
Code confirmation skipped
Your emergency scratch codes are:
32150451
99113651
52960621
35272892
75785072
Do you want me to update your "/home/john/.google_authenticator" file? (y/n) y
Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y
By default, a new token is generated every 30 seconds by the mobile app.
In order to compensate for possible time-skew between the client and the server,
we allow an extra token before and after the current time. This allows for a
time skew of up to 30 seconds between authentication server and client. If you
experience problems with poor time synchronization, you can increase the window
from its default size of 3 permitted codes (one previous code, the current
code, the next code) to 17 permitted codes (the 8 previous codes, the current
code, and the 8 next codes). This will permit for a time skew of up to 4 minutes
between client and server.
Do you want to do so? (y/n) n
If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting? (y/n) y
Don’t be afraid: I only generated this config for the purposes of this blog post; this TOTP setup is not used anywhere.
Now that we have all the authentication methods set up, it’s time to use them, since our system doesn’t recognize any of these by default.
In /etc/pam.d/sshd add the google-authenticator module and comment out the common-auth and common-password includes:
auth required pam_google_authenticator.so
#@include common-auth
#@include common-password
The file should look like this:
# PAM configuration for the Secure Shell service
auth required pam_google_authenticator.so
# Standard Un*x authentication.
#@include common-auth
# Disallow non-root logins when /etc/nologin exists.
account required pam_nologin.so
# Uncomment and edit /etc/security/access.conf if you need to set complex
# access limits that are hard to express in sshd_config.
# account required pam_access.so
# Standard Un*x authorization.
@include common-account
# SELinux needs to be the first session rule. This ensures that any
# lingering context has been cleared. Without this it is possible that a
# module could execute code in the wrong domain.
session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so close
# Set the loginuid process attribute.
session required pam_loginuid.so
# Create a new session keyring.
session optional pam_keyinit.so force revoke
# Standard Un*x session setup and teardown.
@include common-session
# Print the message of the day upon successful login.
# This includes a dynamically generated part from /run/motd.dynamic
# and a static (admin-editable) part from /etc/motd.
session optional pam_motd.so motd=/run/motd.dynamic
session optional pam_motd.so noupdate
# Print the status of the user's mailbox upon successful login.
session optional pam_mail.so standard noenv # [1]
# Set up user limits from /etc/security/limits.conf.
session required pam_limits.so
# Read environment variables from /etc/environment and
# /etc/security/pam_env.conf.
session required pam_env.so # [1]
# In Debian 4.0 (etch), locale-related environment variables were moved to
# /etc/default/locale, so read that as well.
session required pam_env.so user_readenv=1 envfile=/etc/default/locale
# SELinux needs to intervene at login time to ensure that the process starts
# in the proper default security context. Only sessions which are intended
# to run in the user's context should be run after this.
session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so open
# Standard Un*x password updating.
#@include common-password
Now we can modify the sshd configuration as well. In /etc/ssh/sshd_config set the following:

- PermitRootLogin to no – we don’t allow SSH login for the root user
- PubkeyAuthentication to yes – allows login using PKI
- PubkeyAuthOptions to verify-required – require verification for FIDO2 keys
- PasswordAuthentication to no – don’t allow logging in with passwords
- ChallengeResponseAuthentication to yes – enable support for TOTP
- UsePAM to yes
- AuthenticationMethods to publickey,publickey publickey,keyboard-interactive – this sets the accepted flows: 2 public keys OR 1 public key and TOTP

Set up sudo access for john:
apt install sudo # sudo might not be installed by default
visudo # edit sudoers file, it is not supposed to be edited directly
In the sudoers file, make sure that everyone in the sudo group has permission to execute any command without a password:
%sudo ALL=(ALL:ALL) NOPASSWD: ALL
Now we just have to add john to the sudo group:
usermod -a -G sudo john
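Before restarting sshd it’s worth reviewing the result. Taken together, the directives we changed in /etc/ssh/sshd_config should read roughly like this (my reconstruction; everything else stays at its defaults):

```text
PermitRootLogin no
PubkeyAuthentication yes
PubkeyAuthOptions verify-required
PasswordAuthentication no
ChallengeResponseAuthentication yes
UsePAM yes
AuthenticationMethods publickey,publickey publickey,keyboard-interactive
```

Running sshd -t afterwards checks the file for syntax errors, which is cheap insurance against locking yourself out.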
Finally, restart the SSH daemon and try to log in as john (keep your current session open until you’ve confirmed the new login works):
systemctl restart sshd
# from your local system
ssh john@vps.example.com
Output should be similar to this one:
gergo@glap2:~$ ssh gergo@citadel.systest.eu
Enter passphrase for key '/home/gergo/.ssh/id_rsa':
Confirm user presence for key ECDSA-SK SHA256:G07NOEBsHVyvnmzolTKmhEjYUNYeMmDmrHWdoSQfT4E
Enter PIN for ECDSA-SK key /home/gergo/.ssh/id_ecdsa_sk:
Confirm user presence for key ECDSA-SK SHA256:G07NOEBsHVyvnmzolTKmhEjYUNYeMmDmrHWdoSQfT4E
User presence confirmed
Linux citadel.systest.eu 6.1.0-13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.55-1 (2023-09-29) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri Nov 22 10:51:08 2024 from 198.51.100.74
gergo@citadel:~$
Now you’re logged in as your user. Whenever you need root access just use sudo -s.
If you’re using an SSH agent on your local system, it might not work out of the box. You can either kill the agent using eval $(ssh-agent -k), or install ssh-askpass and start the agent like this:
export SSH_ASKPASS=ssh-askpass
eval $(ssh-agent -s)
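To avoid typing the key paths and hostname every time, a ~/.ssh/config entry on your local machine can pin both identities to this host (a convenience suggestion on my part; the Host alias is arbitrary):

```text
Host vps
    HostName vps.example.com
    User john
    IdentityFile ~/.ssh/id_rsa
    IdentityFile ~/.ssh/id_ecdsa_sk
```

After this, ssh vps offers both keys.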
Now that we have authentication set up, we can configure our firewall to secure our system. For this I like to use ufw, which stands for uncomplicated firewall. It’s way less flexible than iptables or nftables, but it’s more than enough for our use case and leaves less room for error.
Start by installing ufw:
apt install ufw
Set the default policy to rejecting incoming connections:
ufw default deny
And allow the port for SSH:
ufw allow 22/tcp
Now enable the firewall:
ufw enable
If you now type ufw status, you should see this table:
Status: active
To Action From
-- ------ ----
22/tcp ALLOW Anywhere
Almost all of the services I need use PostgreSQL as their database. When running dockerized applications it is a best practice to have a single database server running on the host system or on a separate system. This helps with management and backups (we will get there later).
Start by installing postgres:
apt install postgresql
We need to make some simple changes to its configuration. In /etc/postgresql/15/main/postgresql.conf set listen_addresses to '*'. This way we can connect to the database from other hosts and networks, not just localhost.
And in /etc/postgresql/15/main/pg_hba.conf add a new line at the bottom of the file:
host all all 172.16.0.0/12 scram-sha-256
This allows all hosts from the network 172.16.0.0/12 (the address range used by Docker) to access all databases as any user with password login. This is required since we want our Docker containers to access the postgres database. Restart PostgreSQL (systemctl restart postgresql) for the changes to take effect.
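For clarity, here are the two PostgreSQL changes side by side; the pg_hba.conf columns are connection type, database, user, client address and auth method (the comments are mine):

```text
# /etc/postgresql/15/main/postgresql.conf
listen_addresses = '*'            # default is 'localhost'

# /etc/postgresql/15/main/pg_hba.conf (appended line)
# TYPE  DATABASE  USER  ADDRESS        METHOD
host    all       all   172.16.0.0/12  scram-sha-256
```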
We also have to set up an additional rule in the firewall for this to work:
ufw allow from 172.16.0.0/12 to any port 5432 proto tcp
We want to expose HTTP services to the internet and we want to use the same 80/443 ports for all of them. For this reason, we need to set up a reverse proxy. In the past I’ve used Apache2 or Nginx, but in this setup I’m going with Caddy. It has built-in HTTPS certificate renewal, sane defaults, easy-to-maintain config files and pretty decent performance.
Debian has a caddy package in its default repo, but it’s pretty outdated, so I recommend installing it from Caddy’s own repo:
apt install -y debian-keyring debian-archive-keyring ca-certificates apt-transport-https curl
curl -fsSL 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -fsSL 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-stable.list
apt update
apt install caddy
This will create a Caddyfile at /etc/caddy/Caddyfile. I like to keep the config files for each vhost separate, so let’s modify that file to contain only this line:
import sites-enabled/*
Then create the folder /etc/caddy/sites-enabled. We will place all of our site configurations in this folder.
We should also allow the ports for HTTP (80) and HTTPS (443) on our firewall:
ufw allow 80/tcp
ufw allow 443/tcp
Since the services I’ll be hosting all run inside Docker containers, we need to install docker:
curl -fsSL 'https://download.docker.com/linux/debian/gpg' | gpg --dearmor -o /usr/share/keyrings/docker.gpg
echo "deb [signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/debian $(. /etc/os-release && echo $VERSION_CODENAME) stable" | tee /etc/apt/sources.list.d/docker.list
apt update
apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin
And since I like to manage services through a GUI, we’re also going to install Portainer. It’s a really cool tool for managing Docker stacks on multiple hosts.
It also runs as a Docker container, which I will start using a docker-compose file. I prefer to store the configuration/data for my non-system services in the /srv directory, so I created the /srv/portainer folder and placed the following docker-compose.yml file inside it:
services:
  app:
    image: portainer/portainer-ce
    restart: always
    environment:
      VIRTUAL_HOST: "portainer.example.com"
      VIRTUAL_PORT: 9000
    volumes:
      - "./:/data"
      - "/var/run/docker.sock:/var/run/docker.sock"
    ports:
      - "8000:8000"
      - "127.0.0.1:9000:9000"
Note that I’m only exposing the HTTP port to the localhost. That is because we’re going to use Caddy as a reverse proxy.
Let’s start up portainer using docker compose up -d and create /etc/caddy/sites-enabled/portainer for our reverse proxy config:
portainer.example.com {
    reverse_proxy localhost:9000

    log {
        output file /var/log/caddy/portainer.example.com.log
    }
}
Now reload caddy (systemctl reload caddy) and we should be able to access our portainer instance at the configured hostname. Complete the first-time setup there, and then we can manage our Docker stacks using portainer.
Portainer has built-in user management, but it’s always better to centralize authentication, especially if there are going to be multiple people accessing these services. I’ve tried multiple authentication servers, and so far Authentik seems to be the best one. It has relatively small resource requirements and is really versatile.
Let’s get started by creating a PostgreSQL database for authentik. First log in to psql as the postgres superuser:
sudo -u postgres psql
Then create the database, create a user and grant all privileges on it:
CREATE DATABASE authentik;
CREATE USER authentik WITH ENCRYPTED PASSWORD 'pa$$w0rd';
GRANT ALL PRIVILEGES ON DATABASE authentik TO authentik;
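One caveat worth knowing: starting with PostgreSQL 15, GRANT ALL PRIVILEGES ON DATABASE no longer lets a regular user create tables in the public schema, which can break authentik’s migrations. Making the authentik user the owner of its database is a simple fix (my addition, not part of the original setup):

```sql
ALTER DATABASE authentik OWNER TO authentik;
```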
We also need to create several folders to store data in:
mkdir -p /srv/authentik/{certs,media,redis,templates}
And now we can log in to portainer and create a new stack with this config:
services:
  app:
    image: ghcr.io/goauthentik/server
    restart: always
    command: server
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: 172.17.0.1
      AUTHENTIK_POSTGRESQL__USER: authentik
      AUTHENTIK_POSTGRESQL__NAME: authentik
      AUTHENTIK_POSTGRESQL__PASSWORD: "pa$$w0rd"
      AUTHENTIK_SECRET_KEY: "long generated secret key"
      AUTHENTIK_EMAIL__HOST: email-smtp.eu-west-1.amazonaws.com
      AUTHENTIK_EMAIL__PORT: 587
      AUTHENTIK_EMAIL__USERNAME: "abc"
      AUTHENTIK_EMAIL__PASSWORD: "def"
      AUTHENTIK_EMAIL__USE_TLS: true
      AUTHENTIK_EMAIL__FROM: authentik@example.com
    volumes:
      - /srv/authentik/media:/media
      - /srv/authentik/templates:/templates
    ports:
      - 127.0.0.1:9001:9000
    depends_on:
      - redis
  worker:
    image: ghcr.io/goauthentik/server
    restart: always
    command: worker
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: 172.17.0.1
      AUTHENTIK_POSTGRESQL__USER: authentik
      AUTHENTIK_POSTGRESQL__NAME: authentik
      AUTHENTIK_POSTGRESQL__PASSWORD: "pa$$w0rd"
      AUTHENTIK_SECRET_KEY: "long generated secret key"
      AUTHENTIK_EMAIL__HOST: email-smtp.eu-west-1.amazonaws.com
      AUTHENTIK_EMAIL__PORT: 587
      AUTHENTIK_EMAIL__USERNAME: "abc"
      AUTHENTIK_EMAIL__PASSWORD: "def"
      AUTHENTIK_EMAIL__USE_TLS: true
      AUTHENTIK_EMAIL__FROM: authentik@example.com
    volumes:
      - /srv/authentik/media:/media
      - /srv/authentik/templates:/templates
      - /srv/authentik/certs:/certs
    depends_on:
      - redis
  redis:
    image: redis:alpine
    command: --save 60 1 --loglevel warning
    restart: always
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 3s
    volumes:
      - /srv/authentik/redis:/data
Also create the reverse proxy configuration in /etc/caddy/sites-enabled/authentik and reload caddy afterwards:
auth.example.com {
    reverse_proxy localhost:9001

    log {
        output file /var/log/caddy/auth.example.com.log
    }
}
Now you can complete the first-time setup at https://auth.example.com/if/flow/initial-setup/. Once that’s done you can log in to the administration interface. There is a lot to set up here, enough for another blog post, so I won’t go into much detail. But I recommend the example flows page in the authentik documentation, which was a huge help for me.
As I’ve mentioned earlier, we’d like to have centralized authentication. So this is what we’re going to configure for portainer right now.
In authentik, under Applications, click on Create with Wizard. Give it the name Portainer. For the provider type select OAuth2/OIDC. For the authorization flow select implicit consent, for redirect URIs enter https://portainer.example.com and leave the rest at their defaults.
Now go back to portainer and under Settings -> Authentication select OAuth. There, check Use SSO and Automatic user provisioning. At the bottom of the page set the following OAuth configuration (field names as they appear in portainer):

- Authorization URL: https://auth.example.com/application/o/authorize/
- Access token URL: https://auth.example.com/application/o/token/
- Resource URL: https://auth.example.com/application/o/userinfo/
- Redirect URL: https://portainer.example.com/
- Logout URL: https://auth.example.com/application/o/portainer/end-session/
- User identifier: preferred_username
- Scopes: email openid profile
- Auth Style: Auto Detect
Click Save settings, log out, and you should be able to log in using the user you’ve set up in authentik.
Authentik also supports proxy authentication, which we will use later on, so go back to authentik and create a new outpost. Give it a name, set its type to proxy, then click create. Click the Deployment Info button to get the variables required for the deployment.
Back in portainer, create a new stack called auth-outpost:
services:
  app:
    image: ghcr.io/goauthentik/proxy
    environment:
      AUTHENTIK_HOST: <from deployment info>
      AUTHENTIK_INSECURE: false # even though the deployment info says true
      AUTHENTIK_TOKEN: <from deployment info>
    ports:
      - 127.0.0.1:9999:9000
For backup I’ve been using restic for a while and there’s a cool scheduler+GUI made for it called Backrest.
For it to work we need an off-site storage solution first. I recommend using Backblaze B2 in S3 compatible mode, but restic has a wide range of supported backends for repositories.
To install backrest, grab the latest .tar.gz file for Linux x86_64 from the GitHub releases page, unpack it, and run the install script:
mkdir /root/backrest-install
cd /root/backrest-install
wget https://github.com/garethgeorge/backrest/releases/download/v1.6.1/backrest_Linux_x86_64.tar.gz
tar -xzvf backrest_Linux_x86_64.tar.gz
chmod +x install.sh
./install.sh
cd
rm -Rf /root/backrest-install
This will make backrest listen on port 9898, but we want authentik to handle authentication. Since backrest doesn’t really have authentication, let alone OAuth2 support, we will use the auth outpost we created earlier. The outpost works in conjunction with our reverse proxy to authenticate the requests made to backrest.
For this, let’s create a new application in authentik and call it backrest. For the provider type select Forward Auth (Single Application). For the authorization flow select implicit consent and for the external host type in https://backup.vps.example.com/. Now go to Outposts, click on the outpost we created earlier and assign the newly created application to it.
Now it’s time to configure our reverse proxy in /etc/caddy/sites-enabled/backrest:
backup.vps.example.com {
    route {
        reverse_proxy /outpost.goauthentik.io/* localhost:9999

        forward_auth localhost:9999 {
            uri /outpost.goauthentik.io/auth/caddy
            copy_headers X-Authentik-Username X-Authentik-Groups X-Authentik-Email X-Authentik-Name X-Authentik-Uid X-Authentik-Jwt X-Authentik-Meta-Jwks X-Authentik-Meta-Outpost X-Authentik-Meta-Provider X-Authentik-Meta-App X-Authentik-Meta-Version
            trusted_proxies private_ranges
        }

        reverse_proxy localhost:9898
    }

    log {
        output file /var/log/caddy/backup.vps.example.com.log
    }
}
This config will forward the authentication to our outpost listening on localhost:9999.
Now we can access backrest on the set domain name.
First of all, create a repository for your backups. Next create backup plans. You can get creative here, but here are some must-haves for our current setup:

- /root/.local/share/backrest – backrest’s own configuration
- /etc and /srv – system configuration and service data
- /tmp/pgbak – PostgreSQL dumps; add a pre-backup hook that creates the dumps:

sudo -u postgres mkdir -p /tmp/pgbak
sudo -u postgres pg_dumpall --globals-only -f /tmp/pgbak/globals.sql
sudo -u postgres pg_dump -f /tmp/pgbak/authentik.tar -F tar authentik

and a post-backup hook that cleans up:

rm -Rf /tmp/pgbak

- /tmp/portainer-backup.tar.gz – a dump of the portainer configuration; add a pre-backup hook that fetches it through the portainer API:

curl -f -X POST -H "X-API-Key: <your_access_token>" -d '{"password": "x"}' -o /tmp/portainer-backup.tar.gz https://portainer.example.com/api/backup

and a post-backup hook that removes it:

rm /tmp/portainer-backup.tar.gz
If you followed this guide, you should now have a VPS with secure SSH authentication, a firewall, an OAuth2 provider and Portainer ready to serve all the various applications you need.
If you have any questions, feel free to comment below, reach out to us at [email protected], or turn to our infrastructure management services.