Docker Compose as a Portable, Extensible, and Atomically Upgradable Cloud Box
Service Summary
This service creates a host that practically anyone can use to deploy and manage containerized services. It supports multi-application clouds that can be maintained with minimal effort, and efficiently backed up and redeployed for disaster recovery.
Workflow
- Use the Docker Compose UI to deploy new containerized services
- Redeploy updated containerized services
- Back up all persistent storage (config, data)
- Encrypt all traffic with a Let’s Encrypt cert
Phase 1
Base Install
First, we update the server and set up Docker:
# dnf update -y
# curl -o /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/fedora/docker-ce.repo
# dnf -y install docker-ce vim htop
# systemctl enable docker
# shutdown -r now
Docker-Compose UI
# docker run -d --name docker-compose-ui -p 5000:5000 -w /opt/docker-compose-projects/ -v /var/run/docker.sock:/var/run/docker.sock francescou/docker-compose-ui:1.13.0
MariaDB
First of all, we’ll create a MariaDB container to serve as the database for all of our apps.
Volumes
For persistent storage, we’ll use named Docker volumes:
volumes:
  - mysql:/var/lib/mysql
  - conf.d:/etc/mysql/conf.d
Environment Variables
Environment variables are typically used to set passwords, runtime configuration options, or anything else that would need to be set. Keep in mind that this requires that the configuration options be appropriately set up in the docker image build. Any reputable image should have these documented in the docker repo with recommendations on how to use them. For instance:
environment:
  MYSQL_ROOT_PASSWORD: <password>
Since this database serves all of the applications we want to run (and any in the future), we can’t set this to a random throwaway password: we’ll need to log in later to create the initial databases for the various applications.
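That said, “known” doesn’t have to mean weak. One option (a sketch; `openssl` is assumed here, but any generator works) is to mint a strong password once, record it somewhere safe, and then put it in the compose file:

```shell
# Generate a strong root password once and record it, rather than
# hard-coding 'testpassword'. 24 random bytes encode to exactly 32
# base64 characters.
MYSQL_ROOT_PASSWORD=$(openssl rand -base64 24)
echo "${#MYSQL_ROOT_PASSWORD}"
```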
Network
Here is where we are able to keep everything on the up-and-up. Literally, I’m talking about uptime. Because containers find each other by name on a shared network, I can swap out or upgrade any piece of my stack and it will still connect to the right place when it comes back up.
Docker-Compose
Bringing this all together in the docker-compose file looks like this:
version: '3.6'
services:
  mariadb:
    image: mariadb
    container_name: mariadb
    restart: always
    volumes:
      - mysql:/var/lib/mysql
      - conf.d:/etc/mysql/conf.d
    networks:
      - db
    environment:
      MYSQL_ROOT_PASSWORD: testpassword
volumes:
  mysql:
  conf.d:
networks:
  db:
    name: db
If you’re using the Docker Compose UI, then hit ‘create’, ‘rebuild’, and then ‘start’.
NGINX
First we get this all set up:
version: '3.6'
services:
  nginx:
    image: nginx
    container_name: nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - conf.d:/etc/nginx/conf.d
      - letsencrypt:/etc/letsencrypt
    networks:
      - ui
volumes:
  conf.d:
  letsencrypt:
networks:
  ui:
    name: ui
Let’s Encrypt
Once that’s set up and the volumes are created, stop the nginx container and run certbot to get a valid certificate.
# dnf install -y certbot
# certbot certonly --standalone -d andrewcz.com
# rm -rf /var/lib/docker/volumes/nginx_letsencrypt/_data
# ln -sT /etc/letsencrypt /var/lib/docker/volumes/nginx_letsencrypt/_data
TODO: Why can’t I do this:
# ln -sT /etc/letsencrypt/live /var/lib/docker/volumes/nginx_letsencrypt/_data/live
# ln -sT /etc/letsencrypt/archive /var/lib/docker/volumes/nginx_letsencrypt/_data/archive
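A plausible answer: a symlink is resolved at access time, from the reader’s view of the filesystem. When `_data` itself is the symlink, the docker daemon resolves it on the host before mounting; but links stored *inside* `_data` are carried into the container as-is, where they point at `/etc/letsencrypt/...`, a path that doesn’t exist in the container. A throwaway sketch (temp-directory stand-ins for `/etc/letsencrypt` and the volume; all paths hypothetical):

```shell
# Simulate a symlink stored inside the volume whose target exists on the
# "host" but not in the "container".
tmp=$(mktemp -d)
mkdir "$tmp/host-etc" "$tmp/volume"
echo "cert" > "$tmp/host-etc/fullchain.pem"
ln -s "$tmp/host-etc" "$tmp/volume/live"         # link *inside* the volume
on_host=$(cat "$tmp/volume/live/fullchain.pem")  # resolves where the target exists
mv "$tmp/host-etc" "$tmp/elsewhere"              # the container has no such path
in_container=$(cat "$tmp/volume/live/fullchain.pem" 2>/dev/null || echo "dangling")
echo "$on_host / $in_container"
```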
And then put the following in /var/lib/docker/volumes/nginx_conf.d/_data/default.conf:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    #
    # Redirect _all_ HTTP traffic to HTTPS
    #
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name andrewcz.com;
    # Accept request headers with underscores in their names
    underscores_in_headers on;
    #
    # Set SSL/TLS options
    #
    ssl_certificate /etc/letsencrypt/live/andrewcz.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/andrewcz.com/privkey.pem;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    # Only use safe ciphers
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA';
    ssl_prefer_server_ciphers on;
    #
    # Include all of the locations for this domain
    #
    include /etc/nginx/conf.d/andrewcz.com/*.conf;
    #
    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Then create the per-domain directory:
# mkdir /var/lib/docker/volumes/nginx_conf.d/_data/andrewcz.com
This allows us to specify the services that we want to serve in /var/lib/docker/volumes/nginx_conf.d/_data/andrewcz.com/*.conf, while maintaining a consistent standard of security for each of them. FWIW, this setup earns an ‘A’ on Qualys SSL Labs.
Nextcloud
Set up the MySQL database
This is the setup that can be found in the Nextcloud documentation.
# docker exec -it mariadb bash
# mysql -u root -p
CREATE USER 'nextcloud'@'%' IDENTIFIED BY 'testpassword';
CREATE DATABASE IF NOT EXISTS nextcloud;
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud'@'%' IDENTIFIED BY 'testpassword';
FLUSH PRIVILEGES;
Deployment
Now, using the previous systems (keep in mind that the volumes will have to be created before the container can be run), we can set up a Nextcloud container using the following docker-compose file:
version: '3.6'
services:
  nextcloud:
    image: nextcloud
    container_name: nextcloud
    volumes:
      - data:/var/www/html/data
      - config:/var/www/html/config
      - themes:/var/www/html/themes
    networks:
      - ui
      - db
    environment:
      MYSQL_HOST: mariadb
      MYSQL_PASSWORD: testpassword
      MYSQL_ADMIN_PASSWORD: testpassword
volumes:
  data:
  config:
  themes:
networks:
  db:
    external: true
  ui:
    external: true
NGINX server
Enter the following into /var/lib/docker/volumes/nginx_conf.d/_data/andrewcz.com/nextcloud.conf:
#
# proxy the PHP scripts to Apache listening on http://nextcloud
#
location /nextcloud/ {
    proxy_headers_hash_max_size 512;
    proxy_headers_hash_bucket_size 64;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header Front-End-Https on;
    proxy_pass http://nextcloud;
    rewrite ^/nextcloud(.*)$ $1 break;
}
Make sure to restart nginx after putting this file together.
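The rewrite line is what strips the /nextcloud prefix before proxying, so the backend sees clean paths. The same regex, approximated with sed on a sample URI:

```shell
# Strip the /nextcloud prefix the way the nginx rewrite rule does.
stripped=$(echo "/nextcloud/index.php/login" | sed -E 's|^/nextcloud(.*)$|\1|')
echo "$stripped"   # → /index.php/login
```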
Nextcloud Config
For the setup, visit andrewcz.com/nextcloud, which redirects you to a CSS-less HTML page where you fill in the setup details. After that, a 404 is thrown unless the following is in the Nextcloud instance’s config file; in this setup that should be /var/lib/docker/volumes/nextcloud_config/_data/config.php on the host, since the config volume is mounted at /var/www/html/config:
...
'trusted_domains' =>
  array (
    0 => 'nextcloud',
    1 => 'andrewcz.com',
  ),
'overwritewebroot' => '/nextcloud',
...
At that point, andrewcz.com/nextcloud will get you where you want to go.
Secondary subdirectory site
This should be fairly simple. Let’s create a new docker-compose file for an apache service and an NGINX configuration to match.
HTTPD
version: '3.6'
services:
  httpd:
    image: httpd:2.4
    container_name: httpd
    volumes:
      - httpd:/etc/httpd
    networks:
      - ui
volumes:
  httpd:
networks:
  ui:
    external: true
Nginx
#
# proxy requests to Apache listening on http://httpd
#
location /httpd/ {
    proxy_headers_hash_max_size 512;
    proxy_headers_hash_bucket_size 64;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header Front-End-Https on;
    proxy_pass http://httpd;
    rewrite ^/httpd(.*)$ $1 break;
}
Once again, make sure to restart the nginx container after adding this.
Transfer to new server
The epitome of a ‘lift-and-shift’ is the ability to redirect the server to a brand new VPS anywhere. So I uploaded a file to the root of my nextcloud install, and am going to recreate the containers after transferring the named volume directories over to the new instance.
On the new server:
# dnf update -y && curl -o /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/fedora/docker-ce.repo && dnf -y install docker-ce vim htop && systemctl enable docker && shutdown -r now
On the old server:
# ssh-copy-id root@new-servers-ip-addr
# scp -r /var/lib/docker/volumes/{mariadb,nextcloud,nginx,httpd}* root@new-servers-ip-addr:/var/lib/docker/volumes/
# scp -r /etc/letsencrypt root@new-servers-ip-addr:/etc/letsencrypt
On the new server, follow the Phase 1 setup from “Docker-Compose UI” onward, without touching anything in /var/lib/docker/volumes at all.
That being said, the httpd subdirectory works just fine – HTTPS and everything! At first, the Nextcloud instance was complaining, but it appeared to just be a permissions problem; after issuing a chown -R 33:33 /var/lib/docker/volumes/nextcloud_* (where 33 is the UID of www-data within the container) it worked perfectly, with preserved data and all!
I’d call that a smashing success.
Phase 2
So after my buddy gave a presentation on this, I got the idea from him to automate this using Ansible. Of course, this will take a bit of re-thinking, since I probably won’t be using the UI anymore; as convenient as it was, it’s a hindrance to automation. We’ll also have to standardize a couple of things that have heretofore been decided for us, like the location of the docker-compose files and how Ansible should be set up.
Ansible
This will ensure portability for others to use this with a minimal amount of fussing.
The role can be found at smacz/role-compositional. Feel free to reference the README for detailed info…once I get around to writing it.
First key in Dict
With the switch from Python 2 to Python 3, dict.keys() started returning a dict_keys view that has to be cast to a list instead of arriving as one. Since casting to different types is not possible within variables, Ansible provides a way to do this natively with Jinja2: Dictionary Views
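To see the difference concretely, here is the Python 3 behavior driven from the shell (a sketch; the `list(...)` cast is the equivalent of Jinja2’s `| list` filter):

```shell
# Python 3's dict.keys() returns a view, not a list; indexing it raises
# TypeError until it's cast to a list.
first_key=$(python3 - <<'EOF'
d = {"first": 1, "second": 2}
keys = d.keys()          # dict_keys view; keys[0] would raise TypeError
print(list(keys)[0])     # cast first, as `| list` does in Jinja2
EOF
)
echo "$first_key"
```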
Backups
So for backups, we can assume that we will be able to grab the directories that we want from /var/lib/docker/volumes/ and just copy those to wherever we’re making our backups. In fact, the volumes that we put in there were explicitly chosen in order to make this type of backup possible.
There are some additional directories in there that we didn’t create though that are programmatically generated by docker. We’ll want to avoid these since they will undoubtedly be full of things that are meant to be replaced by instance and version-specific information. We only care that our data gets backed up.
All of the directories that we create as named volumes are called <container name>_<volume name>. All of the directories that docker programmatically generates for us do not have any underscores. Therefore, we should be safe by simply copying only the directories inside /var/lib/docker/volumes whose names contain underscores.
$ rsync -ave ssh root@compose-box:'/var/lib/docker/volumes/*_*' ./
Keep in mind that this has to be run as root in order to preserve file ownership. If you need sudo for this (which you probably do) and you’re using ssh keys, make sure to point the rsync command at the right identity file:
$ sudo rsync -ave "ssh -i /Users/cziryaka/.ssh/id_ed25519" root@compose-box:'/var/lib/docker/volumes/*_*' ./
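The *_* glob in these commands does the filtering described above. A throwaway sketch (against a temp directory, not the real volumes path) shows it selecting only the compose-created volumes:

```shell
# Named volumes are <project>_<volume>; docker's anonymous volumes are
# 64-char hex hashes with no underscore, so *_* matches only ours.
tmp=$(mktemp -d)
mkdir "$tmp/mariadb_mysql" "$tmp/nginx_conf.d" \
      "$tmp/9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a0f"
backups=$(cd "$tmp" && ls -d *_*)
echo "$backups"
```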
Restorations
Nextcloud
The Nextcloud restore is fairly simple:
- Make sure that the data directories are already there and populated:
/var/lib/docker/volumes/nextcloud_config
/var/lib/docker/volumes/nextcloud_data
/var/lib/docker/volumes/nextcloud_themes
- Copy the MySQL database dump into the container. This is easiest if it’s copied into one of the mounted volumes on the host.
- Log in to the container and create the database, then import the database dump:
docker exec -it mariadb bash
mysql -u root -p<password> -e 'CREATE DATABASE IF NOT EXISTS nextcloud'
mysql -u root -p<password> nextcloud < /path/to/database/dump
- Then the Nextcloud config should have oc_ as the database table prefix setting (dbtableprefix).
- Run the compositional role with the Nextcloud service. This will set up everything else.