This guide sets up a host that practically anyone can use to deploy and manage containerized services. The result is a multi-application cloud that can be maintained with minimal effort, and efficiently backed up and redeployed for disaster recovery.
- Use Cockpit to deploy new containerized services
- Redeploy updated containerized services
- Back up all persistent storage (config, data)
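The backup goal above can be sketched with a small shell helper. The volume layout (`/var/lib/docker/volumes/<name>/_data`) is the one described later in this guide; the function name and the optional base-directory parameter are my own additions.

```shell
# Sketch: archive one named Docker volume's _data directory into a tarball.
# Usage: backup_volume <volume-name> <dest-dir> [base-dir]
# base-dir defaults to Docker's standard volume location.
backup_volume() {
  local vol="$1" dest="$2" base="${3:-/var/lib/docker/volumes}"
  tar -czf "${dest}/${vol}-$(date +%F).tar.gz" -C "${base}/${vol}" _data
}
```

Running something like this for each named volume (ideally with the container stopped) captures all of the persistent config and data in one pass.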
First, we update the server and get Docker and Cockpit set up:
# dnf update -y
# dnf install -y cockpit cockpit-docker cockpit-selinux && systemctl enable cockpit.socket
# dnf remove -y docker docker-common container-selinux docker-selinux docker-engine
# dnf -y install curl
# curl -o /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/fedora/docker-ce.repo
# dnf -y install docker-ce
# systemctl enable docker
# passwd
# shutdown -r now
After that, we need to set the root user's password. For the time being it's easiest to just use root, since it has all of the permissions we need for the initial setup.
You should now be able to access the Cockpit GUI, which is running on port 9090.
From here on out, all of the terminal commands can be run in the terminal available on the Cockpit web page.
Now let's create an application. There are a couple of things we'll have to set up: volumes and environment variables.
First, we download the image:
[Containers] -> [Get new image]
To create volumes, we'll use Docker named volumes. They're easy to create from the command line, and they are all placed under /var/lib/docker/volumes/, each with a _data/ subdirectory where the actual data is stored.
# docker volume create mariadb-conf.d
# docker volume create mariadb-mysql
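If you want to confirm where a volume actually lives on the host, `docker volume inspect` reports its mount point (a sketch, shown in the same prompt style as above):

```shell
# docker volume inspect --format '{{ .Mountpoint }}' mariadb-mysql
```

This should print the /var/lib/docker/volumes/mariadb-mysql/_data path described above.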
These two volumes are named for the application container they are to be mounted in, plus the directory where they will be mounted.
This makes more sense when we start creating this volume in Cockpit:
[Containers] -> [Images] -> [mariadb:latest] -> [>]
This will bring up the new-container setup screen. Since this is our first container, and it is not going to be exposed to the internet, we can unselect the [Links] and [Ports] checkboxes; they will come in handy later, though. You should, however, change the Container Name to prepend the application service (mariadb); the rest of it is fine as randomly generated.
For right now, we’re interested in the [Volumes] section. It can be filled in with the following:
Volumes:
  - container: '/var/lib/mysql'
    host_path: '/var/lib/docker/volumes/mariadb-mysql/_data'
    access: 'ReadWrite'
  - container: '/etc/mysql/conf.d'
    host_path: '/var/lib/docker/volumes/mariadb-conf.d/_data'
    access: 'Read'
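For reference, roughly the same container could be started from the terminal. This docker run sketch mirrors the Cockpit form above; the name suffix stays elided and the password is a placeholder:

```shell
# Rough docker run equivalent of the Cockpit form above (sketch, not verbatim)
# docker run -d --name mariadb_<...> \
#     -v /var/lib/docker/volumes/mariadb-mysql/_data:/var/lib/mysql \
#     -v /var/lib/docker/volumes/mariadb-conf.d/_data:/etc/mysql/conf.d:ro \
#     -e MYSQL_ROOT_PASSWORD='<password>' \
#     mariadb:latest
```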
Environment variables are typically used to set passwords, runtime configuration options, or anything else that would need to be set. Keep in mind that this requires that the configuration options be appropriately set up in the docker image build. Any reputable image should have these documented in the docker repo with recommendations on how to use them. For instance:
Environment:
  - key: 'MYSQL_ROOT_PASSWORD'
    value: '<password>'
Since we're using this database for all of the applications we want to run (and any in the future), we can't set this to a random password, as we'll need to log in to create the initial databases for the various applications.
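Assuming openssl is available (it normally is on Fedora), one quick way to generate a strong password to record and use wherever `<password>` appears:

```shell
# Generate a random 24-character base64 password (18 random bytes encoded)
MYSQL_ROOT_PW="$(openssl rand -base64 18)"
echo "$MYSQL_ROOT_PW"
```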
Setup MySQL database
This is the setup that can be found in the Nextcloud documentation.
# docker exec -it mariadb_<...> bash
# mysql -u root -p
CREATE USER 'nextcloud'@'%' IDENTIFIED BY '<password>';
CREATE DATABASE IF NOT EXISTS nextcloud;
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud'@'%' IDENTIFIED BY '<password>';
FLUSH PRIVILEGES;
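To sanity-check the grant, you can try connecting as the new user from the same container (a sketch; the container name suffix stays elided as above):

```shell
# Log in as the nextcloud user; this should drop into the nextcloud database
# docker exec -it mariadb_<...> mysql -u nextcloud -p nextcloud
```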
Now, using the same procedure as before (keeping in mind that the volumes have to be created before the container can be run), we can set up a Nextcloud container using the following definition:
Image: 'nextcloud:latest'
Container_Name: 'nextcloud_<...>'
Ports:
  - container_port: '80'
    protocol: 'TCP'
    host_port: '80'
Links:
  - container_name: 'mariadb_<...>'
    alias: 'mariadb'
Volumes:
  - container: '/var/www/html/config'
    host_path: '/var/lib/docker/volumes/nextcloud-config/_data'
    access: 'ReadWrite'
  - container: '/var/www/html/data'
    host_path: '/var/lib/docker/volumes/nextcloud-data/_data'
    access: 'ReadWrite'
  - container: '/var/www/html/themes'
    host_path: '/var/lib/docker/volumes/nextcloud-themes/_data'
    access: 'ReadWrite'
Environment:
  - key: 'MYSQL_HOST'
    value: 'mariadb'
  - key: 'MYSQL_PASSWORD'
    value: '<password>'
  - key: 'NEXTCLOUD_ADMIN_PASSWORD'
    value: '<password>'
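As noted, the three named volumes referenced in the definition have to exist before the container is started; following the earlier pattern:

```shell
# docker volume create nextcloud-config
# docker volume create nextcloud-data
# docker volume create nextcloud-themes
```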
Now, since the Nextcloud image declares an unnamed volume mounted at /var/www/html in the container, that volume will be created automatically and we can't avoid it. However, as long as we don't specify a host path for it, everything should work as planned without having to change anything.
Also, the way this is set up right now, the web GUI initialization still needs to be completed manually. If all of the environment variables from the Docker documentation were set, it might be bypassed; for the time being, though, the web GUI initialization still needs to be filled out.
Now, in order to get to a place where we can have multiple websites served on the standard ports, we'll have to put a reverse proxy in front of them to route HTTP/S requests appropriately.
For the Nextcloud instance to work with the reverse proxy, change the image tag to fpm, and remove the ports that are being exposed to the host. The reverse proxy itself can then be set up with the following definition:
Image: 'nginx:latest'
Container_Name: 'nginx_<...>'
Ports:
  - container_port: '80'
    protocol: 'TCP'
    host_port: '80'
  - container_port: '443'
    protocol: 'TCP'
    host_port: '443'
Links:
  - container_name: 'nextcloud_<...>'
    alias: 'nextcloud'
Volumes:
  - container: '/etc/nginx/'
    host_path: '/var/lib/docker/volumes/nginx-nginx/_data'
    access: 'ReadWrite'
  - container: '/etc/letsencrypt/'
    host_path: '/var/lib/docker/volumes/nginx-letsencrypt/_data'
    access: 'ReadWrite'
Environment:
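Inside the nginx-nginx volume you'll need a server block to route requests. This is a minimal sketch assuming the apache-based nextcloud image; the filename and hostname are hypothetical. With the fpm tag mentioned above, nginx would instead need a fastcgi_pass setup and access to Nextcloud's files, as described in the Nextcloud docs.

```nginx
# Hypothetical /etc/nginx/conf.d/nextcloud.conf inside the nginx-nginx volume
server {
    listen 80;
    server_name cloud.example.com;  # hypothetical hostname

    location / {
        # 'nextcloud' resolves via the container link alias defined above
        proxy_pass http://nextcloud:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```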
Although we don't have Cockpit in a container (for obvious reasons), we should be able to simply redirect the /cockpit subdirectory to the same host on port 9090, while stripping off the subdirectory prefix.
Alternatively, the Cockpit service can be proxied normally using the setup shown in the Cockpit wiki on GitHub. Cockpit may also be able to be run on an additional internal port for the proxy to forward to, by changing its systemd socket unit.
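A sketch of the subdirectory approach in nginx, based on the setup described in the Cockpit wiki: Cockpit uses WebSockets, so the Upgrade headers are required, and per the wiki its UrlRoot setting in /etc/cockpit/cockpit.conf has to match the proxied path. The host address is a placeholder.

```nginx
# Hypothetical location block for proxying Cockpit at /cockpit
location /cockpit/ {
    proxy_pass https://<host-ip>:9090;  # <host-ip> is a placeholder
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_buffering off;
}
```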
Since we're putting Nextcloud behind a reverse proxy, we can simply run the FPM version of the image, which is meant to be run behind a proxy.