This creates the new scaffolding in the present working directory, not the default location.
This was hard. Like, super hard. But I want to control everything from the same place, so I decided to do this.
I split the inventory up into networks:
- Hub - My home network
- VMLab - My testing network made entirely of VMs
- VPS - My random VPSes for personal use
- AndrewCz - My public domain, hosted both in a DMZ at my place and on VPSes as necessary
There are four directories with these names in my ansible directory. Each of those has sub-directories containing inventory.ini files to specify expected behavior.
In each inventory file, I have a cross-section of groups made up of two ideas.
I’ll start with the second. A web server is a web server: port 80 should be open, ssh too (obviously), but besides that it should be pretty locked down. Likewise, a database server is a database server, an SMTP server is an SMTP server, etc. Since this is my own work, I’ll most likely be using the same tech for every type/protocol. For example, every database that I use will be MariaDB, every web server will be Apache, etc.
Example #2 groups:
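The snippet itself didn’t survive here, but a type-based cross-section of an inventory.ini could look something like this (hostnames are made up for illustration):

```ini
; hypothetical type-based groups in andrewcz/inventory.ini
[webservers]
web01.andrewcz.com

[dbservers]
db01.andrewcz.com

[smtpservers]
mail01.andrewcz.com
```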
This works well because I’ll be using the same method to chroot a process or configure the base install, etc. This then provides a common base to work on top of and customize. Get down to the lowest common denominator, if you will. Next, I’ll only have to watch for updates on those groups of software, and it’ll be easy to update them all in case of a critical vulnerability (e.g. `ansible-playbook -i andrewcz web.yml`, with only one host line).
But primarily, I’ll be working with multiple applications. Think of a git application for a moment. To be easily accessible to the public, it’ll need a web GUI front-end. I’ll want to keep it up to date, so it’ll probably store repos on an FTP server which I could access and change independently. Then, if I use GitLab or another fancy front-end, I’ll want a database server to keep that information. Hell, I could get fancy and set up an authentication service if I wanted.
Example #1 groups:
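Again the snippet is missing here, but an application-based grouping might be sketched like this, using child groups to tie the whole git stack together (group and host names are my own invention):

```ini
; hypothetical application groups: every host that makes up the git stack
[gitapp_web]
web01.andrewcz.com

[gitapp_db]
db01.andrewcz.com

[gitapp_ftp]
ftp01.andrewcz.com

[gitapp:children]
gitapp_web
gitapp_db
gitapp_ftp
```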
This works well because if I need to deploy an application stack in a brand new network (someone else’s infrastructure, another domain, a pentesting VM network), all I have to do (if I get the playbook and variables right) is create a brand new inventory with the correct host and variable information. Also, I can efficiently use the
--limit flag if I only want a specific part of my application stack reconfigured. Many of these hosts can also talk to central syslog servers, and I can deploy those too. Best practices all around.
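As a sketch (the playbook name and group names here are assumptions on my part, not anything fixed):

```shell
# reconfigure only the database tier of the git stack, leaving the rest alone
ansible-playbook -i andrewcz/inventory.ini gitapp.yml --limit gitapp_db
```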
Why group by application?
Ansible expects that each project directory is only for one project (application). That’s too much work for me - especially to update and patch as necessary. Plus it’s more fun this way. I get to have total control of the entire infrastructure without having to jump around projects and networks. Ansible states that:
When you start to think about it – tasks, handlers, variables, and so on – begin to form larger concepts. You start to think about modeling what something is, rather than how to make something look like something. It’s no longer “apply this handful of THINGS to these hosts”, you say “these hosts are dbservers” or “these hosts are webservers”. In programming, we might call that “encapsulating” how things work. For instance, you can drive a car without knowing how the engine works.
My definition of encapsulating is as easy as being able to describe infrastructure as:
These HOSTS are TYPES for APPLICATION.
Hosts to types are easy. Squeezing the application in there is the hard part.
Since this is going against the grain, it’s bound to be a bit hard to think out, and there are bound to be a couple of spots where it is infeasible to make this work. However, having thought through this over the past couple of days (albeit without having been able to fully dive into practical usage of ansible), I think it is a good compromise between “Devops working on an App” and “IT dealing with everything for the company”. As I tend to lean towards the latter, I have decided to structure ansible as I described it earlier.
Also, I’ve always had this unexplored fetish of being able to sit at my desk in front of my multiple glowing screens and slowly, slowly raise my hands as would a conductor of a full orchestra to evoke the opening stanza of a classic concerto, as my computer does all of my work for me - almost as if I were using advanced science to create the illusion of magic.
Clustering and other infrastructure customizations
This would have to be done as a second layer on top of any top-level type. This can also be determined from a variable that is passed to the task/role selection. The base level would set up defaults for a single host, but must not break idempotency if any of these second-level layers are added on. Especially if an upgrade/patch needs to be applied. Luckily, with my two-tiered approach I can detail actions to be taken on multiple types of servers of one application, potentially intermingling them if necessary.
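One way that variable could be passed in is through group_vars; the second-layer tasks would then gate on it with a `when:` condition so the base layer stays idempotent without it. A minimal sketch (the variable names are mine, not an established convention):

```yaml
# group_vars/dbservers.yml -- hypothetical toggle for a second-layer
# clustering role; the base role ignores these entirely
db_cluster_enabled: true
db_cluster_members:
  - db01.hub.lan
  - db02.hub.lan
```

Cluster-specific tasks would then carry `when: db_cluster_enabled`, so a single standalone host and a clustered group run the same base play unchanged.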
Chroots and best practices
I’m not sure how this would work with chroots and the permissions that various web applications would need, but I am not too apprehensive about this. In fact, if everything works out, I feel that it might even increase the security of my systems seeing that they are uniformly following best practices. No special snowflakes here.
This applies specifically to anything built on top of established protocols (web, SMTP, etc.) that would break the underlying base layer I create. I feel that the base layer should simply be a service install, enable, and start. Anything else should be considered quite closely before being added to the base-layer playbook.
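That install-enable-start base layer could be as small as this (package and group names are illustrative, assuming an RPM-based host):

```yaml
# sketch of a base-layer play: install, enable, start -- and nothing else
- hosts: webservers
  become: yes
  tasks:
    - name: Install Apache
      package:
        name: httpd
        state: present

    - name: Enable and start Apache
      service:
        name: httpd
        state: started
        enabled: yes
```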
*sigh* I’m sure there’s something that I’m forgetting.