However, I kept finding things to add to that list as I looked back on it:
It wasn’t very “DRY”
The containers shared a lot of similarities in terms of starting base image and what I was doing, and I repeated a bunch of shared code between them. Yuck. It seemed better to run these as a single container and save some overhead. That violates the light vs. heavy containers rule, but my scaling needs are limited right now, and I'm valuing simplicity and speed of development over complexity and the ability to scale.
There was no testing of the container contents
Everything I was doing was manually tested. Hack the Dockerfile, run the build, exec into it, validate by hand, rinse, repeat. It just felt hacky. Also, using purpose-built containers like php5.6-apache meant having to go in and break up the entrypoint/cmd, and I had several issues overriding those and adding in my own bits successfully.
Inspecting the containers as I built them was tedious
See point 2 above. It was incredibly hard to know if what I was doing was achieving the desired result. The build process happens and you get an image, and hopefully you didn't make a mistake that means it no longer runs. I did that quite a bit.
I was building on my deploy system directly
Again, there was a problem keeping track of what was working and what was not. I had to frequently clear the system of images and start the build over to validate the process.
Revised Approach with Ansible-Container
I started a new repo called https://github.com/bgeesaman/netpxemirror-ac and implemented a simple helper shell script, orc, to help me with using Ansible-Container. I edited the ansible/container.yml and the ansible/main.yml to create a single container called maas from the phusion/baseimage:
container.yml
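As a rough illustration, a minimal container.yml for a single maas service built from the phusion/baseimage might look something like this. The version key, the image tag, and the port mappings (HTTP for the mirror, TFTP, and DHCP) are assumptions for the sketch, not the exact contents of the repo:

```yaml
# ansible/container.yml (sketch; version, image tag, and ports are assumptions)
version: "1"
services:
  maas:
    image: phusion/baseimage:latest
    ports:
      - "80:80"      # mirror content over HTTP
      - "69:69/udp"  # TFTP
      - "67:67/udp"  # DHCP
    command: ["/sbin/my_init"]
```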
In the main.yml, I found it necessary to keep the hosts: all section intact from the example and just edit the per-"host" steps. The name maas from container.yml becomes the "host" in main.yml. Taking a page from previous work with Packer and Ansible, I separated things into a role called maas. Note that the role name has no relation to the "host" name, but it needs to exist under the ansible directory in a roles folder.
main.yml
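A minimal sketch of what this main.yml could look like, with the hosts: all section kept from the generated example and the maas "host" pointed at the maas role (the gather_facts setting is an assumption carried over from the example):

```yaml
# ansible/main.yml (sketch)
- hosts: all
  gather_facts: false

- hosts: maas
  roles:
    - maas
```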
I performed an ansible-galaxy init inside the ansible/roles/maas directory and began editing my tasks/main.yml and my vars/main.yml.
Stepping Through the Role
When Ansible runs a role, it uses the variables, files, templates, and tasks contained in that role when executing tasks/main.yml. This gives a nice structure to hold the files needed to build this container.
CAVEAT
I have to comment out the service isc-dhcp-server start entry in /etc/my_init.d/30_dhcp on my workstation in order to run/test locally. This is because the subnets in the dhcpd configuration file don’t match any local interfaces. Also, this container will give out leases to systems talking on the same subnet as my workstation when running/testing. This might not be what you want.
tasks/main.yml
Because this is a Debian-based image, updating apt is a common first step. Next, it installs some packages listed in vars/main.yml. Then, it walks through separate task files for installing the yum mirror pieces, the syncing pieces, the TFTP server, and the DHCP server. Finally, I drop in the last "init" script with some basic shell stuff that looks a lot like what I was doing in my entrypoint scripts in my Dockerfiles.
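A hedged sketch of how such a tasks/main.yml might be structured. The included file names other than dhcp.yml, the maas_packages variable, and the init script name are illustrative assumptions:

```yaml
# ansible/roles/maas/tasks/main.yml (sketch)
- name: Update the apt cache
  apt:
    update_cache: yes

- name: Install the packages listed in vars/main.yml
  apt:
    name: "{{ item }}"
    state: present
  with_items: "{{ maas_packages }}"

# Separate task files per piece of functionality (names other than dhcp.yml are illustrative)
- include: mirror.yml
- include: sync.yml
- include: tftp.yml
- include: dhcp.yml

# Drop in the final "init" script that phusion/baseimage runs at start (name is illustrative)
- name: Install the init script
  template:
    src: maas_init.sh.j2
    dest: /etc/my_init.d/99_maas_init
    mode: 0755
```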
Easing the Build/Test Process
I wrote a really quick shell script in the root of the repo called orc, which is very rough right now and specific to my needs. However, it means I can do a ./orc build followed by an ./orc test followed by an ./orc deploy as needed. Here are the actions orc provides at a glance:
orc build
Runs ansible-container build
orc buildclean
Runs ansible-container build --from-scratch to rebuild from a clean starting point. Useful if large changes to packages and scripts are made in the role.
orc run
Runs ansible-container run locally. This variation uses a while loop as the init script instead of the "production" init script; in my case, it takes the place of /sbin/my_init from the phusion/baseimage.
orc test
Finds the id of the running container, runs the chef/inspec docker container and attaches to it to run the test suite. Thanks to the Chef folks for making inspec so easy to use/install.
orc deploy
A bit of custom glue that sends the latest image over to my deploy system in docker-compose format and starts it there.
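As a rough illustration of that deploy format, a minimal docker-compose file on the deploy system might look something like this. The image name and the host networking choice are assumptions for the sketch:

```yaml
# docker-compose.yml on the deploy system (sketch; image name is an assumption)
maas:
  image: netpxemirror-ac-maas:latest
  restart: always
  # DHCP relies on broadcast traffic, so host networking is the simplest option here
  net: "host"
```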
What’s Better?
Well, this approach pretty much provides the same functionality as before, but it’s much easier to adjust/tweak and maintain. Here’s where things stand after moving to this method:
Lack of central inventory of systems and asset attributes
All of that data is now stored in the maas role's vars/main.yml in a format that Ansible can parse and loop through.
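A hedged sketch of how that inventory data might be laid out; the variable names, package list, and host entries are all illustrative, not the actual contents of the role:

```yaml
# ansible/roles/maas/vars/main.yml (sketch; names and values are illustrative)
maas_packages:
  - apache2
  - tftpd-hpa
  - isc-dhcp-server

maas_hosts:
  - name: node01
    mac: "52:54:00:aa:bb:01"
    ip: 192.168.1.101
  - name: node02
    mac: "52:54:00:aa:bb:02"
    ip: 192.168.1.102
```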
Hardcoded paths and configuration files/settings in Docker containers
All of the key variables and path names have been separated into variables, again stored in the maas role's vars/main.yml. Now, changing the names of files and directories is a trivial exercise as needs change.
Hardcoded DHCP Leases
These are generated during the templating run by Jinja2 filters in the dhcpd.conf.j2 template stored in the maas role's templates folder and called by the dhcp.yml task.
Using the same variable storage in the role's vars/main.yml, the ks.cfg.j2 template generates all of the per-host kickstart files.
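A hedged sketch of how the dhcp.yml task and the kickstart templating might tie those templates to the host data in vars/main.yml; the maas_hosts variable and the kickstart output path are assumptions:

```yaml
# ansible/roles/maas/tasks/dhcp.yml (sketch)
- name: Generate dhcpd.conf with the per-host leases
  template:
    src: dhcpd.conf.j2
    dest: /etc/dhcp/dhcpd.conf

# The kickstart templating may live in another task file in the real role;
# it is shown here only to illustrate the per-host loop (output path is an assumption)
- name: Generate a kickstart file per host
  template:
    src: ks.cfg.j2
    dest: "/var/www/html/ks/{{ item.name }}-ks.cfg"
  with_items: "{{ maas_hosts }}"
```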
SELinux is disabled on deploy
Not yet addressed, but it should be a simpler debugging exercise now that it's a single Docker container running on the deploy system.
One version of one operating system supported
Not yet addressed, but it’s easier to add support now that all configuration files are templated with Ansible/Jinja2.
Logging from the containers
The phusion/baseimage runs a syslog daemon out of the box, but that log output currently is not sent anywhere.
It wasn’t very “DRY”
Instead of three separate containers built from two different base images, it's now a single Debian-based container running three key services.
There was no testing of the container contents
Adding inspec testing means I can now confidently add test coverage and perform testing during the build process in a few seconds instead of manually validating functionality.
Inspecting the containers as I built them was tedious
The additional insight gained from Ansible's debugging features and the logging output as the play runs means I can diagnose where something went wrong much more quickly. Ansible does a decent job of capturing the error message when things go sideways, and it spits it out immediately.
I was building on my deploy system directly
Using a bit of docker and ssh glued together with some docker-compose, I have the build and test processes running on my workstation and the final container running on the deploy system.