Interesting Stuff on the Web Feb 1st-7th


Docker has started talking about a new framework, calling it Containers-as-a-Service. There aren’t many details out yet beyond a white paper, but it sounds like it will be a complete integration of the different Docker services. More information will be provided at an upcoming webinar on Feb 16th at 1PM EST!


Finally, this week brought some awesome steps forward for Docker Compose, including the ability to set custom IPs and host aliases on containers! The Splunk integration should also help Docker get more use in the enterprise.
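For a sense of what the custom-IP support looks like, here is a sketch in the v2 compose-file syntax (the image name and addresses are placeholders – double-check the keys against your Compose version):

```yaml
version: '2'
services:
  web:
    image: nginx
    networks:
      app_net:
        # Pin this service to a fixed address on the custom network
        ipv4_address: 172.16.238.10
networks:
  app_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.16.238.0/24
```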


With the recent release of the AWS Certificate Manager and the ability to get SSL certificates for free, the natural assumption is that you would use these everywhere! Ryan Brown points out that the integration between AWS CloudFront and Certificate Manager isn’t quite there yet – and then provides the full set of code for you to create a new resource type that supplies the integration!

This wasn’t Ryan’s only awesome post: he also provided an in-depth walkthrough of implementing the AWS API Gateway as a proxy passthrough to other APIs, using AWS Lambda to keep the costs down!

Finally, to wrap up the AWS items, Puppet Labs released a new whitepaper on making it easier to manage nodes in the cloud by generating certificates based on the instance ID and keeping track of running instances.

Puppet AWS Integration


Git hooks are very powerful, and this post really starts diving into some of the possible use cases! I really like the post-checkout hook that reports the branch’s current build status.

Git Hooks
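A minimal sketch of that idea: a post-checkout hook that announces the branch you landed on. The CI status lookup is a hypothetical placeholder – point it at your own build server.

```shell
rm -rf /tmp/hook-demo && mkdir -p /tmp/hook-demo && cd /tmp/hook-demo
git init -q .
# Install the hook; git runs it after every checkout.
cat > .git/hooks/post-checkout <<'EOF'
#!/bin/sh
# git passes three args; $3 is 1 for a branch checkout, 0 for a file checkout
[ "$3" = "1" ] || exit 0
branch=$(git rev-parse --abbrev-ref HEAD)
echo "Switched to branch: $branch"
# Hypothetical build-status lookup: curl -s "https://ci.example.com/status/$branch"
EOF
chmod +x .git/hooks/post-checkout
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
git checkout -q -b feature    # the hook fires and reports the new branch
```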

GitHub posted about submodules – linking and embedding projects in one repo. As they point out, this addresses an edge case: embedding external dependencies. I hadn’t even realized that git has commands to make this easier to manage, and thankfully the post goes step by step through a use case of applying them to an existing project.
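The embedding step can be sketched like this; a throwaway local repo stands in for the external dependency (the paths are placeholders):

```shell
set -e
rm -rf /tmp/sub-demo
# Create a stand-in for the external dependency.
mkdir -p /tmp/sub-demo/dep && cd /tmp/sub-demo/dep
git init -q .
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "dep init"
# The parent project links the dependency in at a pinned commit.
mkdir -p /tmp/sub-demo/app && cd /tmp/sub-demo/app
git init -q .
git -c protocol.file.allow=always submodule add -q /tmp/sub-demo/dep vendor/dep
git -c user.name=demo -c user.email=demo@example.com commit -q -m "embed dep as submodule"
cat .gitmodules    # records the URL and path of the embedded project
```

After a fresh clone of the parent project, `git submodule update --init` pulls the dependency down at the pinned commit.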

Finally, one of the best things I saw this week was Git Large File Storage. This was always a pain – debating what should go into source control and what is an artifact. It turns out we could have just been using this: you configure certain file types to be stored in LFS while the rest go to your repo!
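In practice that configuration is just a couple of commands (a sketch, assuming the git-lfs extension is installed; the file patterns are examples):

```shell
# Route large binaries through Git LFS instead of the main repo.
if command -v git-lfs >/dev/null 2>&1; then
  rm -rf /tmp/lfs-demo && mkdir -p /tmp/lfs-demo && cd /tmp/lfs-demo
  git init -q .
  git lfs install --local           # enable the LFS hooks for this repo
  git lfs track "*.psd" "*.zip"     # patterns are recorded in .gitattributes
  cat .gitattributes                # everything else still goes to the repo
fi
```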


Since college I rarely think about the underlying structures behind my code – the network stack, how the code is converted into assembly – which is what made this post about firewalls and TCP sequence numbers so interesting.

Laura Frank posted a great introduction to Go concurrency patterns, starting off with what concurrency is and how it is implemented, and ending with a couple of great links, including this one to help you visualize concurrency!

Go Concurrency Fanning

Cool stuff

Well now for some stuff that just doesn’t really fit anywhere else.. and first off are these awesome eagles.. sorry.. Anti-Drone devices 😉

Anti Drones

You don’t need to worry that this will boost the eagles’ confidence enough to take us on – thankfully science has thought ahead, and we now have exoskeletons to combat the future eagle army!

This started off sounding awesome – the second life of the VR realm, AltspaceVR, had just been released for the Gear VR. Sadly it turns out this is only for the consumer edition, so no joy for Note 4 users.

Now, for anyone who likes LEDs and has tried making Arduino LED cubes, you will love the 512-LED Tittle and its awesome implementation! Hopefully they set it up with IFTTT integration – using it as a build light could be great fun 😉

And finally, for anyone who really wants to excel as a developer this Monday-Friday guide is a life saver!

A Complete DevOps Pipeline Demo Part 1

At work we’ve discussed the tools and implementations we want, yet we often talk about each one individually. This makes it hard to evaluate what the fully implemented pipeline would actually be like to work with.


So to give an idea, I’m pulling together multiple Docker containers and connecting them to provide a local infrastructure to test with. This is primarily built around the idea of ChatOps, which has two primary benefits:

  1. Developers don’t need to log in to multiple services, but can orchestrate everything from a single location
  2. The team will gain cohesion as they learn what everyone else is working on, as status updates come in from different services etc. Osmosis!


I’m going to walk through several components:

Docker Machine

I’m running this on OS X, so I’m using VirtualBox.

With the plan to run all these containers off a single docker machine, I’m boosting the memory to 8 GB. If this is happening behind a proxy, set the proxy information and the no_proxy info. Setting the no_proxy info may take a few tries – set up the machine first, then get its IP via:

docker-machine ip devops

For the rest of the article I’ll assume that IP. Now run the following with everything together:

docker-machine create \
-d virtualbox \
--virtualbox-memory 8192 \
--engine-env HTTP_PROXY=<proxy host>:<proxy port> \
--engine-env HTTPS_PROXY=<proxy host>:<proxy port> \
--engine-env http_proxy=<proxy host>:<proxy port> \
--engine-env https_proxy=<proxy host>:<proxy port> \
--engine-env no_proxy=<docker-machine ip> \
devops



The first step is setting up the central chat server. You could always use Slack or HipChat instead, but since I work behind a proxy, it is easier to host the server behind it and have it call outside.

docker run --name mattermost -d --publish 8065:80 mattermost/platform

Log in to the server on port 8065 of the docker-machine IP and run through the initial setup.


Now go to System Console:

Mattermost System Console Menu Select

Under the Service Settings you will want to enable Incoming Webhooks, Outgoing Webhooks, and Enable Overriding Usernames from Webhooks:

Mattermost Webhooks Enabled

Click Save at the bottom and switch back to your channel:
Mattermost Channels
Go to your Account Settings – we need to set up the webhooks to work with our bot!
Mattermost Account Settings
Switch to the Integration tab, and click Incoming Webhooks, select the channel you want and click Add:
Mattermost Account Incoming Webhook
Right now I’m setting this for the default channel – in an actual implementation I’d have a channel per project and potentially generic service channels. Note down this URL; we will use it with Hubot.
Now quickly back to the terminal to run:
docker exec mattermost ifconfig
You should see the IP address of the container on the docker-machine. Now we are going to increment it by one, as the next container we kick off – which will be our chat bot – will register that address. Whatever IP mattermost returned, the bot will receive the next one up.
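A tidier alternative to grepping ifconfig output is to ask the Docker API directly (a sketch; it assumes a running container named "mattermost"):

```shell
# Print just the container's IP on the docker-machine's bridge network.
if command -v docker >/dev/null 2>&1; then
  docker inspect -f '{{ .NetworkSettings.IPAddress }}' mattermost \
    || echo "mattermost container is not running"
fi
```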
Now, similar to the Incoming Webhooks, go to Outgoing Webhooks, set the channel and a trigger if you want one, and set the callback URL to the bot’s address.
Mattermost Outgoing Webhook
Click Add and we are done with Mattermost!


Mattermost is a Slack alternative that you can manage yourself, and since it was built as an alternative, it is made to integrate well with components built for Slack. Hubot is a chat bot that you can develop yourself, and thankfully someone created a Mattermost adapter for it:
I cloned this to a local machine, called the bot Gort, and built a Docker image from it, based on an existing base image.
You will want to update the Dockerfile below: set MATTERMOST_INCOME_URL to the Incoming Webhook URL, and set MATTERMOST_TOKEN to the token from the Outgoing Webhook.
# Dockerfile to build the Mattermost bot container
# AUTHOR: John Doyle

# Pull base image.
FROM guttertec/nodejs
MAINTAINER John Doyle <>

# Install Yo, Hubot, and Redis.
RUN npm install -g yo generator-hubot && \
    apt-get update && \
    apt-get install -y redis-server && \
    rm -rf /var/lib/apt/lists/*

# Mattermost environment variables – replace these with your own values.
ENV MATTERMOST_INCOME_URL <incoming webhook URL>
ENV MATTERMOST_TOKEN 63irrdrwejdrtxn1w7wesy6nzh

# Add the Hubot code.
ADD ./Gort/ /Gort

# Define default command: start Redis, then the bot.
CMD /etc/init.d/redis-server start && /Gort/bin/hubot -a mattermost

Save this file in the parent directory of the bot and call it Dockerfile.

Now we need to build this into an image with the following command (remove the proxy argument if it’s not needed, obviously):

docker build --build-arg http_proxy=http://<proxy host>:<proxy port> -t mattermostbot .

The first build will take 15-20 seconds; future changes will be far faster, as Docker only rebuilds the layers that changed.

Now that the bot image is built, you can run the bot in a container:

docker run --name chat-bot -d mattermostbot

We can see the containers running:

docker ps

And we can check the logs of the bot:

docker logs chat-bot

Within mattermost, you can now check if the bot is alive:

Hubot responding in Mattermost

I’ll continue next with integrating Jira with Mattermost, and then connecting Jira to Bitbucket and Jenkins.

Docker Boston January Meetup – January 19th 2016

This was a terrific meetup that combined hardware, docker, demos, and beer – a really great combo it turns out! VMTurbo hosted the meetup in their Boston offices.


NVBots was represented by co-founder and CTO Forrest Pieper, along with Areth Foster-Webster, who migrated their system to Docker. The company started off in a college dorm with a 3D printer that the founders fought over access to. What followed was an iterative approach that led to the NVPro™ 3D Printer they demo’d.

The machine is designed for schools at the moment: students can submit jobs to the printer, a teacher can approve or deny the requests, and the printer provides a webcam view of each build before it removes the finished product from the build plate and processes the next model in the queue!

Where be Docker?

One of the issues the team faced was how to keep the machines, which were scattered across different schools, up to date.

In comes their answer – DevOps for the Internet of Things!

Areth explained how this simple-to-implement service was exactly what the team was searching for, and it introduced them to Docker. Docker allowed the team to develop on their local machines, checking in to their own repo as they went. When they were happy with a version, they would push the code and Dockerfile to a specific group’s repo on the service.

A tag relates the git repo to the Dockerfile, and all the instances running that Dockerfile are displayed under that tag on the service’s dashboard.

So a check-in behaves much like a check-in to Heroku – tests run automatically and the Docker image is built. Once built, it begins a rolling update of all the instances in that tagged group. This is automatic, with no manual intervention required – the agent manages the update behind the scenes!
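From the developer's side, that whole flow reduces to a git push. A sketch of the shape of it – here a local bare repo stands in for the hosted service's git endpoint, and the remote name is a placeholder:

```shell
set -e
# Stand-in for the hosted service's git endpoint.
git init -q --bare /tmp/deploy-remote.git
rm -rf /tmp/deploy-src && mkdir -p /tmp/deploy-src && cd /tmp/deploy-src
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "app code + Dockerfile"
git remote add deploy /tmp/deploy-remote.git
# On the real service this push would trigger the tests, the image build,
# and then the rolling update of every device in the tagged group.
git push -q deploy HEAD
```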

Not only does the service provide this graceful rolling update, it provides numerous other ways to interact with the containers – you can access the terminal via the browser, see the logs, see the GPS location, set environment variables, and more.

Two of the big takeaways the team found were:
– Rebuilding the container was far faster than manually restarting the device
– Overall testing time was reduced, as they did not need to perform tests both locally and on the Raspberry Pi itself.

EMC Code

Jonas Rosland, who organizes the Docker Boston meetups, stepped up next with a really fascinating topic – how to implement persistent data storage in Docker!

As far as my poor understanding goes, the container is ephemeral: it has no persistent access to the underlying OS file system, so it can’t save and reuse files across restarts. When you kill your container – you kill it good!

Now EMC Code has been working on a solution to this, and they are not alone – indeed, folks from Flocker and Blockbridge, who are defining their own solutions to this very problem, were in attendance.

The EMC Code community has come up with REXRay, a Docker volume driver that interfaces with the underlying infrastructure to create volumes that are stored externally and can be mounted by containers.

You would first create your volume on, say, EC2, and then use the REXRay driver to mount it at a location within the container. Now the container can use this to persist data. You can bring the container down, start a new container, reconnect to the volume, and access the same data!
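Roughly, the flow looks like this (a sketch, assuming the rexray driver is installed and configured for your storage backend; the volume, mount path, and image names are placeholders):

```shell
if command -v docker >/dev/null 2>&1; then
  # Create an externally backed volume via the rexray driver.
  docker volume create --driver=rexray --name=appdata || true
  # Mount it into a container; data written under /var/lib/data persists.
  docker run -d --name app --volume-driver=rexray \
    -v appdata:/var/lib/data myapp || true
  # Kill the container, start another with the same -v flag,
  # and the same data is there waiting.
fi
```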

This alone was very cool, but Jonas wasn’t done. That is still a lot of manual setup, so he introduced us to Mesos – a service that normally runs entire frameworks. Yet with another tool, Marathon, it turns out you can tune it to run applications on Mesos! As Jonas described these services, you “program towards your datacenter”.

With Marathon’s API you can start defining all these setups – have it go out and create the volumes, attach them to the container, and define health checks, dependencies, and so on.
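A minimal sketch of what that looks like: POSTing an app definition to Marathon's `/v2/apps` endpoint. The host, image, and resource values here are placeholders, and the JSON is far from a complete spec:

```shell
# Write a minimal Marathon app definition.
cat > /tmp/app.json <<'EOF'
{
  "id": "/web",
  "cpus": 0.5,
  "mem": 256,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "nginx" }
  },
  "healthChecks": [ { "protocol": "HTTP", "path": "/" } ]
}
EOF
# POST it; Marathon schedules the app on Mesos and keeps it healthy.
curl -s -X POST -H "Content-Type: application/json" \
  -d @/tmp/app.json http://marathon.example.com:8080/v2/apps || true
```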

Lots of options to start playing with!