Interesting Stuff on the Web Feb 1st-7th


Docker have started talking about a new framework they're calling Containers-as-a-Service. There aren't many details out yet beyond a white paper, but it sounds like it will be a complete integration of the different Docker services. More information will be provided at an upcoming webinar on Feb 16th at 1PM EST!


Finally, this week brought some awesome steps forward for Docker Compose, including the ability to set custom IPs and host aliases on containers! The Splunk integration should also help Docker gain more traction in the enterprise.


With the recent release of the AWS Certificate Manager and the ability to get SSL certificates for free, the base assumption is that you would use them everywhere! Ryan Brown points out that the integration between AWS CloudFront and Certificate Manager isn't quite there yet... and then provides the full set of code for you to create a new Type to provide the integration!

This wasn't the only awesome post by Ryan; he also provided an in-depth post on implementing the AWS API Gateway as a proxy passthrough to other APIs, using AWS Lambda to keep the costs down!

Finally, to wrap up the AWS stuff, there was a new whitepaper by Puppet Labs, who have been working to make managing nodes in the cloud easier by generating certificates based on the instance ID and keeping track of running instances.

Puppet AWS Integration


Git hooks are very powerful, and this post really starts diving into some of the possible use cases! I really like the post-checkout hook that reports the branch's current build status.

Git Hooks

GitHub posted about submodules – linking and embedding projects within one repo. As they point out, this is an edge case: embedding external dependencies. I hadn't even realized that git has commands to make this easier to manage, and thankfully the post goes step by step through a use case of applying this to an existing project.
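As a sketch of the commands involved (using throwaway local repos as stand-ins for real remotes):

```shell
set -e
work=$(mktemp -d) && cd "$work"

# A stand-in "external dependency" repo
git init -q lib
(cd lib && git config user.email d@example.com && git config user.name demo \
  && echo 'lib code' > lib.sh && git add . && git commit -qm init)

# The main project, embedding lib as a submodule under vendor/
git init -q app && cd app
git config user.email d@example.com && git config user.name demo
echo 'app code' > main.sh && git add . && git commit -qm init
git -c protocol.file.allow=always submodule add "$work/lib" vendor/lib
git commit -qm "embed lib as a submodule"

# After a fresh clone you'd pull the submodule contents in with:
#   git submodule update --init --recursive
git submodule status
```

The `protocol.file.allow` override is only needed because the "remote" here is a local path; with a normal HTTPS remote you'd drop it.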

Finally, one of the best things I saw this week was Git Large File Storage. This was always a pain – debating what should go into source control and what is an artifact... Turns out we could have just been using this: you configure it to store certain file types in LFS while the rest go to your repo!
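The setup is just a couple of commands (the file patterns here are examples – pick whichever binary types plague your repo):

```shell
# Requires the git-lfs extension to be installed
git lfs install                  # enable the LFS hooks for your user (one-time)
git lfs track "*.psd" "*.mp4"    # route these types to LFS via .gitattributes
git add .gitattributes
git commit -m "Track large binaries with Git LFS"
```

From then on, matching files are stored as small pointer files in the repo while the actual content lives in LFS.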


After college I rarely think about the underlying structures behind my code – the network stack, how the code is converted into assembly... That's what made this post about firewalls and TCP sequence numbers so interesting.

Laura Frank posted a great introduction to Go concurrency patterns, starting off with what concurrency is and how it is implemented, and ending with a couple of great links, including this one to help you visualize concurrency!

Go Concurrency Fanning

Cool stuff

Well, now for some stuff that just doesn't really fit anywhere else... and first off are these awesome eagles... sorry... anti-drone devices 😉

Anti Drones

You don't need to worry that this will boost the eagles' confidence enough to take us on – thankfully science has thought ahead, and we now have exoskeletons to combat the future eagle army!

This started off sounding awesome – AltspaceVR, the Second Life of the VR realm, had just been released for the Gear VR... Sadly it turns out that this is only for the consumer edition, and no joy for Note 4 users.

Now, for anyone who likes LEDs and has tried making Arduino LED cubes, you will love the 512-LED Tittle and its awesome implementation! Hopefully they set this up with IFTTT integration – using it as a build light could be great fun 😉

And finally, for anyone who really wants to excel as a developer, this Monday-Friday guide is a life saver!

A Complete DevOps Pipeline Demo Part 1

At work we've discussed the tools and implementations we want, yet we often talk about each one individually. This makes it hard to evaluate what the fully implemented pipeline would be like to work with.


So to give an idea, I'm pulling together multiple Docker containers and connecting them to provide a local infrastructure to test with. This is primarily built around the idea of ChatOps, which has two primary benefits:

  1. Developers don't need to log in to multiple services, but can orchestrate everything from a single location
  2. The team will gain cohesion as they learn what everyone else is working on, as status updates come in from the different services, etc. Osmosis!


I’m going to walk through several components:

Docker Machine

I'm running this on OS X, so I'm using VirtualBox.

With the plan to run all these containers off a single docker-machine, I'm boosting the memory to 8 gigs. If this is happening behind a proxy, set the proxy information and the no_proxy info. Setting the no_proxy info may take a few tries – set up the machine, then get its IP via:

docker-machine create \
-d virtualbox \
devops

docker-machine ip devops

For the rest of the article I'll assume that IP. Now run the following command with everything together:

docker-machine create \
-d virtualbox \
--virtualbox-memory 8192 \
--engine-env HTTP_PROXY=<proxy host>:<proxy port> \
--engine-env HTTPS_PROXY=<proxy host>:<proxy port> \
--engine-env http_proxy=<proxy host>:<proxy port> \
--engine-env https_proxy=<proxy host>:<proxy port> \
--engine-env no_proxy=<docker machine ip> \
devops
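Once the machine is up, point your local Docker client at it so the docker commands that follow actually hit that machine:

```shell
# Export DOCKER_HOST and friends into this shell session
eval "$(docker-machine env devops)"
# The machine's IP -- all the services below are reached on this address
docker-machine ip devops
```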



Mattermost

The first step is setting up the central chat server. Alternatively you could use Slack or HipChat, but since I work behind a proxy, it is easier to have the server behind it too and call outside.

docker run --name mattermost -d --publish 8065:80 mattermost/platform
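A quick sanity check that the server is reachable before moving on (the machine name devops matches the one created above):

```shell
# Port 8065 on the machine's IP maps to the container's port 80
curl -sSf "http://$(docker-machine ip devops):8065" > /dev/null \
  && echo "Mattermost is up"
```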

Log in to the server at port 8065 on your docker-machine's IP and run through the initial setup.


Now go to System Console:

Mattermost System Console Menu Select

Under the Service Settings you will want to enable Incoming Webhooks, Outgoing Webhooks, and Enable Overriding Usernames from Webhooks:

Mattermost Webhooks Enabled

Click Save at the bottom and switch back to your channel:
Mattermost Channels
Go to your Account Settings – we need to set up the webhooks to work with our bot!
Mattermost Account Settings
Switch to the Integrations tab, click Incoming Webhooks, select the channel you want, and click Add:
Mattermost Account Incoming Webhook
Right now I'm setting this for the default channel – in an actual implementation I'd have a channel per project and potentially generic service channels. Note down this URL; we will use it with Hubot.
Now quickly back to the terminal to run:
docker exec mattermost ifconfig
You should see the IP address of the container on the docker-machine. Now we are going to increment it by one, as the next container we kick off will register the next address – and that will be our chat bot. Whatever IP the Mattermost container returned, the bot will receive the next one.
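If you'd rather not parse ifconfig output, you can also ask Docker for the container's IP directly:

```shell
# IP of the mattermost container on the default bridge network
docker inspect -f '{{ .NetworkSettings.IPAddress }}' mattermost
```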
Now, similar to the Incoming Webhooks, go to Outgoing Webhooks, set the channel and a trigger if you want to, and set the callback URL to the bot's address from above:
Mattermost Outgoing Webhook
Click Add and we are done with Mattermost!


Hubot

Mattermost is an alternative to Slack that you can manage yourself, and being built as an alternative, it is made to integrate well with components built for Slack. Hubot is a chat bot that you can develop against, and thankfully someone has created an adapter for Mattermost.
I cloned the adapter to a local machine, called the bot Gort, and built a Docker image from it, based on an existing Node.js image.

You will want to update the Dockerfile below: set MATTERMOST_INCOME_URL to the Incoming Webhook URL, and set MATTERMOST_TOKEN to the token from the Outgoing Webhook.
# Dockerfile to build the Mattermost bot container
# AUTHOR: John Doyle

# Pull base image.
FROM guttertec/nodejs
MAINTAINER John Doyle <>

# Install Yo, Hubot, and Redis
RUN npm install -g yo generator-hubot && \
    apt-get update && \
    apt-get install -y redis-server && \
    rm -rf /var/lib/apt/lists/*

# Mattermost environment variables
ENV MATTERMOST_INCOME_URL <incoming webhook url>
ENV MATTERMOST_TOKEN 63irrdrwejdrtxn1w7wesy6nzh

# Add Hubot
ADD ./Gort/ /Gort

# Define default command: start Redis, then the bot.
CMD /etc/init.d/redis-server start && /Gort/bin/hubot -a mattermost

Save this file as Dockerfile in the parent directory of the bot.

Now we need to build this into an image with the following command (remove the proxy bit if not needed obviously..) :

docker build --build-arg http_proxy=http://<proxy host>:<proxy port> -t mattermostbot .

The first build will take a little while; future changes will be far faster, as Docker only rebuilds the changed layers.

Now that the bot image is built, you can run the bot in a container:

docker run --name chat-bot -d mattermostbot

We can see the containers running:

docker ps

And we can check the logs of the bot:

docker logs chat-bot

Within mattermost, you can now check if the bot is alive:

Hubot responding in Mattermost

I'll continue next with the integration of Jira with Mattermost, and of Jira with Bitbucket and Jenkins.

Interesting Stuff on the Web Jan 15th-31st

Mini Metro

This is a game with a beautifully crafted minimalist design. You construct subway routes and see how your design holds up as the city continues to expand!

Yet, as great as the game is, I really enjoyed reading their development blog that shows not just how they developed the game, but how they interact with the wider development community.

VR Cycling

This is a great hack that reminds me of seeing VirZoom at Boston FIG a few years ago. It uses an Arduino to track the bike's motion and feeds that into a Unity game via an iPhone interface. It makes you wonder what else you could repurpose from reality to act as a controller in VR! I only wish more people were developing for the GearVR.

AWS Environment Management

This topic has been pretty interesting to me: how to separate your environments to ensure the best separation of control and enhance your security. That is why I was very surprised to see the AWS Blog come out with a standardized architecture for multiple environments on a single account. I believe the largest issue here is that you're placing all your trust in the Management section and the complex policies you have put in place.

multiple environments one account

I’m a bigger fan of splitting your non-production and production environments out between different accounts and creating a system to migrate data and applications between the two accounts. Having a well designed pipeline that already migrates data and applications between Dev, QA, UAT, etc. should make this a far simpler system to implement.


Space Shuttle Challenger

This week was the 30th anniversary of the terrible Space Shuttle Challenger disaster.

Smart Homes

Smart homes keep on advancing – I've already picked up a Nest thermostat and am looking at the Echo and integrating it with some Philips Hue bulbs... And Dome Alert fits into home protection, providing checks for flood, fire, freeze, and carbon monoxide. I'll be curious to see if they publish an API that you can integrate into IFTTT – so you're not just alerting the emergency services; maybe it contacts your family, or a neighbor if you're elderly. Even if it's not IFTTT, Amazon is looking to find reasons to integrate with your phone, and Dome Alert seems to be heading that way, reaching out for integrations.

Where CAN'T you put technology? This time the good ol' padlock has been reinvented with a technical twist, in the form of needing a thumbprint to unlock it, thanks to TappLock! The part I really like is that you can grant access to others via the app... Sadly, as glossy as it looks, I'd only use this indoors – my bike lock quickly became banged up just from being carried around with the chain!

Google Shakeups

So Google Hangouts has come out with version 7.0 of its app, and apparently they no longer want you to text from it. After forcefully moving texting from Google Voice to Hangouts – which was a nice move once I got over it – breaking up the service now seems more than a little annoying. Speaking of Google services, it looks like Hangouts isn't the only one with a broken relationship: Google Play Games has dropped its requirement for a Google Plus account, sadly another nail in its coffin.


This week we saw both Google Docs finally allowing mobile commenting and an update to GitHub's comment markdown! The easier it is to express yourself, the better for everyone.

Serverless REST API

AWS Lambda and the API Gateway look to be the next big step forward in cloud computing, and entire frameworks are springing up around them. Now it is even easier to get started thanks to Austin Collins' great step-by-step tutorial.

Safari’s Defensive Programming

Well, what I really mean is the lack of defensive programming! Rather than secure, loose coupling between components, it appears that an update to their search component resulted in a bug that crashed the browser. Production bugs happen all the time, but reducing their potential impact is an important development philosophy.

RIP Java Applets

After years, the Java Applet for browsers is finally coming to an end! But if you really miss them (you crazy fool), you can check out the online museum.

People in Tech

This week, three people really stood out for me! PuppetLabs did a great interview with Trisha Gee, who worked at MongoDB and is now a Developer Advocate at JetBrains. Next up was the story of Margaret Hamilton, a young programmer just out of MIT who went to work for NASA and developed the flight software for the Apollo space program! Last, but certainly not least, was Professor Minsky, who founded the MIT AI Lab and led the field, and who sadly passed away this week.

Azure Stack

Azure is a very interesting service developed by Microsoft, designed not only to compete with AWS but also to be able to run on top of it. One issue with developing against cloud services is local testing: you don't want to check in your code and have to wait until it is fully deployed to perform a basic integration test. Microsoft addressed this with the release of the Azure Stack, allowing you to host a private cloud!



Interesting Stuff on the Web Jan 17th – 24th

Cloud Orchestration

As more and more cloud offerings come out, there is the opportunity to diversify an application's infrastructure to gain the particular benefits of each offering. To help make this decision, Google released a tool, PerfKit Benchmarker, to compare the performance of each cloud. Using this tool, Zach Bjornson performed an in-depth analysis of storage performance across Amazon S3, Google Cloud Storage, and Microsoft Azure. An interesting point Zach found was that Amazon and Azure perform best with small files (under 1MB) or streaming, but Google is the clear winner for large files.

Feature Toggles

This is an awesome deployment strategy that I've always wanted to implement. Similar to Blue/Green deployments, the idea is to release features to only a subset of users – commonly used in Canary deployments. A well implemented toggle system can be very powerful, but complex to architect properly. Thankfully, Pete Hodgson has brought out a highly detailed article that walks through the implementation and refactoring.
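The core mechanic can be sketched in a few lines of shell: gate a code path on a flag that ops can flip per environment or per user cohort. The feature names and the `ENABLED_FEATURES` variable here are made up for illustration.

```shell
# Return success if the named feature is in the comma-separated
# ENABLED_FEATURES list (which ops could set per environment/cohort)
feature_enabled() {
  case ",${ENABLED_FEATURES:-}," in
    *",$1,"*) return 0 ;;
    *)        return 1 ;;
  esac
}

ENABLED_FEATURES="new-checkout,dark-mode"

if feature_enabled new-checkout; then
  echo "serving new checkout flow"     # canary users get the new path
else
  echo "serving legacy checkout flow"  # everyone else keeps the old one
fi
```

A real system would pull the flag list from a config service rather than an env var, but the branch point in the code looks much the same.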


CloudWatch Custom Metrics

CloudWatch is an AWS monitoring service that you can use to react and alert based on different metrics within your infrastructure. It is often used with auto scaling groups, firing when certain metrics exceed or drop below defined limits. Codeship has provided a tutorial on how to feed application logs into the monitoring service and use those metrics to trigger alerts. So where before you might have alerted based on CPU usage, now you can trigger an auto scaling policy based on the number of log messages of a specific type.
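As a rough sketch of the idea (the namespace, metric name, and log path are made up, and this assumes AWS credentials are configured), you could publish a count of error lines as a custom metric and then alarm or scale on it:

```shell
# Count ERROR lines in the app log and push the count to CloudWatch
errors=$(grep -c "ERROR" /var/log/myapp/app.log || true)
aws cloudwatch put-metric-data \
  --namespace "MyApp" \
  --metric-name "ErrorCount" \
  --unit Count \
  --value "${errors:-0}"
```

Run on a schedule, this gives CloudWatch a time series it can evaluate just like CPU usage.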

Automated Failure Testing

Netflix is well known for its testing approach – purposefully bringing down services in production to assess how its systems respond. They have even released multiple frameworks, under the name the Simian Army, to automate this. As nerve-wracking as this approach is, it instills in every developer in the company the need to question how each line of code they write would react to any potential failure. Netflix recently implemented a proof of concept of a testing framework called Molly, devised at Berkeley. The framework examines successful requests to identify potential points of failure; you fail those points, analyze how the system reacts, and continue until all potential failure points are mapped. After that comes the manual task of reinforcing those points!

Datacenter Decommissioning

While AMD is just releasing its Seattle chipset and Qualcomm is starting development of its own, people are implementing hybrid clouds as they realize the cost savings of giving up part of their data center and moving to the ever-expanding public cloud. This is why I particularly like this video, shown at the AWS NY Summit back in 2014, of Conde Nast tearing down all their servers!

Developer Experience Design

I'm sure everyone has heard of UXD, User Experience Design, and the science and patterns that have developed around it over time. Developer Experience Design is the equivalent for platforms that other developers will build upon: we build APIs based on our own use, creating an interfacing framework. David Dollar runs the DXD unit at Heroku, and he talks about the approach he takes. For anyone who has developed on the Heroku platform, the ease of use they have crafted into the system is amazing. The benefits of a development system designed to enhance the developer, just like a well designed UX, cannot be overstated. David has three main criteria: Is it easy to get started? Is it easy to use? Is it easy to get help? He discusses DXD further with Steve Boak on a new DevOps podcast, Don't Make Me Code.

AWS Certificate Manager

One of the coolest things released this week was the AWS Certificate Manager! This tool simplifies the generation, application, and automated renewal of SSL certificates. It appears to be the first step of Amazon getting into the certificate authority business: while their certificates currently chain to a root authority from Starfield Services, it turns out that Amazon bought them recently. Maybe the release of Let's Encrypt got them to release the Certificate Manager early, before they had a chance to fully incorporate the CA into their systems.

Docker and Jenkins Workflow

There was a great webinar on Continuous Development with Jenkins Workflow and Docker, showing a very nice pipeline. They go through the entire development process – where Jenkins integrates, migrating between environments, etc. The pipeline is entirely coded within the Jenkinsfile, embodying the philosophy of Infrastructure as Code!

Backend-less Angular Apps

A major part of agile development is the feedback loop, constantly ensuring that the development matches the client's expectations. This can be difficult to achieve in multi-tiered applications where the entire infrastructure needs to be assembled. We work around this with different UX techniques to recreate the front end – the part the client actually interacts with – drawings, wireframes, static sites. Nam Tran walks through implementing an incredibly high fidelity front end with a mocked backend.

Internet of Things Security

As more and more devices connect to the net, we integrate them into our lives and eventually become reliant on them. We faced the same security issues when dial-up modems and wifi routers started rolling out. We eventually secured those, but as IoT devices become more prevalent, the security issues return with even more frightening implications – such as a search engine that trawls the internet for open ports...

Virtual Reality

I'm a fan of VR and AR; the advances have been amazing. This week saw the Teslasuit announced, a suit straight out of Neuromancer with its full haptic feedback system! And with VR becoming more mainstream through the release of the GearVR and Oculus, we are seeing an increasing number of offerings – now Funny or Die have a VR sketch at Sundance. Can't wait to see it!

Docker Boston January Meetup – January 19th 2016

This was a terrific meetup that combined hardware, docker, demos, and beer – a really great combo it turns out! VMTurbo hosted the meetup in their Boston offices.


NVBots was represented by co-founder and CTO Forrest Pieper and by Areth Foster-Webster, who migrated their system to Docker. The company started off in a college dorm with a 3D printer the founders fought over access to. An iterative approach followed, resulting in the NVPro™ 3D Printer that they demoed.

The printer is designed for schools at the moment: students can submit jobs, a teacher can approve or deny the requests, and the printer provides a webcam view of each build before it removes the final product from the build plate and processes the next model in the queue!

Where be Docker?

One of the issues the team faced was how to update the machines, which were scattered across different schools.

In comes – the devops for the Internet of Things!

Areth explained how this simple-to-implement service was exactly what the team was searching for, and it introduced them to Docker. Docker allowed the team to develop on their local machines, checking in to their own repo as they went. When they were happy with a version, they would push the code and Dockerfile to a specific group's repo on the service.

A tag relates the git repo to the Dockerfile, and all the instances running that Dockerfile's image are displayed under the tag.

So a check-in behaves similarly to a check-in to Heroku – tests run automatically and the Docker image is built. Once built, it begins a rolling update of all the instances in that tagged group. This is automatic, with no manual intervention required – the agent manages the update behind the scenes!

Not only does the service provide this graceful rolling update, it provides numerous other ways to interact with the containers – you can access the terminal via the browser, see the logs, see the GPS location, set environment variables, etc.

Two of the big takeaways for the team were:
– Rebuilding the container was far faster than manually restarting the device
– Overall testing time was reduced, as they did not need to test both locally and on the Raspberry Pi itself.

EMC Code

Jonas Rosland, who organizes the Docker Boston meetups, stepped up next with a really fascinating topic – how to implement persistent data storage in Docker!

As far as my (admittedly poor) understanding goes, the container's filesystem is ephemeral – unless you mount something in, it can't persist and reuse files across runs. When you kill your container – you kill it good!

Now EMC Code have been working on a solution to this, and they are not alone – indeed folks from Flocker and Blockbridge were in attendance who are defining their own solutions to this very problem.

The EMC Code community have come up with REXRay, a Docker volume driver that interfaces with the underlying infrastructure to create volumes that are stored externally and mounted into the container.

You would first create your volume on, say, EC2, and then use the REXRay driver to mount it at a location within the container. Now the container can use it to persist data. You can bring the container down, start a new container, reconnect the volume, and access the same data!
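A sketch of what that flow looks like with the Docker CLI, assuming REXRay is installed and configured on the host (the volume and container names here are made up):

```shell
# Create an externally backed volume via the rexray driver
docker volume create --driver rexray --name appdata
# Mount it into a container; writes under /data now outlive the container
docker run -d --name writer -v appdata:/data busybox \
  sh -c 'echo hello > /data/greeting && sleep 3600'
# Kill the container, attach the same volume to a new one -- the data persists
docker rm -f writer
docker run --rm -v appdata:/data busybox cat /data/greeting
```

The second container reads back the greeting written by the first, even though the first container is gone.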

This alone was very cool, but Jonas wasn't done. This is a lot of manual setup, so he introduced us to Mesos – a service that normally runs entire frameworks. With another tool, Marathon, it turns out you can tune it to run individual applications on Mesos! As Jonas described these services, you "program towards your datacenter".

With Marathon's API you can start defining all of this setup – have it go out, create the volumes, attach them to the container, add health checks, dependencies, etc.

Lots of options to start playing with!