Interesting Stuff on the Web Feb 1st-7th


Docker has started talking about a new framework it calls Containers-as-a-Service. There aren't many details out yet, beyond a white paper, but it sounds like it will be a complete integration of the different Docker services. More information will be provided in an upcoming webinar on Feb 16th at 1 PM EST!


Finally, this week brought many awesome steps forward for Docker Compose, including the ability to set custom IPs and host aliases on containers! The Splunk integration should also help Docker get more use in the enterprise.


With the recent release of AWS Certificate Manager and the ability to get SSL certificates for free, the natural assumption is that you would use them everywhere! Ryan Brown points out that the integration between AWS CloudFront and Certificate Manager isn't quite there yet – and then provides the full set of code for a new Type that supplies the integration!

This wasn't Ryan's only awesome post – he also provided another in-depth post on implementing AWS API Gateway as a proxy passthrough to other APIs, using AWS Lambda to keep the costs down!

Finally, to wrap up the AWS stuff, Puppet Labs released a new whitepaper: they have been working to make managing nodes in the cloud easier by generating certificates based on the instance ID and keeping track of running instances.

Puppet AWS Integration


Git hooks are very powerful, and this post really starts diving into some of the possible use cases! I really like the post-checkout hook that shows the branch's current build status.
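As a minimal sketch, that post-checkout idea could be a small Python script dropped into `.git/hooks/post-checkout` – the CI endpoint here is a made-up placeholder, so point it at whatever your build server actually exposes:

```python
#!/usr/bin/env python3
# Sketch of a .git/hooks/post-checkout hook that reports build status.
import subprocess
import sys

# Hypothetical CI status endpoint - substitute your own build server.
CI_STATUS_URL = "https://ci.example.com/status/{branch}"


def status_url(branch: str) -> str:
    """Build the (hypothetical) CI status URL for a branch."""
    return CI_STATUS_URL.format(branch=branch)


def current_branch() -> str:
    """Ask git for the short name of the branch we just checked out."""
    result = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    # Git invokes post-checkout with: previous HEAD, new HEAD, and a
    # flag that is "1" for a branch checkout (vs. a file checkout).
    if len(sys.argv) > 3 and sys.argv[3] == "1":
        branch = current_branch()
        print(f"Checked out {branch}; build status: {status_url(branch)}")
```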

Git Hooks

GitHub posted about submodules – linking and embedding projects within one repo. As they point out, embedding external dependencies is an edge case. I hadn't even realized that git has commands to make this easier to manage, and thankfully the post goes step by step through implementing this with an existing project.

Finally, one of the best things I saw this week was Git Large File Storage. Deciding what should go into source control and what is an artifact was always a pain. Turns out we could have just been using this: you can configure certain file types to be stored in LFS while the rest go to your repo!
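For reference, routing a file type into LFS is a one-liner – `git lfs track "*.psd"` (the pattern is just an example) records the routing in `.gitattributes` with a line like:

```
*.psd filter=lfs diff=lfs merge=lfs -text
```

Everything matching the pattern then goes through the LFS filter, while the rest of the repo is stored as normal git objects.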


After college I rarely think about the underlying structures behind my code – the network stack, how the code is converted into assembly – which is what made this post about firewalls and TCP sequence numbers so interesting.

Laura Frank posted a great introduction to Go concurrency patterns, starting off with what concurrency is and how it is implemented, and ending with a couple of great links, including this one to help you visualize concurrency!

Go Concurrency Fanning
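The post's examples are in Go, but the fan-in pattern it visualizes – several producers merging onto one channel – can be sketched in a few lines of Python using a thread-safe queue in place of a Go channel:

```python
import queue
import threading


def fan_in(sources):
    """Merge items from several producer threads onto one queue (fan-in)."""
    out = queue.Queue()

    def pump(items):
        # Each producer pushes its items onto the shared output queue.
        for item in items:
            out.put(item)

    workers = [threading.Thread(target=pump, args=(src,)) for src in sources]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # All producers have finished, so the queue can be drained safely.
    return [out.get() for _ in range(out.qsize())]


# Items from both sources end up interleaved on the single output queue;
# arrival order across sources is non-deterministic, but nothing is lost.
merged = fan_in([[1, 2, 3], [4, 5, 6]])
print(sorted(merged))  # [1, 2, 3, 4, 5, 6]
```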

Cool stuff

Well, now for some stuff that just doesn't really fit anywhere else. First off are these awesome eagles… sorry… anti-drone devices 😉

Anti Drones

You don't need to worry that this will boost the eagles' confidence enough to take us on – thankfully science has thought ahead, and we now have exoskeletons to combat the future eagle army!

This started off sounding awesome – AltspaceVR, the Second Life of the VR realm, had just been released for the Gear VR. Sadly it turns out that this is only for the consumer edition – no joy for the Note 4 users.

Now, for anyone who likes LEDs and making Arduino LED cubes, you will love the 512-LED Tittle and its awesome implementation! Hopefully they set it up with IFTTT integration – using it as a build light could be great fun 😉

And finally, for anyone who really wants to excel as a developer, this Monday-to-Friday guide is a life saver!

Those who cannot remember the past are condemned to repeat it…

I threw some slides together when discussing database rollbacks and thought I would post them!

Database Rollbacks

A painful process that no database enjoys – trying to roll back your changes.

Installing code to the database via a deployment tool.

All our changes are bundled up and installed to the Database via our deploy tool – so far so good!

The database install was unsuccessful, resulting in uncompiled packages.

Crap – something went wrong and the database has been poisoned!

A signpost that a rollback is required.

Stop everything – we've got to pull that new code out.

The old version of code is installed over the code just installed.

The old version kicks the yuppie new package out – after all, back in its day there was no problem.

A healthy database

Excellent – everything is working again! Now it's time to figure out what happened and how to ensure it doesn't happen again.

Database Source Control Management

A topic that generally ends up with people pulling their hair out! I hope to document my own experiences with this beast, in the hope that someone else may benefit and avoid some of the pitfalls we fell into.

To start off, we must first recognize that there are two distinct types of objects within Oracle – those you can create and replace, and those you can only create or drop. Procedures, functions, packages, triggers, views, and many more fall into the replace category; tables are the major consideration for create or drop. We must cater for tables and the like slightly differently – while we can let the SCM's history manage replaceable objects, we must separate out our creation and alteration scripts for non-replaceable objects.


When working with objects I am not a fan of stipulating the object type in the object name – with so few characters available, we end up fighting for space to fit into company naming standards. Thankfully file extensions are already standardized, giving us a basis for determining the object type. A short list can be seen here:

Object Type      File Extension
Table            .tbl
Package Spec     .pks
Package Body     .pkb
Trigger          .trg
View             .vw
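Since deploy tooling will route files by their extension, a small lookup like the following keeps the mapping in one place – this is an illustrative sketch, and it also records whether each object type supports CREATE OR REPLACE, per the replaceable/non-replaceable distinction above:

```python
import os

# Extension -> (Oracle object type, supports CREATE OR REPLACE?).
# Tables are create/drop only; the PL/SQL objects are replaceable.
EXTENSIONS = {
    ".tbl": ("TABLE", False),
    ".pks": ("PACKAGE SPEC", True),
    ".pkb": ("PACKAGE BODY", True),
    ".trg": ("TRIGGER", True),
    ".vw":  ("VIEW", True),
}


def classify(filename):
    """Return (object type, is_replaceable) for a script filename."""
    ext = os.path.splitext(filename)[1].lower()
    return EXTENSIONS.get(ext, ("UNKNOWN", False))


print(classify("employees.tbl"))     # ('TABLE', False)
print(classify("employee_api.pkb"))  # ('PACKAGE BODY', True)
```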

Now we get down to the actual SCM design! Through experimentation I found it best to mirror the database layout in the file system: we look at one database at a time, within each database you have a set of schemas, and within each schema you have your objects. To clean up the schema level we added an artificial layer that splits the objects into DDL, DML and PL/SQL – primarily for readability.


Another method would be to organize around the database namespaces – but I was happy to distinguish my table index from my table create script based on the file extension.
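The layout described above can be sketched as a small path-building helper – the database, schema, and category names here are illustrative:

```python
from pathlib import Path

# Mirror the database layout in the repo:
#   <database>/<schema>/<category>/<object><extension>
# where category is the artificial DDL / DML / PL/SQL readability layer.


def script_path(database, schema, category, object_name, ext):
    """Build the repo path for an object's script.

    e.g. script_path("HRDB", "HR", "PLSQL", "employee_api", ".pkb")
         -> HRDB/HR/plsql/employee_api.pkb
    """
    return Path(database) / schema / category.lower() / f"{object_name}{ext}"


print(script_path("HRDB", "HR", "PLSQL", "employee_api", ".pkb"))
print(script_path("HRDB", "HR", "DDL", "employees", ".tbl"))
```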

While there are many features to consider in the SCM, a lot are determined by your release management plan – how your automated deploy tools interact with the SCM and the database. Do the tools need to apply labels to particular versions, or do they work off a particular branch? Does the SCM support tags that let you insert the version number into comments and commit it to the database? Etc.

One general principle of Source Control Management systems is the ability to branch, and when coming up with a methodology I find it best to consider the most complicated use case. For us that turned out to be multiple developers working on a single database object in a single database instance.

Straight away we know there is an issue, because databases cannot handle synchronized development. We need to stagger the development – this is a project management requirement, but it needs to be understood as a limitation within the SCM, i.e. someone needs to poke the PM about it. So while development team A works on Object A in development and then releases the code to QA, team B can begin building on top of team A's work. This creates a dependency and should rightly be flagged as a risk: any issue with team A's work will require them to go back to development and code a fix, which must then be incorporated into team B's work. It could get awfully messy with looming deadlines.

So to begin our branching strategy, we decided we could not rely on team A hitting all their deadlines and making it into production – we should start with the assumption that they won't make it at all. We need a baseline of the object as it is in production, and with this in mind we propose having the MAINBRANCH of the SCM correspond to production: code is not checked in or merged to this branch until the deploy tool commits the change to production.

With this common baseline, team A and team B both branch off from the same point. Each independently implements their design and checks it in to their respective branch. There must be a threshold, though – at some stage a merge has to occur. Since we are restricted to one instance per environment, we chose QA: only one version of the code may be in QA at any one time. So if team A gets kicked back to development, and the PMs of the two teams agree, team B can proceed into QA, since at this point they do not have any of team A's code. The first team to pass QA may continue on the release schedule into whatever the next environment is – the other team is now responsible for merging their code with the team that has gone ahead.

So if team A found bugs in QA and had to return to development to fix them, and in the meantime team B was cleared by QA, then team A, after fixing their bugs, must merge their object with team B's.

Now as team B goes to production, their code base is checked into the MAINBRANCH. Then as team A goes to production, their code base is checked in – resulting in a continuous incremental delta of code.

One major part of this entire process is communication. You must at all times be aware of which branches contain a copy of a particular object, so you can reach out to the other team to coordinate development efforts. This may be handled by scripts checking branches, or the SCM tool itself may provide some ability to alert you.
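That branch-awareness can be partly scripted. Here is a rough sketch in Python over git (assuming a remote named `origin`; note that `git log --format=%D` only decorates commits that are branch tips, so treat the result as a hint rather than an exhaustive answer):

```python
import subprocess


def parse_refs(log_output):
    """Extract remote branch names from `git log --format=%D` output."""
    refs = set()
    for line in log_output.splitlines():
        for ref in line.split(","):
            ref = ref.strip()
            if ref.startswith("origin/"):
                refs.add(ref)
    return sorted(refs)


def branches_touching(path):
    """Remote branches whose tip commits touch `path` - a coordination hint.

    Run inside a repo clone; `path` is the object's script file,
    e.g. "HRDB/HR/plsql/employee_api.pkb".
    """
    out = subprocess.run(
        ["git", "log", "--all", "--format=%D", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_refs(out)
```

A nightly job that runs this for every changed object and mails the owning teams would cover a lot of the "who else has a copy?" coordination.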

With these comments in mind, hopefully you will be able to leverage this experience and design an SCM process that best suits your group.