NVBots was represented by co-founder and CTO Forrest Pieper and by Areth Foster-Webster, who migrated their system to Docker. The company started out in the founders’ college dorm with a single 3D printer whose access they fought over. What followed was an iterative process that culminated in the NVPro™ 3D printer they demoed.
The printer is currently designed for schools: students submit jobs to the printer, a teacher approves or denies each request, and the printer provides a webcam view of every build before it removes the finished product from the build plate and processes the next model in the queue!
Where be Docker?
One of the issues the team faced was how to push software updates to the machines, which were scattered across different schools.
In comes resin.io – the devops for the Internet of Things!
Areth explained how this simple-to-implement service was exactly what the team was searching for, and how it introduced them to Docker. Docker allowed the team to develop on their local machines, checking in to their own repo as they went. When they were happy with a version, they would push the code and Dockerfile to a specific group’s repo on resin.io.
A tag relates the git repo to the Dockerfile, and all the instances deployed with that tag are displayed under it on resin.io.
So a check-in behaves much like a push to Heroku – tests run automatically and the Docker image is built. Once built, resin.io begins a rolling update of all the instances in that tagged group. This is automatic, with no manual intervention required – the resin.io agent manages the update behind the scenes!
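To make the workflow concrete, here is a minimal sketch of what the team’s setup might look like. The base image, app name, and git URL are illustrative assumptions, not NVBots’ actual project – `resin/rpi-raspbian` was a commonly used Raspberry Pi base image on resin.io at the time:

```shell
# Write a hypothetical Dockerfile for a Raspberry Pi app on resin.io.
# Image name and app contents are placeholders for illustration.
cat > Dockerfile <<'EOF'
FROM resin/rpi-raspbian:jessie

RUN apt-get update && apt-get install -y python3
COPY . /app
WORKDIR /app
CMD ["python3", "main.py"]
EOF

# Pushing to the resin.io git remote is what triggers the automatic build
# and rolling update described above. Shown commented out because it needs
# a real resin.io account and device group:
# git remote add resin git@git.resin.io:username/my-printer-app.git
# git push resin master
```

After the push, the resin.io build server builds the image from this Dockerfile and the agent on each device in the tagged group pulls the new version.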
Not only does the service provide this graceful rolling update, it offers numerous other ways to interact with the containers – you can access a terminal via the browser, view the logs, see the device’s GPS location, set environment variables, and so on.
Two of the team’s big takeaways were:
– Rebuilding the container was far faster than manually restarting the device
– Overall testing effort was reduced, as they did not need to run tests both locally and on the Raspberry Pi itself
Jonas Rosland, who organizes the Docker Boston meetups, stepped up next with a really fascinating topic – how to implement persistent data storage in Docker!
As far as my poor understanding goes, a container’s filesystem is ephemeral – writes land in the container’s own writable layer rather than in any durable external storage, so files can’t be saved and reused across containers. When you kill your container – you kill it good!
The EMC Code community has come up with REXRay, a Docker volume driver that interfaces with the underlying infrastructure to create volumes that are stored externally and can be mounted by a container.
You would first create your volume on, say, EC2, then use the REXRay driver to mount it at a location within the container. The container can now use that volume to persist data. You can bring the container down, start a new container, reconnect to the volume, and access the same data!
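A rough sketch of that setup, assuming REXRay is installed with an EC2/EBS backend – the config keys and credentials below are illustrative placeholders, so check the REXRay docs for your release:

```shell
# Minimal illustrative REXRay configuration pointing at the EC2 storage
# driver. Credentials are placeholders, not real keys.
mkdir -p rexray-demo
cat > rexray-demo/config.yml <<'EOF'
rexray:
  storageDrivers:
  - ec2
aws:
  accessKey: AKIA_PLACEHOLDER
  secretKey: changeme
EOF

# With REXRay running as the volume driver, the lifecycle Jonas described
# looks roughly like this (commented out – it needs a Docker daemon and a
# configured backend; volume and image names are made up):
# docker volume create --driver=rexray --name=printdata
# docker run -d --volume-driver=rexray -v printdata:/data myorg/myapp
# docker rm -f <container>                                  # container gone...
# docker run -d --volume-driver=rexray -v printdata:/data myorg/myapp  # ...data survives
```

The key point is that the volume’s lifetime is decoupled from any one container: killing the container leaves the EBS-backed volume, and its data, intact.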
This alone was very cool, but Jonas wasn’t done. All of this is a lot of manual setup, so he introduced us to Mesos – a cluster manager that normally runs entire frameworks. With yet another tool, Marathon, it turns out you can run applications on Mesos! As Jonas described these services, you “program towards your datacenter”.
With Marathon’s API you can define all of these setups – have it go out and create the volumes, attach them to the container, and configure health checks, dependencies, and so on.
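As a sketch, a Marathon app definition is just JSON posted to its REST API. The field names (`id`, `cpus`, `mem`, `instances`, `container`, `healthChecks`) follow Marathon’s app format, but the image name and endpoint below are hypothetical:

```shell
# Write a hypothetical Marathon app definition: two instances of a
# Dockerized service with a TCP health check. Image name is made up.
cat > printer-app.json <<'EOF'
{
  "id": "/printer-backend",
  "cpus": 0.5,
  "mem": 256,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "myorg/printer-backend:latest" }
  },
  "healthChecks": [
    { "protocol": "TCP", "gracePeriodSeconds": 30, "intervalSeconds": 10 }
  ]
}
EOF

# Submitting it to a (hypothetical) Marathon endpoint would look like:
# curl -X POST http://marathon.example.com:8080/v2/apps \
#      -H 'Content-Type: application/json' -d @printer-app.json

# Sanity-check that the definition is well-formed JSON:
python3 -m json.tool printer-app.json > /dev/null && echo "valid JSON"
```

Marathon then keeps the declared number of instances running, restarting containers that fail their health checks – which is the “program towards your datacenter” idea in practice.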
Lots of options to start playing with!