BattleHack Boston

This past weekend saw PayPal’s BattleHack run from IsoBar, upstairs in South Station in Boston! Having never even heard of BattleHack before, I was in for a surprise at how this hackathon would be managed.

The event was a 24-hour hackathon running from 1pm Saturday until 1pm Sunday, with the aim of building an application that would benefit the community; to be in the running for the prizes, it also needed to integrate PayPal.

Some things really made this hackathon stand out:


The Food

I feel obliged to post this first as it was excellent! A constant stream of new items would appear throughout the hackathon – breakfast, lunch, and dinner. Beers and waffles. Crazy caffeinated items at around 2am.

Development Support

An awesome aspect of this hackathon was the involvement of both PayPal and the sponsors. There was always someone there from the company to help you if you ran into issues integrating their code into your application. Rather than pulling your hair out reading an API for the first time, these guys vastly simplified the process. I’d love to see more hackathons taking this approach in the future.

Non-Hack Stuff

How to describe this… everything else falls into this giant category – from massages (!) to an area with inflatable chairs for napping, to Justin Woo and his amazing ability to keep people in the game! Justin was constantly wandering the floor and coming up with things to keep people awake and energized.

Our Hack

Given all this support, Rahul and I hacked our way around Android and Ruby to build an app we called Empowered Locals. The idea was an application with which someone walking around their local neighborhood could report an issue and try to raise funds from the community to fix it.

With the two of us, I have to admit we got a lot of coding done! There were times we were flying high and times where there may be marks left from banging heads against tables – like having a test fail constantly for a good half hour or more before realizing we were not passing the data in correctly at all…

PayPal integration was very nice since they provided a sample application to base ours on. While the plan at the time was to simply move the money into a separate PayPal account, I see now we could have created an agreement with the donors so that when someone finally steps forward to announce the issue has been fixed, the money is transferred. Certainly more reading up on the PayPal use cases is required!

The Winners

It was great to see how polished some of the applications were after only 24 hours of coding! I’ll be very interested in seeing if any of the applications continue on to be fully fledged apps that go into use.

The winners were:

1. BUtiful Bois – Two BU students who wrote a mobile application providing security measures for walking home at night, including alerting friends if you are not home within 30 minutes and providing the latest information from the local police. Sadly our demo was right after these guys, so I was not paying as much attention!

2. Cannery – A group who melded hardware and software together for an important community project. They used Android and several shields to graph out data sets across Boston, including temperature, noise level, etc.

3. Battlestars – A very organized group that developed an app for use with charity runs, whereby donors get alerted as the runner passes set milestones along the course.

In summary, this was a blast and I hope more people sign up to attend this series!

The Joke that is UAT


One of the final stages of development tends to be UAT – User Acceptance Testing, where end users are presented with the final product and asked to test and sign off on it.

In a recent session it struck me how ludicrous it is to combine acceptance testing with system testing. We expect an end user to instantly identify not just UX problems but formula problems in a very fast half-hour meeting. Without being intimately familiar with the underlying data, how could they spot every issue – noticing 5.0% when it should be 7.4%?

System testing is taken care of by unit tests and ATDD, in the hope that any issues with the underlying logic will be exposed and corrected. These tests should be based on the requirements the team received, and those requirements should have been signed off by the business sponsor.
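To make this concrete, here is a minimal sketch of what "tests based on the requirements" can look like. The formula, names, and figures are hypothetical, not from any real project; the point is that each assertion encodes one line of our interpretation of the signed-off requirement.

```python
# Hypothetical example: a rate formula implemented from a written requirement.
def effective_rate(base_rate: float, discount: float) -> float:
    """Our interpretation of the requirement: the discount reduces the base rate."""
    return round(base_rate - discount, 3)

# Unit tests derived from that interpretation. If the business sponsor meant
# something different, these numbers are where the disagreement becomes
# visible and discussable -- long before a half-hour UAT session.
assert effective_rate(0.074, 0.0) == 0.074   # 7.4% with no discount
assert effective_rate(0.074, 0.024) == 0.05  # 5.0% after a 2.4% discount
```

The value is not the arithmetic itself, but that the expected numbers are written down somewhere a business user can review them outside of a rushed meeting.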

Of course, what we tend to find is that users submit requests that read more like wish lists than well-defined requirements. As developers work on these requirements, we start creating unit tests based on our interpretation of them. If there is disagreement on the team about the interpretation, we seek out the business user to confirm.

Yet if there is no disagreement on the interpretation, the formulas get implemented and the first time the user gets a chance to spot an issue is during UAT. Now we have a bug in the system that got full sign-off from everyone and will remain until someone notices the difference.

So now the question is how we could have seen this coming and been better prepared, to prevent last-minute changes or introduced bugs…

The old encapsulation diet is back in fashion

A major coding principle is encapsulation – the basic idea is that you want two services to communicate with the minimum number of dependencies; this is known as loose coupling.

One way to think of this is a restaurant – the waitress takes your order; she needs to know the order number and the customers, and she hands the order off to the cook, who prepares the meal. The cook does not need to know anything about the customers (ok, barring allergies, special requests… :)). This information, the customers, is encapsulated by the waitress!

Each layer of the networking protocol stack encapsulates the data of the layers above it. We can look at the four-layer network stack:

[Figure: UDP encapsulation through the four-layer network stack]

This provides a huge benefit in terms of design, allowing each layer to be programmed separately. When a message is received by a layer it is broken up into two sections – the header and the payload.

The payload contains all the data for the layers above ours, while the header is constructed by the corresponding layer on the sender’s machine. The header lays out the information that layer uses to take whatever action is required – for instance, the IP header contains the destination IP address, and the transport-layer header stipulates the port.

We do not need to know about the logic in the layers above and below – if data is coming from above, we simply add it all into the payload section. This loose coupling means the network stack can be constructed from separately developed layers, each of which can be upgraded independently of the others.
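A toy sketch of that idea, with made-up text headers rather than real packet formats: each layer treats everything handed down from above as an opaque payload and prepends only its own header.

```python
# Simplified, illustrative encapsulation -- not real IP/UDP header layouts.

def transport_send(payload: bytes, port: int) -> bytes:
    header = b"PORT:%d|" % port              # transport header: destination port
    return header + payload

def ip_send(segment: bytes, dst_ip: str) -> bytes:
    header = b"IP:" + dst_ip.encode() + b"|" # network header: destination address
    return header + segment                  # the transport segment is just payload here

message = b"hello"
packet = ip_send(transport_send(message, 8080), "10.0.0.1")
print(packet)  # b'IP:10.0.0.1|PORT:8080|hello'
```

Note that `ip_send` never inspects the transport header – it only wraps it, which is exactly the loose coupling described above.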

Think about your laptop – if a network cable is plugged in, the physical layer uses the 802.3 (Ethernet) protocol, while if it is using a wireless card it implements an 802.11 protocol. The layers above do not need to know how the data is being transmitted, so when another transmission method becomes available, only the physical layer needs to be upgraded.

[Figure: 802.11 in the network stack]

So, the stack is all about encapsulation!

Multiplexing of the Past

I finally understand the old problem of trying to balance accessing the internet and leaving the telephone line open for phone calls, and how it was eventually resolved.

The basis of the problem is that there is a single physical copper line coming into the household. With a single line, we were limited to sending information from one end to the other – think of the switchboard operator who had to manually connect two lines!

Eventually the operator was put out of a job by automatic exchanges, which provided the first dial tone. Still, we had the problem that a single line could only carry a single signal at a time.

In comes multiplexing! We have a limited resource, the telephone line, which we want to share, so the concept of multiplexing was created: the shared resource is split up, but everyone who uses it has no idea others might also be using the line.

[Figure: time-division multiplexing]

There are many types of multiplexing:

  • Synchronous Time-Division Multiplexing (STDM)
  • Frequency-Division Multiplexing (FDM)
  • Statistical Multiplexing

STDM is one of the more basic ideas: time is simply broken up into slots and assigned round-robin to everyone who wants to use the line – so maybe I only get to use the line every X milliseconds.

FDM is slightly more complex, and something you will recognize as one of the solutions implemented so we nerds could continue to connect to the internet. The idea is that your voice and hearing have a very specific frequency range, outside of which you can’t hear – think dog whistle.


So the brilliant idea was that a huge range of frequencies was being left unused, and that if we transmitted our data in another band, we could actually transmit both together. So again we divide the allocated frequency band among the users who want it and restrict each to sending in their own band.
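The band division itself is simple arithmetic. Here is a sketch with made-up frequency numbers (real DSL band plans are not evenly split like this):

```python
# FDM sketch: divide a usable band equally among users; each user may only
# transmit inside its own sub-band, so the signals never collide.

def allocate_bands(low_hz: float, high_hz: float, users: int):
    width = (high_hz - low_hz) / users
    return [(low_hz + i * width, low_hz + (i + 1) * width) for i in range(users)]

# e.g. one user keeps the bottom band while two others share the rest
print(allocate_bands(0, 3000, 3))  # [(0.0, 1000.0), (1000.0, 2000.0), (2000.0, 3000.0)]
```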

When DSL broadband came in, you needed to go around to all the phones in the house and add a DSL filter to each line, to split out these frequencies, since the phones would not know what to do with all this extra information coming in at the higher frequencies.

This works up to the point where you want to scale the system, which is why we have statistical multiplexing. It is based on STDM, but sets an upper bound on the time each sender is allocated, and it is the origin of the network packet! Messages are broken up into these bounded-size network packets, and at the line each one is assessed on a packet-by-packet basis to prioritize it.
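The key move – bounding how long any one sender can hold the line – amounts to chopping each message into packets of at most some fixed size. A minimal sketch, with a hypothetical 4-byte bound chosen only so the output is easy to read:

```python
# Statistical multiplexing sketch: break each message into bounded-size
# packets so no sender can monopolize the line, and the line can then
# schedule transmission packet by packet.

MAX_PACKET = 4  # hypothetical payload bound, in bytes

def packetize(message: bytes) -> list:
    return [message[i:i + MAX_PACKET] for i in range(0, len(message), MAX_PACKET)]

print(packetize(b"hello world"))  # [b'hell', b'o wo', b'rld']
```

Once everything is in packets, the line can interleave packets from different senders on demand instead of reserving idle slots the way pure STDM does.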

This is all the responsibility of the physical layer in the network protocol stack, though it is only one of its many responsibilities! More to come…