Multiplexing of the Past

I finally understand the old problem of balancing internet access against leaving the telephone line open for phone calls, and how it was eventually resolved.

The basis of the problem is that there is a single physical copper line coming into the household. With a single line we were limited to sending information from one end to the other – think of the switchboard operator who would have to manually connect two lines!

Eventually the operator was put out of a job by automatic exchanges, which provided the first dial tone. Still we had the problem that a single line could only transmit a single signal at a time.

In comes multiplexing! We have a limited resource – the telephone line – that we want to share, so the concept of multiplexing was created: the shared resource is split up, yet everyone using it has no idea others might also be on the line.

[Figure: time-division multiplexing]

There are many types of multiplexing:

  • Synchronous Time-Division Multiplexing (STDM)
  • Frequency-Division Multiplexing (FDM)
  • Statistical Multiplexing

STDM is one of the more basic ideas: time is simply broken up into slots and assigned in round-robin style to everyone who wants to use the line – so maybe I only get to use the line every X milliseconds.
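As a rough sketch of that round-robin idea (the sender names and slot length here are mine, purely illustrative – this isn't any real telephony API):

```java
import java.util.ArrayList;
import java.util.List;

public class StdmDemo {
    // Synchronous TDM: time is cut into fixed slots and handed to each
    // sender in strict round-robin order, whether or not they have
    // anything to send. Returns the owner of each slot, in order.
    public static List<String> schedule(List<String> senders, int cycles) {
        List<String> slots = new ArrayList<>();
        for (int c = 0; c < cycles; c++) {
            slots.addAll(senders); // one slot per sender, every cycle
        }
        return slots;
    }

    public static void main(String[] args) {
        int slotMillis = 10; // illustrative slot length
        List<String> slots = schedule(List.of("Alice", "Bob", "Carol"), 2);
        for (int i = 0; i < slots.size(); i++) {
            System.out.println("t=" + (i * slotMillis) + "ms: line belongs to " + slots.get(i));
        }
    }
}
```

Note the weakness that statistical multiplexing later fixes: a sender gets its slot even when it has nothing to say.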

FDM is slightly more complex, and you will recognize it as one of the solutions implemented so us nerds could continue to connect to the internet. The idea is that your voice and hearing cover a very specific frequency range; outside of that range you can't hear a thing – think dog whistle.



So the brilliant idea was that a huge range of frequencies was being left unused, and if we transmitted our packets in other bands, we could actually transmit both together. So again we would divide the allocated frequency band among the users who wanted it and restrict each of them to sending on their own sub-band.
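The band-splitting arithmetic can be sketched like this (the 0–100 kHz band and four users are made-up numbers; real ADSL band plans are more involved):

```java
public class FdmDemo {
    // Frequency-division multiplexing: carve one wide band [low, high)
    // into equal, non-overlapping sub-bands, one per user.
    public static double[][] divideBand(double low, double high, int users) {
        double width = (high - low) / users;
        double[][] bands = new double[users][2];
        for (int i = 0; i < users; i++) {
            bands[i][0] = low + i * width;       // sub-band start
            bands[i][1] = low + (i + 1) * width; // sub-band end
        }
        return bands;
    }

    public static void main(String[] args) {
        // Split an imaginary 0-100 kHz band among 4 users.
        for (double[] band : divideBand(0, 100, 4)) {
            System.out.println(band[0] + " kHz - " + band[1] + " kHz");
        }
    }
}
```

Because the sub-bands never overlap, everyone can transmit at the same time without hearing each other.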

When DSL broadband came in, you needed to go around to all the phones in the house and add a DSL filter to the line. The filter splits out these frequencies, since the phones would not know what to do with all this extra information coming in at the higher frequencies.

[Figure: a DSL filter]

This works up to the point you want to scale the system, which is why we have statistical multiplexing. This builds on STDM, but sets an upper bound on the time any one sender is allocated – and it is the origin of the network packet! Now messages are broken up into these fixed-size network packets, and at the line each packet is assessed on a packet-by-packet basis to prioritize it.
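The packet-splitting step above can be sketched in a few lines (a 4-character packet is absurdly small and purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class PacketDemo {
    // Break a message into fixed-size packets; the last packet may be
    // shorter. With everyone's traffic in uniform packets, the line can
    // interleave and prioritize them packet by packet instead of
    // reserving idle time slots.
    public static List<String> packetize(String message, int packetSize) {
        List<String> packets = new ArrayList<>();
        for (int i = 0; i < message.length(); i += packetSize) {
            packets.add(message.substring(i, Math.min(i + packetSize, message.length())));
        }
        return packets;
    }

    public static void main(String[] args) {
        System.out.println(packetize("HELLOWORLD", 4)); // [HELL, OWOR, LD]
    }
}
```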

This is all the responsibility of the physical layer in the network protocol stack, though it is only one of its many responsibilities! More to come…

Android and Robotium (To the rescue!)



Ah, I love a good regression suite that passes after check-in!

This is where Robotium is really making a big difference for me, especially as I try to adhere to Acceptance Test Driven Development (ATDD). The basis of ATDD is designing a test around the acceptance criteria/success measures of a story/task/use-case. This is done before development, and once you have it passing, that is it – development is done (ok… plus any cleanup, refactoring…)!

Given a story, we break this down into the individual tasks the user takes and each task will have an ATDD test. Our story is:

As a User,
I want to find a bar,
So that I can see its menu

When we break this story down into its individual tasks we get the following list:

1. User clicks “Find Bar” button from the main menu

2. User enters bar name into input field and selects “Search”

3. User is returned the bar’s menu

So each of these becomes a test case, and they all start off failing:

package com.robotium.test;

import com.robotium.solo.Solo;
import com.balortech.MainActivity;
import android.test.ActivityInstrumentationTestCase2;

public class BreezeTest extends ActivityInstrumentationTestCase2<MainActivity> {
	private Solo solo;

	public BreezeTest() {
		super(MainActivity.class);
	}

	public void setUp() throws Exception {
		//setUp() is run before a test case is started.
		//This is where the solo object is created.
		solo = new Solo(getInstrumentation(), getActivity());
	}

	public void tearDown() throws Exception {
		//tearDown() is run after a test case has finished.
		//finishOpenedActivities() will finish all the activities that have been opened during the test execution.
		solo.finishOpenedActivities();
	}

	public void testMainMenuFindBar() throws Exception {
		assertTrue("Main Menu's Find Bar button sends the user to the right activity", false);
	}

	public void testFindBarSearch() throws Exception {
		assertTrue("Entering a bar's name in the search field sends the user to the results", false);
	}

	public void testBarMenu() throws Exception {
		assertTrue("The bar's result page shows up correctly", false);
	}
}

Then, taking the first test, we start to build it out:

public void testMainMenuFindBar() throws Exception {
	solo.clickOnMenuItem("Find Bar");
	//Assert that the FindBar activity is opened
	solo.assertCurrentActivity("Main Menu's Find Bar button sends the user to the right activity", "FindBar");
}

At this point we leave the ATDD test and begin building up our unit tests and the code to drive them – again, the unit test is written first, then the Java implemented to pass it, and so on. When the final unit test passes, the ATDD test should pass. Then on to the next one!
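As a sketch of that inner loop, here is a tiny helper plus the check that would drive it out. BarSearchQuery, buildQueryUrl, and the /bars/search path are hypothetical names of mine, not from the actual app:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class BarSearchQuery {
    // Hypothetical helper driven out by a unit test: builds the search
    // path the "Find Bar" screen would send to the web service.
    public static String buildQueryUrl(String barName) {
        return "/bars/search?name=" + URLEncoder.encode(barName, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Shown as a plain assertion for brevity; in the project this
        // would be a JUnit test written before the class existed.
        String url = buildQueryUrl("The Long Hall");
        System.out.println(url); // /bars/search?name=The+Long+Hall
    }
}
```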

Happy testing folks ;)

Baby Steps with Android

Ah, that glorious moment when everything finally works!

One of the major changes in direction we are taking with BalorTech is to focus on the journey and build an MVP around each step rather than develop pieces independently and hooking them together later.

As such, our first step was actually setting up an Android application that presents the user with a button to Find Bars, which, when pressed, calls a web service.

A lot of time has been spent trying to figure out the usual testing frameworks for Android – both white box and black box. Android provides a number of testing frameworks with its development toolkit, ranging from monkeyrunner to uiautomator. There are also external options, such as Robotium, which overrides default Android classes at run time. More analysis will be done to see which would be best to implement for ATDD…


Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency


I recently finished reading Eliyahu Goldratt’s The Goal, and it definitely grounded me – stop looking at the details and step back. Shortly after reading it I was listening to a talk on Agile where the presenter asked if anyone had read the book, and followed up to see if anyone had read Slack. Having enjoyed The Goal, I decided to pick up a copy of Slack on the strength of this, and I’m very happy I did!

Slack covers a huge amount of management ground in a very concise 210 pages, the primary message being that the brakes need to be applied. Slack, as Tom DeMarco describes it, is any time you are not working. It is during these periods that people get the chance to think and evaluate, and this allows a higher-quality plan to form.

If I could highlight only one section, it would be the dangers of unintentionally causing your own burnout. We estimate the effort of implementing a piece of functionality and track velocity – the total effort that can be done within a sprint. The more sprints accomplished, the greater the confidence in the velocity, as volatility decreases and a consistent output materializes.

Yet this hinges on one of two things happening:

  1. The work environment remains unchanged – nothing occurs differently that would have an impact on you.
  2. The environment changes and the plan is updated based on its impact to you.

The possibility of burnout arises when velocity starts to settle and confidence in achieving it is high: suddenly it turns from a metric into a goal. There is an expectation that it can be done. If the environment changes you should still simply update the plan, but because of that expectation, I think a greater change in the environment is now required to legitimize changing the velocity. A small task pops up, unrelated to the work, that consumes two hours – does this really require a change? On its own, maybe not. But then it starts popping up more often, and it gradually becomes a downward slope until you find yourself in what DeMarco calls the “Hurry Up” mantra: working away at breakneck speed, which, as Tom points out, got its name for a good reason…

The best option, I guess, is to always be on the lookout for this, and to recognize when you start to veer off and need to regain the Slack required to effectively do your job – that it is time to reset things.


Me, You, and Quality

Harking back to a 2001 IEEE article by Robert Glass, Frequently Forgotten Fundamental Facts about Software Engineering, one point sticks out for me:


And this, I find, is so easy to forget.

Tom DeMarco, author of Slack, attempts to put this into perspective by explaining which product has the greatest quality in his eyes. Tom was on the beach with his fiancée, and they were to be married later that day. There were only the two of them on the beach, so each took a picture of the other. With the use of Photoshop they were able to combine both images into one, as if someone else had taken the picture – and this became their wedding photo.

The quality of this, as he puts it: “A new product has not just assisted work that used to be done some other way; it has transformed the whole way people think about the possibilities.”

His nine reasons for picking Photoshop as number one are:

  1. It is unique.
  2. It redefines the whole notion of photo processing.
  3. It even redefines the way you think about photos.
  4. It allows you to do things that were barely imaginable before.
  5. It is deeply thought out; in particular, its use of channels is almost infinitely extensible and usable in an ever-increasing number of ways.
  6. It is fully implemented; for example, its “undo” feature can undo even the most complex action.
  7. Its human interface sticks in the mind – no need for a manual.
  8. It is revolutionary in the way it affords an interface for third-party add-on providers.
  9. It is solid as a rock.

Of these bullet points only one is related to software defects.

Yet so much of what is written about software quality concentrates on the defects. Kenett and Baker talk about it as “the degree to which a software product possesses the specified set of attributes necessary to fulfill a stated purpose” in Software Process Quality. Hong and Goh similarly describe it as “to a customer, quality means defect-free products and satisfactory service” in their article Six Sigma in Software Quality.

Having metrics on defects and other aspects of the product is very important, but turning these into objectives can backfire, as W. Edwards Deming explains in point 11 of Deming’s 14 Points and Quality Project Leadership: “Setting production targets only encourages people to meet those targets through whatever means necessary, which causes poor quality.”

So how do we go about expanding quality to look at the product as a whole? How do we see what kind of transformation the product delivers? Are we automating the easiest parts of a process, in effect leaving the increasingly complex tasks for the user to figure out, or are we making the complex tasks easy?