
Testing and concurrency


Our team is currently working with a client on a medium-sized, medium-complexity Java application which has quite low test coverage. We are introducing characterisation tests
to snapshot functionality. These will give us the confidence to refactor away technical debt and extend the application without regression. One of the problems we are experiencing is the concurrent nature of the application. I have worked on applications in the past which supported very high concurrency without issue but this application is different. I have not fully thought through why this application differs but there are some obvious points:

  • This application spawns threads in Java code a lot. In previous applications we have always avoided this complexity by utilising somebody else's thread pool code.
  • I am used to stateless service classes which operate on domain objects. The stateless service classes obviously have no concurrency issues and the domain objects can be protected using synchronisation blocks. This application seems to have a lot more stateful objects that interact (this is anecdotal; I have not analysed the code specifically for this attribute).

One of the first refactorings we are looking at is to remove all the Thread.sleep calls from test classes. The CI server reports a significant number of test failures which turn out to be false positives. In a significant number of cases the use of Thread.sleep is to blame. I have seen two slightly different uses of Thread.sleep in the test code.

  1. The test spawns a thread which calls some method of the class under test whilst the main test thread interacts with the class under test in some other way. The main test thread calls Thread.sleep to ensure that the spawned thread has time to complete its processing before the test verifies the post conditions.
  2. The class under test contains some internal thread spawning code. The test thread again needs to execute a Thread.sleep to remove the chances of a race condition before firing the asserts.

Both these approaches suffer from the same problems.

  • The Thread.sleep might be long enough to allow the second thread to complete processing on one machine (e.g. the developer's high-spec workstation) but it is not long enough to allow the thread to complete its processing on a heavily loaded, differently configured, usually more resource-constrained CI server. Under certain load situations the test fails. It works in others. The use of Thread.sleep has made the test non-deterministic.
  • Often the response to the above problem is to make the sleep longer. Yesterday I saw a very simple test which took over thirteen seconds to execute. Most of that test duration was sleeps. Refactoring to remove the sleeps resulted in a test that executed in 0.4 seconds. Still a slowish test but a vast improvement. The last application I worked on had 70% coverage with 2200 tests. If each one had taken thirteen seconds to execute then a test run would have taken almost eight hours. In reality that suite took just over a minute on my workstation to complete. You can legitimately ask a developer to run a test suite which takes one minute before every checkin and repeat that execution on the CI server after checkin. The same is not true of a test suite that takes eight hours. You are probably severely impacting the team's velocity and working practices even if the build before checkin takes eight minutes. There are very few excuses for tests with arbitrary delays built into them.

To resolve both issues we introduce a count down latch (java.util.concurrent.CountDownLatch).

Where the test spawns a thread, the latch is counted down inside the spawned thread, and where the test code had a sleep, a latch.await(timeout) is used instead. We always specify a timeout to prevent a test hanging in some odd situation. The timeout can be very generous, e.g. ten seconds where before a one second sleep was used. The latch will only wait until the work is done in the other thread and the race condition has passed. On your high-spec workstation it might well not wait at all. On the overloaded CI server it will take longer, but only as long as it needs. A truly massive delay is probably not a great idea as there is a point where you want the test to fail to indicate there is a serious resource issue somewhere.
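A minimal sketch of the first pattern (all names invented): the spawned thread counts the latch down when its work is done, and the test awaits with a generous timeout instead of sleeping.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class LatchedTest {
    // Replaces the "start a thread, sleep, then assert" pattern: the worker
    // counts the latch down when its work is complete and the test waits
    // only as long as it actually needs to.
    static boolean runWorkerAndAwait() {
        CountDownLatch done = new CountDownLatch(1);
        new Thread(() -> {
            // ... the work the test wants to observe would happen here ...
            done.countDown(); // signal completion instead of hoping a sleep was long enough
        }).start();
        try {
            // Generous timeout: on a fast workstation this returns almost
            // immediately; on a loaded CI server it waits only as long as needed,
            // and a hung test still fails rather than blocking the build forever.
            return done.await(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```

The test then simply asserts that runWorkerAndAwait() returned true before checking its post conditions.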

Where the class under test spawns a thread (an anti-pattern I suspect) then we amend the code so it creates a latch which it then returns to callers. The only user of this latch is the test code. Intrusive as it is, it is often the only way to safely test the code without more significant refactoring. 
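A sketch of what that amendment might look like (the class and its work are entirely hypothetical): the class counts the latch down once its background work is complete and returns it, so the test can await it rather than sleep.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative shape only: a class that spawns its own worker thread and
// hands back a latch so that a test can wait for the background work.
// Production callers are free to ignore the returned latch.
class AsyncProcessor {
    final AtomicInteger processed = new AtomicInteger();

    CountDownLatch processAsync() {
        CountDownLatch done = new CountDownLatch(1);
        new Thread(() -> {
            processed.incrementAndGet(); // the real background work goes here
            done.countDown();            // in practice only the tests use this latch
        }).start();
        return done;
    }
}
```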

There are some larger issues here. Is the code fundamentally wrong in its use of threading? Should it be recoded to use a more consistent and simple concurrency model and rely more on third party thread pool support?

At risk of straying from my comfort zone of simple, pragmatic, software delivery, deep down, I have never been very happy about the implications of complicated multi-threaded code and automated testing. You can write a class augmented with a simple and straightforward test class which verifies the class's operation and illustrates its use. You can apply coverage tools such as Emma and Cobertura which can give a measure of the amount of code under test and even the amount of complexity that is not being tested. I am not convinced it is always possible to write simple tests that 'prove' that a class works as expected when multiple threads are involved (note I say always and simple).

I do not know of any tools that can give you an assurance that your code will always work no matter what threads are involved. Perhaps a paradigm shift such as that introduced by languages like Scala and Erlang will remove this issue?

There is some good advice available regarding testing concurrent code and I am sure lots of very clever people have spent lots of time thinking this through, but it's certainly not straight in my head yet.

Three Peaks

This weekend I did the Three Peaks challenge. At six o'clock on Thursday I dashed round the corner and jumped into a Ford Galaxy with three other dads from Maple School. It wasn't a clean exit as the kids came home with Joanne just as I ran out of the door. Jamie was really upset apparently as I didn't get the chance to say goodbye. Helen just missed me and she was also upset.

We flogged it up to Warrington where we stayed in a Travelodge with adjacent truck stop. We had our dinner in the self service food hall followed by a lager shandy in the truckers bar. It was pretty grim. We were the only drinkers not wearing a corporate trucking uniform and in the small minority without massive beer bellies. One of our guys was wearing sandals and another ordered a G&T. I have probably watched too many films but I did worry that the truckers might abduct us at this point to use as their playthings.

The next day our driver got us up to Glasgow where we picked up our fourth team member and de facto leader. We then made our way to Ben Nevis, stopping briefly for a very tasty burger at Loch Lomond and again for some heavy traffic. We got to Ben Nevis just before half five. We spent a few minutes preparing and then set off dead on 17:30.

It had been a cracking day weatherwise and I had worn my sunglasses at Loch Lomond. Ben Nevis doesn't work like that so as soon as we started it began to rain. We made good progress on the ascent but by the halfway mark visibility was very poor and the rain was stinging. We made it to the top in two hours by which point my fingers were so numb I couldn't operate my camera. I had failed to pack a hat or gloves (it had been an uncomfortably hot week). We rushed back down the mountain with only a minor pause when I slipped and banged my knee. The pain was intense and I really thought my race was over but after a few minutes it subsided and I was able to continue. We made it back down very quickly in an hour and a half, catching the other two teams who had a half hour head start on us!

At the bottom of the mountain I had to take all my clothes off (by the side of the road) as I was soaked and then we all piled in to rush down to Scafell Pike. It was damp and unpleasant and my knee was pretty painful. I applied ibuprofen gel, freeze spray and deep heat but it kept on waking me on the long drive to Cumbria.

We got to Scafell about four am and found no parking spaces. Our driver stuck the car at the side of the road and we had a brief (and for my part, unpleasant) breakfast. The other Maple teams started at least ten minutes ahead of us. We then rocketed up Scafell which was very busy and, compared to Nevis, very easy. It was still cold and unpleasant on top but I never even put my waterproof jacket on. We messed around for a few minutes on the peak taking photos and then were off.

We got to the bottom about ten past eight in the morning and headed off to Snowdon in Wales. Our driver did not get a lot of sleep (if any) during these breaks. He must be a machine as his driving was calm and accurate with great navigation throughout.

We got to Snowdon about 12:30 after a delay caused by an accident which forced us onto back roads. It was quickly up the Pyg Track to the summit. The last part of the ascent on the Zig Zag was pretty exhausting and then it was straight down again via the Miners Track. I found going downhill, clambering over stones, very hard going on my knees and my guts by this point. We got to the flattish section of the Miners Track and from there it was easy. We romped in at 23:13:44, beating the other two Maple teams, who both came in within the twenty four hours.

It was an excellent and slightly disorientating experience that I am not sure I would rush to repeat but I am glad I did it.

I slept like a log Saturday night and then went up the Miners Track again with Helen and the kids (and a large group of others from Maple). This time the weather was foul. We were soaked to the skin and the winds were gusting at 80mph in the valley. We made it to the second lake but were forced back. Edith and Tom were both screaming and the rain was so hard it was impossible even to see. I was still damp six hours later when we finally got home!

Importance of real time performance monitors

Decent log entries are essential. On our current project we aim to write enough data to allow off-line analysis of performance and usage plus errors. I am also an advocate of more immediate and accessible runtime information. Log analysis is great but sometimes you need empirical data right away. On our current project we use the Java MBean facility. These MBeans can be easily accessed in a graphically rich way using tools like JConsole or VisualVM.

We have a couple of different types of analyzer which we expose through MBeans. One simply records how many times an event has occurred in a short time period. Another calculates a real-time average, again across a short time period. For example, we have analyzers which record the length of time it takes to make a call to a particular downstream application. Each duration is recorded and an average over the last ten seconds is reported via the MBean. This calculation has been implemented to be very efficient from a CPU perspective since 99.999% of the time the average is discarded before anybody bothers to look at it. Originally we were only using two or three of these average analyzers in the system. As developers found them useful they were placed around every single external interaction and we suddenly found ourselves with several thousand per application. These used about 25% of the heap and consumed significant CPU resource. The analyzer was then optimized and now consumes negligible resources.
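A rough sketch of the idea (not our actual implementation; all names are invented): durations accumulate into one bucket per second and the ten second average is only computed when a caller asks for it, so recording stays cheap. A real version would need more care around concurrent bucket resets.

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Illustrative rolling-average analyzer. In the real system getAverage()
// would be exposed as an MBean attribute so JConsole/VisualVM can graph it.
class RollingAverage {
    private static final int BUCKETS = 10; // ten one-second buckets
    private final AtomicLongArray sums = new AtomicLongArray(BUCKETS);
    private final AtomicLongArray counts = new AtomicLongArray(BUCKETS);
    private final AtomicLongArray stamps = new AtomicLongArray(BUCKETS);

    public void record(long durationMillis, long nowSeconds) {
        int i = (int) (nowSeconds % BUCKETS);
        if (stamps.get(i) != nowSeconds) { // bucket is stale: reuse it for this second
            stamps.set(i, nowSeconds);     // (racy on rollover; fine for a sketch)
            sums.set(i, 0);
            counts.set(i, 0);
        }
        sums.addAndGet(i, durationMillis);
        counts.addAndGet(i, 1);
    }

    // Only does work when somebody actually looks at the value.
    public double getAverage(long nowSeconds) {
        long sum = 0, count = 0;
        for (int i = 0; i < BUCKETS; i++) {
            if (nowSeconds - stamps.get(i) < BUCKETS) { // still inside the window
                sum += sums.get(i);
                count += counts.get(i);
            }
        }
        return count == 0 ? 0.0 : (double) sum / count;
    }
}
```

Recording is two atomic additions in the common case; the summation loop only runs when a monitoring tool reads the attribute.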

I have been personally a little disappointed that our operations team have not made as much use of this facility as I expected. They are happy with their existing log analysis tools. As a team, we have questioned whether our investment in MBeans is worthwhile. We concluded that it was: even though the Ops team don't use it in Production, the development group rely on the data exposed through JMX for troubleshooting, especially in system test, monitoring load tests and as a quick way to gauge the health of Production.

Last week I was reminded again how useful this immediately accessible data was. After a system restart Production was doing 'something funny'. We had ambiguous portents of doom and various excited people considering some fairly drastic remedial action, including switching off a production system which was serving several thousand users. The fear was that something in the affected system might be placing unbearable demands on downstream applications. This seemed unlikely as we have many layers of throttles and queues to prevent just such an occurrence but there was something odd going on. The first port of call for the developers was the log files. With several thousand transactions being performed a second there were a lot of log lines whizzing past. Panic began to creep in as it was impossible to discern what, if anything, was going on in the explosion of data. I was able to walk over to my workstation and bring up VisualVM. In about thirty seconds I could see that right at that very moment we were sending a great many messages but well within the tolerances we had load tested against. I was able to use VisualVM's graphing function to track various data and within a minute or so could see that there was an unexpected correlation between two sets of events. (The number of messages sent to mobile phones and the number of identification requests made to a network component were drawing the same shaped graph, with a slight lag between the first and second sets of data and an order of magnitude difference in volume). Again, these events were both within tolerances. Yes, something unexpected was occurring. No, it was not going to kill the system right now. We went to lunch instead of pulling the plug.

The data we collected pointed us in the right direction and we were able to find, again using VisualVM, that a database connection pool had been incorrectly set to a tenth of its intended size. The Ops guys made some tuning changes to the configuration based on what we had discovered. The application stayed up through the peak period.

In summary, log files are essential but there is still a need for real time, pre-processed data available via an easy-to-access channel. MBeans hit the spot in the Java world. Developers should not be scared of calculating real time statistics, like average durations, on the fly. They do need to make sure that the system does not spend a disproportionate amount of resources monitoring itself rather than delivering its function.

Concrete problems when developers opt out of TDD

We have two major classifications of automated test in common use:
  • Acceptance tests which execute against the application in its fully deployed state.
  • Unit tests which typically target a single class and are executed without instantiating a Spring container.
The acceptance tests are written in a language which should make them accessible outside of the development team. They are used to measure completeness, automatically test environments and provide regression tests. Their usefulness is widely accepted across the team and they tend to be very long-lived, i.e. tests that were written a year ago against a particular API are relevant today and will continue to be relevant as long as that API is supported in production. The unit tests are written by developers and will almost certainly never be read by anybody other than the developers or possibly the technical leads. I program using TDD as I find it a natural way to construct software. I personally find that the tests are most useful as I am writing the code, like scaffolding. Once the code is stabilized the tests still have a use but are no longer as critical. A refactoring of the application in some future sprint may see those tests heavily amended or retired. They are not as long-lived as the acceptance tests.

I have been reflecting on the usefulness of and investment in test code for as long as I have been doing TDD. I had come to the conclusion that whilst acceptance tests are non-negotiable on projects where I have delivery responsibility, perhaps unit tests for TDD are not mandatory in certain situations. I have worked with several developers who are very, very good and simply do not see the value in TDD as it is contrary to their own, very effective, development practices. I know in my team right now a couple of the very best developers do not use TDD the way everybody else does. Education and peer pressure have had no effect. They are delivering high quality code as quickly as anybody else. It's hard to force them to do differently - especially when some of them pay lip service to TDD and do have a high test coverage count. I know that they write those tests after they write their code.

In the last few weeks I came across a couple of concrete examples where TDD could have helped those developers deliver better code. In the future I will try to use these examples to persuade others to modify their practice.

1. Too many calls to downstream service.

The application in question has a mechanism for determining the identity of a client through some network services. Those network services are quite expensive to call. The application endeavors to call them as infrequently as is safe and cache identity when it is resolved. We recently found a defect where one particular end point in the application was mistakenly making a call to the identity services. It was not that the developer had made a call in error, it was that the class inheritance structure effectively defaulted to making the call, so it did so without the developer realizing. The identity returned was never used. I suspect that this code was not built using TDD. If it had been, then the developer would have mocked out the identity service (it was a dependency of the class under construction) but would not have set an expectation that the identity service would be called, so the spurious call would have failed the test. The use of mocks not only to specify what your code should be calling but what it should not be calling is extremely useful. It encourages that top down (from the entry point into the system) approach where you build what you need when you need it.

It's likely that the defect would never have been introduced had the developer been using TDD. As it is we have an application which is making a large number (and it is a large number) of irrelevant calls to a contentious resource. We now have to schedule a patch to production.
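The idea can be sketched with a hand-rolled test double (all names invented; a mocking library expresses the same thing with its 'verify no interactions' style expectations): the double counts invocations and the test asserts the count is zero.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical dependency: the expensive network identity lookup.
interface IdentityService {
    String resolve(String clientId);
}

// A test double that simply counts invocations; the test then asserts
// the count is zero, turning "should not call the identity service"
// into an executable expectation.
class CountingIdentityService implements IdentityService {
    final AtomicInteger calls = new AtomicInteger();
    public String resolve(String clientId) {
        calls.incrementAndGet();
        return "anonymous";
    }
}

// Hypothetical endpoint under test: correct behaviour here is to answer
// without touching the identity service at all.
class StatusEndpoint {
    private final IdentityService identity;
    StatusEndpoint(IdentityService identity) { this.identity = identity; }
    String status() {
        return "OK"; // no identity lookup needed for this endpoint
    }
}
```

Had the inheritance structure silently made the call, the zero-interactions assertion would have caught it at development time rather than in production.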

Coincidentally, there was an acceptance test for this service, which was passing. This highlights a deficiency in our acceptance tests we have to live with. They test the 'what' but not the 'how'. The tests were running against a fully deployed application which had downstream services running in stub mode. The test proved that functionally the correct result was returned but it had no way of detecting that an additional spurious call to another service had been made during the process.

2. Incorrect error handling

In a recent refactoring exercise we came across a piece of code which expected a class it was calling to throw an exception whenever it had an error processing a request. The error recovery in the code in question was quite elaborate and important. Unfortunately, the class being called never threw an exception in the scenarios in question. Instead it returned a status object which indicated if corrective action needed to be taken. (It was designed to be used in conjunction with asynchronous message queues, where throwing an exception would have introduced unnecessary complexity). The developer could easily have used mock objects after the fact, set an expectation that the exception would be thrown, and the problem would have remained. But if TDD was being used and the developer was working top down, then the expected behavior of the mocks would have guided the implementation of the downstream classes. Nothing is foolproof but I think this manner of working should have caught this quite serious error.
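To make the mismatch concrete, here is a minimal sketch (all names invented) of the status object convention the callee actually used; a test written first against this interface drives the caller to check the status rather than catch an exception that will never arrive.

```java
// Hypothetical shapes for the scenario described above: the callee reports
// failure through a status object rather than by throwing.
enum Status { OK, RETRY_REQUIRED }

interface MessageProcessor {
    Status process(String message);
}

class RetryingCaller {
    private final MessageProcessor processor;
    private int retries = 0;
    RetryingCaller(MessageProcessor processor) { this.processor = processor; }

    void handle(String message) {
        // Correct recovery: inspect the returned status. A version written
        // around try/catch would compile, pass naive tests, and silently
        // never execute its recovery path.
        if (processor.process(message) == Status.RETRY_REQUIRED) {
            retries++;
        }
    }
    int retries() { return retries; }
}
```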

More subjective problems

I have also noted two other potential consequences of having some developers opt out of TDD. Some developers on the team produce code that is more complex than others'. It is fine from a cyclomatic complexity perspective but when you try to understand what it is doing you find yourself with a higher WTF count than you would expect. I think (again this is subjective; I have not gathered any empirical evidence) that a lot of the complexity comes from a lack of cohesion in the code. Logic is spread around in a way which made sense to the original developer because they had internalized all the classes concerned. That logic is not obvious to a new pair of eyes. TDD encourages cohesion in classes because it focuses the mind on what the class is responsible for before the developer has to worry about how it delivers those responsibilities.

This is a very subjective point and I would happily agree that several of the team members who do use TDD occasionally produce nasty code. My gut feeling however, is that it happens less often.

One final problem with some of the high flyers not using TDD is that bad practices tend to propagate through the team just as quickly as good ones. I have caught a couple of new joiners following a bad example or simply not using TDD because the developer they look to as a mentor is not evangelizing about the technique, because they themselves do not buy into the practice. This is a shame as those new joiners often have a greater need of the rigor that TDD imposes than the more experienced developers.

Anti-pattern: The release vehicle.

At my current client site you cannot get a piece of compiled code into production unless you can find an appropriate 'release vehicle', i.e. a planned high ceremony release of the component which has been officially prioritised, scheduled and funded. (Note: The same does not apply to non-compiled code such as JSPs or XML templates containing complex XPath expressions).

Somebody very clever, who probably had a beard (Grady Booch?), once said that "Regular releases into production are the lifeblood of the software development process." I agree. My current client also seems to be in agreement but cannot extract themselves from the constraints of their existing processes.

The client in question has a successful agile adoption. Walking round the development teams you see task boards, burn downs and SCRUM meetings. Go to a management meeting and you'll hear them talk about two week iterations and the importance of continuous integration. At a strategic level, the organisation (which is very large) is still waterfall orientated. This has implications for the way in which work is financed. Funds for the development, testing and deployment of a certain application are released on waterfall-inspired milestones. This, in conjunction with a legacy of long development cycles, has led to this 'release vehicle' anti-pattern.

The organisation has an unwillingness to make a deployment of a component into production unless there is a named and funded change request which covers its release. Activities within development, possibly funded internally as 'business as usual', do not have such CRs. Therefore, a development activity such as refactoring for technical debt reduction or improving performance might get engineering buy-in but will not get released into production until some CR happens to touch the same application.

It is common to see refactorings made which then sit in source control for literally months as they wait for an excuse to go live. Medium to low priority defects or useful CRs which lack very high prioritisation from marketing never get executed because the programme manager does not have a release identified for the change.

The application suite can appear inert to external parties as it takes a considerable period for changes to make it through the full release cycle. This erodes confidence. If I were a product owner and saw that a team was taking six months to execute my minor change, I would not be inclined to believe that the same team can turn around my big important changes quickly. I would look for other mechanisms to get my changes into production and earning money quickly. Once I found a route that worked I would keep using it.

Why do people like the release vehicle?
  • It is the way the whole software lifecycle, as exposed to the rest of the organisation, works. The QA team don't test a component unless they have funding from marketing. Marketing won't be paying for something that has no role in a prioritised proposition. The Operations team won't support the deployment activities for our component if they don't have the cash from the same marketing team.
  • It looks like it is easier to manage for PMs. Releases (because they are infrequent) are a big deal, involve lots of noise, planning, disruption to everyday working pattern.
  • It reduces the infrastructure costs. It costs resource to make a release unless every aspect including testing and operational deployment is fully automated (and even then there is potential cost, dealing with failures etc.). It costs resource to automate a manual build process. Engineers appreciate that fully automated build processes are a priority because in the end they reduce costs and increase agility. It is that age old problem of trying to convince not just the build team, but the build team's manager and the build team's manager's manager that it is worth diverting resource in the short term to fix a problem in order to make a saving in the long term.
This is a symptom of our strategic failure to get agile adopted beyond the development group. Until we do so, we will continue to hit these sorts of issues.

What we should do instead:

We should schedule frequent (bi-weekly, ideally more frequent) updates in production from the trunk of source control for every component. We should not need an excuse for a release. The release process should be as cheap as possible, i.e. automated build, regression test, deployment and smoke test. The code in the trunk is supposed to always be production ready and the automated tests should keep it that way.

If we achieve this we should:
  • Reduce complexity in branch management (no merging changes made months ago).
  • Avoid a massive delay between development and deployment which is not cost effective and makes support very hard.
  • Increase our perceived agility and responsiveness.
  • Enable refactoring to improve non-functionals (stability, latency, dependency versions, capacity).
  • Prevent a release from being a 'special occasion' which requires significant service management ceremony.
If you release all the time everybody knows how to release. If you release twice a year every release involves re-education of the teams involved on deployment, load testing, merging etc. This increases the cost and the risk of failure.

Note: Having frequent, regular, low ceremony releases is greatly eased by having a fully automated build and deploy process but you can have one without the other. As stated above, having such a build process makes regular deployments to production cost effective but is an enabler rather than the justification for this change to working practice.

Ten breathtaking miles in the snow

I went for a 10 miler today in the snow. It was one of those fantastic runs that you remember for years afterwards. It was cold (sub-zero) but crisp and dry, not damp at all. The snow was pretty thick still, about 10cm off the pavement. I ran one of my usual routes down to Sandridge then over the hill to Ayers End then back over the hill again to Nomansland Common and then home.

The snow was very powdery, like icing sugar, and it had been windy so the drifts were deep. The sunken lane behind Pound farm had filled up to knee height but the farmer (or somebody) had cut a narrow path through it. There were loads of hardy St Albans folk out enjoying the weather and breathtaking views in the bright sunlight. On the top overlooking Sandridge there was even a family having a picnic on a rug.

The path at the top of the hill had drifted up and there was no easy way through. I jumped straight in. It only came up to just below my knee but the shock was incredible. It was like jumping into an ice bath. All the heat seemed to get sucked out of my calf muscles. It only lasted for about twenty metres but was a real trial to get through. I warmed up pretty quickly as soon as I got out and brushed the snow off (bare legs).

I don't like to pause on runs, especially when it's cold, but the views and the silence were so spellbinding that several times I halted to drink it in for a few seconds.

Got back to St Albans feeling pretty good. Managed to do it at nine and a half minute mile rate which is nothing clever but the going was tough.

Tale of two SCRUM stand ups

I walked past two teams doing their daily SCRUM standup today. Both teams claim to be agile. I didn't join in (even as a chicken) but just observed for a minute or so.

The first team was sitting down in a breakout area. Their body language spoke volumes. There was not one single participant maintaining eye contact with anybody else. Two people were playing on their phones. One developer had his head in his hands. Most had bored expressions. The team leader who is also the SCRUM master was the only person who spoke for the entire time I watched.

The second team was stood in a space near their desks. They were gathered round a task board which appeared to be up to date and the focus of several of the individual's updates. One person spoke at a time. Almost everybody appeared to be paying attention to whomever was speaking. Most updates were short and concise. A couple rambled on.

Other than both teams calling their meeting a SCRUM I could see no similarities.

As our agile adoption has spread beyond the original teams I suppose it is inevitable that, as the experience gets spread a little thinner, people will simply label their existing activities with agile-sounding names. Often we have no clear remit in those teams to supply a mentor, and to try to offer advice would result in rebuttal as team leaders guard their territory. Does this matter? Is there a risk that these teams who are not practicing agile correctly will diminish and discredit agile in the eyes of our programme managers? This is sounding a bit like an excuse for an Agile Inquisition going round checking that no team is using Agile's name in vain. That cannot be a good thing either.

Another great day off

I have had quite a few days off recently and the common theme seems to be mostly how rubbish they are. Today is a good example. So far my day off has consisted of leaving home at 6:30am and cycling, in the rain, to work, with a slow puncture. I had to go into work as something important overran so I needed to go in to finish it off. Six hours later I left work and cycled home. As soon as I got in Helen went for a nap. There wasn't much in the house for lunch so I made a mess of my diet by having a jam sandwich followed by chocolate biscuits. Very healthy. Then I read some email. It's 14:48 now. Tom needs to be woken up and Jamie will be finished at school in twenty minutes. Sigh. Before you know it, it will be time for bed.

Chef Edith

Edith loves helping in the kitchen and has helped with the Sunday roast for the last few weeks. She keeps on telling me what to do and likes me to ask her permission and say 'Yes, Chef Edith' lots.

This Sunday when we got the meal on the table Helen congratulated 'Chef Edith' on an excellent dinner only to have Edith correct her "I'm not Chef Edith now, mummy, I'm Eater Edith". She then proceeded to consume her own body weight in roast potatoes, chicken and sausages.

Jamie has a habit of prefixing every statement with 'Actually'. Edith has picked up on this and got a bit confused, so now every time we have Yorkshire puddings with dinner she thinks they are called 'actual-puddings'. If you try to correct her you get a quizzical look and then she carries on calling them 'actual-puddings'. It is sending Jamie mad "because ice cream and jelly is an actual pudding, not these". This might well explain why she keeps doing it.

Obi-Wan's time was up

Jamie got the original Star Wars trilogy for his birthday yesterday. He has watched all of the Clone Wars cartoons so Obi-Wan and R2-D2 are familiar friends. Today we watched Star Wars. The stream of questions was unceasing. "Why are the goodies (Storm Troopers) on the bad team now?", "Why is Obi-Wan so old?". We got to the fight scene between Darth Vader and Obi-Wan and Jamie couldn't quite believe that Obi-Wan got killed. He was a little upset and sat there quietly for a few minutes, until I thought he was just watching the film, when suddenly he announced "Obi-Wan was quite old.". I asked why this was significant; was it that an old Obi-Wan being cut down by Darth Vader's light sabre was okay? "Yes, he was old so he would have died soon anyway.". He perked up after that.

Finally completed a long run as planned

I finally made it round a 20 mile training route. Five weeks from now (unless there are yet more problems) I will be recovering after running 26.2 miles in the Leicester Marathon. Every other time I have run a marathon I would have run three or four 20 milers by this point. I should be starting my taper in two bloody weeks! It's been a non-stop litany of illness, work commitments, family stresses and obligations, and just about everything else. We went to a party last night and I had to abstain from drinking so that I could make it out this morning.
The good news is that I made it round in one piece, felt pretty good during, and was fine afterwards (no puking or crippling aches). I took two gels, one at 1:15 and another at 2:15, and ran with a bladder pack so I had plenty to drink. I am contemplating running the race with the pack. The only reason I can think of not to do so is that I look stupid. That doesn't sound like a good reason for not doing something beneficial, especially as I will look pretty stupid anyway...

Value and cost in hardware and software

Neal Ford's presentation on emerging architecture contained a reference to the controversial Intel practice in the early 1990s regarding the 486-SX maths co-processor. Intel were producing two variants of the 486 chip at the time:
  • The 486-DX, which featured a maths co-processor on board, allowing faster execution of the floating point maths required in financial applications, graphics packages etc.
  • The 486-SX, which was cheaper but did not feature the maths co-processor and was therefore less 'powerful'. It had enough to differentiate itself from the previous generation of 386 chips but was regarded as distinctly inferior to its more expensive sibling.

This all sounded fair enough until it became public that the 486-DX and 486-SX shared the same manufacturing process, including the construction of the maths co-processor. Where the process differed was that at some point the co-processor in the SX chip was disabled via some mechanical process. Suddenly the perception of the SX went from a lower spec product to a broken product. In Neal Ford's presentation he described the whole process as Intel selling customers their trash, as if the SX was a defective unit being offloaded to unsuspecting users as a working chip.

At a very low technical level Neal's statement is true, but not at a commercial or practical level. Intel's engineers were given a change in requirements by their marketing department: produce a chip that will enter the budget market to complement, but not fully compete against, the 486-DX. They looked at their system and determined that the most efficient way to achieve this end was to 're-configure' existing 486-DX chips. This was likely much, much cheaper than setting up a whole new production line and testing the brand new chip it produced. To do otherwise would be against the pragmatic engineering ideals that Agile is supposed to champion. Flip the argument around and ask: should we redesign the entire process from scratch to achieve the same end result, but at much higher cost, just so that we can claim the chip contains no parts it doesn't need? Maybe we find this so objectionable because a chip is a tangible entity and we are used to associating (rightly or wrongly) the cost of raw materials and manufacture with the value of such items. Maybe we don't factor in the cost of design and marketing, which I suspect are massive for a complex consumer electronic product like a cutting edge CPU.

This pattern raised its head again, but with less fuss, a couple of years ago when HP shipped high end servers with multiple CPUs, some of which were disabled upon delivery. If the customer's processing requirements increased over time (which they always do) then they could pay HP, who could remotely enable the additional processors without the customer incurring the cost of an engineer on site, downtime for installation etc. Again, this early step towards today's processing-on-demand cloud computing concept raised some eyebrows. Why should customers pay for something that was going to cost the supplier nothing? Again, this is a preoccupation with the physical entity of the manufactured component. If the additional CPUs had been sitting idle in an HP server farm rather than at the customer's site, and purchasing them involved work being sent across the network, my suspicion is that nobody would have had any objections.

We use a UML design tool at my current client site called Visual Paradigm. It has a number of editions, each with a different cost. It has a very flexible license purchase system which we have taken advantage of. We have a large number of standard level licenses because the features that this edition gives will support most of our users most of the time. Occasionally we need some of the features from a more expensive edition. It's not that only one or two individuals require these features; we all need them, just very occasionally. The Visual Paradigm license model supports this beautifully. We have a couple of higher edition licenses. On the rare occasion that users need the extra features they just start the program in the higher edition mode. As long as no other users connected to our license server are using the higher edition at that time, there is no issue. The similarity with the examples above is that there is only one installation. We don't need to install a different program binary every time we switch edition. We love this as it makes life easy. I am sure Visual Paradigm like it as well, as it simplifies their build and download process.

To me the two scenarios, software and hardware, appear pretty much identical. Everybody appreciates that the cost of creating a copy of a piece of software is so close to zero that it is not worth worrying about. Therefore we don't mind when a supplier gives us a product with bits disabled until we make a payment and get a magic key. It's harder to think of hardware in the same way: that the build cost doesn't matter, and that there might be no difference in manufacturing costs between two products with very different customer prices. The cost of delivering the product, like delivering software, includes massive costs which have nothing to do with creation of the physical artifact.

Maybe this wasn't the point in the above presentation but I guess the thing that startled me was that my natural inclination was to immediately associate the value with the tangible item. In my head this is all getting mixed up with free (as in speech) software and the idea that it is unproductive and unethical to patent / own / charge for ideas.

Agile 2009

I have spent the week at the Agile2009 conference in Chicago. This annual conference, now in its eighth year, is the premier international gathering for agilists. It caters for a whole spectrum of experience, from newcomers to the discipline to the gurus who are leading the way.

I attended some very thought provoking sessions as well as presenting my own experience report on techniques for technical architecture in an agile context. My colleague from Valtech US, Howard Deiner, battled hardware and network issues to present a well received demonstration of Continuous Integration. Both sessions got reasonable attendances in the face of stiff competition from presentations being held in parallel. Both received very positive feedback from attendees (mine scored 80% for command of topic and 78% overall). I also got lots of positive feedback for my session in conversations with conference attendees throughout the week. This was very much appreciated.

My presentation is backed up by an IEEE report which was published in the conference proceedings. The report's premise is that incumbent waterfall software development processes force technical architects into a position of isolation and ineffectiveness (the ivory tower). The challenge I (and many, many other TAs) have faced is how to deliver the guarantees of technical correctness and consistency that clients (especially those moving from waterfall to agile) demand, when some of the most widely used conventional techniques for architecture have been discredited. I am thinking primarily of the emphasis placed on up front detailed design and architectural review.

The report details architectural problems during scale up of a previously successful agile project. The report then describes and evaluates a number of techniques employed on the project to deliver the technical architecture without ascent of the ivory tower. The conclusions include the argument that documentation is not an effective tool for technical governance and that the architect must target activities which bring them closer to the actual implementation. This mirrors Neal Ford's point in his Emerging Architecture presentation that we need to accept that the real design is the code, not the summaries and abstractions of the code presented via the numerous tools (UML, narrative documents, whiteboard sessions) at our disposal. Other conclusions include the identification of automated tests as an architect's, not just a tester's, most effective tool for delivering a correct solution. The paper also identifies that soft skills around communication and people management, often anathema to the conventional architect, are critical to success. Finally the report concludes that utilising the most cost effective techniques (rather than just the most technically powerful) was key. (That does not mean you cannot justify the use of expensive techniques, just that they may only be justifiable on the most important components in the system.)

Agile 2009 was a great balance of real world experiences (such as my session) and more philosophical, academic sessions. There was also the chance to listen to some insightful keynotes and take part in some exciting expert sessions which challenged the way we work. It is always easier to learn in a community of professionals with real experience and this was definitely the case at this conference. I learned as much over dinner and in break out sessions as I did in the formal seminars.

I am going to blog about what I learned in some of the sessions in the next couple of days, possibly earlier as I am stuck at Chicago O'Hare for eleven hours after a 'mechanical issue' with our plane!

Laptop shame

Every time I take my laptop out in public I attract derision and contempt! These Mac lovers can't understand that a laptop can still be useful even if it weighs more than the desk it's sitting on (never, never attempt to rest it on a human lap - injury will follow) and is as visually appealing as a monkey's arse. I did get some kudos for running Ubuntu and doing my presentations from OpenOffice (I didn't admit I installed Ubuntu because the previous Windows install had slowed to a crawl and I didn't have the disks to rebuild it).

Dada is the new mama

Helen went away for the weekend leaving me with Tom and Edith. Tom has been saying Dada for a little while now and I was a bit nonplussed when, on a visit to my mum's cousin, he started calling me mama. I just thought he had given up using dada for a while until Helen got back. He was very pleased to see her (ear to ear grin) but he insisted on hugs from me, still called me mama and didn't seem to call Helen anything. I couldn't work out if he associates 'mama' with whoever is supplying food and hugs today or whether he was punishing Helen for abandoning him.

Tom was poorly all weekend with a high temperature. He hit 38.8°C on Saturday night. I was taking it all in my stride until I used the fancy thermometer Helen bought and it started screaming and flashing red: danger, high temperature. It's funny how having some figures can totally change your perception of a situation, correctly or otherwise. I rang the on-call doctors to check how I was supposed to double up Calpol and Nurofen and spent all of Saturday night getting up and administering more medicine. He was still hot on Monday morning but Helen reported him much better by that evening.

Use of best of breed open source

Over the last two years of my current project I have noticed a recurring pattern. On several occasions we have identified an implementation pattern which commonly appears on many enterprise projects. That pattern is common enough that there is a well known (i.e. at least one person in the team has heard of it) open source solution which appears to be recognised by the community as best of breed. In order to reduce risk and increase our velocity we use that open source component, possibly making changes to the design to more effectively incorporate the ready-to-run package. The theory (and one I fully buy into) being that by using the open source library we free up time to concentrate on the parts of our solution which are truly unique and require bespoke software.

The recurring pattern I see is that on at least four occasions the best of breed package has proven to be severely sub-optimal. What is worse is that most of the time these deficiencies appear only when we move into high volume load testing in a cluster. It seems only then that we discover some limitation. Typically this is caused by a particular specialism required for our application, which exercises some part of the library that is less commonly used than others and therefore less stable. Sometimes the limitation is so bad that the library has to be refactored out before launch; on other occasions the issue becomes a known restriction which is corrected at the next release. All of the significant refactorings have involved replacing the large, generic, well known library with a much smaller, simpler, bespoke piece of code.

I am undecided whether this is a positive pattern or not. On one hand using the standard component for a short period helped us focus on other pieces of code. On the other, the identification of issues consumed significant resource during a critical (final load test) period. The answer probably is that it is okay to use the standard component as long as we put it under production stresses as quickly as possible. We then need to very carefully account for the effort being consumed and have an idea of the relative cost of an alternative solution. When the cost of the standard component begins to approach the cost of the bespoke one we must move swiftly to replace it. The cost should also factor in maintenance. We need to avoid the behaviour where we sit around looking at each other repeating "This is a highly regarded piece of software, it can't be wrong, it must be us." for prolonged periods (it's okay to say this for a couple of hours; it could be true). I used to work for a well known RDBMS provider. I always felt that the core database engine was awesomely high quality and that anybody who claimed to have found a defect was probably guilty of some sloppy engineering. I knew however, from painful experience, that you did not have to stray far from the core into the myriad of supported options and ancillary products to enter a world of pure shite. The best of breed open source components are no different.

Some of the problem components:

ActiveMQ (2007) - We thought we needed an in-memory JMS solution and ActiveMQ looked like an easy win. It turned out that at that release the in-memory queue had a leak which required a server restart every ten to fifteen days. It also added to the complexity of the solution. It was replaced by very few lines of code utilising the Java 5 concurrency package. I would still go back to it for another look, but only if I was really sure I needed JMS.
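The post doesn't show the replacement code, but a minimal sketch of the java.util.concurrent approach might look something like this (class, method and message names are all my own invention; the real work would obviously be more than appending to a list). Note that the producer/consumer handshake uses put, take and join rather than Thread.sleep:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;

public class InMemoryQueueSketch {

    private static final String POISON = "POISON"; // shutdown marker

    // Push every message through a bounded queue to a single consumer thread
    // and return what the consumer processed, in order.
    static List<String> processAll(List<String> messages) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1000);
        List<String> processed = new CopyOnWriteArrayList<>();

        Thread consumer = new Thread(() -> {
            try {
                String msg;
                while (!POISON.equals(msg = queue.take())) {
                    processed.add("processed " + msg); // stand-in for real work
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        for (String m : messages) {
            queue.put(m); // blocks if the queue is full: natural back-pressure
        }
        queue.put(POISON);
        consumer.join(); // join, not Thread.sleep, is the synchronisation point

        return new ArrayList<>(processed);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(processAll(List.of("order-1", "order-2")));
    }
}
```

A bounded ArrayBlockingQueue also gives you back-pressure for free, which an in-memory broker queue with a leak clearly did not.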

Quartz (2007) - The bane of our operations team's life, as it would not shut down cleanly when under load and deployed as part of a Spring application. Replaced by the Timer class and some home grown JDBC.
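Again the actual replacement isn't reproduced here, but a hedged sketch of what a Timer-based scheduler can look like (the method and thread names are mine) shows how little code a simple recurring job needs, and how a CountDownLatch makes the shutdown deterministic:

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class TimerSchedulerSketch {

    // Run a recurring task a fixed number of times, then shut the Timer down
    // cleanly. Returns how many times the task actually ran.
    static int runRepeating(Runnable task, int times, long periodMillis)
            throws InterruptedException {
        Timer timer = new Timer("housekeeping", true); // daemon: never blocks JVM exit
        CountDownLatch done = new CountDownLatch(times);
        AtomicInteger runs = new AtomicInteger();

        timer.scheduleAtFixedRate(new TimerTask() {
            @Override public void run() {
                if (done.getCount() > 0) {   // ignore any ticks after we are finished
                    runs.incrementAndGet();
                    task.run();
                    done.countDown();
                }
            }
        }, 0L, periodMillis);

        done.await();   // a latch, not Thread.sleep, is the synchronisation point
        timer.cancel(); // clean shutdown: no lingering scheduler threads
        return runs.get();
    }

    public static void main(String[] args) throws InterruptedException {
        int runs = runRepeating(() -> System.out.println("housekeeping pass"), 3, 20L);
        System.out.println("ran " + runs + " times");
    }
}
```

Because the Timer runs tasks on a single thread, the count check and countdown cannot race, so cancel always happens after exactly the requested number of runs.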

Quartz (2009) - Once bitten, twice shy? Not us! The shutdown issue had been resolved and we needed a richer scheduling tool. Quartz looked like just the ticket and worked well during development, passing the limited load testing we were able to do on workstations. When we moved up to the production sized hardware and were able to put realistic load through, we discovered issues with the RAMJobStore that were not present with the JDBC store (which we didn't need). It just could not cope with very large (100,000+) numbers of jobs where new jobs were being added and old ones deleted constantly.

Security compliance without empirical evidence

As the project nears final delivery I am having to complete a statement of compliance for group security (if you felt a shiver as you read that, it was justified). One of the values I have tried to instil is that we don't do any documentation or formal design with no clearly defined audience. When we do identify a subject that does need to be formally recorded I am keen that it is done well. The OAuth interaction between components is one of those few key areas.

The OAuth sequence diagram was correctly checked into the UML repository and was pretty good. Looking at it I was suddenly struck by a deep sense of unease. How was I supposed to know whether the implementation sitting on our servers bears any relation to the work of art being displayed on my screen? What value is my statement without real knowledge that we are secure? I know this is something I have known for years and bang on about to anybody who will listen, but it was a startling moment to be sitting there looking at the design and being asked to make a formal statement about its realisation without empirical evidence. I already knew from an audit of the acceptance test suite (end to end, automated, in-container tests) that one of the omissions was anything that exercised OAuth. I decided that one of my priorities for tomorrow would be the completion of that test, and that I won't be making a statement of compliance without it.
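As a sketch of the kind of automated check I have in mind (not the project's actual acceptance test, and using a stub authorisation server rather than the real provider), the JDK's built-in com.sun.net.httpserver can stand in for the token endpoint so the token request leg of the flow is exercised end to end, in-process:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class OauthFlowCheck {

    // Spin up a stub token endpoint, run a client-credentials request
    // against it, and report whether an access token came back.
    static boolean tokenRequestSucceeds() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/token", exchange -> {
            // Canned response standing in for the real authorisation server
            byte[] body = "{\"access_token\":\"stub-token\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        try {
            int port = server.getAddress().getPort();
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:" + port + "/token").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.getOutputStream().write(
                    "grant_type=client_credentials".getBytes(StandardCharsets.UTF_8));

            String response = new String(conn.getInputStream().readAllBytes(),
                    StandardCharsets.UTF_8);
            return conn.getResponseCode() == 200 && response.contains("access_token");
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("token obtained: " + tokenRequestSucceeds());
    }
}
```

The real acceptance test would of course point the deployed components at each other rather than at a stub; the point is that the assertion runs against running code, not against the diagram.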

First really long run

After a week where I was seriously considering throwing in the towel for my autumn marathon I actually managed a long Sunday run! I was hoping to do twenty miles but only managed eighteen in the end. It was hot and I ran out of water, which didn't help, so I walked the last two miles rather than push myself and spend the rest of the day puking.

It was a glorious morning at first, not too hot or cold and no wind or rain for a change. I stopped (naughty, but what the hell) for a few moments when a flight of three large civilian helicopters came over, and again to watch two largish brown hawks chasing each other through the trees in the woods. The hawks were making very odd keening sounds and I was able to watch them for a minute or so. I am not sure what they were. Too small for buzzards, I think, and they looked too large for sparrowhawks, though I think the female sparrowhawk is pretty chunky.

Going along the disused railway alongside the River Lea (which looked beautiful and was full of big fish) I came across a large tree that had come down in the recent storms and blocked the path. A family was cycling the other way so I did my good deed for the day and helped the dad get the bikes over.

I did quite a hilly route, and as I was climbing the steep hill coming out of Wheathampstead I caught up with a group of cyclists. I had been running for two and a half hours and covered fifteen miles at this point, but the temptation was too great so I pushed that little bit harder and passed the stragglers on the hill. Joy! One of the tail enders took exception and started working up through his gears and putting a real effort in. He managed to draw level with me briefly before dropping well behind. Come the brow of the hill even the crappest of them came whizzing past. I even had the breath to swap pleasantries with the guy I had passed. If they had hung around they would have had the last laugh though. I only managed another few hundred metres before the effort caught up with me and I started to feel much the worse for wear, with stomach cramps and jelly for legs. It was all downhill (performance wise) from there.

No real damage afterwards other than a nasty sun/dehydration headache which lasted until Monday. I fell asleep for half an hour after the run and that was enough to ruin my night's sleep (along with visits from Jamie and Edith). I felt like shit the next morning. I wasn't too stiff though, and managed an okay 43 minute run in to work on Tuesday but a pitiful 44 minute run back in the evening (six minutes slower than the previous week).

Valtech Blog is live

The Valtech blog is live, yay! It incorporates posts from various Valtech consultants and covers all things Agile and software development in general. My submissions are featured so no more swearing...

SpringSource in the cloud?

SpringSource has been acquired by VMware!

Interesting that VMware paid more for SpringSource than Red Hat paid for JBoss, even in the midst of the recession.

Spring has got to be the tool of choice for the development of enterprise Java applications right now. I wonder if in the near future deploying Spring applications to the VMware cloud (or using Spring beans already deployed in the cloud) will be as easy as deploying to Tomcat or Jetty?