- I am used to stateless service classes which operate on domain objects. The stateless service classes obviously have no concurrency issues and the domain objects can be protected using synchronisation blocks. This application seems to have a lot more stateful objects that interact (this is anecdotal; I have not analysed the code specifically for this attribute).
- The class under test contains some internal thread spawning code. The test thread again needs to execute a Thread.sleep to reduce the chance of a race condition before firing the asserts.
- Often the response to the above problem is to make the sleep longer. Yesterday I saw a very simple test which took over thirteen seconds to execute. Most of that test duration was sleeps. Refactoring to remove the sleeps resulted in a test that executed in 0.4 seconds. Still a slowish test but a vast improvement. The last application I worked on had 70% coverage with 2200 tests. If each one had taken thirteen seconds to execute then a test run would have taken almost eight hours. In reality that suite took just over a minute on my workstation to complete. You can legitimately ask a developer to run a test suite which takes one minute before every checkin and repeat that execution on the CI server after checkin. The same is not true of a test suite that takes eight hours. You are probably severely impacting the team's velocity and working practices if the build before checkin takes eight minutes. There are very few excuses for tests with arbitrary delays built into them.
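One way to remove those sleeps is to give the class under test an explicit completion signal that the test can wait on, with a generous timeout as a safety net. This is a minimal sketch with hypothetical names (`Worker` and `awaitCompletion` are illustrative, not from the application in question):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical stand-in for a class that spawns its own thread internally.
class Worker {
    private final AtomicReference<String> result = new AtomicReference<>();
    private final CountDownLatch done = new CountDownLatch(1);

    void start() {
        new Thread(() -> {
            result.set("processed");   // the real work would happen here
            done.countDown();          // signal completion to any waiter
        }).start();
    }

    // Test hook: block until the background work finishes or the timeout expires.
    boolean awaitCompletion(long timeout, TimeUnit unit) throws InterruptedException {
        return done.await(timeout, unit);
    }

    String result() { return result.get(); }
}

public class LatchExample {
    public static void main(String[] args) throws InterruptedException {
        Worker worker = new Worker();
        worker.start();
        // Waits only as long as necessary, with a generous upper bound,
        // instead of an arbitrary fixed Thread.sleep.
        if (!worker.awaitCompletion(5, TimeUnit.SECONDS)) {
            throw new AssertionError("worker did not finish in time");
        }
        if (!"processed".equals(worker.result())) {
            throw new AssertionError("unexpected result");
        }
        System.out.println("ok");
    }
}
```

The test then takes milliseconds when the work completes quickly; the timeout only bites on genuine failure rather than padding every run.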
This weekend I did the Three Peaks challenge. At six o'clock on Thursday I dashed round the corner and jumped into a Ford Galaxy with three other dads from Maple School. It wasn't a clean exit as the kids came home with Joanne just as I ran out of the door. Jamie was really upset apparently as I didn't get the chance to say goodbye. Helen just missed me and she was also upset.
We flogged it up to Warrington where we stayed in a Travelodge with adjacent truck stop. We had our dinner in the self service food hall followed by a lager shandy in the truckers bar. It was pretty grim. We were the only drinkers not wearing a corporate trucking uniform and in the small minority without massive beer bellies. One of our guys was wearing sandals and another ordered a G&T. I have probably watched too many films but I did worry that the truckers might abduct us at this point to use as their playthings.
The next day our driver got us up to Glasgow where we picked up our fourth team member and de facto leader. We then made our way to Ben Nevis, stopping briefly for a very tasty burger at Loch Lomond and again for some heavy traffic. We got to Ben Nevis just before half five. We spent a few minutes preparing and then set off dead on 17:30.
It had been a cracking day weatherwise and I had worn my sunglasses at Loch Lomond. Ben Nevis doesn't work like that, so as soon as we started it began to rain. We made good progress on the ascent but by the halfway mark it was very poor visibility and stinging rain. We made it to the top in two hours, by which point my fingers were so numb I couldn't operate my camera. I had failed to pack a hat or gloves (it had been an uncomfortably hot week). We rushed back down the mountain with only a minor pause when I slipped and banged my knee. The pain was intense and I really thought my race was over but after a few minutes it subsided and I was able to continue. We made it back down very quickly in an hour and a half, catching the other two teams who had a half hour head start on us!
At the bottom of the mountain I had to take all my clothes off (by the side of the road) as I was soaked and then we all piled in to rush down to Scafell Pike. It was damp and unpleasant and my knee was pretty painful. I applied ibuprofen gel, freeze spray and deep heat but it kept on waking me on the long drive to Cumbria.
We got to Scafell about four am and found no parking spaces. Our driver stuck the car at the side of the road and we had a brief (and for my part, unpleasant) breakfast. The other Maple teams started at least ten minutes ahead of us. We then rocketed up Scafell which was very busy and, compared to Nevis, very easy. It was still cold and unpleasant on top but I never even put my waterproof jacket on. We messed around for a few minutes on the peak taking photos and then were off.
We got to the bottom about ten past eight in the morning and headed off to Snowdon in Wales. Our driver did not get a lot of sleep (if any) during these breaks. He must be a machine as his driving was calm and accurate with great navigation throughout.
We got to Snowdon about 12:30 after a delay caused by an accident which forced us onto back roads. It was quickly up the Pyg Track to the summit. The last part of the ascent on the Zig Zag was pretty exhausting and then it was straight down again via the Miners Track. I found going downhill, clambering over stones, very hard going on my knees and my guts by this point. We got to the flattish section of the Miners Track and from there it was easy. We romped in at 23:13:44, beating the other two Maple teams, both of whom came in within the twenty four hours.
It was an excellent and slightly disorientating experience that I am not sure I would rush to repeat but I am glad I did it.
I slept like a log Saturday night and then went up the Miners Track again with Helen and the kids (and a large group of others from Maple). This time the weather was foul. We were soaked to the skin and the winds were gusting at 80mph in the valley. We made it to the second lake but were forced back. Edith and Tom were both screaming and the rain was so hard it was impossible even to see. I was still damp six hours later when we finally got home!
Decent log entries are essential. On our current project we aim to write enough data to allow off-line analysis of performance and usage plus errors. I am also an advocate of more immediate and accessible runtime information. Log analysis is great but sometimes you need empirical data right away. On our current project we use the Java MBean facility. These MBeans can be easily accessed in a graphically rich way using tools like JConsole or VisualVM.
We have a couple of different types of analyzer which we expose through MBeans. One simply records how many times an event has occurred in a short time period. Another calculates a real time average, again across a short time period. For example, we have analyzers which record the length of time it takes to make a call to a particular downstream application. Each duration is recorded and an average over the last ten seconds is reported via the MBean. This calculation has been implemented to be very efficient from a CPU perspective, since 99.999% of the time the average is discarded before anybody bothers to look at it. Originally we were only using two or three of these average analyzers in the system. As developers found them useful they were placed around every single external interaction and we suddenly found ourselves with several thousand per application. These used about 25% of the heap and consumed significant CPU resource. The analyzer was then optimized and now consumes negligible resources.
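As a rough illustration of the idea (the names and the windowing strategy here are my assumptions, not the project's actual implementation), an analyzer can record durations cheaply and only compute the average when the MBean attribute is actually read:

```java
import java.lang.management.ManagementFactory;
import java.util.ArrayDeque;
import java.util.Deque;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean convention: the interface name is the class name plus "MBean".
interface CallDurationAnalyzerMBean {
    double getAverageMillis();   // average over the last window
    int getSampleCount();
}

class CallDurationAnalyzer implements CallDurationAnalyzerMBean {
    private static final long WINDOW_NANOS = 10_000_000_000L; // ten second window
    private final Deque<long[]> samples = new ArrayDeque<>(); // {timestampNanos, durationMillis}

    // Recording is cheap: append and prune; nothing is averaged here.
    public synchronized void record(long durationMillis) {
        samples.addLast(new long[] { System.nanoTime(), durationMillis });
        prune();
    }

    // The average is computed lazily, only when somebody looks via JMX.
    @Override public synchronized double getAverageMillis() {
        prune();
        if (samples.isEmpty()) return 0.0;
        long total = 0;
        for (long[] s : samples) total += s[1];
        return (double) total / samples.size();
    }

    @Override public synchronized int getSampleCount() {
        prune();
        return samples.size();
    }

    private void prune() {
        long cutoff = System.nanoTime() - WINDOW_NANOS;
        while (!samples.isEmpty() && samples.peekFirst()[0] < cutoff) {
            samples.removeFirst();
        }
    }
}

public class AnalyzerDemo {
    public static void main(String[] args) throws Exception {
        CallDurationAnalyzer analyzer = new CallDurationAnalyzer();
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(analyzer,
            new ObjectName("demo:type=CallDurationAnalyzer,name=downstreamCall"));
        analyzer.record(120);
        analyzer.record(80);
        System.out.println(analyzer.getAverageMillis());
    }
}
```

JConsole or VisualVM can then browse to the `demo:type=CallDurationAnalyzer` bean and watch the average update live. Keeping the calculation lazy reflects the point above: most recorded samples are discarded without ever contributing to an average anybody reads.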
I have personally been a little disappointed that our operations team have not made as much use of this facility as I expected. They are happy with their existing log analysis tools. As a team, we have questioned whether our investment in MBeans is worthwhile. We concluded that it was: even though the Ops team don't use it in Production, the development group rely on the data exposed through JMX for troubleshooting, especially in system test, for monitoring load tests, and as a quick way to gauge the health of Production.
Last week I was reminded again how useful this immediately accessible data is. After a system restart Production was doing 'something funny'. We had ambiguous portents of doom and various excited people considering some fairly drastic remedial action, including switching off a production system which was serving several thousand users. The fear was that something in the affected system might be placing unbearable demands on downstream applications. This seemed unlikely, as we have many layers of throttles and queues to prevent just such an occurrence, but there was something odd going on. The first port of call for the developers was the log files. With several thousand transactions being performed a second there were a lot of log lines whizzing past. Panic began to creep in as it was impossible to discern what, if anything, was going on in the explosion of data. I was able to walk over to my workstation and bring up VisualVM. In about thirty seconds I could see that right at that very moment we were sending a great many messages but well within the tolerances we had load tested against. I was able to use VisualVM's graphing function to track various data and within a minute or so could see that there was an unexpected correlation between two sets of events. (The number of messages sent to mobile phones and the number of identification requests made to a network component were drawing the same shaped graph, with a slight lag between the first and second sets of data and an order of magnitude difference in volume.) Again, these events were both within tolerances. Yes, something unexpected was occurring. No, it was not going to kill the system right now. We went to lunch instead of pulling the plug.
The data we collected pointed us in the right direction and we were able to find, again using VisualVM, that a database connection pool had been incorrectly set to a tenth of its intended size. The Ops guys made some tuning changes to the configuration based on what we had discovered. The application stayed up through the peak period.
In summary, log files are essential but there is still a need for real time, pre-processed data available via an easy-to-access channel. MBeans hit the spot in the Java world. Developers should not be scared of calculating real time statistics, like average durations, on the fly. They do need to make sure that the system does not spend a disproportionate amount of resources monitoring itself rather than delivering its function.
- Acceptance tests which execute against the application in its fully deployed state.
- Unit tests which typically target a single class and are executed without instantiating a Spring container.
I have been reflecting on the usefulness of, and investment in, test code for as long as I have been doing TDD. I have come to the conclusion that whilst acceptance tests are non-negotiable on projects where I have delivery responsibility, perhaps unit tests for TDD are not mandatory in certain situations. I have worked with several developers who are very, very good and simply do not see the value in TDD as it is contrary to their own, very effective, development practices. I know in my team right now a couple of the very best developers do not use TDD the way everybody else does. Education and peer pressure have had no effect. They are delivering high quality code as quickly as anybody else. It's hard to force them to do differently - especially when some of them pay lip service to TDD and do have a high test coverage count. I know that they write those tests after they write their code.
In the last few weeks I came across a couple of concrete examples where TDD could have helped those developers deliver better code. In the future I will try to use these examples to persuade others to modify their practice.
1. Too many calls to downstream service.
The application in question has a mechanism for determining the identity of a client through some network services. Those network services are quite expensive to call. The application endeavors to call them as infrequently as is safe and to cache identity once it is resolved. We recently found a defect where one particular end point in the application was mistakenly making a call to the identity services. It was not that the developer had made a call in error; it was that the class inheritance structure effectively defaulted to making the call, and so did so without the developer realizing. The identity returned was never used. I suspect that this code was not built using TDD. If it had been, the developer would have mocked out the identity service (it was a dependency of the class under construction) but would not have set an expectation that the identity service would be called, so the spurious call would have been flagged. The use of mocks not only to specify what your code should be calling but what it should not be calling is extremely useful. It encourages a top down (from the entry point into the system) approach where you build what you need when you need it.
It's likely that the defect would never have been introduced had the developer been using TDD. As it is we have an application which is making a large number (and it is a large number) of irrelevant calls to a contentious resource. We now have to schedule a patch to production.
Coincidentally, there was an acceptance test for this service, which was passing. This highlights a deficiency in our acceptance tests that we have to live with. They test the 'what' but not the 'how'. The tests were running against a fully deployed application which had downstream services running in stub mode. The test proved that functionally the correct result was returned but it had no way of detecting that an additional, spurious call to another service had been made during the process.
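A unit test in this style is straightforward to sketch. The names below are hypothetical; the point is simply that the test holds a reference to the stubbed dependency and asserts it was never touched (a mocking library such as Mockito does the same job with `verifyNoInteractions`):

```java
// Hypothetical names; the real identity service and endpoint differ.
interface IdentityService {
    String resolveIdentity(String clientRef);
}

// A hand-rolled counting stub: every call is recorded so the test can fail on it.
class CountingIdentityService implements IdentityService {
    int calls = 0;
    @Override public String resolveIdentity(String clientRef) {
        calls++;
        return "unused-identity";
    }
}

class Endpoint {
    private final IdentityService identityService;
    Endpoint(IdentityService identityService) { this.identityService = identityService; }

    // This operation does not need the caller's identity, so it must not resolve it.
    String handle(String payload) {
        return "handled:" + payload;
    }
}

public class NoSpuriousCallTest {
    public static void main(String[] args) {
        CountingIdentityService identity = new CountingIdentityService();
        Endpoint endpoint = new Endpoint(identity);

        String result = endpoint.handle("request-1");

        if (!"handled:request-1".equals(result)) throw new AssertionError("wrong result");
        // The assertion that catches the original defect: the expensive
        // downstream service must not have been touched at all.
        if (identity.calls != 0) throw new AssertionError("spurious identity call");
        System.out.println("ok");
    }
}
```

Had the inheritance structure silently called `resolveIdentity`, this test would have failed immediately, long before load testing against the real, contentious resource.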
2. Incorrect error handling
In a recent refactoring exercise we came across a piece of code which expected a class it was calling to throw an exception whenever it had an error processing a request. The error recovery in the code in question was quite elaborate and important. Unfortunately, the class being called never threw an exception in the scenarios in question. Instead it returned a status object which indicated whether corrective action needed to be taken. (It was designed to be used in conjunction with asynchronous message queues, where throwing an exception would have introduced unnecessary complexity.) The developer could easily have used mock objects, set an expectation that the exception would be thrown, and the problem would have remained. But if TDD had been used and the developer had been working top down, then the expected behavior of the mocks would have guided the implementation of the downstream classes. Nothing is foolproof but I think this manner of working should have caught this quite serious error.
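A sketch of the status-object contract (with hypothetical names) shows how a test written first pins down the behaviour the caller must rely on: the recovery path is driven by the returned status, and no exception is ever expected.

```java
// Hypothetical sketch of the status-object contract described above.
enum Status { OK, RETRY_REQUIRED }

interface RequestHandler {
    Status process(String message);   // never throws for business errors
}

class MessageProcessor {
    private final RequestHandler handler;
    MessageProcessor(RequestHandler handler) { this.handler = handler; }

    // Recovery is driven by the returned status, not by a caught exception.
    String consume(String message) {
        Status status = handler.process(message);
        if (status == Status.RETRY_REQUIRED) {
            return "requeued:" + message;   // the elaborate recovery would go here
        }
        return "done:" + message;
    }
}

public class StatusContractTest {
    public static void main(String[] args) {
        // Stub handler that reports the failure via the status object,
        // exactly as the real downstream class behaves.
        RequestHandler failing = message -> Status.RETRY_REQUIRED;
        MessageProcessor processor = new MessageProcessor(failing);

        String outcome = processor.consume("msg-1");
        if (!"requeued:msg-1".equals(outcome)) throw new AssertionError("recovery not triggered");
        System.out.println("ok");
    }
}
```

Writing this test before the caller would have forced the developer to confront the actual contract: a stub that returns `RETRY_REQUIRED` rather than one that throws, making the dead exception-handling path obvious.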
More subjective problems
I have also noted two other potential consequences of having some developers opt out of TDD. I do note that some developers on the team produce code that is more complex than others'. It is fine from a cyclomatic complexity perspective, but when you try to understand what it is doing you find yourself with a higher WTF count than you would expect. I think (again this is subjective; I have not gathered any empirical evidence) that a lot of the complexity comes from a lack of cohesion in the code. Logic is spread around in a way which made sense to the original developer because they had internalized all the classes concerned. That logic is not obvious to a new pair of eyes. Using TDD encourages cohesion in classes because it focuses the mind on what a class is responsible for before the developer has to worry about how it delivers those responsibilities.
This is a very subjective point and I would happily agree that several of the team members who do use TDD occasionally produce nasty code. My gut feeling however, is that it happens less often.
One final problem with some of the high flyers not using TDD is that bad practices tend to propagate through the team just as quickly as good ones. I have caught a couple of new joiners following a bad example, or simply not using TDD, because the developer they look to as a mentor does not evangelize about the technique, not buying into the practice themselves. This is a shame as those new joiners often have a greater need of the rigor that TDD imposes than the more experienced developers.
Somebody very clever, who probably had a beard (Grady Booch?), once said that "regular releases into production are the lifeblood of the software development process". I agree. My current client also seems to be in agreement but cannot extract themselves from the constraints of their existing processes.
The client in question has a successful agile adoption. Walking round the development teams you see task boards, burn downs and SCRUM meetings. Go to a management meeting and you'll hear them talk about two week iterations and the importance of continuous integration. At a strategic level, however, the organisation (which is very large) is still waterfall orientated. This has implications for the way in which work is financed. Funds for the development, testing and deployment of a given application are released on waterfall inspired milestones. This, in conjunction with a legacy of long development cycles, has led to this 'release vehicle' anti-pattern.
The organisation is unwilling to make a deployment of a component into production unless there is a named and funded change request which covers its release. Activities within development, possibly funded internally as 'business as usual', do not have such CRs. Therefore, a development activity such as refactoring for technical debt reduction or improving performance might get engineering buy-in but will not get released into production until some CR happens to touch the same application.
It is common to see refactorings made which then sit in source control for literally months as they wait for an excuse to go live. Medium to low priority defects or useful CRs which lack very high prioritisation from marketing never get executed because the programme manager does not have a release identified for the change.
The application suite can appear inert to external parties as it takes a considerable period for changes to make it through the full release cycle. This erodes confidence. If I were a product owner and saw that a team was taking six months to execute my minor change, I would not be inclined to believe that the same team could turn around my big important changes quickly. I would be looking for other mechanisms to get my changes into production and earning money quickly. Once I found a route that worked I would keep using it.
Why do people like the release vehicle?
- It is the way the whole software lifecycle, as exposed to the rest of the organisation, works. The QA team don't test a component unless they have funding from marketing. Marketing won't be paying for something that has no role in a prioritised proposition. The Operations team won't support the deployment activities for our component if they don't have the cash from the same marketing team.
- It looks like it is easier to manage for PMs. Releases (because they are infrequent) are a big deal, involving lots of noise, planning and disruption to everyday working patterns.
- It reduces infrastructure costs. It costs resource to make a release unless every aspect, including testing and operational deployment, is fully automated (and even then there is a potential cost in dealing with failures etc.). It costs resource to automate a manual build process. Engineers appreciate that fully automated build processes are a priority because in the end they reduce costs and increase agility. It is that age old problem of trying to convince not just the build team, but the build team's manager and the build team's manager's manager, that it is worth diverting resource in the short term to fix a problem in order to make a saving in the long term.
What we should do instead:
We should schedule frequent (bi-weekly, ideally more frequent) updates to production from the trunk of source control for every component. We should not need an excuse for a release. The release process should be as cheap as possible, i.e. automated build, regression test, deployment and smoke test. The code in the trunk is supposed to always be production ready and the automated tests should keep it that way.
If we achieve this we should:
- Reduce complexity in branch management (no merging changes made months ago).
- Avoid a massive delay between development and deployment which is not cost effective and makes support very hard.
- Increase our perceived agility and responsiveness.
- Enable refactoring to improve non-functionals (stability, latency, dependency versions, capacity).
- Prevent a release from being a 'special occasion' which requires significant service management ceremony.
Note: Having frequent, regular, low ceremony releases is greatly eased by having a fully automated build and deploy process but you can have one without the other. As stated above, having such a build process makes regular deployments to production cost effective but is an enabler rather than the justification for this change to working practice.
The snow was very powdery, like icing sugar, and it had been windy so the drifts were deep. The sunken lane behind Pound farm had filled up to knee height but the farmer (or somebody) had cut a narrow path through it. There were loads of hardy St Albans folk out enjoying the weather and breathtaking views in the bright sunlight. On the top overlooking Sandridge there was even a family having a picnic on a rug.
The path at the top of the hill had drifted up and there was no easy way through. I jumped straight in. It only came up to just below my knee but the shock was incredible. It was like jumping into an ice bath. All the heat seemed to get sucked out of my calf muscles. It only lasted for about twenty metres but was a real trial to get through. I warmed up pretty quickly as soon as I got out and brushed the snow off (bare legs).
I don't like to pause on runs, especially when it's cold, but the views and the silence were so spellbinding that several times I halted to drink it in for a few seconds.
Got back to St Albans feeling pretty good. Managed to do it at nine and a half minute mile rate which is nothing clever but the going was tough.
The first team was sitting down in a breakout area. Their body language spoke volumes. There was not one single participant maintaining eye contact with anybody else. Two people were playing on their phones. One developer had his head in his hands. Most had bored expressions. The team leader who is also the SCRUM master was the only person who spoke for the entire time I watched.
The second team was standing in a space near their desks. They were gathered round a task board which appeared to be up to date and was the focus of several of the individuals' updates. One person spoke at a time. Almost everybody appeared to be paying attention to whoever was speaking. Most updates were short and concise. A couple rambled on.
Other than both teams calling their meeting a SCRUM I could see no similarities.
As our agile adoption has spread beyond the original teams I suppose it is inevitable that, as the experience gets spread a little thinner, people will simply label their existing activities with agile sounding names. Often we have no clear remit in those teams to supply a mentor, and to try to offer advice would result in rebuttal as team leaders guard their territory. Does this matter? Is there a risk that these teams who are not practicing agile correctly will diminish and discredit agile in the eyes of our programme managers? This is sounding a bit like an excuse for an Agile Inquisition going round checking that no team is using Agile's name in vain. That cannot be a good thing either.
This Sunday when we got the meal on the table Helen congratulated 'Chef Edith' on an excellent dinner only to have Edith correct her "I'm not Chef Edith now, mummy, I'm Eater Edith". She then proceeded to consume her own body weight in roast potatoes, chicken and sausages.
Jamie has a habit of prefixing every statement with 'Actually'. Edith has picked up on this and got a bit confused so now every time we have Yorkshire Puddings with the dinner she thinks they are called 'actual-puddings'. If you try and correct her then you get a quizzical look then she will keep on calling them 'actual-puddings'. It is sending Jamie mad "because ice cream and jelly is an actual pudding, not these". This might well explain why she keeps doing it.
The good news is that I made it round in one piece, felt pretty good during and was fine afterwards (no puking or crippling aches). I did two gels, one at 1:15 and another at 2:15, and ran with a bladder pack so I had plenty to drink. I am contemplating running the race with the pack. The only reason I can think not to do so is that I look stupid. This doesn't sound like a good reason for not doing something beneficial, especially as I will look pretty stupid anyway...
- The 386-DX, which featured a math co-processor on board, allowing faster execution of the floating point maths required in financial applications, graphics packages etc.
- The 386-SX which was cheaper but did not feature the math co-processor and therefore was less 'powerful'. It had enough to differentiate itself from the previous generation of 286 chips but was regarded as distinctly inferior to its more expensive sibling.
This all sounded fair enough until it became public that the 386-DX and 386-SX shared the same manufacturing process including the construction of the maths co-processor. Where the process differed was that at some point the math copro in the SX chip was destroyed via some mechanical process. Suddenly the perception of the SX went from a lower spec product to a broken product. In Neal Ford's presentation he described the whole process as Intel selling customers their trash, like the SX was a defective unit being offloaded to unsuspecting users as a working chip.
At a very low technical level Neal's statement is true, but not at a commercial or practical level. Intel's engineers were given a change in requirements by their marketing department: produce a chip that is going to enter the budget market to complement, but not fully compete against, the 386-DX. They looked at their system and determined that the most efficient way to achieve this end was to 're-configure' existing 386-DX chips. This was likely much, much cheaper than setting up a whole new production line and testing the brand new chip it produced. To do otherwise would be against the pragmatic engineering ideals that Agile is supposed to champion. Flip the argument around and ask: should we redesign the entire process from scratch, achieving the same end result at much higher cost, just so that we can claim the chip contains no parts that it doesn't need? Maybe we find this so objectionable because a chip is a tangible entity and we are used to associating (rightly or wrongly) the cost of raw materials and manufacturing with the value of such items. Maybe we don't factor in the cost of design and marketing, which I suspect are massive for a complex consumer electronic product like a cutting edge CPU.
This pattern raised its head again, but with less fuss, a couple of years ago when HP shipped high end servers with multiple CPUs. Some of the CPUs were disabled upon delivery. If the customer's processing requirements increased over time (which they always do) then they could pay HP, who could then remotely enable the additional processors without the customer incurring the cost of an engineer on site, downtime for installation etc. Again, this early step towards today's processing-on-demand cloud computing concept raised some eyebrows. Why should customers pay for something that was going to cost the supplier nothing? Again, this is a preoccupation with the physical entity of the manufactured component. If the additional CPUs had been sitting idle in an HP server farm rather than at the customer's site, and purchasing them involved work being sent across the network, my suspicion is that nobody would have had any objections.
We use a UML design tool at my current client site called Visual Paradigm. It has a number of editions, each with a different cost. It has a very flexible license purchase system which we have taken advantage of. We have a large number of standard level licenses because the features that this edition gives will support most of our users most of the time. Occasionally we need some of the features from a more expensive edition. Its not that only one or two individuals require these features, we all need them, just very occasionally. The Visual Paradigm license model supports this beautifully. We have a couple of higher edition licenses. On the rare occasion that users need the extra features they just start the program in the higher edition mode. As long as no other users connected to our license are using the higher edition at that time, there is no issue. The similarity with the examples above is that there is only one installation. We don't need to install a different program binary every time we switch edition. We love this as it makes life easy. I am sure Visual Paradigm like it as well as it simplifies their build and download process.
To me the two scenarios, software and hardware, appear pretty much identical. Everybody appreciates that the cost of creating a copy of a piece of software is so close to zero that it is not worth worrying about. Therefore we don't mind when a supplier gives us a product with bits disabled until we make a payment and get a magic key. It's harder to think of hardware in the same way: that the build cost doesn't matter, and that there might be no difference in manufacturing costs for two products with very different customer prices. The cost of delivering the product, like delivering software, includes massive costs which have nothing to do with creation of the physical artifact.
Maybe this wasn't the point in the above presentation but I guess the thing that startled me was that my natural inclination was to immediately associate the value with the tangible item. In my head this is all getting mixed up with free (as in speech) software and the idea that it is unproductive and unethical to patent / own / charge for ideas.
I attended some very thought provoking sessions as well as presenting my own experience report on techniques for technical architecture in an agile context. My colleague from Valtech US, Howard Deiner, battled hardware and network issues to present a well received demonstration of Continuous Integration. Both sessions got reasonable attendances in the face of stiff competition from presentations being held in parallel. Both received very positive feedback from attendees (mine scored 80% for command of topic and 78% overall). I got lots of positive feedback for my session in conversations with conference attendees throughout the week. This was very much appreciated.
My presentation is backed up by an IEEE report which was published in the conference proceedings. The report's premise is that incumbent waterfall software development processes force technical architects into a position of isolation and ineffectiveness (the ivory tower). The challenge I (and many, many other TAs) have faced is how to deliver the guarantees of technical correctness and consistency that clients (especially those moving from waterfall to agile) demand when some of the most widely used conventional techniques for architecture have been discredited. I am thinking primarily of the emphasis placed on up front detailed design and architectural review.
The report details architectural problems during scale up of a previously successful agile project. It then describes and evaluates a number of techniques employed on the project to deliver the technical architecture without ascent of the ivory tower. The conclusions include the argument that documentation is not an effective tool for technical governance and that the architect must target activities which bring them closer to the actual implementation. This mirrors Neal Ford's point in his Emerging Architecture presentation that we need to accept that the real design is the code, not the summaries and abstractions of the code presented via the numerous tools (UML, narrative documents, whiteboard sessions) at our disposal. Other conclusions include the identification of automated tests as an architect's, not just a tester's, most effective tool for delivering a correct solution. The paper also identifies that soft skills around communication and people management, often anathema to the conventional architect, are critical to success. Finally the report concludes that utilizing the most cost effective techniques (rather than just the most technically powerful) was key. (That does not mean you cannot justify the use of expensive techniques, just that they may only be justifiable on the most important components in the system.)
Agile 2009 was a great balance of real-world experiences (such as my session) and more philosophical, academic sessions. There was also the chance to listen to some insightful keynotes and take part in some exciting expert sessions which challenged the way we work. It is always easier to learn in a community of professionals with real experience and this was definitely the case at this conference. I learned as much over dinner and in break-out sessions as I did in the formal seminars.
I am going to blog what I learned in some of the sessions in the next couple of days, possibly sooner as I am stuck at Chicago O'Hare for eleven hours after a 'mechanical issue' with our plane!
Tom was poorly all weekend with a high temperature. He hit 38.8°C on Saturday night. I was taking it all in my stride until I used the fancy thermometer Helen bought and it started screaming and flashing red danger high temperature. It's funny how having some figures can totally change your perception of a situation, correctly or otherwise. I rang the on-call doctors to check how I was supposed to double up Calpol and Nurofen and spent all of Saturday night getting up and administering more medicine. He was still hot on Monday morning but Helen reported him much better by that evening.
The recurring pattern I see is that on at least four occasions the best-of-breed package has proven to be severely sub-optimal. What is worse is that most of the time these deficiencies occur when we move into high-volume load test in a cluster. It seems only then that we discover some limitation. Typically this is caused by a particular specialism required for our application which then exercises some part of the library that is not as commonly utilised as others and therefore less stable. Sometimes the limitation is so bad that the library has to be refactored out before launch; on other occasions the issue becomes a known restriction which is corrected at the next release. All of the significant refactorings have involved replacement of the large, generic, well-known library with a much smaller, simpler, bespoke piece of code.
I am undecided whether this is a positive pattern or not. On one hand using the standard component for a short period helped us focus on other pieces of code. On the other, the identification of issues consumed significant resource during a critical (final load test) period. The answer probably is that it is okay to use the standard component as long as we put it under production stresses as quickly as possible. We then need to very carefully take account of the effort being consumed and have an idea of the relative cost of an alternative solution. When the cost of the standard component begins to approach the cost of the bespoke one then we must move swiftly to replace it. The cost should also factor in maintenance. We need to avoid the behaviour where we sit round looking at each other repeating "This is a highly regarded piece of software, it can't be wrong, it must be us." for prolonged periods (it's okay to say this for a couple of hours, it could be true). I used to work for a well-known RDBMS provider. I always felt that the core database engine was awesomely high quality and that anybody who claimed to have found a defect was probably guilty of some sloppy engineering. I knew however, from painful experience, that you did not have to stray far from the core into the myriad of supported options and ancillary products to enter a world of pure shite. The best-of-breed open source components are no different.
Some of the problem components:
ActiveMQ (2007) - We thought we needed an in-memory JMS solution and ActiveMQ looked like an easy win. It turned out that at that release the in-memory queue had a leak which required a server restart every ten to fifteen days. It also added to the complexity of the solution. It was replaced by very few lines of code utilising the Java 5 concurrency package. I would still go back to it for another look, but only if I was really sure I needed JMS.
Quartz (2007) - The bane of our operations team's life as it would not shut down cleanly when under load and deployed as part of a Spring application. Replaced by the Timer class and some home-grown JDBC.
Quartz (2009) - Once bitten, twice shy? Not us! The shutdown issue had been resolved and we needed a richer scheduling tool. Quartz looked like the ticket, worked well during development and passed the limited load testing we were able to do on workstations. When we moved up to the production-sized hardware and were able to put realistic load through, we discovered issues with the RAMJobStore that were not present with the JDBC store (which we didn't need). It just could not cope with very large (100,000+) numbers of jobs where new jobs were being added and old ones deleted constantly.
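On the ActiveMQ replacement: the "very few lines of code utilising the Java 5 concurrency package" can be illustrated with a bounded BlockingQueue. This is a hypothetical sketch, not the project's actual code — the class and method names are my invention — but it shows the shape of a single-JVM producer/consumer queue that needs no broker at all:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch: an in-memory event queue built on the Java 5
// concurrency package, standing in for an in-process JMS queue.
// Names (InMemoryEventQueue, publish, take) are invented for this example.
public class InMemoryEventQueue {
    // Bounded capacity gives natural back-pressure instead of a memory leak.
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(10_000);

    // Producer side: blocks if the queue is full.
    public void publish(String event) throws InterruptedException {
        queue.put(event);
    }

    // Consumer side: blocks until an event is available.
    public String take() throws InterruptedException {
        return queue.take();
    }

    public static void main(String[] args) throws InterruptedException {
        InMemoryEventQueue q = new InMemoryEventQueue();
        q.publish("order-created");
        q.publish("order-paid");
        System.out.println(q.take()); // order-created (FIFO)
        System.out.println(q.take()); // order-paid
    }
}
```

For in-process messaging this removes the broker, its configuration and its failure modes entirely; the trade-off is that you get none of JMS's persistence or cross-JVM delivery, which is exactly why it only works when you are really sure you don't need JMS.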
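The Timer-based Quartz replacement can be sketched too. Again this is a reconstruction under assumptions (fixed-rate housekeeping jobs, names invented), but it demonstrates the one property the Quartz deployment lacked: a shutdown that returns promptly even under load, because Timer.cancel() simply discards pending tasks:

```java
import java.util.Timer;
import java.util.TimerTask;

// Illustrative sketch of a minimal Quartz stand-in built on java.util.Timer.
// Class and method names are invented for this example.
public class SimpleScheduler {
    // Daemon thread so a hung task cannot keep the JVM alive on exit.
    private final Timer timer = new Timer("housekeeping", true);

    public void scheduleEveryMillis(Runnable job, long periodMillis) {
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override public void run() { job.run(); }
        }, 0, periodMillis);
    }

    // cancel() discards queued tasks and returns promptly: a clean shutdown.
    public void shutdown() {
        timer.cancel();
    }
}
```

A usage note: a test for this should wait on a CountDownLatch with a bounded await rather than an arbitrary Thread.sleep, so the test finishes as soon as the job has fired. The obvious limitation is a single scheduling thread and no persistence, which is fine for simple periodic jobs but not for the 100,000+-job workload that broke the RAMJobStore.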
The OAuth sequence diagram was correctly checked into the UML repository and was pretty good. Looking at it I was suddenly struck by a deep sense of unease. How was I supposed to know whether the implementation sitting on our servers bears any relation to the work of art being displayed on my screen? What value is my statement without real knowledge that we are secure? I know this is something I have known for years and bang on about to anybody who will listen, but it was a startling moment to be sitting there looking at the design and being asked to make a formal statement about its realisation without empirical evidence. I already knew from an audit of the acceptance test suite (end-to-end, automated, in-container tests) that one of the omissions was anything that exercised OAuth. I decided that one of my priorities for tomorrow will be the completion of that test and that I won't be making a statement of compliance without it.
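As a sketch of the shape such an acceptance test might take — the endpoint, port and stub server here are entirely invented for illustration; the real test would fire requests at the deployed service — the essential, minimal assertion is that a request arriving without credentials is refused:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

// Hypothetical acceptance-test sketch: assert that an unauthenticated
// request is rejected. A stub server stands in for the real deployment;
// the /api/orders endpoint is invented for this example.
public class OauthRejectionTest {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/api/orders", exchange -> {
            String auth = exchange.getRequestHeaders().getFirst("Authorization");
            int status = (auth != null && auth.startsWith("Bearer ")) ? 200 : 401;
            exchange.sendResponseHeaders(status, -1); // -1: no response body
            exchange.close();
        });
        server.start();
        try {
            URL url = new URL("http://localhost:"
                    + server.getAddress().getPort() + "/api/orders");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // No Authorization header: the service must refuse the request.
            if (conn.getResponseCode() != 401) {
                throw new AssertionError("expected 401 for unauthenticated request");
            }
        } finally {
            server.stop(0);
        }
    }
}
```

A real version would also exercise the token grant and a valid-token request end to end, but even this one assertion is empirical evidence of a kind that no sequence diagram can provide.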
It was a glorious morning at first, not too hot or cold and no wind or rain for a change. I stopped (naughty but what the hell) for a few moments when a flight of three large civilian helicopters came over, and again to watch two largish brown hawks chasing each other through the trees in the woods. The hawks were making very odd keening sounds and I was able to watch them for a minute or so. I am not sure what they were. Too small for buzzards, I think, and they looked too large for sparrowhawks, though the female sparrowhawk is pretty chunky.
Going along the disused railway alongside the River Lea (which looked beautiful and was full of big fish) I came across a large tree that had come down in the recent storms and had blocked the path. A family was cycling the other way so I did my good deed for the day and helped the dad get the bikes over.
I did quite a hilly route and as I was climbing the steep hill coming out of Wheathampstead I caught up with a group of cyclists. I had been running for two and a half hours and covered fifteen miles at this point but the temptation was too great so I pushed that little bit harder and passed the stragglers on the hill. Joy! One of the tail-enders took exception and started working up through his gears and putting a real effort in. He managed to draw level with me briefly before dropping well behind. Come the brow of the hill even the crappest of them came whizzing past. I even had the breath to swap pleasantries with the guy I had passed. If they had hung around they would have had the last laugh though. I only managed another few hundred metres before the effort caught up with me and I started to feel much worse for wear with stomach cramps and jelly for legs. It was all downhill (performance wise) from there.
No real damage afterwards other than a nasty sun / dehydration headache which lasted until Monday. I fell asleep for half an hour after the run and that was enough to ruin my night's sleep (along with visits from Jamie and Edith). I felt like shit the next morning. I wasn't too stiff though and managed an okay 43-minute run in to work on Tuesday but a pitiful 44-minute run back in the evening (six minutes slower than the previous week).
Interesting that VMware paid more for SpringSource than Red Hat paid for JBoss, even in the midst of the recession.
Spring has got to be the tool of choice for the development of enterprise Java applications right now. I wonder if in the near future deploying Spring applications to the VMware cloud (or using Spring beans already deployed in the cloud) will be as easy as deploying to Tomcat or Jetty?