Monday, 27 May 2013

Putting Lessons to Work

An important phase of the project development cycle, one which is often discarded along the way, is the unfortunately-named process of the "Post Mortem". It is an ugly term, completely inappropriate to its significance, although quite understandable that it came to be so named.  The correct term should be "Autopsy".  "Post-mortem" literally means "after death" whereas "autopsy" literally translates as "see for yourself".  The death analogy is a poor one because successful projects do not die and ill-conceived ones do not die fast enough.  On the other hand, a good old see-for-yourself evaluation should be welcomed at major milestones.

This phase is often passed over in favour of more pressing tactical work because the value of the process is misunderstood. Typically, the coroner's jury includes colleagues of superior or lateral rank to those who worked on the subject project. This is practised because more can be learned by an outside observer; the team members are often too close to be able to see which factors contributed to the various wins and losses along the project path. The team members themselves, those whose work is being vivisected, are often defensive, thinking that they are being second-guessed or criticized.  It is an unnecessary concern and, when egos get involved, it often works against the process.

The object is simply to identify what worked, what is working and what did not work, in terms of both process and technology. We are test-driving new technologies every day and deploying them to production the next. A lot of learning, a lot of research and empirical trial-and-error occurs in the development process (hence the name), and it would be unwise to miss the opportunity to capture the lessons that every new project brings.  No project is ever perfect; there is always a bottleneck, a bit of room for improvement, something to be considered before we launch into the next project, which will be bigger and more complex.

That next project will very likely be implemented with tools very similar to those used on the project before.  While entire development shops have been known to change programming language paradigms, it is not unlike a religious conversion and does not happen frequently.  The same management practices will likely also still be in place, and that is when your autopsy results will be the most relevant.  Look at the results and recommendations of that report and compare them to the objectives of your current project.  Are any of them relevant to your present undertaking?  Are there patterns of management or decisions being made which led to trouble in the past? Are you leveraging the most productive practices of the past?

Every project is different, every client unique.  The lessons from one adventure might not map neatly onto the latest venture.  But the building blocks remain the same.  The complexity of requirements continues to grow at an exponential rate, and only through continuous research and learning, both within and without the organization, can one hope to keep up.

Thursday, 23 May 2013

JavaScript - The Redemption of The Ugly Duckling.



In 1995, when Java was introduced with great fanfare, it brought with it the promise of a browser-embedded rich client with first-class widgets across all relevant platforms. The vision of server-side Java was talked about, but that was for the future; it was the applet that we were given to play with.

Depending on whom you ask, today the applet is either all-but-dead on the real-world internet or was eviscerated and buried in a tragic explosion when the 90's bubble collapsed on top of it. Server-side, mobile and desktop Java thrive in a big way, but the dream of Java as an instrument of dynamically deliverable, lightweight applications delivered by and integrated with your browser? That early promise fell dead.

Barely noticed at that coming-out party was Java's awkward little sister, JavaScript (who, as later DNA tests proved, was utterly unrelated). It too promised portability across browser and platform, but operated within the context of the current page. It became useful in a utilitarian sort of way and soon became indispensable for providing basic features like automating roll-overs and light input validation.

Any attempt to stretch JavaScript, back in those early days, was punishing. The behaviour and even the core libraries of JavaScript implementations varied widely from browser to browser. The MS-v-World DOM Wars were at their height, and even if you found something cool which worked across every major browser in your (extremely manual) test suite, it was likely to break with the next browser release.  Even under the best of circumstances, the browsers themselves did not provide the most accommodating environments.  Debugging was a painstaking process that had to be repeated on a number of browsers, and the shifting, incompletely standardized JavaScript engines which browsers were shipping ensured that there were always a lot of bugs.  Things which worked on every other browser but just not this one ate up hours of time that would have been better spent on sleep.

Many web developers grew suspicious of all-too-fragile JavaScript and used it grudgingly, only because there very often was no other means of implementing requirements.  It was often treated as a technology of last resort. While there was some work on the server side, largely pushed by Netscape, and a few applications were embedding the open-source Rhino engine, it was pretty marginal and mainstream developers paid it little heed.

Then something happened, or more accurately, a series of things happened which converged in the most delightful way.

The catastrophic failures of the internet bubble had prompted many tech startups to explore faster, more flexible ways of producing applications, which led to a rising interest in server-side scripting languages. PHP had appeared on the scene and new web development was adopting it at a staggering rate. Languages like Python, Ruby and Lua all gained momentum, contributing to a sudden new-found respect for scripting languages in general. And, on the client side, the JavaScript engines in browsers were quietly improving.

This was the state of affairs when the modern JavaScript framework appeared.  There are others, of course, but I really am referring to the truly innovative, empowering notation which is jQuery. It took a little while for the importance of jQuery to dawn on me, certainly not before Firebug had hit the web and provided the first-ever development environment for JavaScript which was actually sane.  While high-level libraries and 'frameworks' for JavaScript had been around for a while, providing useful features, jQuery did something more dramatic.  It gave us a fundamentally new, concise notation for performing complex operations, a powerful shorthand for describing the relationship between code and the document in which it resided. In doing so, it showed what a language of tremendous power and elegance JavaScript could be, and it extended the power of the developer exponentially.
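To illustrate what that shorthand buys you, here is a minimal sketch (the element id, class name and markup are hypothetical): first the traditional DOM incantation, then the same intent expressed as a jQuery selector and chain.

    // Plain DOM: find every row in a hypothetical #results table,
    // wire up a click handler and mark the clicked row as selected.
    var rows = document.getElementById('results').getElementsByTagName('tr');
    for (var i = 0; i < rows.length; i++) {
        rows[i].onclick = function () {
            this.className += ' selected';
        };
    }

    // The same intent in jQuery: one selector, one chain of methods.
    $('#results tr').css('cursor', 'pointer').on('click', function () {
        $(this).addClass('selected');
    });

The second form does no more than the loop above it; the difference is that it reads as a single declarative statement about the document.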

Interest in JavaScript on the server side began to rise for the first time.  Through Rhino and Java's JSR-223 Scripting Engine Interface, projects of all kinds began to spring up which supported any compliant scripting language, with JavaScript among the most popular.  And then along came Node.js.

Node.js takes advantage of Google's fast V8 JavaScript engine and event processing models learned from Ruby. Both Ruby and JavaScript are object-oriented languages that support dynamic typing and treat lexically scoped closures as first-class values. While the event concepts may have come from Ruby, the implementation used a native library (libev, later libuv) to facilitate non-blocking I/O via asynchronous execution, and the result was a tight scriptable interface with clean semantics and smoking-fast performance.
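A few lines show the flavour of that model (a minimal sketch; the file name and port are arbitrary). The file read is handed off to the event loop, so the process keeps accepting connections while the disk does its work.

    var http = require('http');
    var fs = require('fs');

    http.createServer(function (req, res) {
        // readFile does not block; the callback runs when the data arrives.
        fs.readFile('./greeting.txt', 'utf8', function (err, text) {
            if (err) {
                res.writeHead(500);
                res.end('could not read greeting.txt');
                return;
            }
            res.writeHead(200, { 'Content-Type': 'text/plain' });
            res.end(text);
        });
    }).listen(8080);

Nothing here is framework magic; it is the same JavaScript the browser taught us, pointed at sockets and file handles instead of the DOM.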

JavaScript is proving to be a potent force on the modern development scene. The rise of HTML5 has given JavaScript more interesting and useful things to do within the browser and it's slowly taking its place as a first-class server-side development language.

Sunday, 19 May 2013

So, You Think You're Agile?

The software development industry has generally converged on the Agile Methodology as the one-size-fits-all universal approach.  Agile has many attractive features to justify the infatuation. Working groups are largely self-organizing, short sprints produce tangible results at regular intervals, daily goals are set and reinforced through informal scrums and the overall direction is honed and re-honed via a tight feedback loop with the client.


Agile avoids the Problems of Big Analysis: the painstaking process of determining functional and technical requirements down to the last detail before the first line of code is typed.  One can see the draw of this so-called "old-school" methodology.  The developers (not to mention the client and the Product/Project Managers) set out with clearly defined goals. Scope is established and signed off by all parties, designs have been finalized, the opportunity for surprise greatly reduced.  The actual work of implementing a well-defined product is greatly simplified when all related factors have been rigorously identified beforehand. One highly-evolved exponent of this approach would be the Capability Maturity Model Integration (CMMI).  Evolved? Yes, but delivery is going to take a while.

In the rapidly-changing world of business app development, Big Analysis is also fraught with well-known perils. Thanks to the inexorability of Moore's Law, client needs (and wants) are evolving at an ever-accelerating rate. Products of all kinds are in constant peril of being made obsolete by the competition; in the digital world, this can happen before a product is fully defined.  If any product is expected to survive any length of time in the modern market, it must move from idea to distribution/deployment as rapidly as possible.

The desire to produce software quickly is certainly nothing new.  Software development is expensive: analysts, developer hours, sysadmin time and QA iterations are all costly. Even more costly are the implications of deploying or distributing a deeply flawed product. In the 90's, we spent much of our pub time discussing the precepts of "Rapid Application Development" and the Pyramid of Fulfilment. Reaching back into the elder days, we find that IBM was a leading exponent of agile development (before the term was coined) via a significant investment in APL (A Programming Language), an interactive scripting language which places strong emphasis on packing a lot of semantic meaning into a relatively small set of symbols.  This encouraged developers to bring clients to their terminals and model program behaviour iteratively with immediate client feedback.  Modern vendors of APL still practise this frequently; they always have.

The Agile Methodology has come down to us as a set of broad rules, shaped by long observation and experience, for turning ideas into concrete implementations with a minimum of overhead in terms of time and budgetary resources.  It encourages project planners to choose those elements of Agile which are suitable to the project at hand and disregard the irrelevant. Every Agile shop I have had the opportunity to work in has had what amounted to a set of individualized, custom processes intended to produce the vision, such as it might be, with a minimum of time and confusion. Unfortunately, that intention is all too rarely realized.

To give the matter some deeper thought (a thing which Agile neither encourages nor explicitly prohibits), we observe that the more successful advocates are aware that certain elements cannot be ignored.  For example, Agile dictates that project members must be dedicated to that project, not stretched across multiple projects, each of them vying for a slice of attention.  A multi-tasked developer cannot achieve the level of focus that true agile development demands when they are spending hours in unrelated meetings or hampered by delivery schedules for other products. The Agile developer is expected to be sensitive to the reality that many aspects of a project may be under-analysed or weakly defined, and they need to be responsive in identifying those aspects as they arise. This is not easily accomplished by a distracted developer or, worse, an overworked one.

Another element of Agile methodology which is all too often gleefully ignored is its recommendations about work hours.  A tired programmer makes mistakes and misses things.  The software development industry has always demanded extra hours from programmers, sysadmins and project managers alike. There will always be events which necessitate it: milestone delivery dates, clearing development bottlenecks, operational emergencies.  This is accepted and well established.  But if your teams are putting in 60+ hour weeks, month after month, it points to a flaw in the planning and estimation process.  A naive manager might perceive this as getting 50% added value at no added cost, but it turns out there are serious costs to this practice. Before very long the measure of concrete progress slows, things get missed and people start making mistakes. More than once, I have seen a code base regress under these circumstances. The things that are getting missed are often the things which were not identified by the analysts or the architects because, hey, we're Agile and continuous discovery is part of the job.

There are two other aspects of Agile Methodology which are critical to successful execution but which are often ignored: the Maintenance Sprint and Peer Review.  Developers are explicitly instructed to implement any given task in the most convenient way possible. Don't worry about the best way of doing a particular task; just find one which works well for now.  This philosophy is based on the observation that tasks are frequently revisited as analysis-by-discovery continues, and the time spent determining the optimal method and subsequently coding it often turns out to be wasted, as new information may invalidate that code.  And this is the condition which makes both the Maintenance Sprint and Peer Review indispensable.

The Maintenance Sprint is the only opportunity you have to optimise your program, to spend time on improvements previously deferred for those parts of the business logic which have 'settled'.  It is a chance to organize the creative, task-oriented out-flux into a coherent model; to provide a sober, pessimistic Yang to counter-balance the effusive optimistic Yin of the creative developer. The Peer Review puts a second set of eyes on the code which can only serve to increase the quality of that code.  Even the most accomplished developer benefits from reviewing code with someone of similar experience, to spot the odd oversight, correct assumptions or to validate an apparently ideal solution.  Junior programmers engaged as readers in this exercise get to see how their more experienced peers go about things.  These are the means by which we tune our programs and sharpen our programmers.

A related risk is the problem of controlling scope in a context where it is acknowledged that all requirements are not known up front.  The subject of negotiating scope in an Agile environment is beyond the scope of this particular essay but it certainly bears consideration as scope-creep continues to be the leading cause of death among projects.

Agile, when done well, is a powerful and transformative methodology, but it is up to the project planner to ensure that the methodology used is complete.  An incomplete methodology tends to lead to an incomplete product, and then it never gets out the door.

Moore's Law

Moore's Law is often stated as: the power of technology doubles every 18 months.  I will not quibble about the attribution, but it was originally an observation about the density of transistors that could be packed onto integrated circuits.  However, the maxim has transcended itself.



Technology is the intersection between science and economics.  Technological power increases both when new innovations are introduced and as the price of those innovations drops in the market, making them more readily available to a larger number of consumers. In short, technology is the science that consumers can afford.  Some portion of those consumers are technologists who will take today's affordable innovation and use it to produce tomorrow's. The most successful of those innovations go to market to enable the next cycle.

The popular media often comments on the steady growth of tech, but that is a mischaracterization. The progress of technology is not steady at all.  It is accelerating.  Any phenomenon which exhibits a doubling period is experiencing an accelerating rate of growth, and the converse is also true.

There is an interesting property of phenomena with doubling periods, if I may demonstrate:

Imagine that I have made an agreement with my son that, in lieu of a monthly allowance, I will pay him 1 penny on the first day, 2 pennies on the second, 4 pennies the next and so on, doubling the amount each time. Let's look at my books over time.

Day 1: paid 0.01 - total paid 0.01
Day 2: paid 0.02 - total paid 0.03
Day 3: paid 0.04 - total paid 0.07
 ...

Day 12: paid 20.48 - total paid 40.95
Day 13: paid 40.96 - total paid 81.91

I will save you all the suspense: by day 30, I will owe the boy $5,368,709.12 for that single day and will have indebted myself a total of $10,737,418.23.  The interesting thing to note here is that on each day, I am paying out 1 cent more than the sum of all previous payments combined.
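For the sceptical, a few lines of JavaScript will reproduce the ledger (a minimal sketch; amounts are kept in cents so the arithmetic stays exact):

    // Day n pays 2^(n-1) cents; the running total after day n is 2^n - 1 cents,
    // which is always 1 cent less than the following day's payment.
    var totalCents = 0;
    for (var day = 1; day <= 30; day++) {
        var paidCents = Math.pow(2, day - 1);
        totalCents += paidCents;
        console.log('Day ' + day + ': paid ' + (paidCents / 100).toFixed(2) +
                    ' - total paid ' + (totalCents / 100).toFixed(2));
    }
    // Final line printed: Day 30: paid 5368709.12 - total paid 10737418.23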

When we say that digital technology doubles every 18 months, we are saying that every 18 months it puts more new technological power at the service of the consumer than in all of history combined. This is tempered somewhat by our ability as consumers to absorb and leverage that new-found power (think Spiderman in the early days), but the rise is seemingly inexorable.

No acceleration can go on indefinitely; physics demands that there be a limit somewhere. The arrangement with my son has to end some time in the second month, when the earth's copper supply gives out.  (If we were able to mine all of the copper in all of the asteroids and moons in our solar system, it would only extend the lifespan by another week or two at best.)  The end of the acceleration of technological advancement is somewhat harder to see.  Limits have been predicted before and deftly avoided by diabolically clever researchers.

Not all branches of technology conform to an 18-month doubling period but a survey of their history suggests that they too grow at an accelerated rate as each new discovery or innovation inspires and enables the next generation of change.

While we flew our first kites around 200 BCE, it was nearly 2000 years before man took to the air in hydrogen balloons in the 1780s.  From there, it was only 120 years before we find the Wright Brothers putting heavier-than-air machines into the sky.  Another 15, and the Fokker Dr.I Dreidecker is taking pilots to over 6000 metres at close to 150 kph. Forwarding another century finds us today anticipating private enterprise putting tourists in space as a regularly scheduled service within the next year. It appears that some accelerated development has manifested itself. I would be hard pressed to speculate what that industry's doubling period is, but we can be assured that, on average, it delivers more to us every year than it did the year before.

Another place this phenomenon of accelerating growth may be seen, one driven by the growth of technology, is in the end user. Be they consumers or business clients, they are becoming more sophisticated in their wants and needs, demanding products of ever-increasing complexity while holding out the promise of an ever-shorter shelf life as obsolescence asserts itself more rapidly than ever. They not only demand more, they demand it faster.

For those of us in the business of delivering digital products, this is a serious competitive challenge: a daily call-to-arms for every analyst, manager, programmer and architect, and for the executive who would lead them. New tools and processes must forever be explored which might allow us to deliver more complex functionality in a shorter space of time. We no longer have the luxury of finding a niche and settling into a stable market with an established, comfortable way of delivering to it.  Moore is right behind us, dogging our every step.

A programmer implementing a brand new feature must always write new code to do so.  When called upon to enhance an existing feature, the choices are to rewrite or to refactor.  If the enhancement does not break the existing paradigm, a refactor is often effective and inexpensive.  If a new paradigm forces the issue, a rewrite may be the only option.  The processes which we employ to translate client/consumer needs into products need to evince a similar flexibility.  As clients change and products evolve, the process itself must be agile enough to adapt to the new paradigms or be rendered obsolete along with the products. We must stand ready to refactor the processes themselves when exigencies arise for which the current process makes no provision.

Core values must still remain intact.  It's one thing to sit at 10000 feet and remark on how cleverly the little ants organize themselves, but when you hit the ground the goal remains the same and it is the same handful of questions that need answering.

  • What got built? 
  • Is it what the client wants? 
  • Did we test it?
  • Was it on time and on budget?
Whatever processes are employed in organizing your resources, they must contribute directly to getting it done or they need to be changed.  If they are designed to create a perception of progress without contributing to the final product, they must be jettisoned as a waste of valuable energy.

Dwight D. Eisenhower is alleged to have said, "In preparing for battle I have always found that plans are useless, but planning is indispensable." If he did, it is a resounding early endorsement of Agile methodology.  He is reminding us that it is not about the procedure, it is about the results, and procedures should be jettisoned if a more direct path to those results is discovered.



Thursday, 16 May 2013

The Rivers of Babylon

The Tao gave birth to machine language. Machine language gave birth to the assembler.
The assembler gave birth to the compiler. Now there are ten thousand languages.
Each language has its purpose, however humble. Each language expresses the Yin and Yang of software. 
Each language has its place within the Tao.
But do not program in COBOL if you can avoid it.



The universe of computer programming languages is in a constant state of evolution. Paradigms once considered in vogue are exhausted and shelved, only for new paradigms to be embraced according to the latest schools of thought.

One of the questions I am most frequently asked, whether in interviews or in social situations where I am foolish enough to mention what I do for a living, is: what is my favourite language? I have no short answer for that.

For the non-technical inquisitor, I try to offer a metaphor.  What does a carpenter say when asked what his favourite tool is? Does he tell you that he loves his hammer best?  That his skill-saw is suitable to his every task? I certainly hope not. If he does, I'm sure not going to hire him.  I would expect him to tell me that his hammer is best when there are nails to drive, the saw when there are beams to trim.  The typical carpenter owns a large tool chest, both wide and deep, packed with all sorts of contrivances, familiar and obscure.  It includes the tape measure which is always at hand, the level and the plumb line which are needed every day; it also includes some odd-looking devices, rarely used, but for particular types of work exactly the right thing.

It has long been established that any Turing-complete machine can emulate any other Turing-complete machine.  As pretty much all modern programming languages are Turing-complete, any given language can perform every possible task that can be performed by any other language.  In practice there are some limitations to the application of this theory, but it is essentially correct. So it is possible for a programmer to tackle anything equipped with nothing more than one programming language, a single tool for every task.

Does this mean that a serious programmer should be content with the first wooden hammer they learn how to swing? Perhaps we could answer that if we take a moment to reflect on why the current proliferation of programming languages exists.

The invention of assembler should need no explanation; directly inputting machine code is monotonous and error-prone.  The interests of both efficiency and sanity were best served by the creation of a symbolic language for addressing the machine. Then we had a human-readable (so it was held) means for instructing our machines. Theoretically, as assembler is Turing-complete (it has to be or we would never have been able to implement any of our higher-level languages upon it), it could be used to implement any computable task but there are several obvious limitations.

Assembler is very much bound to the specific processing unit for which it is designed.  As different processors may perform the same operation in vastly different ways, there is nothing portable about it. And as the complexity of requirements began to grow at an exponential rate (as it continues to do to this day, thanks again Mr. Moore), building the applications then in demand proved extremely expensive in terms of man-hours.

The greatest challenge in writing assembler is unfair to describe as a limitation or a weakness, as it speaks to its very nature: assembler requires the human programmer to translate the logic of any given problem down to a mechanical level that the machine can follow.

Humans thrive on abstractions.  We pore over maps, devour books, drink in media, all of them abstractions; all of them carefully produced representations of someone else's abstract thought. All efforts at notation made over the many thousands of years of our civilized history have been efforts to record thought in a permanent form to be revisited or reused.  This is as true for the granary records the Sumerians pressed into clay as for the musical notation of European medieval monks.

Mathematicians have been taking advantage of this for quite some time.  Much of the history of mathematics is the history of mathematical notation.  Ken Iverson's essay Notation As A Tool Of Thought demonstrates how our ability to think is enhanced and empowered by the use of notations which are able to encapsulate powerful concepts into unambiguous representations. Ideas which were once unwieldy can be reduced to symbols and become the bricks of a grander edifice. So proceeds much of our scientific and cultural history.

It was quickly seen (anticipated, in fact) that it would be far more convenient to create high-level notations more akin to patterns of human thought than to the machine's mechanical dialect.  If such a notation is clear, unambiguous and sufficiently complete as to be able to describe a domain problem fully, then it should be possible to implement it as a tool to automate that solution.

As patterns of thought are many and diverse, so it is with the notations devised to express them. In the 1970s, we find RPG, APL, C (and yacc/lex) and Smalltalk all well established, with Scheme and awk gaining attention. The 1980s brought C++, Objective-C and Perl, each with passionate advocates and adherents, before the 90s came along and blew up the scene. Java, JavaScript, PHP, Python, Ruby and Lua have become serious contenders in all kinds of software development and particularly dominate the web. Each of these languages sports features which make it compelling and highly suitable for particular tasks.

This is not to say that each generation throws out the languages and paradigms of the previous decade. The best tools evolve over time, often in useful directions, attaining the maturity that invites investment. According to the TIOBE Programming Community Index, C continues to be the single most commonly used language in the development of software, showing a slight increase over this time last year (as of May 2013). It is followed closely by Java, with a group of C variants further down (Objective-C, C++, C#), before we find PHP at #6 with around 5.8% of the market.  Python, Ruby and Perl all come in at less than 5% before dropping off into little boutiques of Groovy, XSL, R, J/APL and many others, all of them doing useful work with the best tools available for the task.

Recommending which languages are best used for new software projects depends very much on the domain.  One cannot answer that question without first analyzing the requirements, determining the problem to be solved and selecting a language which is well suited to describing that problem.

Language adherents often gather themselves into homogeneous camps, eschewing all others, confident that their tools can deliver anything that any other language or tool set can deliver.  What that misses is the education gained from seeing a familiar task undertaken in a completely unfamiliar way.  It stimulates the mind and forces you to consider new possibilities. It will teach you to stretch familiar environments in ways that you had never previously considered.