Will there be a convergence of approaches to software development?

As a senior consultant of the Cutter Consortium, I am asked every December to make my predictions for the following year. The request gives me an opportunity to reflect on current trends in development and project them forward.

Israel Gat and I have been discussing the big picture of development for a while now. We see a convergence of the prescriptive, follow-the-recipe methods, such as the various flavors of Agile and Lean, and the more foundational, principle-based approaches, such as those described by Donald Reinertsen. We wrote a joint prediction: going forward, we expect to progress toward the right balance between the follow-the-recipe and apply-the-principles approaches to development. You can find our prediction here. Please take a look. We will expand on this prediction at the upcoming Cutter Summit.

Continuing the ‘value of software’ discussion

In a previous post, I promised to continue the discussion of measuring the value of software. I have had several discussions over the last weeks. Many practitioners measure the value of software using intangibles such as strategic alignment. This permits staff to set priorities by agreeing that some software has more alignment than other software and so should be prioritized first. That approach has its strengths and weaknesses. A strength is that it is ‘math free’, and so easily consumed day to day. The weakness is that it is ‘math free’, and so does not provide objective, comparable measures for comparing investment decisions.

The challenge for software and systems is that the future costs and benefits are uncertain. So there are a couple of ways to proceed. One way is to treat the investment as an option and apply option pricing models. Option pricing models have some real advantages, but they take some advanced math. They are especially difficult when there are multiple sources of uncertainty (aka volatility). I think we are years away from using such methods generally (although I know of one example of an investment bank using such methods).
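To make the idea concrete, here is a minimal sketch of option-style reasoning under a single source of uncertainty, using the standard Black-Scholes formula. The numbers are hypothetical, and none of this comes from the investment bank example; it only illustrates why one source of volatility is tractable and several are not.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def real_option_value(pv_benefits, cost, years, rate, volatility):
    """Black-Scholes value of the option to defer an investment:
    treat the present value of the benefits as the 'stock' and the
    investment cost as the 'strike'. Handles only a single source
    of uncertainty, which is exactly the limitation noted above."""
    d1 = (log(pv_benefits / cost)
          + (rate + volatility ** 2 / 2) * years) / (volatility * sqrt(years))
    d2 = d1 - volatility * sqrt(years)
    N = NormalDist().cdf  # standard normal CDF
    return pv_benefits * N(d1) - cost * exp(-rate * years) * N(d2)

# Hypothetical numbers: the option to invest $1.0M within two years in
# software whose benefits are worth $1.2M today, at a 5% risk-free rate
# and 40% volatility in the benefit estimate.
value = real_option_value(1.2e6, 1.0e6, 2, 0.05, 0.40)
```

Even this simplest case requires comfort with log-normal models and the normal CDF, which is the "advanced math" barrier I have in mind.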

A simpler measure of an investment that has a flow of future costs and benefits is a risk-aware version of the net present value (NPV) equation. It is how the conversation with the funding stakeholders begins. One accounts for the uncertainties by treating the future values in the NPV probabilistically and using Monte Carlo simulation. This uses some math, but not beyond the skill level of our field. I describe this approach here. (There is also this article.)
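As a minimal sketch of what the risk-aware NPV might look like (the distributions and figures below are hypothetical, not taken from the articles cited):

```python
import random

def risk_aware_npv(benefit_samplers, cost_samplers, rate, trials=20_000):
    """Monte Carlo NPV: each period's benefit and cost is drawn from a
    probability distribution rather than treated as a known number.
    Returns the mean NPV and the probability that the NPV is negative."""
    npvs = []
    for _ in range(trials):
        npv = 0.0
        for t, (benefit, cost) in enumerate(zip(benefit_samplers, cost_samplers), start=1):
            npv += (benefit() - cost()) / (1 + rate) ** t
        npvs.append(npv)
    mean = sum(npvs) / trials
    p_loss = sum(v < 0 for v in npvs) / trials
    return mean, p_loss

random.seed(1)
# Hypothetical 3-year investment, figures in $K: benefits are quite
# uncertain (triangular distribution), costs fairly well known (normal).
benefits = [lambda: random.triangular(200, 600, 400)] * 3
costs = [lambda: random.gauss(250, 25)] * 3
mean_npv, p_loss = risk_aware_npv(benefits, costs, rate=0.10)
```

The point of the simulation is the second output: not just an expected NPV, but the probability the investment loses money, which is what a risk-aware funding conversation needs.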

Technical Debt, Technical Liability, and Moral Hazard

I just had a good conversation with Zadia Codabux, a graduate student at Mississippi State University and IBM Graduate Fellow. I am her IBM mentor. Her PhD research is on technical debt. We are trying to make sense of the various perspectives across the industry on what exactly technical debt is.

As I have mentioned, a common definition of technical debt is that it is a measure, in some unit, of the deficiencies of the code that may need to be addressed in the future. A colleague (nameless since I do not want to take the risk of misquoting him) suggests code deficiencies do not become debt until there is a commitment to address the deficiencies.

While that insight makes sense, I think that it misses a key point: creating deficiencies raises the probability of having to make the commitment to fix the code. Ignoring the fact that bad coding adds to the likelihood of having to make the commitment is a path to ruin. Eventually the commitment will have to be made, and by then the cost may seriously damage the organization. The cost of addressing the defect grows over time.

What I have been calling technical liability, in lieu of any better name, is a probabilistic view of the distribution of costs that might be assumed. Only by reasoning about the probability can one understand whether the investment in reducing the deficiencies makes good economic sense.
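A minimal sketch of the liability view, with entirely hypothetical numbers: each year there is some chance the deficiency forces a commitment to fix, and the fix gets more expensive the longer it is deferred.

```python
import random

def expected_liability(p_fix_per_year, cost_today, growth, horizon, rate, trials=50_000):
    """Monte Carlo view of 'technical liability': each year the deficiency
    may force a commitment to fix (probability p_fix_per_year); the cost
    of the fix grows each year it is deferred. Returns the expected
    discounted cost of carrying the deficiency."""
    total = 0.0
    for _ in range(trials):
        for year in range(1, horizon + 1):
            if random.random() < p_fix_per_year:
                cost = cost_today * (1 + growth) ** year
                total += cost / (1 + rate) ** year
                break  # once the commitment is made, the liability is realized
    return total / trials

random.seed(7)
# Hypothetical: 20% chance each year of being forced to fix, a $50K fix
# today, cost growing 30% per year, 10-year horizon, 10% discount rate.
liability = expected_liability(0.20, 50_000, 0.30, horizon=10, rate=0.10)
```

Note that even with only a 20% chance per year of being forced to act, the expected discounted liability in this sketch comes out well above the $50K it would cost to fix the deficiency today, because the cost growth outpaces the discount rate. That is the path-to-ruin argument in numbers.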

As Ms. Codabux points out, one source of technical debt is that the person coding is motivated to get the code done quickly, perhaps cutting corners, and is likely not to be the one who assumes the debt. In other words, he or she creates the risk of someone else having to make the commitment. Getting rewarded for creating risk someone else will assume is called moral hazard. It is how banks that are too big to fail make money: they take risks counting on the government to bail them out. I think moral hazard is something we should address as a community.

Flow measures for Software and Systems

In a previous post, I wrote about lean analytics. Over the last few weeks I have written a long article on how to specify and instrument product flow measures for software and systems in the context of DevOps. Today, that paper was published on the IBM developerWorks site. In that paper I make a few key points.

  • DevOps is lean principles applied to business processes that include software.
  • Unlike manufacturing, software does not have consistency of artifact.
  • Software is best managed as an artifact-centric business process.

I apply these observations to show how to instrument, for software, the product flow measures found in value stream maps. These are the measures needed to implement Donald Reinertsen’s flow principles.
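As one small illustration of the kind of flow measure involved (Little’s Law relating work in process, throughput, and cycle time; this particular sketch is mine, not taken from the paper):

```python
def average_cycle_time(avg_wip, avg_throughput_per_week):
    """Little's Law: average cycle time = average WIP / average throughput.
    It holds for any stable artifact-centric process, whatever the
    artifacts are, which is what makes it usable for software."""
    return avg_wip / avg_throughput_per_week

# Hypothetical: 30 work items in process, 10 completed per week.
cycle_time_weeks = average_cycle_time(30, 10)
```

The instrumentation challenge is supplying trustworthy WIP and throughput numbers for software artifacts, which is what the article addresses.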

Please take a look at the article. You can post comments on the developerWorks site or here.

What is the Unit of Value of Software?

As promised, I am continuing the discussion of software value.

In the previous posting, I pointed out that software has created immense economic value, but organizations do not measure the value of the software they create.

In this posting, I raise the question, “If one were to measure the value of software, what would be the unit of measurement?” I have repeatedly seen teams try to prioritize software features (or epics or applications) with a value score without agreeing on a unit. They go through brainstorming exercises to prioritize the features by value to agree on the order of delivery. Generally in these meetings, the loudest or most insistent voice prevails since there is no real basis for setting the score. Establishing a robust criterion for prioritization requires a choice of unit and a way to measure software in terms of that unit. Only then can one apply objective criteria for prioritizing the work.

The question then is “what is the unit to use?”

Here is a chain of reasoning: suppose you were to buy instead of build the software. What would you pay? You would reason about the monetary value of the benefits accruing over time from the software and the total cost of ownership. You might discount the future benefits to get to a net present value. The difference between the discounted benefits and costs would be a price you might be willing to pay, measured of course in money. In short, you would pay money. This price is a good surrogate for the value of the software to the business.
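That chain of reasoning is just a discounted cash flow. A minimal sketch, with hypothetical figures:

```python
def willingness_to_pay(benefits, costs, rate):
    """Discounted benefits minus discounted total cost of ownership:
    an upper bound on the price you would rationally pay to buy
    rather than build the software."""
    return sum((b - c) / (1 + rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs), start=1))

# Hypothetical: $400K/year of benefit, $150K/year cost of ownership,
# a 5-year horizon, and a 10% discount rate.
price = willingness_to_pay([400_000] * 5, [150_000] * 5, 0.10)
```

Whatever the inputs, the output is denominated in money, which is the point of the argument.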

To reason about the benefits, you would start by listing them. The benefits might include cost savings such as labor avoidance, the revenue from the sale of an app, the revenue from the delivery of a service, or attracting additional customers resulting in business growth. In each case the benefit is money or easily translatable to money. So in these cases, the unit of value is money. I discuss a similar approach in more detail in this CACM article.

This reasoning should flow down to features. Knowing the monetary value of shipping a feature provides a firmer basis than a value score for backlog prioritization methods like Weighted Shortest Job First (WSJF).
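With a monetary cost of delay in hand, WSJF reduces to a simple ratio. A sketch with a hypothetical backlog (the feature names and figures are invented for illustration):

```python
def wsjf_order(features):
    """Weighted Shortest Job First: rank features by cost of delay
    (dollars lost per week the feature is not shipped) divided by
    job duration (weeks of work), highest ratio first."""
    return sorted(features,
                  key=lambda f: f["cod_per_week"] / f["weeks"],
                  reverse=True)

# Hypothetical backlog with monetary cost of delay.
backlog = [
    {"name": "checkout-redesign", "cod_per_week": 20_000, "weeks": 8},
    {"name": "fraud-alerts", "cod_per_week": 12_000, "weeks": 2},
    {"name": "dark-mode", "cod_per_week": 1_000, "weeks": 1},
]
order = [f["name"] for f in wsjf_order(backlog)]
```

Note that the feature with the largest cost of delay does not come first; the short, urgent job does. A brainstormed value score gives no such defensible ordering.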

A key benefit of assigning a monetary value to the software is that the conversation with the business is much simpler. The value is not vaguely described as somehow supporting some strategy; the value is described in monetary terms that even a CFO would understand.

So, is the unit of value for software money? The answer is often, but not always. Sometimes the benefit of the software is to deliver some public service such as public safety or health. In a military context, the software could enhance force protection. In these cases, the units of value might be lives saved or infections avoided.

One can argue that one could use monetary units of value for all software, but that would entail placing a dollar value on human lives. That can be and often is done, but it is unpalatable to many and may not be necessary for managing priorities.

So the net is that the unit of measure of value for most software is money. On occasion, the unit of value is something like lives saved to which one might not want to assign a monetary value.

Actually assigning monetary value to the software is hard. If it were easy, it would be common practice. I argue that although it is hard, it is not impossible. I will address how in a later posting.


Starting the conversation on value

From my early days in the field, I have been puzzled by an apparent paradox:

There are constant complaints about how immature and out of control software development is compared to other engineering fields. Few remember the NATO conference on the software crisis in 1968 (I don’t). The conference raised an important issue: with the growing capabilities of computers, we need comparable growth in software development. Apparently, the term software engineering was coined at this conference. That said, the recommendations from the conference were naive (essentially waterfall) and I bet did more harm than good.

The handwringing continued. Twenty-five years later, Scientific American published ‘Software’s Chronic Crisis’ (September 1994). Meanwhile, there was the infamous, flawed Standish report from 1995. The Standish organization continues this theme with their annual Chaos Reports (see Scott Ambler’s cogent debunking of the Chaos reports), which reinforce the sense of crisis.

Another source of attack on our industry was the claim that IT in the end delivered no real value. IT was just a sort of utility like electricity, essential but only a necessary cost of doing business. IT therefore should be managed as a cost center. (See Does IT Matter?)

Here is the paradox: throughout all this naysaying and handwringing, somehow software has been the key to huge economic growth. I do not need to share with this blog’s readers the role software plays in smartphones, apps, hybrid cars, the internet, the internet of things, cognitive computing, ….

So our industry must be doing something right!

I have been aware of this paradox for quite a while. One answer I came up with is that software efforts seem to ‘fail’ more often than other engineering efforts because software often takes on more innovative efforts. In fact, if you control for innovation, we do no worse than other engineering disciplines. Look at the cost and schedule overruns for the Boston Big Dig, or the Sydney Opera House. I pick these because the overruns cannot be attributed to software. There are lots of other examples, including the mechanical design of the Boeing 787, the first composite commercial airliner. If you are doing innovative work, you should expect false starts and occasional abandoned efforts. However, it is when software takes on the risk to develop innovation that it has the opportunity to deliver the most value. High risk, high reward.

It also should be noted that we have made good progress in learning how to organize and manage software efforts over the last decades, especially with the adoption of agile and lean methods. Scott’s article captures this well.

So, even though there is not (and probably never was) a software crisis, our industry does have a value problem. It is not that we don’t deliver value; we do. It is that we do not measure the value we deliver. This was a point in a recent CACM article. I have personally seen growing interest among software shops in managing not only the cost, but the value of what they are delivering.

The reason we do not often measure value is that it is hard to do. However, it is not impossible.

I will continue this discussion in the next posting.