Continuing the ‘value of software’ discussion

In a previous post, I promised to continue the discussion of measuring the value of software. I have had several discussions over the last weeks. Many practitioners measure the value of software using intangibles such as strategic alignment. This permits staff to set priorities by agreeing that some software has more alignment than other software and so should be prioritized first. That approach has its strengths and weaknesses. A strength is that it is ‘math-free’, and so easily consumable day to day. The weakness is that it is ‘math-free’, and so does not provide objective, comparable measures for comparing investment decisions.

The challenge for software and systems is that the future costs and benefits are uncertain. So there are a couple of ways to proceed. One way is to treat the investment as an option and apply option pricing models. Option pricing models have some real advantages, but they take some advanced math. They are especially difficult when there are multiple sources of uncertainty (aka volatility). I think we are years away from general use of such methods (although I know of one example of an investment bank using them).

A simpler measure of an investment that has a flow of future costs and benefits is a risk-aware version of the net present value (NPV) equation. It is how the conversation with the funding stakeholders begins. One accounts for the uncertainties by treating the future values in the NPV probabilistically and using Monte Carlo simulation. This uses some math, but not beyond the skill level of our field. I describe this approach here. (There is also this article.)
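To make the idea concrete, here is a minimal sketch of a risk-aware NPV in Python. All the numbers are illustrative assumptions on my part (triangular yearly benefits, normally distributed yearly costs, an 8% discount rate, a five-year horizon), not figures from any of the articles:

```python
import random

def npv(cash_flows, rate):
    """Discount a series of yearly net cash flows to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def simulate_npv(n_trials=10_000, rate=0.08):
    """Risk-aware NPV: draw each year's uncertain benefit and cost,
    compute the NPV of each sampled future, and summarize the distribution."""
    results = []
    for _ in range(n_trials):
        flows = []
        for year in range(5):
            benefit = random.triangular(80, 150, 110)  # low, high, mode (k$) - assumed
            cost = random.gauss(60, 10)                # mean, std dev (k$) - assumed
            flows.append(benefit - cost)
        results.append(npv(flows, rate))
    results.sort()
    return {
        "mean": sum(results) / n_trials,
        "p10": results[int(0.10 * n_trials)],  # pessimistic tail
        "p90": results[int(0.90 * n_trials)],  # optimistic tail
    }
```

The point of the exercise is the last three lines: instead of a single NPV number, the funding stakeholder sees a distribution, and can ask how bad the pessimistic (p10) case is before committing.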

Technical Debt, Technical Liability, and Moral Hazard.

I just had a good conversation with Zadia Codabux, a graduate student at Mississippi State University and IBM Graduate Fellow. I am her IBM mentor. Her PhD research is on technical debt. We are trying to make sense of the various perspectives across the industry of what exactly is technical debt.

As I have mentioned, a common definition of technical debt is that it is a measure, in some unit, of the deficiencies of the code that may need to be addressed in the future. A colleague (nameless since I do not want to risk misquoting him) suggests that code deficiencies do not become debt until there is a commitment to address them.

While that insight makes sense, I think it misses a key point: creating deficiencies raises the probability of having to make the commitment to fix the code. Ignoring the fact that bad coding adds to the likelihood of having to make the commitment is a path to ruin. Eventually the commitment will have to be made, and by then the cost may seriously damage the organization. The cost of addressing the defect grows over time.

What I have been calling technical liability, in lieu of any better name, is a probabilistic view of the distribution of costs that might be assumed. Only by reasoning about the probabilities can one understand whether the investment in reducing the deficiencies makes good economic sense.
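A minimal sketch of that probabilistic view, with an assumed model of my own: each year there is some probability the deficiency must finally be addressed, and the cost of addressing it grows the longer it is deferred. The per-year probability and growth rate are illustrative parameters, not a claim about any particular codebase:

```python
def expected_liability(p_fix_per_year, base_cost, cost_growth, years):
    """Probability-weighted remediation cost of a code deficiency.

    Each year carries probability p_fix_per_year that the fix is forced;
    the cost of the fix compounds by cost_growth for each year deferred.
    """
    expected = 0.0
    p_survived = 1.0  # probability the fix has not yet been forced
    for year in range(1, years + 1):
        cost = base_cost * (1 + cost_growth) ** (year - 1)
        p_fix_now = p_survived * p_fix_per_year
        expected += p_fix_now * cost
        p_survived *= 1 - p_fix_per_year
    return expected
```

For example, a deficiency costing 10 units to fix today, with cost growing 50% per year of deferral and a 30% chance each year that the fix is forced, carries an expected liability of about 16.6 units over five years; that number, not the 10, is what should be weighed against the cost of fixing it now.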

As Ms. Codabux points out, one source of technical debt is that the person coding is motivated to get the code done quickly, perhaps cutting corners, and is likely not to be the one who assumes the debt. In other words, he or she creates the risk that someone else will have to make the commitment. Getting rewarded for creating risk that someone else will assume is called moral hazard. It is how banks that are too big to fail make money: they take risks counting on the government to bail them out. I think moral hazard is something we should address as a community.

Flow measures for Software and Systems

In a previous post, I wrote about lean analytics. Over the last few weeks I have written a long article on how to specify and instrument product flow measures for software and systems in the context of DevOps. Today, that paper was published on the IBM developerWorks site. In that paper I make a few key points.

  • DevOps is lean principles applied to business processes that include software
  • Unlike manufacturing, software does not have consistency of artifact.
  • Software is best managed as an artifact-centric business process.

I apply these observations to show how to instrument, for software, the product flow measures found in value stream maps. These are the measures needed to implement Donald Reinertsen’s flow principles.
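As a small illustration of what such instrumentation yields, here is a sketch that derives the basic flow measures (cycle time, throughput, and, via Little’s Law, average work in process) from work-item timestamps. The work items and dates are made up for the example; in practice they would come from the tooling:

```python
from datetime import datetime

# Hypothetical work-item records: when each item entered and left the process.
items = [
    {"id": "story-1", "start": datetime(2014, 6, 2), "done": datetime(2014, 6, 9)},
    {"id": "story-2", "start": datetime(2014, 6, 3), "done": datetime(2014, 6, 13)},
    {"id": "story-3", "start": datetime(2014, 6, 5), "done": datetime(2014, 6, 10)},
]

def cycle_time_days(item):
    """Elapsed days from entering the process to completion."""
    return (item["done"] - item["start"]).days

avg_cycle_time = sum(cycle_time_days(i) for i in items) / len(items)

# Throughput: items completed per day over the observation window.
window_days = 11  # June 2 through June 13
throughput = len(items) / window_days

# Little's Law: average WIP = throughput x average cycle time.
avg_wip = throughput * avg_cycle_time
```

Even this toy version shows the shape of the instrumentation: collect start and done events per artifact, and the flow measures fall out of the arithmetic.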

Please take a look at the article. You can post comments on the developerWorks site or here.

What is the Unit of Value of Software?

As promised, I am continuing the discussion of software value.

In the previous posting, I pointed out that software has created immense economic value, but organizations do not measure the value of the software they create.

In this posting, I raise the question, “If one were to measure the value of software, what would be the unit of measurement?” I have repeatedly seen teams try to prioritize software features (or epics or applications) with a value score without agreeing on a unit. They go through brainstorming exercises to score the features by value and agree on the order of delivery. Generally in these meetings, the loudest or most insistent voice prevails, since there is no real basis for setting the score. Establishing a robust criterion for prioritization requires a choice of unit and a way to measure software in terms of that unit. Only then can one apply an objective criterion for prioritizing the work.

The question then is “what is the unit to use?”

Here is a chain of reasoning: Suppose you were to buy instead of build the software; what would you pay? You would reason about the monetary value of the benefits accruing over time from the software and the total cost of ownership. You might discount the future benefits to get a net present value. The difference between the discounted benefits and costs would be a price you might be willing to pay, measured, of course, in money. In short, you would pay money. This price is a good surrogate for the value of the software to the business.
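That chain of reasoning is a few lines of arithmetic. The yearly benefits, costs, and discount rate below are made-up numbers, purely to show the shape of the calculation:

```python
def present_value(flows, rate):
    """Discount future yearly amounts to today's money."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))

benefits = [120, 140, 140]  # hypothetical yearly benefits, k$
costs = [50, 30, 30]        # hypothetical yearly cost of ownership, k$
rate = 0.10                 # assumed discount rate

# The price you might be willing to pay: discounted benefits minus costs.
willingness_to_pay = present_value(benefits, rate) - present_value(costs, rate)
```

The result, about 237 k$ in this made-up case, is the monetary surrogate for the software’s value to the business.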

To reason about the benefits, you would start by listing them. The benefits might include cost savings such as labor avoidance, the revenue from the sale of an app, the revenue from the delivery of a service, or attracting additional customers resulting in business growth. In each case the benefit is money or easily translatable to money. So in these cases, the unit of value is money. I discuss a similar approach in more detail in this CACM article.

This reasoning should flow down to features. Knowing the monetary value of shipping a feature provides a firmer basis than a value score for backlog prioritization methods like Weighted Shortest Job First (WSJF).
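A minimal sketch of WSJF with monetary inputs: the score is the cost of delay divided by job duration, and features are worked in descending score order. The feature names, cost-of-delay figures, and durations are invented for the example:

```python
def wsjf(cost_of_delay_per_week, duration_weeks):
    """WSJF score: cost of delay divided by job duration."""
    return cost_of_delay_per_week / duration_weeks

# Hypothetical features: (name, cost of delay in k$/week, duration in weeks).
features = [
    ("feature-A", 40, 4),   # score 10.0
    ("feature-B", 12, 1),   # score 12.0
    ("feature-C", 30, 6),   # score 5.0
]

# Highest WSJF score first: short jobs with high cost of delay win.
ranked = sorted(features, key=lambda f: wsjf(f[1], f[2]), reverse=True)
```

Because the cost of delay is in dollars per week rather than an argued-over score, the resulting order can be defended to the business on economic grounds.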

A key benefit of assigning a monetary value to the software is that the conversation with the business becomes much simpler. The value is not vaguely described as somehow supporting some strategy; it is described in monetary terms that even a CFO would understand.

So, is the unit of value for software money? The answer is often, but not always. Sometimes the benefit of the software is to deliver some public service such as public safety or health. In a military context, the software could enhance force protection. In these cases, the units of value might be lives saved or reduction of infections.

One can argue that one could use monetary units of value for all software, but that would entail placing a dollar value on human lives. That can be and often is done, but it is unpalatable to many and may not be necessary for managing priorities.

So the net is that the unit of measure of value for most software is money. On occasion, the unit of value is something like lives saved, to which one might not want to assign a monetary value.

Actually assigning monetary value to the software is hard. If it were easy, it would be common practice. I argue that although it is hard, it is not impossible. I will address how in a later posting.


Starting the conversation on value

From my early days in the field, I have been puzzled by an apparent paradox.

There are constant complaints about how immature and out of control software development is compared to other engineering fields. Few remember the NATO conference on the software crisis in 1968 (I don’t). The conference raised an important issue: with the growing capabilities of computers, we need comparable growth in software development. Apparently, the term ‘software engineering’ was coined at this conference. That said, the recommendations from the conference were naive (essentially waterfall), and I bet they did more harm than good.

The handwringing continued. Twenty-five years later, Scientific American published ‘Software’s Chronic Crisis’ (September 1994). Meanwhile, there was the infamous, flawed Standish report from 1995. The Standish organization continues this theme with their annual Chaos Reports (see Scott Ambler’s cogent debunking of the Chaos reports), which reinforce the sense of crisis.

Another source of attack on our industry was the claim that IT in the end delivered no real value. IT was just a sort of utility like electricity: essential, but only a necessary cost of doing business. IT therefore should be managed as a cost center. (See Does IT Matter.)

Here is the paradox: throughout all this naysaying and handwringing, somehow software was able to be the key to huge economic growth. I do not need to share with this blog’s readers the role software plays in smartphones, apps, hybrid cars, the internet, the internet of things, cognitive computing, …

So our industry must be doing something right!

I have been aware of this paradox for quite a while. One answer I came up with is that software efforts seem to ‘fail’ more often than other engineering efforts because software often takes on more innovative work. In fact, if you control for innovation, we do no worse than other engineering disciplines. Look at the cost and schedule overruns for the Boston Big Dig or the Sydney Opera House. I pick these because the overruns cannot be attributed to software. There are lots of other examples, including the mechanical design of the Boeing 787, the first composite commercial airliner. If you are doing innovative work, you should expect false starts and occasional abandoned efforts. However, it is when software takes on the risk of developing innovation that it has the opportunity to deliver the most value. High risk, high reward.

It also should be noted that we have made good progress in learning how to organize and manage software efforts over the last decades, especially with the adoption of agile and lean methods. Scott Ambler’s article captures this well.

So, even though there is not (and probably never was) a software crisis, our industry does have a value problem. It is not that we don’t deliver value; we do. It is that we do not measure the value we deliver. This was a point in a recent CACM article. I have personally seen growing interest among software shops in managing not only the cost, but the value of what they are delivering.

The reason we do not often measure value is that it is hard to do. However, it is not impossible.

I will continue this discussion in the next posting.


I’m back with some updates

I am back to blogging. The run-up to the IBM Innovate Conference has been very time-consuming, and I have just finished and published what is for me a major report for the Cutter Consortium (more on that below). I still have several writing projects underway, but with Innovate next week, I expect to start blogging again much more frequently, at least for a while.

The report for Cutter fleshes out my ongoing conversation on Technical Liability as an extension of the technical debt metaphor. It can be found here. I am grateful to the Cutter leadership, especially Israel Gat, for encouraging me to write the paper. The report builds on the initial idea introduced in my IBM blog. It goes into some detail on how technical liability can be calculated.

Recently, I have been setting IBM Rational’s direction for instrumenting lean measures using some of the ideas in my previous blogs. The key idea is to provide instrumentation to support the implementation of the principles explained in Reinertsen’s The Principles of Product Development Flow: Second Generation Lean Product Development, the ‘must read’ of my previous post. In particular, I have been focused on surfacing the measures to manage highly variable product flow. This topic is close to my heart as it brings together two of my long-term career passions: the team dynamics of software organizations and the practical application of the mathematics of business processes with high variability.

Last year, I was asked to come up with a core set of software operation measures. In response, I started the work on the product flow measures, given the inherent variation of software development. Now, as that work comes together, I have suddenly gotten several requests to once again focus on measuring the value delivered by software organizations, especially in the lean context. So I will start a series of posts on that topic, beginning with the next posting. Please stay tuned.


A must-read.

I rarely recommend business books. I have little patience for the pseudo-novels about how some project overcame challenges by applying the techniques advocated by the authors. That is just me. Others seem to enjoy and learn from such texts. I prefer texts that explain how well-established techniques from other fields can be adapted to our domain. We all have much to learn from adjacent fields.

Recently, a friend and colleague recommended I read Donald Reinertsen’s text The Principles of Product Development Flow: Second Generation Lean Product Development. The Reinertsen text is such a book, drawing lessons from economics, statistics, queue management in telecom, and even how the military organizes to respond to a rapidly changing environment.

The readers of this blog presumably share my interests in applying what can be learned from a variety of fields to understanding the dynamics and economics of software and system development. These have been the topics of this blog.

Briefly, I strongly recommend this book.