Following up on my previous blog, today my colleague, Israel Gat, and I published a new blog on the Cutter website, describing our recent thoughts on the future of development frameworks. Please click here. We will be describing this at length at the upcoming Cutter Summit.
As a senior consultant of the Cutter Consortium, I am asked every December to make my predictions for the following year. The request gives me an opportunity to reflect on current trends in development and project them forward.
Israel Gat and I have been discussing the big picture of development for a while now. We see a convergence of prescriptive, follow-the-recipe methods, such as the various flavors of Agile and Lean, and the more foundational, principle-based approaches, such as those described by Donald Reinertsen. We wrote a joint prediction: Going forward, we expect the field to reach the right balance between the follow-the-recipe and apply-the-principles approaches to development. You can find our prediction here. Please take a look. We will expand on this prediction at the upcoming Cutter Summit.
I have found there is some confusion on this topic. I recently posted a blog on my Cutter Blog to address the confusion.
In a previous post, I promised to continue the discussion of measuring the value of software. I have had several discussions over the last weeks. Many practitioners measure the value of software using intangibles such as strategic alignment. This permits staff to set priorities by agreeing that some software has more alignment than other software and so should be prioritized first. That approach has its strengths and weaknesses. A strength is that it is ‘math-free’, and so easy to apply day to day. The weakness is that it is ‘math-free’, and so does not provide objective, comparable measures for comparing investment decisions.
The challenge for software and systems is that the future costs and benefits are uncertain. So there are a couple of ways to proceed. One way is to treat the investment as an option and apply option pricing models. Option pricing models have some real advantages, but take some advanced math. They are especially difficult when there are multiple sources of uncertainty (aka volatility). I think we are years away from general use of such methods (although I know of one example of an investment bank using them).
A simpler measure of an investment that has a flow of future costs and benefits is a risk-aware version of the net present value (NPV) equation. It is how the conversation with the funding stakeholders begins. One accounts for the uncertainties by treating the future values in the NPV probabilistically and using Monte Carlo simulation. This uses some math, but not beyond the skill level of our field. I describe this approach here. (There is also this article.)
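To make the idea concrete, here is a minimal sketch of a risk-aware NPV calculation. All of the cash flow figures, standard deviations, and the discount rate are hypothetical numbers chosen for illustration; the use of normal distributions for the yearly net flows is also an assumption, not something prescribed by the approach.

```python
import random

def simulate_npv(cash_flow_dists, discount_rate, trials=10_000):
    """Monte Carlo estimate of NPV when each year's net cash flow is uncertain.

    cash_flow_dists: list of (mean, std) tuples, one per year, giving the
    net flow (benefit minus cost) for that year. Year 0 is the initial
    outlay, so its mean is typically negative.
    """
    npvs = []
    for _ in range(trials):
        npv = 0.0
        for year, (mean, std) in enumerate(cash_flow_dists):
            cash_flow = random.gauss(mean, std)       # sample this year's net flow
            npv += cash_flow / (1 + discount_rate) ** year
        npvs.append(npv)
    expected = sum(npvs) / trials
    p_loss = sum(1 for v in npvs if v < 0) / trials    # probability NPV < 0
    return expected, p_loss

# Hypothetical investment: $500k build cost now, uncertain benefits for 3 years.
random.seed(42)  # fixed seed so the run is reproducible
flows = [(-500_000, 50_000), (200_000, 80_000), (250_000, 100_000), (300_000, 120_000)]
expected, p_loss = simulate_npv(flows, discount_rate=0.10)
```

The point of the simulation is the second number: instead of a single NPV, the funding stakeholder sees both an expected value and the probability that the investment loses money.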
I just had a good conversation with Zadia Codabux, a graduate student at Mississippi State University and IBM Graduate Fellow. I am her IBM mentor. Her PhD research is on technical debt. We are trying to make sense of the various perspectives across the industry of what exactly is technical debt.
As I have mentioned, a common definition of technical debt is that it is a measure, in some unit, of the deficiencies of the code that may need to be addressed in the future. A colleague (nameless since I do not want to take the risk of misquoting him) suggests code deficiencies do not become debt until there is a commitment to address the deficiencies.
While that insight makes sense, I think that it misses a key point: Creating deficiencies raises the probability of having to make the commitment to fix the code. Ignoring the fact that bad coding adds to the likelihood of having to make the commitment is a path to ruin. Eventually the commitment will have to be made, and by then the cost may seriously damage the organization. The cost of addressing the defect grows over time.
What I have been calling technical liability, in lieu of any better name, is a probabilistic view of the distribution of costs that might be assumed. Only by reasoning about the probabilities can one understand whether the investment in reducing the deficiencies makes good economic sense.
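A small sketch of this reasoning, with entirely illustrative probabilities and costs (none of these figures come from the post): the liability of a deficiency is the probability-weighted cost of eventually being forced to fix it, with the cost growing the longer the fix is deferred.

```python
# Hypothetical scenarios for one known code deficiency: the probability that
# a fix is forced in a given year, and the (growing) cost of fixing it then.
scenarios = [
    (0.10, 20_000),   # forced to fix in year 1
    (0.25, 50_000),   # year 2: the fix has become more expensive
    (0.40, 120_000),  # year 3: more expensive still
]
p_never = 1 - sum(p for p, _ in scenarios)  # deficiency never forces a fix

# Technical liability as an expected cost over the distribution of outcomes.
expected_liability = sum(p * cost for p, cost in scenarios)

# Compare against the cost of simply fixing the deficiency now.
fix_now = 15_000
worth_fixing_now = fix_now < expected_liability
```

Even this toy calculation shows why ignoring the probabilities is a path to ruin: a $15k fix today avoids an expected $62.5k liability, a comparison that is invisible if one only asks whether a commitment has been made yet.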
As Ms. Codabux points out, one source of technical debt is that the person coding is motivated to get the code done quickly, perhaps cutting corners, and is likely not to assume the debt. In other words, he or she creates the risk of someone else having to make the commitment. Getting rewarded for creating risk someone else will assume is called moral hazard. It is how banks that are too big to fail make money: they take risks counting on the government to bail them out. I think moral hazard is something we should address as a community.
In a previous post, I wrote about lean analytics. Over the last few weeks I have written a long article on how to specify and instrument product flow measures for software and systems in the context of DevOps. Today, that paper was published on the IBM developerWorks site. In that paper I make a few key points:
- DevOps is lean principles applied to business processes that include software.
- Unlike manufacturing, software does not have consistency of artifact.
- Software is best managed as an artifact-centric business process.
I apply these observations to show how to instrument, for software, the product flow measures found in value stream maps. These are the measures needed to implement Donald Reinertsen's flow principles.
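The article itself spells out the measures; as a flavor of the kind of instrumentation involved, here is a minimal sketch that derives three basic flow measures (cycle time, throughput, and average work in process via Little's Law) from work item timestamps. The item names and dates are made up for illustration and are not taken from the paper.

```python
from datetime import date

# Hypothetical work items: (name, start date, finish date).
items = [
    ("feature-1", date(2014, 1, 6),  date(2014, 1, 17)),
    ("feature-2", date(2014, 1, 8),  date(2014, 1, 24)),
    ("feature-3", date(2014, 1, 13), date(2014, 1, 31)),
    ("feature-4", date(2014, 1, 20), date(2014, 2, 7)),
]

# Cycle time: elapsed days from starting an item to finishing it.
cycle_times = [(done - start).days for _, start, done in items]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: items completed per day over the observation window.
window_days = (max(d for _, _, d in items) - min(s for _, s, _ in items)).days
throughput = len(items) / window_days

# Little's Law relates the three: average WIP = throughput x average cycle time.
avg_wip = throughput * avg_cycle_time
```

Because software lacks manufacturing's consistency of artifact, the hard part in practice is not this arithmetic but deciding what counts as a work item and where its start and finish events are captured, which is what the article addresses.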
Please take a look at the article. You can post comments on the developerWorks site or here.
As promised, I am continuing the discussion of software value.
In the previous posting, I pointed out that software has created immense economic value, but organizations do not measure the value of the software they create.
In this posting, I raise the question, “If one were to measure the value of software, what would be the unit of measurement?” I have repeatedly seen teams try to prioritize software features (or epics or applications) with a value score without agreeing on a unit. They go through brainstorming exercises to prioritize the features by value and agree on the order of delivery. Generally in these meetings, the loudest or most insistent voice prevails, since there is no real basis for setting the score. Establishing a robust criterion for prioritization requires a choice of unit and a way to measure software in terms of that unit. Only then can one apply an objective criterion for prioritizing the work.
The question then is “what is the unit to use?”
Here is a chain of reasoning: Suppose you were to buy instead of build the software; what would you pay? You would reason about the monetary value of the benefits accruing over time from the software and the total cost of ownership. You might discount the future benefits to get to a net present value. The difference between the discounted benefits and costs would be a price you might be willing to pay, measured, of course, in money. In short, you would pay money. This price is a good surrogate for the value of the software to the business.
To reason about the benefits, you would start by listing them. The benefits might include cost savings such as labor avoidance, the revenue from the sale of an app, the revenue from the delivery of a service, or attracting additional customers resulting in business growth. In each case the benefit is money or easily translatable to money. So in these cases, the unit of value is money. I discuss a similar approach in more detail in this CACM article.
This reasoning should flow down to features. Knowing the monetary value of shipping a feature provides a firmer basis than a value score for backlog prioritization methods like Weighted Shortest Job First (WSJF).
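A sketch of what WSJF looks like once features carry monetary values rather than scores. The feature names, cost-of-delay figures, and durations are all hypothetical; the formula (cost of delay divided by job duration, highest first) is the standard WSJF ratio.

```python
# WSJF with a monetary cost of delay: each feature carries
# (name, cost of delay in $ lost per week of waiting, duration in weeks).
features = [
    ("checkout-redesign", 40_000, 8),
    ("fraud-alerts",      15_000, 2),
    ("report-export",      6_000, 1),
]

def wsjf(feature):
    """WSJF ratio: dollars of delay cost avoided per week of work."""
    _, cost_of_delay, duration = feature
    return cost_of_delay / duration

# Schedule the highest WSJF ratio first.
schedule = sorted(features, key=wsjf, reverse=True)
```

Notice that the biggest-ticket feature does not automatically go first: the short fraud-alerts job avoids more delay cost per week of work, and the ranking can be defended in dollars rather than by the loudest voice in the room.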
A key benefit of assigning a monetary value to the software is that the conversation with the business is much simpler. The value is not vaguely described as somehow supporting some strategy; it is described in monetary terms that even a CFO would understand.
So, is the unit of value for software money? The answer is often, but not always. Sometimes the benefit of the software is to deliver some public service such as public safety or health. In a military context, the software could enhance force protection. In these cases, the units of value might be lives saved or reduction of infections.
One can argue that one can use monetary units of value for all software, but that would entail placing a dollar value on human lives. That can be and often is done, but it is unpalatable to many and may not be necessary for managing priorities.
So the net is that the unit of measure of value for most software is money. On occasion, the unit of value is something like lives saved to which one might not want to assign a monetary value.
Actually assigning monetary value to the software is hard. If it were easy, it would be common practice. I argue that although it is hard, it is not impossible. I will address how in a later posting.