
Truly, One Size Does Not Fit All

(This is a duplicate of a posting I made to the Cutter Blog.)

Software development is not really a single discipline. What comes under the overall field is a combination of disciplines that address a range of problems:

  1. Maintaining and evolving fielded code
  2. Adding significant new features to an existing application or platform
  3. Building an entirely new application or platform

These differ in the amount of innovation required and the amount of information available for delivering a quality system. Teams working on type 1 problems generally are not required to invent anything, and they have detailed information on the required code changes and the available technology. Teams addressing type 2 efforts may need to be innovative in building out and integrating the new capability; they usually have incomplete information about the problem to be addressed and the technology to be used, and so should do some experimentation. Teams addressing type 3 efforts should plan for a great deal of learning and invention.

The situation is illustrated in Figure 1 below. The x-axis is innovation and the y-axis is completeness of information, and hence the ability to make predictions.


Figure 1 The Development Spectrum

In the face of this, it is hardly surprising that a plethora of software development techniques has been adopted over the decades. Each is effective for some part of the innovation spectrum; none is the answer for the whole spectrum. An example of mapping some key processes to the spectrum is shown in Figure 2 below.


Figure 2 The Development Landscape

A team may be asked to address all three kinds of software efforts, but that is rare. Every software organization has a unique mix of types. So one size does not fit all.

A blog posting is far too short to elaborate on this idea. However, it is among the thoughts behind the Integrative Framework under development by Israel Gat and myself. This framework will be discussed in some detail during our workshop at the upcoming Cutter Summit, where we will cover how to select and adapt the appropriate set of techniques for your development mix.

If you cannot make the Summit and are interested in your own workshop, please contact sales@cutter.com.


Finding Team Velocity Using Bayes Nets

One of the challenges of sprint planning is settling on a good choice of velocity. One simple but imprecise approach uses burn-up charts. A clear explanation of dealing with the uncertainty of velocity using burn-up charts can be found late in this video. This technique may not be good enough, especially in the early days of a project or if the project never settles into a nearly constant velocity. Here is an explanation of applying some math to get a better answer if you need it.
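To make the simple approach concrete, here is a minimal sketch of the burn-up-style projection: take the best and worst velocities seen so far and turn the remaining backlog into an optimistic and a pessimistic number of sprints. The sprint figures below are made up purely for illustration; this is not code from the video.

```python
# A minimal sketch of the simple burn-up approach: project a completion range
# from the lowest and highest velocities observed so far. All numbers are
# hypothetical.

import math

completed_per_sprint = [18, 22, 15, 25, 20]   # story points finished each sprint (made up)
remaining_points = 120                        # work left in the backlog (made up)

worst = min(completed_per_sprint)
best = max(completed_per_sprint)

sprints_best_case = math.ceil(remaining_points / best)
sprints_worst_case = math.ceil(remaining_points / worst)

print(f"Optimistic finish: {sprints_best_case} more sprints")
print(f"Pessimistic finish: {sprints_worst_case} more sprints")
```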

Those who have followed me through the years may know that I have been an advocate of using Bayesian reasoning for development. The key thought is that interesting development problems deal with uncertain quantities, such as team velocity or time to completion, and that Bayesian analysis is the way to reason about uncertain quantities. This last statement is controversial in some circles (for a history, click here). In any case, without getting too deep into that discussion, Bayes is a very practical answer to many problems that arise in development.

I have built an example using the AgenaRisk Bayesian net tool (a free version can be found here). The example is described in this document, Bayes for Velocity. The AgenaRisk file used in the document can be found here. You can download the free version of AgenaRisk to play with the model yourself. The model is based on a parameter-learning example file delivered with the tool.
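For readers who would like to see the mechanics without opening the tool, here is a minimal sketch, in plain Python rather than AgenaRisk, of the parameter-learning idea: discretize the candidate mean velocity onto a grid, start with a flat prior, and multiply in a Gaussian likelihood for each observed sprint. The sprint data and the assumed noise level are hypothetical, and the sketch is only an approximation of what the Bayesian net model does.

```python
# A minimal sketch of Bayesian learning of mean team velocity (not the AgenaRisk
# model itself). Velocity is discretized onto a grid; each observed sprint
# multiplies the prior by a Gaussian likelihood and the result is renormalized.

import numpy as np

velocity_grid = np.arange(5.0, 40.0, 0.5)     # candidate mean velocities (points/sprint)
posterior = np.ones_like(velocity_grid)       # flat prior: no strong opinion to start
posterior /= posterior.sum()

observed_sprints = [18, 22, 15, 25, 20]       # hypothetical sprint outcomes
sprint_noise_sd = 4.0                         # assumed sprint-to-sprint variation

for v_obs in observed_sprints:
    likelihood = np.exp(-0.5 * ((v_obs - velocity_grid) / sprint_noise_sd) ** 2)
    posterior *= likelihood
    posterior /= posterior.sum()              # renormalize after each update

mean_velocity = float(np.sum(velocity_grid * posterior))
cumulative = np.cumsum(posterior)
lower = velocity_grid[np.searchsorted(cumulative, 0.05)]
upper = velocity_grid[np.searchsorted(cumulative, 0.95)]

print(f"Posterior mean velocity: {mean_velocity:.1f} points/sprint")
print(f"90% credible interval: {lower:.1f} to {upper:.1f}")
```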

I hope to write a comparison of this method with the simpler methods in a future blog post.

As always, comments welcome!


Will there be a convergence of approaches to software development?

As a senior consultant with the Cutter Consortium, I am asked every December to make my predictions for the following year. The request gives me an opportunity to reflect on current trends in development and project them forward.

Israel Gat and I have been discussing the big picture of development for a while now. We see a convergence of the prescriptive, follow-the-recipe methods, such as the various flavors of Agile and Lean, and the more foundational, principle-based approaches, such as those described by Donald Reinertsen. We wrote a joint prediction: going forward, we expect progress toward the right balance between the follow-the-recipe and apply-the-principles approaches to development. You can find our prediction here. Please take a look. We will expand on this prediction at the upcoming Cutter Summit.


Continuing the ‘value of software’ discussion

In a previous post, I promised to continue the discussion of measuring the value of software, and I have had several discussions over the last few weeks. Many practitioners measure the value of software using intangibles such as strategic alignment. This permits staff to set priorities by agreeing that some software has more alignment than other software and so should come first. That approach has its strengths and weaknesses. A strength is that it is ‘math-free’, and so easily consumable. The weakness is that it is ‘math-free’, and so does not provide objective measures for comparing investment decisions.

The challenge for software and systems is that the future costs and benefits are uncertain. There are a couple of ways to proceed. One way is to treat the investment as an option and apply option pricing models. Option pricing models have some real advantages, but they take some advanced math, and they are especially difficult when there are multiple sources of uncertainty (aka volatility). I think we are years away from general use of such methods (although I know of one example of an investment bank using them).
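Purely to show what that advanced math looks like in the simplest case, here is a minimal sketch that values a deferrable investment as a European call option using Black-Scholes with a single source of volatility. All figures are hypothetical, and real cases with multiple uncertainties need considerably more machinery.

```python
# A minimal sketch: treat a deferrable software investment as a European call
# option and value it with Black-Scholes (one source of volatility only).
# All figures are hypothetical.

from math import exp, log, sqrt
from scipy.stats import norm

pv_benefits = 1_000.0   # S: present value of expected benefits ($K)
investment = 1_200.0    # K: cost to build if we go ahead ($K)
years = 2.0             # T: how long the decision can be deferred
risk_free = 0.03        # r: risk-free rate
volatility = 0.45       # sigma: uncertainty of the benefit estimate

d1 = (log(pv_benefits / investment) + (risk_free + 0.5 * volatility**2) * years) / (volatility * sqrt(years))
d2 = d1 - volatility * sqrt(years)

option_value = pv_benefits * norm.cdf(d1) - investment * exp(-risk_free * years) * norm.cdf(d2)

# Plain NPV says "don't build" (benefits < cost), yet the option to wait has value.
print(f"Naive NPV: {pv_benefits - investment:.0f} K")
print(f"Value of the option to defer: {option_value:.0f} K")
```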

A simpler measure of an investment that has a flow of future costs and benefits is a risk-aware version of the net present value (NPV) equation, which is how the conversation with the funding stakeholders begins. One accounts for the uncertainties by treating the future values in the NPV probabilistically and using Monte Carlo simulation. This uses some math, but not beyond the skill level of our field. I describe this approach here. (There is also this article.)
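To make this concrete, here is a minimal sketch of a risk-aware NPV calculation: each year's costs and benefits are drawn from a triangular distribution rather than treated as point estimates, discounted, and summed, so the result is a distribution of NPVs instead of a single figure. The numbers are made up for illustration and are not taken from the linked material.

```python
# A minimal sketch of a risk-aware NPV: yearly costs and benefits are sampled
# from triangular distributions, discounted, and summed, giving a distribution
# of NPVs. All figures are hypothetical.

import numpy as np

rng = np.random.default_rng(42)
discount_rate = 0.10
n_trials = 100_000
years = 5

# (low, most likely, high) estimates per year, in $K -- made up for illustration
benefit_estimates = [(50, 80, 120)] * years
cost_estimates = [(30, 40, 60)] * years
initial_investment = 150

npv_samples = np.full(n_trials, -initial_investment, dtype=float)
for year in range(1, years + 1):
    benefits = rng.triangular(*benefit_estimates[year - 1], size=n_trials)
    costs = rng.triangular(*cost_estimates[year - 1], size=n_trials)
    npv_samples += (benefits - costs) / (1 + discount_rate) ** year

print(f"Expected NPV: {npv_samples.mean():.0f} K")
print(f"Probability NPV is negative: {(npv_samples < 0).mean():.1%}")
print(f"5th-95th percentile: {np.percentile(npv_samples, 5):.0f} to {np.percentile(npv_samples, 95):.0f} K")
```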


Technical Debt, Technical Liability, and Moral Hazard

I just had a good conversation with Zadia Codabux, a graduate student at Mississippi State University and an IBM Graduate Fellow. I am her IBM mentor. Her PhD research is on technical debt. We are trying to make sense of the industry's various perspectives on what exactly technical debt is.

As I have mentioned, a common definition of technical debt is that it is a measure, in some unit, of the deficiencies of the code that may need to be addressed in the future. A colleague (nameless, since I do not want to take the risk of misquoting him) suggests that code deficiencies do not become debt until there is a commitment to address them.

While that insight makes sense, I think it misses a key point: creating deficiencies raises the probability of having to make the commitment to fix the code. Ignoring the fact that bad coding adds to the likelihood of having to make the commitment is a path to ruin. Eventually the commitment will have to be made, and by then the cost may seriously damage the organization. The cost of addressing the deficiencies grows over time.

What I have been calling technical liability, in lieu of a better name, is a probabilistic view of the distribution of costs that might be assumed. Only by reasoning about the probabilities can one understand whether the investment in reducing the deficiencies makes good economic sense.
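To illustrate the kind of reasoning I mean, here is a minimal sketch with entirely made-up numbers: each year a deficiency goes unfixed, its remediation cost grows and there is some chance the fix becomes unavoidable. Monte Carlo sampling then gives the distribution of costs, which is the liability.

```python
# A minimal sketch of "technical liability" as a probability-weighted cost
# distribution. Each year the deficiency goes unfixed, the remediation cost
# grows and there is some chance the fix becomes unavoidable. All numbers
# are hypothetical.

import numpy as np

rng = np.random.default_rng(7)
n_trials = 20_000
horizon_years = 5

initial_fix_cost = 20.0      # cost to fix now, in person-days (made up)
cost_growth = 1.5            # remediation cost multiplier per year of neglect
p_forced_fix = 0.25          # yearly probability the fix becomes unavoidable

incurred_costs = np.zeros(n_trials)
for trial in range(n_trials):
    cost = initial_fix_cost
    for year in range(horizon_years):
        if rng.random() < p_forced_fix:
            incurred_costs[trial] = cost   # forced to pay this year's (grown) cost
            break
        cost *= cost_growth                # deficiency lingers; cost keeps growing
    # if never forced within the horizon, no cost is incurred in this trial

print(f"Expected liability over {horizon_years} years: {incurred_costs.mean():.1f} person-days")
print(f"Chance of paying more than fixing now ({initial_fix_cost:.0f}): "
      f"{(incurred_costs > initial_fix_cost).mean():.1%}")
```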

As Ms. Codabux points out, one source of technical debt is that the person coding is motivated to get the code done quickly, perhaps cutting corners, and is likely not to be the one who assumes the debt. In other words, he or she creates a risk that someone else will have to make the commitment. Getting rewarded for creating risk that someone else will assume is called moral hazard. It is how banks that are too big to fail make money: they take risks counting on the government to bail them out. I think moral hazard is something we should address as a community.