Friday, March 31, 2006

Can or Should You Measure Software Development Productivity?

Lots of businesses are trying to measure their productivity these days. In the software development space, I can tell you it is being attempted in many ways, and I would have to say that, at this point, all of them are flawed.

IT leaders, in many cases, are being pressured to measure their development productivity and to show improvement over time, just like other measurements in business. Business leaders measure financials and processes that are repetitive in nature, and they think that IT should be able to do the same thing. This is where the rub is, and where the fundamental misunderstanding of what software development is comes in. It is even where things like CMM go wrong.

If you have a manufacturing process, that process is repeated with the exact same steps over and over again. These processes can easily be measured with throughput metrics and quality metrics based on component, assembly, and final-product testing. They produce the exact same thing every time. Other business processes, such as picking product in a warehouse, are similar in nature. A person is instructed, usually through some software-based system, where to go in the warehouse, what to get, and where to put it. Once again, this is a highly repeatable process that produces the same outcome (at least when done correctly) every time. You can easily measure it without affecting the outcome (or at least without affecting it in a negative fashion).

That final phrase, "and not affect the outcome," is very important where software development is concerned. The word "repeatable" is also very important to understand. Is software development repeatable? Can you measure it without affecting the outcome?

The answers to those two questions are the key to whether you can even try to measure software development productivity. Let's take the first question and see where it leads us.

Whenever you embark on software development, you always have new requirements. Based on those requirements, the logic has to be different from what has been done before. Based on the people working on the project, their personal experiences and knowledge dictate the implementation choices that are made, even if the requirements have been implemented by someone else in another project. External forces, like technology changes in surrounding hardware and software, as well as things like corporate standards and direction changes, all influence how the software solution will be implemented. I am just scratching the surface of the myriad forces that act on a software development project. When you take these things into account, along with many others within the typical software development project, how can anyone expect that this is a repeatable process? I don't believe that you can! By definition, software development is a creative act by human beings, hence the outcome will be different each and every time it is done.

To draw an analogy, take the same person, sit them down in front of the same scene, and ask them to draw or paint it. Then have them do it again. Would it be the same the next time? The answer is obvious: it wouldn't. Now expand this analogy to include multiple artists working on the same work of art, each taking some manageable piece of the work. Now what would you expect? Expand it again to periodically swap out some of the artists for different artists (a common occurrence on software development projects), and what would you expect? I think the answer is clear: at no point would you end up with the exact same drawing or painting. Continue to extend the analogy to include new scene elements every time (like new system requirements) that have to be incorporated into the same drawing or painting, and you start to get a good picture of what ongoing development on the same code base involves. Go even further, and have some of the scene elements be in direct conflict with others that used to be in the scene. I think you are probably getting the picture (pun intended)!

That leads us to the second question: can you measure it without affecting the outcome? The most prominent measure of software development productivity is function points. Since software development is a human creative act, all humans being measured will want to understand how the measure is calculated and what is expected of them. Function point counting counts things (e.g., number of unique interfaces, number of database tables, etc.): the more of those things you produce, the higher the count, and supposedly the more functionality you have produced for your business. Do you see the inherent conflict?

To truly drive productivity in a process where human beings are engaged in a creative act, you should be striving to do less, not more! The least amount of work that meets the requirements of the system should be the goal. As soon as you put a system in place that incents people to do more, you end up with a much more complicated implementation. Suppose you know you are being measured on the number of things you produce, and you are confronted with a design decision: one option has fewer of what is being counted, and one has more. Which one do you think will be chosen? So the answer is clear: these types of measurements incent the inverse of the behavior you are looking for, and they certainly affect the outcome. Not only do they affect the outcome, but they affect it in a negative way, especially where quality is concerned. More software in a system creates more opportunities for errors in the implementation. It is a given that quality will suffer, and probably suffer dramatically. Of course, all the project manager types out there are thinking, "We will just do more testing, or better testing." Now you have just elongated your process, and once again you are going in the opposite direction you intended.
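To make the conflict concrete, here is a minimal, purely illustrative sketch. It is not real function point analysis; the artifact categories, weights, and the two designs are hypothetical, chosen only to show how any count-based metric behaves when two designs satisfy the same requirement:

```python
# Purely illustrative sketch -- NOT real function point analysis.
# The artifact categories and weights are hypothetical, chosen only to
# show how a count-based "productivity" metric behaves.

def count_based_score(design, weights=None):
    """Score a design by counting its artifacts, the way naive
    count-based metrics do."""
    weights = weights or {"interfaces": 4, "tables": 7, "screens": 3}
    return sum(weights[kind] * n for kind, n in design.items())

# Two hypothetical designs that satisfy the same business requirement.
lean_design    = {"interfaces": 2, "tables": 3, "screens": 1}  # simplest thing that works
bloated_design = {"interfaces": 6, "tables": 9, "screens": 4}  # same behavior, more moving parts

print(count_based_score(lean_design))     # 32
print(count_based_score(bloated_design))  # 99
```

Under a metric like this, the team that shipped the bloated design looks roughly three times as "productive," even though it delivered the same capability with far more moving parts to test and maintain.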

One final question: how does counting things like interfaces, tables, etc. equate to the value that a software system has in the first place? It doesn't have anything to do with it at all! You could create a huge software system with lots of function points, but if your business doesn't find any value in it, then it is not worth anything! It is what the software enables for your business that makes it valuable or not. The center of what we measure should be value to our businesses, nothing more and nothing less!

2 comments:

Anonymous said...

Andy -
Todd Sherman here. Excellent write-up; I agree strongly with what you've written. My team continues to move to Agile with Scrum. Because I'm being forced to report on the team's development "productivity", I'm using backlog items completed per sprint. This definitely has its flaws! But, assuming that you've worked closely with your business owner to appropriately identify and prioritize the backlog items, and assuming that over time the work associated with each backlog item averages out to be about the same, in my mind this measure begins to get towards something resembling value delivered to the business. Of course, the team is now incented to create very small (as it relates to the work required) backlog items. :-) Which isn't necessarily a horrible thing.

Andrig T Miller said...

Certainly, having the small backlog items is good. I think anything you can do to encourage small incremental change, which speeds time to value is a good thing. That type of work environment, where you are consistently delivering small incremental change smoothes the IRR (Internal Rate of Return) or ROI (Return on Investment), and will actually reduce the cost (less overhead with small incremental development) and increase the return rates, because you no longer wait an extended period of time to start getting a return (even though it is a smaller return initially).