I have observed that many lessons from hands-on software development experience can be applied to the management of software development teams, and that the reverse is also true: many lessons from managing software development teams can be applied to hands-on software development. In this blog entry, I want to examine how lessons learned from software performance monitoring can be applied to monitoring software development progress appropriately.
There are several issues that make it difficult to accurately measure software performance, and Brian Goetz has written extensively on some of them. In Dynamic Compilation and Performance Management, Goetz outlines how dynamic compilation (such as that used by Java) complicates performance testing; this is one of the subtler complications. Other, related problems include the fact that performance measurement directly impacts the performance being measured (though many have gone to great lengths to reduce this effect), that performance metrics can be collected in unrealistic situations (different hardware, different load, different actual running software, and so forth), and that performance metrics can be misinterpreted.
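To make the dynamic compilation point concrete, here is a minimal sketch (the workload and iteration counts are my own illustrative assumptions, not taken from Goetz's article) showing why naive timing of Java code is unreliable: the first timed runs include interpretation and just-in-time compilation, while later runs measure the compiled, steady-state code.

```java
// Sketch: naive "cold" timing vs. timing after JIT warm-up.
// The work() method and iteration counts are illustrative assumptions.
public class WarmupBenchmark {

    // A small, deterministic workload for the JIT to optimize.
    public static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (long) i * i;
        }
        return sum;
    }

    // Time a batch of calls in milliseconds.
    public static double timeMillis(int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            work(100_000);
        }
        return (System.nanoTime() - start) / 1e6;
    }

    public static void main(String[] args) {
        // "Cold" measurement: includes interpretation and JIT compilation time.
        double cold = timeMillis(100);

        // Warm-up rounds so the hot code gets compiled before we measure again.
        for (int i = 0; i < 20; i++) {
            timeMillis(100);
        }

        // "Warm" measurement: closer to steady-state, compiled performance.
        double warm = timeMillis(100);

        System.out.printf("cold: %.2f ms, warm: %.2f ms%n", cold, warm);
    }
}
```

Dedicated benchmark harnesses such as JMH exist precisely to handle warm-up (and related pitfalls such as dead-code elimination) so that measurements reflect steady-state behavior rather than compilation artifacts.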
All of these problems that lurk in performance monitoring (monitoring affecting performance itself, unrealistic tests, and misinterpreted metrics) have counterparts in the similar effort to measure software development progress. Just as one must be careful when measuring software performance, measurement of software development progress must be approached carefully as well.
Too much focus on collecting software development metrics can actually slow down the very software development process being measured. Just as using resources to measure software performance impacts that software's performance, measuring development progress has some effect on that progress. In measuring software performance, we have learned to use tools and techniques that reduce the impact of the measurements on the performance itself. We need to approach our software development metrics similarly and ensure that collecting them has only a minimal impact on the development.
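The observer effect described above can be demonstrated directly in code. This is a rough sketch under my own assumed workload and call counts: timing every individual call pays a per-call instrumentation cost that inflates the total, while timing one coarse batch barely disturbs the thing being measured.

```java
// Sketch: the observer effect in performance measurement. Timing every single
// call adds overhead that distorts the total; timing one coarse batch does not.
// The workload and call count are illustrative assumptions.
public class MeasurementOverhead {

    // A small, deterministic unit of work.
    public static long work() {
        long s = 0;
        for (int i = 0; i < 1_000; i++) {
            s += i;
        }
        return s;
    }

    public static void main(String[] args) {
        int calls = 100_000;

        // Fine-grained: a nanoTime() pair around every single call.
        long fineStart = System.nanoTime();
        long fineTimedNanos = 0;
        for (int i = 0; i < calls; i++) {
            long t0 = System.nanoTime();
            work();
            fineTimedNanos += System.nanoTime() - t0;
        }
        long fineTotal = System.nanoTime() - fineStart;

        // Coarse-grained: one nanoTime() pair around the whole batch.
        long coarseStart = System.nanoTime();
        for (int i = 0; i < calls; i++) {
            work();
        }
        long coarseTotal = System.nanoTime() - coarseStart;

        // The fine-grained total includes the cost of the measurement itself.
        System.out.printf("fine-grained total: %d ms, coarse total: %d ms%n",
                fineTotal / 1_000_000, coarseTotal / 1_000_000);
    }
}
```

The same principle applies to development metrics: the heavier the per-task reporting ceremony, the more the measurement distorts the progress it is supposed to observe.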
One way to reduce the effect of software development metrics on the development is to keep the number of requests for metrics down. Another obvious approach is to request only data that is easily provided and does not require significant effort to collect, organize, and present. Many tools are marketed as reducing the impact that metrics collection has on software development, but even these can have a detrimental effect on development progress when used improperly. For example, such tools may require developers to take extra steps or follow extra processes to ensure that their progress is adequately captured. The time spent collecting and preparing reports on the progress of a software development effort can grow very expensive and add significant delays and hurdles to the development itself.
Just as software performance tests are useless or even dangerous (because they lead to bad decisions) when they are run against situations and environments that are not representative of the actual production environment, measuring the wrong things in software development progress can lead to useless and even detrimental results. I blogged previously on how using lines of code as too granular a metric can have negative consequences because of the unintended incentives it creates. Similarly, other poorly chosen metrics can lead developers to make bad decisions as they try to satisfy the metric rather than develop the best code.
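A tiny hypothetical example of why lines of code is so easily gamed: the two methods below are behaviorally identical, yet a raw line count rewards the padded version several times over.

```java
// Sketch: two behaviorally identical methods with very different line counts.
// The methods are hypothetical, purely to illustrate metric gaming.
public class LocMetric {

    // Compact version: the logic fits on one line.
    public static int sumCompact(int[] xs) {
        int s = 0; for (int x : xs) s += x; return s;
    }

    // Padded version: same behavior, but many more "lines of code".
    public static int sumPadded(int[] xs) {
        int s = 0;
        for (int i = 0; i < xs.length; i++) {
            int value = xs[i];
            s = s + value;
        }
        return s;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        // Both print 10; only the line count differs.
        System.out.println(sumCompact(data) + " " + sumPadded(data));
    }
}
```

A developer rewarded on lines of code has every incentive to write the padded version, which is why measuring delivered functionality, however hard to quantify, beats counting what is merely easy to count.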
Finally, misinterpreted performance results can lead to unnecessary optimizations. In the worst cases, these misinterpreted results might even lead to "optimizations" that make the real problem worse. The same can happen when measuring software development progress. Lines of code, number of classes, and similar metrics can be misleading and misinterpreted, and poor decisions based on these inadequate metrics can actually hinder the software development process rather than help it.
While measuring the performance of software and measuring software development progress are both difficult to do properly, we still attempt to measure these things. In fact, as difficult as they are, we do need to measure them. We need our software to perform to certain levels depending on the context and the expectations of its users. Similarly, we need to deliver software by certain agreed dates to meet the expectations of customers and potential customers. The key is to perform both of these measurements carefully: to constantly strive to reduce the impact of the measurement itself on what is being measured, to ensure that we are measuring the appropriate things, and to interpret the metrics carefully.
The negative consequences of overzealous software development metrics collection have been a known problem for some time. In the software development classic The Mythical Man-Month, Frederick P. Brooks, Jr., elaborates on this concept with vivid examples and illustrations clearly drawn from his own experience.
So, why don't we do a better job at this in many cases? Perhaps the most plausible explanation is that it is far more difficult to measure appropriately than to use the easiest measurement techniques that come to mind. It is easier to test our software's performance without concern for minimizing the impact on the performance itself, and it is easier to measure our progress without carefully crafting metrics collection techniques that have minimal impact on the developers.
Similarly, it is easier to test our performance in the first environment available than to put in the extra effort to replicate the production environment and load as accurately as possible. It is also easier to count arbitrary items such as classes or lines of code than to really try to measure delivered functionality, which is not as easy to quantify.
Finally, misinterpretation of performance metrics or development progress metrics tends to happen when we are unwilling to put in the extra effort to really understand why we are seeing the results we see. It is always easier to go with the first explanation that comes to mind than to dig down into the real meaning of the results.
Many of the lessons we have learned from measuring and optimizing software execution performance can be applied to measuring and optimizing software development progress. Unfortunately, the lessons learned on one side don't always seem to be applied to the other.