Best Foot Forward
There are many approaches a team can take to improve. But before a team even tries to get better, it needs to understand why it would bother. There must be a clear and compelling reason to do something different from what they are doing today, or any attempt to change is doomed to failure. Usually, that ‘why’ is sitting right under their noses, hiding in plain sight.
These reasons can take many forms, such as improved team morale, delivered quality, or increased competitiveness, but one stands out among them as compelling to the people who have the ultimate say over change. If we can show that we are wasting a significant amount of the money we are investing in development, the people at the top will sit up and take notice.
Usually. There have been situations where even extraordinary evidence of massive waste won’t budge someone with other vested interests, but that’s a story for another time.
To come up with measures that indicate the current cost of non-quality, we need to find something in the current approach that is at least reasonably well managed. In most shops, even those that do a pretty good job of analyzing their products, there are enough challenges in dealing with changes in scope to rule out any semblance of requirements measurement as defendable.
With a relatively poor expression of scope up front, the estimates and tasks that would make up any schedule don't fare much better. Plenty of teams either don't work to a schedule, produce a schedule that doesn't reflect what they will actually do, or force all their work to fit the time and budget constraints they have been given. Measurements based on any of this are fantasy.
Tracking of time can be a challenge as well: teams either don't track their time at all, track at the end of the day or week from a rough recollection of what they worked on, or track at the wrong level of granularity to provide any meaningful information. On top of that, time tracking usually draws the strongest resistance of any proposed change in behaviour, even though I have never heard pushback from anyone who has actually looked at where their time goes, even for a day or two.
One thing that does seem to be relatively well managed in most shops, big and small, is defects. Almost every team I work with (though not all, unfortunately) is religious about using a robust defect tracking system, and most developers and testers have been conditioned to capture what they find in the tracking tool and to work down those defects as a means of making progress on the project. For teams that also track new work as tasks in the same system (which makes a great deal of sense for many teams), so much the better.
With a defect tracking system that has been in place for some time, we have a wealth of information we can use to understand how effectively we are working, with just a little analysis. For starters, look at a subset of all the defects logged, perhaps the last 25 that have been closed. You should have at your fingertips who entered each defect, who fixed it, and roughly where in the project lifecycle it was found.
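As a sketch of that first pass, assuming the defects have been exported from the tracker as simple records (the field names and values here are hypothetical, not any particular tool's schema), a quick summary of the sample might look like:

```python
from collections import Counter

# Hypothetical export of recently closed defects; field names are
# assumptions, not any specific tracker's API.
defects = [
    {"id": 101, "reported_by": "qa", "fixed_by": "dev_a", "phase_found": "integration test"},
    {"id": 102, "reported_by": "client", "fixed_by": "dev_b", "phase_found": "production"},
    {"id": 103, "reported_by": "dev_a", "fixed_by": "dev_a", "phase_found": "unit test"},
    # ... the rest of the closed defects would follow
]

# Take the tail of the list as the sample (e.g. the last 25 closed).
sample = defects[-25:]

# Tally where defects were found and who reported them.
by_phase = Counter(d["phase_found"] for d in sample)
by_reporter = Counter(d["reported_by"] for d in sample)

print("Found by phase:", dict(by_phase))
print("Reported by:", dict(by_reporter))
```

Even a tally this crude makes patterns visible, such as how many defects in the sample were reported from outside the team.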
It doesn't hurt to get the developers and testers involved in the analysis at this point to draw out a deeper story. One data point that often emerges is that a number of these defects were actually detected and reported by a client, even if entered by someone internal. That is critical information.
Looking at each defect, try to identify when it was injected into the system. While most are found in unit or integration test (or acceptance test, or by the customer, oh my!), many will have been injected much earlier. These are the ones that cost us the most time, as they often cascade into other problems while they lie latent in the system. Is the defect externally visible? Think of that as a requirements or scope defect. Is it an architectural issue, or one that is pervasive or inconsistent throughout the system? That's probably a design flaw.
You will probably find that these two categories can even represent the lion’s share of the defects you are looking at, a sure sign that there are opportunities for improvement here.
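The triage questions above can be sketched as a simple heuristic. This is only an illustration: the attribute names are assumptions about what your tracker might record, and real triage will always involve judgment.

```python
# Rough heuristic for attributing an injection phase to a defect,
# following the questions in the text. Attribute names are hypothetical.
def injection_phase(defect):
    if defect.get("externally_visible"):
        return "requirements"  # visible scope/behaviour gap
    if defect.get("architectural") or defect.get("pervasive"):
        return "design"        # structural or system-wide flaw
    return "code"              # default: an implementation defect

sample = [
    {"id": 1, "externally_visible": True},
    {"id": 2, "architectural": True},
    {"id": 3},
]
print([injection_phase(d) for d in sample])
# → ['requirements', 'design', 'code']
```

Grouping a defect sample through a function like this is what exposes whether requirements and design defects really do dominate.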
Watch for those that were reported as a problem, but turned out to be a flaw in how the system was tested. These also often point to ambiguities in the requirements, rather than simply being test issues.
On many projects, you are also likely to recall a few really nasty problems: embarrassing ones that the client found and wasn't too happy about, or internal ones that had the team stumped for some time. These benefit from a separate root cause analysis, and will often point to a flaw further upstream, or a step that was skipped in the name of speed (perhaps a skipped peer review or regression test?).
As we start to understand the profile of the defects we are seeing on our projects, we often find that early-stage issues get found much later. It is these areas that we need to shore up to reduce the costs and uncertainty on our projects. Ask yourself what could have been done to find them earlier, and take simple steps to weed these defects out of the system earlier (or prevent them from being injected in the first place).
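The "injected early, found late" pattern is essentially a phase-containment measure. As a minimal sketch, assuming each defect in the sample has been attributed an injection phase and a detection phase (the phase names and sample pairs here are hypothetical):

```python
# Lifecycle phases in order; the names are assumptions about your process.
PHASES = ["requirements", "design", "code", "unit test",
          "integration test", "acceptance test", "production"]

def phases_escaped(injected, found):
    """How many lifecycle phases a defect survived before being found."""
    return PHASES.index(found) - PHASES.index(injected)

# Hypothetical (injected, found) pairs from the defect analysis.
sample = [
    ("requirements", "acceptance test"),
    ("design", "integration test"),
    ("code", "unit test"),
    ("requirements", "production"),
]

# Defects that escaped more than one phase are the expensive, latent ones.
latent = [pair for pair in sample if phases_escaped(*pair) > 1]
print(f"{len(latent)} of {len(sample)} defects escaped more than one phase")
```

The higher that ratio, the stronger the case for shoring up the earlier phases rather than adding still more late-stage testing.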
Significant improvement doesn't require massive changes, but it does require a compelling reason for people to understand the need for change. For many shops, that need can quickly be made apparent by using the most rigorous system that is already in place and part of the culture. You can start your efforts by putting your best foot forward. – JB