Hunting the Elusive Defendable Estimate

May 11, 2003
Filed under: Process, Project management 

Software estimation appears to be a difficult thing to do well. This is not because software is inherently different from anything else we need to estimate, but because we have been seduced into thinking that complex approaches and involved analysis somehow make our estimates more defendable. What often happens is that we make a simple activity difficult, fail to focus on the important elements, and end up disappointed in the results.

Often, at the early stages of a project, where we have little insight into the actual activities that will take place (or even who will be performing them), we try to estimate the project by constructing a detailed network. After this laborious exercise, we will have a precise indication of effort, duration, and cost that has little to do with reality, and no understanding of the risk and uncertainty associated with the project. A pretty Gantt chart, not a defendable estimate.

With a little more insight, we eschew bottom-up approaches for top-down methods, acknowledging that we do not have detailed information to work from. With a sound body of literature supporting approaches like COCOMO II and the Putnam parametric model, often supplemented by Monte Carlo simulation, there is a tendency to assume that the estimate produced is defendable because of the technique. Alas, with little emphasis on the right information going in the front end, and in many cases little understanding of what is going on under the hood, the estimates produced can appear correct but be just as difficult to defend as they are to refute. “I guess it looks OK” is not a strong position to work from.
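To make that concrete, here is a minimal Monte Carlo sketch (illustrative Python, not COCOMO II or the Putnam model; the size and productivity ranges are invented). The percentiles it prints look authoritative, but they are driven entirely by the guessed input ranges:

```python
# A minimal Monte Carlo effort simulation. The triangular input ranges
# below are invented guesses; the output is only as good as they are.
import random

def simulate_effort(trials=10000):
    samples = []
    for _ in range(trials):
        size_kloc = random.triangular(20, 80, 40)        # guessed size (KLOC)
        pm_per_kloc = random.triangular(1.5, 4.0, 2.5)   # guessed person-months/KLOC
        samples.append(size_kloc * pm_per_kloc)
    return sorted(samples)

runs = simulate_effort()
p10, p50, p90 = (runs[len(runs) * k // 10] for k in (1, 5, 9))
print(f"10th/50th/90th percentile: {p10:.0f} / {p50:.0f} / {p90:.0f} person-months")
```

If the input ranges are windage, the tidy percentiles are windage too: the simulation quantifies the uncertainty you fed it, nothing more.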

Identifying the size of what you are estimating is an important step in the right direction, but there are caveats here as well. While IFPUG will suggest that Function Point counters armed with a good specification can achieve high accuracy, those results do not extend to the uninitiated. Other approaches to sizing may be more applicable to what you are building, but the bottom line is that you need consistent application of the technique and a sound body of data from your own experience to support it.
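For instance, grounding whatever size measure you choose in your own history is straightforward; a minimal sketch (all figures invented) of calibrating a delivery rate from completed work and applying it to new work sized the same way:

```python
# Calibrate a delivery rate from completed projects (invented figures),
# then apply it to a new piece of work sized with the same technique.
completed = [(120, 900), (80, 650), (200, 1500)]   # (size units, actual hours)

total_size = sum(size for size, _ in completed)
total_hours = sum(hours for _, hours in completed)
rate = total_hours / total_size                    # hours per size unit

new_size = 150
print(f"Calibrated rate: {rate:.1f} hours/unit")
print(f"Estimate for {new_size} units: {rate * new_size:.0f} hours")
```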

Neither well-established models and techniques nor detailed information automatically makes a defendable estimate, just as power tools do not automatically produce fine Victorian furniture.

What makes a good estimate? A key thing to understand about early estimates is that the uncertainty is more important than the initial line in the sand, something few of us in software acknowledge. When I recently asked my daughter how many blades of grass there were in our lawn, she quickly and authoritatively said “1000” (indicating that she is destined for a career in software). Being clear about the huge uncertainty would have made the estimate defendable – “there could be anywhere between 1,000 and 10,000,000 or more blades of grass in the lawn” is a reasonable estimate. Knowing the huge source of uncertainty allows us to do a little analysis and quickly improve the estimate – with close to 4000 square feet of lawn, the answer was clearly low by at least several orders of magnitude.
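That little analysis is nothing more than multiplication against an assumed density; a quick sketch (the blades-per-square-foot figures are my assumption, purely illustrative):

```python
# Back-of-envelope bounds for the lawn question (density figures assumed).
lawn_sqft = 4000
low_density, high_density = 500, 3000              # assumed blades per sq ft
print(f"{lawn_sqft * low_density:,} to {lawn_sqft * high_density:,} blades")
# -> 2,000,000 to 12,000,000: the confident "1000" was off by three
#    to four orders of magnitude.
```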

A good estimate is defendable if the size of the product is identified in reasonable terms that make sense for the application. Without serious experience, estimating Lines of Code for a substantial application can be meaningless, so stick to what makes sense. If I am building a fence and estimate that it will require 9 posts with 8 panels, I’ve got a good quantification of size. I can estimate how long it will take for each element (and how much it will cost), and I can easily refine my estimate as I progress and generate historical data while building that fence – I don’t have to wait years for a substantial base of data.
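That refinement loop is simple arithmetic; a sketch of the fence estimate with hypothetical per-element figures:

```python
# Initial fence estimate from element counts, then a revision once
# actuals exist. The hours-per-element figures are hypothetical.
posts, panels = 9, 8
est_post_hours, est_panel_hours = 1.5, 1.0         # initial guesses

initial = posts * est_post_hours + panels * est_panel_hours
print(f"Initial estimate: {initial:.1f} hours")    # 21.5 hours

# After setting the first three posts, recalibrate from actuals:
actual_first_three = 6.0                           # hypothetical actual hours
post_rate = actual_first_three / 3                 # 2.0 hours per post observed
revised = actual_first_three + (posts - 3) * post_rate + panels * est_panel_hours
print(f"Revised estimate: {revised:.1f} hours")    # 26.0 hours
```

Three posts in, the estimate has already absorbed real data – exactly the historical base described above, gathered in hours rather than years.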

An estimate is defendable if it is clear how it was achieved. If the estimate simply came from windage (or a guesstimate, or a WAG, or whatever sugar-coated term you like for an undefendable number), that information itself tells us how much legitimacy to grant the numbers, and we should expect large uncertainty. If established models were used, we can check the underlying data for validity and applicability. If it was achieved by taking the business targets and simply asserting that we can fit all the work into the available time, we can send the estimator back to the drawing board. And as we look back and review our estimates against actuals, we can understand how effective each technique was at generating good estimates.
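One simple way to run that review is to track the magnitude of relative error (MRE) of each estimate against the actual result; a sketch with invented figures:

```python
# Review past estimates against actuals using magnitude of relative
# error, MRE = |actual - estimate| / actual. All figures are invented.
history = [
    ("Project A", 400, 520),     # (name, estimated hours, actual hours)
    ("Project B", 250, 245),
    ("Project C", 800, 1300),
]
for name, estimated, actual in history:
    mre = abs(actual - estimated) / actual
    print(f"{name}: estimated {estimated}, actual {actual}, MRE {mre:.0%}")
```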

A good estimate allows all the stakeholders to understand what went into the estimate, and agree on the uncertainty associated with that estimate. With that, realistic business decisions can be made. If there is any black magic along the way, or if there is a suggestion that you can precisely predict the future, you are in for trouble. Usually in software development, numbers like that are more correctly called wishes – or lies. – JB
