Name: Precision Estimation
Developers are continually asked to provide an estimate with a high degree of precision, and they are expected to spend only a fixed period of time producing it.
Project managers like estimates. Estimates allow people to plan, and planning is good. “Gold Owners” like estimates because they give them an idea of how much they are going to have to pay.
This isn’t bad. The problem is that estimates carry a certain level of confidence, and that confidence can be expressed in several ways. The most common is a fixed window: e.g. feature X will cost $100,000 +/- 10%. This format is typical for things whose cost is highly predictable.
There is, however, a second aspect to this estimate: how confident you are that the cost will fall inside this window. By implication, you are 100% confident that it will. So you will be equally surprised if feature X costs $110,001 as if it costs $200,000… both values are outside of your “window of confidence”.
This brings up the question of precision vs. accuracy. Precision is related to how tight your window is. Saying that feature X will cost $100,000 +/- 10% is a precise estimate. If feature X costs $109,999, then the estimate is accurate. If it costs $110,001, then the estimate is inaccurate.
Needless to say, increasing the precision of an estimate lowers its accuracy. To increase accuracy, you need to do research: you need to actually prove things. Highly predictable and repeatable tasks require little research. If I'm building a house to a set plan, the only research that has to be done is checking the soil conditions to see what kind of foundation I need. After that, I can give a highly accurate and precise estimate.
Research takes time. The amount of time it takes relates to the novelty and complexity of the task.
When you estimate, you typically break a task down into several chunks. The chunks themselves will be easy to estimate. Then you add them up and say you're done. But this affects both precision and accuracy.
Given a fixed quantity of time, you can end up with an accurate estimate or a precise estimate. The accurate one will be wide. The precise one will be narrow. They will probably both be useless. 🙂
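This trade-off is easy to see in a quick Monte Carlo simulation. The sketch below is illustrative only (the task count, cost figures, and overrun distribution are hypothetical, not from the post): it sums ten chunk estimates whose actual costs vary, then compares a narrow "+/- 10%" window against the wide window that the simulated outcomes actually require.

```python
import random

random.seed(42)

# Hypothetical project: 10 tasks, each nominally $10,000, but the actual
# cost of a task varies between 0.5x and 2.5x its estimate (a skew toward
# overruns, which is an assumption, not data).
def project_cost(n_tasks=10, nominal=10_000):
    return sum(nominal * random.uniform(0.5, 2.5) for _ in range(n_tasks))

totals = sorted(project_cost() for _ in range(10_000))

# The "precise" estimate: nominal total +/- 10%.
nominal_total = 10 * 10_000
lo, hi = nominal_total * 0.9, nominal_total * 1.1
hit_rate = sum(lo <= t <= hi for t in totals) / len(totals)

# The "accurate" estimate: the empirical 5th-95th percentile window.
p5 = totals[int(0.05 * len(totals))]
p95 = totals[int(0.95 * len(totals))]

print(f"+/-10% window hit rate: {hit_rate:.0%}")
print(f"window that actually covers 90% of outcomes: ${p5:,.0f} to ${p95:,.0f}")
```

Under these assumptions the narrow window almost never contains the real cost, while the window that does cover 90% of outcomes is far too wide to be useful for planning: precisely the dilemma described above.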
Express estimates with _both_ accuracy and precision confidence. Rather than saying that feature X will cost $100,000 +/- 10%, say you have 90% confidence that it will. Give some hints as to whether it is likely to be over or under if you’re wrong.
Focus on breaking estimates down into small tasks. Develop these tasks incrementally. As the tasks get built, adjust the remaining estimate. Compare the accuracy of each estimate with the actual value. Learn!
If you cannot give a confident estimate, don’t give one. If the business doesn’t like that, explain that you need more time. Ideally, invest that time by developing the sub-tasks in an incremental fashion.
Pay attention to your “confidence” estimate. If you regularly say you have a 50% confidence in an estimate, then you should be wrong about half the time. If you are only wrong a third of the time, then something is wrong.
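Tracking this is simple bookkeeping: record the confidence you stated alongside whether the estimate held, then compare stated confidence to actual hit rate per bucket. A minimal sketch (the log data below is made up for illustration):

```python
# Hypothetical calibration log: (stated confidence, did the estimate hold?)
log = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.5, True), (0.5, False), (0.5, False), (0.5, True), (0.5, True),
]

def calibration(log):
    """Group outcomes by stated confidence and return the actual hit rate."""
    buckets = {}
    for conf, hit in log:
        buckets.setdefault(conf, []).append(hit)
    return {conf: sum(hits) / len(hits) for conf, hits in buckets.items()}

for conf, rate in sorted(calibration(log).items()):
    print(f"stated {conf:.0%} -> actual {rate:.0%}")
```

If the stated 50% bucket comes out at 60%, you are underconfident in exactly the way the paragraph above warns about.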
4 thoughts on “Estimation Anti-Pattern”
I do know confidence intervals from university lectures, and they certainly make sense when you are working with formulas and the numbers you start with are based on certain statistical data.
But the way that most people estimate software tasks is from their gut/experience. I think that using percentages to express confidence in this context is a lot like “Potemkin villages”: it gives the impression of accuracy where there simply is none. On what arguments would you base the claim that some estimate has a 90% confidence level rather than an 89% confidence level? (Without going the route of some kind of “numerology”, that is.)
I think qualifiers such as “highly confident” or “moderately confident” will do a better job.
Another thing is that below a certain confidence level for (relatively) indivisible tasks, your accuracy window for any subsystem composed of a dozen such tasks will “explode” and cover a very wide range. This will have the effect that only estimates with a high level of confidence will be accepted, which makes the remaining differences even more arbitrary…
Also note that confidence can come in different flavours. In particular the one which often bites us on the butt in this biz is changing requirements.
E.g., if the house is 60% complete, but the owner decides that they now want the pool to be where the garage is, the garage should be moved somewhere else (they *say* they don’t care where, but if you guess wrong there will be heck to pay), it should have a helipad on top… oh, and please replace all the insulation and wiring with fibre optic cable, because my buddy George was talking about how wonderful fibre optic is at our last golf game. And fibre optic sounds like fibreglass, so they must be interchangeable, right?
In that scenario, you would doubt the homeowner’s sanity. If they wanted all that, in the same time frame, without increasing expenditure or resourcing, you would just laugh at them.
Whereas in the Software world, such a scenario is all too common.
So what is our confidence level in the requirements? The more detailed they are, the longer they took to gather, and the more likely they are to have changed in that time; so the more detailed the requirements, the less confident we are in them.
Then there are the companion anti-patterns to this one, such as not refining the estimate, doing the estimate at the start (when you know the least), and holding the developers to the original estimate even if the requirements change.
If the sponsor takes the original estimate and then starts making promises based on that, but doesn’t also actively protect those requirements, then their promises are threatened.
But these are just symptoms of the real problem, which is lack of power by developers.
I am a developer. I know estimation can be beneficial to project management. However, I don’t like to estimate, because no matter how it is perfumed, an estimate is a commitment given by the developer. And it is going to be used to harm the developer.
I do believe managers should be the ones providing the estimate. It is probably the most important foundation of Taylor’s management principles that managers should observe, collect data, and analyze. If that is the case, why don’t managers take the time to review the history of their team’s capacity? They would then have the most accurate estimate in mind.
I tend to disagree with Sencer about using probability language. You can use probability distributions to give a precise picture of your expectation. You can explain that it’s only your judgement you’re displaying and not something that’s verified by thousands of projects. And saying “There’s an 80% chance of completion by date X” is no more a commitment than the weatherman saying there’s an 80% chance of rain. I’ve never seen this approach used in a serious project, but it could be if the customers didn’t object.
Of course, they want a commitment. I’d think that’s where things like late penalties come in. “If my estimates are right, I have an 85% chance to complete by week X for a 14% profit or better, a 10% chance to complete by week Y for a 7% profit, a 5% chance to complete by week Z and break even, and a 5% chance of serious trouble.”
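Reasoning like this reduces to an expected-value calculation. The sketch below is a rough illustration, not the commenter's arithmetic: the probabilities are nudged so they sum to 100% (the quoted figures total 105%), and the -20% margin for the "serious trouble" outcome is an assumption.

```python
# (probability, profit margin) pairs; the 0.80 first probability and the
# -0.20 "serious trouble" loss are assumptions made so the numbers close.
outcomes = [
    (0.80, 0.14),   # complete by week X
    (0.10, 0.07),   # complete by week Y
    (0.05, 0.00),   # complete by week Z, break even
    (0.05, -0.20),  # serious trouble
]

# Probabilities over mutually exclusive outcomes must sum to 1.
assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9

expected_margin = sum(p * m for p, m in outcomes)
print(f"expected profit margin: {expected_margin:.1%}")
```

Presenting the whole distribution this way, rather than a single point estimate, is what makes the "let's look for ways to improve the odds" conversation possible.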
And if they say “But we have to have it by week X” then the obvious response is “OK, let’s look for ways to improve the odds.” You can look for ways to make the specifications easier to meet, or split the project into pieces and give each piece to a separate contractor, with a well-reputed group writing the interface tests. That has its own risks, but when the other contractors aren’t your subcontractors you have a better chance to get your own part complete and tested on time.
But when the customer prefers to go with somebody who’ll make a definite promise he can’t keep, that may wind up hurting the reputation of the industry as a whole. Still, that’s what the customer wants.