I find it surprising how often I end up reinventing the wheel when it comes to project estimation. I suspect there are others out there who do the same, so I decided it was past time I standardised my personal approach to project estimation in a model spreadsheet. Approaches and standards for estimation vary, but my preferred approach to a second-cut estimate is:
- Work out the set of scenarios that apply to the project being built.
- Estimate each using a complexity rating (High, Medium, Low etc.).
- Adjust this according to how complex the project is likely to be.
- Use this to work out the number of man-days the project is likely to take, based on relevant experience and best guesses.
- Map this to an effort estimate, and hence to a likely team size and duration.
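The steps above can be sketched in code. This is only an illustration: the complexity-to-effort mapping, the adjustment factor, and the productive-days figure are hypothetical placeholders, not the actual weights from the spreadsheet model.

```python
# Hypothetical base effort, in man-days, per complexity rating
# (placeholder values, not the spreadsheet's real weights).
BASE_EFFORT = {"Low": 2, "Medium": 5, "High": 10}

def estimate_man_days(scenarios, complexity_factor=1.0):
    """scenarios: list of (name, complexity_rating) pairs.
    complexity_factor adjusts the raw total for how complex the
    project as a whole is likely to be."""
    raw = sum(BASE_EFFORT[rating] for _, rating in scenarios)
    return raw * complexity_factor

def duration_months(man_days, team_size, productive_days_per_month=18):
    """Map an effort estimate to a likely duration for a given team size."""
    return man_days / (team_size * productive_days_per_month)

effort = estimate_man_days(
    [("login", "Low"), ("billing", "High"), ("reporting", "Medium")],
    complexity_factor=1.2,
)
# 17 raw man-days, adjusted upward by 20% for project complexity.
print(effort)
print(duration_months(effort, team_size=2))
```

Trying different team sizes against the same effort figure is one way to trade duration against headcount, which is roughly what the last step of the list does.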
It is this approach that I have implemented as a spreadsheet model for future and general use. The spreadsheet is available for download here:
Over a series of posts I will document how to use the model, how it calculates its figures, the reasoning behind its structure, and slowly work through an example of its use. The model is intended to be easy to use and widely applicable. Please feel free to use it and give me feedback. Leave a comment, call me on +44 7887 536 083 or email me at email@example.com.
This is very interesting. This got me thinking about MY ways of doing such estimations…
A thought about steps #2-4.
When I do *technical* project estimations, I always try to come up with a rating based on three things: complexity, priority and severity, and then adjust them according to the perceived number of unknowns, etc. I call my rating a “risk indicator”. Thus far, nothing new… and I think I understand why you omitted the latter two from your spreadsheet.
But looking at your spreadsheet, I realized how one could “scientifically” calculate the actual risk! I think this could really help with #3-4, because I suspect you are doing this calculation outside the spreadsheet and then using that mental indicator to tweak #3-4. I might be reinventing a well-known truth here, but nonetheless:
Risk is NOT a COMBINATION of complexity, priority and severity.
Risk is a MISALIGNMENT / DISCREPANCY between complexity, priority and severity!
Just think about it. Risk grows not due to your complexity, priority and severity being high but rather, it grows in cases where e.g. your severity is high but your priority is low – and you end up doing things in a technologically wrong (priority) order.
If you were to assign complexity, priority and severity ratings to technological scenarios (on the same scale), just identifying the misalignments between the ratings of a single scenario would give you a good idea of the RISK involved.
The same is probably true about business scenarios but since in business, priority (of initiative) and severity (of impact) seem to go together, I would probably stick with just priority and complexity…
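The misalignment idea above can be sketched as a small function. The 1-5 scale and the choice of max-minus-min as the discrepancy measure are my own assumptions, not something from the comment.

```python
def risk_indicator(complexity, priority, severity):
    """All three ratings on the same scale, e.g. 1 (low) to 5 (high).
    Risk is measured as the spread between the ratings, not their sum:
    aligned ratings (all high or all low) give low risk; a wide spread
    (e.g. high severity but low priority) gives high risk."""
    ratings = (complexity, priority, severity)
    return max(ratings) - min(ratings)

# The risky case described above: high severity but low priority.
print(risk_indicator(complexity=2, priority=1, severity=5))  # 4

# Everything rated high, but aligned: low risk by this measure.
print(risk_indicator(complexity=5, priority=5, severity=5))  # 0
```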
A bit more about risk.
Risk indicator is good for 4 things:
1. Influencing project feasibility rating
2. Influencing project scope (e.g. descope the most risky artifacts)
3. Influencing prioritization (e.g. do risky things first)
4. Translating risks into time. Specifically, it is useful to translate risks into a contingency buffer because no matter what you do, some risks will get realized and will end up costing time.
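Point 4 can be sketched as a buffer calculation. The conversion rate (buffer percentage per risk point) is a hypothetical parameter for illustration, not something taken from the original model.

```python
def contingency_days(scenarios, buffer_per_risk_point=0.10):
    """scenarios: list of (estimated_days, risk_indicator) pairs.
    Each risk point adds a fixed percentage of that scenario's own
    estimate to the project-wide contingency buffer, so the buffer
    grows with both the size and the riskiness of the work."""
    return sum(days * risk * buffer_per_risk_point for days, risk in scenarios)

# 10 days of work at risk 4 plus 5 days at risk 0:
# the buffer comes entirely from the risky scenario.
print(contingency_days([(10, 4), (5, 0)]))  # 4.0
```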
Hmmm… I think I might’ve got carried away when saying that risk is NOT a combination of complexity, priority and severity. Risk is also that.
Also, it occurred to me that calculating risk warrants a separate spreadsheet similar to the one you created. One of the inputs into this sheet would be the scenarios. One of the outputs would be, in a way, some of the weights and scales in your project estimation spreadsheet.
The sort of parameters one would manipulate in such a spreadsheet would be:
- complexity, priority and severity
- competition factor (could be quite complex & elaborate, e.g. based on Gartner quadrant + market share)
- likelihood of scenario occurrence / scenario depreciation
- integration / handover / adoption problems
- staff & skill-set parameters
- perceived amount of unknowns
I think the output of such calculations should ultimately be the “percent certainty of success” (I won’t garbage up your blog with what the success criteria are, I promise), where 0% is doom and 100% is a sure thing. It would be useful to get such output per scenario, per logical group of scenarios and per project.
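The per-scenario, per-group and per-project roll-up might look like the sketch below. Treating scenario certainties as independent probabilities and multiplying them is my own assumption about how the roll-up could work, not something stated above.

```python
from math import prod

def project_certainty(groups):
    """groups: dict mapping a logical group name to a list of per-scenario
    certainties, each between 0.0 (doom) and 1.0 (a sure thing).
    Returns (certainty per group, overall project certainty)."""
    per_group = {name: prod(cs) for name, cs in groups.items()}
    return per_group, prod(per_group.values())

per_group, overall = project_certainty({
    "auth": [0.9, 0.95],
    "billing": [0.8],
})
print(per_group, round(overall, 3))
```

One consequence of multiplying is that a single near-zero scenario drags the whole project down, which matches the intuition that one doomed scenario can sink an otherwise healthy plan.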
Putting my pen down now…