October 15, 2009
As a means of discussing the model I will follow through an example of it being used. To do this there needs to be an example usage scenario. In this case I have chosen the following:
“A banking web site has retail and corporate clients, as well as a set of automated processes that must be completed overnight in the bank’s overnight batch window. The site has a standard Java web architecture, with the batch processes being initiated by a batch process at the application server. The purpose behind the model is to examine the capacity required in the major system components, and to ensure the ongoing capability of the host systems for the site.”
If you have followed the performance model series (here), you will note that this is exactly the same scenario. I have chosen it in order to compare and contrast the application of the performance and project estimation models, as there are clear differences in approach that need to be considered. For those who like to skip to the end, I have provided a copy of the final estimation example, filled out, here:
Estimation project estimate
September 22, 2009
I find it surprising how often I end up reinventing the wheel when it comes to project estimation. I suspect there are others out there who do the same, so I decided it was past time that I standardised my personal approach to project estimation in a model spreadsheet. Approaches and standards for estimation vary, but my preferred approach to a second-cut estimate is:
- Work out the set of scenarios that apply to the project being built.
- Estimate each using a complexity rating (High, Medium, Low etc.).
- Adjust this according to how complex the project is likely to be.
- Use this to work out the number of man-days that the project is likely to take, based on appropriate experience and best guesses.
- Map this to an effort estimate, and hence to a likely team size and duration.
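The arithmetic behind the steps above can be sketched in a few lines. This is a minimal illustration, not the model spreadsheet itself: the complexity weights, adjustment factor and team size are all made-up figures for the example.

```python
# Illustrative man-days per scenario at each complexity rating.
# These weights are assumptions, not values from the model spreadsheet.
COMPLEXITY_DAYS = {"Low": 3, "Medium": 8, "High": 20}

def estimate_project(scenarios, project_factor=1.0, team_size=4):
    """Turn rated scenarios into an effort estimate and a rough duration.

    scenarios      -- list of complexity ratings, one per scenario
    project_factor -- overall adjustment for how complex the project is
    team_size      -- people available, used to derive elapsed duration
    """
    raw_days = sum(COMPLEXITY_DAYS[rating] for rating in scenarios)
    effort_days = raw_days * project_factor          # adjusted man-days
    duration_days = effort_days / team_size          # elapsed working days
    return effort_days, duration_days

effort, duration = estimate_project(
    ["Low", "Medium", "Medium", "High"], project_factor=1.2, team_size=4)
print(f"Effort: {effort:.0f} man-days, duration: {duration:.1f} days")
```

In practice the spreadsheet replaces the hard-coded weights with calibrated figures, but the flow from scenario ratings through adjustment to effort and duration is the same.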
It is this approach that I have implemented as a spreadsheet model for future, and general, use. The spreadsheet is available for download here:
Project Estimation Model
Over a series of posts I will document the usage of the model, how it calculates its figures, the reason behind its structure, and slowly work through an example of the usage of the model. The model is intended to be easy to use and widely applicable. Please feel free to use it, and give me feedback about it. Leave a comment, call me on +44 7887 536 083 or email me at email@example.com.
September 21, 2009
I have recently read with interest various Gartner hype-cycle reports. There is an example here, and here is Wikipedia’s comment on it. The idea is fairly simple, and based on the adjustment trend that new technologies tend to go through on the way to mainstream adoption. When a technology first emerges it tends to gain a reputation, undeserved relative to its current capability, as the thing that will save the world. As this continues people realise it actually has some limitations, and so it loses credibility rapidly. The story then continues as people realise that it is useful even with its limitations, and so its reputation builds again. To anyone who has been around a while, none of this is news.
September 9, 2009
Single point failures
From: David Lacey’s IT Security Blog | September 02, 2009
The post indicates that the author believes the use of Cloud Computing style technologies is introducing significant single points of failure. His evidence for this is a failure in Google’s Gmail. Surely such failures are not inevitable, as this technology is designed to reduce them through multiple redundancy. The failures do happen, but I would propose they are not inevitable; they are more a symptom of pressures to circumvent due development process in order to reduce cost and time to market.
To my mind, the real question is whether Google accepted a significant risk of system outage in order to maintain the reduced cost of Gmail delivery. If so, was this an appropriate choice for the market?
June 15, 2008
A problem the industry has struggled with for years is the level of project failures, whether defined as cost overruns, schedule slips or cancelled projects. It has been recognised across many disciplines that a way forward is to develop a standard approach for doing things that is relevant to your business, and then to become highly skilled at replicating it in different environments. This has come into IT in many guises, such as CMMI and ITIL, but it can be surprising how often organisations do not apply these ideas to their projects.
May 15, 2008
Performance issues can seem very mysterious when first encountered, and often the only way to solve them is to use a very methodical approach. I have touched on this before, but it seems relevant to add a little more detail.
April 15, 2008
It is interesting that the question of just how green IT can be considered to be is coming to the fore. I was recently sent a notification of the existence of the following site:
Among its articles is this one about the green IT:
The article concentrates on surveys about the state of green IT, and the lack of trust that IT purchasers have in the “Green” claims of suppliers. The rush to greenwash products is leaving many, me included, sceptical that the benefits extend to the environment and the purchasers’ bank balances, rather than just to the suppliers.
March 16, 2008
Having defined the key system functions and the user population, it is now necessary to define how the users place a functional demand on the system. (The online version of this article has more detail again.) The first part of this definition is to lay out how the usage of the system varies over time, so the relative usage of the system needs to be defined. The initial definitions are on an intra-hour, hourly, daily, weekly and monthly basis. These figures effectively have no units and simply provide a relative level of usage, so any metrics that are available to calibrate this data can be used. As an example, the estimated percentage utilisation of the system for the time period could be provided. Alternatively, there may be historical data available from a production system.
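The idea of unitless relative profiles can be made concrete with a small sketch. The profile values and the multiplicative way of combining them below are illustrative assumptions for the example, not figures or structure taken from the model itself.

```python
# Illustrative relative usage profiles (unitless; only ratios matter).
# Entries are trimmed to a few representative periods for brevity.
hourly = {9: 1.0, 12: 0.6, 14: 0.8, 20: 0.2}     # hour of day
daily = {"Mon": 1.0, "Wed": 0.9, "Sat": 0.3}     # day of week
monthly = {"Jan": 0.8, "Jun": 1.0, "Dec": 1.2}   # month of year

def relative_demand(hour, day, month):
    """Combine the profiles into a single relative demand figure.

    Assumes the profiles are independent and multiply together,
    which is one simple way of composing relative levels.
    """
    return hourly[hour] * daily[day] * monthly[month]

# Compare a busy period with a quiet one; only the ratio is
# meaningful until the figures are calibrated against real data.
peak = relative_demand(9, "Mon", "Dec")
quiet = relative_demand(20, "Sat", "Jan")
print(f"Peak-to-quiet ratio: {peak / quiet:.0f}")  # → 25
```

Calibration then consists of pinning one of these relative figures to a measured quantity, such as an observed percentage utilisation, so the rest of the profile scales accordingly.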
March 15, 2008
I find it interesting that both Google and Yahoo are getting involved in supercomputers – and hiring or loaning out the results to others. The following article makes the point:
Yahoo! outsources! India’s! giant! supercomputer!
This is a HP/Yahoo initiative that seems to be a match for a recent IBM/Google move.
March 15, 2008
It is interesting that in many IT organisations “Strategy” has a bad reputation. In one organisation that I worked with there was a comment made that anything strategic would be removed the next year, whereas a tactical solution would still be there in thirty years.