I ran into this article recently:
The promise of cloud computing is the ability to scale to meet any level of demand almost instantly, saving money along the way by only ever paying for the capacity that is required now. To do this it uses a combination of virtualisation and grid-based clustering technology. The potential is enormous. Going further, by using a platform such as GigaSpaces it is possible to improve performance and scalability to the point where thousands of transactions a second are possible, based on in-memory database technology.
I was recently asked to comment on the concepts of project assurance, solution assurance and turn-around. The following is a summary of my response.
Gartner is trying to help us all out with our strategy again, or maybe this is an early “next year prediction” article. Either way, Gartner’s top 10 strategic technologies have been published: here. To be honest, my biggest surprise is that there isn’t anything newer in here. They seem to have selected only technologies that are relatively mature, and some of them are what I would consider positively mainstream. I suppose that in recommending strategy to major corporate customers they are not going to select technology on the bleeding edge. This selection is more “look what you should have been doing this year” than “get on this bandwagon now”.
I find it surprising how often I end up reinventing the wheel when it comes to project estimation. I suspect that there are others out there who end up doing the same, and so I decided that it was past time that I standardised my personal approach to project estimation using a model spreadsheet. Approaches and standards for estimation vary, but my preferred approach to a second cut estimate is:
- Work out the set of scenarios that apply to the project being built.
- Estimate each using a complexity rating (High, Medium, Low etc.).
- Adjust this according to how complex the project is likely to be.
- Use this to work out the number of man-days that the project is likely to take, based on appropriate experience and best guesses.
- Map this to an effort estimate, and hence to a likely team size and duration.
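The steps above can be sketched in code. This is a minimal illustration of the approach, not the spreadsheet model itself; all of the complexity weightings, project multipliers and names below are illustrative assumptions.

```python
# Sketch of the second-cut estimation approach: rate each scenario,
# adjust for overall project complexity, then derive effort and duration.
# All figures are illustrative assumptions, not the model's real values.

# Assumed man-days per scenario complexity rating
COMPLEXITY_DAYS = {"Low": 2.0, "Medium": 5.0, "High": 10.0}

# Assumed project-level complexity multipliers
PROJECT_FACTOR = {"Simple": 0.8, "Typical": 1.0, "Complex": 1.3}

def estimate(scenarios, project_complexity="Typical", team_size=4):
    """Return (effort in man-days, duration in working weeks) for a
    list of (scenario name, complexity rating) pairs."""
    base = sum(COMPLEXITY_DAYS[rating] for _, rating in scenarios)
    effort = base * PROJECT_FACTOR[project_complexity]
    weeks = effort / (team_size * 5)  # 5 working days per week
    return effort, weeks

effort, weeks = estimate(
    [("Login", "Low"), ("Checkout", "High"), ("Reporting", "Medium")],
    project_complexity="Complex",
    team_size=3,
)
print(f"{effort:.1f} man-days, ~{weeks:.1f} weeks with a team of 3")
```

The point of structuring it this way is that the judgement calls (the weightings and multipliers) are isolated from the arithmetic, so they can be tuned from experience without reworking the calculation.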
It is this approach that I have implemented as a spreadsheet model for future, and general, use. The spreadsheet is available for download here:
Over a series of posts I will document the usage of the model, how it calculates its figures, the reasoning behind its structure, and slowly work through an example of its use. The model is intended to be easy to use and widely applicable. Please feel free to use it, and give me feedback about it. Leave a comment, call me on +44 7887 536 083 or email me at firstname.lastname@example.org.
I have recently read with interest various Gartner hype-cycle reports. There is an example here, and here is Wikipedia’s comment on it. The idea is fairly simple, and based on the adjustment trend that new technology tends to go through on its way to mainstream adoption. When a technology is launched it tends to gain a reputation, undeserved given its current capability, as the best thing that will save the world. As this continues people realise it actually has some limitations, and so it loses credibility rapidly. The story then continues as people realise that it is useful, even with its limitations, and so its reputation builds again. To anyone that has been around a bit, none of this is news.
Single point failures
From: David Lacey’s IT Security Blog | September 02, 2009
Indicates that the author believes that the use of Cloud Computing style technologies is introducing significant single points of failure. His evidence for this is a failure in Google’s Gmail. Surely such failures are not inevitable, as this technology is designed to reduce them through multiple redundancy. The failures do happen, but I would propose they are not inevitable. They are more a symptom of pressures to circumvent due development process, thereby reducing cost and time to market.
To my mind, the question really is whether Google accepted that there was a significant risk of system outage in order to maintain the reduced cost of Gmail delivery. If so, was this an appropriate choice for the market?
A problem the industry has struggled with for years is the level of project failures, whether this is defined as cost overruns, schedule slips or cancelled projects. It has been recognised across many disciplines that a way to move forward on this is to develop a standard approach for doing things that is relevant to your business, and then to become highly skilled at replicating it in different environments. This has come into IT in many different guises, such as CMMI and ITIL, but it can be surprising how often organisations are not applying these ideas to their projects.
Performance issues can seem very mysterious when first encountered, and often the only way to solve them is to use a very methodical approach. I have touched on this before, but it seems relevant to add a little more detail.
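One concrete form of that methodical approach is to measure before guessing: instrument each stage of the work, then rank the stages by cost so effort goes where the time actually went. A minimal sketch (stage names and workload are purely illustrative):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, results):
    """Record how long the enclosed block takes, in seconds."""
    start = time.perf_counter()
    try:
        yield
    finally:
        results[label] = time.perf_counter() - start

results = {}
with timed("parse", results):
    data = [int(x) for x in ("1 " * 100_000).split()]
with timed("sum", results):
    total = sum(data)

# Rank the stages by cost: the most expensive stage is the one to attack first
for label, seconds in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{label}: {seconds:.4f}s")
```

Crude as it is, this kind of stage-by-stage timing turns a mysterious slowdown into a ranked list of suspects, which is the essence of the methodical approach.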
It is interesting that the issue of how green IT really is has come to the fore. I was recently sent a notification of the existence of the following site:
Among its articles is this one about green IT:
The article concentrates on surveys about the state of green IT, and the lack of trust that IT purchasers have in the “Green” claims of suppliers. The rush to greenwash products is leaving many, me included, sceptical that the benefits extend to the environment and the purchaser’s bank balance, rather than just the supplier’s.