The case for predicting the future

A new computer system goes into production, starting with a pilot. Load testing is carried out to confirm that live user levels can be supported, and some problems are found. These problems are resolved, perhaps with some extra hardware purchased to ensure acceptable performance at live volumes, and then a full roll out begins. The project team are on site for the first few weeks and resolve the problems that are experienced initially. The system is then handed over to support, along with a development team member, and the project team is disbanded. The support team are left to complete the roll out, and the development team member moves on after a while.

This story is played out, with a few variations, for the majority of new system implementations. I have left out the blood, sweat and tears for brevity, along with the successful implementation memos and the party. I have left that part out because I would suggest that whether the implementation was truly successful won’t really be known for quite a while.

The rest of the system’s life will generally be complex, involving change requests and support calls. From a performance perspective, however, it is all too common for a system to get gradually, or dramatically, slower over its lifetime. Initially this may not be noticeable, but a year or two in, complaints start that the system is slowing down to the point where it is impacting the business. The way the initial project approached and managed the system’s performance, and the monitoring and management techniques used afterwards, dictate how likely this scenario is.

In the “Principles of Capacity Management” document I indicated that the expected volumes of usage need to be examined over time. As systems are used they tend to grow both in data volume and in levels of usage. This means that a system’s capacity requirements will expand over time, as will the stress on its underlying algorithms. I will skip an outline of the Order of algorithms (e-mail us if you would like this included in a future bulletin), but depending on how the system is built this additional volume can cause complete failure. To avoid this it is important to estimate the likely levels of future usage and data volume and build them into the performance testing. Separate “Volume Tests” are therefore needed to check that the system can cope with the data levels predicted for the future.
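As a rough illustration of why the Order of the underlying algorithms matters, the sketch below scales a go-live response time by the growth in algorithmic work as data volumes rise year on year. The growth rate, data volume and response figures are entirely hypothetical assumptions, not measurements from any real system.

```python
# Minimal sketch (hypothetical figures): how algorithm order turns steady
# data growth into very different response-time curves over a system's life.
import math

baseline_rows = 100_000          # assumed data volume at go-live
annual_growth = 0.40             # assumed 40% growth in data volume per year
baseline_response_s = 0.5        # assumed go-live response time in all cases

def projected_response(order, years):
    """Scale the go-live response time by the growth in algorithmic work."""
    n0 = baseline_rows
    n = baseline_rows * (1 + annual_growth) ** years
    if order == "n log n":
        factor = (n * math.log(n)) / (n0 * math.log(n0))
    elif order == "n^2":
        factor = (n ** 2) / (n0 ** 2)
    else:  # linear
        factor = n / n0
    return baseline_response_s * factor

for years in range(0, 6):
    print(f"Year {years}: "
          f"O(n log n) ~ {projected_response('n log n', years):.2f}s, "
          f"O(n^2) ~ {projected_response('n^2', years):.2f}s")
```

The point of the sketch is simply that a function that looks fine at pilot volumes can dominate response times a few years in if its Order is poor.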

The “Generic Performance Model” supports the analysis of a 10 year time window. For a new system, therefore, it can be used to project the likely performance over a 10 year lifetime. For an existing system it is probably more effective to model two to three years of past and current behaviour, with the remainder of the window extending into the future. The model takes as its inputs the expected number of users per year and their behaviour profile. From the changes in volume figures it is then possible to calibrate the model to estimate the increase in resource utilisation over time. The expected architecture is then added in order to estimate the likely response times from the system.
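By way of illustration only, the sketch below shows the kind of year-by-year calculation involved: user numbers and a behaviour profile drive resource utilisation, and utilisation drives an indicative response time. It is not the actual “Generic Performance Model”; the user numbers, transaction rates, hardware capacity and the simple queueing approximation are all hypothetical assumptions.

```python
# Minimal sketch (not the actual "Generic Performance Model") of a year-by-year
# projection: users and their behaviour profile drive CPU utilisation, and a
# simple M/M/1-style approximation turns utilisation into an indicative
# response time. All figures below are hypothetical, for illustration only.

users_per_year = [200, 300, 450, 600, 800, 1000]   # expected users, years 1..6
tx_per_user_per_hour = 12                           # assumed behaviour profile
cpu_seconds_per_tx = 0.8                            # assumed calibration figure
server_cpu_capacity = 2 * 3600                      # 2 cores * 3600 CPU-seconds/hour
service_time_s = 0.25                               # assumed per-transaction service time

for year, users in enumerate(users_per_year, start=1):
    demand = users * tx_per_user_per_hour * cpu_seconds_per_tx
    utilisation = demand / server_cpu_capacity
    if utilisation >= 1.0:
        print(f"Year {year}: utilisation {utilisation:.0%} - capacity exceeded")
    else:
        # Simple queueing approximation: response time climbs sharply as
        # utilisation approaches 100%.
        response = service_time_s / (1 - utilisation)
        print(f"Year {year}: utilisation {utilisation:.0%}, "
              f"indicative response ~ {response:.2f}s")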

If you need help working out how to approach analysing likely future performance, e-mail me or phone me on +44 (0)7887 536083. It can be difficult at first to understand how best to approach this sort of modelling, and how to calibrate and validate it. The long term benefits of this approach, however, are substantial and well worth the initial effort.
