Example: Defining users and locations

February 16, 2008

Specification of the user structure within the model

Location definition

For the purpose of this example, the primary locations to be modelled are the UK, New York, Europe, and the primary and secondary data centres. These locations have been chosen to illustrate the use of different usage patterns across time zones, and to allow the effect of adding disaster recovery planning to be considered. Each of the locations also has a network node defined for later use; in this case the nodes have simply been numbered sequentially. This will be discussed in more detail later.
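As a rough sketch of the sort of structure involved, the example below defines each location with a time zone and a sequentially numbered network node. The field names, time zones and node numbers are illustrative assumptions of mine, not the output of any particular modelling tool.

```python
from dataclasses import dataclass

@dataclass
class Location:
    """A modelled location with its time zone and network node number."""
    name: str
    timezone: str   # IANA time zone, used to offset the usage pattern
    node: int       # sequentially numbered network node, for later use

# Hypothetical set of locations for the example model
locations = [
    Location("UK", "Europe/London", 1),
    Location("New York", "America/New_York", 2),
    Location("Europe", "Europe/Paris", 3),
    Location("Primary data centre", "Europe/London", 4),
    Location("Secondary data centre", "Europe/London", 5),
]

for loc in locations:
    print(f"Node {loc.node}: {loc.name} ({loc.timezone})")
```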

Do your users think?

February 15, 2008

When designing a set of performance tests it is necessary to consider the pauses between user interactions. This is usually referred to as “think time”, and represents the time between the system presenting the results of an action and the user taking the next action. If a load test script doesn’t include any think time at all then the system will be bombarded with requests at a rate that isn’t humanly possible. Depending on the system, a script with zero think time is likely to generate between 10 and 100 times as many interactions as a realistic scenario.
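As a minimal sketch of the idea, not tied to any particular load-testing tool, a scripted user might pause for a randomised think time between actions. The mean pause and the list of steps below are illustrative assumptions; real values should come from observing how users actually behave.

```python
import random
import time

def think(mean_seconds: float = 8.0, spread: float = 0.5) -> None:
    """Pause for a randomised think time between user actions.

    The mean and spread are illustrative; realistic values should be
    measured from how long users actually pause between interactions.
    """
    low = mean_seconds * (1 - spread)
    high = mean_seconds * (1 + spread)
    time.sleep(random.uniform(low, high))

def user_session(perform_action) -> None:
    """Run a sequence of actions with think time between each one."""
    for step in ["login", "search", "view_item", "logout"]:
        perform_action(step)
        think()   # without this the server sees an inhumanly fast user

if __name__ == "__main__":
    user_session(lambda step: print(f"performing {step}"))
```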


Performance and cost of ownership

January 15, 2008

What is the relationship between the performance of a system and its cost of ownership? This isn’t the start of a bad joke, but a question that I have needed to consider in detail recently. The question is more involved than it may seem initially, because of the factors that are implicit in the necessary analysis:


Is innovation good for performance?

November 15, 2007

A question I have been considering recently is whether innovation is good for performance. If I were writing about business performance the answer would hopefully be yes – but I am considering IT systems performance.


You can’t manage performance until…

October 15, 2007

A difficult challenge that comes up regularly is the idea that performance assurance starts with Volume and Performance testing. The assertion is that unless detailed performance data for the solution is available, there is little that can usefully be done. I have touched on an alternative approach in my “Principles of Capacity Management” document, where I examine what can usefully be done at different project stages.


Stakeholders in performance management

September 15, 2007

I have observed that few organisations really buy into performance management until they have a performance problem. When an organisation has a performance problem they want to fix it. Now. A lot of work then goes into fixing the problem, and then the team is disbanded or loses focus until the next performance problem arises. This reactive approach is not universal, but it is common. It is often augmented by a volume and performance test stage as part of the final testing, which is then inevitably squeezed out.


Customer relationships and sales pressure

August 15, 2007

I was recently in a meeting where a project was being initiated that needed a test facility for integration of different packages. I won’t go into the detail since it isn’t relevant to the overall discussion, and performance wasn’t the primary issue. I was struck by the usual “we can’t be the first people to need this” feeling and so, to cut a long story short, ended up calling Compuware to find out what they could offer.


Does a large database need an index?

July 15, 2007

It is standard wisdom that if you want a database to perform well then you carefully design a set of indices for its tables. By designing the indices around the table contents and the common queries, you can build a database that performs well. The index structures make such a difference that if they are missing from tables with large numbers of rows, performance becomes unusable. When this approach works, it works really well. There are environments, however, where it can lead to real issues, though I won’t go into detail here.
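As a quick, self-contained illustration of the difference an index makes, the sketch below uses SQLite purely because it ships with Python; the table, data volumes and query are hypothetical. The query plan changes from a full table scan to an index search once the index exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    ((f"customer{i % 1000}", float(i)) for i in range(100_000)),
)

# Without an index the query has to scan every row in the table
print(conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'customer42'"
).fetchall())

# With an index on the queried column the lookup uses the index instead
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
print(conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'customer42'"
).fetchall())
```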


Benchmarking COTS software

June 15, 2007

When planning to deliver a system based on commercial software it is common to start the initial sizing from the manufacturer’s data on the capability of the product. (e.g. This will support 200 parallel connected users per CPU.) This data often comes from benchmarking, and is valid only as long as the test conditions are realistically comparable to your production environment.
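A sizing estimate of this kind reduces to simple arithmetic. The sketch below works through a hypothetical example using the “200 parallel connected users per CPU” figure; the headroom factor is an assumption I have chosen for illustration, not a recommendation.

```python
import math

def cpus_required(peak_concurrent_users: int,
                  users_per_cpu: int = 200,
                  headroom: float = 0.7) -> int:
    """Estimate CPUs needed from a vendor's users-per-CPU benchmark figure.

    `headroom` discounts the benchmark number to allow for the ways in
    which production conditions differ from the benchmark conditions;
    0.7 here is an illustrative assumption.
    """
    effective_users_per_cpu = users_per_cpu * headroom
    return math.ceil(peak_concurrent_users / effective_users_per_cpu)

# e.g. 3,000 concurrent users against the vendor's 200-per-CPU claim
print(cpus_required(3000))   # -> 22 CPUs with 30% headroom
```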


Local disks vs Storage server

May 15, 2007

A strategy that is commonly employed at present, and which often seems strange at first sight, is the use of storage servers. This is common, and is often justified for a host of reasons centring on the management of the data on the disks. It is strange, however, that it is also claimed to improve performance, rather than being implemented at the expense of performance. It is common to assume that to improve performance you should remove client-server network hops, and networked storage goes against this. Can it be true, or is this a myth?
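One way to move the question from opinion to measurement is to time the same I/O pattern against a local disk and a network-mounted volume. The sketch below is a rough probe of small synchronous writes; the mount points are placeholders for whatever paths exist in your environment, and the pattern is an assumption rather than a representative workload.

```python
import os
import time

def time_small_writes(path: str, count: int = 1000, size: int = 4096) -> float:
    """Time `count` synchronous small writes to a file under `path`.

    Returns the average latency per write in milliseconds. Small
    synchronous writes tend to expose latency differences; large
    sequential transfers tend to expose bandwidth differences.
    """
    target = os.path.join(path, "io_probe.tmp")
    block = b"x" * size
    start = time.perf_counter()
    with open(target, "wb") as f:
        for _ in range(count):
            f.write(block)
            f.flush()
            os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(target)
    return (elapsed / count) * 1000

# Placeholder mount points -- substitute real local and network paths
for label, path in [("local disk", "/tmp"), ("storage server", "/mnt/nas")]:
    try:
        print(f"{label}: {time_small_writes(path):.2f} ms per 4 KB write")
    except OSError as err:
        print(f"{label}: skipped ({err})")
```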