Ideally, benchmarks should be conducted with parameters that accurately represent production data sources and production usage, in an operating environment that mirrors production (for example, a mock run). Barring that, benchmarks can be conducted with custom query workload profiles modeled on the characteristics of the existing batch query workload.
Benchmarks conducted in a controlled environment are essential for capturing consistent statistics. This means that planned downtimes must be built into the schedule for the database servers and machines in the physical architecture, so that benchmarks can run without interference. Only then can we determine whether there is a repeatable pattern of utilization at high, low, and average expected load. The benchmarks should then be repeated without these controls to determine how capacity is affected when other applications compete for resources.
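The repeatability check described above can be sketched as a small harness that runs the same workload several times and reports the variability across runs. This is a minimal illustration, not a production tool; `run_workload` is a hypothetical stand-in that you would replace with calls against the actual database under test:

```python
import statistics
import time

def run_workload(queries):
    """Hypothetical stand-in for executing the benchmark query workload.
    Replace the placeholder loop with real query execution."""
    start = time.perf_counter()
    for _ in queries:
        _ = sum(range(1000))  # placeholder work per query
    return time.perf_counter() - start

def benchmark(queries, iterations=5):
    """Repeat the workload and report mean latency and variability.

    A low coefficient of variation (cv) across iterations suggests the
    repeatable utilization pattern a controlled environment should produce;
    a high cv indicates the runs were not actually controlled."""
    timings = [run_workload(queries) for _ in range(iterations)]
    mean = statistics.mean(timings)
    cv = statistics.stdev(timings) / mean if mean else 0.0
    return {"mean_s": mean, "stdev_s": statistics.stdev(timings), "cv": cv}

stats = benchmark(["q1", "q2", "q3"], iterations=5)
print(f"mean={stats['mean_s']:.4f}s cv={stats['cv']:.2%}")
```

Running the same harness during normal business hours, with other applications competing for resources, gives the uncontrolled comparison point.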
The benchmarking process is outlined below:

1. As a prerequisite to benchmarking, validate all queries.
2. Establish a starting point that allows us to determine the legitimate capacity of each component.
Once the capacity criteria have been met through this iterative process, you have a baseline physical architecture and a baseline configuration for server components that can legitimately handle n requests.
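One common way to arrive at that "n requests" figure is to step up the offered load until a component misses its latency target, and take the last sustainable rate as its baseline capacity. The sketch below illustrates the idea against a hypothetical component model (the queueing-style latency curve and all parameter values are invented for illustration):

```python
def sustained_throughput(serve, slo_ms, step=50, max_rate=1000):
    """Increase offered load in fixed steps until the component misses its
    latency SLO; the last passing rate approximates its legitimate capacity."""
    baseline = 0
    for rate in range(step, max_rate + step, step):
        latency_ms = serve(rate)
        if latency_ms > slo_ms:
            break
        baseline = rate
    return baseline

def mock_component(rate, capacity=400):
    """Hypothetical component: latency grows sharply near saturation,
    following a simple M/M/1-style curve."""
    if rate >= capacity:
        return float("inf")
    return 5.0 / (1 - rate / capacity)

n = sustained_throughput(mock_component, slo_ms=50.0)
print(f"baseline capacity ~ {n} requests/s")
```

Repeating this search for each server component in the physical architecture yields the per-component baselines that the iterative process above converges on.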