This paper covers the methods used by the Office Performance team for gathering and analyzing benchmark data for large Windows-based applications. It describes the objectives of the team, how we narrowed them down to key goals, and how we set about resolving each one from both a hardware and a software perspective.
The goal of our team is to gather and report comprehensive performance information about the applications that ship with Office for the purpose of tracking, identifying, and isolating performance issues that occur during the product cycle.
Given the many ways available to extract performance timings, we created a list of goals in priority order so that we would design the right system to suit our needs. At first glance it may appear that simulating a real-world user system, and producing numbers close to what users actually experience, would be our goal. Unfortunately, if the data cannot be reproduced consistently, we are unable to track down and fix the issues that are found, or even determine whether apparent anomalies are real issues at all. With that in mind, we place consistency of the data (reproducibility) at the top of the list. The next priorities are the ability to isolate an issue once it is found, then accuracy, and finally real-world fidelity.
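The reproducibility criterion above can be made concrete with a simple run-to-run variation check. The sketch below is illustrative, not the team's actual tooling: `is_reproducible` and the 2% coefficient-of-variation threshold are assumptions chosen for the example; an acceptable threshold depends on the scenario being measured.

```python
import statistics

def is_reproducible(samples, max_cv=0.02):
    """Flag a set of benchmark timings as reproducible when the
    run-to-run variation is small.

    samples: elapsed times (seconds) from repeated identical runs.
    max_cv: illustrative threshold on the coefficient of variation
            (standard deviation divided by mean); 2% here is an
            assumption, not a recommended value.
    """
    mean = statistics.mean(samples)
    cv = statistics.stdev(samples) / mean
    return cv <= max_cv
```

A benchmark whose timings fail such a check cannot support tracking or isolating regressions, which is why consistency outranks accuracy and realism in the priority list.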
The methodology we used in pursuing these goals is covered in the following areas: the impact of hardware, specifically memory, processors, disks, and networks, on application performance; how operating-system concerns such as configuration, virtual memory, disk layout, and thrashing can affect your results; and our experiences meeting the automation needs, including preparing and starting the test, driving the application, and reporting/logging the results with minimal effect on the data. Lastly, I will briefly discuss tips on reporting and analyzing the results.
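The automation steps just listed, starting the test, driving the application, and logging results without perturbing the measurement, can be sketched as follows. This is a minimal illustration, not the team's harness: `run_benchmark`, its parameters, and the CSV log format are all assumptions for the example. The key idea shown is buffering timings in memory and deferring all disk I/O until after the measured runs complete.

```python
import csv
import subprocess
import time

def run_benchmark(command, iterations=5, log_path="results.csv"):
    """Launch the target application repeatedly and time each run.

    command:  argument list for the application under test (assumed).
    log_path: where to write the CSV log after measurement ends.
    Timestamps are taken immediately around the run, and results are
    held in memory so that logging I/O never overlaps a measurement.
    """
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        subprocess.run(command, check=True)
        timings.append(time.perf_counter() - start)  # buffer in memory

    # Write the log only after all runs finish, keeping disk activity
    # out of the measured intervals.
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["iteration", "seconds"])
        for i, t in enumerate(timings):
            writer.writerow([i, f"{t:.4f}"])
    return timings
```

Deferring the log write is one simple way to honor the "minimal effect on the data" requirement; a real harness would also control for the OS and hardware factors discussed above.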
(To Be Supplied)