Poor Man's Performance Timing
There have been times that I've written some code that I would have liked to get performance timing on. There have even been times when I wanted to measure the performance of an application I didn't have the source for. The application may have been crunching numbers or talking with hardware, and through debugging I had determined the locations I needed to measure. I just needed a way to do the measuring.
It is easy when you have the source to just modify the code with QueryPerformanceCounter and OutputDebugString and then recompile. After all, I just want an estimate, and the code I'm attempting to measure isn't a few instructions but rather a more complex operation. The performance will always vary depending on the machine and the system load, so nanosecond accuracy isn't necessary.
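Source-level instrumentation of this kind looks something like the sketch below. Because QueryPerformanceCounter and OutputDebugString are Windows-specific, std::chrono::steady_clock and printf stand in for them here so the sketch stays portable; the comments mark where the Windows calls would go. The multiply loop is hypothetical work being measured, not code from the article.

```cpp
#include <chrono>
#include <cstdio>

// Sketch of timing a region by modifying the source. steady_clock and
// printf are portable stand-ins for QueryPerformanceCounter and
// OutputDebugString; the multiply loop is hypothetical work to time.
double time_work_ms(unsigned loops)
{
    double x = 1.0000001, y = 1.0000002;
    auto start = std::chrono::steady_clock::now();   // QueryPerformanceCounter(&start);
    for (unsigned i = 0; i < loops; i++)
        x *= y;
    auto end = std::chrono::steady_clock::now();     // QueryPerformanceCounter(&end);
    volatile double sink = x; (void)sink;            // keep the loop from being optimized away
    double ms = std::chrono::duration<double, std::milli>(end - start).count();
    std::printf("elapsed: %f ms\n", ms);             // OutputDebugStringA(buf); on Windows
    return ms;
}
```

The drawback, as discussed next, is that this only works where you can edit and recompile the code.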
The flaw with modifying the code is that it would only work for that piece of code. It wouldn't work with applications I don't have the source to and I would be required to modify all locations that I want to time. I could buy a tool, but perhaps the tool is overkill for what I need and I don't want to spend the money. What about writing a debug extension?
The real question is how much accuracy is lost when using the debug extension as opposed to modifying the source code. In order to find out, we need to measure the timing of some simple code, shown in Example 1.
for (uiIndex = 0; uiIndex < g_Loops; uiIndex++)
{
    dSampleX *= dSampleY;
}
The data shown in Table 1 compares two methods of retrieving performance data. The first, "QPC," is short for the QueryPerformanceCounter API, an abstraction over an operating system-implemented performance counter; it can be backed by a variety of mechanisms, including ACPI hardware ports or even RDTSC itself. The second is the RDTSC instruction used directly; even if QueryPerformanceCounter happens to use RDTSC internally, going through the API involves more overhead than executing the instruction yourself.
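When using RDTSC directly, the usual route today is the compiler's __rdtsc intrinsic rather than hand-written assembly. A minimal sketch, assuming an x86 processor and GCC/Clang's <x86intrin.h> (MSVC exposes the same intrinsic via <intrin.h>); it returns raw cycle counts, and converting cycles to time requires knowing the rate at which the time-stamp counter ticks:

```cpp
#include <x86intrin.h> // __rdtsc on GCC/Clang; MSVC gets it from <intrin.h>

// Times a multiply loop with the RDTSC instruction via the compiler
// intrinsic. Returns a raw cycle count, not a time; dividing by the
// time-stamp counter's frequency would convert it to seconds.
unsigned long long cycles_for_loop(unsigned loops)
{
    double x = 1.0000001, y = 1.0000002;
    unsigned long long start = __rdtsc();
    for (unsigned i = 0; i < loops; i++)
        x *= y;
    unsigned long long end = __rdtsc();
    volatile double sink = x; (void)sink; // keep the loop from being optimized away
    return end - start;
}
```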
# of Loops | Modified Source              | Debug Extensions
           | QPC           | RDTSC        | QPC           | RDTSC
10         | 0.006705 ms   | 0.005133 ms  | 0.569346 ms   | 0.564379 ms
100        | 0.051403 ms   | 0.049727 ms  | 0.633879 ms   | 0.604160 ms
1000       | 0.498667 ms   | 0.495724 ms  | 1.074438 ms   | 1.042093 ms
10000      | 5.010413 ms   | 4.993571 ms  | 5.577245 ms   | 5.498423 ms
100000     | 50.437975 ms  | 50.366829 ms | 51.285289 ms  | 49.725790 ms
1000000    | 508.223811 ms | 507.440129 ms| 508.854617 ms | 495.261641 ms
Accuracy and precision are the two factors that make up performance timing. Accuracy is how close your numbers are to the actual time, and precision is the interval at which you are measuring. Here our precision is milliseconds, so when doing performance monitoring via a debug extension I would suggest measuring only operations for which millisecond resolution is sufficient.
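Whatever the counter, the arithmetic for converting raw ticks into milliseconds is the same; the divisor is the counter's frequency in ticks per second, which on Windows is what QueryPerformanceFrequency reports. A trivial helper; the frequency in the example below is illustrative, not measured:

```cpp
// Converts a raw tick delta to milliseconds. The divisor is the counter's
// frequency in ticks per second: QueryPerformanceFrequency's result for
// QPC, or the time-stamp counter rate for RDTSC.
double ticks_to_ms(unsigned long long ticks, unsigned long long ticks_per_second)
{
    return ticks * 1000.0 / ticks_per_second;
}
```

For example, a delta of 10,000 ticks on a counter running at 1,000,000 ticks per second works out to 10.0 ms.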
         | Modified Source             | Debug Extensions
         | QPC         | RDTSC         | QPC         | RDTSC
Overhead | 0.006705 ms | 0.000051 ms   | 0.435530 ms | 0.421219 ms
The data listed in Table 2 is the performance timing of doing nothing and essentially illustrates how much overhead each method typically has. Example output is shown in Figure 3 when using the RDTSC debug extension.
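Overhead figures like those in Table 2 can be approximated the same way, by "timing nothing": two back-to-back counter reads. That estimate can then be subtracted from real measurements. A sketch, again using std::chrono::steady_clock as a portable stand-in for the counters discussed here:

```cpp
#include <chrono>

// Approximates the fixed cost of the timing mechanism itself by timing
// nothing: two back-to-back clock reads with no work in between.
double timing_overhead_ms()
{
    auto a = std::chrono::steady_clock::now();
    auto b = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(b - a).count();
}

// Subtracts the measured overhead from a raw measurement, clamping at
// zero so noise in the overhead estimate cannot yield a negative time.
double corrected_ms(double raw_ms, double overhead_ms)
{
    double adjusted = raw_ms - overhead_ms;
    return adjusted > 0.0 ? adjusted : 0.0;
}
```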
The following are some of the key factors when interpreting the performance numbers:
- The Hardware — The hardware is one of the factors in determining the speed of a segment of code. The performance monitoring I performed was using a Pentium 4M 1.8 GHz laptop.
- The Machine Load — The system load will affect the outcome of performance numbers. You can demonstrate this by moving your mouse or clicking on an application while performance numbers are being displayed. You can attempt to be more accurate by raising the priority of the timing thread up to "real time"; however, depending on what that thread is doing, you may end up skewing the results or locking up the system. The thread being timed may, for example, be waiting on I/O work performed by another thread.
- What is being timed — The other factor is what is actually being timed. If you are timing a hardware operation, for example, context switching may skew your results less, as your thread will essentially go into a wait state; the hardware can then crunch or move the data independently of the CPU, and when it finishes the thread is scheduled again. With straight number crunching, however, as demonstrated here, a context switch interrupts the processing itself and is thus more likely to skew the final time.
In my timing you will note that I generally did not get one consistent time for any of these samples; they can vary by several milliseconds due to context switching, for example. You will also notice that the longer the operation being timed, the less the overhead of the debug extension skews the results. In fact, you may notice that in some cases the result was actually lower with the debug extension than with the modified code! This may be because the debug extension runs in the context of another thread and as such does not share its time slice with the thread being timed.
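One way to blunt that context-switch noise is to repeat the measurement several times and keep the minimum (or median), since scheduling interruptions inflate individual samples far more often than they deflate them. A sketch under that assumption; time_once() is a stand-in for whatever operation you are timing:

```cpp
#include <algorithm>
#include <chrono>
#include <vector>

// One timed run of a stand-in workload (a multiply loop), in milliseconds.
double time_once()
{
    double x = 1.0000001, y = 1.0000002;
    auto start = std::chrono::steady_clock::now();
    for (unsigned i = 0; i < 100000; i++)
        x *= y;
    auto end = std::chrono::steady_clock::now();
    volatile double sink = x; (void)sink; // keep the loop from being optimized away
    return std::chrono::duration<double, std::milli>(end - start).count();
}

// Repeats the measurement and keeps the fastest sample, which is usually
// the one least disturbed by context switches.
double best_of(unsigned runs)
{
    std::vector<double> samples;
    for (unsigned i = 0; i < runs; i++)
        samples.push_back(time_once());
    return *std::min_element(samples.begin(), samples.end());
}
```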
This example demonstrates that if you are looking simply for estimates within a few milliseconds then a debug extension may just do the trick. However, if you need more accurate timing then a more isolated environment using increased thread priorities and possibly specialized tools may be a more appropriate solution.