c# – How to accurately measure the execution time of a C# operation?

Question:

For laboratory work, you need to measure the time to complete several operations. The code:

StartTime = Environment.TickCount;
for (int i = 0; i < 2499; i++)
{
    LQL.Rem();
}
ResultTime = Environment.TickCount - StartTime;

public class LinkedStackLarge
{
    LinkedList<LargeData> _LinkedStack = new LinkedList<LargeData>();
    public LargeData Rem()
    {
        LargeData data = _LinkedStack.Last();
        _LinkedStack.RemoveFirst();
        return data;
    }
}

The code works, but ResultTime ends up as 0. Is this normal?

Answer:

Accurately measuring execution time is actually tricky; there are many subtleties.

The first subtlety is JIT warm-up: the first time a method executes, its code is JIT-compiled. To get a correct measurement, run the code under test once as an "idle" warm-up before the actual measurement.
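A minimal sketch of the warm-up idea (the `Work` method below is a hypothetical stand-in for the code under test, not from the question):

```csharp
using System;
using System.Diagnostics;

class WarmupDemo
{
    // hypothetical stand-in for the operation being measured
    static int Work() => 40 + 2;

    static void Main()
    {
        Work(); // "idle" warm-up call: triggers JIT compilation before timing starts

        var sw = Stopwatch.StartNew();
        int result = Work(); // now we measure the already-compiled code
        sw.Stop();
        Console.WriteLine($"result={result}, elapsed ticks={sw.ElapsedTicks}");
    }
}
```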

The next subtlety: when your program runs under the debugger, the JIT compiler does not optimize your code aggressively, even in Release mode, so that variables and the call stack remain visible in the debugger. Run your tests from the command line, outside Visual Studio.

The next subtlety: if the measured method has no side effects and either returns nothing or its return value is ignored, the optimizer may eliminate the call entirely. Be sure to consume the return value, for example by printing it.
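A sketch of that idea, again with a hypothetical `Work` method: the return values feed into a checksum that is printed, so the optimizer cannot prove the calls are unobservable and throw them away.

```csharp
using System;

class SideEffectDemo
{
    static int Work() => 40 + 2; // hypothetical method under test

    static void Main()
    {
        long checksum = 0;
        for (int i = 0; i < 1000; i++)
        {
            checksum += Work(); // each result is accumulated...
        }
        Console.WriteLine(checksum); // ...and printed, so the calls cannot be removed
    }
}
```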

The next subtlety: the method may be very fast, and its total execution time may fall below the resolution of the timer you are using. To actually measure its speed, run it N times and divide the total time by N. A suitable value of N is easiest to find experimentally.
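A sketch of the repeat-and-divide approach (the `Work` method and the value of N are placeholders):

```csharp
using System;
using System.Diagnostics;

class AverageDemo
{
    static int Work() => 40 + 2; // hypothetical method under test

    static void Main()
    {
        const int N = 1_000_000; // example value; in practice, tune experimentally
        long checksum = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            checksum += Work(); // consumed so the loop is not optimized away
        sw.Stop();

        // total time divided by N gives an average per-call time
        double nsPerCall = sw.Elapsed.TotalMilliseconds * 1_000_000.0 / N;
        Console.WriteLine($"checksum={checksum}, ~{nsPerCall:F2} ns per call");
    }
}
```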

The next subtlety: different timers have different resolutions, so prefer a more precise one. I, for example, use Stopwatch, which @Denis Bubnov describes in his answer.
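Stopwatch exposes its resolution directly, so you can check the smallest measurable interval on your machine:

```csharp
using System;
using System.Diagnostics;

class ResolutionDemo
{
    static void Main()
    {
        // true if Stopwatch is backed by a high-resolution performance counter
        Console.WriteLine($"High resolution: {Stopwatch.IsHighResolution}");

        // Frequency is ticks per second; its inverse is the timer's granularity
        double nsPerTick = 1_000_000_000.0 / Stopwatch.Frequency;
        Console.WriteLine($"Resolution: ~{nsPerTick:F1} ns per tick");
    }
}
```

Compare this with `Environment.TickCount` from the question, whose resolution is typically tens of milliseconds, which is why a fast loop reports 0.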

The next subtlety: execution can be interrupted by various external events. For example, the OS thread scheduler may preempt your thread, so the measured value comes out larger than the method's real running time. Or the garbage collector may pause your thread. It therefore makes sense to measure several times and discard results that lie statistically too far from the mean.
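One simple hedge against such interruptions is a sketch like the following, which takes the median of several runs rather than applying a formal outlier test (the `Work` method and counts are placeholders):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class MedianDemo
{
    static int Work() => 40 + 2; // hypothetical method under test

    static void Main()
    {
        var runs = new List<double>();
        long total = 0;
        for (int run = 0; run < 15; run++)
        {
            var sw = Stopwatch.StartNew();
            long checksum = 0;
            for (int i = 0; i < 100_000; i++)
                checksum += Work();
            sw.Stop();
            total += checksum; // consume the results
            runs.Add(sw.Elapsed.TotalMilliseconds);
        }

        // the median is robust to runs inflated by the scheduler or the GC
        runs.Sort();
        Console.WriteLine($"checksum={total}, median: {runs[runs.Count / 2]:F3} ms");
    }
}
```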

The next subtlety: various system caches. For example, if your code reads a file, then after the first read the file sits in the operating system's cache, and subsequent executions of the same method are faster. In that case, each new iteration must read a different file.
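A sketch of that idea, with placeholder file names (the paths are hypothetical, and each file is assumed not to have been read before):

```csharp
using System;
using System.Diagnostics;
using System.IO;

class ColdReadDemo
{
    static void Main()
    {
        // placeholder paths: each iteration reads a file the OS cache has not seen yet
        string[] files = { "data0.bin", "data1.bin", "data2.bin" };

        foreach (var path in files)
        {
            var sw = Stopwatch.StartNew();
            byte[] contents = File.ReadAllBytes(path);
            sw.Stop();
            Console.WriteLine($"{path}: {contents.Length} bytes in {sw.ElapsedMilliseconds} ms");
        }
    }
}
```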


In practice, it is tedious to account for all of this by hand every time, so it makes sense to use a benchmarking framework, for example BenchmarkDotNet. Your code would then look like this:

// requires the BenchmarkDotNet NuGet package
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

class Program
{
    static void Main(string[] args)
    {
        var summary = BenchmarkRunner.Run<Tester>();
        Console.ReadKey();
    }
}

public class Tester
{
    // the stack must be (re)filled before each iteration,
    // otherwise Rem() will fail on an empty list
    LinkedStackLarge LQL = new LinkedStackLarge();

    [Benchmark]
    public void Test()
    {
        for (int i = 0; i < 2499; i++)
        {
            LQL.Rem();
        }
    }
}