How to Measure the Performance of Algorithms?

by Carson

If you’re writing a program to do something, you have to design or use an algorithm. But which algorithm is the best? Choosing can be challenging, but in this article, we’ll walk through the steps you can take to make sure the algorithm you pick is the fastest for your purpose.

Time Complexity

Obviously, when measuring the performance of an algorithm, we need to consider how much time it takes. Specifically, we calculate how the running time grows as the size of the input grows, and this growth rate is expressed in big-O notation. Note that this concept, time complexity, is separate from actually measuring the runtime, which we’ll discuss after going through the notation.

Suppose you’re writing a program that determines whether an element is in an array. You scan through the items one by one and stop as soon as you find the target. In the best case, the target is the first item, so you only need one iteration. That’s constant time, O(1). In the worst case, the target is the last item in the array (or not there at all), so you have to examine every item before stopping. That’s linear time, O(n), where n is the size of the input. Generally, when we refer to time complexity, we mean the worst-case time complexity, since the best case is often just constant time.
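
To make this concrete, here’s a minimal linear-search sketch in Python (the function name and details are just illustrative):

    def linear_search(items, target):
        """Return the index of target in items, or -1 if it's absent.

        Best case: target is items[0], so one comparison -> O(1).
        Worst case: target is last or missing, so n comparisons -> O(n).
        """
        for i, item in enumerate(items):
            if item == target:
                return i
        return -1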

Or, if you are searching through a sorted array of numbers, you can use a binary search. First, guess the middle element. If the desired value is smaller than it, take the middle of the lower half as the new guess; if it’s larger, take the middle of the upper half. Repeat until the desired item is found. Since each guess halves the remaining range, the worst case takes roughly log2(n) steps, so the time complexity of the algorithm is O(log n). Because log n grows more slowly than n, O(log n) is a sublinear complexity, which makes binary search one of the more efficient ways to search through arrays.
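
Here’s a matching binary-search sketch, again a minimal illustration rather than production code:

    def binary_search(sorted_items, target):
        """Return the index of target in sorted_items, or -1 if absent.

        Each iteration halves the remaining range, so the worst case
        takes about log2(n) iterations -> O(log n).
        """
        low, high = 0, len(sorted_items) - 1
        while low <= high:
            mid = (low + high) // 2  # the middle guess
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                low = mid + 1    # search the upper half
            else:
                high = mid - 1   # search the lower half
        return -1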

Here are the common time complexities of different algorithms, from slowest-growing to fastest-growing:

  • O(1)
  • O(log n) (the base of the logarithm doesn’t matter inside big-O, since changing it only multiplies the result by a constant; for binary search it happens to be base 2)
  • O(n)
  • O(n log n)
  • O(n^2)
  • O(n!)

Space Complexity

Besides speed, you also need to mind the memory usage of your program. After all, if a program takes up too much memory, it can take a toll on its own performance, or even get killed by the system for overusing resources. So what should we use to notate space complexity? Big-O notation works here as well.

For example, suppose you need to concatenate all the items of an array into one string. The output has to hold a copy of every item in the input, so the amount of space used is directly proportional to the size of the input. Hence, the space complexity of this algorithm is O(n). In general, whenever the data an algorithm keeps around grows with the input (as the output string does here), its space complexity is O(n) or worse; merely looping over the input, by contrast, costs no extra space by itself.
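
As a rough sketch of that distinction (the helper names are made up for this example):

    def join_items(items):
        """O(n) space: the output string copies every input item."""
        return "".join(str(item) for item in items)

    def sum_items(items):
        """O(1) extra space: the loop scans every item but only
        ever keeps a single running total in memory."""
        total = 0
        for item in items:
            total += item
        return total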

Space complexity works much the same way as time complexity (despite describing a different resource): it, too, measures how an algorithm’s usage grows as the input size goes up.

Measuring Actual Performance

Even though time and space complexity are great metrics for the performance of algorithms, they’re still different from measuring the actual practicality of these methods. Two algorithms with the same time/space complexity can still require very different amounts of actual computation.

Take constant time (O(1)), for example. Suppose algorithm A uses just one operation, while algorithm B always uses ten. Then algorithm A will be ten times as fast as algorithm B, provided that all operations take the same time. In fact, an algorithm with a greater complexity can even be faster than one with a smaller complexity: if algorithm A instead has a complexity of O(n) and uses n operations, A will still be faster than B whenever the input size is less than 10. The same goes for space complexity, which doesn’t directly represent the amount of memory your algorithm will use.
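
Here’s a tiny worked example of that crossover, with hypothetical operation counts standing in for real algorithms:

    def ops_a(n):
        """Hypothetical algorithm A: O(n), performs n operations."""
        return n

    def ops_b(n):
        """Hypothetical algorithm B: O(1), always performs 10 operations."""
        return 10

    for n in (5, 10, 1000):
        winner = "A" if ops_a(n) < ops_b(n) else "B"
        print(f"n={n}: A needs {ops_a(n)} ops, B needs {ops_b(n)} -> {winner}")

For n = 5, the "slower" O(n) algorithm actually wins; from n = 10 on, B’s fixed cost pays off.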

And that’s not the whole picture. The worst case might occur so rarely that the worst-case complexity is of little practical help; in that situation, the average-case complexity may be a better standard. The time a single operation takes also varies with the operation: reading is, in general, faster than writing. And the actual amount of computation depends on the particular implementation of the algorithm, the programming language, and the compiler options used to generate the executable file.

All of this makes evaluating the performance of algorithms messy in practice, even with indicators as powerful as big-O at hand. Thus, if we want to find the best algorithm, the surest approach is to implement several candidates and benchmark them. Ultimately, the one that runs the fastest on realistic test cases should prevail.
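
For instance, here’s a quick benchmark using Python’s built-in timeit module, assuming the linear_search and binary_search sketches from earlier are defined in the same script:

    import timeit

    # Worst case for linear search: the target is the last element.
    data = list(range(100_000))
    target = 99_999

    for fn in (linear_search, binary_search):
        # timeit accepts a zero-argument callable and runs it `number` times.
        seconds = timeit.timeit(lambda: fn(data, target), number=100)
        print(f"{fn.__name__}: {seconds:.4f} s for 100 runs")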

Conclusion

In this article, we have introduced how to measure the performance of algorithms. Most of the time, we can use big-O notation to express time and space complexity, which describes how the amount of computation and the amount of data to store grow as the input gets larger. At the end of the day, though, the theory only takes you so far, and the most reliable way to gauge performance is to implement the algorithm and time it.
