Big O Explained

Time complexity

Big O is such an important concept that I’m devoting this week’s article specifically to this topic.

Big O

The questions we’ve looked at so far deal with finding an efficient solution to a problem. But how do you measure efficiency? Determining how fast or slow a method runs is where Big O comes in.

An example you can think about to make better sense of Big O is a scenario involving delivering a letter. Here are some options for delivering it (in order from slowest to fastest delivery time):

  • Walking over to the person’s house (slowest)

  • Running over

  • Driving over

  • Sending via mail

  • E-mailing

  • Texting (fastest)

As you read down the list, you come across a faster, more efficient way to deliver the letter. The premise behind Big O is the same: it gives us a way to describe how quickly an action (in this case, running a program) can be performed.

Time Complexity

Big O attempts to define the runtime, or time complexity, of the solution. Let’s think about the runtime of two “algorithms” from above:

  1. E-mailing: O(s), where s is the size of the letter (the file being sent). This may take a relatively long or short time depending on how large the file is.

  2. Walking over to the person’s house: O(1), which denotes constant time. No matter how large the letter is, it takes the same amount of time to walk to the person’s house.

It’s interesting to note that, depending on the size of the file, walking over may be much faster than e-mailing. This is why Big O is such a powerful concept.
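
To make the analogy concrete, here’s a minimal Python sketch (the function names and the one-step-per-character model are just illustrative assumptions, not part of the analogy above):

    def email_letter(letter):
        # O(s): the work grows with the size of the letter,
        # modeled here as one step per character transferred.
        steps = 0
        for _ in letter:
            steps += 1
        return steps

    def walk_letter_over(letter):
        # O(1): walking takes the same fixed amount of time
        # no matter how large the letter is.
        return 1000  # a fixed number of steps, chosen arbitrarily

    print(email_letter("hi"))             # 2 steps
    print(email_letter("hi" * 500))       # 1000 steps
    print(walk_letter_over("hi" * 500))   # always 1000 steps

For a two-character letter, e-mailing wins easily; for a large enough letter, walking it over comes out ahead.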

There are other runtimes that commonly come up. Examples are O(log N), O(N log N), O(N^2), and O(2^N).
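
As a rough illustration of two of these, here’s a small Python sketch (these particular functions are my own examples, not something defined above): binary search halves its search space on every step, which is O(log N), while checking every pair of elements is O(N^2).

    def binary_search(sorted_nums, target):
        # O(log N): each iteration halves the remaining search range.
        lo, hi = 0, len(sorted_nums) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_nums[mid] == target:
                return mid
            elif sorted_nums[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    def has_duplicate_pair(nums):
        # O(N^2): for each element, we scan the rest of the array.
        for i in range(len(nums)):
            for j in range(i + 1, len(nums)):
                if nums[i] == nums[j]:
                    return True
        return False

    print(binary_search([1, 3, 5, 7, 9], 7))   # 3
    print(has_duplicate_pair([4, 1, 7, 4]))    # True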

In academia, Big O is described as an upper bound on time. You can think of this as a less-than-or-equal-to relationship: the algorithm will take at most this long, even though in practice it may run faster.
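
Formally, saying a runtime is O(g(N)) means the algorithm’s actual running time f(N) is eventually bounded above by a constant multiple of g(N):

  f(N) ≤ c · g(N) for all N ≥ N₀, for some fixed constants c > 0 and N₀.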

Best Case, Worst Case, Expected Case

Let us take the example of Quick Sort, a common algorithm to sort a set of numbers. Quick sort selects a random element as a “pivot” and then swaps values in the array such that the elements less than the pivot appear before the pivot, and elements greater than the pivot appear after the pivot. Then it recursively calls itself on the left and right sides, until the array is sorted.
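
Here’s a minimal Python sketch of that idea (an in-place version with a randomly chosen pivot; this is one common way to implement it, not the only way):

    import random

    def quick_sort(nums, lo=0, hi=None):
        # Partition around a randomly chosen pivot, then recurse on the
        # left and right sides until the whole array is sorted.
        if hi is None:
            hi = len(nums) - 1
        if lo >= hi:
            return
        pivot = nums[random.randint(lo, hi)]
        left, right = lo, hi
        while left <= right:
            while nums[left] < pivot:    # find an element that belongs on the right
                left += 1
            while nums[right] > pivot:   # find an element that belongs on the left
                right -= 1
            if left <= right:
                nums[left], nums[right] = nums[right], nums[left]
                left += 1
                right -= 1
        quick_sort(nums, lo, right)
        quick_sort(nums, left, hi)

    values = [8, 3, 5, 1, 9, 2]
    quick_sort(values)
    print(values)  # [1, 2, 3, 5, 8, 9]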

  • Best Case: If the elements are all equal, quick sort will just traverse the array once. This is O(N), where N is the size of the array.

  • Worst Case: If the array is sorted in reverse order and the leftmost element is always chosen as the pivot, we hit the worst case. Every other element is less than that pivot, so each partition only peels one element off, and we end up partitioning subarrays of size N, N - 1, N - 2, and so on, giving us the runtime O(N^2).

  • Expected Case: Usually, neither of these extreme scenarios will happen, and we can expect a runtime of O(N log N). On average the pivot splits the array roughly in half, so the recursion goes about log N levels deep, and across each level the partitioning work adds up to O(N), giving us a total runtime of O(N * log N), or O(N log N).

Dropping Constants

Since Big O primarily describes the rate of increase, we can drop any constant factors in our runtime. An algorithm that can be described as O(2N) is actually O(N).
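
For example, a function that makes two full passes over an array does roughly 2N work, yet we still describe it as O(N). A minimal sketch (the function is just an illustration):

    def min_and_max(nums):
        # Two separate passes over the array: roughly 2N operations total,
        # but we drop the constant factor and call it O(N).
        smallest = nums[0]
        for n in nums:        # first pass: N steps
            if n < smallest:
                smallest = n
        largest = nums[0]
        for n in nums:        # second pass: another N steps
            if n > largest:
                largest = n
        return smallest, largest

    print(min_and_max([8, 3, 5, 1]))  # (1, 8)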

What about non-dominant terms? If we have a runtime of O(N^2 + N), should we drop the N in this notation? Yes: as N grows, the N^2 term dominates, so O(N^2 + N) simplifies to O(N^2). We may still have a sum in a runtime, such as O(N + M), when the two variables are independent and neither one dominates the other.
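
As an illustrative sketch, the function below does N^2 work followed by N work; the single extra pass is non-dominant, so the whole thing is O(N^2):

    def count_zero_sum_pairs(nums):
        # Nested loops: N^2 steps (the dominant term).
        pairs = 0
        for a in nums:
            for b in nums:
                if a + b == 0:
                    pairs += 1
        # One extra pass: N more steps, dropped as non-dominant.
        for n in nums:
            print(n)
        return pairs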

Add vs Multiply

A good rule of thumb for deciding whether to add or multiply runtimes together (there’s a short sketch after this list):

  • If you do something, then do something else, you should add those runtimes together.

  • If you do something for each time you do something else, you should multiply those runtimes.
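
A minimal sketch of the difference, assuming two unrelated arrays of sizes A and B (the function names are just illustrative):

    def do_this_then_that(arr_a, arr_b):
        # The loops run one after the other, so the runtimes add: O(A + B).
        for a in arr_a:
            print(a)
        for b in arr_b:
            print(b)

    def do_this_for_each_of_that(arr_a, arr_b):
        # The inner loop runs once per outer element, so the runtimes
        # multiply: O(A * B).
        for a in arr_a:
            for b in arr_b:
                print(a, b)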

Useful Links

I won’t go too in-depth into more complicated runtimes, so here are some useful links that dive deeper into time complexity:

This site also has great practice questions on runtime, complete with explanations.