Big-O Notation: One Byte Explainer

Joshua Gracie - Jun 18 - Dev Community

This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer.

Explainer

Big-O notation describes an algorithm's worst-case runtime growth. An algorithm of O(n^2), with n = 200 inputs, will at worst take 40,000 iterations to run. Big-O is useful for judging how well an algo scales: an O(2^n) algo will take far longer to run than an O(log n) algo as n grows.

Additional Context

There’s plenty more to runtime analysis than just Big-O. For instance, Big-O focuses on the dominant term of an algorithm's runtime (the part that grows fastest as the input grows). It does not, however, concern itself with the exact number of operations an algo performs (otherwise we'd be writing stuff like O(2n + 37) instead of just O(n)).
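As a quick sanity check on the explainer's n = 200 figure, here's a minimal sketch (the count_pairs function is my own hypothetical example, not part of the challenge submission) that tallies the iterations of a classic O(n^2) nested loop:

def count_pairs(n):
    # A classic O(n^2) pattern: visit every (i, j) pair exactly once.
    iterations = 0
    for i in range(n):
        for j in range(n):
            iterations += 1
    return iterations

print(count_pairs(200))  # 40000 -- exactly n^2 iterations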

To demonstrate the point about constant factors, consider the loops below. Both have the same Big-O of O(n) (which is called linear time), since each iterates once through the numbers from 0 to n-1. But, technically, the first loop will run a smidge faster since it performs fewer operations per iteration (it isn't doing the extra if-else branching).

from time import time
from functools import wraps

def timer_func(func):
    # Decorator that prints how long the wrapped function takes to run.
    @wraps(func)  # keep the decorated function's original name and docstring
    def wrap_func(*args, **kwargs):
        t1 = time()
        result = func(*args, **kwargs)
        t2 = time()
        print(f'Function {func.__name__!r} executed in {(t2 - t1):.4f}s')
        return result
    return wrap_func

@timer_func
def basicLoop1(x):
    sumX = 0
    for i in range(x):
        sumX += i

    return sumX

@timer_func
def basicLoop2(x):
    sumX = 0
    for i in range(x):
        if i > 5:
            sumX += i * 2
        else:
            sumX += i

    return sumX

if __name__ == '__main__':
    basicLoop1(10000000) # Takes ~0.43s
    basicLoop2(10000000) # Takes ~0.88s

Another thing to consider is that Big-O describes the worst-case scenario. There may be instances where, on average, an algorithm runs faster than its Big-O bound suggests.

That average-case runtime is commonly written with Big-θ (big theta), and the best case with Big-Ω (big omega). Strictly speaking, θ denotes a tight bound and Ω a lower bound on growth, but the worst/average/best shorthand is widespread in practice.
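To make the three notations concrete, here's a minimal sketch of a plain linear search (my own example, not code from the original post), annotated with its best, average, and worst cases:

def linear_search(items, target):
    # Worst case O(n): target is last or absent, so we scan everything.
    # Best case Ω(1): target happens to be the very first element.
    # Average case θ(n): on random inputs we scan about half the list.
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

print(linear_search([4, 8, 15, 16, 23, 42], 15))  # 2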

A simple example of why this matters is the comparison between merge sort and insertion sort.

Merge sort is O(n log n), while insertion sort is O(n^2). So, when looking at large reverse-sorted lists (the worst-case scenario for insertion sort), merge sort is clearly better.

But what about when the list is already sorted? Merge sort still takes n log n time, but insertion sort now runs in Ω(n) (linear) time.

You’ll notice that, when lists are mostly if not fully sorted, insertion sort tends to run faster than its merge sort counterpart. Because of this, it may be reasonable to use insertion sort instead of merge sort if we are reasonably confident the lists we are getting are mostly sorted to begin with.
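To see why, here's a textbook insertion sort (a sketch of my own, not code from the original post). On already-sorted input the inner while loop never runs, so each of the n elements costs a single comparison, which is where the Ω(n) best case comes from:

def insertion_sort(items):
    # Worst case O(n^2): reverse-sorted input shifts every element.
    # Best case Ω(n): sorted input skips the while loop entirely.
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]  # shift larger elements one slot right
            j -= 1
        items[j + 1] = key
    return items

print(insertion_sort([5, 2, 4, 1, 3]))  # lots of shifting needed
print(insertion_sort([1, 2, 3, 4, 5]))  # best case: one comparison per element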

There is obviously a lot more to Big-O and runtime analysis than what I've covered here. If you'd like a more thorough explanation, I highly recommend this video from HackerRank as a starting point.

Hopefully this helped a bit with your understanding of Big-O. Thanks for reading and happy coding!
