"Big-O" – The Reality

"Big-O" – The Reality
Algorithmic complexity, and time complexity in particular, is one of the most dreaded topics, especially for people from a non-technical background. That dread often leads them to misuse Big-O and arrive at inaccurate, potentially harmful conclusions. This post highlights some of the common mistakes people make while interpreting Big-O, along with the remedies.

Myth 1

We will take the example of an algorithm that searches for an element in a list by traversing it linearly from one end to the other.

Myth: An O(N) algorithm will take N steps.
Reality: An O(N) algorithm will take a maximum of N steps. It may finish in a single step if the element sought happens to be the first one.

What This Means: The actual time taken by an algorithm does not depend solely on its Big-O, but also on the dataset to which it is applied. You may be able to improve overall efficiency by investing more in the initial data setup and then applying the algorithm to the ordered set multiple times. That decision, however, will change from situation to situation.
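To make the best-case versus worst-case gap concrete, here is a minimal sketch of such a linear search (the function and variable names are illustrative, not from the original post): the number of steps it performs depends entirely on where the target happens to sit.

```python
def linear_search(items, target):
    """Scan the list from one end; O(N) worst case, 1 step best case."""
    for steps, value in enumerate(items, start=1):
        if value == target:
            return steps  # comparisons actually performed
    return len(items)     # target absent: all N elements were examined

data = [7, 3, 9, 1, 5]

print(linear_search(data, 7))  # best case: first element, 1 step
print(linear_search(data, 5))  # worst case: last element, 5 steps
```

Both calls run the same O(N) algorithm, yet one finishes after a single comparison and the other only after inspecting every element.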

 

Myth 2

Myth: An O(N log N) algorithm is always better than an O(N²) algorithm.
Reality: An O(N log N) algorithm is more scalable than an O(N²) algorithm.

What This Means: Big-O notation tells you how an algorithm is going to scale as the size of the dataset increases; it denotes the worst-case time taken when the algorithm is applied to a dataset of N elements. At the same time, a carefully crafted algorithm with worse Big-O scaling (O(N²) in this case) may perform better on the given dataset. In other words, better Big-O scalability is not the same as better efficiency. It is also important to know the lower bound on the running time (Big-Omega), because together with Big-O it gives you the range within which the algorithm will operate, and that range enables better decisions about which algorithm to use. This is further detailed here.
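As an illustration (my own example under simple assumptions, not taken from the post): insertion sort is O(N²) in the worst case, yet on a small or nearly sorted list it can finish faster than an O(N log N) merge sort, because its per-element overhead is tiny and it exploits existing order.

```python
import random
import timeit

def insertion_sort(items):
    """O(N^2) worst case, but close to O(N) on almost-sorted input."""
    result = list(items)
    for i in range(1, len(result)):
        key, j = result[i], i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

def merge_sort(items):
    """O(N log N) in every case, but with a higher constant overhead."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

# A nearly sorted dataset: only a handful of elements are out of place.
data = list(range(1000))
for _ in range(5):
    a, b = random.randrange(1000), random.randrange(1000)
    data[a], data[b] = data[b], data[a]

print("insertion sort:", timeit.timeit(lambda: insertion_sort(data), number=100))
print("merge sort:    ", timeit.timeit(lambda: merge_sort(data), number=100))
```

On this nearly sorted input the O(N²) algorithm usually wins; as the dataset grows larger or more disordered, the O(N log N) algorithm pulls ahead, which is exactly what "more scalable" means.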

 

Myth 3

Myth: Algorithm analysis ends at determining the Big-O and friends.

Reality: Big-O and friends form the base for the actual algorithm analysis.

What This Means: Once the Big-O has been identified, there are still overheads that need to be determined and balanced on a case-specific basis. For example, a particular task may require you to search through a dataset repeatedly during execution. In that case it is prudent to invest more initial setup time to arrange the data as a binary tree, so that the subsequent search operations are quicker. The actual cost then looks like x + y(log N), where x is the initial setup overhead and y is the per-step overhead (which depends on virtual memory, paging, and so on). Only with this complete picture can algorithms be compared on a case-specific basis and the most appropriate implementation chosen.
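A rough sketch of that trade-off under stated assumptions (a sorted list with binary search stands in for the binary tree mentioned above, and the dataset sizes are arbitrary): paying a one-time sorting cost up front turns every subsequent lookup from an O(N) scan into an O(log N) search.

```python
import bisect
import random
import timeit

data = [random.randrange(10**6) for _ in range(50_000)]
queries = [random.randrange(10**6) for _ in range(1_000)]

def repeated_linear(data, queries):
    """No setup cost, but every lookup is an O(N) scan."""
    return sum(q in data for q in queries)

def setup_then_binary(data, queries):
    """Pay the O(N log N) setup once, then each lookup is O(log N)."""
    ordered = sorted(data)                 # the 'x' (setup) overhead
    hits = 0
    for q in queries:                      # roughly 'y * log N' per query
        i = bisect.bisect_left(ordered, q)
        hits += i < len(ordered) and ordered[i] == q
    return hits

print("repeated linear scans:", timeit.timeit(lambda: repeated_linear(data, queries), number=1))
print("sort once + binary search:", timeit.timeit(lambda: setup_then_binary(data, queries), number=1))
```

The more often the data is searched, the more the setup cost x is amortized, which is the kind of case-specific balancing that Big-O alone does not capture.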

In essence, the time complexity of an algorithm is the most general representation of its scalability and efficiency. However, as we apply it to more specific problems, we need to take the nature of the data into account in our decision making to achieve optimal results.

