The Big O Notation (2024)

An algorithm’s performance depends on the number of steps it takes. Computer scientists have borrowed the term ‘Big-O notation’ from mathematics to describe an algorithm’s efficiency precisely. Many self-taught developers and data scientists either settle for a solution that ‘just works’ without considering their code’s performance, or try to optimise it without understanding the basics. Their attempts are either fruitless or have a very small, mostly incidental, impact.

Measuring an algorithm’s complexity may sound intimidating, but it is not a difficult concept to grasp.
In this article, we are going to leave out the mathematical jargon and explain the Big-O concept in an easy-to-understand way. We will refine our understanding with standalone Python snippets, and we will conclude our journey with an all-in-one cheat sheet for future reference.

Time Complexity

Instead of focusing on units of time, Big-O puts the number of steps in the spotlight, taking the hardware factor out of the equation. We are therefore not talking about run time, but about time complexity.
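To make the idea concrete, here is a minimal sketch (the function name is ours, not from the article) that counts steps instead of measuring seconds. The step count depends only on the input size, never on the machine running the code:

```python
def count_steps(data):
    """Sum a sequence while counting the steps performed."""
    steps = 0
    total = 0
    for item in data:
        total += item  # one step per element
        steps += 1
    return steps

print(count_steps(range(10)))      # → 10
print(count_steps(range(10_000)))  # → 10000
```

On a fast laptop or a slow server the prints differ in wall-clock time, but the step counts are identical; that is the quantity Big-O describes.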
⚠ We will not cover space complexity, i.e., how much memory an algorithm takes up. We will talk about it another time :)

Big-O Definition

An algorithm’s Big-O notation is determined by how it responds to different input sizes: for instance, how it performs when we pass it 1 element versus 10,000 elements.

O stands for “Order of”, so O(N) is read “Order of N”: an approximation of how long the algorithm takes given N input elements. It answers the question: “How does the number of steps change as the number of input elements increases?”

O(N) describes how many steps an algorithm takes based on the number of elements it acts upon.

⭐️ It is that simple!
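As a quick illustration (a sketch of ours, with hypothetical function names), compare a constant-time operation with a linear-time one:

```python
def first_element(items):
    # O(1): a single step, no matter how many elements items holds.
    return items[0]

def contains(items, target):
    # O(N): in the worst case, every element is inspected exactly once,
    # so the number of steps grows in lockstep with len(items).
    for item in items:
        if item == target:
            return True
    return False
```

Doubling the input doubles the work for `contains`, while `first_element` is unaffected; that difference in growth is exactly what the notation captures.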

Best vs Worst Scenario
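The same algorithm can take very different numbers of steps depending on the input it receives; by convention, Big-O usually describes the worst case. A minimal linear-search sketch (names are ours) shows both extremes:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for index, item in enumerate(items):
        if item == target:
            return index  # best case: target is first, 1 step → O(1)
    return -1             # worst case: N steps and not found → O(N)

data = list(range(1, 1001))
print(linear_search(data, 1))     # → 0    (best case: first element)
print(linear_search(data, 1000))  # → 999  (worst case: last element)
```

Quoting the worst case gives a guarantee: however unlucky the input, the algorithm will not take more steps than the stated bound.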

