Big O Linear Time Complexity

Is there a computer science topic more terrifying than Big O notation? Don’t let the name scare you: Big O notation is not a big deal. It’s easy to understand, and you don’t need to be a math whiz to do so. In this tutorial, you’ll learn the fundamentals of Big O linear time complexity, with examples in JavaScript.


Be O(#1). Grab your copy of The Little Book of Big O.

Retrieval Practice

Retrieval practice is the surest way to solidify any new learning. Attempt to answer the following questions before proceeding:

  • What is Big O notation?

  • How does Big O notation work?

  • What is O(1), or constant time complexity?

This is the second in a series on Big O notation. If you’re just joining us, you will want to start with the first article in this series, What is Big O Notation?

What is Big O Notation?

Big O notation is a system for measuring the rate of growth of an algorithm. Big O notation equips us with a shared language for discussing performance with other developers (and mathematicians!). Big O notation mathematically describes the complexity of an algorithm in terms of time and space.

The O is short for “Order of”. So, if we’re discussing an algorithm with O(n), we say its order, or rate of growth, is n: linear complexity.

What Problem(s) Does Big O Notation Solve?

Big O notation helps us answer the question, “Can we do better?”

How?

Big O notation measures the upper bound, or worst-case scenario.

Why?

Because we don’t know what we don’t know.

We want to know just how poorly our algorithm will perform so we can compare it to other solutions.

Remember this table?

| O            | Complexity  | Rate of growth |
| ------------ | ----------- | -------------- |
| O(1)         | constant    | fast           |
| O(log n)     | logarithmic |                |
| O(n)         | linear      |                |
| O(n * log n) | log linear  |                |
| O(n^2)       | quadratic   |                |
| O(n^3)       | cubic       |                |
| O(2^n)       | exponential |                |
| O(n!)        | factorial   | slow           |

It lists common orders from fastest to slowest.

As you can see, we are not proceeding in linear fashion. We learned O(1), or constant time complexity, in What is Big O Notation?. We’re going to skip O(log n) for the time being. It will be easier to understand after learning O(n), linear time complexity, and O(n^2), quadratic time complexity.

Before getting into O(n), let’s begin with a quick refresher on O(1), constant time complexity.

O(1): Constant Time Complexity

Constant time complexity, or O(1), is just that: constant. Regardless of the size of the input, the algorithm will always perform the same number of operations to return an output.

Here’s an example we used in the previous tutorial:

const isEven = num => num % 2 === 0;

Our algorithm checks whether a number is even or odd and returns true or false accordingly. It only needs to perform one operation, regardless of the size of the number.
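For example (the inputs here are arbitrary):

isEven(2);          // true — one operation
isEven(2000000000); // true — still just one operation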

Say you’re working with an API that returns a user’s full name in an array, like so:

["Jared", "Nielsen"];

Your task is to get the user’s first name. Easy, in JavaScript:

const getFirstName = data => data[0];

No matter how many times you run your ‘algorithm’, it only needs to perform one operation to return the desired value. That’s O(1), or constant time.

One more example:

const grader = score => {
  if (score < 60) {
    return "Fail!";
  } else if (score > 60) {
    return "Pass!";
  } else {
    return "Living on the edge!";
  }
};

What is our best-case scenario for this algorithm?

If score is less than 60, we will only perform one operation and return.

That would be O(1).

What if score is greater than or equal to 60?

It’s still O(1).

Why?

Even though we check multiple conditions before returning, the rate of growth is constant. We know the upper bound, or worst-case scenario, in advance, and we know it will not change.

Let’s Get Meta 🧠

  • What do we mean by linear time complexity?

  • What is a constant term and why do we drop it?

  • What is amortized analysis?

O(n): Linear Time Complexity

If O(1) performs the same number of operations regardless of the size of the input, what is O(n)?

The heading above gives it away.

Why linear time complexity?

The rate of growth scales in direct proportion to the input.

For n inputs, our algorithm might perform n operations.

It might also perform fewer than n operations, but we don’t know what we don’t know so we calculate its upper bound, or worst-case scenario.
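As a minimal sketch (the function name sumArray is ours, just for illustration), here’s an algorithm that performs one operation for every element in its input:

const sumArray = arr => {
  let total = 0;
  for (let i = 0; i < arr.length; i += 1) {
    total += arr[i]; // one addition per element: n elements, n operations
  }
  return total;
};

sumArray([1, 2, 3, 4]); // 4 elements → 4 additions

Double the size of the array and you double the number of operations. That’s linear.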

How does O(1) differ from O(n)?

Math O’Clock 🧮 🕝

Remember linear equations from algebra?

y = mx + b

This is the slope-intercept form, where m is the slope, or the direction and steepness of the line, and b is the y-intercept, or the point at which the line crosses the vertical axis. In this example, x is the variable, the unknown value that we want to ‘solve for’ (the ‘solution’ being y), and m and b are coefficients, or parameters, that shape the line.

Let’s plug in some values.

y = 2x + 1

If we chart this, the result is a straight line that climbs two units of y for every unit of x and crosses the vertical axis at 1.
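You can verify the constant rate of change by tabulating a few points (a quick sketch):

for (let x = 0; x <= 4; x += 1) {
  console.log(`x = ${x}, y = ${2 * x + 1}`); // y: 1, 3, 5, 7, 9
}

Each step in x increases y by exactly 2, which is why the line is straight.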

If we change our coefficients:

y = 1/2x - 1

And chart the equation, we again get a straight line, just flatter this time, crossing the vertical axis at -1.

What if we drop our coefficients entirely and simply chart x?

Still a straight line.

What if we charted 1?

Also a straight line.

But!

Because the value is constant, not variable, our line does not represent a rate of change, or growth, over time. It’s horizontal. It would be constant whether we charted 1, 10 or 1,000,000.

Speaking of time, math o’clock is over. Back to Big O.

Big O & Constant Terms

Now forget everything you just saw.

The only chart you need to think about is this one:

[Chart: O(n), a straight line representing a linear rate of growth]

We want to think about our algorithms in the abstract, not in terms of a specific implementation.

Why?

We want to know the order of a function so we can determine whether or not our algorithm is a sufficient solution for our problem or if we need to find a more efficient alternative.

We’re not interested in the details. So we drop the constant terms. They don’t provide any meaningful additional information. As we saw above, whether we chart 2n + 1 or just n, we still get a linear rate of growth.
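The same applies in code. Here’s a sketch (the function name is hypothetical) that performs 2n + 1 operations but is still O(n):

const doubleLog = arr => {
  console.log("Here we go!");             // 1 operation
  arr.forEach(item => console.log(item)); // n operations
  arr.forEach(item => console.log(item)); // n more operations
};

That’s 2n + 1 operations in total; drop the constants and we’re left with O(n).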

Big O & Upper Bound

What if our algorithm, say a search function, returns its parameter after one operation? Would that be O(1)?

No. It’s still O(n).

Why?

Remember, with Big O, we measure the worst-case scenario. Because we don’t know what we don’t know, our assumption for any algorithm is its upper bound. In a worst-case scenario, an O(n) algorithm needs to perform its specified operations on every value in the input. When making our time complexity calculation, we want to know just how poorly an algorithm is going to perform.

Say, for example, we have an array of animals:

const animals = ["ocelot", "octopus", "opossum", "orangutan", "orca", "oriole", "oryx", "osprey"];

And let’s say our task is to find the location of a specific animal in the array based on user input:

// Wrapped in a function so the early return is valid JavaScript.
const findAnimal = (animals, userInput) => {
  for (let i = 0; i < animals.length; i += 1) {
    if (animals[i] === userInput) {
      return `Found ${userInput} at ${i}`;
    }
  }
};

If our user searches for “ocelot”, how many operations are performed?

One. It’s the first item in our array, so our program will return after one operation.

That specific operation would be O(1).

But, if our user searches for “osprey”, how many operations are performed?

Eight. That’s our upper bound and worst case scenario. For eight inputs, our algorithm will perform eight operations.

What if our array contained 100 animals?

The worst-case scenario would be 100 operations.

What about 1,000?

10,000?

100,000?

1,000,000?

All O(n).

You can see how linear complexity is fine for small inputs, but becomes a consideration when the size of the input increases.
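If you want to feel this on your own machine, here’s a rough sketch using console.time (timings will vary, and linearSearch is just an illustrative name):

const linearSearch = (arr, target) => {
  for (let i = 0; i < arr.length; i += 1) {
    if (arr[i] === target) {
      return i;
    }
  }
  return -1;
};

const bigArray = Array.from({ length: 1000000 }, (_, i) => i);

console.time('worst case');
linearSearch(bigArray, -1); // -1 is never present, so all n elements are checked
console.timeEnd('worst case');

Try it with 1,000 elements, then 1,000,000: the runtime grows in rough proportion to the input.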

If we want to know the lower bound, or the tight bound, we use two different notations: Big Omega, or Ω, and Big Theta, or Θ. We’ll look at them later, in What’s the Difference Between Big O, Big Omega, and Big Theta?.

pop() Quiz

Here are two ‘trick questions’ to test your knowledge of linear time complexity.

Big O & Conditional Statements

What’s the order of this function?

const lowerList = arr => {
  if (arr.length === 0) {
    return 'Nothing to be done';
  } else {
    return arr.map(i => i.toLowerCase());
  }
};

If our condition is met, it will perform one operation, so it’s constant, or O(1).

If our condition is not met, then we map our array and return it. The length of that array is unknown, but for every element in the array, we perform an operation, so it’s linear, or O(n).

That means our function is either O(1) or O(n).

Which of these is the worst-case scenario, or upper bound?

n

We could calculate it as O(1) + O(n).

But what is 1?

A constant. So we drop it.
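A quick check of both branches (the inputs are arbitrary):

lowerList([]);              // 'Nothing to be done' — the O(1) branch
lowerList(['A', 'B', 'C']); // ['a', 'b', 'c'] — the O(n) branch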

Big O & Successive Iterations

What if our algorithm performed multiple, successive iterations?

const forwardBack = (num) => {
  for (let i = 0; i <= num; i += 1) {
    console.log(i);
  }
  for (let i = num; i >= 0; i -= 1) {
    console.log(i);
  }
};

What is the order of this?

It’s still O(n).

Were you tempted to calculate it as O(2n)? Because O(n) + O(n) = O(2n)?

What is 2?

A constant. So we drop it.
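For example:

forwardBack(3);
// logs 0, 1, 2, 3 and then 3, 2, 1, 0 — two passes over the same range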

We only want to know the order of our function.

But what if our algorithm uses nested iterations?

We’ll find out in the next tutorial.

Big O Linear Time Complexity in JavaScript

Does O(n) scale?

We can do better and worse.

In this tutorial, you learned the fundamentals of Big O linear time complexity with examples in JavaScript. Stay tuned for part three of this series where we’ll look at O(n^2), Big O Quadratic Time Complexity.

If you want to increase your rate of growth, get a copy of The Little Book of Big O.

Big O thanks to Rob Conery

