Essential Programming | Time Complexity (2024)

In computer programming, as in other aspects of life, there are different ways of solving a problem. These different ways may require different amounts of time, computational power, or any other resource you choose to measure, so we need to compare the efficiency of different approaches to pick the right one.

Now, as you may know, computers are able to solve problems based on algorithms.

Algorithms are procedures or instructions (set of steps) that tell a computer what to do and how to do it.

Nowadays, algorithms have evolved so much that they may be considerably different even when accomplishing the same task. In the most extreme case (which is quite usual, by the way), different algorithms programmed in different programming languages may tell different computers with different hardware and operating systems to perform the same task in completely different ways. That’s crazy, isn’t it?

The thing is that while one algorithm may take seconds to finish, another may take minutes even with small data sets. How can we compare different performances and pick the best algorithm to solve a particular problem?

Fortunately, there are ways of doing this, and we don’t need to wait and see the algorithm at work to know if it can get the job done quickly or if it’s going to collapse under the weight of its input. When we consider the complexity of an algorithm, we shouldn’t really care about the exact number of operations that are performed; instead, we should care about how the number of operations relates to the problem size. Think about it: if the problem size doubles, does the number of operations stay the same? Do they double? Do they increase in some other way? To answer these questions, we need to measure the time complexity of algorithms.

Time complexity represents the number of times a statement is executed as a function of the input size. The time complexity of an algorithm is NOT the actual time required to execute a particular piece of code, since that depends on other factors like the programming language, operating system, and processing power. The idea behind time complexity is to measure the cost of running the algorithm in a way that depends only on the algorithm itself and its input.

To express the time complexity of an algorithm, we use something called the “Big O notation”. The Big O notation is a language we use to describe the time complexity of an algorithm. It’s how we compare the efficiency of different approaches to a problem, and helps us to make decisions.

Big O notation expresses the run time of an algorithm in terms of how quickly it grows relative to the input (this input is called “n”). This way, if we say for example that the run time of an algorithm grows “on the order of the size of the input”, we would state that as “O(n)”. If we say that the run time of an algorithm grows “on the order of the square of the size of the input”, we would express it as “O(n²)”. But what does that mean exactly?

The key to understanding time complexity is understanding the rates at which things can grow. The rate in question here is time taken per input size. There are different types of time complexities, so let’s check the most basic ones.

Constant Time Complexity: O(1)

When time complexity is constant (notated as “O(1)”), the size of the input (n) doesn’t matter. Algorithms with Constant Time Complexity take a constant amount of time to run, independently of the size of n. They don’t change their run-time in response to the input data, which makes them the fastest algorithms out there.

For example, you’d use an algorithm with constant time complexity if you wanted to know whether a number is odd or even. No matter if the number is 1 or 9 billion (the input “n”), the algorithm performs the same operation only once and returns the result.

Also, if you wanted to print out a phrase like the classic “Hello World” once, you’d do that too in constant time, since the number of operations (in this case, 1) will remain the same for this or any other phrase, no matter which operating system or machine configuration you are using.
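
For instance, a minimal Python sketch of that odd/even check could look like the following (the function name is just illustrative):

    def is_even(n):
        # One arithmetic operation and one comparison, no matter how big n is: O(1).
        return n % 2 == 0

    print(is_even(7))              # False
    print(is_even(9_000_000_000))  # True, and it takes the same time as the call above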

To remain constant, these algorithms shouldn’t contain loops, recursion, or calls to any other non-constant-time function. For constant time algorithms, the run-time doesn’t grow with the input: the order of magnitude is always 1.

Linear Time Complexity: O(n)

When time complexity grows in direct proportion to the size of the input, you are facing Linear Time Complexity, or O(n). Algorithms with this time complexity will process the input (n) in “n” number of operations. This means that as the input grows, the algorithm takes proportionally longer to complete.

These are the types of situations where you have to look at every item in a list to accomplish a task (e.g. find the maximum or minimum value). Or you can think about everyday tasks like reading a book or finding a CD (remember them?) in a CD stack: if all the data has to be examined, the larger the input, the higher the number of operations.
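
As a rough illustration, a Python sketch for finding the maximum value in a list might look like this (names are illustrative):

    def find_maximum(items):
        # Every element is visited exactly once, so the work grows linearly with the input: O(n).
        maximum = items[0]
        for value in items:
            if value > maximum:
                maximum = value
        return maximum

    print(find_maximum([7, 2, 9, 4]))  # 9: four elements, four comparisons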

Linear running time algorithms are very common, and they reflect the fact that the algorithm visits every element of the input.

Logarithmic Time Complexity: O(log n)

Algorithms with this complexity make computation amazingly fast. An algorithm is said to run in logarithmic time if its execution time is proportional to the logarithm of the input size. In other words, the work grows much more slowly than the input: each time “n” doubles, only a constant number of extra steps is needed.

What’s the secret? These types of algorithms never have to go through all of the input, since they usually work by discarding large chunks of unexamined input at each step. This time complexity is generally associated with algorithms that divide the problem in half every time, a concept known as “Divide and Conquer”. Divide and Conquer algorithms solve problems using the following steps:

1. They divide the given problem into sub-problems of the same type.
2. They recursively solve these sub-problems.
3. They appropriately combine the sub-answers to answer the given problem.

Consider this example: let’s say that you want to look for a word in a dictionary that has every word sorted alphabetically. There are at least two algorithms to do that:

Algorithm A:

  • Starts at the beginning of the book and goes through every word in order until it finds the word you are looking for.

Algorithm B:

  • Opens the book in the middle and checks the first word on it.
  • If the word that you are looking for is alphabetically bigger, it looks in the right half; otherwise, it looks in the left half. It then repeats the process on that half until it finds the word.

Which one is faster? While algorithm A goes word by word, taking O(n), algorithm B splits the problem in half on each iteration, taking O(log n), and reaches the same result in a much more efficient way.
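
As an illustrative sketch of algorithm B, a binary search over a sorted list could look roughly like this in Python (names are illustrative):

    def binary_search(sorted_words, target):
        # Each iteration halves the remaining search space, so the running time is O(log n).
        low, high = 0, len(sorted_words) - 1
        while low <= high:
            middle = (low + high) // 2
            if sorted_words[middle] == target:
                return middle
            elif sorted_words[middle] < target:
                low = middle + 1    # discard the left half
            else:
                high = middle - 1   # discard the right half
        return -1

    words = ["apple", "banana", "cherry", "date", "fig", "grape"]
    print(binary_search(words, "date"))  # 3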

Logarithmic time algorithms (O(log n)) are the second quickest ones after constant time algorithms (O(1)).

Quadratic Time Complexity: O(n²)

In this type of algorithm, the running time grows in direct proportion to the square of the size of the input (like linear, but squared).

In most scenarios and particularly for large data sets, algorithms with quadratic time complexities take a lot of time to execute and should be avoided.

Nested for loops run in quadratic time, because you’re running a linear operation within another linear operation: n * n, which equals n².
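
For example, a small illustrative Python sketch (a simple duplicate check, not an example from the original text) shows the n * n pattern:

    def has_duplicate(items):
        # Every element is compared with every other element: n * n comparisons, so O(n^2).
        for i in range(len(items)):          # outer loop: n iterations
            for j in range(len(items)):      # inner loop: n iterations for each outer one
                if i != j and items[i] == items[j]:
                    return True
        return False

    print(has_duplicate([3, 1, 4, 1, 5]))  # True: the value 1 appears twice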

If you face these types of algorithms, you’ll either need a lot of resources and time, or you’ll need to come up with a better algorithm.

Exponential Time Complexity: O(2^n)

In exponential time algorithms, the growth rate doubles with each addition to the input (n), often because they iterate through all subsets of the input elements. Every time the input grows by one unit, the number of operations performed doubles. This doesn’t sound good, right?

Algorithms with this time complexity are usually used in situations where you don’t know that much about the best solution, and you have to try every possible combination or permutation on the data.
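
As a rough sketch of “trying every possible combination”, the illustrative Python function below generates every subset of its input; with n elements there are 2^n subsets, so adding a single element doubles the work:

    def all_subsets(items):
        # For n elements there are 2^n subsets, so the running time is O(2^n).
        if not items:
            return [[]]
        subsets_of_rest = all_subsets(items[1:])
        return subsets_of_rest + [[items[0]] + subset for subset in subsets_of_rest]

    print(all_subsets([1, 2, 3]))  # 2^3 = 8 subsets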

Exponential time complexity is usually seen in Brute-Force algorithms. These algorithms blindly iterate over an entire domain of possible solutions in search of one or more that satisfy a condition: they simply try every possible solution until they happen to find the correct one. This is obviously not an optimal way of performing a task, and it shows in the time complexity. Brute-Force algorithms are used in cryptography as attack methods to defeat password protection, trying string after string until they find the one that unlocks the system.

As with quadratic time complexity, you should avoid algorithms with exponential running times, since they don’t scale well.

Generally speaking, we’ve seen that the fewer operations the algorithm has, the faster it will be. This looks like a good principle, but how can we apply it to reality?

If we have an algorithm (whatever it is), how do we know its time complexity?

In some cases, this may be relatively easy. Let’s say you have an outer for loop that iterates through all the items in the input list, and a nested inner for loop that again iterates through all the items. The total number of steps performed is n * n, where n is the number of items in the input list.

But how do you find the time complexity of complex functions?

To find the answer, we need to break the algorithm’s code down into parts and try to find the complexity of the individual pieces. Yes, sorry to tell you, but there isn’t a button you can press that tells you the time complexity of an algorithm. You have to do it yourself.
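
As a hedged illustration, consider a hypothetical function broken into parts, where each part is analyzed separately and only the fastest-growing term is kept:

    def analyze(items):
        total = 0
        for value in items:          # first part: a single loop, O(n)
            total += value

        pairs = 0
        for x in items:              # second part: nested loops, O(n^2)
            for y in items:
                if x < y:
                    pairs += 1
        return total, pairs

    # O(n) + O(n^2) simplifies to O(n^2): the fastest-growing part dominates.
    print(analyze([1, 2, 3]))  # (6, 3)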

As a rule of thumb, it is best to try and keep your functions running below or within the range of linear time-complexity, but obviously it won’t always be possible.

Complexity can be analyzed for the “best case”, the “average case”, and the “worst case”, but what really matters is the worst case scenario; those are the situations that can seriously crash everything. They go right to the heart of why time complexity matters and point to why some algorithms simply cannot solve a problem without taking a few billion years to do it.

Worst case analysis gives the maximum number of basic operations that have to be performed during execution of the algorithm. It assumes that the input is in the worst possible state and maximum work has to be done to put things right. For example, for a sorting algorithm that aims to sort an array in ascending order, the worst case occurs when the input array is in descending order: the maximum number of basic operations (comparisons and assignments) has to be done to set the array in ascending order. Think of it this way: if you had to search for a name in a directory by reading every name until you found the right one, the worst case scenario is that the name you want is the very last entry in the directory.
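
A minimal illustrative sketch of that directory lookup (assuming the directory is a plain Python list of names) makes the best and worst cases visible:

    def find_name(directory, name):
        # Best case: the name is the first entry (1 comparison).
        # Worst case: the name is the last entry or missing (n comparisons), i.e. O(n).
        for position, entry in enumerate(directory):
            if entry == name:
                return position
        return -1

    names = ["Ada", "Grace", "Linus", "Margaret"]
    print(find_name(names, "Ada"))       # 0: best case, found immediately
    print(find_name(names, "Margaret"))  # 3: worst case, every entry is examined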

To sum up, the better the time complexity of an algorithm, the faster it will carry out the work in practice. You should take this into account when designing or choosing algorithms, and remember that it can make the difference between an algorithm that is practical and one that is completely useless.

FAQs

Essential Programming | Time Complexity?

To express the time complexity of an algorithm, we use something called the “Big O notation”. (In computer science, the computational complexity of an algorithm is the amount of resources required to run it, with particular focus on computation time and memory storage; see https://en.wikipedia.org/wiki/Computational_complexity.) The Big O notation is a language we use to describe the time complexity of an algorithm. It’s how we compare the efficiency of different approaches to a problem, and it helps us make decisions.

Is Big O the worst-case?

Big O, also known as Big O notation, represents an algorithm's worst-case complexity. It uses algebraic terms to describe the complexity of an algorithm. Big O defines the runtime required to execute an algorithm by identifying how the performance of your algorithm will change as the input size grows.

Which is better, O(n) or O(n log n)?

For higher values of n, n * log(n) becomes greater than n, and that is why O(n log n) > O(n). No matter how two functions behave for small values of n, they are compared against each other when n is large enough: theoretically, there is an N such that for every n > N, n * log(n) >= n.

What is the time complexity of programming?

Time complexity is defined as the amount of time taken by an algorithm to run, as a function of the length of the input. It measures the time taken to execute each statement of code in an algorithm, rather than the total execution time of the program.

What are the three main types of program complexity?

The three main types of program complexity are structural complexity, data complexity, and conditional complexity.

Is Big O Useless?

It's not useless, but it's for analyzing algorithmic complexity, not for analyzing algorithmic performance. Big-O assumes all "operations" are equally costly.

Is Little O better than Big O?

A little-o bound is a stronger condition than a big-O bound. Big-O is an upper bound: f(x) is O(g(x)) if |f(x)| < c·g(x) for some constant c and sufficiently large x.

What is the disadvantage of Big O?

The big-O notation loses some information, since it ignores constant multipliers and additive terms. If we have two algorithms and one of them is a hundred times faster, they still have the same estimate of the running time in big-O notation.

Is an O(1) algorithm the fastest?

O(1) means that it is the fastest it can go when contrasted with worse complexities such as O(n log n). But that doesn’t mean it cannot improve: even a constant time algorithm may do unnecessary extra work and be subject to refactoring.

What is the most efficient algorithm complexity?

  • O(1): Constant time complexity — the most efficient, because the input size does not affect the algorithm’s performance.
  • O(log n): Logarithmic complexity.
  • O(n): Linear complexity — the most common, because the running time grows in direct proportion to the input size.

Is log n the fastest?

No. If one algorithm runs in N/100 and the other one in (log N)*100, then the second one will be slower for smaller input sizes. Asymptotic complexities are about the behavior of the running time as the input size goes to infinity.

Which time complexity is the fastest?

The common algorithmic runtimes from fastest to slowest are:
  • constant: Θ(1)
  • logarithmic: Θ(log N)
  • linear: Θ(N)
  • polynomial: Θ(N^2)
  • exponential: Θ(2^N)
  • factorial: Θ(N!)

What is the best case time complexity?

Best Case: It is defined as the condition that allows an algorithm to complete statement execution in the shortest amount of time. In this case, the execution time serves as a lower bound on the algorithm's time complexity.

What is superpolynomial time?

An algorithm requires superpolynomial time if, using little-omega notation, it runs in ω(n^c) time for all constants c, where n is the input parameter, typically the number of bits in the input. For example, an algorithm that runs for 2^n steps on an input of size n requires superpolynomial time (more specifically, exponential time).

What are the four complexity classes?

The four classes usually listed are NP, Co-NP, NP-hard, and NP-complete:
  • NP: “yes” answers can be checked in polynomial time.
  • Co-NP: “no” answers can be checked in polynomial time.
  • NP-hard: at least as hard as the hardest problems in NP, and not necessarily checkable in polynomial time.
  • NP-complete: problems that are both in NP and NP-hard.
