With Big O notation, you’re trying to measure how an algorithm’s running time grows as its input grows.

An algorithm is a set of steps for a computer to take. It takes data in, processes it, and provides output. Since your input can vary in size, the time your algorithm takes to complete can vary too. As the input gets bigger, the time to complete can grow. Big O notation is a way of measuring that growth.

### Why use Big O Notation at all?

You might think that you could just run your code and measure how long it took. But since your algorithm is just a set of steps, it can be faster or slower depending on the size of its input. The reason programmers use Big O notation instead of just measuring the time taken is that they rarely know the size of the input their algorithm will run on. Without knowing the input size, a single timing measurement tells you very little. However, regardless of the input size, the rate at which the algorithm grows stays consistent. Because of that consistency, the growth rate (Big O notation) is used instead. In addition to being a handy way of measuring the speed of an algorithm, Big O notation is often used to judge the quality of an algorithm, and by extension the skill level of the developer who wrote it.

When writing an algorithm, you also don’t know the quality of your input. For example, if you’ve written a sorting algorithm, then for an input of any given size, the more scrambled that input is, the longer your sort can take to complete. For this reason, Big O notation assumes your algorithm is always operating on the worst input possible.
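A linear search makes the best-case/worst-case split easy to see. This is my own illustrative sketch (the class and method names are mine, not from any library): the best case finds the target in one step, the worst case checks every element, and Big O describes the worst case.

```java
import java.util.List;

public class WorstCase {
    // Linear search: best case, the target is the first element (1 step);
    // worst case, the target is absent and every element is checked (n steps).
    // Big O notation describes the worst case, so this method is O(n).
    public static int indexOf(List<Integer> nums, int target) {
        for (int i = 0; i < nums.size(); i++) {
            if (nums.get(i) == target) {
                return i; // best case: found early
            }
        }
        return -1; // worst case: we inspected all n elements
    }
}
```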

### How do I calculate it?

Below, I’ve provided a list of some of the most common notations with basic code samples that illustrate how to calculate the correct notation for them. The examples below are very simple to emphasize the idea behind each notation. Code in the real world is almost never this simple. You will encounter algorithms with a mix of all the below notations (and more). When that happens, remember that in Big O notation, the biggest/slowest notation always wins. All coefficients and smaller/faster notations are dropped. This is because as the input size grows, the growth rate of the larger/slower notation will far outpace the smaller/faster one, making it irrelevant. For example, if an algorithm’s step count works out to 2N + N², then the notation for that piece of code is O(N²).
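To make the 2N + N² example concrete, here’s a hypothetical sketch of my own (the class and method are invented for illustration) that mixes two linear passes with one quadratic pass. The step counter exists only to show the arithmetic; the whole method is O(N²) because the N² term dominates.

```java
public class MixedComplexity {
    // Two linear passes (n steps each) plus one nested pass (n * n steps):
    // total steps = 2n + n^2, but Big O keeps only the dominant term,
    // so this method is O(n^2).
    public static int countSteps(int n) {
        int steps = 0;
        for (int i = 0; i < n; i++) { // first linear pass: n steps
            steps++;
        }
        for (int i = 0; i < n; i++) { // second linear pass: n steps
            steps++;
        }
        for (int i = 0; i < n; i++) { // quadratic pass: n^2 steps
            for (int j = 0; j < n; j++) {
                steps++;
            }
        }
        return steps; // 2n + n^2
    }
}
```

With n = 10, the quadratic term already contributes 100 of the 120 steps; at n = 1,000 it contributes 1,000,000 of 1,002,000, which is why the smaller terms get dropped.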

### Constant Time: O(1)

```java
public void yourConstantTimeAlgoMethod(String arg) {
    // This algorithm/method runs in constant time
    // because no matter how big the string,
    // it will take the same amount of time to print it.
    // The time is constant because the input size has no
    // effect on the number of steps taken.
    System.out.println(arg);
}

public void yourConstantTimeAlgoMethod2(String arg) {
    // The content of this method can also be said to run in constant time.
    System.out.println(arg);
    System.out.println(arg);
    System.out.println(arg);
    // This method is constant as well: three prints is still a fixed
    // number of steps, taken regardless of input size.
}
```


### Linear Time: O(n)

```java
public void yourLinearTimeAlgoMethod(List<Integer> nums) {
    // This algorithm/method runs in linear time
    // because the time it takes to complete its
    // steps depends on the size of the input it's given.
    // For example, if this method takes one second to print
    // one item, then it will take two seconds to print
    // two items, and so on...
    for (Integer n : nums) {
        System.out.println(n);
    }
}
```


### Quadratic time: O(n²)

```java
public void yourQuadraticTimeAlgoMethod(List<Integer> nums) {
    // This algorithm/method runs in quadratic time
    // because the number of steps it takes is
    // the input size squared.
    // For each element in the nums list, the inner loop
    // prints every element again, so n * n lines are printed.
    // Ex: nums has 4 elements. Each element is the number 1.
    // 4^2 = 16 elements will be printed to the screen.
    // At completion, 16 number 1's will be printed:
    // Console: 1111111111111111
    for (Integer n : nums) {
        for (Integer m : nums) { // renamed from n: Java won't allow
            System.out.println(m); // the inner variable to shadow the outer
        }
    }
}
```

### Exponential Time: O(2ⁿ)

```java
public int yourExponentialTimeAlgoMethod(int n) { // returns int, not void
    // This algorithm/method runs in exponential time
    // because each call above the base case makes
    // two more calls to itself until the number one is reached.
    // Every level of recursion doubles the number of calls,
    // so the total work is roughly 2 to the power of n, or O(2^n).
    if (n <= 1) {
        return 1;
    }
    return yourExponentialTimeAlgoMethod(n - 1)
         + yourExponentialTimeAlgoMethod(n - 1);
}
```

### Logarithmic Time: O(log n)

```java
public int yourLogarithmicTimeAlgoMethod(int locateThisNum, int[] nums) {
    // Logarithmic time is difficult to illustrate simply,
    // so I'll use a binary search (over a sorted array) to demonstrate.
    // This algorithm/method runs in logarithmic time
    // because each comparison halves the range of
    // positions that still needs to be searched.
    int low = 0;
    int high = nums.length - 1;
    while (low <= high) {
        int middle = (low + high) / 2;
        if (locateThisNum > nums[middle]) {
            low = middle + 1;  // discard the left half
        } else if (locateThisNum < nums[middle]) {
            high = middle - 1; // discard the right half
        } else {
            return middle;     // found the searched number
        }
    }
    return -1; // the number isn't in the array
}
```

### Also…

Big O notation is also used to measure the space taken up by an algorithm during its execution (how much memory it uses). Space complexity is important when your code has to run on low-spec devices, like IoT devices. It’s also important when you’re charged for your usage, for example, if your code is running on a cloud service provider like AWS. Businesses care about Big O notation because, at scale, it saves them money. Since they care about it, if you’re going to be a dev, you have to care about it too.

I’m not covering space complexity in depth here, but from what I can gather, the rules are nearly the same. You shift from thinking about how many steps are taken to how much extra memory is allocated as the input grows.
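As a quick sketch of that shift (the class and method names below are mine, invented for illustration): the first method allocates one accumulator no matter how long the list is, so it uses O(1) space; the second builds a new list whose size grows with the input, so it uses O(n) space. Both run in O(n) time.

```java
import java.util.ArrayList;
import java.util.List;

public class SpaceComplexity {
    // O(1) space: only a single int accumulator exists,
    // regardless of how many elements are in the list.
    public static int sum(List<Integer> nums) {
        int total = 0;
        for (int n : nums) {
            total += n;
        }
        return total;
    }

    // O(n) space: a second list is created whose size
    // grows in step with the size of the input.
    public static List<Integer> doubled(List<Integer> nums) {
        List<Integer> result = new ArrayList<>();
        for (int n : nums) {
            result.add(n * 2);
        }
        return result;
    }
}
```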