Big-O is a way of expressing the time and space complexity of an algorithm. Complexity here means the rate at which an algorithm's running time or memory grows as its input grows. Unless otherwise stated, complexity is implicitly expressed for the worst case.
For example:
function printArray(n) {
  for(var i = 0; i < n.length; i++) {
    console.log(n[i]);
  }
}
printArray in this instance has O(n) time complexity and O(1) space complexity.
We’ve derived this by observing the loop for(var i = 0; i < n.length; i++). This loop means we go through every single element within our array in a linear fashion: 1, 2, 3 ... n. We store nothing throughout the runtime of this algorithm, so we get constant O(1) space.
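For contrast, suppose we copied every element into a new array as we went (a hypothetical variation, not part of the example above). We would then be holding on to n extra items, so the space complexity would become O(n). A minimal sketch:
function copyArray(n) {
  const copy = [];
  for(let i = 0; i < n.length; i++) {
    copy.push(n[i]); // every element is stored, so memory grows linearly with n
  }
  return copy;
}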
What Big-O Isn’t
The key phrase to remember when describing Big-O is rate of growth. Big-O does not tell us that an O(n) algorithm is always faster than an O(n^2) algorithm; instead it tells us that given a large enough input, the O(n) algorithm will be quicker. In other words, it describes how well our algorithm scales with the number of inputs.
This is the main reason why we drop constants and non-dominant terms. For example, O(2n + \log n) gets reduced to O(n): we just dropped the constant 2 and the non-dominant term \log n.
Recognising Big-O in Code
O(1) / constant
Constant-time algorithms are algorithms that complete in the same amount of time regardless of the input n. This is possible when we can compute the result directly rather than doing any kind of iteration.
An example would be accessing the ith item of an array. An array would have a structure along the lines of [{0, obj1*}, {1, obj2*}, {2, obj3*}]. Assuming the integer index and the pointer are 32 bits each, we can simply compute where in memory the ith item sits with the simple calculation i \cdot (32 + 32) bits from the start of the array.
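A minimal sketch of what constant-time access looks like in code (the array contents here are made up for illustration):
function getItem(arr, i) {
  // the engine computes the memory offset for index i directly,
  // so this takes the same time whether arr holds 10 items or 10 million
  return arr[i];
}

const items = [10, 20, 30, 40];
console.log(getItem(items, 2)); // 30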
O(n) / linear
O(n) is the easiest one to spot, as all this means is that you're walking all of the elements within n.
for(let i = 0; i < n.length; i++) {
  const item = n[i];
  // do something with item...
}
Remember that in Big-O, non-significant terms get dropped, so this could be:
for(let i = 0; i < n.length; i++) {
  const item = n[i];
  // do something with item...
}

for(let i = 0; i < n.length; i++) {
  const item = n[i];
  // do something else with item...
}

for(let i = 0; i < n.length; i++) {
  const item = n[i];
  // do something else with item...
}
In this instance, O(3n) gets reduced to O(n). In Big-O land the constant doesn't matter, but as with a lot of things, treat this as a guide rather than a rule: you wouldn't really write code like this, because in practice you're still wasting CPU cycles by doing multiple passes.
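In practice you would fold that work into a single pass; it's still O(n), but the array is only walked once. A rough sketch:
for(let i = 0; i < n.length; i++) {
  const item = n[i];
  // do something with item...
  // do something else with item...
  // do the third thing with item...
}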
O(n^2) / quadratic
For every item in n, you're iterating n times again, so the work grows as n \cdot n. This is commonly seen in brute-force de-duplication of a collection (a sketch of that follows after the counting example below).
function quadratic(n) {
  var k = 0;
  for(var i = 0; i < n; i++) {
    for(var j = 0; j < n; j++) {
      k++;
    }
  }
  console.log(k);
}

quadratic(1); // 1 x 1 = 1
quadratic(2); // 2 x 2 = 4
quadratic(3); // 3 x 3 = 9
quadratic(4); // 4 x 4 = 16
quadratic(100); // 100 x 100 = 10,000
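The de-duplication case mentioned above has the same shape: for every item, we scan everything we've kept so far looking for a match. A minimal sketch (dedupe is a made-up name for illustration):
function dedupe(n) {
  var result = [];
  for(var i = 0; i < n.length; i++) {
    var seen = false;
    // inner scan over everything kept so far: up to n comparisons per item
    for(var j = 0; j < result.length; j++) {
      if(result[j] === n[i]) {
        seen = true;
        break;
      }
    }
    if(!seen) {
      result.push(n[i]);
    }
  }
  return result;
}

console.log(dedupe([1, 2, 2, 3, 1])); // [1, 2, 3]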
O(n!) / factorial
A classic example of O(n!) is string permutations. There are n \cdot (n-1) \cdot (n-2) \cdot (n-3) ... 1 possible ways of arranging a string of characters. So, given a string of length 5, there are 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 = 120 possible ways of arranging the string.
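A rough sketch of what generating those permutations recursively might look like (permute is a made-up helper for illustration): for each of the n possible first characters, we permute the remaining n - 1 characters, and so on.
function permute(str, prefix = '') {
  if(str.length === 0) {
    console.log(prefix); // one complete arrangement
    return;
  }
  for(let i = 0; i < str.length; i++) {
    // fix one character, then permute the rest
    const rest = str.slice(0, i) + str.slice(i + 1);
    permute(rest, prefix + str[i]);
  }
}

permute('abc'); // prints the 3! = 6 arrangements: abc, acb, bac, bca, cab, cba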
O(log\hspace{1px}n) / logarithm
Logarithm with base 2 is described through the following implication:
\log_2 P = Q \iff 2^Q = P
In layman's terms, \log_2 P is how many times we can divide P by 2 before we get down to 1, or, working it the other way, the power Q that we raise 2 to in order to get a value that is >= P. For example, \log_2 8 = 3, since 8 / 2 / 2 / 2 = 1 and 2^3 = 8.
Logarithmic time complexity is quite common in algorithms. This happens when, on each iteration, we cut the problem space in half: \frac{n}{2}, \frac{n}{4}, \frac{n}{8} ... 1. A classic example is binary search.
For example, take an ordered array [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] and a search value of 1.
The code for our search algorithm would look like:
function binarySearch(haystack, needle) {
  return _binarySearch(haystack, needle, 0, haystack.length - 1);
}

function _binarySearch(haystack, needle, start, end) {
  if(start > end) {
    return undefined; // needle is not in the haystack
  }
  console.log(haystack.slice(start, end + 1)); // for demo purposes only
  const middle = start + Math.floor((end - start) / 2);
  console.log(`middle: ${middle}`);
  if(haystack[middle] === needle) {
    console.log(`found: ${haystack[middle]}`);
    return haystack[middle];
  }
  if(needle > haystack[middle]) {
    // needle is in the upper half, discard everything up to and including middle
    return _binarySearch(haystack, needle, middle + 1, end);
  }
  // needle is in the lower half, discard everything from middle upwards
  return _binarySearch(haystack, needle, start, middle - 1);
}
binarySearch([1,2,3,4,5,6,7,8,9,10], 1);
This would give us:
// iteration 1
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
middle: 4
// iteration 2
[1, 2, 3, 4]
middle: 1
// iteration 3
[1]
middle: 0
found: 1
O(n\hspace{1px}log\hspace{1px}n)
A good example of an O(n\hspace{1px}log\hspace{1px}n) algorithm is merge sort.
void Main()
{
    var list = new int[] { 1, 2, 3, 5, 7, 1, 3 };
    MergeSort(list);
    Console.WriteLine(string.Join(", ", list));
    Console.ReadLine();
}

static void MergeSort(int[] arr)
{
    MergeSort(arr, 0, arr.Length - 1, new int[arr.Length]);
}

static void MergeSort(int[] arr, int start, int end, int[] tmp)
{
    if (start >= end) return;
    var middle = (start + end) >> 1;
    MergeSort(arr, start, middle, tmp);
    MergeSort(arr, middle + 1, end, tmp);
    Merge(arr, start, end, tmp);
}

static void Merge(int[] arr, int start, int end, int[] tmp)
{
    var left = start;
    var mid = (end + start) >> 1;
    var right = mid + 1;
    var endIndx = left;
    while (left <= mid && right <= end)
    {
        if (arr[left] > arr[right])
            tmp[endIndx++] = arr[right++];
        else
            tmp[endIndx++] = arr[left++];
    }
    // copy the rest of the left part
    while (left <= mid)
        tmp[endIndx++] = arr[left++];
    // copy the rest of the right part
    while (right <= end)
        tmp[endIndx++] = arr[right++];
    // place the merged result back into our original array
    for (var i = start; i <= end; i++)
        arr[i] = tmp[i];
}
We can observe that we are splitting the input in half on every recursive call to MergeSort, and every time we split, we also do a call to Merge. Merge by itself is O(n). So in summary: the recursion is O(log\hspace{1px}n) levels deep, and at every level we also Merge, which has a time complexity of O(n), giving us O(n\hspace{1px}log\hspace{1px}n).
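Another way to see the same result is through the standard recurrence for merge sort: each call does O(n) work in Merge plus two recursive calls on halves of the input, i.e. T(n) = 2T(\frac{n}{2}) + O(n). The recursion unrolls into roughly \log_2 n levels, and the merging at each level adds up to O(n) work, which again gives O(n\hspace{1px}log\hspace{1px}n).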
Relevance In Everyday Programming
One of the great things about learning Big-O is that it’s a fundamental concept of computing, meaning it’s something that never changes and applies to every technology that you work with.
This gives us knowledge of what's computationally possible, and a pretty good guide as to how an algorithm will scale given the length of its input(s) and how the input is ordered.
Having knowledge of Big-O also makes us better at problem-solving in general, at both the micro (code) level and the macro (design and infrastructure) level. We're better equipped to pick the optimal data structure for our problems, and we're able to reason about what the best possible outcome could be and try to get as close to it as possible.
As with anything else, just because you can doesn't mean you should. Optimising up to a point is always acceptable, but there is such a thing as premature optimisation and over-optimisation. Optimising costs time and cognitive effort, and over-optimised algorithms also take more cognitive effort to read, so it's always best to measure first and know the exact problem you're solving. Knowing when to optimise is just as important as knowing how to optimise.