Big-O Notation

Updated Feb 06, 2024 · #dsa #algorithms

It's useful to know how fast an algorithm is and how much space it needs. This allows you to pick the right algorithm for the job.

Big-O notation gives you a rough indication of the running time of an algorithm and the amount of memory it uses. When someone says, "This algorithm has a worst-case running time of O(n^2) and uses O(n) space," they mean it's kinda slow but doesn't need lots of extra memory.

Categories

Figuring out the Big-O of an algorithm is usually done through mathematical analysis. We're skipping the math here, but it's useful to know what the different values mean, so here's a handy list of categories. The number n refers to the size of the input data you're processing; for example, when sorting an array of 100 items, n = 100.

  1. Constant O(1)

This is the best. The algorithm always takes the same amount of time, regardless of how big the input is. The most common example is accessing an array by index; another is pushing onto and popping from a stack.

let value = array[5]  // a single operation, no matter how large the array is
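
Pushing and popping work the same way when you back a stack with a Swift array: both operations only ever touch the top element. A minimal sketch (append is amortized O(1), since the array occasionally reallocates its storage):

var stack = [Int]()
stack.append(10)           // push: amortized O(1)
let top = stack.popLast()  // pop: O(1), returns Optional(10)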
  2. Logarithmic O(log n)

Pretty great. These kinds of algorithms halve the amount of data with each iteration. If you have 100 items, it takes about 7 steps to find the answer. With 1,000 items, it takes 10 steps. And 1,000,000 items only take 20 steps. This is super fast even for large amounts of data.

Binary search is a classic example of O(log n) complexity. The loop below takes the same number of steps: instead of incrementing by 1, j doubles on each pass, so it reaches n after about log2(n) iterations.

var j = 1
while j < n {
  // do constant time stuff
  j *= 2
}
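
Binary search itself works the pattern in reverse: each comparison throws away half of the remaining range. Here is a minimal iterative sketch, assuming the array is sorted in ascending order (the function name is illustrative, not from a library):

func binarySearch(_ array: [Int], for target: Int) -> Int? {
  var low = 0
  var high = array.count      // search the half-open range [low, high)
  while low < high {
    let mid = low + (high - low) / 2
    if array[mid] == target {
      return mid              // found the target
    } else if array[mid] < target {
      low = mid + 1           // discard the lower half
    } else {
      high = mid              // discard the upper half
    }
  }
  return nil                  // target is not in the array
}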
  3. Linear O(n)

Good performance. If you have 100 items, this does 100 units of work. Doubling the number of items makes the algorithm take exactly twice as long (200 units of work). Array traversal and linear search are examples of O(n) complexity.

for i in stride(from: 0, to: n, by: 1) {
  print(array[i])
}
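
Linear search makes the worst case easy to see: it may have to inspect every one of the n elements before it can answer. A quick sketch (the function name is illustrative):

func linearSearch(_ array: [Int], for target: Int) -> Int? {
  for (index, element) in array.enumerated() {
    if element == target {
      return index  // best case: first element; worst case: n comparisons
    }
  }
  return nil
}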
  4. Linearithmic O(n log n)

Decent performance. This is slightly worse than linear but not too bad. Merge Sort and Heap Sort are examples of O(n log n) complexity.

for i in stride(from: 0, to: n, by: 1) {
  var j = 1
  while j < n {
    // do constant time stuff
    j *= 2
  }
}

// OR
func index(after i: Int) -> Int? { // doubles `i`, stopping once `i` >= `n`
  return i < n ? i * 2 : nil
}
for i in stride(from: 0, to: n, by: 1) {
  for j in sequence(first: 1, next: index(after:)) {
    // do constant time stuff
  }
}
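
To see where the n log n comes from in a real algorithm, here is a compact top-down Merge Sort sketch: the recursion splits the array about log n times, and each level of recursion does O(n) work merging the halves back together.

func mergeSort(_ array: [Int]) -> [Int] {
  guard array.count > 1 else { return array }
  let mid = array.count / 2
  let left = mergeSort(Array(array[..<mid]))   // about log n levels of splitting
  let right = mergeSort(Array(array[mid...]))
  var merged: [Int] = []
  merged.reserveCapacity(array.count)
  var i = 0, j = 0
  while i < left.count && j < right.count {    // O(n) merge work per level
    if left[i] <= right[j] {
      merged.append(left[i]); i += 1
    } else {
      merged.append(right[j]); j += 1
    }
  }
  merged.append(contentsOf: left[i...])
  merged.append(contentsOf: right[j...])
  return merged
}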
  5. Quadratic O(n^2)

Kinda slow. If you have 100 items, this does 100^2 = 10,000 units of work. Doubling the number of items makes it four times slower (because 2 squared equals 4). Examples: algorithms with two nested loops, such as insertion sort and bubble sort, or traversing a simple 2-D array.

for i in stride(from: 0, to: n, by: 1) {
  for j in stride(from: 0, to: n, by: 1) {
    // do constant time stuff
  }
}
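
Insertion sort shows the nested-loop pattern in practice: the outer loop visits each element once, and the inner loop shifts it left past all of its larger neighbors, which in the worst case (a reverse-sorted array) means about n * n steps in total. A minimal sketch:

func insertionSort(_ array: inout [Int]) {
  guard array.count > 1 else { return }
  for i in 1..<array.count {               // outer loop: n - 1 passes
    let value = array[i]
    var j = i
    while j > 0 && array[j - 1] > value {  // inner loop: up to i shifts
      array[j] = array[j - 1]
      j -= 1
    }
    array[j] = value                       // drop value into its sorted spot
  }
}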
  6. Cubic O(n^3)

Poor performance. If you have 100 items, this does 100^3 = 1,000,000 units of work. Doubling the input size makes it eight times slower. Example: straightforward matrix multiplication.

for i in stride(from: 0, to: n, by: 1) {
  for j in stride(from: 0, to: n, by: 1) {
    for k in stride(from: 0, to: n, by: 1) {
      // do constant time stuff
    }
  }
}
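
Straightforward matrix multiplication is the classic triple-nested-loop example. A sketch assuming square n x n matrices stored as nested arrays:

func multiply(_ a: [[Double]], _ b: [[Double]]) -> [[Double]] {
  let n = a.count
  var result = Array(repeating: Array(repeating: 0.0, count: n), count: n)
  for i in 0..<n {
    for j in 0..<n {
      for k in 0..<n {
        // n * n * n multiply-adds in total: O(n^3)
        result[i][j] += a[i][k] * b[k][j]
      }
    }
  }
  return result
}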
  7. Exponential O(2^n)

Very poor performance. You want to avoid these kinds of algorithms, but sometimes you have no choice. Adding just one element to the input doubles the running time. Example: the traveling salesperson problem.

Algorithms with running time O(2^n) are often recursive algorithms that solve a problem of size n by recursively solving two smaller problems of size n - 1. The following example prints all the moves necessary to solve the famous Towers of Hanoi problem for n disks.

func solveHanoi(n: Int, from: String, to: String, spare: String) {
  guard n >= 1 else { return }
  // Move the top n - 1 disks out of the way, onto the spare peg.
  solveHanoi(n: n - 1, from: from, to: spare, spare: to)
  // Move the largest remaining disk to its destination.
  print("Move disk \(n) from \(from) to \(to)")
  // Move the n - 1 smaller disks from the spare peg onto the destination.
  solveHanoi(n: n - 1, from: spare, to: to, spare: from)
}
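
Calling it for three disks prints the 2^3 - 1 = 7 required moves; each extra disk roughly doubles that count, which is exactly the O(2^n) growth:

solveHanoi(n: 3, from: "A", to: "C", spare: "B")
// Move disk 1 from A to C
// Move disk 2 from A to B
// Move disk 1 from C to B
// Move disk 3 from A to C
// Move disk 1 from B to A
// Move disk 2 from B to C
// Move disk 1 from A to C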
  8. Factorial O(n!)

Intolerably slow. Even modest inputs take practically forever: 20 items already mean 20! ≈ 2.4 quintillion units of work. The function below makes n recursive calls, each on a problem of size n - 1, so the total number of calls is n * (n - 1) * (n - 2) * ... * 1 = n!.

func nFactFunc(n: Int) {
  for _ in stride(from: 0, to: n, by: 1) {
    // n recursive calls, each on a problem of size n - 1: T(n) = n * T(n - 1) = O(n!)
    nFactFunc(n: n - 1)
  }
}
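
For a more concrete O(n!) example, consider generating every permutation of a list: there are n! of them, so no algorithm that produces them all can do better. A minimal recursive sketch (the function name is illustrative):

func permutations<T>(_ items: [T]) -> [[T]] {
  guard items.count > 1 else { return [items] }
  var result: [[T]] = []
  for (index, item) in items.enumerated() {
    var rest = items
    rest.remove(at: index)
    // n choices for the first element, times (n - 1)! orderings of the rest
    for tail in permutations(rest) {
      result.append([item] + tail)
    }
  }
  return result
}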

Best practices

Often you don't need math to figure out the Big-O of an algorithm; you can simply use your intuition. If your code uses a single loop that looks at all n elements of your input, the algorithm is O(n). If the code has two nested loops that each run n times, it is O(n^2). Three nested loops gives O(n^3), and so on.
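
One caveat for the intuition approach: loops that run one after the other add their costs instead of multiplying them, so only nesting raises the exponent:

// Two loops in sequence: O(n) + O(n), which is still O(n).
for i in stride(from: 0, to: n, by: 1) { /* do constant time stuff */ }
for i in stride(from: 0, to: n, by: 1) { /* do constant time stuff */ }

// Two loops nested: O(n) * O(n) = O(n^2).
for i in stride(from: 0, to: n, by: 1) {
  for j in stride(from: 0, to: n, by: 1) { /* do constant time stuff */ }
}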

Note that Big-O notation is an estimate and is only really useful for large values of n. For example, the worst-case running time for Insertion Sort is O(n^2). In theory that is worse than the running time for Merge Sort, which is O(n log n). But for small amounts of data, Insertion Sort is actually faster, especially if the array is partially sorted already.

If you find this confusing, don't let this Big-O stuff bother you too much. It's mostly useful when comparing two algorithms to figure out which one is better. But in the end you still want to test in practice which one really is the best. And if the amount of data is relatively small, then even a slow algorithm will be fast enough for practical use.