2. Big Oh notation #

Created Sat May 4, 2024 at 5:54 PM

[!NOTE] Take-Home Lesson: The Big Oh notation and worst-case analysis are tools that greatly simplify our ability to compare the efficiency of algorithms.

Why Big-Oh #

The worst-case, best-case, and average-case complexities are mathematical functions of the input size. However, it's difficult to work precisely with these functions, because they tend to:

  1. Have too many bumps. For example, binary search runs a bit faster when the array size is n = 2^k - 1 (for an integer k), because the array partitions work out evenly. This detail is not very important, but it hints that the exact function is complicated, with little bumps up and down (a small sketch follows this list).
  2. Be difficult to specify precisely - pinning down a complicated worst case like T(n) = 12754n^2 + 4353n + 834 log₂(n) + 13546 would undo most of the gains in simplicity we got from the RAM model, and we'd be doing nearly as much work as if the analysis were machine dependent (since we would be tracking that much detail). The useful observation is that the algorithm grows "quadratically with n"; the remaining terms and coefficients are cumbersome and unhelpful (a quick check also follows this list).
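
A minimal sketch of the first point (my own illustration, not from the notes): count the worst-case number of loop iterations a textbook binary search performs as the array size grows. The count stays flat through each stretch ending at n = 2^k - 1 and then jumps by one, which is exactly the kind of small bump that makes the exact function awkward to state.

```python
def binary_search_steps(arr, target):
    """Number of loop iterations needed to find `target` (or conclude it is absent)."""
    lo, hi, steps = 0, len(arr) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return steps
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

def worst_case_steps(n):
    """Worst case over all hit and miss positions for a sorted array of size n."""
    arr = list(range(n))
    targets = [x / 2 for x in range(-1, 2 * n)]  # ..., -0.5, 0, 0.5, 1, 1.5, ...
    return max(binary_search_steps(arr, t) for t in targets)

print([worst_case_steps(n) for n in range(1, 16)])
# [1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4] -- the flat runs end at n = 1, 3, 7, 15
```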

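A quick check of the second point (illustrative only, using the coefficients quoted above): evaluate T(n) and see what fraction of it the quadratic term contributes as n grows.

```python
import math

def T(n):
    """Direct transcription of the cumbersome worst-case formula above."""
    return 12754 * n**2 + 4353 * n + 834 * math.log2(n) + 13546

for n in (10, 1_000, 1_000_000):
    share = 12754 * n**2 / T(n)
    print(f"n = {n:>9,}: the quadratic term is {share:.5%} of T(n)")
# Roughly 95.5% at n = 10, 99.97% at n = 1,000, and 99.99997% at n = 1,000,000:
# the lower-order terms quickly become noise, so "grows quadratically with n"
# is the only part of the formula worth keeping.
```
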
This means it's much easier to talk in terms of upper and lower bounds on the total number of steps taken (i.e., bounds on that mathematical function).

3 notations #

There are 3 variations of the notation:

Note: