There are many possible ways to measure how successful an algorithm is: correctness, efficiency and simplicity are all valid aspects to evaluate. In this course, we will be looking in depth at the runtime of an algorithm.
The runtime can vary widely based on several factors, such as the input, the code implementation and the machine used to execute the program. We will be looking at different algorithms at a high level; to account for different languages and implementations, we examine these algorithms in pseudocode.
Worst case vs average case
When measuring runtime, there are many different perspectives that we can take.
The worst case of an algorithm is the maximum number of computations performed over all inputs of size $n$. This gives a guaranteed upper bound, but it can be pessimistic when trying to predict real-world behaviour.
The average case of an algorithm is the average number of computations performed on an input of size $n$, under some assumed distribution of inputs. This can be a more realistic estimate of an algorithm's behaviour, but there is debate about what 'average' actually means, and the calculation is often much harder, or sometimes infeasible.
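As a small sketch of the distinction (my own illustration, assuming the target is equally likely to be at any position), linear search performs $n$ comparisons in the worst case but roughly $n/2$ on average:

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if it is absent."""
    for i, value in enumerate(arr):
        if value == target:
            return i  # best case: 1 comparison; average case: ~n/2; worst case: n
    return -1
```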
Big-O notation
The formal definition is:
Let $f, g : \mathbb{N} \to \mathbb{R}$ be functions. We say that $f$ is $O(g)$ if there is some $n_0 \in \mathbb{N}$ and some $c > 0$ in $\mathbb{R}$ such that for all $n \geq n_0$ we have:
$$0 \leq f(n) \leq c\,g(n)$$
In plain terms, '$f$ is $O(g)$' means that for sufficiently large $n$, $f(n)$ is bounded above by a constant multiple of $g(n)$. If $f$ is the runtime of an algorithm, this tells us that its growth rate is no greater than that of $g$: big-O gives an upper bound.
When writing this in practice, we use the form $f = O(g)$. For example:
$3n^3 = O(n^3)$ with $c = 3$ and $n_0 = 0$
$3n^2 + 6 = O(n^2)$ with $c = 9$ and $n_0 = 1$
$\lg(n) = O(n)$ with $c = 1$ and $n_0 = 1$
$8n^2 + 150n + 10000 = O(n^2)$ with $c = 10158$ and $n_0 = 1$
There may be more than one correct set of values for c and n0 in each case.
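As a quick sanity check (a sketch I have added, not part of the notes), we can verify numerically that the chosen constants satisfy the definition, e.g. for $3n^2 + 6 = O(n^2)$ with $c = 9$ and $n_0 = 1$:

```python
# Check 0 <= f(n) <= c*g(n) for all n in a sampled range n >= n0.
f = lambda n: 3 * n**2 + 6
g = lambda n: n**2
c, n0 = 9, 1

assert all(0 <= f(n) <= c * g(n) for n in range(n0, 10_000))
```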
Laws of big-O
For any constant $a > 0$ in $\mathbb{R}$: $f_1 = O(g_1) \Rightarrow a f_1 = O(g_1)$
$f_1 = O(g_1)$ and $f_2 = O(g_2) \Rightarrow f_1 + f_2 = O(g_1 + g_2)$
$f_1 = O(g_1)$ and $f_2 = O(g_2) \Rightarrow f_1 f_2 = O(g_1 g_2)$
$f_1 = O(g_1)$ and $g_1 = O(g_2) \Rightarrow f_1 = O(g_2)$
For any $d \in \mathbb{N}$: if $f_1$ is a polynomial of degree $d$ with positive leading coefficient then $f_1 = O(n^d)$
For any constants $a > 0$ and $b > 1$ in $\mathbb{R}$: $n^a = O(b^n)$
For any constant $a > 0$ in $\mathbb{R}$: $\lg(n^a) = O(\lg(n))$
For any constants $a > 0$ and $b > 0$ in $\mathbb{R}$: $\lg^a(n) = O(n^b)$
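As a brief worked example (my own illustration, using only the laws above), consider $f(n) = 4n^2 + n\lg(n)$:

$$\lg(n) = O(n) \Rightarrow n\lg(n) = O(n \cdot n) = O(n^2) \quad \text{(log and product laws)}$$
$$4n^2 = O(n^2) \quad \text{(constant-factor law)}$$
$$\Rightarrow 4n^2 + n\lg(n) = O(n^2 + n^2) = O(n^2) \quad \text{(sum law)}$$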
Big-Ω notation
The formal definition is:
Let $f, g : \mathbb{N} \to \mathbb{R}$ be functions. We say that $f$ is $\Omega(g)$ if there is some $n_0 \in \mathbb{N}$ and some $c > 0$ in $\mathbb{R}$ such that for all $n \geq n_0$ we have:
$$0 \leq c\,g(n) \leq f(n)$$
Where big-O sets an upper bound, this tells us that the growth rate of the runtime of an algorithm $f$ is no less than that of $g$: big-$\Omega$ sets a lower bound.
Big-Θ notation
Now that you can work out the upper and lower bounds of a given algorithm, there is one more notation to be aware of. An algorithm $f$ is said to be $\Theta(g)$ if $f = O(g)$ and $f = \Omega(g)$; that is, $g$ bounds the growth rate of $f$ both above and below, giving a tight bound.
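As an illustrative check (a sketch with constants chosen by me), $\frac{n^2}{2} - 3n = \Theta(n^2)$ with $c_1 = \frac{1}{4}$, $c_2 = \frac{1}{2}$ and $n_0 = 12$:

```python
# Verify 0 <= c1*g(n) <= f(n) <= c2*g(n) for all sampled n >= n0.
f = lambda n: n**2 / 2 - 3 * n
g = lambda n: n**2
c1, c2, n0 = 0.25, 0.5, 12

assert all(0 <= c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
```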
The Master Theorem
The Master Theorem is used for the asymptotic analysis of algorithms whose runtime is described by a recurrence relation. The theorem is given below:
Let $n_0 \in \mathbb{N}$, $k \in \mathbb{N}_0$ and $a, b \in \mathbb{R}$ with $a > 0$ and $b > 1$, and let $T : \mathbb{N} \to \mathbb{N}$ satisfy the following condition:
$$T(n) = \begin{cases} \Theta(1), & \text{if } n < n_0 \\ a\,T\!\left(\frac{n}{b}\right) + \Theta(n^k), & \text{if } n \geq n_0 \end{cases}$$
Let $e = \log_b(a)$; we call $e$ the critical exponent. Then:
$$T(n) = \begin{cases} \Theta(n^e), & \text{if } k < e \\ \Theta(n^e \lg(n)), & \text{if } k = e \\ \Theta(n^k), & \text{if } k > e \end{cases}$$
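As a worked example (added here for illustration), merge sort satisfies $T(n) = 2T\!\left(\frac{n}{2}\right) + \Theta(n)$, so $a = 2$, $b = 2$ and $k = 1$. The critical exponent is $e = \log_2(2) = 1$, so $k = e$ and the theorem gives $T(n) = \Theta(n\lg(n))$.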
In-Place
An algorithm is said to be in-place if it operates directly on the input array using only $O(1)$ extra space. For example, Quick Sort and Insertion Sort.
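As a minimal sketch (my own illustration), insertion sort rearranges the input list in place, using only a constant number of extra variables:

```python
def insertion_sort(arr):
    """Sort arr in place; only a constant amount of extra space is used."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger elements one position to the right.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr
```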
Stable
An algorithm is said to be stable if, for every pair of indices $i < j$ with $A[i].key = A[j].key$, the entry $A[i]$ comes before $A[j]$ in the output array; in other words, elements with equal keys keep their original relative order. For example, Insertion Sort and Merge Sort.
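To illustrate stability (a sketch using Python's built-in sort, which is guaranteed to be stable), records with equal keys keep their original relative order when sorted by key:

```python
# Records are (key, payload) pairs; two records share the key 2 and two share the key 1.
records = [(2, "a"), (1, "b"), (2, "c"), (1, "d")]

# sorted() is stable, so (2, "a") stays before (2, "c")
# and (1, "b") stays before (1, "d").
print(sorted(records, key=lambda r: r[0]))
# [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')]
```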