# Complexity

Rather than keeping an exact count of operations, use an order-of-magnitude count of complexity.

Ignore differences that are only constant factors - e.g., treat n and n/2 as the same order of magnitude.

Similarly with 2n^2 and 1000n^2.

In general, if we have a polynomial of the form a0 n^k + a1 n^(k-1) + ... + ak, say it is O(n^k).

Definition: We say that g(n) is O(f(n)) if there exist two constants C and k such that |g(n)| <= C |f(n)| for all n > k.

Equivalently, say g(n) is O(f(n)) if
there is a constant C such that for all sufficiently large n, | g(n) / f(n) | <= C.
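As a quick sanity check of the definition (an illustrative sketch, not part of the course code), we can test candidate witnesses C and k numerically over a finite range:

```python
def witnesses_big_oh(g, f, C, k, n_max=10_000):
    """Check empirically that |g(n)| <= C * |f(n)| for all k < n <= n_max."""
    return all(abs(g(n)) <= C * abs(f(n)) for n in range(k + 1, n_max + 1))

# g(n) = 2n^2 + 1000n is O(n^2): take C = 3 and k = 1000.
g = lambda n: 2 * n**2 + 1000 * n
f = lambda n: n**2
print(witnesses_big_oh(g, f, C=3, k=1000))  # True
print(witnesses_big_oh(g, f, C=2, k=1000))  # False: 2n^2 + 1000n > 2n^2 always
```

Of course a finite check is only evidence, not a proof; the algebra (1000n <= n^2 whenever n >= 1000) is what actually establishes the bound.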

Most common are

O(1) - for any constant

O(log n), O(n), O(n log n), O(n^2), ..., O(2^n)

Usually use these to measure time and space complexity of algorithms.

Insertion of a new first element in an array of size n is O(n), since we must bump all other elts up by one place.

Insertion of new last element in an array of size n is O(1).

Saw increasing array size by 1 at a time to build up to n takes time n*(n-1)/2, which is O(n^2).

Saw increasing array size to n by doubling each time takes time n-1, which is O(n).
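The two growth strategies can be compared by counting element copies directly (a sketch; the function names here are illustrative, not from the Sorter example):

```python
def copies_grow_by_one(n):
    """Total element copies when array capacity grows 1 slot at a time up to n:
    each growth from size s to s+1 copies all s existing elements."""
    return sum(s for s in range(1, n))  # 1 + 2 + ... + (n-1) = n(n-1)/2

def copies_doubling(n):
    """Total element copies when capacity doubles each time:
    1 + 2 + 4 + ... + n/2 = n - 1."""
    total, cap = 0, 1
    while cap < n:
        total += cap
        cap *= 2
    return total

print(copies_grow_by_one(1000))  # 499500 = 1000*999/2 -- quadratic
print(copies_doubling(1024))     # 1023 = 1024 - 1 -- linear
```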

Make table of values to show difference.

Suppose have operations with time complexity O(log n), O(n), O(n log n), O(n^2), and O(2^n).

And suppose all work on problem of size n in time t. How much time to do problem 10, 100, or 1000 times larger?

|            | size 10n | size 100n | size 1000n   |
|------------|----------|-----------|--------------|
| O(log n)   | t + 3    | t + 7     | t + 10       |
| O(n)       | 10t      | 100t      | 1,000t       |
| O(n log n) | >10t     | >100t     | >1,000t      |
| O(n^2)     | 100t     | 10,000t   | 1,000,000t   |
| O(2^n)     | ~t^10    | ~t^100    | ~t^1000      |

TIME TO SOLVE PROBLEM

*Note that the O(log n) and O(2^n) rows depend on the fact that the constant is 1, otherwise the times are somewhat different.

Suppose get a new machine that allows a certain speed-up. How much larger a problem can be solved? If the original machine allowed solution of a problem of size k in time t, then

|            | speed-up 1x | 10x   | 100x  | 1000x   |
|------------|-------------|-------|-------|---------|
| O(log n)   | k           | k^10  | k^100 | k^1000  |
| O(n)       | k           | 10k   | 100k  | 1,000k  |
| O(n log n) | k           | <10k  | <100k | <1,000k |
| O(n^2)     | k           | 3k+   | 10k   | 30k+    |
| O(2^n)     | k           | k+3   | k+7   | k+10    |

SIZE OF PROBLEM
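Where those entries come from (a sketch of the arithmetic; e.g., for O(n^2), solving (k')^2 = speedup * k^2 gives k' = sqrt(speedup) * k):

```python
import math

def new_size_quadratic(k, speedup):
    """If size k was solvable and time scales as n^2, a machine `speedup`
    times faster solves size sqrt(speedup) * k in the same time."""
    return math.sqrt(speedup) * k

def new_size_exponential(k, speedup):
    """If time scales as 2^n, the gain is only additive: k + log2(speedup)."""
    return k + math.log2(speedup)

print(new_size_quadratic(100, 10))    # ~316: the table's "3k+"
print(new_size_exponential(100, 10))  # ~103: the table's "k+3"
```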

We will use big Oh notation to help us measure complexity of algorithms.

# Searching

Searching and sorting are important operations and also important examples for the use of complexity analysis.

Only deal with searches here; come back to do sorts later.

Code for all searches is on-line in the Sorter program example.

## Linear search

Pretty straightforward. Compare the element we're looking for with successive elements of the list until we either find it or run out of elements.

If list has n elements, then n compares in worst case.

• Average n/2 compares if element is in the list.
• Get n compares if element not in list.
• O(n) compares in all these cases.
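A minimal sketch of linear search (in Python here; the on-line Sorter example may differ):

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent; O(n) compares."""
    for i, elt in enumerate(items):
        if elt == target:
            return i
    return -1

print(linear_search([4, 2, 9, 7], 9))  # 2
print(linear_search([4, 2, 9, 7], 5))  # -1: not in list after n compares
```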

## Binary search

Binary search cleverer on ordered list. Look at middle element:
• If middle elt is search elt then done.
• If middle elt smaller than search elt, then do binary search of bigger elts.
• If middle elt larger than search elt, then do binary search of smaller elts.
Notice this is recursive.

With each recursive call do at most two compares.

What is maximum number of recursive calls?

• Each time make recursive call, divide size of array to be searched in half.

• How many times can divide number in half before only 1 elt left?

• If start with 2^k then => 2^(k-1) => 2^(k-2) => 2^(k-3) => ... => 2^0 = 1; divide k times by 2.

• In general can divide n by 2 at most log n times to get down to 1. In this course, write log n for log_2 n.

At most (log n) + 1 invocations of the routine, and therefore at most 2*((log n) + 1) comparisons: O(log n) comparisons.
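The recursive procedure above, sketched in Python (two compares per call, as counted):

```python
def binary_search(items, target, low=0, high=None):
    """Recursive binary search of a sorted list; O(log n) compares.
    Returns an index of target, or -1 if absent."""
    if high is None:
        high = len(items) - 1
    if low > high:                      # ran out of elements
        return -1
    mid = (low + high) // 2
    if items[mid] == target:            # compare 1: found it
        return mid
    elif items[mid] < target:           # compare 2: search the bigger elts
        return binary_search(items, target, mid + 1, high)
    else:                               # search the smaller elts
        return binary_search(items, target, low, mid - 1)

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # -1
```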

Concrete comparison of worst-case # of comparisons:

| Search \ # elts | 10 | 100 | 1000 | 1,000,000 |
|-----------------|----|-----|------|-----------|
| linear          | 10 | 100 | 1000 | 1,000,000 |
| binary          | 8  | 14  | 20   | 40        |
Can actually make faster if don't compare for equality until only 1 elt left!
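One way to realize that speed-up (a sketch, assuming a sorted list): use a single `<` comparison per halving step and defer the equality test until only one candidate remains:

```python
def binary_search_deferred(items, target):
    """Binary search that defers the equality test: one `<` comparison per
    halving step, then a single equality check when one element remains."""
    low, high = 0, len(items) - 1
    if high < 0:
        return -1
    while low < high:
        mid = (low + high) // 2
        if items[mid] < target:   # target can only be among the bigger elts
            low = mid + 1
        else:                     # target, if present, is at mid or below
            high = mid
    return low if items[low] == target else -1

print(binary_search_deferred([1, 3, 5, 7], 5))  # 2
print(binary_search_deferred([1, 3, 5, 7], 4))  # -1
```

This does roughly (log n) + 1 comparisons instead of 2*((log n) + 1), at the cost of not stopping early when the target happens to be a middle element.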