Exam-Prep 2 Algorithms

Missed the first one to coof unfortunately.

Requested Topics

Amortized Analysis

The example started with a cache: if we compute something, we store the result in some form of storage, with the arguments as the key. If we ever try to compute the same thing again, we save the computation time by checking the cache first. If the arguments are not found, we perform the actual computation and store the answer.

Cache Pseudocode

func cache(Callable function, String args) {
    var result = db.get(args);      // check the cache first
    if (result != null) {
        return result;              // hit: skip the computation entirely
    }
    result = function(args);        // miss: do the expensive work
    db.put(args, result);           // remember the answer for next time
    return result;
}

At the time of writing I see that I have forgotten to make syntax highlighting a part of my Hugo theme…

Basic Analysis

Worst case: O(n) (a miss: we do the full computation)
Average case: (O(1) + O(n)) / 2, which is still O(n)
Best case: O(1) (a hit: the answer is already stored)

O(1) = (1 + n) / L # TODO: <- find out what that means

Amortized Analysis?

Amortized analysis basically just adds another “case” to the previous ones. It’s essentially a more realistic average, arrived at by taking into account a realistic series of calls. Instead of calculating a “naive” average, one essentially simulates what happens over a series of calls with real values.

In the case of our cache above, the naive average is pessimistic for a system where the same values are requested often. In other words, the amortized cost will be much closer to O(1) than to O(n).
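
As a sketch of what such a simulation could look like (the workload and the cost model here are my own assumptions, not from the lecture): hits cost 1, misses cost n, and the same few arguments repeat.

# Toy amortized analysis of the cache: simulate a realistic call sequence.
n = 1000                      # assumed cost of one full computation
calls = ["a", "b", "a", "a", "b", "a", "c", "a", "a", "b"]

cache = {}
total_cost = 0
for args in calls:
    if args in cache:
        total_cost += 1       # hit: a single lookup
    else:
        total_cost += n       # miss: lookup plus the full computation
        cache[args] = "result"

print(total_cost / len(calls))  # about 300 here, well below the naive (1 + n) / 2

With only three distinct arguments, the misses are quickly paid off, which is exactly why the amortized figure beats the naive average.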

Insertion Pseudocode

The next example was insertion into an array that expands when it runs out of space:

def insert(item):
    if isFull():    # no free slot left in the backing array
        expand()    # grow the array, copying all existing items
    # do insertion

In the worst case we keep inserting, so we repeatedly have to expand. The cheap operation is a plain insert; the expensive one is an expand plus insert, since expanding means copying every item we already have. Over n inserts the plain insertions cost n on their own, and if expand doubles the capacity the total copy work adds at most roughly another n, for about 2n in total. Here the result is not as dependent on the data inserted as the cache was, but the amortized analysis will still differ from the naive average: 2n work over n calls is an amortized O(1) per insert.
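
To see the 2n figure in action, here is a minimal sketch (assuming Python, a doubling expand, and counting every element write as one unit of work):

# Dynamic array with doubling; count inserts and copies as "writes".
capacity, size, writes = 1, 0, 0
buffer = [None]

def insert(item):
    global capacity, size, writes, buffer
    if size == capacity:
        buffer = buffer + [None] * capacity  # expand: double the storage
        capacity *= 2
        writes += size                       # model copying all existing items
    buffer[size] = item                      # the insertion itself
    writes += 1
    size += 1

for i in range(1024):
    insert(i)
print(writes / 1024)  # just under 2, i.e. amortized O(1) per insert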

Simple Definitions

Average Case Analysis: Average over all possible calls
Amortized Analysis: Average over a meaningful sequence of calls

Recursion

The first example is a minimum-finder:

proc minimum(data: seq[number]): number =
    if len(data) == 1:         # base case: a single element is its own minimum
        return data[0]
    n = data[0]
    rest = data[1:]            # everything except the first element
    restMin = minimum(rest)    # recurse only once, then reuse the result
    if n < restMin:
        return n
    else:
        return restMin

Here the function calls itself on an ever shorter list until only one element remains (the base case). That makes it a recursive function.

Iterative Counterpart

This could be implemented the following way:

int i = 0; // This algorithm starts from the beginning...
int smallest = data[i];
int current;

while (i < data.length) {
    current = data[i];
    if (current < smallest) { smallest = current; }
    i++; // ...and moves to the end
}

Another Example

In this example we look at the following tree:

project/
|
|---src/
|   |
|   |---App.java
|   |
|   |---Utility.java
|
|---tests/
    |
    |---Test.java

If we wanted to check whether the file Utility.java exists in the tree, we could use recursion. Let’s posit that the file is found if the node itself is that file, or if it exists in one of the subdirectories. Then interestingly, it’s our “edge case” (that the node is the file itself) that will determine the end result.

fun exists(root: Node, filename: String): Boolean {
    if (isFile(root))
        return root.name == filename               // base case: compare names
    for (entry in root.entries)
        if (exists(entry, filename)) return true   // found in this subtree
    return false                                   // not found below this node
}
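
A hypothetical Python version of the same search, using tuples to build the tree from the diagram above (the representation is my own, not from the lecture):

# A node is (name, children); children is None for a file.
def exists(node, filename):
    name, children = node
    if children is None:                 # base case: a plain file
        return name == filename
    return any(exists(child, filename) for child in children)

src = ("src", [("App.java", None), ("Utility.java", None)])
tests = ("tests", [("Test.java", None)])
project = ("project", [src, tests])

print(exists(project, "Utility.java"))   # True
print(exists(project, "Missing.java"))   # False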
    

Exam Preparation

This part was difficult to write down, so I only added some notes.

Question: What is the complete runtime complexity of the Bubble sort algorithm? Why?
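
As a note to myself, here is a sketch of Bubble sort in Python (with the common early-exit flag) that shows where the answer comes from:

def bubble_sort(data):
    # Each pass bubbles the largest remaining element to the end.
    for end in range(len(data) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if data[i] > data[i + 1]:
                data[i], data[i + 1] = data[i + 1], data[i]
                swapped = True
        if not swapped:   # no swaps means already sorted: O(n) best case
            return

The two nested passes over up to n elements give O(n^2) in the worst and average case; on already-sorted input the first pass makes no swaps, so with the flag the best case is O(n).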

Considering a hash-table with separate chaining:
Worst case: O(n), when every key hashes to the same bucket and the chain degenerates into one long linked list
Best case: O(1), when the chains stay short
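
A minimal sketch of separate chaining (assuming Python lists as the chains) that shows both cases:

buckets = [[] for _ in range(8)]   # 8 chains of (key, value) pairs

def put(key, value):
    # sketch only: appends blindly, a real table would replace duplicates
    buckets[hash(key) % 8].append((key, value))

def get(key):
    # Best case: the chain holds one entry, effectively O(1).
    # Worst case: all n keys hashed to this bucket, and the chain
    # is one long list we have to scan in O(n).
    for k, v in buckets[hash(key) % 8]:
        if k == key:
            return v
    return None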