OS Exam Prep

Random notes from my exam prep I’m saving for the future.

The CPU can only read data directly from RAM.
To read something from disk, it must first be copied from disk into RAM.

Virtual memory solves three problems:

  1. Not enough memory (a program can't use addresses beyond the maximum physical address)
  2. Memory fragmentation (it is hard to write programs that must scatter their data across free holes in memory)
  3. Security (programs could corrupt each other's state)

Virtual memory gives each program its own individual (logical) address space. Virtual addresses are mapped to physical addresses through page tables, with a translation lookaside buffer (TLB) caching recent translations. Virtual memory can also map data across multiple devices (e.g. spilling to disk if physical memory is exceeded).
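
A minimal sketch of that mapping, assuming a 4 KiB page size and a made-up page table: the virtual address is split into a page number and an offset, and only the page number gets translated.

```python
# Minimal sketch of virtual-to-physical translation. The 4 KiB page size and
# the page-table contents are made-up values for illustration.
PAGE_SIZE = 4096

# Hypothetical per-process page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 9}

def translate(virtual_addr):
    vpn = virtual_addr // PAGE_SIZE        # virtual page number
    offset = virtual_addr % PAGE_SIZE      # offset within the page is unchanged
    if vpn not in page_table:
        raise KeyError(f"page fault: virtual page {vpn} is not mapped")
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # virtual page 1, offset 0x234 -> frame 2 -> 0x2234
```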

Page Faults

A frame (in main memory) can hold one page.

If memory is exceeded, pages must be loaded from swap. Before loading from swap, eject a page from memory (which one depends on the replacement algorithm). Write this page back to disk if it is dirty (i.e. it has been written to since it was loaded). Then load the desired page from disk and continue execution. When a page is ejected, its entry in the page table is marked invalid.

Over-allocation means a page fault occurs while no frame is free, which is what forces page replacement.
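
A toy model of those steps, under assumptions I chose for illustration (three frames, FIFO victim selection, and a plain dict standing in for swap):

```python
# Toy model of the steps above: on a fault with no free frame, eject a victim,
# write it back only if dirty, then load the requested page. FIFO victim choice
# and the fake swap dict are assumptions, not how a real kernel stores this.
from collections import OrderedDict

NUM_FRAMES = 3
frames = OrderedDict()   # resident pages -> dirty bit (insertion order = load order)
swap = {}                # fake backing store for pages written back

def access(page, write=False):
    if page not in frames:                              # page fault
        if len(frames) == NUM_FRAMES:                   # over-allocation: must replace
            victim, dirty = frames.popitem(last=False)  # FIFO victim
            if dirty:                                   # written to since it was loaded
                swap[victim] = f"contents of page {victim}"
        frames[page] = False                            # page loaded, clean so far
    if write:
        frames[page] = True                             # mark dirty

for page, write in [(1, True), (2, False), (3, False), (4, False)]:
    access(page, write)
print(list(frames))   # resident pages: [2, 3, 4]
print(swap)           # page 1 was dirty when ejected, so it was written back
```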

In the context of page replacement, the algorithms mean the following:

  1. FIFO = the page (still present in main memory) that entered main memory first is replaced
  2. LRU = the page (still present) that was least recently used is replaced
  3. OPTIMAL = the page (in main memory) that will not be used for the longest time is replaced

That means FIFO and LRU look backwards, while OPTIMAL looks forwards. LRU and OPTIMAL share that their decisions are based on the reference string (past references for LRU, future references for OPTIMAL), while FIFO is based on keeping a queue of load order. If no page currently in memory is requested again in the future, OPTIMAL can fall back to FIFO. OPTIMAL is, however, purely theoretical (CPUs can't predict the future).
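
A quick way to see the difference is to simulate all three on the same reference string; the reference string and the three-frame limit below are arbitrary example values.

```python
# Count page faults for FIFO, LRU and OPTIMAL on one reference string.
# The reference string and the 3-frame limit are arbitrary example values.

def count_faults(refs, num_frames, policy):
    frames, faults = [], 0                 # for LRU, front = least recently used
    for i, page in enumerate(refs):
        if page in frames:
            if policy == "LRU":            # a hit refreshes recency
                frames.remove(page)
                frames.append(page)
            continue
        faults += 1
        if len(frames) == num_frames:      # memory full: pick a victim
            if policy in ("FIFO", "LRU"):  # front = oldest loaded / least recently used
                frames.pop(0)
            else:                          # OPTIMAL: not needed for the longest time
                future = refs[i + 1:]
                victim = max(frames, key=lambda p: future.index(p) if p in future else float("inf"))
                frames.remove(victim)
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
for policy in ("FIFO", "LRU", "OPTIMAL"):
    print(policy, count_faults(refs, 3, policy))
```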

Why CPU Scheduling?

  1. CPU Scheduling is necessary for running more than one program concurrently.
  2. To do many things at once, the CPU must constantly switch between tasks with minimal wasted time.
  3. Scheduling algorithms are important for doing this as efficiently as possible.

Definitions

Waiting time is the total time a process spends waiting without being executed. Turnaround time is the time from a process's arrival until its completion. Throughput is the number of processes a CPU can complete in a given time. Throughput time is the time it took to complete a given number of processes.

Deadlock requires all four of the following conditions (a two-lock sketch follows the list):

  1. Mutual Exclusion (processes compete over a resource that only one can use at a time)
  2. Hold and Wait (a process holding resources can request further resources and wait until they are available)
  3. No Preemption (a process must release its resources voluntarily; they cannot be taken from it)
  4. Circular Wait (process A holds resource A and waits for resource B, which is held by process B, which in turn waits for resource A to become available)
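
A minimal sketch of all four conditions at once, using two Python locks acquired in opposite order; the one-second timeout is my addition so the demo terminates instead of hanging.

```python
# Two threads take two locks in opposite order: mutual exclusion, hold and wait,
# no preemption and circular wait are all present, so they deadlock. The timeout
# is only there so this demo terminates and reports it instead of hanging.
import threading
import time

lock_x = threading.Lock()
lock_y = threading.Lock()

def worker(name, first, second):
    with first:                           # hold one resource...
        time.sleep(0.1)                   # ...long enough for the other thread to grab its lock
        if second.acquire(timeout=1):     # ...and wait for the other resource
            second.release()
            print(f"{name}: got both locks")
        else:
            print(f"{name}: stuck (circular wait) - gave up after the timeout")

t1 = threading.Thread(target=worker, args=("T1", lock_x, lock_y))
t2 = threading.Thread(target=worker, args=("T2", lock_y, lock_x))
t1.start(); t2.start()
t1.join(); t2.join()
```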

The stack is not shared between threads in a multithreaded process: each thread gets its own stack, while code, globals, and the heap are shared.
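
A small illustration of that split (the worker function and the shared list are just examples I made up): each thread's locals live on its own stack, while the global list is one object seen by every thread.

```python
# Locals live on each thread's own stack; the global list is shared by all threads.
import threading

shared = []

def worker(name):
    local_count = 0                       # on this thread's private stack
    for _ in range(3):
        local_count += 1
    shared.append((name, local_count))    # the list object itself is shared

threads = [threading.Thread(target=worker, args=(f"T{i}",)) for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(shared)   # both threads appended to the same list, each with its own local_count
```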

Stats for the pre-emptive priority scheduling example (lower number = higher priority):

PID Arrival Burst Priority
P1 0ms 11ms 2
P2 5ms 28ms 0
P3 12ms 2ms 3
P4 2ms 10ms 1
P5 9ms 16ms 4

How I think it would evaluate:

  1. P1 for 2ms (9ms left; preempted by P4 at t=2ms)
  2. P4 for 3ms (7ms left; preempted by P2 at t=5ms)
  3. P2 for 28ms (done at t=33ms)
  4. P4 for 7ms (done at t=40ms)
  5. P1 for 9ms (done at t=49ms)
  6. P3 for 2ms (done at t=51ms)
  7. P5 for 16ms (done at t=67ms)

Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time

Completion, turn-around and waiting times for the example:

PID  Completion  Turnaround  Waiting
P1   49ms        49ms        38ms
P2   33ms        28ms        0ms
P3   51ms        39ms        37ms
P4   40ms        38ms        28ms
P5   67ms        58ms        42ms
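
A small millisecond-by-millisecond simulation of the example, assuming a lower priority number means higher priority; it reproduces the completion, turnaround, and waiting times above.

```python
# Millisecond-by-millisecond simulation of pre-emptive priority scheduling for
# the example above. Assumption: a lower priority number means higher priority.

procs = {  # name: (arrival, burst, priority)
    "P1": (0, 11, 2), "P2": (5, 28, 0), "P3": (12, 2, 3),
    "P4": (2, 10, 1), "P5": (9, 16, 4),
}

remaining = {p: burst for p, (_, burst, _) in procs.items()}
completion, t = {}, 0
while remaining:
    ready = [p for p in remaining if procs[p][0] <= t]
    if not ready:                                    # CPU idle until the next arrival
        t += 1
        continue
    running = min(ready, key=lambda p: procs[p][2])  # highest priority wins every ms
    remaining[running] -= 1
    t += 1
    if remaining[running] == 0:
        completion[running] = t
        del remaining[running]

for p in sorted(procs):
    arrival, burst, _ = procs[p]
    turnaround = completion[p] - arrival             # Completion - Arrival
    waiting = turnaround - burst                     # Turnaround - Burst
    print(p, f"completion={completion[p]}ms turnaround={turnaround}ms waiting={waiting}ms")
```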

Deadlock avoidance: Banker's Algorithm

Available = total amount of resource instances - amount allocated. Here the total amount of each resource instance (A, B, C) is 5, so initially Available = (5,5,5) - (5,4,3) = (0,1,2).

PID  Allocation  Request  System   Available  A - R
     A B C       A B C    A B C    A B C      A B C
P0   1 2 1       1 0 3    5 5 5    0 1 2      X
P1   2 0 1       0 1 2             2 1 3      0 0 2   (P1 first)
P2   2 2 1       1 2 0             3 3 4      1 1 0   (P0 second)
                                   5 5 5      2 1 4   (P2 third)

A process is picked when its Request fits in the currently available resources; once it finishes, its Allocation is released back into Available.

Safe sequence: P1 -> P0 -> P2, so there is no deadlock.
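
A sketch of that safety check in code, under the assumption that each process's Request column is everything it still needs before it can finish; the matrices are the ones from the table above.

```python
# Safety-sequence check for the table above. Assumption: each process's Request
# is everything it still needs before it can finish and release its allocation.

allocation = {"P0": [1, 2, 1], "P1": [2, 0, 1], "P2": [2, 2, 1]}
request    = {"P0": [1, 0, 3], "P1": [0, 1, 2], "P2": [1, 2, 0]}
total      = [5, 5, 5]                        # 5 instances of each of A, B, C

# Available = total instances minus everything currently allocated
available = [total[i] - sum(alloc[i] for alloc in allocation.values()) for i in range(3)]
print("initially available:", available)      # [0, 1, 2]

sequence, unfinished = [], set(allocation)
while unfinished:
    runnable = next((p for p in sorted(unfinished)
                     if all(request[p][i] <= available[i] for i in range(3))), None)
    if runnable is None:
        break                                 # nobody's request fits -> deadlock
    # the chosen process finishes and releases everything it holds
    available = [available[i] + allocation[runnable][i] for i in range(3)]
    sequence.append(runnable)
    unfinished.remove(runnable)

if unfinished:
    print("deadlock, stuck processes:", unfinished)
else:
    print("safe sequence:", " -> ".join(sequence))  # P1 -> P0 -> P2
```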