Scheduling: The Multi-Level Feedback Queue (MLFQ)
Crux: how to design a scheduler that minimizes response time for interactive jobs while also minimizing turnaround time, all without a priori knowledge of job length.
MLFQ: Basic Rules
The MLFQ has a number of distinct queues, each assigned a different priority level. At any given time, a job that is ready to run is on a single queue. MLFQ uses priorities to decide which job should run at a given time: a job with higher priority (i.e., a job on a higher queue) is chosen to run.
Jobs on the same queue (and thus with the same priority) are scheduled round-robin among themselves.
The key is how the scheduler sets priorities. MLFQ varies the priority of a job based on its observed behavior. For an interactive job, MLFQ will keep its priority high. For a job that uses the CPU intensively for long periods of time, MLFQ will reduce its priority.
Summary: MLFQ tries to learn about processes as they run, using the history of each job to predict its future behavior.
- Rule 1: If Priority(A) > Priority(B), A runs (B doesn’t)
- Rule 2: If Priority(A) = Priority(B), A & B run in RR
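As a rough sketch of Rules 1 and 2, the fragment below (hypothetical data structures and names, not from the text) picks the job at the head of the highest non-empty queue and rotates that queue round-robin; lower queues only get CPU time when every queue above them is empty.

```c
#include <stdio.h>

#define NUM_QUEUES 3
#define MAX_JOBS   8

/* queues[q][i] holds a job id; count[q] is how many jobs sit at priority q.
 * Queue NUM_QUEUES-1 is the highest priority. */
int queues[NUM_QUEUES][MAX_JOBS];
int count[NUM_QUEUES];

/* Rule 1: scan from the topmost queue down and run from the first non-empty one.
 * Rule 2: within that queue, take the job at the head, then move it to the tail (RR). */
int pick_next_job(void) {
    for (int q = NUM_QUEUES - 1; q >= 0; q--) {
        if (count[q] == 0)
            continue;
        int job = queues[q][0];
        for (int i = 1; i < count[q]; i++)   /* rotate: head goes to the tail */
            queues[q][i - 1] = queues[q][i];
        queues[q][count[q] - 1] = job;
        return job;
    }
    return -1;  /* no runnable jobs */
}

int main(void) {
    /* two jobs on the top queue, one below: A and B alternate, C never runs */
    queues[2][0] = 0; queues[2][1] = 1; count[2] = 2;   /* jobs A=0, B=1 */
    queues[1][0] = 2; count[1] = 1;                     /* job  C=2     */
    for (int t = 0; t < 6; t++)
        printf("tick %d: run job %d\n", t, pick_next_job());
    return 0;
}
```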
Attempt #1: How to Change Priority
- Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue)
- Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (moves down one queue)
- Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level
As the OS doesn't know whether a job will be short or long-running, it first assumes it might be short, giving it high priority. If it actually is a short job, it will run quickly and complete; if not, it will slowly move down the queues, soon proving itself to be a long-running, more batch-like process. In this manner, MLFQ approximates SJF.
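A minimal sketch of Rules 3, 4a, and 4b might look like the following (the struct and function names are illustrative, not from the text): demote a job only when it burns through an entire time slice, and leave it alone when it yields early.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_QUEUES 3

struct job {
    int priority;   /* index of the job's current queue; NUM_QUEUES-1 is the topmost */
};

/* Rule 3: a newly arriving job is placed on the topmost (highest-priority) queue. */
void on_job_arrival(struct job *j) {
    j->priority = NUM_QUEUES - 1;
}

/* Called whenever a running job is descheduled, either because its time slice
 * expired or because it gave up the CPU early (e.g., to issue an I/O). */
void on_job_descheduled(struct job *j, bool used_full_slice) {
    if (used_full_slice) {
        /* Rule 4a: burned the entire slice -> looks CPU-bound, demote one level */
        if (j->priority > 0)
            j->priority--;
    }
    /* Rule 4b: gave up the CPU early -> looks interactive, priority is unchanged */
}

int main(void) {
    struct job j;
    on_job_arrival(&j);             /* starts at priority 2 (the topmost queue) */
    on_job_descheduled(&j, true);   /* used a full slice: drops to priority 1   */
    on_job_descheduled(&j, false);  /* yielded early: stays at priority 1       */
    printf("final priority: %d\n", j.priority);   /* prints 1 */
    return 0;
}
```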
Problems Now
1. Starvation: if there are "too many" interactive jobs in the system, they will combine to consume all CPU time, and thus long-running jobs will never receive any CPU time
2. Gaming the scheduler: a sneaky programmer can trick the scheduler into giving a job more than its fair share of the CPU (e.g., by relinquishing the CPU just before the time slice ends)
3. Changing behavior: a program may change its behavior over time; a CPU-intensive job may transition to a phase of interactivity (and vice versa), but under the rules so far its priority never rises again
Attempt #2: The Priority Boost
Periodically boost the priority of all the jobs in the system.
- Rule 5: After some time period S, move all the jobs in the system to the topmost queue
This solves Problems 1 and 3: long-running jobs can no longer starve, and a CPU-bound job that becomes interactive is treated properly once it receives the boost.
In Figure 8.5, on the left, the long-running job gets starved once the two short jobs arrive; on the right, with a priority boost every 50 ms, the long-running job is moved back to the highest priority every 50 ms and thus gets to run periodically.
Setting S: if S is set too high, long-running jobs could starve; if set too low, interactive jobs may not get a proper share of the CPU.
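One way to picture Rule 5 is the small sketch below; the 50 ms boost period mirrors Figure 8.5, but the counter and field names are assumptions made for illustration.

```c
#include <stdio.h>

#define NUM_QUEUES      3
#define MAX_JOBS        64
#define BOOST_PERIOD_MS 50   /* "S": how often every job is moved back to the top */

struct job { int priority; };

struct job jobs[MAX_JOBS];
int num_jobs;

/* Rule 5: every S milliseconds, move every job back to the topmost queue, so
 * long-running jobs cannot starve and jobs whose behavior changed get re-evaluated. */
void maybe_boost(int now_ms) {
    static int last_boost_ms = 0;
    if (now_ms - last_boost_ms >= BOOST_PERIOD_MS) {
        for (int i = 0; i < num_jobs; i++)
            jobs[i].priority = NUM_QUEUES - 1;
        last_boost_ms = now_ms;
    }
}

int main(void) {
    num_jobs = 2;
    jobs[0].priority = 0;   /* a long-running job stuck at the bottom queue */
    jobs[1].priority = 2;
    maybe_boost(49);        /* S has not elapsed yet: nothing happens       */
    maybe_boost(50);        /* S elapsed: everyone goes back to the top     */
    printf("job 0 priority after boost: %d\n", jobs[0].priority);   /* prints 2 */
    return 0;
}
```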
Attempt #3: Better Accounting
Crux: Rules 4a and 4b let a job game the scheduler; by giving up the CPU just before its time slice expires, it keeps its high priority indefinitely.
Solution: Perform better accounting of CPU time at each level of the MLFQ. Once a process has used its allotment at a level, it is demoted to the next queue down, whether it spent that allotment in one long burst or many small ones. This yields a rewritten Rule 4:
- Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue)
In Figure 8.6, on the left, without any protection from gaming, a process can issue an I/O just before each time slice ends, keeping its high priority and dominating CPU time. On the right, with better accounting, the process slowly moves down the queues regardless of its I/O behavior, and thus cannot gain an unfair share of the CPU.
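The revised Rule 4 amounts to charging a job for every bit of CPU it uses at its current level, as in this sketch (the 10 ms allotment and the field names are assumed for illustration):

```c
#include <stdio.h>

#define ALLOTMENT_MS 10   /* total CPU time a job may use at one level before demotion */

struct job {
    int priority;
    int time_used_ms;     /* CPU time consumed at the current level, across all slices */
};

/* Revised Rule 4: charge the job for whatever CPU it used, whether it ran a full
 * slice or yielded early for I/O; once the allotment is spent, demote it. */
void charge_cpu(struct job *j, int ran_ms) {
    j->time_used_ms += ran_ms;
    if (j->time_used_ms >= ALLOTMENT_MS) {
        if (j->priority > 0)
            j->priority--;
        j->time_used_ms = 0;   /* fresh allotment at the new level */
    }
}

int main(void) {
    struct job j = { .priority = 2, .time_used_ms = 0 };
    /* Five 2 ms bursts, each ending in an I/O: under old Rule 4b the job would
     * keep its priority forever; with accounting it is demoted after 10 ms total. */
    for (int i = 0; i < 5; i++)
        charge_cpu(&j, 2);
    printf("priority after 10 ms of CPU: %d\n", j.priority);   /* prints 1 */
    return 0;
}
```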
Tuning MLFQ And Other Issues
How to parameterize: how many queues to use, how long the time slice should be at each level, and how often to boost priority.
Varying time-slice length: most MLFQ variants give higher-priority queues shorter time slices (interactive jobs switch quickly) and lower-priority queues longer ones (CPU-bound jobs don't need frequent switching).
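A per-queue time-slice table could be as simple as the sketch below; the specific lengths are made-up examples, not values from the text.

```c
#include <stdio.h>

#define NUM_QUEUES 3

/* Per-queue time slices: short at high priority for quick interactive response,
 * long at low priority so CPU-bound jobs aren't context-switched needlessly. */
static const int slice_ms[NUM_QUEUES] = {
    100,   /* queue 0 (lowest priority): long slice for batch-like jobs    */
    20,    /* queue 1                                                      */
    10,    /* queue 2 (highest priority): short slice for interactive jobs */
};

int time_slice_for(int priority) {
    return slice_ms[priority];
}

int main(void) {
    for (int q = NUM_QUEUES - 1; q >= 0; q--)
        printf("queue %d gets a %d ms time slice\n", q, time_slice_for(q));
    return 0;
}
```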
Summary
MLFQ observes the execution of a job and prioritizes it accordingly, instead of demanding a priori knowledge of the nature of the job. In this way, it delivers excellent overall performance (like SJF/STCF) for short-running interactive jobs, while remaining fair and making forward progress for long-running CPU-intensive workloads.