Unit III: Process Scheduling
Introduction
▪ An important aspect of multiprogramming is
scheduling.
▪ The resources that are scheduled are I/O devices and processors.
▪ The goal is to achieve
• High processor utilization
• High throughput
• number of processes completed per unit time
• Low response time
• time elapsed from the submission of a request to the beginning of the response
The CPU-I/O Cycle
▪ We observe that processes require alternate use of processor
and I/O in a repetitive fashion
▪ Each cycle consists of a CPU burst (typically of 5 ms) followed by a (usually longer) I/O burst
▪ A process terminates on a CPU burst
▪ CPU-bound processes have longer CPU bursts than I/O-
bound processes
The CPU-I/O Cycle
▪ Maximum CPU utilization
obtained with multiprogramming
▪ CPU–I/O Burst Cycle – Process
execution consists of a cycle of
CPU execution and I/O wait
▪ CPU burst followed by I/O
burst
▪ CPU burst distribution is of main
concern
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – No. of processes that complete their
execution per time unit
Turnaround time – amount of time to execute a
particular process
Waiting time – amount of time a process has been
waiting in the ready queue
Response time – amount of time it takes from when a
request was submitted until the first response is produced,
not output (for time-sharing environment)
Process Scheduling Queues
▪The OS maintains a separate queue for each of the process states.
▪When the state of a process is changed, its PCB is unlinked from
its current queue and moved to its new state queue.
▪The Operating System maintains the following important process
scheduling queues −
1. Job queue − This queue keeps all the processes in the system.
2. Ready queue − This queue keeps the set of all processes residing in main memory, ready and waiting to execute. A new process is always put in this queue.
3. Device queues − The processes that are blocked due to the unavailability of an I/O device constitute this queue.
Conditions used to make scheduling
decisions
▪ Scheduling algorithms may use different criteria for selecting
process from the ready list.
1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state.
3. When a process switches from the waiting state to the ready state.
4. When a process terminates.
Process scheduler
▪Assigns a CPU time slot to a particular process.
▪The process manager’s activity is process scheduling, which
involves removing the running process from the CPU and
selecting another process based on a specific strategy.
▪Schedulers are special system software which handle
process scheduling in various ways.
▪ Their main task is to select the jobs to be submitted into the
system and to decide which process to run.
▪To determine which process should get the CPU and which one must wait.
▪Scheduling takes place in two ways: pre-emptive and non-pre-emptive scheduling.
Pre-emptive and Non pre-emptive scheduling
1. Pre-emptive Scheduling -
▪ The OS allocates the CPU to a process for a fixed amount of time.
▪ Before that time expires, the process may be switched from the running state to the ready state, or from the waiting state to the ready state.
▪The CPU can be taken away from the running process.
▪The scheduler can swap processes in and out of the running state.
▪ This switching occurs when a higher-priority process becomes ready, and the scheduler replaces the running process with it.
Pre-emptive and Non pre-emptive scheduling
2. Non Pre-emptive Scheduling-
▪ The CPU cannot be taken from a process until the process completes execution or blocks.
▪ The switching occurs when the running process terminates or moves to the waiting state.
▪Once selected, a process keeps the CPU until it finishes or blocks.
▪The scheduler cannot swap processes in and out of the running state.
▪ It does not interrupt a process running on the CPU in the middle of its execution.
▪Also called cooperative scheduling.
Types of Schedulers
Schedulers are of three types −
1. Long-Term Scheduler
2. Short-Term Scheduler
3. Medium-Term Scheduler
1. Long Term Scheduler
▪ The job scheduler or long-term scheduler selects processes from
the storage pool in the secondary memory and loads them into the
ready queue in the main memory for execution.
▪ The long-term scheduler controls the degree of multiprogramming, i.e., the number of processes in memory.
▪It must select a careful mixture of I/O bound and CPU bound
processes to yield optimum system throughput.
▪ The job of the long-term scheduler is very important and directly
affects the system for a long time.
2. Short Term Scheduler
▪ The short-term scheduler selects one of the processes from the ready queue and schedules it for execution.
▪ A scheduling algorithm is used to decide which process will
be scheduled for execution next.
▪ The short-term scheduler executes much more frequently than
the long-term scheduler as a process may execute only for a
few milliseconds.
▪ The choices of the short term scheduler are very important.
▪ If it selects a process with a long burst time, then all the
processes after that will have to wait for a long time in the
ready queue.
3. Medium Term Scheduler
▪ The medium-term scheduler swaps out a process from main memory.
▪ It can again swap in the process later from the point it stopped
executing.
▪ This is also called suspending and resuming the process.
▪ This is helpful in reducing the degree of multiprogramming
▪ Swapping is also useful to improve the mix of I/O bound and CPU
bound processes in the memory.
Classification of Scheduling Activity
Long-term: which process to admit
Medium-term: which process to swap in or out
Short-term: which ready process to execute next
Key Concepts in CPU Scheduling
• Arrival Time(AT)
The time at which the process enters the ready queue. It can be zero or a positive number.
• Burst Time(BT)
the amount of CPU time the process requires to complete its execution
• Completion Time(CT)
The time at which a process finishes execution. It is the sum of the arrival, waiting, and burst times.
• Turnaround Time(TAT = CT – AT )
The time elapsed between the arrival of a process and its completion
• Waiting Time(WT= TAT – BT )
Process’s duration in the ready queue before it begins executing
• Response Time(RT = CPU allocation time - AT)
It is the duration between the arrival of a process and the first time it
runs
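As a minimal sketch, the formulas above can be combined in a few lines of Python; the function name and the sample values (arrival 2, burst 4, completion 11, first run at 5) are illustrative, not from the slides:

```python
# Sketch of the timing formulas above; the input values are hypothetical.
def metrics(arrival, burst, completion, first_run):
    turnaround = completion - arrival   # TAT = CT - AT
    waiting = turnaround - burst        # WT = TAT - BT
    response = first_run - arrival      # RT = first CPU allocation time - AT
    return turnaround, waiting, response

# A process arriving at t=2 with a 4-unit burst, first scheduled at t=5
# and completing at t=11:
print(metrics(2, 4, 11, 5))  # (9, 5, 3)
```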
Scheduling Algorithms- FCFS(First
Come, First-Served Scheduling)
➢It is the simplest CPU-scheduling algorithm.
➢The process that requests the CPU first is allocated the CPU first.
➢The implementation of the FCFS policy is easily managed with a
FIFO queue.
➢When a process enters the ready queue, its PCB is linked onto the
tail of the queue.
➢When the CPU is free, it is allocated to the process at the head of
the queue.
➢The running process is then removed from the queue.
FCFS (continued..)
Consider the following set of processes that arrive at time 0
Process Id Burst Time (ms)
P1 24
P2 3
P3 3
If the processes arrive in the order P1, P2, P3 and are served in FCFS order, we get the result shown in the following Gantt chart:
| P1 (0–24) | P2 (24–27) | P3 (27–30) |
FCFS (continued..)
Waiting Time for P1 = 0 ms
Waiting Time for P2 = 24 ms
Waiting Time for P3 = 27 ms
Average waiting Time = (0+24+27)/3= 17 ms
If the processes arrive in the order P2, P3, P1, however, the result will be as shown in the following Gantt chart:
| P2 (0–3) | P3 (3–6) | P1 (6–30) |
Waiting Time for P1 = 6 ms
Waiting Time for P2 =0 ms
Waiting Time for P3 = 3 ms
Average waiting Time = (6+0+3)/3= 3 ms
This reduction is substantial. Thus, the average waiting time under the FCFS policy is generally not minimal and may vary substantially if the processes' CPU burst times vary greatly.
The FCFS scheduling algorithm is nonpreemptive.
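The two orderings above can be checked with a short sketch (assuming, as in the example, that all processes arrive at t = 0; the function name is an illustrative choice):

```python
# Minimal FCFS sketch: each process waits for all earlier arrivals to finish.
def fcfs_waiting_times(bursts):
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # waiting time = total burst time of earlier jobs
        clock += b
    return waits

order1 = fcfs_waiting_times([24, 3, 3])   # served as P1, P2, P3
order2 = fcfs_waiting_times([3, 3, 24])   # served as P2, P3, P1
print(order1, sum(order1) / 3)  # [0, 24, 27] 17.0
print(order2, sum(order2) / 3)  # [0, 3, 6] 3.0
```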
FCFS (continued..)
➢Once the CPU has been allocated to a process, that
process keeps the CPU until it releases the CPU,
either by terminating or by requesting I/O.
➢The FCFS algorithm is thus particularly
troublesome for time-sharing systems, where it is
important that each user get a share of the CPU at
regular intervals.
➢It would be disastrous to allow one process to keep
the CPU for an extended period.
FCFS Solved Problem
Consider the set of 5 processes whose arrival time and burst time are
given:
Calculate the average waiting time and average turnaround time, if FCFS
scheduling Algorithm is followed.
Process Id Arrival Time Burst Time
P1 4 5
P2 6 4
P3 0 3
P4 6 2
P5 5 4
Gantt chart: | P3 (0–3) | idle (3–4) | P1 (4–9) | P5 (9–13) | P2 (13–17) | P4 (17–19) |
Turn Around time = Completion time – Arrival time
Waiting time = Turn Around time – Burst time
Process Id   Completion Time   Turnaround Time   Waiting Time
P1           9                 9 – 4 = 5         5 – 5 = 0
P2           17                17 – 6 = 11       11 – 4 = 7
P3           3                 3 – 0 = 3         3 – 3 = 0
P4           19                19 – 6 = 13       13 – 2 = 11
P5           13                13 – 5 = 8        8 – 4 = 4
Average Turn Around Time = (5 + 11 + 3 + 13 + 8 )/5 = 40/5 = 8 units
Average Waiting Time = (0 + 7 + 0 + 11 + 4)/5 = 22/5 = 4.4 units
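A sketch of the same computation, including the idle gap before P1 arrives (the function name `fcfs` and the tuple layout are illustrative choices, not from the source):

```python
# FCFS with arrival times: serve in arrival order, letting the CPU idle
# when no process has arrived yet.
def fcfs(procs):  # procs: list of (pid, arrival, burst)
    clock, rows = 0, []
    for pid, at, bt in sorted(procs, key=lambda p: p[1]):
        clock = max(clock, at)            # CPU may sit idle until arrival
        clock += bt                       # run to completion (non-preemptive)
        ct = clock
        tat, wt = ct - at, (ct - at) - bt # TAT = CT - AT, WT = TAT - BT
        rows.append((pid, ct, tat, wt))
    return rows

rows = fcfs([("P1", 4, 5), ("P2", 6, 4), ("P3", 0, 3),
             ("P4", 6, 2), ("P5", 5, 4)])
avg_tat = sum(r[2] for r in rows) / len(rows)
avg_wt = sum(r[3] for r in rows) / len(rows)
print(rows)             # P3 completes at 3, P1 at 9, P5 at 13, P2 at 17, P4 at 19
print(avg_tat, avg_wt)  # 8.0 4.4
```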
SJF (Shortest Job First)
➢This algorithm associates with each process the length of the
process’s next CPU burst.
➢When the CPU is available, it is assigned to the process that
has the smallest next CPU burst.
➢If the next CPU bursts of two processes are the same, FCFS
scheduling is used to break the tie.
➢The SJF can be either preemptive or nonpreemptive.
➢The more appropriate term for this scheduling will be
Shortest-Next-CPU-Burst Algorithm
because scheduling depends on the length of the CPU burst of
a process, rather than its total length.
Example of SJF Scheduling(Non-Preemptive)
Consider the following set of processes, with the length of the CPU burst given in
milliseconds:
Process Id   Burst Time
P1           6
P2           8
P3           7
P4           3

Gantt chart: | P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |
Waiting Time for P1 = 3 ms
Waiting Time for P2 = 16 ms
Waiting Time for P3 = 9 ms
Waiting Time for P4 = 0 ms
Average Waiting Time = (3+16+9+0)/4 = 7 ms
By comparison, if we use FCFS scheduling, the average waiting time would be 10.25 ms.
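Since all four processes arrive at t = 0, non-preemptive SJF reduces to running them in increasing burst order; a minimal sketch (function name is an illustrative choice):

```python
# Non-preemptive SJF when every process arrives at t=0:
# simply serve in increasing burst order.
def sjf_waiting_times(procs):  # procs: list of (pid, burst)
    waits, clock = {}, 0
    for pid, bt in sorted(procs, key=lambda p: p[1]):
        waits[pid] = clock
        clock += bt
    return waits

w = sjf_waiting_times([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
print(w)                    # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(w.values()) / 4)  # 7.0
```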
SJF Scheduling (Pre-emptive)- Shortest Remaining Time
First(SRTF)
Consider the following set of processes, with the length of the CPU burst given in
milliseconds and processes arrive at given time
Process Id   Arrival Time   Burst Time
P1           0              8
P2           1              4
P3           2              9
P4           3              5

Gantt chart: | P1 (0–1) | P2 (1–5) | P4 (5–10) | P1 (10–17) | P3 (17–26) |
Waiting Time for P1 = (10 – 1 – 0) = 9 ms
Waiting Time for P2 = (1 – 0 – 1) = 0 ms
Waiting Time for P3 = (17 – 0 – 2) = 15 ms
Waiting Time for P4 = (5 – 0 – 3) = 2 ms
Average Waiting Time = (9+0+15+2)/4 = 6.5 ms
Waiting Time = Time at which the process last gets the CPU – No. of milliseconds already executed – Arrival Time
Preemptive SJF scheduling is sometimes called Shortest-Remaining-Time-First
Scheduling
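A unit-by-unit simulation sketch of SRTF for the same task set (the dictionary-based bookkeeping and function name are illustrative choices); it computes each waiting time as completion − arrival − burst, which gives the same values as the per-process arithmetic above:

```python
# Preemptive SJF (SRTF): at every time unit, run the ready process with
# the shortest remaining burst.
def srtf_waiting_times(procs):  # procs: list of (pid, arrival, burst)
    remaining = {pid: bt for pid, at, bt in procs}
    arrival = {pid: at for pid, at, bt in procs}
    finish, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= clock]
        if not ready:
            clock += 1            # CPU idles until the next arrival
            continue
        pid = min(ready, key=lambda p: remaining[p])  # shortest remaining time
        remaining[pid] -= 1
        clock += 1
        if remaining[pid] == 0:
            finish[pid] = clock
            del remaining[pid]
    # WT = (finish - arrival) - burst
    return {pid: finish[pid] - at - bt for pid, at, bt in procs}

w = srtf_waiting_times([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(w)                    # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2}
print(sum(w.values()) / 4)  # 6.5
```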
Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority
(smallest integer highest priority)
◦ Preemptive
◦ Nonpreemptive
SJF is priority scheduling where priority is the inverse of predicted
next CPU burst time
Problem Starvation – low priority processes may never execute
Solution Aging – as time progresses increase the priority of the
process
Example of Priority Scheduling
Process Id   Burst Time   Priority
P1           10           3
P2           1            1
P3           2            4
P4           1            5
P5           5            2
Priority scheduling Gantt chart: | P2 (0–1) | P5 (1–6) | P1 (6–16) | P3 (16–18) | P4 (18–19) |
Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 msec
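With all arrivals at t = 0, non-preemptive priority scheduling reduces to serving in increasing priority-number order (smallest integer = highest priority); a sketch reproducing the 8.2 ms average (function name is an illustrative choice):

```python
# Non-preemptive priority scheduling, all arrivals at t=0:
# serve in increasing priority number (smallest number = highest priority).
def priority_waiting_times(procs):  # procs: list of (pid, burst, priority)
    waits, clock = {}, 0
    for pid, bt, pr in sorted(procs, key=lambda p: p[2]):
        waits[pid] = clock
        clock += bt
    return waits

w = priority_waiting_times([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                            ("P4", 1, 5), ("P5", 5, 2)])
print(w)                    # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(w.values()) / 5)  # 8.2
```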
Round Robin (RR)
▪Each process gets a small unit of CPU time (time quantum
q), usually 10-100 milliseconds.
▪After this time has elapsed, the process is preempted and
added to the end of the ready queue.
▪If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU time in
chunks of at most q time units at once. No process waits
more than (n-1)q time units.
▪Timer interrupts every quantum to schedule next process
▪Performance
◦ q large ⇒ RR behaves like FIFO
◦ q small ⇒ context-switch overhead dominates; q must be large with respect to context-switch time, otherwise overhead is too high
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
The Gantt chart is:
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
Typically, higher average turnaround than SJF, but better response
q should be large compared to context switch time
q usually 10ms to 100ms, context switch < 10 msec
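A minimal round-robin sketch with q = 4 that reproduces the Gantt chart above (all processes arrive at t = 0; the function name and slice representation are illustrative choices):

```python
from collections import deque

# Round robin: run the head of the queue for at most q units, then
# move it to the tail if it still has work left.
def round_robin(procs, q):  # procs: list of (pid, burst)
    queue = deque(procs)
    clock, slices = 0, []
    while queue:
        pid, rem = queue.popleft()
        run = min(q, rem)
        slices.append((pid, clock, clock + run))  # one Gantt-chart slice
        clock += run
        if rem > run:
            queue.append((pid, rem - run))        # preempted: back to the tail
    return slices

for pid, start, end in round_robin([("P1", 24), ("P2", 3), ("P3", 3)], 4):
    print(pid, start, end)   # P1 0-4, P2 4-7, P3 7-10, then P1 runs 10-30
```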
Multiple-Processor Scheduling
▪ CPU scheduling more complex when multiple CPUs are available
▪ Processing load distributed among multiple CPUs
(Figure: processes P1–P7 in shared memory, distributed across CPU1–CPU4)
▪ Multiprocessor systems may be heterogeneous (different kinds of CPUs)
or homogenous (the same CPU).
▪ Types-
1. Asymmetric multiprocessing
2. Symmetric multiprocessing
Multiple-Processor Scheduling Conti….
1. Asymmetric multiprocessing -
▪ Master – Slave system
▪ Only one processor accesses the system data structures, alleviating the need for
data sharing
▪Master handles all scheduling policies, I/O operations, Resource allocation
▪Slave executes only user code.
(Figure: processes P1–P7 in shared memory; a dedicated scheduler CPU decides which process runs on CPU1–CPU4)
▪A dedicated CPU runs the scheduler and decides which process will execute on which CPU
▪Simple to implement
▪Performance degrades when all CPUs must wait on the scheduler CPU
Multiple-Processor Scheduling Conti….
2. Symmetric multiprocessing (SMP) –
▪ All CPUs are equal
▪ All share IO bus and memory
▪ Each processor is self-scheduling
▪ All processes in common ready queue(Global), or each has its own
private queue(Local) of ready processes
Processor affinity –
▪ A process has affinity for the processor on which it is currently running
▪ Also known as "cache affinity"
▪Affinity can bind a process or a thread to a CPU or a group of CPUs, ensuring that the process or thread executes only on the designated CPU(s) and not on any other CPU.
Multiple-Processor Scheduling Conti…
Types of Processor Affinity
1. Soft affinity
▪ In soft affinity, the operating system can still schedule the process on other
processors if necessary.
▪ However, it will try to keep the process on the specified processor(s) as
much as possible.
2. Hard affinity-
▪ In hard affinity, the process is strictly bound to the specified processor(s).
▪ The operating system will not schedule the process on any other processor,
even if it would be more efficient to do so.
If SMP, need to keep all CPUs loaded for efficiency
▪ Load balancing attempts to keep workload evenly distributed
▪ Push migration – a periodic task checks the load on each processor and, if it finds an imbalance, pushes tasks from the overloaded CPU to other CPUs
▪ Pull migration – an idle processor pulls a waiting task from a busy processor
Real-Time CPU Scheduling
▪ Designed for Real Time Applications
▪ A real-time application is a set of coordinated tasks
▪The goal is to execute tasks in a timely manner with minimum delay
▪The scheduler assigns priorities, either static or dynamic, based on the tasks' importance and urgency
▪Categories-
Soft Real-time systems –
• Allows some delay but requires the task to be completed within a time frame
• E.g. Multimedia transmission, Websites and services
Hard Real-time systems –
• Task must be serviced by its deadline
• Delay results in system failure or loss of data
• E.g. Air traffic control, vehicle subsystem control
Rate Monotonic Scheduling
▪ Rate (priority depends on the task's period) and monotonic (priority changes in one direction only: it increases as the period decreases)
▪Static priority allocation
▪A priority is assigned based on the inverse of its period
▪Shorter periods = higher priority
▪Longer periods = lower priority
▪Example: consider the following periodic task set.
Processes Execution Time (C) Time period (T)
P1 3 20
P2 2 5
P3 2 10
Priorities- P2>P3>P1
Rate Monotonic Scheduling Conti….
▪Each task runs for its execution time once per period: P2 runs for 2 time units in every 5-time-unit period, P3 for 2 time units in every 10, and P1 for 3 time units in every 20.
▪Gantt chart (over one 20-unit hyperperiod):
| P2 (0–2) | P3 (2–4) | P1 (4–5) | P2 (5–7) | P1 (7–9) | idle (9–10) | P2 (10–12) | P3 (12–14) | idle (14–15) | P2 (15–17) | idle (17–20) |
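As a side check, the Liu & Layland utilization bound (a standard sufficient test for RM schedulability, U ≤ n(2^(1/n) − 1); not stated in the slides) can be evaluated for this task set:

```python
# Liu & Layland test: n periodic tasks are guaranteed RM-schedulable if
# total utilization U = sum(C_i / T_i) <= n * (2**(1/n) - 1).
def rm_schedulable(tasks):  # tasks: list of (C, T)
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return u, bound, u <= bound

u, bound, ok = rm_schedulable([(3, 20), (2, 5), (2, 10)])
print(round(u, 2), round(bound, 3), ok)  # 0.75 0.78 True
```

Here U = 0.75 is below the n = 3 bound of about 0.78, so the task set is guaranteed schedulable under RM (the test is sufficient but not necessary).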
Earliest Deadline First Scheduling (EDF)
▪Dynamic priorities allocation
▪Priorities are assigned according to deadlines
▪Priority of a task is inversely proportional to its absolute deadline
▪ The earlier the deadline, the higher the priority
▪ The later the deadline, the lower the priority
▪Executes in preemptive mode, i.e., the currently executing task is preempted whenever another task with an earlier deadline becomes active.
▪Period: The task recurs at regular intervals of time.
▪Deadline: The time by which the operation of the process must complete.
▪Execution time: The time required to execute the process, i.e., the CPU time.
Earliest Deadline First Scheduling Conti….
▪Example: consider three periodic tasks with periods 3, 4, and 8 time units.
▪Here the deadline is not mentioned, so deadline = period
▪Hyper period= LCM(3,4,8)= 24
(Figure: periodic task timing diagram over the 24-unit hyperperiod)
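The hyperperiod computation above (LCM of the periods, the window over which a periodic schedule repeats) can be sketched as:

```python
from math import gcd
from functools import reduce

# Hyperperiod = LCM of all task periods; an EDF (or RM) schedule need
# only be simulated over one hyperperiod, since it then repeats.
def hyperperiod(periods):
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

print(hyperperiod([3, 4, 8]))  # 24
```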