CSE-316 Unit 2 (CPU Scheduling)




1. Introduction to CPU Scheduling

Definition:

CPU scheduling is the process by which an operating system (OS) decides which process in the ready queue should be allocated the CPU for execution. Since the CPU is one of the most critical resources, effective CPU scheduling ensures optimal performance by keeping the CPU busy while minimizing waiting time and improving throughput.
Goal of CPU Scheduling:

The main goal is to maximize the utilization of the CPU, minimize response time for interactive systems, reduce waiting time for processes, and ensure fairness among processes.

Types of CPU Scheduling:

  1. Preemptive Scheduling: The OS can interrupt a running process and allocate the CPU to another process. This type of scheduling is used to ensure quick response to critical tasks or events.
  2. Non-preemptive Scheduling: Once a process is allocated the CPU, it cannot be preempted until it voluntarily relinquishes control (e.g., by waiting for I/O) or finishes execution.

2. CPU Scheduler and Dispatcher

CPU Scheduler:

The CPU scheduler is a part of the OS responsible for selecting the next process from the ready queue and allocating the CPU to it. The decision is based on scheduling algorithms, which we will discuss later.

  1. Preemptive Scheduler: This scheduler allows a process to be interrupted and another process to be assigned to the CPU, enabling multi-tasking and responsiveness.
  2. Non-preemptive Scheduler: This scheduler allows processes to run to completion or until they voluntarily release control.

Types of Schedulers:

1. Long-Term Scheduler (Job Scheduler)

  • Definition: The long-term scheduler controls the admission of jobs into the system, determining which processes are to be brought into the ready queue from the job pool.
  • Uses: It decides which jobs should be loaded into memory for execution.
  • Why We Use: To control the degree of multiprogramming and maintain system performance.
  • Advantages:
    • Optimizes CPU utilization by maintaining a balanced load.
    • Can prioritize jobs based on specific criteria (e.g., resource requirements).
  • Disadvantages:
    • If it takes too long to decide which jobs to admit, it can lead to increased turnaround time.
  • Example: In a batch processing system, the long-term scheduler decides which jobs from a queue should be executed next.

2. Short-Term Scheduler (CPU Scheduler)

  • Definition: The short-term scheduler selects one of the processes from the ready queue and allocates CPU time to it.
  • Uses: It determines which process runs next when the CPU becomes available.
  • Why We Use: To manage process execution efficiently and ensure responsiveness in interactive systems.
  • Advantages:
    • Fast decision-making leads to better response times.
    • Can quickly adapt to changes in process priorities and system load.
  • Disadvantages:
    • May lead to starvation if certain processes are consistently prioritized over others.
  • Example: When a process is interrupted, the short-term scheduler picks the next process from the ready queue to execute on the CPU.

3. Medium-Term Scheduler

  • Definition: The medium-term scheduler manages the swapping of processes in and out of memory (also called swapping).
  • Uses: It temporarily removes processes from the main memory and keeps them on disk (swapping out) and later brings them back (swapping in) based on resource availability.
  • Why We Use: To control the multiprogramming level and optimize resource usage.
  • Advantages:
    • Allows for better memory management and can free up resources for other processes.
    • Helps maintain a balance between CPU-bound and I/O-bound processes.
  • Disadvantages:
    • Swapping processes in and out can introduce overhead, affecting performance.
  • Example: In a situation where RAM is full, the medium-term scheduler can swap out a less-critical process to allow a higher-priority process to run.

4. Real-Time Scheduler

  • Definition: The real-time scheduler is designed for real-time operating systems that require immediate processing of certain tasks to meet timing constraints.
  • Uses: It manages processes that have specific timing requirements, ensuring that they complete within a defined time frame.
  • Why We Use: To guarantee that critical tasks are executed on time, which is essential in applications like embedded systems or industrial control systems.
  • Advantages:
    • Ensures that time-sensitive tasks are prioritized and executed promptly.
    • Reduces latency for high-priority tasks.
  • Disadvantages:
    • Complexity in managing timing constraints.
    • May lead to starvation of non-real-time processes.
  • Example: In a system controlling a robotic arm, the real-time scheduler ensures that commands are executed within strict timing limits to maintain synchronization.

5. Thread Scheduler

  • Definition: The thread scheduler manages the execution of threads within a process, determining the order in which threads will run.
  • Uses: It allocates CPU time to different threads based on their states and priorities.
  • Why We Use: To improve the responsiveness and efficiency of multi-threaded applications.
  • Advantages:
    • Allows for better utilization of CPU resources.
    • Enables concurrency, improving application performance.
  • Disadvantages:
    • Increased complexity in scheduling decisions.
    • Potential for race conditions if not managed properly.
  • Example: In a web server, the thread scheduler manages multiple threads handling incoming client requests simultaneously.

Dispatcher:

The dispatcher is responsible for the actual switching between processes. It performs tasks such as:

  • Switching context: Saving the state of the currently running process and loading the state of the next process.
  • Switching to user mode: Ensuring the CPU is operating in user mode for normal process execution.
  • Jumping to the appropriate location in the program: Restarting the next process at the correct instruction.

Dispatcher Latency:

This refers to the time it takes for the dispatcher to stop one process and start another. Minimizing this latency is crucial for high-performance systems.




3. Scheduling Criteria

When evaluating scheduling algorithms, several criteria are considered:

  • CPU Utilization: The percentage of time the CPU is busy. The goal is to keep the CPU as busy as possible, ideally close to 100%.
  • Throughput: The number of processes completed per unit of time. High throughput indicates that the system is efficient.
  • Turnaround Time: The total time taken from the submission of a process to its completion. Minimizing this time is desirable, especially in batch systems.
  • Waiting Time: The total time a process spends in the ready queue waiting to be executed. Lower waiting times mean better system responsiveness.
  • Response Time: The time from the submission of a request until the first response is produced. For interactive systems, a short response time is critical.
  • Fairness: Ensuring that each process gets a fair share of the CPU without being starved or delayed excessively.
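Turnaround time and waiting time follow directly from a process's timestamps. A small sketch in Python (the function names are illustrative; all times are assumed to be in ms):

```python
# Turnaround and waiting time from a process's timestamps.
# Function names and the ms unit are illustrative assumptions.
def turnaround_time(arrival, completion):
    """Total time from submission to completion."""
    return completion - arrival

def waiting_time(arrival, completion, burst):
    """Time in the ready queue: turnaround minus actual CPU time."""
    return turnaround_time(arrival, completion) - burst

# A process that arrives at 0ms, needs 10ms of CPU, and finishes at 17ms
print(turnaround_time(0, 17))    # 17
print(waiting_time(0, 17, 10))   # 7
```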

4. Scheduling Algorithms

a. First Come First Serve (FCFS)

  • Definition: Processes are executed in the order they arrive in the ready queue.
  • Advantages:
    • Simple to understand and implement.
    • Non-preemptive, meaning minimal context switching overhead.
  • Disadvantages:
    • Convoy effect: If a long process is at the front of the queue, all other processes must wait, leading to long average waiting times.
Example: If processes arrive in the order P1 (burst time 10ms), P2 (burst time 5ms), and P3 (burst time 2ms), P1 will be executed first, followed by P2 and then P3, regardless of their execution times.

Given Data:

  • P1: Burst Time (BT) = 10ms
  • P2: Burst Time (BT) = 5ms
  • P3: Burst Time (BT) = 2ms

The processes arrive in the order P1, P2, P3.

Step 1: Calculating Waiting Time (WT)
  • Waiting Time is the time a process spends waiting in the ready queue before it gets executed.

For P1:

  • Since P1 arrives first, it does not have to wait.
    WT(P1) = 0ms

For P2:

  • P2 arrives after P1, so it has to wait for the burst time of P1 to finish.
    WT(P2) = BT(P1) = 10ms

For P3:

  • P3 arrives after P2, so it has to wait for both P1 and P2 to complete execution.
    WT(P3) = BT(P1) + BT(P2) = 10ms + 5ms = 15ms

Step 2: Calculating Turnaround Time (TAT)

  • Turnaround Time is the total time from the arrival of a process to its completion.

For P1:

  • TAT(P1) = BT(P1) + WT(P1) = 10ms + 0ms = 10ms

For P2:

  • TAT(P2) = BT(P2) + WT(P2) = 5ms + 10ms = 15ms

For P3:

  • TAT(P3) = BT(P3) + WT(P3) = 2ms + 15ms = 17ms

Step 3: Summary of Results

Process | Burst Time (BT) | Waiting Time (WT) | Turnaround Time (TAT)
P1      | 10ms            | 0ms               | 10ms
P2      | 5ms             | 10ms              | 15ms
P3      | 2ms             | 15ms              | 17ms

Step 4: Calculating Averages

  • Average Waiting Time (AWT) = (WT(P1) + WT(P2) + WT(P3)) / 3
    AWT = (0ms + 10ms + 15ms) / 3 = 25ms / 3 = 8.33ms

  • Average Turnaround Time (ATT) = (TAT(P1) + TAT(P2) + TAT(P3)) / 3
    ATT = (10ms + 15ms + 17ms) / 3 = 42ms / 3 = 14ms

Final Results:

  • Average Waiting Time (AWT) = 8.33ms
  • Average Turnaround Time (ATT) = 14ms

In the FCFS algorithm, P1 is executed first, followed by P2 and then P3.
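The hand calculation above can be checked with a short simulation, sketched here under the same assumption that all three processes arrive at time 0:

```python
# FCFS sketch: processes run in arrival order; all arrive at time 0.
def fcfs(burst_times):
    waiting, elapsed = [], 0
    for bt in burst_times:
        waiting.append(elapsed)    # time spent waiting before running
        elapsed += bt
    turnaround = [w + bt for w, bt in zip(waiting, burst_times)]
    return waiting, turnaround

wt, tat = fcfs([10, 5, 2])                  # P1, P2, P3
print(wt)                                   # [0, 10, 15]
print(tat)                                  # [10, 15, 17]
print(round(sum(wt) / 3, 2), sum(tat) / 3)  # 8.33 14.0
```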


b. Shortest Job First (SJF)

  • Definition: The process with the shortest burst time is executed first.
  • Advantages: Minimizes the average waiting time by prioritizing shorter processes.
  • Disadvantages:
    • Can lead to starvation: Longer processes may never get a chance to execute if short jobs keep arriving.
    • Difficult to predict burst times accurately.
Example: For processes with burst times P1 (5ms), P2 (3ms), and P3 (2ms), P3 will be executed first, followed by P2, then P1.

Given Data:

  • P1: Burst Time (BT) = 5ms
  • P2: Burst Time (BT) = 3ms
  • P3: Burst Time (BT) = 2ms

The order of execution, based on burst time (Shortest Job First), is:

  1. P3 (2ms)
  2. P2 (3ms)
  3. P1 (5ms)

Step 1: Calculating Waiting Time (WT)

  • Waiting Time is the time a process spends waiting in the ready queue before it gets executed.

For P3:

  • P3 is executed first, so it doesn't have to wait.
    WT(P3) = 0ms

For P2:

  • P2 has to wait for P3 to finish its execution.
    WT(P2) = BT(P3) = 2ms

For P1:

  • P1 has to wait for both P3 and P2 to finish.
    WT(P1) = BT(P3) + BT(P2) = 2ms + 3ms = 5ms

Step 2: Calculating Turnaround Time (TAT)

  • Turnaround Time is the total time from the arrival of a process to its completion.

For P3:

  • TAT(P3) = BT(P3) + WT(P3) = 2ms + 0ms = 2ms

For P2:

  • TAT(P2) = BT(P2) + WT(P2) = 3ms + 2ms = 5ms

For P1:

  • TAT(P1) = BT(P1) + WT(P1) = 5ms + 5ms = 10ms

Step 3: Summary of Results

Process | Burst Time (BT) | Waiting Time (WT) | Turnaround Time (TAT)
P3      | 2ms             | 0ms               | 2ms
P2      | 3ms             | 2ms               | 5ms
P1      | 5ms             | 5ms               | 10ms

Step 4: Calculating Averages

  • Average Waiting Time (AWT) = (WT(P1) + WT(P2) + WT(P3)) / 3
    AWT = (5ms + 2ms + 0ms) / 3 = 7ms / 3 = 2.33ms

  • Average Turnaround Time (ATT) = (TAT(P1) + TAT(P2) + TAT(P3)) / 3
    ATT = (10ms + 5ms + 2ms) / 3 = 17ms / 3 = 5.67ms

Final Results:

  • Average Waiting Time (AWT) = 2.33ms
  • Average Turnaround Time (ATT) = 5.67ms

In the SJF algorithm, the processes are executed in the order of P3 (2ms), P2 (3ms), and then P1 (5ms), minimizing the average waiting and turnaround times.
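Non-preemptive SJF is FCFS applied to a queue sorted by burst time; a sketch (again assuming all processes arrive at time 0):

```python
# Non-preemptive SJF sketch: sort by burst time, then run in that order.
def sjf(bursts):
    order = sorted(bursts, key=bursts.get)   # shortest burst first
    waiting, elapsed = {}, 0
    for p in order:
        waiting[p] = elapsed                 # waits for all shorter jobs
        elapsed += bursts[p]
    turnaround = {p: waiting[p] + bursts[p] for p in bursts}
    return order, waiting, turnaround

order, wt, tat = sjf({"P1": 5, "P2": 3, "P3": 2})
print(order)   # ['P3', 'P2', 'P1']
print(wt)      # {'P3': 0, 'P2': 2, 'P1': 5}
```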


c. Round Robin (RR)


  • Definition: Each process is assigned a fixed time quantum, and the CPU cycles through all the processes in the ready queue.
  • Advantages:
    • Fair for all processes.
    • Preemptive, so every process gets CPU time without excessive waiting.
  • Disadvantages:
    • Performance depends on the size of the time quantum. Too small a quantum leads to excessive context switching; too large a quantum behaves like FCFS.
Example: If three processes P1, P2, and P3 are assigned a time quantum of 4ms, each process will get 4ms to execute before the next one starts.

Given Data:

  • P1: Burst Time (BT) = 10ms
  • P2: Burst Time (BT) = 5ms
  • P3: Burst Time (BT) = 2ms
  • Time Quantum (TQ) = 4ms

Step 1: Understanding the Execution Order

  • Each process is executed for 4ms per turn in the Round Robin order.
  • If a process has more burst time remaining after 4ms, it is placed back in the queue and waits for the next cycle.

Step 2: Calculating Waiting Time (WT) and Turnaround Time (TAT)

We'll track the remaining burst time of each process after each time quantum and calculate the waiting and turnaround times.

Process | Burst Time (BT) | Time Quantum (TQ) | Remaining Time after TQ
P1      | 10ms            | 4ms               | 6ms
P2      | 5ms             | 4ms               | 1ms
P3      | 2ms             | 4ms               | 0ms (finished)

After the first cycle:

  • P1 executes for 4ms, with 6ms remaining.
  • P2 executes for 4ms, with 1ms remaining.
  • P3 executes for 2ms and finishes execution (remaining time = 0ms).

Second cycle (only P1 and P2 remain in the queue):

Process | Remaining Time | Time Quantum (TQ) | Remaining Time after TQ
P1      | 6ms            | 4ms               | 2ms
P2      | 1ms            | 1ms               | 0ms (finished)

After the second cycle:

  • P1 executes for 4ms, with 2ms remaining.
  • P2 executes for 1ms and finishes execution (remaining time = 0ms).

Third cycle (only P1 remains):

Process | Remaining Time | Time Quantum (TQ) | Remaining Time after TQ
P1      | 2ms            | 2ms               | 0ms (finished)

After the third cycle:

  • P1 finishes execution in this cycle.

Step 3: Calculating Waiting Time (WT)

From the cycle-by-cycle trace above, the completion times are: P3 at 10ms, P2 at 15ms, and P1 at 17ms. Since all processes arrive at time 0, Waiting Time = Completion Time − Burst Time.

  • WT(P1) = 17ms − 10ms = 7ms
  • WT(P2) = 15ms − 5ms = 10ms
  • WT(P3) = 10ms − 2ms = 8ms

Step 4: Calculating Turnaround Time (TAT)

  • TAT(P1) = WT(P1) + BT(P1) = 7ms + 10ms = 17ms
  • TAT(P2) = WT(P2) + BT(P2) = 10ms + 5ms = 15ms
  • TAT(P3) = WT(P3) + BT(P3) = 8ms + 2ms = 10ms

Step 5: Summary of Results

Process | Burst Time (BT) | Waiting Time (WT) | Turnaround Time (TAT)
P1      | 10ms            | 7ms               | 17ms
P2      | 5ms             | 10ms              | 15ms
P3      | 2ms             | 8ms               | 10ms

Step 6: Calculating Averages

  • Average Waiting Time (AWT) = (WT(P1) + WT(P2) + WT(P3)) / 3
    AWT = (7ms + 10ms + 8ms) / 3 = 25ms / 3 = 8.33ms

  • Average Turnaround Time (ATT) = (TAT(P1) + TAT(P2) + TAT(P3)) / 3
    ATT = (17ms + 15ms + 10ms) / 3 = 42ms / 3 = 14ms


Final Results:

  • Average Waiting Time (AWT) = 8.33ms
  • Average Turnaround Time (ATT) = 14ms

In the Round Robin scheduling algorithm with a time quantum of 4ms, each process gets a fair share of CPU time, and processes are executed in turns.
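The cycle-by-cycle trace can be automated with a ready queue; a minimal sketch (all processes assumed to arrive at time 0):

```python
from collections import deque

# Round Robin sketch: each process runs for at most one quantum per turn.
def round_robin(bursts, quantum):
    remaining = dict(bursts)
    queue = deque(bursts)              # ready queue of process names
    clock, completion = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])
        clock += run
        remaining[p] -= run
        if remaining[p] == 0:
            completion[p] = clock      # process finishes here
        else:
            queue.append(p)            # back to the end of the queue
    turnaround = {p: completion[p] for p in bursts}           # arrival = 0
    waiting = {p: turnaround[p] - bursts[p] for p in bursts}
    return waiting, turnaround

wt, tat = round_robin({"P1": 10, "P2": 5, "P3": 2}, quantum=4)
print(wt)    # {'P1': 7, 'P2': 10, 'P3': 8}
print(tat)   # {'P1': 17, 'P2': 15, 'P3': 10}
```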


d. Priority Scheduling

  • Definition: Processes are scheduled based on their priority, with higher-priority processes getting the CPU first.
  • Advantages: Critical or important tasks can be prioritized.
  • Disadvantages:
    • Starvation: Low-priority processes may be indefinitely delayed if higher-priority tasks keep arriving.
Example: If processes have priorities P1 (priority 2), P2 (priority 1), and P3 (priority 3), P2 will be executed first, followed by P1, then P3.

Given Data:

  • P1: Priority = 2
  • P2: Priority = 1
  • P3: Priority = 3

Execution Order Based on Priorities:

  • P2 has the highest priority (priority 1), so it will be executed first.
  • P1 has the next highest priority (priority 2), so it will be executed second.
  • P3 has the lowest priority (priority 3), so it will be executed last.

Assuming Burst Times:

Let's assume the following burst times for the processes:

  • P1: Burst Time (BT) = 5ms
  • P2: Burst Time (BT) = 3ms
  • P3: Burst Time (BT) = 4ms

Step 1: Calculating Waiting Time (WT)

  • Waiting Time (WT) is the total time a process waits in the ready queue before execution.

For P2:

  • Since P2 is executed first, its waiting time is 0ms.
    WT(P2) = 0ms

For P1:

  • P1 has to wait for P2 to finish execution.
    WT(P1) = BT(P2) = 3ms

For P3:

  • P3 has to wait for both P2 and P1 to finish execution.
    WT(P3) = BT(P2) + BT(P1) = 3ms + 5ms = 8ms

Step 2: Calculating Turnaround Time (TAT)

  • Turnaround Time (TAT) is the total time from the arrival of a process to its completion (Waiting Time + Burst Time).

For P2:

  • TAT(P2) = BT(P2) + WT(P2) = 3ms + 0ms = 3ms

For P1:

  • TAT(P1) = BT(P1) + WT(P1) = 5ms + 3ms = 8ms

For P3:

  • TAT(P3) = BT(P3) + WT(P3) = 4ms + 8ms = 12ms

Step 3: Summary of Results

Process | Priority | Burst Time (BT) | Waiting Time (WT) | Turnaround Time (TAT)
P2      | 1        | 3ms             | 0ms               | 3ms
P1      | 2        | 5ms             | 3ms               | 8ms
P3      | 3        | 4ms             | 8ms               | 12ms

Step 4: Calculating Averages

  • Average Waiting Time (AWT) = (WT(P1) + WT(P2) + WT(P3)) / 3
    AWT = (3ms + 0ms + 8ms) / 3 = 11ms / 3 = 3.67ms

  • Average Turnaround Time (ATT) = (TAT(P1) + TAT(P2) + TAT(P3)) / 3
    ATT = (8ms + 3ms + 12ms) / 3 = 23ms / 3 = 7.67ms


Final Results:

  • Average Waiting Time (AWT) = 3.67ms
  • Average Turnaround Time (ATT) = 7.67ms

In Priority Scheduling, the processes are executed in order of their priorities, with P2 (priority 1) being executed first, followed by P1 (priority 2), and P3 (priority 3). This ensures that higher priority processes are executed earlier.
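Non-preemptive priority scheduling is again FCFS over a sorted queue, this time sorted by priority; a sketch (lower number = higher priority, all processes assumed to arrive at time 0):

```python
# Non-preemptive priority scheduling sketch.
# Lower priority number = higher priority; all arrive at time 0.
def priority_schedule(procs):
    # procs: list of (name, burst_ms, priority)
    order = sorted(procs, key=lambda p: p[2])    # highest priority first
    waiting, elapsed = {}, 0
    for name, burst, _ in order:
        waiting[name] = elapsed
        elapsed += burst
    turnaround = {name: waiting[name] + burst for name, burst, _ in procs}
    return [name for name, _, _ in order], waiting, turnaround

order, wt, tat = priority_schedule([("P1", 5, 2), ("P2", 3, 1), ("P3", 4, 3)])
print(order)  # ['P2', 'P1', 'P3']
print(wt)     # {'P2': 0, 'P1': 3, 'P3': 8}
```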


e. Multi-Level Feedback Queue (MLFQ)

  • Definition: Multiple queues are used, each with a different priority level. Processes can move between queues based on their behavior and execution history.
  • Advantages:
    • Adaptive scheduling allows shorter processes to be executed quickly and demotes CPU-bound processes.
  • Disadvantages:
    • Complex to implement and tune.
  • Example: A CPU-bound process might start in a higher-priority queue, but as it continues to use the CPU, it may be moved to a lower-priority queue, while I/O-bound processes are prioritized.
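A two-queue sketch of the MLFQ idea follows. The demotion rule here is an assumption for illustration: a process that uses its full quantum without finishing drops to the low-priority queue.

```python
from collections import deque

# Minimal two-level MLFQ sketch. Assumed rules: new processes start in
# the high-priority queue with a short quantum; a process that fails to
# finish within its quantum is demoted to the low-priority queue.
def mlfq(bursts, q_high=2, q_low=4):
    remaining = dict(bursts)
    high, low = deque(bursts), deque()
    clock, completion = 0, {}
    while high or low:
        if high:                           # top queue always served first
            p, quantum = high.popleft(), q_high
        else:
            p, quantum = low.popleft(), q_low
        run = min(quantum, remaining[p])
        clock += run
        remaining[p] -= run
        if remaining[p] == 0:
            completion[p] = clock
        else:
            low.append(p)                  # used its full quantum: demote
    return completion

# The short interactive-style job finishes early; the CPU-bound job is
# demoted after its first quantum and completes later.
print(mlfq({"cpu_bound": 6, "short": 1}))   # {'short': 3, 'cpu_bound': 7}
```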

5. Advanced Scheduling Concepts

f. Multiprocessor Scheduling

  • Definition: Scheduling in systems with multiple CPUs, where the OS must balance the load across processors.
  • Advantages:
    • Exploits parallelism, leading to faster processing.
  • Disadvantages:
    • Load balancing can be complex.
  • Example: In a multicore processor system, the OS ensures that tasks are distributed evenly across cores, so no single core is overwhelmed while others are idle.

g. Real-Time Scheduling

Definition: Scheduling for systems with strict timing constraints, like embedded systems or critical applications.

  • Types:

    • Hard Real-Time: Must meet deadlines; missing a deadline can lead to catastrophic failure.
    • Soft Real-Time: Deadlines are important but not critical; missing a deadline reduces system performance but is not disastrous.

Advantages: Ensures processes are executed within their time constraints.

Disadvantages: Hard to implement due to the need for predictable execution times.

Example: A real-time OS controlling an anti-lock braking system (ABS) in a car must process sensor input within milliseconds to avoid failure.

h. Thread Scheduling

  • Definition: Scheduling individual threads within a process, often in multi-threaded applications.
  • Advantages:
    • Allows multiple threads of a single process to execute concurrently, improving efficiency.
  • Disadvantages:
    • Managing thread synchronization and preventing race conditions can be complex.
  • Example: In a web server, multiple threads handle different client requests simultaneously.


6. Performance Analysis and Problem-Solving

  • Simulate Algorithms:
    • Given a set of processes with arrival and burst times, simulate scheduling algorithms such as FCFS, SJF, and Round Robin.
    • Calculate metrics like average waiting time and turnaround time.
  • Effect of Time Quantum in Round Robin:
    • Analyze how different time quanta affect the number of context switches and system performance.
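The quantum's effect on overhead can be made concrete by counting dispatches under Round Robin for different quanta; a sketch using the same example processes as section 4c (dispatch count is a rough proxy for context-switch overhead):

```python
from collections import deque

# Count Round Robin dispatches for a given quantum; all arrive at time 0.
def dispatch_count(bursts, quantum):
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    dispatches = 0
    while queue:
        i = queue.popleft()
        remaining[i] -= min(quantum, remaining[i])
        dispatches += 1
        if remaining[i]:
            queue.append(i)                # not finished: requeue
    return dispatches

for tq in (1, 4, 10):
    print(tq, dispatch_count([10, 5, 2], tq))   # 17, 6, 3 dispatches
# A tiny quantum multiplies dispatches (and thus context-switch cost);
# a very large quantum degenerates to FCFS, one dispatch per process.
```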

7. Issues in CPU Scheduling

a. Starvation

  • Definition: A process may never get scheduled if higher-priority tasks keep arriving, leading to indefinite waiting.
  • Solution: Aging is a technique where the priority of a process increases the longer it waits in the queue, preventing starvation.
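Aging can be sketched as an adjustment to the priority comparison. The specific rule below (effective priority improves by one level per 10ms of waiting) is an assumed policy for illustration:

```python
# Aging sketch. Assumed rule: effective priority improves by one level
# for every 10ms spent waiting; lower value = higher priority.
def pick_next(ready, clock):
    # ready: list of (name, base_priority, arrival_time_ms)
    def effective(proc):
        name, base, arrival = proc
        return base - (clock - arrival) // 10
    return min(ready, key=effective)[0]

ready = [("old_low_prio", 5, 0), ("new_high_prio", 1, 40)]
# By 40ms the low-priority job has aged from priority 5 to 1, so it is
# no longer starved; on a tie, min() keeps the longest-waiting entry.
print(pick_next(ready, clock=40))   # old_low_prio
```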

b. Deadlock and Scheduling

  • Definition: Poor scheduling can contribute to deadlocks in the system, where processes wait on each other indefinitely.

8. Practical Applications and Real-World Implementations

  • Linux CPU Scheduling:

    • Linux uses the Completely Fair Scheduler (CFS), which balances fairness with system performance. It divides CPU time proportionally among processes.
  • Windows CPU Scheduling:

    • Windows uses a priority-based scheduler, where the OS dynamically adjusts the priorities of processes based on their behavior.
  • Mobile OS Scheduling:

    • Mobile operating systems like Android and iOS optimize CPU scheduling to minimize power consumption and maximize responsiveness.
  • Cloud and Virtualization Scheduling:

    • In cloud computing environments, virtual machines (VMs) share physical CPU resources. The hypervisor schedules these virtual CPUs to ensure fairness and efficiency.


9. Recent Trends and Advancements

  • Multi-Core and Distributed Systems:

    • Scheduling algorithms are evolving to handle the complexity of multi-core systems, ensuring that processes can run efficiently across multiple processors.
  • Cloud Environments:

    • Virtual machine scheduling in cloud platforms is critical for balancing performance and resource usage in large-scale data centers.
  • Energy-Aware Scheduling:

    • Modern scheduling algorithms are focusing on reducing energy consumption, particularly in mobile and embedded systems, by dynamically adjusting CPU speed and usage.


10. Summary and Conclusion

  • Overview of Scheduling Algorithms:

    • Different algorithms are suited for different types of systems. Real-time systems prioritize response time, while batch systems focus on maximizing throughput.
  • Choosing the Right Algorithm:

    • The choice of scheduling algorithm depends on the specific needs of the system, such as whether it is interactive, batch, or real-time.
  • Future Directions:

    • Future research in CPU scheduling focuses on better handling of multi-core processors, distributed systems, cloud environments, and energy efficiency.

Thank you
