- CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.
- The major distinction among scheduling techniques is whether they are preemptive or non-preemptive.
- The scheduling scheme in which, once the CPU has been allocated to a process, the process keeps the CPU until it releases it (by terminating or by moving to the waiting state), is called non-preemptive.
- The scheduling scheme in which, once the CPU has been allocated to a process, the CPU can be taken away from the process before it finishes, is called preemptive.
Preemptive Scheduling:
- Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state. The resources (mainly CPU cycles) are allocated to the process for a limited amount of time and then taken away; if the process still has CPU burst time remaining, it is placed back in the ready queue, where it stays till it gets its next chance to execute.
- Example: SJF (in its preemptive form, Shortest Remaining Time First), Round Robin Algorithm; a small Round Robin sketch follows.
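As a rough illustration, here is a minimal Round Robin sketch in Python. The process names, burst times, and the quantum of 2 are made-up values; arrival times and context-switch overhead are ignored.

```python
from collections import deque

def round_robin(processes, quantum=2):
    ready = deque(processes)           # FIFO ready queue of (name, remaining burst)
    time = 0
    completion = {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)  # run for one quantum, or less if the burst ends sooner
        time += run
        remaining -= run
        if remaining > 0:
            ready.append((name, remaining))   # preempted: back to the ready queue
        else:
            completion[name] = time           # process finished its CPU burst
    return completion

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)]))
# {'P3': 5, 'P2': 8, 'P1': 9}
```

Because the quantum usually expires before a burst finishes, P1 and P2 are each preempted and requeued before they complete.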
Non-Preemptive Scheduling:
- Non-preemptive scheduling is used when a process terminates or switches from the running state to the waiting state. In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it terminates or reaches a waiting state. Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution; instead, it waits till the process completes its CPU burst, and only then allocates the CPU to another process.
- Example: FCFS Algorithm; a minimal FCFS sketch follows.
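A minimal FCFS sketch, using the same made-up burst times as the Round Robin example and assuming all processes arrive at time 0:

```python
def fcfs(bursts):
    time = 0
    completion = {}
    for name, burst in bursts:
        time += burst              # the CPU is held for the whole burst
        completion[name] = time    # no preemption before the burst finishes
    return completion

print(fcfs([("P1", 5), ("P2", 3), ("P3", 1)]))
# {'P1': 5, 'P2': 8, 'P3': 9}
```

Each process runs to completion in arrival order, so a short job like P3 can be stuck behind longer jobs.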
CPU utilization:
- It determines how much of the time the CPU is kept busy. CPU utilization may range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
- In short, it is the percentage of time the processor is busy.
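A quick numeric illustration of the definition above, using made-up figures:

```python
# Hypothetical measurement: over a 100 ms observation window the CPU
# spent 72 ms executing processes and 28 ms idle.
busy_time_ms, window_ms = 72, 100
cpu_utilization = busy_time_ms / window_ms * 100    # percentage of time the CPU was busy
print(f"CPU utilization: {cpu_utilization:.0f}%")   # CPU utilization: 72%
```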
Throughput:
- It refers to the amount of work completed in a unit of time. One way to measure throughput is the number of processes completed per unit of time.
- In short, it is the number of processes completed per unit time (see the small example below).
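A quick illustration with made-up figures:

```python
# Hypothetical measurement: 12 processes completed over a 60-second interval.
completed, interval_s = 12, 60
throughput = completed / interval_s     # processes completed per second
print(f"Throughput: {throughput:.1f} processes/second")   # Throughput: 0.2 processes/second
```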
Turnaround time:
- It may be defined as the interval from the time of submission of a process to the time of its completion.
- Turnaround time = actual execution time (CPU time) + time spent waiting for resources
- In short, it is how long a process takes from submission to completion (a small worked example follows).
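A worked example with made-up numbers, just to show how the pieces fit together:

```python
# Hypothetical single process: submitted at t=2, completes at t=16,
# and spends 6 of those 14 time units actually executing on the CPU.
submission_time, completion_time, cpu_time = 2, 16, 6
turnaround_time = completion_time - submission_time        # 14
time_waiting_for_resources = turnaround_time - cpu_time    # 8 (the rest of the interval)
print(turnaround_time, time_waiting_for_resources)         # 14 8
```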
Fairness:
- It is a measure of whether each process gets a fair share of the CPU, i.e., it checks that no process suffers starvation.
Waiting time:
- In multiprogramming, several jobs reside in memory at a time, but the CPU executes only one job at a time; the rest of the jobs wait for the CPU.
- In short, it is how long a process spends waiting in the ready queue.
Waiting time = turnaround time – actual processing time (CPU time)
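Applying this formula to the hypothetical FCFS schedule sketched earlier (P1 = 5, P2 = 3, P3 = 1, all arriving at t = 0):

```python
# Completion times taken from the FCFS sketch above; arrival time is 0
# for every process, so turnaround time equals completion time here.
bursts     = {"P1": 5, "P2": 3, "P3": 1}
completion = {"P1": 5, "P2": 8, "P3": 9}
waiting = {p: completion[p] - bursts[p] for p in bursts}   # turnaround - CPU time
print(waiting)   # {'P1': 0, 'P2': 5, 'P3': 8}
```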
Response time:
- It is the interval of time from the submission of a request until the first response is produced. This is the amount of time it takes to start responding, not the time it takes to output the full response. It is mostly considered in time-sharing and real-time systems.
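A small illustration with made-up times, contrasting response time with turnaround time:

```python
# Hypothetical interactive request: submitted at t=2, the process starts
# producing output at t=5, and the full response is finished at t=12.
submission_time, first_output_time, finish_time = 2, 5, 12
response_time = first_output_time - submission_time   # 3: time to *start* responding
turnaround_time = finish_time - submission_time       # 10: time until everything is done
print(response_time, turnaround_time)                 # 3 10
```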