    These are my own notes from reading OPERATING SYSTEM CONCEPTS (6th Edition). My progress was very slow at first, until one day I came across the lecture slides (for the 7th edition) that Professor 冼鏡光 shared online, and studying OS became more and more interesting.
    The chapters in these notes follow the 6th edition of the book; the figures are all taken from Professor 冼鏡光's slides. My thanks to Professor 冼 for sharing so generously and giving me a chance to glimpse the subtleties of operating systems.
   
(If there is any copyright concern, please kindly let me know.)



6.     CPU Scheduling
6.1    Basic Concepts
  • CPU scheduling is the basis of multiprogrammed operating systems.
  • The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization.
  • CPU scheduling is the task of selecting a waiting process from the ready queue and allocating the CPU to it. The CPU is allocated to the selected process by the dispatcher.
            6.1.1  CPU-I/O Burst Cycle
    • Process execution repeats the CPU burst and I/O burst cycle.
    • When a process begins an I/O burst, another process can use the CPU for a CPU burst.
    • A process is CPU-bound if it generates I/O requests infrequently, spending more of its time doing computation.
    • A process is I/O-bound if it spends more of its time doing I/O than doing computation.
    • A CPU-bound process might have a few very long CPU bursts.
    • An I/O-bound process typically has many short CPU bursts.
    • The histogram of CPU-burst times can help us select an appropriate CPU-scheduling algorithm.
                     
            6.1.2  CPU Scheduler
    • When the CPU is idle, the OS must select another process to run.
    • This selection process is carried out by the short-term scheduler (or CPU scheduler).
    • The CPU scheduler selects a process from the ready queue, and allocates the CPU to it.
    • The records in the ready queue are generally the process control blocks (PCBs) of the processes.
            6.1.3  Preemptive Scheduling
                     
    • Circumstances that scheduling may take place
      • A process switches from the running state to the waiting state (e.g., waiting for I/O, or invoking wait for the termination of a child process)
      • A process switches from the running state to the ready state (e.g., an interrupt occurs)
      • A process switches from the waiting state to the ready state (e.g., I/O completion)
      • A process terminates
    • Non-preemptive scheduling: scheduling occurs only when a process voluntarily enters the wait state (case 1) or terminates (case 4).
      • Simple, but very inefficient.
      • It is the only method that can be used on certain hardware platforms, because it does not require the special hardware (for example, a timer) needed for preemptive scheduling.
    • Preemptive scheduling: scheduling occurs in all possible cases.
      • What if the kernel is in its critical section modifying some important data? Mutual exclusion may be violated.
      • The kernel must pay special attention to this situation and, hence, is more complex.
            6.1.4  Dispatcher
    • The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.
    • The dispatcher is the last step in scheduling. It
      • Switches context
      • Switches to user mode
      • Branches to the stored program counter to resume the program’s execution.
    • It has to be very fast as it is used in every context switch.
    • Dispatch latency: the time it takes the dispatcher to stop one process and start another running.
                     
6.2    Scheduling Criteria
  • CPU Utilization
    • We want to keep the CPU as busy as possible.
    • CPU utilization ranges from 0 to 100 percent.
    • Normally 40% is lightly loaded and 90% or higher is heavily loaded.
    • You can bring up a CPU usage meter to see CPU utilization on your system. Or, you can use the top command.
  • Throughput
    • The number of processes completed per time unit is called throughput.
    • Higher throughput means more jobs get done.
    • However, for long processes, this rate may be one job per hour, and, for short (student) jobs, this rate may be 10 per minute.
  • Turnaround Time
    • The interval from job submission to job completion is the turnaround time.
    • From a user’s point of view, turnaround time is more important than CPU utilization and throughput.
    • Turnaround time is the sum of
      • Waiting time before entering the system (memory)
      • Waiting time in the ready queue
      • Waiting time in all other events (e.g., I/O)
      • Time the process actually running on the CPU
  • Waiting Time
    • Waiting time is the sum of the periods that a process spends waiting in the ready queue.
    • Why only ready queue?
      • CPU scheduling algorithms do not affect the amount of time during which a process is waiting for I/O and other events.
      • However, CPU scheduling algorithms do affect the time that a process stays in the ready queue.
  • Response Time
    • The time from the submission of a request (in an interactive system) to the first response is called response time. It does not include the time that it takes to output the response.
    • For example, in front of your workstation, you perhaps care more about the time between hitting the Return key and getting your first output than the time from hitting the Return key to the completion of your program (e.g., turnaround time)
  • What are the goals?
    • In general, the main goal is to maximize CPU utilization and throughput and minimize turnaround time, waiting time and response time.
    • In some systems (e.g., batch systems), maximizing CPU utilization and throughput is more important, while in other systems (e.g., interactive) minimizing response time is paramount.
    • Sometimes we want to make sure some jobs must have guaranteed completion before certain time.
    • Other systems may want to minimize the variance of the response time.
6.3    Scheduling Algorithms
            6.3.1  First-Come, First-Served Scheduling (FCFS)
    • The process that requests the CPU first is allocated the CPU first.
    • This can easily be implemented using a queue.
    • FCFS is not preemptive. Once a process has the CPU, it will occupy the CPU until the process completes or voluntarily enters the wait state.
    • It is easy to get the convoy effect: all the processes wait for one big process to get off the CPU, so CPU utilization may be low. Consider a CPU-bound process running with many I/O-bound processes.
    • It favors long processes and may be unfair to short ones. What if your 1-minute job is behind a 10-hour job?
    • It is troublesome for time-sharing systems, where each user needs to get a share of the CPU at regular intervals.
                     
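The convoy effect can be seen with a quick calculation. The burst times below are hypothetical; under FCFS each process waits for the total burst time of everything ahead of it in the queue:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS: each process waits
    for the total burst time of everything ahead of it in the queue."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# One long CPU-bound job at the head of the queue drags everyone down:
print(fcfs_waiting_times([24, 3, 3]))  # [0, 24, 27]  (average 17)
# The same jobs with the short ones first:
print(fcfs_waiting_times([3, 3, 24]))  # [0, 3, 6]    (average 3)
```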
            6.3.2  Shortest-Job-First Scheduling (SJF, shortest next CPU burst)
    • Each process in the ready queue is associated with the length of its next CPU burst.
    • When a process must be selected from the ready queue, the process with the smallest next CPU burst is selected.
    • Thus, the processes in the ready queue are kept sorted by the length of their next CPU burst.
    • SJF can be non-preemptive or preemptive.
    • Non-preemptive SJF: Example
                     
    • Preemptive SJF: Example
                     
    • Non-preemptive SJF:
                     
    • SJF is provably optimal!
      • Moving a short job ahead of a longer one decreases the short job’s waiting time more than it increases the long job’s, so the average waiting time decreases.
      • Once the jobs are sorted by burst length, no further swap can reduce the average waiting time.
                                 
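The optimality claim can be checked by brute force: among all orderings of a set of burst lengths (the values below are hypothetical), the shortest-first order minimizes the average waiting time. A minimal sketch:

```python
from itertools import permutations

def avg_wait(bursts):
    """Average waiting time when the bursts run back-to-back in the given order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return sum(waits) / len(waits)

bursts = [6, 8, 7, 3]  # hypothetical next-CPU-burst lengths
best = min(permutations(bursts), key=avg_wait)
assert list(best) == sorted(bursts)  # shortest-first minimizes average waiting time
print(best, avg_wait(best))          # (3, 6, 7, 8) 7.0
```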
    • How do we know the next CPU burst?
      • Without a good answer to this question, SJF cannot be used for CPU scheduling.
      • So, we try to predict it?
      • Let t(n) be the length of the nth CPU burst and p(n+1) be the prediction of the next CPU burst:
        p(n+1) = a * t(n) + (1 - a) * p(n)
        where a is a weight value in [0, 1]
      • If a = 0, then p(n+1) = p(n) and recent history has no effect. If a = 1, then p(n+1) = t(n) and only the last burst matters. If a = 1/2, the last actual burst and the past prediction are equally weighted.
    • Estimating the next burst: Example
      • Initially, we have to guess the value of p1 because we have no history value.
      • The following is an example with a = 1/2.
                                 
    • It is difficult to estimate the next burst time value accurately.
    • SJF favors short jobs. As a result, some long jobs may never get a chance to run at all. This is starvation.
    • The preemptive version is usually referred to as shortest-remaining-time-first scheduling, because scheduling is based on the “remaining time” of a process.
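The exponential-average prediction above can be sketched in a few lines. The initial guess of 10 and the observed burst values are hypothetical, chosen to match the a = 1/2 example:

```python
def predict_bursts(alpha, initial_guess, actual_bursts):
    """Exponential averaging: p(n+1) = alpha * t(n) + (1 - alpha) * p(n)."""
    predictions = [initial_guess]
    for t in actual_bursts:
        predictions.append(alpha * t + (1 - alpha) * predictions[-1])
    return predictions

# a = 1/2, initial guess p1 = 10, then observed bursts 6, 4, 6, 4:
print(predict_bursts(0.5, 10, [6, 4, 6, 4]))  # [10, 8.0, 6.0, 6.0, 5.0]
```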
            6.3.3  Priority Scheduling
    • Each process has a priority.
    • Priority may be determined internally or externally:
      • Internal priority: determined by time limits, memory requirement, # of files, and so on.
      • External priority: not controlled by the OS (e.g., importance of the process)
    • The scheduler always picks the process (in ready queue) with the highest priority to run.
    • FCFS and SJF are special cases of priority scheduling.
    • Priority scheduling can be non-preemptive or preemptive.
    • With preemptive priority scheduling, if the newly arrived process has a higher priority than the running one, the latter is preempted.
    • Indefinite block (or starvation) may occur: a low priority process may never have a chance to run.
    • Aging is a technique to overcome the starvation problem.
    • Aging: gradually increases the priority of processes that wait in the system for a long time.
    • Example: If 0 is the highest (resp., lowest) priority, then we could decrease (resp., increase) the priority of a waiting process by 1 every fixed period (e.g., every minute)
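A minimal sketch of the aging rule described above, assuming 0 is the highest priority and one aging step runs every fixed period; the process names and priority values are hypothetical:

```python
def age_priorities(waiting, decrement=1, highest=0):
    """One aging step: decrease the priority value of every waiting process
    (0 is the highest priority), so long waiters eventually get to run."""
    return {pid: max(highest, prio - decrement) for pid, prio in waiting.items()}

queue = {"P1": 127, "P2": 5}
for _ in range(10):           # ten aging periods (e.g., ten minutes)
    queue = age_priorities(queue)
print(queue)                  # {'P1': 117, 'P2': 0}
```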
            6.3.4  Round-Robin Scheduling (RR)
    • RR is similar to FCFS, except that each process is assigned a time quantum.
    • The ready queue is maintained as a FIFO list.
    • When the CPU is free, the scheduler picks the first and lets it run one time quantum.
    • If that process finishes its CPU burst in less than one time quantum, it releases the CPU voluntarily.
    • Otherwise, when the time quantum expires, the scheduler preempts the process and moves it to the tail of the list.
    • If time quantum is too large, RR reduces to FCFS.
    • If time quantum is too small, RR becomes processor sharing.
    • Context switching may affect RR’s performance: a shorter time quantum means more context switches.
    • Turnaround time also depends on the size of time quantum.
    • In general, 80% of the CPU bursts should be shorter than the time quantum.
                     
                     
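A small simulation of RR, assuming all processes arrive at time 0. The burst times and quantum are hypothetical; note that a very large quantum reproduces FCFS waiting times, as noted above:

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Simulate round-robin, assuming all processes arrive at time 0.
    Returns the waiting time of each process."""
    ready = deque(enumerate(bursts))              # (pid, remaining burst time)
    finish = [0] * len(bursts)
    clock = 0
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            ready.append((pid, remaining - run))  # quantum expired: back to the tail
        else:
            finish[pid] = clock                   # burst finished within the quantum
    # waiting time = completion time - burst time (arrival is time 0)
    return [finish[i] - bursts[i] for i in range(len(bursts))]

print(rr_waiting_times([24, 3, 3], quantum=4))    # [6, 4, 7]
print(rr_waiting_times([24, 3, 3], quantum=100))  # [0, 24, 27], same as FCFS
```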
            6.3.5  Multilevel Queue Scheduling
    • A multilevel queue scheduling algorithm partitions the ready queue into a number of separate queues (e.g., foreground (interactive) and background (batch)).
    • Each process is assigned permanently to one queue based on some properties of the process (e.g., memory usage, priority, process type).
    • There must also be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling or a time-slice algorithm.
    • Each queue has its own scheduling algorithm (e.g., RR for foreground and FCFS for background).
    • A priority is assigned to each queue. A higher priority process may preempt a lower priority process.
                     
    • A process P can run only if all queues above the queue that contains P are empty.
    • When a process is running and a process in a higher priority queue comes in, the running process is preempted.
            6.3.6  Multilevel Feedback Queue Scheduling
    • Multilevel queue with feedback scheduling is similar to multilevel queue; however, it allows processes to move between queues.
    • The idea is to separate processes with different CPU-burst characteristics. If a process uses more (resp., less) CPU time, it is moved to a queue of lower (resp., higher) priority.
    • As a result, I/O-bound (resp., CPU-bound) processes will be in higher (resp., lower) priority queues.
                     
    • Processes in queue i have time quantum 2^i.
    • When a process’ behavior changes, it may be placed (i.e., promoted or demoted) into a different queue.
    • Thus, when an I/O-bound (resp., CPU-bound) process starts to use more CPU (resp., more I/O), it may be demoted (resp., promoted) to a lower (resp., higher) queue.
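The feedback rule can be sketched as a single function. The three-queue setup and the promote-on-early-yield rule are illustrative assumptions, not the only possible policy:

```python
def reschedule(level, used, quantum, num_queues=3):
    """Feedback rule sketch: a process that consumed its whole quantum is
    demoted one level (lower priority); a process that gave up the CPU
    early is promoted one level (higher priority)."""
    if used >= quantum:
        return min(level + 1, num_queues - 1)   # CPU-bound behavior: demote
    return max(level - 1, 0)                    # I/O-bound behavior: promote

print(reschedule(level=0, used=8, quantum=8))   # 1: consumed the whole quantum
print(reschedule(level=2, used=3, quantum=32))  # 1: yielded early, promoted
```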
6.4    Multiple-Processor Scheduling
  • Common ready queue: all processes go into common ready queue and are scheduled onto any available processor
6.5    Real-Time scheduling
  • Hard real-time: critical tasks must be completed within a guaranteed amount of time
    • The scheduler either admits a process and guarantees that the process will complete on time, or rejects the request (resource reservation).
    • This is almost impossible if the system has secondary storage and virtual memory because these subsystems can cause unavoidable delay.
    • Hard real-time systems usually have special software running on special hardware.
  • Soft real-time: critical tasks receive higher priority over other processes
    • It is easily doable within a general system
    • It could cause long delay (starvation) for non-critical tasks.
    • The CPU scheduler must ensure that aging never gives a non-critical task higher priority than a critical task; otherwise, critical tasks could end up with lower priority.
    • Aging can be applied to non-critical tasks.
    • The system must have priority scheduling, and real-time processes must have the highest priority. The priority of real-time processes must not degrade over time, even though the priority of non-real-time processes may.
    • The dispatch latency must be small.
  • Reduce dispatch latency
    • Many systems wait for a system call to complete, or for an I/O operation to finish, before performing a context switch. This can cause a long delay.
    • One way to overcome this problem is to add preemption points in long-duration system calls. At each preemption point, the kernel checks whether a high-priority process is waiting to run.
    • If one is, the running process is preempted in favor of the high-priority process.
    • Dispatch latency can still be high, because only a few preemption points can be added. A better solution is to make the whole kernel preemptible.
    • What if a high-priority process needs to access data that is currently being accessed by a low-priority process? The high-priority process is blocked by the low-priority process. This is priority inversion.
    • Priority inversion can be solved with priority-inheritance protocol
      • All processes accessing the data, including the one currently holding it, inherit the high priority until they are done with the resource.
      • When they finish, their priorities revert to their original values.
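A minimal sketch of priority inheritance, assuming smaller numbers mean higher priority. The Process and Resource classes are hypothetical, not a real kernel API:

```python
class Process:
    def __init__(self, name, priority):   # smaller value = higher priority
        self.name = name
        self.priority = priority
        self.base_priority = priority

class Resource:
    def __init__(self):
        self.holder = None

    def acquire(self, process):
        if self.holder is None:
            self.holder = process
        else:
            # The waiter blocks; the holder inherits the waiter's priority
            # if it is higher (numerically smaller).
            self.holder.priority = min(self.holder.priority, process.priority)

    def release(self):
        self.holder.priority = self.holder.base_priority  # revert on release
        self.holder = None

low, high = Process("low", 120), Process("high", 10)
r = Resource()
r.acquire(low)
r.acquire(high)       # high blocks; low inherits priority 10
print(low.priority)   # 10
r.release()
print(low.priority)   # 120
```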
6.6    Algorithm Evaluation
            6.6.1  Deterministic Modeling
    • Analytic evaluation uses the given algorithm and the system workload to produce a formula or number that evaluates the performance of the algorithm for that workload.
    • Deterministic modeling takes a particular predetermined workload and defines the performance of each algorithm for that workload.
            6.6.2  Queuing Models
    • Little’s formula: n = λ × W, where n is the average queue length, λ is the average arrival rate, and W is the average waiting time in the queue. It holds because, in a steady state, the number of processes leaving the queue must equal the number of processes that arrive.
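As a worked example of Little's formula, with hypothetical rates:

```python
def littles_formula(arrival_rate, avg_wait):
    """n = lambda * W: average number of processes in the queue in steady state."""
    return arrival_rate * avg_wait

# If 7 processes arrive per second and each waits 2 seconds on average,
# the queue holds 7 * 2 = 14 processes in steady state:
print(littles_formula(7, 2))  # 14
```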
            6.6.3  Simulations
            6.6.4  Implementation
6.7    Process Scheduling Models
  • One distinction between user-level and kernel-level threads lies in how they are scheduled.
  • The thread library schedules user-level threads to run on an available LWP, a scheme known as process local scheduling, in which thread scheduling is done local to the application.
  • The kernel uses system global scheduling to decide which kernel thread to schedule.
            6.7.1  An Example: Solaris 2
    • Four classes of scheduling: real time, system, time sharing, and interactive.
    • Interactive processes typically have a higher priority, CPU-bound processes a lower priority.
            6.7.2  An Example: Windows 2000
    • A thread selected to run by the dispatcher will run until it is preempted by a higher-priority thread, until it terminates, until its time quantum ends, or until it calls a blocking system call, such as for I/O.
            6.7.3  An Example: Linux
    • Linux provides two separate process-scheduling algorithms. One is a time-sharing algorithm for fair preemptive scheduling among multiple processes; the other is designed for real-time tasks where absolute priorities are more important than fairness.
    • Linux allows only processes running in user mode to be preempted.
    • Part of every process’ identity is a scheduling class, which defines which of these algorithms to apply to the class. The scheduling classes used by Linux are defined in the POSIX standard’s extensions for real-time computing (POSIX.4, now known as POSIX.1b).
    • For time-shared processes, Linux uses a prioritized, credit-based algorithm.
    • Linux implements the two real-time scheduling classes required by POSIX.1b: FCFS, and RR.
    • Linux kernel code can never be preempted by user-mode code.





    Posted by nix on 痞客邦 (PIXNET)