ASSUMPTIONS AND REQUIREMENTS

This paper is based on work performed as part of a larger effort[6, 7] to implement a distributed operating system for a multi-processor system. In this system, each processor executes solely from its own private memory. A process is created in the private memory of a particular processor and is executed by that processor only. In other words, processes do not migrate, and the only scheduling decision that is global to all processors is deciding on which processor a new process should be created. The latter aspect of scheduling will not be considered explicitly in this paper.

The implementations of some other multi-processor operating systems have taken the approach that a given processor should execute at most one process at a time, thus avoiding the need for a multi-programming mechanism. We felt, however, that this would be too restrictive and that it should be possible to run more processes in the system than there are physical processors to run them.

Nevertheless, a major aim of the system was to achieve ``main-frame'' performance by using as many microprocessors as may be necessary - even if most of the processors are idle most of the time. The multi-programming mechanism described in this paper is therefore based on the assumption that an individual processor will be running tens of processes, rather than thousands, at any given time. It is further assumed that an individual processor will not usually be overloaded. These assumptions are in marked contrast with those of many older multi-programming mechanisms, and have led to a much simpler mechanism.

The operating system design requires that each process be able to specify a desired ``minimum rate of execution''. An implementation must ensure that each process progresses at no less than its minimum rate or, if the processor is overloaded, at as close to its minimum rate as possible.

Over and above allowing a minimum rate of execution to be prescribed to the scheduler, the design requires the implementation to distinguish between two classes of processes: real-time and ordinary. The distinction is simply that no ordinary process may execute while a real-time process is ready to execute. This allows the designer of a real-time system to ignore any ``non real-time'' processes when calculating whether a given hardware configuration has enough capacity to guarantee the required minimum rates of execution (and hence maximum response times).
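As a rough illustration (not taken from the paper), this distinction could be realised by a dispatcher that consults an ordinary ready queue only when the real-time ready queue is empty. The queue and routine names below are hypothetical; the paper's actual dispatch routine, Execute_next, is discussed later.

    #include <stddef.h>

    struct process;                            /* process descriptor; details elided */

    struct ready_queue {
        struct process *head;                  /* first ready process, NULL if empty */
    };

    static struct ready_queue realtime_ready;  /* ready real-time processes */
    static struct ready_queue ordinary_ready;  /* ready ordinary processes  */

    /* Pick the next process to run: an ordinary process is considered only
     * when no real-time process is ready to execute. */
    static struct process *pick_next(void)
    {
        if (realtime_ready.head != NULL)
            return realtime_ready.head;
        return ordinary_ready.head;            /* may be NULL: the processor idles */
    }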

The ``minimum rate of execution'' is defined as follows: let R and C be numbers on an absolute scale of execution rates, where R represents the desired minimum rate of execution of a process and C represents the maximum rate of execution possible on the processor executing the process. The process should then receive at least R/C of the total processor capacity for as long as it remains ready to execute.

For example, if an implementation chooses MIPS as the absolute scale on which to measure execution rates, and if C = 5 for some processor and R = 2 for some process running on that processor, then the processor has to expend at least 40% of its 5 MIPS capacity on the process in order to let the process progress at its required rate of 2 MIPS. The remainder of the processor's capacity can be used to execute other processes concurrently.
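The few lines of C below merely restate this arithmetic; the variable names and the MIPS figures are the example values above, not part of any actual interface.

    #include <stdio.h>

    int main(void)
    {
        double R = 2.0;               /* desired minimum rate of the process, in MIPS */
        double C = 5.0;               /* maximum rate of its processor, in MIPS       */
        double share = R / C;         /* fraction of the processor owed to the process */

        printf("required share of capacity: %.0f%%\n", share * 100.0);         /* 40% */
        printf("capacity left for others:   %.0f%%\n", (1.0 - share) * 100.0); /* 60% */
        return 0;
    }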

Since all processes must progress ``simultaneously'', it is necessary to put an upper limit on how long a ready process can be denied the processor. The operating system design requires that a process receive at least R/C of the available processor capacity every 100 milliseconds, for as long as it remains ready to execute. In other words, after the process becomes ready, the processor should spend at least (R/C) x 100 milliseconds of the next 100 milliseconds executing the process, and likewise for each subsequent 100-millisecond period for as long as the process remains ready.
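The sketch below shows the per-period arithmetic under the stated assumptions; the constant PERIOD_MS and the routine slice_ms are names introduced here for illustration only.

    #include <stdio.h>

    #define PERIOD_MS 100.0           /* length of one scheduling period, in ms */

    /* Processor time owed to a ready process in each period, given its
     * required rate R and the capacity C of its processor (both measured
     * on the same absolute scale, e.g. MIPS). */
    static double slice_ms(double R, double C)
    {
        return (R / C) * PERIOD_MS;
    }

    int main(void)
    {
        /* Using the earlier example: R = 2 MIPS, C = 5 MIPS. */
        printf("slice per 100 ms period: %.0f ms\n", slice_ms(2.0, 5.0));  /* 40 ms */
        return 0;
    }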

The choice of 100 milliseconds is somewhat arbitrary and can be changed without invalidating the ``rate of execution'' idea. Basically, 100 milliseconds was chosen because it was felt that executing every ready process once every 100 milliseconds would result in individual slices that are large enough to do useful work in, while still allowing interactive processes at least 10 processor slices per second. (Real-time processes are allowed to override the scheduler when response times of much less than 100 milliseconds must be guaranteed. See the discussion of the routine Execute_next.)

Another assumption that sets this multi-programming mechanism apart from many older ones is that virtual memory is no longer necessary, since real memories are now larger than virtual memories used to be. Not supporting virtual memory has significantly simplified the entire operating system implementation effort, including the multi-programming mechanism.

The multi-programming mechanism is simplified because it can assume that processes are in memory all the time and that there is no need to cater for page faults. Furthermore, there is no need to take paging behaviour into account when making scheduling decisions.

