One of the advantages of using IO in the manner described so far is that the operating system need not manage a pool of message buffers. Communication proceeds on a ``hand-shaked'' request-reply basis: whenever a process outputs a buffer, its communicating partner has already provided a corresponding input buffer. Moreover, the process doing the output knows the exact length of this buffer (out-len.out). So does the operating system, and hence it can ensure that a receiving process need not contend with more input than was asked for.
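For concreteness, the sketches in this section assume a C interface along the following lines. The name io, the type len_t, and the exact parameter layout are illustrative assumptions, not the system's actual declarations:

    #include <stddef.h>

    /* A value-result length parameter: the 'in' component is
       supplied by the caller, the 'out' component is filled in
       by the operating system on return.                        */
    typedef struct {
        size_t in;
        size_t out;
    } len_t;

    /* Hypothetical hand-shaked I/O primitive: send out_len->in
       bytes from out_buf and receive at most in_len->in bytes
       into in_buf. On return, out_len->out reports the length of
       the partner's input buffer, and in_len->out the length of
       the data actually received.                               */
    int io(int port, const void *out_buf, len_t *out_len,
           void *in_buf, len_t *in_len);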
Starting up a communication link so that there is always an input buffer available to hold an incoming message is somewhat tricky:
The solution is to treat the first I/O call on an I/O port differently from subsequent calls. Should the first I/O call not do any output (out-len.in = 0), it proceeds normally: the parameters are strung into a message which is sent off to the communicating partner. However, should the first I/O call involve output (out-len.in > 0), its initiation is delayed until the communicating partner successfully initiates its first I/O operation. That is, the parameters are sent off to the communicating partner only after a message is received from the partner. Moreover, this synchronization message is not returned to the caller in the form of the out parameters of IO, but is effectively discarded.
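Expressed in terms of the hypothetical interface above, the rule might look roughly as follows inside the operating system. Here port_state(), wait_for_message(), discard(), send_parameters() and await_reply() are invented kernel internals, declared only to keep the sketch self-contained:

    typedef enum { FRESH, STARTED } port_state_t;
    typedef struct message message_t;

    port_state_t port_state(int port);
    void         set_port_state(int port, port_state_t s);
    message_t   *wait_for_message(int port);
    void         discard(message_t *m);
    void         send_parameters(int port, const void *out_buf,
                                 len_t *out_len, len_t *in_len);
    int          await_reply(int port, void *in_buf, len_t *in_len,
                             len_t *out_len);

    int io(int port, const void *out_buf, len_t *out_len,
           void *in_buf, len_t *in_len)
    {
        if (port_state(port) == FRESH && out_len->in > 0) {
            /* First call involves output: hold our parameters
               back until the partner initiates its own first
               operation, then discard that synchronization
               message instead of returning it to the caller.  */
            discard(wait_for_message(port));
        }
        set_port_state(port, STARTED);

        /* From here on, every call behaves identically: string
           the parameters into a message, send it off, and wait
           for the partner's reply.                             */
        send_parameters(port, out_buf, out_len, in_len);
        return await_reply(port, in_buf, in_len, out_len);
    }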
The implication of this strategy is that a server process, which normally speaks only when spoken to, must initialize a port linked to a client by performing an input operation on it. The message resulting from this operation will allow the first output operation by the client to proceed. And since the server's first message is discarded, the client need not deal with an unexpected message from the server.
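Under the same assumptions, and building on the declarations above, the two sides of this convention might look like this in use; MAXREQ and the buffer handling are placeholders:

    enum { MAXREQ = 512 };   /* placeholder request-buffer size */

    /* Server side: prime the port with a pure input operation.
       The message this sends lets the client's first output
       proceed, and the call itself completes once that output
       (the first real request) arrives.                        */
    void server_init(int port, char req[MAXREQ])
    {
        len_t out = { 0, 0 };      /* no output: proceeds at once */
        len_t in  = { MAXREQ, 0 };
        io(port, NULL, &out, req, &in);  /* returns with request #1 */
    }

    /* Client side: the first operation involves output, so it
       is held back until the server's priming message arrives,
       which the operating system discards on the client's
       behalf.                                                  */
    int client_request(int port, const void *req, size_t req_len,
                       void *reply, size_t reply_max)
    {
        len_t out = { req_len, 0 };
        len_t in  = { reply_max, 0 };
        return io(port, req, &out, reply, &in);
    }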
Unfortunately, things are more complicated when the client chooses to start communicating with an input operation. If the message resulting from the first input operation on a port is always discarded, then the server will not receive the first request made by the client. But if the message is discarded only when received at a port whose first operation is an output operation, then the client will have to deal with an unsolicited message from the server.
The only way to avoid these complications is to insist that every communication link be initialized by one process performing an output operation and the other an input operation. In other words, a client must always perform an initial output operation.
If need be, the server will have to provide an additional dummy output operation to be used by the client for initializing the link. Note that a suitably defined subroutine library can make the whole procedure transparent to client programmers.
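A minimal sketch of such a library routine, still under the assumed interface; the no-op request code NULL_OP is an invented stand-in for whatever dummy operation the server actually offers:

    enum { NULL_OP = 0 };   /* hypothetical no-op in the server's
                               repertoire                          */

    /* Client-side library routine: opening a link always begins
       with a small output operation (a request for the server's
       dummy operation), so the "client outputs first" rule never
       shows through to client code, even when the client really
       wants to start by reading.                                 */
    int link_open(int port)
    {
        unsigned char op  = NULL_OP;
        unsigned char ack = 0;
        len_t out = { sizeof op, 0 };
        len_t in  = { sizeof ack, 0 };
        return io(port, &op, &out, &ack, &in);
    }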
It should also be noted that this difficulty arises chiefly from the need to have a single set of rules apply to all styles of communication, and to all I/O operations performed by user processes. Once communication has been initialized, however, this uniformity simplifies matters.