Whether or not to make virtual processors interruptible is a somewhat contentious issue. Anyone who has programmed an interruptible processor will recall at least one extremely hard-to-find error caused by subtle interactions between non-deterministic interrupts. None but the best and bravest (or most foolish) programmers will wager anything of substance that the final ``debugged'' system running on an interruptible processor will be completely free of errors.
The main argument for having interrupts is that they allow fast responses to external events and avoid the need to check repeatedly whether the event has occurred. However, these properties can be obtained in a non-interruptible system by packaging interrupts as arriving messages, by providing a process with suitable mechanisms for suspending its execution pending the arrival of a message, and by providing mechanisms that enable a process to respond quickly to the receipt of a message.
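To make this concrete, the following sketch (in C with POSIX threads, which merely stand in for the processes and messages of the system under discussion) shows an ``interrupt'' delivered as a message: a simulated device posts to a mailbox, and the receiving process suspends in msg_receive() until something arrives. The names mailbox, msg_send, msg_receive and device are illustrative only, not primitives of the system described here.

\begin{verbatim}
/* A minimal sketch of the "no interrupt" style: events arrive as
 * messages and the receiver blocks until one is present. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
    int             pending;   /* undelivered "interrupt" messages */
} mailbox;

static mailbox mb = { PTHREAD_MUTEX_INITIALIZER,
                      PTHREAD_COND_INITIALIZER, 0 };

/* Called when the "device" raises an event. */
static void msg_send(mailbox *m)
{
    pthread_mutex_lock(&m->lock);
    m->pending++;
    pthread_cond_signal(&m->nonempty);   /* wake a waiting receiver */
    pthread_mutex_unlock(&m->lock);
}

/* Suspends the caller until a message has arrived. */
static void msg_receive(mailbox *m)
{
    pthread_mutex_lock(&m->lock);
    while (m->pending == 0)
        pthread_cond_wait(&m->nonempty, &m->lock);
    m->pending--;
    pthread_mutex_unlock(&m->lock);
}

/* Stands in for the external event source. */
static void *device(void *arg)
{
    (void)arg;
    for (int i = 0; i < 3; i++) {
        sleep(1);
        msg_send(&mb);
    }
    return NULL;
}

int main(void)
{
    pthread_t dev;
    pthread_create(&dev, NULL, device, NULL);
    for (int i = 0; i < 3; i++) {
        msg_receive(&mb);        /* no polling: sleep until the event */
        printf("event %d handled\n", i);
    }
    pthread_join(dev, NULL);
    return 0;
}
\end{verbatim}

The receiving process never checks repeatedly for the event; it is simply not runnable until the message is posted, which is precisely the property that interrupts are usually claimed to provide.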
The only processes for which these ``no interrupt'' mechanisms are clearly less suitable than interrupt-based mechanisms are the device driver processes that must ultimately deal with the real interrupts on the real machine. These interrupts should preferably be dealt with as rapidly as the hardware allows, and the less software packaging is involved, the better. However, since it is reasonable to require that device driver processes be designated ``privileged and trusted'', it suffices to give such processes access to the real interrupts via a simple (but dangerous) mechanism involving no overhead for other processes.
As will be seen below, the programmer who really wants an ordinary process to be asynchronously interruptible can readily achieve this by means of a protocol between the process and its parent. Furthermore, both the mechanism itself and its overhead remain firmly under the programmer's control.
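As an illustration only (the protocol actually used is described below), the following sketch in C with POSIX threads and C11 atomics shows one plausible shape of such a parent--child arrangement: the parent requests an ``interrupt'' by setting a shared flag, standing in for a message, and the child tests that flag only at points of its own choosing. The names interrupt_requested and check_interrupt are hypothetical.

\begin{verbatim}
/* One plausible parent/child interrupt protocol, sketched with
 * POSIX threads; not necessarily the mechanism described below. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool interrupt_requested = false;

/* Inserted by the child wherever it is safe to be "interrupted". */
static void check_interrupt(void)
{
    if (atomic_exchange(&interrupt_requested, false))
        printf("child: handling the parent's interrupt\n");
}

static void *child(void *arg)
{
    (void)arg;
    for (int step = 0; step < 5; step++) {
        usleep(200 * 1000);      /* a unit of uninterruptible work */
        check_interrupt();       /* the only points where an
                                    "interrupt" can take effect */
    }
    return NULL;
}

int main(void)                   /* plays the role of the parent */
{
    pthread_t c;
    pthread_create(&c, NULL, child, NULL);
    usleep(500 * 1000);
    atomic_store(&interrupt_requested, true);  /* the "interrupt" */
    pthread_join(c, NULL);
    return 0;
}
\end{verbatim}

Because the child decides where the checks go, both the window of asynchrony and the cost of the checking are chosen by the programmer rather than imposed by the processor.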