Virtualization
Anyone paying attention to the usual sources knows that virtualization has become the biggest thing since virtual memory, before which multiple programs on your computer used to stomp all over each other. One thing I don't understand is: why virtualization?
Here are some of the benefits commonly cited for virtualization:
- Server consolidation (moving applications that used to run on several physical machines onto one box). This is probably the single biggest driver, but it assumes that you can't run more than one serious application on an OS instance. While that may be true for some (even many) platform/app combinations, it's not an inherent problem for an OS. The simpler, more performant, and more elegant solution is to build better OS abstractions and better systems for managing installed software, so that this isn't a problem in the first place.
- Sandboxing applications (for isolating faults and for security). How is it not the job of the OS to keep applications from stomping on each other's toes, intentionally or otherwise?
- Resource management (limits and guarantees). This is supposed to be one of the OS's main jobs! In fact, it has been doing it for decades (see the sketch just after this list).
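To underline that last point, here's a minimal sketch using the standard setrlimit(2) interface to cap a process's own address space. The 256 MB figure is arbitrary, chosen purely for illustration; real resource management would use the platform's richer facilities (projects and resource controls on Solaris, cgroups-style mechanisms elsewhere), but the point is that the primitive is already in the OS:

```c
/* Minimal sketch: the OS already enforces resource limits.
 * Cap this process's address space at 256 MB, then show that
 * an oversized allocation fails. Illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void) {
    /* rlim_cur (soft) and rlim_max (hard), both 256 MB. */
    struct rlimit rl = { 256UL * 1024 * 1024, 256UL * 1024 * 1024 };
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* A 512 MB allocation now exceeds the address-space cap. */
    void *p = malloc(512UL * 1024 * 1024);
    printf("512 MB malloc %s\n",
           p ? "succeeded" : "failed (limit enforced by the OS)");
    free(p);
    return 0;
}
```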
But why do these things in the OS? For the same reason we put most other facilities in the monolithic kernel: the OS is already a mature environment. We already have tools like ps(1) to list processes, debuggers like mdb to debug the kernel, and so on. People have had to build a separate infrastructure for managing and debugging VMs and VMMs, and it hasn't been easy. Moreover, in most deployments today the guest OS isn't even aware that it's not running on the bare metal; if it were (as with paravirtualization), it could make much better performance decisions. And if there weren't such a hard separation, there would be real opportunity to innovate in what is now the hypervisor/OS interface.
At least there's reasonable support for the idea of virtualization technologies in the OS. Solaris 10 introduced Zones (the linked article is skeptical, but technically sound), which behave something like very lightweight virtualization containers. And Linux is adding the Kernel-based Virtual Machine (KVM), a more complete virtualization capability built directly into the kernel.
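To make "built directly into the kernel" concrete, here's a minimal sketch of KVM's user-facing interface: the hypervisor is just a kernel driver behind /dev/kvm, and a VM is just another file descriptor. This assumes a Linux system with KVM enabled, and it omits all the real work of setting up guest memory and vCPUs:

```c
/* Minimal sketch: KVM lives in the kernel, exposed as a device
 * node, so talking to the "hypervisor" is ordinary ioctl(2)
 * calls on file descriptors. Illustrative only. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    int kvm = open("/dev/kvm", O_RDWR);
    if (kvm < 0) {
        perror("open /dev/kvm");
        return 1;
    }

    /* The stable KVM API reports a fixed version number. */
    int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
    printf("KVM API version: %d\n", version);

    /* Each VM is itself just a file descriptor, so the usual
     * OS machinery (procfs, debuggers, etc.) can see it. */
    int vm = ioctl(kvm, KVM_CREATE_VM, 0);
    if (vm < 0)
        perror("KVM_CREATE_VM");
    else
        close(vm);

    close(kvm);
    return 0;
}
```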