Enterprise Computer Part 3: Virtualisation

What is Virtualisation?

Traditionally, an operating system ran on its own piece of hardware. In days of old, resources like CPU power, RAM and disk space were scarce, and wasting them on multiple OSes on one machine was not very smart.

Nowadays we have terabytes of disk space, gigabytes of RAM and multi-core processors. The idea of running two or more OSes on one physical server isn't such a difficult thing.

More than one OS on a server? Why bother?

For plenty of reasons. Security is a big one. Running multiple services on a single physical OS means you have multiple entry points into your server. You're no longer relying on just the server's OS as your security system: each and every running service becomes a point of vulnerability. Locking each of these services into its own virtualised OS means that, while the point of vulnerability remains, someone who breaks into your website, for instance, still can't reach your finance database; it lives on a separate (virtual) server, with at least one more layer of security between it and the nasty person breaking in.

Another is rapid recovery after disaster. It's nice to have a server, but what happens when that server breaks? Traditional backups only get you so far: there's the time taken to restore the backup to the old hardware (assuming it's not the hardware that broke), the risk of losing data created between the last backup and the failure, or the woes of restoring to an entirely new piece of hardware, which can mean driver and compatibility issues depending on your OS.

Virtualised servers mean that your server is nothing more than an image (think of an .ISO file, which is an image of a CD, except now you have a hard disk image). Moving an image around isn't difficult. "Backups" mean pausing your virtual server, copying the disk image across the network to another box, and unpausing. If the first physical server breaks, merely turn the image on at the other location. All settings and virtualised hardware remain the same, and the machine magically turns on as if nothing went wrong. Then you can take all the time in the world to fix the broken physical server, and copy any vital information off the image.
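
As a concrete sketch, on a Xen host with a file-backed guest disk the whole "backup" might look like this (the domain name and paths here are just examples):

    xm pause webserver                                # freeze the running VM
    scp /srv/xen/webserver/disk.img backuphost:/srv/xen/webserver/
    xm unpause webserver                              # resume it as if nothing happened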

Redundancy becomes a nice side effect of the above. It's possible to have entire servers migrate to other physical locations in mere seconds, meaning downtime is greatly reduced. Likewise, if you need to upgrade physical hardware, you can migrate a virtual server to another box and the end users never know the difference.
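
With Xen, for instance, a live migration is a one-liner (a sketch; it assumes both hosts run Xen with relocation enabled and share access to the guest's disk):

    xm migrate --live webserver otherhost    # move the running VM to "otherhost"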

Cost saving is another. Virtualised servers don't each need their own switches, network cards, cabling, fibre optic HBAs, RAID controllers and so on: they all share the host's hardware.

Disadvantages

Performance: if your workload is very strenuous, particularly when it comes to I/O (say, VERY large databases), virtualisation might not be the preferred option. Where I currently work, we keep all of our application and management systems on virtualised hosts (the bulk of our servers in pure number of installs), and our databases and data warehouse on physical machines to keep performance high (only a few servers, but they are very important and need to be running at maximum throughput 24/7).

Software licensing: licensing can be a downer for virtualisation. Linux folk don't have to worry, of course, but Microsoft want to charge you per machine. Have 10 virtualised Windows servers on a single Linux host, and Microsoft will demand payment for 10 licenses. Bummer.

Types of virtual machines

There are two popular types of virtual machines. The first is the traditional hardware emulator/simulator. This builds a virtual CPU that the guest OS talks to, which then translates instructions back to the host (real-world) CPU. This is a nice thing to have if you are doing low-level operating system or driver development, because you can do tricky things like slow down or even pause your entire virtual system. On the flipside, it's slow for running big apps and services.

The second is a newer technology called a "hypervisor". A hypervisor is a virtual machine supervisor which allows VMs to access hardware more directly. Think of it like your virtual OSes "partitioning up" resources like CPU, I/O (for access to network cards, disks, etc) and RAM. Hypervisors are much better suited to VMs where you need "bare metal" speed from your machine. The downside is that with a hypervisor, both OSes must directly support your architecture: you can't run Windows under a hypervisor on a non-i386-based processor, because the parent hardware must be hardware that Windows itself could run on.

Virtualisation compatible hardware

For the "emulated" type of virtualisation, no special hardware is needed. Everyone here has probably used MAME, which is a classic example of emulated hardware. Likewise, the early releases of VMWare and VirtualPC were the same: no special hardware was needed.

Hypervisor technology can run on standard hardware, but benefits greatly from machines that support virtualisation at a hardware level. This is more to do with the guest machine than the host (e.g. Linux, being open source, has Xen hypervisor support built right into the kernel, so Linux on Linux using Xen is fine; Windows doesn't support Xen yet, so Windows on Linux using Xen needs hardware support).

In large industry (particularly big UNIX makers like HP, IBM, etc), this sort of thing has been happening for a while. On the home and small business front, this technology is finally available on low-end hardware!

Intel and AMD both have virtualisation extensions available. AMD have "AMD-V", found on Socket AM2 (Athlon 64 and Athlon FX) and Socket F (Opteron) platforms, and Intel have Intel VT (also IVT or VT-x, formerly codenamed "Vanderpool"), which is supported on certain boards. Check the board's specs for VT-x support, or check the CPU flags (with a CPUID tool like CPU-Z, or in /proc/cpuinfo) for "vmx" (Intel) or "svm" (AMD). Most Core 2 Duo-era Pentium and Xeon systems will have this.
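
On Linux, the check is a one-liner (no output means the extensions are absent, or disabled in the BIOS):

    # "vmx" = Intel VT, "svm" = AMD-V
    grep -E 'vmx|svm' /proc/cpuinfo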

If you want to investigate virtualisation in detail, buying one of these platforms is highly recommended (as is getting buckets of RAM).

The Software

Xen http://www.xensource.com/

This is my pick of the bunch. Xen kernels are available for a lot of Linux distros, and the new commercial RedHat and SuSE versions come with "out of the box" support for Xen. RedHat's system is excellent: you pay for a single license of RedHat Enterprise Linux (RHEL) and you may install an UNLIMITED number of Xen virtual machines on a single box! Anyone who's needed test machines for network service testing, or an insta-cluster, will love this sort of thing.

Debian and Ubuntu both have Xen kernels supplied in the repos. Ubuntu have an easy Xen guide here: https://help.ubuntu.com/community/Xe...enOnUbuntuEdgy

I use this method on many servers to build dozens of test and rollout servers for clients. Very easy stuff, and once you've got the building process down pat (sketched below), pushing out working VMs takes mere minutes. (It took me literally 10 minutes the other day to build a working webserver for a client from nothing: complete VM, OS and web server software from scratch, working and secure.)
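
On Debian/Ubuntu that build process is essentially the xen-tools workflow. A minimal sketch, assuming the xen-tools package is installed (the hostname, addresses and sizes are just examples):

    # Build a guest image:
    xen-create-image --hostname=webserver01 --ip=192.168.1.50 \
        --dist=etch --size=4Gb --memory=256Mb --dir=/srv/xen

    # Boot it and attach to its console:
    xm create -c /etc/xen/webserver01.cfg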

Xen is entirely free (as in freedom). The free (as in cost) version is all command-line based, but if you want a nice and easy GUI, grab the commercial pay-for version.

VMWare http://www.vmware.com/

This is the name most people equate with virtualisation, probably because they were one of the first to do it at a desktop/consumer level. VMWare initially marketed their products at people who needed multiple platforms for testing and development, but these days have also jumped on the "disaster recovery" bandwagon.

VMWare has always seemed expensive to me, but the VMWare team have a lot of experience in this market, and from all accounts the commercial support is good.

Parallels http://www.parallels.com/

A newcomer to the market, Parallels runs on Windows, Mac OS X and Linux. It's a nice way to run Linux on your Windows or Mac computer without needing to reboot, or a commercial alternative to Xen/VMWare on Linux. I know a few folks who set up fast user switching (or dual monitors) in Windows or Mac OS X, and keep a Parallels session running another OS alongside. On the Mac, Parallels can even boot Windows from an existing Apple Boot Camp partition, so you don't need to maintain two separate Windows installs.

QEmu http://fabrice.bellard.free.fr/qemu/

This is an odd one: it lies somewhere halfway between hypervisor and emulator. Totally free (as in freedom), it's a much easier way to get Windows working on Linux. Currently where I work we have a nice big 8-processor Xeon box running Linux, with all of our Linux virtual machines on Xen and another four Windows 2000 Server instances on QEmu. One plus to QEmu is that, for us, Windows runs FASTER under it than on native hardware! Windows 2000 servers boot and get to logon in under 30 seconds - something that previously took close to 3-4 minutes. I'm yet to understand why the speed increase is so dramatic, but in the meantime we love it, and have vowed never to run Windows on native hardware again.

We've replaced a server room full of machines with 2 hot-swap redundant super-servers running Linux, Xen, QEmu and a mix of Linux and Windows VMs. As before, our backups are now dead simple, and hot-swapping entire machines in the event of hardware failure is a breeze. The cost and maintenance savings are simply enormous, and it means we can get on with preventative maintenance and network improvement rather than running around wiping the arses of dozens of physical machines.
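
For reference, getting a Windows 2000 guest going under QEmu takes only a couple of commands (a sketch; file names and sizes are examples):

    qemu-img create win2k.img 8G                       # create a blank disk image
    qemu -hda win2k.img -cdrom win2k.iso -boot d -m 512 -localtime   # install from the CD image
    qemu -hda win2k.img -m 512 -localtime              # afterwards, boot the installed system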

Some more Virtual Machines:

KVM http://kvm.qumranet.com/

Stands for Linux "Kernel-based Virtual Machine". I personally think it's a silly name, as where I come from KVM means "Keyboard, Video, Mouse": a piece of hardware that lets you use one keyboard, monitor and mouse across multiple machines. Boo to people who use already-taken acronyms. But anyway...

KVM is the new hypervisor-based virtual machine being worked on directly by the Linux kernel team. So far it seems to be the fastest of the hypervisors according to benchmarks. I've not used it myself, but from what I read it works very closely with QEmu to provide bridged virtual ethernet adaptors and other interfaces into the kernel's IP (and other I/O) stacks. It's definitely the baby of the VM world, existing only since the Linux 2.6.20 kernel (which itself is only a couple of months old at the time of writing). While the big commercial distros like RedHat and SuSE are backing Xen, Ubuntu have announced that their latest release, "Feisty Fawn", will offer options to install both a KVM host and a KVM virtual machine straight off the install disk/ISO. That means simple point-and-click VM setup for users, which is always a good thing.
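
Getting started looks something like this (a sketch; KVM needs the hardware extensions discussed above, and the launcher binary may be "kvm" or "qemu-kvm" depending on your distro):

    modprobe kvm
    modprobe kvm-intel               # or kvm-amd on AMD-V hardware
    kvm -hda guest.img -m 512        # KVM reuses QEmu's userspace, so the flags look familiar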

VirtualBox http://www.virtualbox.org/

VirtualBox by InnoTek (make sure you put a cover letter on your TPS reports).

This has since been bought by Sun Microsystems and has progressed a great deal. It includes both a GUI and back-end/CLI management tools, simple management and mounting of virtual disks, and a wide range of networking options including NAT and bridged connections. Newer versions also include USB passthrough to the guest OS.

Drivers are included as "Guest Additions" for Windows, allowing better host-to-guest integration of your mouse, shared folders, 2D video acceleration, etc.

It also supports the hardware virtualisation extensions mentioned above (i.e. speedy).
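
As a taste of the CLI side, creating and booting a VM with VBoxManage looks roughly like this (a sketch; the names are examples and exact subcommands vary a little between VirtualBox versions):

    VBoxManage createvm --name testvm --register
    VBoxManage modifyvm testvm --memory 512 --nic1 nat
    VBoxManage createhd --filename testvm.vdi --size 8192       # size in MB
    VBoxManage storagectl testvm --name IDE --add ide
    VBoxManage storageattach testvm --storagectl IDE --port 0 --device 0 \
        --type hdd --medium testvm.vdi
    VBoxManage startvm testvm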
