
Windows vs Linux - Hardware support

There's a persistent misconception that Linux has poor hardware support. It's the sort of thing that gets perpetuated when long-time Windows users run Linux for the first time and can't find downloadable drivers for their hardware.

That's usually because there are no downloadable drivers.

What? No downloadable drivers? How the hell does Linux work, then? Good question.

Linux as a kernel is what's called a "monolithic kernel" (as opposed to microkernel designs like Mach, which the Hurd runs on). Linux takes an "everything and the kitchen sink" approach to making a computer run.

When a Linux kernel is built, it contains drivers for all known hardware. These can be compiled into the kernel directly (handy for embedded devices like phones, PDAs, PVRs, etc) or they can be compiled as individual modules. The latter is handy for distributions like Ubuntu, where the people building the distro don't know what hardware the end user will have.
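As a rough sketch of what that choice looks like, here are two real options from a kernel configuration file (your .config will have thousands of these; the two picked here are just illustrative):

    # In the kernel's .config: 'y' compiles the driver straight into the
    # kernel image, 'm' builds it as a loadable module.
    CONFIG_E1000=y     # Intel PRO/1000 NIC driver, built in
    CONFIG_R8169=m     # Realtek NIC driver, built as a module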

Distros like Ubuntu, RedHat, SuSE, Debian, etc use the "throw it all at the wall and see what sticks" approach. Each one uses roughly the same Linux kernel, and as the system boots it simply tries to load each and every driver. If a driver finds its corresponding hardware, the driver stays. If not, the driver unloads.
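You can watch the result of this for yourself on any modern distro. A minimal sketch, using nothing but standard commands:

    $ lsmod                 # list the driver modules that actually stuck
    $ lspci -k              # show each PCI device and the kernel driver bound to it
    $ sudo modprobe e1000   # load a driver by hand; harmless if the hardware isn't present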

Now this sounds long and painful, but it's not. On a low-end 1.5GHz desktop, you'd be lucky to see this section of the boot process take longer than 5-10 seconds.

The disadvantage is that if your hardware is not supported by the Linux kernel, finding a driver and adding it in requires a bit of Linux know-how. And if you upgrade your kernel later, it can break the driver support, and you'll have to go through the process again.
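Tools like DKMS take some of the sting out of this by rebuilding out-of-tree drivers automatically whenever the kernel changes. A sketch, assuming a hypothetical driver called "mydriver" with its source already under /usr/src/mydriver-1.0:

    $ sudo dkms add -m mydriver -v 1.0       # register the driver source with DKMS
    $ sudo dkms build -m mydriver -v 1.0     # compile it against the running kernel
    $ sudo dkms install -m mydriver -v 1.0   # install the module; DKMS redoes this on kernel upgrades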

The advantage, of course, is that Linux knows about tens of thousands of pieces of hardware. For most users running on common hardware, compatibility with Linux is a no-brainer. Simply install Linux, boot into a clean system, and everything JUST WORKS. No driver installs, no hardware conflicts, nothing. Less time screwing around in Device Manager, and more time doing real work.

Furthermore, installing new hardware is painless. Power down, install the new hardware, power up. Linux finds it, and loads a driver for it. Done.
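For hot-pluggable gear like USB you don't even need the power cycle. A quick sketch:

    $ dmesg | tail          # kernel log shows the new device being detected
    $ lsusb                 # list the USB devices the kernel now knows about
    $ lspci                 # same idea for PCI cards after a cold plug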

If you're still a fan of Windows' manual driver-loading process, try this experiment:

Take a working Windows system. Either desktop or server, it doesn't matter. Now power down cleanly, and remove the hard disk. Take that hard disk and put it in another machine with completely different hardware (different graphics card, different chipset, different brand of CPU). Power up, and see what happens. At best you'll spend 15-30 minutes reconfiguring devices, loading new drivers manually, etc. At worst, the system will bluescreen and you won't be able to use it. Reinstall time for you.

Do the same with a working Linux system, and it's a different story. Linux boots, detects the new hardware on the fly, and loads the appropriate drivers. In fact, every boot for Linux is the same: boot, detect hardware, load the OS. Whether it's been on the same hardware since day dot, or you change hardware every day, it's the same loading process.

I have on numerous occasions now rescued businesses by throwing known-working hard disks from servers with blown RAM/motherboards/etc into a completely different system; it's booted fine and the business has continued as normal. See my earlier comment about telling your CEO that you can convert a site office into a working server room in under an hour in the event of your head office being totally destroyed. In business terms, that's great reassurance.

RAID, LVM and enterprise disk management

RAID: Redundant Array of Inexpensive/Independent Disks

RAID is a way of writing information to multiple disks in one go. It can split the information up so each disk writes half as much, making the write twice as quick (called "RAID0" or "striping"), or it can simultaneously write the same data to 2 or more disks at once, meaning that if one fails, the other can take over on the fly and the system notifies you to replace the busted one (called "RAID1" or "mirroring"). More complex RAID levels exist where data is split over multiple disks for speed, but a parity dataset is also calculated, allowing a broken disk to have its information rebuilt mathematically after a new disk is installed (called "RAID5" or "striping with parity"). Very useful, but taxing on resources, since the parity calculations cost CPU time.
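The parity maths is surprisingly simple: it's a bitwise XOR across the data disks. A minimal sketch you can try in any shell, with two data bytes standing in for two whole disks:

    $ echo $(( 0xA5 ^ 0xC3 ))   # parity P = A XOR B, stored on the parity disk
    102
    $ echo $(( 102 ^ 0xC3 ))    # disk A dies: rebuild it as P XOR B
    165                         # 165 = 0xA5, the lost data, recovered exactly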

There are 2 types of RAID: Software and Hardware.

Now here's the tricky bit: a lot of people incorrectly assume "Hardware RAID" means anything that comes on a card or chip. THIS IS WRONG. Hardware RAID is where the RAID controller has a dedicated RAID calculation CPU (an Intel XScale running at around 300MHz is a popular choice). You will know a card is hardware RAID because (a) the card will cost more than $600, and (b) the RAID set will appear to your computer as a logical/virtual drive, and you won't see the individual disks.

Software RAID can be done in pure software, or on a card. Cards like the cheap and nasty "Promise" devices that sit in the market at anywhere from $100 to $300 are NOT hardware RAID. Despite offering RAID via a card, the actual grunt work is done in software, via a driver with some extra code on top that makes your system's CPU do all the hard work. The card itself is dumb, and just passes information back and forth.

Linux has a piece of software built into it called MD (Multiple Devices). In Linux, hard disks are classified as block devices, named hd (IDE/PATA hard disks) or sd (Serial/SATA/SCSI/USB/Firewire hard disks). MD is a kernel-level virtual device that is capable of marrying any set of disks (you can even mix IDE and SATA disks in Linux RAID!) into a virtual disk that can be configured at almost any RAID level (0, 1, 5, 6 and 10 are currently the most popular). Adding and removing drives on the fly is done with simple command line inputs, and realtime statistics on drive rebuilding, drive health, and other such useful info are output via the Linux virtual "proc" filesystem as plain-text, realtime-updated readable files (adding the ability to monitor this over network/internet/web-browser is very simple).
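A minimal sketch using mdadm, the standard MD management tool (the device names here are illustrative; point it at whatever disks you actually have):

    $ sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
    $ cat /proc/mdstat               # plain-text, realtime view of array state and rebuild progress
    $ sudo mdadm --detail /dev/md0   # per-array health, member disks, and sync status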

Linux only supports RAID1 (mirroring) for your /boot partition. Why? Think about it: the Linux kernel needs to be loaded before it knows what RAID is, so how does it load itself off a RAID disk? RAID1 is an exact mirror on both disks, so the kernel is read from one disk as if it were a plain disk, and once the kernel knows about RAID, it brings in the second disk as the mirror. RAID0, 5 and 6, which all involve striping (one file lives half on one disk and half on another), are not supported for /boot. However, the rest of your Linux system, and especially your user data, are free to live on other RAID levels.
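In practice that just means one small mirrored array for /boot and whatever level you like for the rest. A sketch of a hypothetical three-disk layout:

    $ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1              # /boot, mirrored
    $ sudo mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2    # everything else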

When using Linux, I recommend one of the following two alternatives:

Use either a REAL HARDWARE RAID card, or use Linux's built-in kernel-level software RAID. Do not, under any circumstances, use a cheap and nasty software RAID card. Why? Several reasons:

1) Driver support for software RAID cards is often proprietary, and involves all sorts of painful configuration for Linux

2) Performance of software RAID cards is typically WORSE than Linux's own internal software RAID

3) Software RAID cards often provide no way of adding and removing drives remotely. For someone like me who administers servers hundreds of km away, I simply cannot afford the downtime of flying out to a site just to replace a hard disk, when a few simple commands (see the sketch after this list) could add in a hot spare and the onsite admins could take their time replacing the busted drive under warranty.

4) Hardware RAID provides true, independent RAID calculations on the card itself (ironically, most true hardware RAID controllers run a small embedded Linux or BSD subsystem!). Drives appear to the operating system (Windows or Linux) as a single SCSI drive, and the end user is largely ignorant of what goes on "behind the scenes". True hardware RAID cards are preferred over software RAID cards because they can continue working even if the OS has crashed. The downside is that only the most expensive multi-thousand-dollar cards support remote access, which means that if a drive dies, you're on your bike out to the site to replace it manually. You can mitigate this by configuring automatic hot failover to spare drives, and if you work in the office where the server lives, it's not a huge drama.
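With Linux MD, that remote disk swap from point 3 really is just a few commands over SSH. A sketch (device names illustrative):

    $ sudo mdadm /dev/md0 --fail /dev/sdc1     # mark the dying disk as failed
    $ sudo mdadm /dev/md0 --remove /dev/sdc1   # drop it out of the array
    $ sudo mdadm /dev/md0 --add /dev/sdd1      # add the spare; the rebuild kicks off automatically
    $ cat /proc/mdstat                         # watch the rebuild tick along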

As Linux MD RAID devices are true block devices (Linux treats them as "real" hard disks), they can be used in all sorts of tricky ways. Enterprise users will know terms like iSCSI and Fibre Channel. Under Linux, an MD device can be exported as either. Using a cheap Linux box and a dozen SATA drives, you can build your own iSCSI disk to use on any system you like (where I work, we have 2 Linux iSCSI machines that serve as hard disks for Windows servers), at around one third of what commercial iSCSI devices cost!
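As a rough sketch of the iSCSI side, here's an MD array exported with the tgt userspace target (the IQN is made up, and other targets like IET or LIO work along the same lines):

    $ sudo tgtadm --lld iscsi --op new --mode target --tid 1 \
        --targetname iqn.2012-07.example.com:md-storage        # create the iSCSI target
    $ sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/md0   # back it with the MD array
    $ sudo tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL   # allow any initiator (lock this down in production)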


<< Part 5 | Part 7 >>
