2007.03
Šarūnas Burdulis
Department of Mathematics
Dartmouth College
The server was built using a TYAN chassis and motherboard (Transport TA26, Thunder h2000M) with two dual-core AMD Opteron 2218 CPUs (AMD-V) and 16GB of RAM. Four Seagate 320GB SATA disks were connected to a 3ware 9650SE PCIe controller in a hardware RAID-5 configuration.
Debian Etch uses Linux 2.6.18, but the 3w_9xxx driver version included in 2.6.18 does not work with the 3ware 9650SE. To install Etch, a customized Debian netinstall CD image from 3ware.com was used (debian-etch-4.0-x86_64.iso; search for "Q15012 - Compatibility: Does 3ware provide drivers for Debian 4.0 Etch?" in their Knowledge Base).
Debian Etch installed without a problem. cat /proc/cpuinfo showed four cores with svm flags, i.e. having the AMD-V capability to support full virtualization.
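A quick way to verify this (a minimal check, not from the original notes; the flag is svm on AMD CPUs, vmx on Intel):
$ grep -c svm /proc/cpuinfo
4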
Xen 3.0.3 is available in Etch and paravirtualized guests (domU) work fine. However, I wasn't able to successfully start any HVM-type (i.e. full virtualization, unmodified OS) guest domains.
The Xen 3.0.4 source xen-3.0.4_1-src.tgz was downloaded from xensource.com. Xen compiled and installed without any significant problems by following the supplied README, which is brief and clear. Depending on your Debian installation you may have to add some libraries and utilities. This is what was needed on our freshly installed Etch amd64 system:
# apt-get install gcc make binutils bcc bin86 libc6-dev \
    libc6-dev-i386 zlib-dev libssl-dev python-dev x-dev \
    kernel-package build-essential
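With those in place, the build itself follows the supplied README; the sequence is roughly as follows (assuming the tarball was unpacked under /usr/src):
# cd /usr/src/xen-3.0.4_1-src
# make world
# make install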
The Xen 3.0.4 source uses Linux 2.6.16, which again does not have a suitable driver for the 3ware 9650SE. The driver is open source and is included on the original CD, or it can be downloaded from www.3ware.com. Compilation is as simple as unpacking the .tgz archive, editing the SRC line in the Makefile to use the Xen-provided kernel sources (SRC := /usr/src/xen-3.0.4_1-src/linux-2.6.16.33-xen in my case) and running make. Copy the resulting 3w_9xxx.ko to /lib/modules/2.6.16.33-xen/kernel/drivers/scsi and run depmod -a. You'll notice that a 3w_9xxx.ko already exists in .../2.6.16.33-xen/.../scsi. I did try to use that module, but the kernel did not "see" our 3ware RAID volume while booting. Apparently that driver is for some of the earlier versions of 3ware controllers.
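Put together, the driver build looks roughly like this (the archive and directory names are illustrative; use the ones from the 3ware download):
$ tar xzf 3w_9xxx.tgz
$ cd 3w_9xxx
  ... edit SRC in Makefile as described above ...
$ make
# cp 3w_9xxx.ko /lib/modules/2.6.16.33-xen/kernel/drivers/scsi/
# depmod -a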
To make the newly compiled driver available on boot, we should add it to the initial RAM-disk image:
# mkinitramfs -o initrd.img-2.6.16.33-xen 2.6.16.33-xen
Copy the resulting initrd.img-2.6.16.33-xen to /boot/ and update your boot loader; for GRUB on Debian that is update-grub. Check /boot/grub/menu.lst to make sure it contains a Xen 3.0.4 entry with xen, Linux kernel (vmlinuz) and initrd lines, for example:
title Xen 3.0.4-1 / Debian GNU/Linux, kernel 2.6.16.33-xen
root (hd0,0)
kernel /boot/xen-3.0.4-1.gz
module /boot/vmlinuz-2.6.16.33-xen root=/dev/sda1 ro console=tty0
module /boot/initrd.img-2.6.16.33-xen
Reboot, select Xen 3.0.4-1 / Debian... and hopefully you will be logging into your new Xen/Debian domain dom0.
$ uname -a
Linux ghost 2.6.16.33-xen #1 SMP Wed Mar 21 11:15:33 EDT 2007 x86_64 GNU/Linux
To check HVM capabilities run xm info and look for the xen_caps line. If it lists hvm-* entries, for example:
# xm info | grep xen_caps
xen_caps : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
then your dom0 should be capable of hosting HVM guest domains.
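As an extra sanity check (not in the original notes), you can list the running domains:
# xm list
Domain-0 should appear in the output.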
Only minor changes were made to the default Xen configuration file /etc/xen/xend-config.sxp. The following options were uncommented and set:
(network-script network-bridge)
(vnc-listen '127.0.0.1')
(vncpasswd '...')
The VNC password is created with /usr/bin/vncpasswd.
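For the changed options to take effect, xend has to be restarted; on Debian this is done via its init script:
# /etc/init.d/xend restart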
A number of HVM guests were configured and installed using disk images of their corresponding install CDs. Installer images for Linux (Debian Etch, Ubuntu Edgy and Feisty, CentOS4) and Solaris 10 were downloaded from the Net. For Windows XP the image was made by "ripping" the CD:
# dd if=/dev/hdc of=winxp.iso
A separate logical disk partition was created for each HVM guest tested. A typical Xen guest domain configuration file looked like this (/etc/xen/edgy.hvm in this case):
kernel="/usr/lib/xen/boot/hvmloader"
builder='hvm'
device_model='/usr/lib/xen/bin/qemu-dm'
memory=1024
name='edgy'
vif=[ 'type=ioemu,bridge=xenbr0' ]
disk=['phy:/dev/sda6,ioemu:hda,w','file:/usr/local/iso/edgy_amd64.iso,hdc:cdrom,r']
boot='d'
vnc=1
The HVM domain is started by:
$ sudo xm create /etc/xen/edgy.hvm
and then:
$ vncviewer localhost
which should display a window with a possibly familiar installer running in it. After the installer completes and suggests a reboot, the configuration file has to be changed to use boot='c' instead of boot='d'.
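The whole cycle after installation looks something like this (using the 'edgy' domain from the configuration above):
$ sudo xm shutdown edgy
  ... edit /etc/xen/edgy.hvm: change boot='d' to boot='c' ...
$ sudo xm create /etc/xen/edgy.hvm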
NB: For Linux guest domains it was essential to have hdc:cdrom, not hdb:cdrom. Even though the installer seemed to start with hdb as usual, it then ran extremely slowly when accessing disks. This was specific to Linux guests; Solaris and Windows guests ran well independently of setting cdrom to either hdb or hdc.