Setting Up SteelHead-v on KVM
SteelHead-v models VCX10 through VCX90, and models VCX1555H and lower, are available in Kernel-based Virtual Machine (KVM) format.
Kernel-based Virtual Machine (KVM) is a virtualization solution for Linux on x86 hardware. A KVM consists of a loadable kernel module that provides the core virtualization infrastructure and a processor-specific module that provides virtualization extensions. Using KVM, you can run multiple virtual machines running unmodified Linux or Windows images. KVM is open source software. The kernel component of KVM is included in mainline Linux, as of version 2.6.20. The user-space component of KVM is included in mainline QEMU, as of version 1.3.
KVM supports various I/O virtualization technologies. Paravirtualized drivers, which enable direct communication between hypervisor-level drivers and guest-level drivers, provide the best performance when compared with full virtualization. The virtio API provides a common set of paravirtualized device drivers for KVM.
Note: SteelHead-v for KVM supports only virtio-based paravirtualized device drivers.
A SteelHead-v for KVM can be launched in several ways, each with its own procedure. This document describes how to launch a SteelHead-v for KVM by using the supplied installation script and the virsh command.
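For example, before you begin you can confirm that the host kernel has KVM support loaded, that a hardware virtualization extension (Intel VT-x or AMD-V) is available, and that the libvirt and QEMU user-space tools are installed. These are standard Linux and libvirt commands, not part of the SteelHead-v package:
# lsmod | grep kvm
# grep -cE 'vmx|svm' /proc/cpuinfo
# virsh version
The first command should list the kvm module plus kvm_intel or kvm_amd, the second should return a nonzero count, and the third reports the installed libvirt and QEMU versions.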
This chapter describes how to install and configure a SteelHead-v for KVM virtual appliance. It includes these sections:
•  Basic steps for setting up a SteelHead-v for KVM
•  Installation prerequisites
•  Obtaining the SteelHead-v software package
•  Installing SteelHead-v on a KVM
•  Performance tuning
•  Example SteelHead-v specification file
Basic steps for setting up a SteelHead-v for KVM
This section provides an overview of the basic steps to install and configure SteelHead-v for KVM. Detailed procedures are provided in the sections that follow.
1. Verify that your KVM host system meets the installation prerequisites.
2. Provision a KVM with adequate resources to run the SteelHead-v model you want.
3. Obtain the SteelHead-v for KVM package from Riverbed Support and unpack it.
4. Install the SteelHead-v for KVM image on the virtual machine.
5. Power on the VM, start the SteelHead-v for KVM, and log in.
 
Installation prerequisites
Ensure the KVM host system is configured to meet these requirements (example commands for verifying them follow the list):
•  Ensure the host system has at least four network interfaces.
•  Configure the system’s network environment so that the LAN and WAN interfaces are on separate networks or bridges.
•  Ensure the host meets the minimum standards for the SteelHead-v model you want to run on it. See Third-party software dependencies and SteelHead-v models.
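You can verify these prerequisites from a shell on the KVM host. The commands below are standard Linux and libvirt tools; the interface, bridge, and network names they report depend on your environment:
# ip link show
# ip link show type bridge
# virsh net-list --all
Confirm that at least four usable interfaces are present and that the bridges or libvirt networks you plan to attach the LAN and WAN interfaces to are separate and active.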
Obtaining the SteelHead-v software package
The SteelHead-v for KVM package is a tar file that contains these files:
•  install.sh - Installation script that creates the segstore image and generates an XML specification file, domain.xml, for the SteelHead-v instance.
•  mgmt.img - Management disk file in qcow2 format.
•  riverbed_model_tmp - Metadata file that lists the supported SteelHead-v models and their virtual hardware requirements.
To download the package from the Riverbed Support website, go to https://support.riverbed.com. Access to software downloads requires registration.
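For example, to unpack the package after downloading it (the filename below is a placeholder; use the name of the file you downloaded):
# tar xvf <steelhead-v-kvm-package>.tar
Unpacking extracts install.sh, mgmt.img, and riverbed_model_tmp.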
Installing SteelHead-v on a KVM
This section describes how to install SteelHead-v on a KVM.
Note: The virtual NICs must be in this order: primary, auxiliary (aux), LAN, and WAN. The virtual disks must be in this order: management (mgmt) and segstore.
To install a SteelHead-v on a KVM
1. Run the install script. The script prompts you for this configuration information:
•  Name for the virtual appliance.
•  SteelHead-v model you want to use. Supported models are listed in riverbed_model_tmp.
•  Location of the directory for segstore files. For example, if you specify /mnt/riverbed/segstore, the segstore files are created as /mnt/riverbed/segstore/segstore_1.img, /mnt/riverbed/segstore/segstore_2.img, and so on.
•  Networks to which you want to connect the primary, auxiliary (aux), LAN, and WAN interfaces.
•  Whether these are networks or bridges.
2. Create the virtual appliance by entering this command:
# virsh create <virtual-appliance-name>.xml
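After the command completes, you can confirm that the virtual appliance is running and connect to its serial console (the appliance name below is the one you supplied to the install script):
# virsh list --all
# virsh console <virtual-appliance-name>
The specification file generated by the install script defines a serial console (see the example at the end of this chapter), so virsh console attaches directly to the appliance console for the initial login.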
Performance tuning
These sections describe configuration settings that are not required but can improve network throughput and optimization performance.
Pinning virtual CPUs to physical CPUs
Pinning each virtual CPU to a separate physical core can increase optimization performance.
Note: We recommend that you do not use physical CPU 0 (zero) for pinning virtual CPUs.
To pin virtual CPUs to physical cores
1. Use the lscpu command to view the configuration of the physical cores on the KVM host.
2. Find the physical CPUs that share the same NUMA node.
In this example, physical CPUs 0-4 share node 0, physical CPUs 5-9 share node 1, physical CPUs 10-14 share node 2, and physical CPUs 15-19 share node 3.
NUMA node0 CPU(s): 0-4
NUMA node1 CPU(s): 5-9
NUMA node2 CPU(s): 10-14
NUMA node3 CPU(s): 15-19
3. Open the domain.xml file created during the SteelHead-v for KVM instantiation.
4. Add a <cputune> section to the file and assign each virtual CPU to a single, separate physical core. For example:
<cputune>
<vcpupin vcpu='0' cpuset='5'/>
<vcpupin vcpu='1' cpuset='6'/>
<vcpupin vcpu='2' cpuset='7'/>
<vcpupin vcpu='3' cpuset='8'/>
</cputune>
In this example, virtual CPU 0 is pinned to physical CPU 5, virtual CPU 1 to physical CPU 6, virtual CPU 2 to physical CPU 7, and virtual CPU 3 to physical CPU 8.
5. Save your changes to the domain.xml file.
6. Restart the virtual machine.
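If the domain is defined persistently in libvirt, you can also apply the same pinning with the virsh vcpupin command instead of editing domain.xml by hand. For example (the domain name is a placeholder, and the physical CPU numbers match the example above):
# virsh vcpupin <virtual-appliance-name> 0 5 --config
# virsh vcpupin <virtual-appliance-name> 1 6 --config
# virsh vcpupin <virtual-appliance-name> 2 7 --config
# virsh vcpupin <virtual-appliance-name> 3 8 --config
# virsh vcpupin <virtual-appliance-name>
Running virsh vcpupin with no CPU arguments displays the current pinning. The --config option stores the change in the persistent domain definition, so it takes effect the next time the domain starts.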
Separating the SteelHead-v segstore and management disks
Performance improvements can be achieved by placing the segstore and management virtual disks on separate physical storage devices. Place the segstore on the fastest available drive, such as a solid-state drive (SSD).
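For example, you can check which physical device backs the segstore directory and whether that device is rotational. These are standard Linux commands, and the path is the example segstore location from the installation procedure:
# df -h /mnt/riverbed/segstore
# lsblk -o NAME,ROTA,MOUNTPOINT
In the lsblk output, a ROTA value of 0 indicates a non-rotational device such as an SSD.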
Setting disk cache mode to none
Write performance improvements on the segstore disk can be achieved by setting disk cache mode to none.
With caching mode set to none, the host page cache is disabled, but the disk write cache is enabled for the guest. In this mode, the write performance in the guest is optimal because write operations bypass the host page cache and go directly to the disk write cache. If the disk write cache is battery-backed, or if the applications or storage stack in the guest transfer data properly (either through fsync operations or file system barriers), then data integrity can be ensured.
Note: Riverbed supports qcow2 format for the segstore disk.
To set disk cache mode to none
1. Open the domain.xml file created during the SteelHead-v for KVM instantiation.
2. Append these attributes to the <driver> element within the <disk> section that refers to your segstore.
cache='none' io='native'
Example:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none' io='native'/>
<source file='/work/quicksilver/storage/oak-cs737-vsh1-segstore1.img'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
3. Save your changes to the domain.xml file.
4. Restart the virtual machine.
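To confirm the setting after the restart, you can dump the running domain definition and check the <driver> element of the segstore disk (the domain name is a placeholder):
# virsh dumpxml <virtual-appliance-name> | grep cache
The output should include the driver line containing cache='none' io='native'.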
Example SteelHead-v specification file
The installation script creates the segstore image and the specification file, domain.xml, that defines key configuration elements of the virtual appliance. Here is an example SteelHead-v specification file:
<domain type='kvm'>
<name>VSH_internal</name>
<description>Riverbed Virtual SteelHead Model VCX255U</description>
<memory unit='KiB'>2097152</memory>
<vcpu placement='static'>1</vcpu>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>qemu</entry>
<entry name='product'>qemu</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64'>hvm</type>
<boot dev='hd'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<devices>
<controller type='usb' index='0'>
<alias name='usb0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</interface>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/mnt/images/mgmt_internal.img'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/mnt/images/segstore_internal.img'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</memballoon>
</devices>
</domain>
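Note that virsh create, used earlier in this chapter, starts a transient instance directly from the XML file. If you prefer to register the appliance persistently with libvirt, and optionally start it automatically when the host boots, you can define it instead. For example, using the specification file and the domain name from the example above (substitute your own file and appliance names):
# virsh define domain.xml
# virsh start VSH_internal
# virsh autostart VSH_internal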