Installing SteelHead-v on Linux KVM
SteelHead-v models VCX10 through VCX110, and models VCX1555H and lower, are supported on the Linux Kernel-based Virtual Machine (KVM) hypervisor. For detailed information about KVM, see the Linux KVM documentation.
Kernel-based Virtual Machine (KVM) is a virtualization solution for Linux on x86 hardware. KVM consists of a loadable kernel module that provides the core virtualization infrastructure and a processor-specific module that provides virtualization extensions. Using KVM, you can run multiple virtual machines running unmodified Linux or Windows images. KVM is open source software. The kernel component of KVM is included in mainline Linux, as of version 2.6.20. The user-space component of KVM is included in mainline QEMU, as of version 1.3.
KVM supports various I/O virtualization technologies. Paravirtualized drivers, which enable direct communication between hypervisor-level drivers and guest-level drivers, provide the best performance when compared with full virtualization. The virtio API provides a common set of paravirtualized device drivers for KVM.
SteelHead-v for KVM supports only virtio-based paravirtualized device drivers.
A SteelHead-v for KVM can be launched in several ways, each with its own procedure. This document describes how to launch a SteelHead-v for KVM by using the supplied installation script and the virsh command.
This chapter describes how to install and configure a SteelHead-v for KVM virtual appliance.
Basic steps for setting up a SteelHead-v for KVM
This section provides an overview of the basic steps to install and configure SteelHead-v for KVM. Detailed procedures are provided in the sections that follow.
Task | Reference
1. Verify that your KVM host system meets the installation prerequisites. | "Prerequisites for installing SteelHead-v on KVM"
2. Provision a KVM with adequate resources to run the SteelHead-v model you want. | "Prerequisites for installing SteelHead-v on KVM"
3. Obtain the SteelHead-v for KVM package from Support and unpack it. | "Obtaining the SteelHead-v for KVM software package"
4. Install the SteelHead-v for KVM image on the virtual machine. | "Installing SteelHead-v on a KVM virtual machine"
5. Power on the VM, start the SteelHead-v for KVM, and log in. | "Installing SteelHead-v on a KVM virtual machine"
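At a glance, these steps reduce to a few commands on the KVM host. The following sketch assumes a package file named steelhead-v-kvm.tar and an appliance named vsh1; both are placeholders, so substitute the actual file name from the Support site and your own appliance name.
# Unpack the SteelHead-v for KVM package (file name is an example)
tar -xvf steelhead-v-kvm.tar
# Run the installation script and answer its prompts
./install.sh
# Create the virtual appliance from the generated specification file
virsh create vsh1.xml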
Prerequisites for installing SteelHead-v on KVM
Ensure the KVM host system is configured to meet these requirements:
• Ensure the host system has at least four network interfaces.
• Configure the system’s network environment so that the LAN and WAN interfaces are on separate networks or bridges.
• Ensure the host meets the minimum standards for the SteelHead-v model you want to run on it.
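The commands below are one way to spot-check these prerequisites on the host before you begin; they are a sketch, not part of the formal installation procedure.
# Confirm that the CPU supports hardware virtualization and that the KVM modules are loaded
egrep -c '(vmx|svm)' /proc/cpuinfo
lsmod | grep kvm
# List the host network interfaces and the networks/bridges known to libvirt
ip link show
virsh net-list --all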
Obtaining the SteelHead-v for KVM software package
The SteelHead-v for KVM package is a tar file that contains these files:
• install.sh—Installation script that creates the RiOS data store disk image and generates an XML specification file, domain.xml, for the SteelHead-v instance.
• mgmt.img—Management disk image in qcow2 format.
• riverbed_model_tmp—Metadata file that lists the supported SteelHead-v models and their virtual hardware requirements.
To download the package, go to the Support site at https://support.riverbed.com. Access to software downloads requires registration.
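After downloading, unpack the tar file and confirm that the three files described above are present. The package file name shown here is an example only.
tar -xvf steelhead-v-kvm.tar
ls
# Expect to see: install.sh  mgmt.img  riverbed_model_tmp
# (the archive may extract into a subdirectory; adjust the path accordingly)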
Installing SteelHead-v on a KVM virtual machine
This section describes how to install SteelHead-v on a KVM.
The virtual NICs must be in this order: primary, auxiliary (aux), LAN, and then WAN. The virtual disks must be in this order: management (mgmt) and then data store (segstore).
To install a SteelHead-v on a KVM
1. Run the install script. The script prompts you for this configuration information:
– Name for the virtual appliance.
– SteelHead-v model you want to use. Supported models are listed in riverbed_model_tmp.
– Location of the directory for the RiOS data store files. For example, if you specify /mnt/riverbed/segstore, the data store files are created as /mnt/riverbed/segstore/segstore_1.img, /mnt/riverbed/segstore/segstore_2.img, and so on.
– Networks to which you want to connect the primary, auxiliary (aux), LAN, and WAN interfaces.
– Whether these are networks or bridges.
2. Create the virtual appliance by entering this command:
virsh create <virtual-appliance-name>.xml
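After the command returns, you can confirm that the appliance is running, that the interface and disk order matches the requirement stated earlier, and then attach to the console to log in. The appliance name vsh1 is a placeholder.
# Confirm that the domain is running
virsh list --all
# Verify NIC order (primary, aux, LAN, WAN) and disk order (mgmt, then segstore)
virsh domiflist vsh1
virsh domblklist vsh1
# Attach to the serial console for the initial login
virsh console vsh1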
Performance tuning
This section describes configuration settings that are not required but can improve network throughput and optimization performance.
Domain process CPU pinning
Network I/O is processed by vhost threads, which are threads in the Quick Emulator (QEMU) user space. Vhost threads should be pinned to match the guest virtual CPU (vCPU) threads. We recommend pinning at least 2 CPUs for vhost threads, which allows these threads to run on the same subset of physical CPUs and memory, improving system performance.
This sample XML configuration pins CPUs 0 and 2 to be used for vhost thread processing.
<cputune>
<emulatorpin cpuset="0,2"/>
</cputune>
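On a domain that is already running, the same pinning can also be applied with virsh instead of editing the XML; this is a sketch, and vsh1 is a placeholder name.
# Pin the emulator (vhost) threads of the running domain to physical CPUs 0 and 2
virsh emulatorpin vsh1 0,2 --live
# Add --config as well to persist the setting for a defined (persistent) domain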
Disk I/O thread allocation and pinning
Input/output threads (I/O threads) are dedicated event loop threads for supported disk devices. I/O threads perform block I/O requests that can improve the scalability of some systems, in particular Symmetric Multiprocessing (SMP) host and guest systems that have many logical unit numbers (LUNs).
We recommend pinning I/O threads to physical CPUs that reside in the same non-uniform memory access (NUMA) node.
This sample XML configuration defines four I/O threads for the disk devices by using the iothreads XML element, and pins each I/O thread to CPUs 4, 6, 8, and 10 by using the iothreadpin XML element.
<domain>
<iothreads>4</iothreads>
</domain>
<cputune>
<iothreadpin iothread='1' cpuset='4,6,8,10'/>
<iothreadpin iothread='2' cpuset='4,6,8,10'/>
<iothreadpin iothread='3' cpuset='4,6,8,10'/>
…
</cputune>
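On libvirt versions that support I/O thread tuning, the thread-to-CPU affinity can also be inspected and changed at run time with virsh; a sketch, with vsh1 as a placeholder name.
# Show the I/O threads of the running domain and their current CPU affinity
virsh iothreadinfo vsh1
# Pin I/O thread 1 of the running domain to CPUs 4, 6, 8, and 10
virsh iothreadpin vsh1 1 4,6,8,10 --live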
Distributing disk I/O load evenly across data store disks
We recommend distributing RiOS data store disk I/O evenly across the allocated I/O threads.
In this example, four I/O threads and eight RiOS data store disks are allocated, so assigning two disks to each thread distributes the processing evenly. This disk definition assigns the first data store disk to I/O thread 1:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' iothread='1' cache='none' io='threads'/>
<source file='/work/quicksilver/storage/oak-cs737-vsh1-segstore1.img'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
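Extending that pattern to all eight data store disks pairs each I/O thread with two disks. The fragment below is a sketch only; the file names, target devices, and PCI addressing are illustrative and must match your own generated domain.xml.
<!-- Disks 1 and 2 use I/O thread 1, disks 3 and 4 use I/O thread 2, and so on -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' iothread='1' cache='none' io='threads'/>
  <source file='/mnt/riverbed/segstore/segstore_2.img'/>
  <target dev='vdc' bus='virtio'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' iothread='2' cache='none' io='threads'/>
  <source file='/mnt/riverbed/segstore/segstore_3.img'/>
  <target dev='vdd' bus='virtio'/>
</disk>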
Pinning virtual CPUs to physical CPUs
Pinning each virtual CPU to a separate physical core can increase optimization performance.
We recommend that physical CPU 0 (zero) not be used for pinning virtual CPUs.
To pin virtual CPUs to physical cores
1. Use the lscpu command to view the configuration of the physical cores on the KVM host.
2. Find the physical CPUs that share the same NUMA node.
In this example physical CPUs 0 to 4 share node 0, physical CPUs 5 to 9 share node 1, physical CPUs 10 to 14 share node 2, and physical CPUs 15 to 19 share node 3.
NUMA node0 CPU(s): 0-4
NUMA node1 CPU(s): 5-9
NUMA node2 CPU(s): 10-14
NUMA node3 CPU(s): 15-19
3. Open the domain.xml file created during the SteelHead-v for KVM instantiation.
4. Add a <cputune> section to the file and assign each virtual CPU to a single, separate physical core. For example:
<cputune>
<vcpupin vcpu='0' cpuset='5'/>
<vcpupin vcpu='1' cpuset='6'/>
<vcpupin vcpu='2' cpuset='7'/>
<vcpupin vcpu='3' cpuset='8'/>
</cputune>
In this example, virtual CPU 0 is pinned to physical CPU 5, virtual CPU 1 to physical CPU 6, virtual CPU 2 to physical CPU 7, and virtual CPU 3 to physical CPU 8.
5. Save your changes to the domain.xml file.
6. Restart the virtual machine.
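Equivalent pinning can also be applied to a running domain with virsh, which avoids editing domain.xml and restarting; a sketch, with vsh1 as a placeholder name.
# Check the current virtual-to-physical CPU mapping
virsh vcpuinfo vsh1
# Pin virtual CPUs 0 through 3 to physical CPUs 5 through 8 on the running domain
virsh vcpupin vsh1 0 5 --live
virsh vcpupin vsh1 1 6 --live
virsh vcpupin vsh1 2 7 --live
virsh vcpupin vsh1 3 8 --live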
Separating the RiOS data store and management disks
Performance improvements can be achieved by placing the RiOS data store and management virtual disks on separate physical storage devices. The RiOS data store should be placed on the fastest disk drive, such as a solid-state drive (SSD).
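In the domain XML, this amounts to pointing the <source> elements of the two disks at paths backed by different physical devices. The mount points /mnt/ssd and /mnt/hdd below are examples only.
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/mnt/hdd/images/mgmt_internal.img'/>      <!-- management disk on a conventional drive -->
  <target dev='vda' bus='virtio'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/mnt/ssd/images/segstore_internal.img'/>  <!-- RiOS data store on the fastest (SSD) storage -->
  <target dev='vdb' bus='virtio'/>
</disk>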
Setting disk cache mode to none
Write performance improvements on the RiOS data store disk drive can be achieved by setting the disk cache mode to none. Riverbed supports qcow2 format for the data store disk.
With caching mode set to none, the host page cache is disabled, but the disk write cache is enabled for the guest. In this mode, the write performance in the guest is optimal because write operations bypass the host page cache and go directly to the disk write cache. If the disk write cache is battery-backed, or if the applications or storage stack in the guest transfer data properly (either through fsync operations or file system barriers), then data integrity can be ensured.
To set disk cache mode to none
1. Open the domain.xml file created during the SteelHead-v for KVM instantiation.
2. Append these attributes to the <driver> element within the <disk> section that refers to your RiOS data store.
cache='none' io='threads'
Example:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none' io='threads'/>
<source file='/work/quicksilver/storage/oak-cs737-vsh1-segstore1.img'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
3. Save your changes to the domain.xml file.
4. Restart the virtual machine.
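After the restart, one way to confirm that the setting took effect is to dump the live domain definition and inspect the disk driver lines; vsh1 is a placeholder name.
# List the disk driver settings of the running domain
virsh dumpxml vsh1 | grep "<driver"
# The data store disk's driver line should include cache='none' io='threads'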
Example SteelHead-v specification file
The installation script creates the RiOS data store disk image and the specification file, domain.xml, that defines key configuration elements of the virtual appliance. Here is an example SteelHead-v specification file:
<domain type='kvm'>
<name>VSH_internal</name>
<description>Riverbed Virtual SteelHead Model VCX255U</description>
<memory unit='KiB'>2097152</memory>
<vcpu placement='static'>1</vcpu>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>qemu</entry>
<entry name='product'>qemu</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64'>hvm</type>
<boot dev='hd'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<devices>
<controller type='usb' index='0'>
<alias name='usb0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</interface>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/mnt/images/mgmt_internal.img'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/mnt/images/segstore_internal.img'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</memballoon>
</devices>
</domain>