Setting Up the SCC-VE on KVM
SteelCentral Controller for SteelHead (virtual edition) (SCC-VE) is available in Kernel-based Virtual Machine (KVM) format, model 8152.
Kernel-based Virtual Machine (KVM) is a virtualization solution for Linux on x86 hardware. KVM consists of a loadable kernel module that provides the core virtualization infrastructure and a processor-specific module that provides virtualization extensions. Using KVM, you can run multiple virtual machines with unmodified Linux or Windows images. KVM is open source software; its kernel component has been included in mainline Linux since version 2.6.20.
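Before you install anything, you can confirm that the host kernel exposes KVM. This is a minimal host-side sanity check using standard Linux commands, not a Riverbed-specific procedure:
lsmod | grep kvm      # the kvm core module plus kvm_intel or kvm_amd should be listed
ls -l /dev/kvm        # this device node must exist for KVM guests to run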
KVM supports various I/O virtualization technologies. Paravirtualized drivers, which enable direct communication between hypervisor-level drivers and guest-level drivers, provide the best performance compared with full virtualization. The virtio API provides a common set of paravirtualized device drivers for KVM.
Note: SCC-VE for KVM supports only virtio-based paravirtualized device drivers.
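To confirm that the host's QEMU build provides virtio devices, you can query the emulator directly. This is a generic QEMU check; the emulator binary name varies by distribution (the example specification file later in this appendix uses /usr/bin/kvm-spice):
qemu-system-x86_64 -device help | grep virtio    # should list virtio-net-pci, virtio-blk-pci, and so on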
The SCC-VE for KVM can be launched in several ways, each with its own procedure. This document describes how to launch an SCC-VE for KVM by using the supplied installation script and the virsh command.
This appendix describes how to install and configure an SCC-VE for a KVM virtual appliance. It includes these sections:
•  Basic steps for setting up an SCC-VE for KVM
•  Installation prerequisites
•  Obtaining the SCC-VE software package
•  Installing SCC-VE on a KVM
•  Example SCC-VE specification file
Basic steps for setting up an SCC-VE for KVM
This section provides an overview of the basic steps to install and configure SCC-VE. Detailed procedures are provided in the sections that follow.
1. Verify that your KVM host system meets the installation prerequisites. (See "Installation prerequisites.")
2. Obtain the SCC-VE for KVM package from Riverbed Support and unpack it. (See "Obtaining the SCC-VE software package.")
3. Install the SCC-VE for KVM image on the virtual machine. (See "Installing SCC-VE on a KVM.")
4. Power on the VM, restart the SCC-VE for KVM, and log in. (See "Installing SCC-VE on a KVM.")
Installation prerequisites
Ensure the KVM host system is configured to meet these requirements:
•  SCC-VE requires 4096 MB of memory, 2 vCPUs, and 27 GB of disk space.
•  SCC-VE for KVM has been tested on these operating systems together with virtio paravirtualized device drivers: CentOS 7.2 and Ubuntu 14.04.
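A quick way to check the host against these requirements, assuming a standard Linux userland; the image directory shown is only an example:
grep MemTotal /proc/meminfo       # total host memory in kB
nproc                             # CPU cores available on the host
df -h /var/lib/libvirt/images     # free space where you plan to keep the disk images (adjust the path)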
Obtaining the SCC-VE software package
The SCC-VE for KVM package is a tar file, image-vcx.kvm.tgz, containing these files:
•  install.sh - Installation script that generates an XML specification file, domain.xml, for the SCC-VE instance.
•  mgmt.img - Management disk file in qcow2 format.
•  datastore.img - Data disk file in qcow2 format.
•  riverbed_model_tmp - Metadata file that contains the specifications for the SCC-VE models.
To download the package from the Riverbed Support site, go to https://support.riverbed.com. Access to software downloads requires registration.
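For example, to unpack the package on the KVM host (the /mnt/riverbed directory matches the example paths used later in this appendix; adjust to your environment):
mkdir -p /mnt/riverbed
cd /mnt/riverbed
tar xzf image-vcx.kvm.tgz
ls    # expect datastore.img, install.sh, mgmt.img, riverbed_model_tmp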
Installing SCC-VE on a KVM
This section describes how to install SCC-VE on a KVM.
Note: The virtual NICs must be configured in this order: primary first, then auxiliary (aux). The virtual disks must be in this order: management (mgmt) first, then datastore.
To install an SCC-VE on a KVM
1. Run the install script. The script prompts you for this configuration information:
–  Name for the virtual appliance (should be fewer than 80 characters).
–  Location of the mgmt.img file. For example: /mnt/riverbed/mgmt.img. (You are prompted for this parameter only if the install.sh script and the mgmt.img file are in different directories on the KVM host.)
–  Location of the datastore.img file. For example: /mnt/riverbed/datastore.img. (You are prompted for this parameter only if the install.sh script and the datastore.img file are in different directories on the KVM host.)
–  Virtual networks to which you want to connect the primary and auxiliary (aux) interfaces of the SCC-VE, and whether each is a libvirt network or a bridge. (You can list the candidates as shown below.)
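Before you run the script, you can list the candidate libvirt networks and Linux bridges on the host; these are standard libvirt and iproute2 commands:
virsh net-list --all          # libvirt networks (for example, default)
ip link show type bridge      # Linux bridges defined on the host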
Install script example:
./install.sh
What should the VM be named? scc_kvm_trial
Please enter the location of mgmt.img: /mnt/riverbed/mgmt.img
Please enter the location of datastore.img: /mnt/riverbed/datastore.img
What network should interface primary be connected to? default
What type of network should be used for primary, network? or bridge? network
Using network for primary
What network should interface aux be connected to? default
What type of network should be used for aux, network? or bridge? network
Using network for aux
After the installation process is complete, this message appears:
Successfully created a KVM virtual SCC, please use virsh define scc_kvm_trial.xml followed by virsh start scc_kvm_trial.xml to start it.
2. Start the SCC-VE by running these commands:
–  Enter the virsh define command followed by the virsh start command. Note that virsh define takes the XML file, while virsh start takes the domain name.
virsh define <virtual-appliance-name>.xml
virsh start <virtual-appliance-name>
–  Alternatively, you can start the SCC-VE by using the virsh create command, but using this command invalidates the license.
virsh create <virtual-appliance-name>.xml
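For example, with the VM name used in the install script example above:
virsh define scc_kvm_trial.xml    # registers the domain with libvirt
virsh start scc_kvm_trial         # starts the registered domain by name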
To start or shut down the SCC-VE using virsh
Note: The virsh reboot and virsh shutdown commands are not supported by SCC-VE.
•  Use the virsh start <virtual-appliance-name> command to start the appliance.
•  Use the virsh destroy <virtual-appliance-name> command to shut down the appliance.
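To confirm the state of the appliance at any point, the standard virsh queries apply:
virsh list --all                          # shows running and shut-off domains
virsh domstate <virtual-appliance-name>   # reports the current state of one domain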
To resize the datastore on SCC-VE
Resizing the datastore might be required if your SCC-VE manages more than ten appliances. Use these commands to resize the datastore disk; this requires destroying and restarting the SCC-VE.
virsh destroy <name-of-kvm-instance>
sudo qemu-img resize datastore.img +<size> (for example: sudo qemu-img resize datastore.img +2G)
virsh start <name-of-kvm-instance>
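You can verify the result of the resize before restarting the appliance; qemu-img info is a standard QEMU tool:
sudo qemu-img info datastore.img    # the virtual size field should reflect the increase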
Example SCC-VE specification file
The installation script creates the specification file, domain.xml, that defines key configuration elements of the virtual appliance. Here is an example SCC-VE specification file:
<domain type='kvm'>
  <name>scc_internal</name>
  <description>Riverbed Virtual SCC Model 8152</description>
  <memory unit='KiB'>4597888</memory>
  <vcpu placement='static'>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>qemu</entry>
      <entry name='product'>qemu</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-trusty'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/mnt/images/mgmt.img'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/mnt/images/datastore.img'/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <interface type='network'>
      <source network='default'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <source bridge='default'/>
      <virtualport type='openvswitch'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='none'/>
</domain>
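After defining the domain, you can confirm that libvirt sees the expected resources and the required NIC and disk order; the domain name here matches the example file above:
virsh dominfo scc_internal                                   # memory, vCPUs, and state
virsh dumpxml scc_internal | grep -E '<interface|<target'    # verify interface and disk order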