About Virtual Deployments
Riverbed virtual products are available for download from the Support site. Access to download packages requires that you have a registered user account. Downloadable deployment packages are archived in formats suitable for a variety of virtualization environments. Each package contains:
an installation script.
an appliance specification file.
two virtual disk files: one for management, the other for data.
a package manifest file.
helper scripts (varies by product).
The steps to deploy any virtual appliance are generally the same across products:
1. Provision a virtual machine that meets or exceeds the minimum resource requirements for the product model you want to deploy. For details, go to the product family specification sheet.
2. Obtain a deployment package and install its contents on the virtual machine.
3. Map the virtual machine’s network interfaces to the appliance’s primary, auxiliary, LAN, and WAN interfaces.
4. Power on the virtual machine.
5. Complete the appliance’s first-time configuration.
6. License and further configure the appliance and its features using the Management Console.
As you provision resources to the virtual machine, provision the amount required for the model plus an additional amount for hypervisor overhead. For ESXi, reserve the memory and CPU cycles needed for the appliance model and verify that the host has resources to accommodate the 5 percent hypervisor overhead. For Hyper-V, reserve the memory and CPU percentage needed for the appliance model and verify that the host has 1.5 GB of memory and 15 percent of CPU capacity remaining for hypervisor overhead.
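For example, under these guidelines a model that needs 8 GB of memory on Hyper-V calls for a host with roughly 9.5 GB of memory free (8 GB for the appliance plus the 1.5 GB overhead) and about 15 percent of CPU capacity left unreserved beyond what the appliance itself uses. These figures are illustrative only, not a sizing recommendation for any particular model.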
Obtaining deployment packages
1. Log in to your Riverbed Support account.
2. Choose Software & Documentation, and then choose a product.
3. On the product page, select the Software tab.
4. Select a major version. If you select a point version, such as one with a letter appended to the version number, the page might not display links to the download packages for the virtual product.
5. Click the Software link for the product you want. The download begins. Note the download location.
About VMware deployments
The deployment package for VMware ESXi is an OVA archive, and it contains the VMX and VMDK files that are necessary to create the virtual appliance. The VMX file is the appliance specification file. The VMDK files represent the appliance’s management and data store virtual disks. The package contents require several gigabytes of disk space. Do not modify any of the files in the package.
If you are upgrading to ESXi 7.0 or later, you must first upgrade the VMware Tools to version 11.0.5 or later.
The ESXi datastore, as distinct from the appliance’s data store, is part of the underlying virtual environment and provides the virtual storage where the appliance’s management and data store disks are to be located. Make sure that the datastore has enough capacity for the OVA package. You can install the smaller VMDK containing the management disk on an ESXi datastore backed by any type of storage media. We recommend that you put the larger VMDK containing the appliance’s data store on a datastore backed by the fastest available physical storage media. That datastore should have enough room to expand to the size required by the appliance model.
Never delete the first VMDK. It contains the virtual appliance’s operating system.
If you’re using Riverbed network interface cards (NICs) on the physical ESXi host, you must map the appliance’s LAN interface to the virtual machine’s pg-vmnic2 port label and the appliance’s WAN interface to the virtual machine’s pg-vmnic3 port label.
SteelHead considerations
The deployment package contains a predefined configuration for the default appliance model, VCX30. Install the default model first, and then upgrade to your desired VCX model.
Using pass-through devices requires that a memory reservation be made for the full amount of allocated memory. This reservation is done automatically initially, but if a model upgrade requires more memory, you’ll need to manually increase the reservation before powering on the virtual machine.
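If you manage the host with VMware PowerCLI, you can also raise the reservation from the command line. This is only a sketch, assuming the PowerCLI module is installed and connected to the host; the appliance name and the 16 GB value are placeholders for your own virtual machine and upgraded model:
# Connect to the ESXi host first (placeholder host name)
Connect-VIServer -Server <esxi-host>
# Raise the memory reservation to match the upgraded model (16 GB shown as an example)
$vm = Get-VM -Name "SteelHead-VCX"
Get-VMResourceConfiguration -VM $vm | Set-VMResourceConfiguration -MemReservationMB 16384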
The following table shows supported RiOS versions for each listed ESXi version.
ESXi     RiOS 10.1.x, 10.2.x    RiOS 9.16.x    RiOS 9.15.x    RiOS 9.14.x    RiOS 9.12.x    RiOS 9.10.x    RiOS 9.9.x
8.0      Yes                    Yes            No             No             No             No             No
7.0      No                     Yes            Yes            Yes            No             No             N/A
6.7      No                     Yes            No             Yes            Yes            Yes            No
6.5      No                     Yes            No             No             Yes            Yes            Yes
6.0      No                     Yes            No             No             No             No             Yes
RiOS 9.16.0 does not currently support the Riverbed NIC for virtual appliances running on ESXi 8.0.
Some models only support a two-port 10-GbE multimode fiber NIC using direct I/O. You can configure bypass support using the VMware Direct Path feature. This feature allows the appliance to directly control the physical bypass card. You must use a Riverbed NIC. If you currently use a Riverbed NIC with ESXi, you can use the same card if you want to upgrade the ESXi version.
Deploying on VMware
1. Obtain the deployment package.
2. Log in to the web console of your target ESXi instance, and then initiate the deployment of a virtual machine. When prompted, select the deployment package you downloaded.
3. Select a datastore.
4. Configure hardware resources, such as CPU, RAM, and disk space. We recommend that you provision disks in thick format, which preallocates your specified amount of storage.
5. Map the virtual machine's network interfaces (primary, auxiliary, LAN, and WAN) to the corresponding host interfaces.
6. If available, enable the virtual machine to automatically power on.
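If you prefer a command-line import over the web console, VMware's ovftool utility can deploy the same OVA package. This is only a hedged sketch; the package name, appliance name, datastore, and host are placeholders for your environment:
ovftool --acceptAllEulas --name=<appliance-name> --datastore=<datastore> --diskMode=thick <package>.ova vi://root@<esxi-host>/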
About Microsoft deployments
The deployment package for Microsoft Hyper-V is a ZIP archive. You'll need to run the installation script in the deployment package from Windows PowerShell. To run the script, you might need to set the PowerShell execution policy to Unrestricted by using the Set-ExecutionPolicy Unrestricted command. The installation script accepts these parameters:
InstallLocation (required): The path to the directory for the virtual machine.
Model (required): The hardware model to configure. The model determines the disk sizes, memory, and CPU cores that the installation allocates.
VHDLocation (optional): The location where the script looks for the management VHD image. The default is the selected directory.
VMName (optional): The name for the virtual machine. The default is Riverbed SteelHead.
ComputerName (optional): The computer to install to. The default is localhost. If you are installing to a remote computer, enter the name of that computer.
NumInpaths (optional): The number of in-path interface pairs to create. The default is 1.
SegstoreSize (optional): A value in bytes (B) or gigabytes (GB) that overrides the allocated data store disk size. The default is the allocated disk size for your model.
PowerOn (optional): Include this parameter if you want the appliance to start up after the installation is complete.
PrimaryNetwork (optional): The name of the vSwitch to connect the primary NIC to.
AuxNetwork (optional): The name of the vSwitch to connect the auxiliary NIC to.
{WL}an{01234}_0Network (optional): The name of the vSwitch to connect the named in-path interface to (for example, Lan0_0Network or Wan0_0Network).
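For example, an invocation from PowerShell might look like the following. The script file name install-steelhead.ps1, the paths, the model, and the vSwitch names are placeholders; use the script shipped in your deployment package and the switch names defined on your Hyper-V host. The in-path parameters follow the {WL}an{01234}_0Network pattern described above:
.\install-steelhead.ps1 -InstallLocation "D:\VMs\SteelHead" -Model VCX30 -VMName "SteelHead-VCX30" -PrimaryNetwork "Primary-vSwitch" -AuxNetwork "Aux-vSwitch" -Lan0_0Network "LAN-vSwitch" -Wan0_0Network "WAN-vSwitch" -PowerOn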
Deploying on Microsoft
1. Obtain a deployment package and extract the contents to a directory accessible to the host virtual machine.
2. In PowerShell, run the installation script. You can enter all the script parameters as part of the run command. If you do not enter any parameters, you are prompted for the two required parameters.
3. If prompted, enter the installation location and the product model. Virtual machine creation can take 30 minutes or more to complete.
4. In Hyper-V Manager, verify all the virtual machine settings.
5. In the virtual machine settings, set the reserve weight for CPU to 100 and the memory weight to High.
6. In Hyper‑V Manager, create a virtual switch for the appliance’s primary, auxiliary, LAN, and WAN interfaces. Ensure that Enable virtual LAN identification for management operating system is disabled; you do not enable VLAN tagging at the Hyper‑V Virtual Network Switch level.
7. Connect each virtual switch interface to the corresponding appliance interface. If your network uses VLAN tagging, select Enable virtual LAN identification for the LAN0_0 and WAN0_0 interfaces. If your network does not use VLAN tagging, disable the feature.
8. Power on the virtual machine and log in to it.
9. In the virtual machine, open the network connections control panel.
10. Under the networking properties settings for the connection to the Hyper-V server, set the jumbo frame size to 9014 bytes.
11. In the same advanced properties dialog box, select Priority & VLAN. For a VLAN-tagged network, select Packet Priority & VLAN Enabled. For a non-VLAN-tagged network, select Packet Priority & VLAN Disabled.
12. Close the properties dialog box and restart the virtual machine.
Deploying manually on Microsoft
Before you begin, create and connect the necessary virtual interfaces and switches.
1. Create a new virtual machine.
2. Remove the CD drive.
3. Create a fixed-size disk for the management VHD. You can perform this step from the Hyper‑V Manager, or you can use the Convert-VHD script in the product’s deployment package.
4. Add the management VHD as the disk in controller 0, slot 0.
5. Create a fixed-size disk for the appliance’s data store. Add this disk to controller 0, slot 1.
6. Create virtual NICs for the primary and auxiliary interfaces, and create two virtual NICs for each additional in-path interface pair.
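If you prefer to script these manual steps, the standard Hyper-V PowerShell cmdlets can perform the same operations. This is a hedged sketch, not the Riverbed installation script: the virtual machine name, paths, memory size, disk size, generation, and switch names are placeholder assumptions for your own environment.
# Hypothetical example: create the VM without a disk, then attach disks and NICs in the documented order
New-VM -Name "SteelHead-manual" -MemoryStartupBytes 8GB -Generation 1 -NoVHD -Path "D:\VMs"
# Remove the default DVD drive
Get-VMDvdDrive -VMName "SteelHead-manual" | Remove-VMDvdDrive
# Convert the management VHD from the package to a fixed-size disk and attach it to controller 0, slot 0
Convert-VHD -Path "D:\VMs\pkg\mgmt.vhd" -DestinationPath "D:\VMs\SteelHead-manual\mgmt-fixed.vhd" -VHDType Fixed
Add-VMHardDiskDrive -VMName "SteelHead-manual" -ControllerType IDE -ControllerNumber 0 -ControllerLocation 0 -Path "D:\VMs\SteelHead-manual\mgmt-fixed.vhd"
# Create a fixed-size data store disk and attach it to controller 0, slot 1
New-VHD -Path "D:\VMs\SteelHead-manual\segstore.vhd" -SizeBytes 80GB -Fixed
Add-VMHardDiskDrive -VMName "SteelHead-manual" -ControllerType IDE -ControllerNumber 0 -ControllerLocation 1 -Path "D:\VMs\SteelHead-manual\segstore.vhd"
# Create NICs for the primary and auxiliary interfaces, then one pair per in-path interface
Add-VMNetworkAdapter -VMName "SteelHead-manual" -Name primary -SwitchName "Primary-vSwitch"
Add-VMNetworkAdapter -VMName "SteelHead-manual" -Name aux -SwitchName "Aux-vSwitch"
Add-VMNetworkAdapter -VMName "SteelHead-manual" -Name lan0_0 -SwitchName "LAN-vSwitch"
Add-VMNetworkAdapter -VMName "SteelHead-manual" -Name wan0_0 -SwitchName "WAN-vSwitch"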
MAC address spoofing
We recommend that you enable MAC address spoofing on the virtual network adapters. You’ll need to run PowerShell as an administrator.
Set-VMNetworkAdapter -VMName "<vm-name>" -ComputerName <hyper-v-host> -Name <v-network-adapter> -MacAddressSpoofing On
where <vm-name> is the name of the virtual machine, <hyper‑v-host> is the name of the Hyper‑V host, and <v-network-adapter> is the name of the virtual network adapter.
This example shows how to enable MAC address spoofing for the lan0_0 and wan0_0 interfaces on the virtual machine myVM and Hyper‑V host myHost:
Administrator> Set-VMNetworkAdapter -VMName "myVM" -ComputerName myHost -Name lan0_0 -MacAddressSpoofing On
Administrator> Set-VMNetworkAdapter -VMName "myVM" -ComputerName myHost -Name wan0_0 -MacAddressSpoofing On
Verify that MAC address spoofing has been enabled:
Get-VMNetworkAdapter -VMName <vm-name> -ComputerName <hyper-v-host> | fl Name, MacAddressSpoofing
where <vm-name> is the name of the virtual machine and <hyper‑v-host> is the name of the Hyper‑V host.
This example shows how to display MAC address spoofing status for the configuration as a formatted list:
Get-VMNetworkAdapter -VMName "myVM" -ComputerName myHost | fl Name, MacAddressSpoofing
 
Name                : primary
MacAddressSpoofing  : Off
 
Name                : aux
MacAddressSpoofing  : Off
 
Name                : lan0_0
MacAddressSpoofing  : On
 
Name                : wan0_0
MacAddressSpoofing  : On
VLAN tagging networks
Determine if your network uses VLAN tagging before you deploy the appliance. Specific configuration is required depending on the VLAN configuration. This table shows the configuration to use for networks with and without VLAN tagging.
Type of adapter or interface                               VLAN-tagged packet setting    Non-VLAN-tagged packet setting
Windows network adapter configuration for VLAN             VLAN enabled                  VLAN disabled
Hyper-V virtual switch network interface configuration     VLAN disabled                 VLAN disabled
Hyper-V virtual machine network interface configuration    VLAN enabled                  VLAN disabled
SteelHead in-path interface configuration                  VLAN ID 0                     VLAN ID 0
Troubleshooting Microsoft deployments
After powering on the virtual machine, if you see messages about missing interfaces or disks, check these troubleshooting tips:
If there are missing interfaces on the appliance, check the virtual machine settings and verify that you are using synthetic NICs, and that the cards are connected.
If the appliance logs messages about missing disks, ensure that the data store disk is present and is in slot 1 of controller 0.
After every change to the Hyper-V configuration settings, you must shut down and restart the virtual machine.
Be sure that the lan0_0 and wan0_0 interfaces are mapped to the correct NIC on the server.
About Linux deployments
The deployment package for KVM is a TAR archive. Kernel-based Virtual Machine (KVM) is a virtualization solution for Linux on x86 hardware. KVM consists of a loadable kernel module that provides the core virtualization infrastructure and a processor-specific module that provides virtualization extensions. Using KVM, you can run multiple virtual machines running unmodified Linux or Windows images. KVM is open source software. The kernel component of KVM is included in mainline Linux as of version 2.6.20. The user-space component of KVM is included in mainline QEMU as of version 1.3.
KVM supports various I/O virtualization technologies. Paravirtualized drivers, which enable direct communication between hypervisor-level drivers and guest-level drivers, provide the best performance when compared with full virtualization. The virtio API provides a common set of paravirtualized device drivers for KVM.
Riverbed supports only virtio-based paravirtualized device drivers.
The virtual NICs must be configured in this order: primary, auxiliary (aux), LAN, and then WAN. The virtual disks must be configured in this order: management (mgmt) and then data store (segstore).
Virtual appliances for KVM can be deployed in different ways, each method using a different procedure. This document describes how to deploy appliances for KVM by using the installation script supplied in the product’s deployment package and the virsh command.
Ensure the KVM host system has at least four network interfaces, and that the system’s network environment is configured so that the LAN and WAN interfaces are on separate networks or bridges.
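As an illustration of keeping the LAN and WAN interfaces on separate bridges, the following sketch creates two Linux bridges with the ip tool and enslaves one physical interface to each. The bridge and interface names (br-lan, br-wan, eth2, eth3) are placeholders for your host's actual devices:
# Hypothetical example: separate LAN-side and WAN-side bridges on the KVM host
sudo ip link add name br-lan type bridge
sudo ip link add name br-wan type bridge
sudo ip link set eth2 master br-lan
sudo ip link set eth3 master br-wan
sudo ip link set br-lan up
sudo ip link set br-wan up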
SteelHead Central Controller considerations
Resizing the data store might be required if your virtual SCC is managing more than ten appliances.
Because the virsh reboot and virsh shutdown commands are not supported by the virtual SCC, you'll need to use the virsh destroy command instead. Run these commands to resize the data store disk:
virsh destroy <name-of-kvm-instance>
sudo qemu-img resize <name-of-datastore>.img +<size> (for example: sudo qemu-img resize datastore.img +2GB)
virsh start <name-of-kvm-instance>
Deploying on Linux
To deploy on Linux, run the installation script, specifying values for the:
name for the virtual appliance.
product model you want to use. Supported models are listed in riverbed_model_tmp.
location of the directory for the appliance’s data store. For example: /mnt/riverbed/segstore.img. The data store files will be created as /mnt/riverbed/segstore/segstore_1.img, /mnt/riverbed/segstore/segstore_2.img, and so on.
networks to which you want to connect the primary, auxiliary (aux), LAN, and WAN interfaces, and whether these are networks or bridges.
Create and start the virtual appliance by entering the virsh create <virtual-appliance-name>.xml command. Alternatively, register the appliance as a persistent domain with the virsh define <virtual-appliance-name>.xml command, and then start it with the virsh start <virtual-appliance-name> command.
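For example, if the installation script produced a domain definition file named steelhead-vcx.xml (a placeholder name), the persistent define-and-start sequence would be:
virsh define steelhead-vcx.xml
virsh start steelhead-vcx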
About Cisco deployments
The deployment package for the Cisco Enterprise Network Compute System (ENCS) is the same TAR archive used for Linux. ENCS is a line of compute appliances designed for the Cisco SD-Branch and Enterprise Network Functions Virtualization (ENFV) solution. Cisco SD-Branch is a hosting platform designed for the enterprise branch edge. The platform provides a virtual environment that enables the automated deployment of virtual network services consisting of multiple virtualized network functions (VNFs). Using the platform, administrators can leverage the flexibility of software-defined networking (SDN) capabilities to service chain VNFs in a variety of ways. Cisco SD-Branch comprises these components:
ENCS physical x86 hardware that provides compute resources to back the virtual layers.
Network Function Virtual Infrastructure Software (NFVIS) platform that facilitates the deployment and operation of VNFs and hardware components.
An orchestration environment to allow easy automation of the deployment of virtualized network services, consisting of multiple VNFs.
In this context, Riverbed products serve as VNFs running as virtual appliances on NFVIS. In-path deployments and out-of-path deployments using Web Cache Communication Protocol (WCCP) and policy-based routing (PBR) are supported. While appliances can be deployed in different locations in a topology depending on your needs, the procedures here focus on an in-path deployment. After you understand the underlying concepts, you will be able to design and execute different kinds of deployments.
Paravirtualized device drivers and Single Root Input/Output Virtualization (SR-IOV) are supported. However, the Cisco 5000 Series ENCS does not support SR-IOV ports in promiscuous mode. The virtual switches that connect the physical host’s ports to the appliance’s LAN and WAN interfaces must be configured to use promiscuous mode to ensure all traffic reaches the appliance. Therefore, the appliance LAN and WAN interfaces cannot leverage SR-IOV. Additionally, SR-IOV cannot be used for in-box service chaining on Cisco 5000 Series ENCS; only virtio interfaces may be used.
Before deployment, ensure that the:
ENCS and NFVIS components are running the most current software from Cisco.
host system has at least four network interfaces.
LAN and WAN interfaces are on separate networks or bridges.
Linux system where you plan to prepare the software image has QEMU installed.
host meets the minimum standards for the appliance model.
Download the package to a system running any supported Linux operating system. You will prepare the software image for use on the Cisco 5000 Series ENCS on that system.
Preparing images for NFVIS
You can use the Riverbed-provided helper script, together with Cisco helper files, to create an image file that can be immediately uploaded to NFVIS and deployed. This method provides some flexibility in setting the attributes of the virtual machine and automates much of the process. You can also manually prepare the image. This method requires more steps but provides the most flexibility in configuring virtual machine attributes.
After you add an image to the NFVIS image repository and register it, you can use the image on any Cisco NFVIS system.
Preparing images using scripts
The script helps to automate the image preparation and packaging process while allowing you some flexibility in setting virtual machine properties. The script generates a .tar.gz image file to your specifications that you can upload and deploy.
The Riverbed helper script requires two additional files that you obtain from Cisco. You must place these files in the same location as the Riverbed helper script:
image_properties_template.xml
nfvpt.py
Unzip and untar the product’s deployment package, and then log in to your account on NFVIS. Choose VM Life Cycle > Image Repository > Browse Datastore > Data > intdatastore > Uploads > vmpackagingutility > nfvisvmpackagingtool.tar.
Download the nfvisvmpackagingtool.tar file to your local system. Unpack the nfvisvmpackagingtool.tar file, and locate these files: image_properties_template.xml and nfvpt.py. Place these two files in the same location where you placed the Riverbed helper script, riverbed_encs_package_gen.py.
Run the Riverbed helper script and follow the prompts. The system creates a .tar.gz file that is suitable for upload to the NFVIS image repository.
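For example, from the directory that contains riverbed_encs_package_gen.py along with image_properties_template.xml and nfvpt.py, the invocation might simply be the following; the python3 interpreter is an assumption, and the script prompts you for the image attributes:
python3 riverbed_encs_package_gen.py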
In the NFVIS console, choose VM Life Cycle > Image Repository: Image Registration. Upload the .tar.gz file to the repository and then register it.
Preparing images without using scripts
Follow this procedure if you want more flexibility in setting virtual machine attributes. You’ll need to:
extract the contents of the downloaded deployment package.
modify the mgmt.qcow2 file, if necessary, using the qemu-img resize mgmt.qcow2 +<amount-of-additional-space> command. The default size is 20 GB. Some appliance models may require a larger management disk.
create a second qcow2 file for the appliance’s data store disk using the qemu-img create -f qcow2 segstore.<size>G.qcow2 <size>G command.
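For example, to grow the management disk by 10 GB and create a 100 GB data store disk (both sizes are illustrative; use the sizes your appliance model requires):
qemu-img resize mgmt.qcow2 +10G
qemu-img create -f qcow2 segstore.100G.qcow2 100G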
Order is important when creating, uploading, and connecting virtual disks. Always work with the management disk first and then the data store disk.
After you have prepared these files, you can import them into NFVIS using the Image Packaging section of the NFVIS console. There you can package the uploaded files into an image (.tar.gz file) suitable for use on any Cisco 5000 Series ENCS. After packaging, register the image in the repository.
Uploading images to the NFVIS image repository
To upload images to the NFVIS image repository, log in to your account on NFVIS. Choose VM Life Cycle > Image Repository, and select the Image Packaging tab. Click the icon next to VM Packages, and then enter values for these fields:
Package Name: the name for this instance of the SteelHead package.
VM Version: the version for this instance of the SteelHead package.
VM Type: select Other from the drop-down menu. You must select Other.
Dedicated Cores (Optimize): select Yes from the drop-down menu.
Serial Console: select Enable from the drop-down menu.
Sriov Driver(s): select all available options if you plan to use SR-IOV on the primary and auxiliary interfaces.
Raw Disk File Bus: select Virtio from the drop-down menu.
Thick Disk Provisioning: select Yes if you are deploying to a production environment.
Accept the default values for items in the bootstrap section.
Select Raw Images (.qcow2/.img) and upload both of the qcow2 files you created in Preparing images without using scripts: the mgmt.qcow2 file first, and then the data store qcow2 file. The upload order is important.
Optionally, you can create preconfigured deployment profiles using the Advanced Configuration settings.
After the qcow2 files are uploaded, submit them. The uploaded files are packaged into a tar.gz file, and then the tar.gz file is added to the list of packages at the bottom of the Image Packaging tab.
Register the new package. Registered images can be used on any NFVIS system.
Deploying on the Cisco 5100 Series ENCS
Before you deploy the appliance, ensure that the virtual environment has:
a representation of the physical host ports GEO-0 through GEO-3.
two SR-IOV interfaces on each GEO port that are available for virtual machines.
a WAN virtual switch (default name is wan-net) connected to the GEO-0 port.
a LAN virtual switch (default name is lan-net) connected to the GEO-2 and GEO-3 ports.
a virtual switch (default name is service-net) connected to a virtual router.
a virtual router connected to the service-net virtual switch and the wan-net virtual switch.
Some elements are created for you by the system using default values, but you will need to manually create the router and service-net virtual switch.
After your virtual environment is in place, you can create the appliance, assign interfaces, and then deploy the environment including the appliance.
Order is important when creating and connecting virtual interfaces. Virtual interfaces must be created and connected in this order: primary, auxiliary, LAN, and WAN.
Order    SteelHead interface    Assign to           Type
1        primary                LAN-side vswitch    virtio
2        auxiliary              GEO-1               virtio
3        LAN_0                  GEO-3               virtio
4        WAN_0                  WAN-side router     virtio
Remove the connection between the GEO-3 port and the lan-net virtual switch. To do so, choose VM Life Cycle > Networking. In the Networks & Bridges section, find the row for the lan-net virtual switch, and click the edit icon (blue pencil). In the lan-net virtual switch details page, find the Interfaces field, remove GEO-3, and then click Submit.
Choose VM Life Cycle > Deploy. Drag and drop an Other icon from the palette at the top of the VM Deployment page to an open space in the canvas below. Ensure the Other icon on the canvas is selected, and then under VM Details specify these items:
VM Name specifies the name for the SteelHead.
Image specifies the .tar.gz image.
Profile selects the profile.
Deployment Disk specifies Internal. For the Cisco 5100 Series ENCS, this item must be set to Internal.
One at a time, drag and drop NETWORK icons onto the canvas, connecting one end of each to the appliance.
For the first NETWORK icon, connect the other end to the lan-net virtual switch and ensure that vNIC ID under vNIC Details is set to 0.
For the second NETWORK icon, connect the other end to the GEO-1 port. Ensure that the vNIC ID is set to 1.
For the third NETWORK icon, connect the other end to the GEO-3 port. Ensure that the vNIC ID is set to 2. This will be the LAN_0 interface.
For the fourth NETWORK icon, connect the other end to the service-net virtual switch. Ensure the vNIC ID is set to 3. This will be the WAN_0 interface.
Deploy the setup. Deployment is complete when the status of the virtual machine changes from Deploying to Active.
Start the appliance. Startup is complete when the console displays a login prompt.
About Nutanix deployments
The deployment package for Nutanix is the same TAR archive used for Linux.
After you have downloaded the package and extracted its contents, upload the mgmt.qcow2 image file to your Nutanix Prism system. After upload, confirm that the image is available and that its status is Active.
Creating the Nutanix virtual machine
After upload and confirmation of its availability, create a virtual machine capable of hosting the model you want.
When adding a management disk, select Clone from Image Service as the Operation, select PCI as the bus type, and clone the image from the deployment package. Ensure that the newly added disk is first in the boot priority. Depending on the appliance model, you might need to expand the disk. For details, go to Knowledge Base article S29147.
When adding a data store disk, select PCI as the type and allocate it on the storage container.
You need to add a total of four NICs, one each for these interfaces: primary, auxiliary, LAN, and WAN.
Create the primary, or management, NIC first. Ensure that the VLAN IDs for the LAN and WAN interfaces are different from each other, and that those VLAN IDs are not used by any other appliance on your network.
Power on the virtual machine, and then log in to the appliance and complete the initial configuration.
Auto bootup and keystroke entry on the default Nutanix console have known issues. Perform the following actions before trying to access the virtual machine's console:
1. SSH into one of the CVMs in the AHV cluster and execute this command:
acli vm.serial_port_create <vm_name> type=kServer index=0
2. Power cycle the virtual machine for the above command to take effect.
3. Select "COM1" when launching the console of the virtual machine.