New Features in Version 9.2
Version 9.2 of SteelHead-v provides support for Kernel-based Virtual Machine (KVM).
New Features in Version 9.0
Version 9.0 of SteelHead-v includes a virtual hardware benchmarking tool. The tool enables you to test the optimization and disk usage performance of the appliance’s underlying virtual hardware. Test results indicate the highest virtual appliance model that can run on the hardware supporting the tested appliance.
SteelHead-v Deployment Guidelines
Note: Riverbed requires that you follow these guidelines when deploying the SteelHead-v package on a hypervisor. If you do not follow the configuration guidelines, SteelHead-v might not function properly or might cause outages in your network.
Network Configuration
When you deploy SteelHead-v on a hypervisor, follow this guideline:
• Ensure that a network loop does not form - An in-path interface is, essentially, a software connection between the lanX_Y and wanX_Y interfaces. Before deploying a SteelHead-v, Riverbed strongly recommends that you connect each LAN and WAN virtual interface to a distinct virtual switch and physical NIC (through the vSphere Networking tab).
Caution: Connecting LAN and WAN virtual NICs to the same vSwitch or physical NIC could create a loop in the system and might make your hypervisor unreachable.
When you deploy SteelHead-v on ESX or ESXi, follow these guidelines:
• Enable promiscuous mode for the LAN/WAN vSwitch - Promiscuous mode allows the LAN/WAN SteelHead-v NICs to intercept traffic not destined for the SteelHead-v and is mandatory for traffic optimization in in-path deployments. You must set promiscuous mode to Accept for each in-path virtual NIC. You can enable promiscuous mode through the vSwitch properties in vSphere (a configuration sketch follows this list). For details, see Installing SteelHead-v.
• Use distinct port groups for each LAN or WAN virtual NIC connected to a vSwitch for each SteelHead-v - If you are running multiple SteelHead-v virtual machines (VMs) on a single virtual host, you must add the LAN (or WAN) virtual NIC from each VM into a different port group (on each vSwitch). Using distinct port groups for each LAN or WAN virtual NIC prevents the formation of network loops.
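The promiscuous-mode setting can also be applied programmatically. The following sketch uses pyVmomi (VMware's Python SDK) to set promiscuous mode to Accept on the vSwitch that carries a SteelHead-v in-path vNIC. It is an illustration of the vSphere API call only, not a Riverbed-provided tool; the host name, credentials, and vSwitch name are placeholder assumptions.

```python
# Sketch only: assumes the pyvmomi package and direct access to a standalone ESXi host.
# Host name, credentials, and the vSwitch name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ESXI_HOST, USER, PWD = "esxi01.example.com", "root", "secret"
VSWITCH = "vSwitch1"   # hypothetical vSwitch carrying a SteelHead-v LAN or WAN vNIC

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
si = SmartConnect(host=ESXI_HOST, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    net_sys = host.configManager.networkSystem

    # Reuse the existing vSwitch specification and change only the security policy.
    vswitch = next(v for v in net_sys.networkInfo.vswitch if v.name == VSWITCH)
    spec = vswitch.spec
    if spec.policy is None:
        spec.policy = vim.host.NetworkPolicy()
    if spec.policy.security is None:
        spec.policy.security = vim.host.NetworkPolicy.SecurityPolicy()
    spec.policy.security.allowPromiscuous = True
    net_sys.UpdateVirtualSwitch(vswitchName=VSWITCH, spec=spec)
finally:
    Disconnect(si)
```

Note that a port group's own security policy, if explicitly set, overrides the vSwitch-level setting, so verify the in-path port groups as well.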
Network Performance
Follow these configuration tips to improve performance:
• Use at least a gigabit link for LAN/WAN - For optimal performance, connect the LAN/WAN virtual interfaces to physical interfaces that are capable of at least 1 Gbps.
• Do not share physical NICs - For optimal performance, assign a physical NIC to a single LAN or WAN interface. Do not share physical NICs destined for LAN/WAN virtual interfaces with other VMs running on the hypervisor. Doing so can create performance bottlenecks.
• Ensure that the host has resources for overhead - In addition to reserving the CPU resources needed for the SteelHead-v model, verify that additional unclaimed resources are available. Due to hypervisor overhead, VMs can exceed their configured reservation. For details about hypervisor resource reservation and calculating overhead, see Managing Licenses and Model Upgrades.
• Do not overprovision the physical CPUs - Do not run more vCPUs than there are physical CPU cores. For example, if a hypervisor host has a quad-core CPU, all the VMs on the host should use no more than four vCPUs.
• Use a server-grade CPU for the hypervisor - For example, use a Xeon or Opteron CPU as opposed to an Intel Atom.
• Always reserve RAM - Memory is another very important factor in determining SteelHead-v performance. Reserve the RAM that the SteelHead-v model needs, and ensure that extra RAM remains available for hypervisor overhead. This headroom helps maintain performance if the hypervisor exceeds its reserved capacity. (A resource reservation sketch follows this list.)
• Virtual RAM should not exceed physical RAM - The total virtual RAM provisioned for all running VMs should not be greater than the physical RAM on the system.
• Do not use low-quality storage for the RiOS data store disk - Make sure that the SteelHead-v disk used for the data store virtual machine disk (VMDK) for ESX or virtual hard disk (VHD) for Hyper-V resides on a disk medium that supports a high number of Input/Output Operations Per Second (IOPS). For example, use NAS, SAN, or dedicated SATA disks.
• Do not share host physical disks - To achieve near-native disk I/O performance, do not share host physical disks (such as SCSI or SATA disks) between VMs. When you deploy SteelHead-v, allocate an unshared disk for the RiOS data store disk.
• Do not use hyperthreading - Hyperthreading can cause contention among the virtual cores, resulting in significant loss of performance.
• Set BIOS power management for maximum performance - If the power management settings in the BIOS are configurable, set them to maximize performance.
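To illustrate the reservation tips above, the following hedged pyVmomi sketch reserves all of a SteelHead-v VM's configured memory and a CPU reservation sized from the model's vCPU count and minimum clock speed. The vCenter address, credentials, VM name, and MHz figure are placeholder assumptions; derive the CPU figure from the model tables later in this document.

```python
# Sketch only: placeholder vCenter, credentials, VM name, and CPU reservation figure.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

VCENTER, USER, PWD = "vcenter.example.com", "administrator@vsphere.local", "secret"
VM_NAME = "steelhead-v-01"
CPU_RESERVATION_MHZ = 2 * 1800   # e.g. two vCPUs at an 1800 MHz minimum clock speed

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == VM_NAME)

    spec = vim.vm.ConfigSpec()
    # Reserve the VM's full configured memory so the hypervisor never reclaims it.
    spec.memoryAllocation = vim.ResourceAllocationInfo(reservation=vm.config.hardware.memoryMB)
    spec.cpuAllocation = vim.ResourceAllocationInfo(reservation=CPU_RESERVATION_MHZ)
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
finally:
    Disconnect(si)
```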
Deployment Options
Typically you deploy SteelHead-v on a LAN with communication between appliances taking place over a private WAN or VPN. Because optimization between SteelHeads typically takes place over a secure WAN, it is not necessary to configure company firewalls to support SteelHead-specific ports.
For optimal performance, minimize latency between SteelHead-v appliances and their respective clients and servers. Place the SteelHead-v appliances as close as possible to your network endpoints: client-side SteelHead-v appliances as close to your clients as possible, and server-side SteelHead-v appliances as close to your servers as possible.
Ideally, SteelHead-v appliances optimize only traffic that is initiated or terminated at their local sites. The best and easiest way to achieve this traffic pattern is to deploy the SteelHead-v appliances where the LAN connects to the WAN, and not where any LAN-to-LAN or WAN-to-WAN traffic can pass through (or be redirected to) the SteelHead.
For detailed information about deployment options and best practices for deploying SteelHeads, see the SteelHead Deployment Guide.
Before you begin the installation and configuration process, you must select a network deployment.
Note: You can also use the Discovery Agent to deploy the SteelHead-v. For information, see Using Discovery Agent.
In-Path Deployment
You can deploy SteelHead-v in the same scenarios as the SteelHead, with the following exception: SteelHead-v software does not provide a failover mechanism like the SteelHead fail-to-wire. For full failover functionality, you must install a Riverbed NIC with SteelHead-v.
Riverbed bypass cards come in four-port and two-port models. For more information about NICs and SteelHead-v, see NICs for SteelHead-v.
For deployments where a Riverbed bypass card is not an option (for example, in a Cisco SRE deployment), Riverbed recommends that you do not deploy SteelHead-v in-path. If you are not using a bypass card, you can still have a failover mechanism by employing either a virtual in-path or an out-of-path deployment. These deployments allow a router using WCCP or PBR to handle failover.
Promiscuous mode is required for in-path deployments.
Note: SteelHead-v on Hyper-V does not support the direct in-path deployment or the Riverbed bypass NIC.
Virtual In-Path Deployment
In a virtual in-path deployment, SteelHead-v is virtually in the path between clients and servers. Traffic moves in and out of the same WAN interface, and the LAN interface is not used. This deployment differs from a physical in-path deployment in that a packet redirection mechanism, such as WCCP or PBR, directs packets to SteelHead-v appliances that are not in the physical path of the client or server. In this configuration, clients and servers continue to see client and server IP addresses.
On SteelHead-v models with multiple WAN ports, you can deploy WCCP and PBR with the same multiple interface options available on the SteelHead.
For a virtual in-path deployment, attach only the WAN virtual NIC to the physical NIC, and configure the router using WCCP or PBR to forward traffic to the VM to optimize. You must also enable in-path Out-of-Path (OOP) on SteelHead-v.
Out-of-Path Deployment
In an out-of-path deployment, SteelHead-v is not in the direct path between the client and the server. Servers see the IP address of the server-side SteelHead rather than the client IP address, which might have an impact on security policies.
For a virtual OOP deployment, connect the primary interface to the physical in-path NIC and configure the router to forward traffic to this NIC. You must also enable OOP on SteelHead-v.
The following caveats apply to server-side OOP SteelHead-v configuration:
• OOP configuration does not support autodiscovery. You must create a fixed-target rule on the client-side SteelHead.
• You must create an OOP connection from an in-path or logical in-path SteelHead and direct it to port 7810 on the primary interface of the server-side SteelHead; this setting is mandatory. (A reachability check sketch follows this list.)
• Interception is not supported on the primary interface.
• An OOP configuration provides nontransparent optimization from the server perspective. Clients connect to servers, but the servers see the connections as coming from the server-side SteelHead. This affects log files, server-side ACLs, and bidirectional applications such as rsh.
• You can use OOP configurations along with in-path or logical in-path configurations.
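As a quick sanity check for the fixed-target rule described above, the following small Python sketch verifies that the server-side SteelHead's primary interface accepts TCP connections on port 7810. The IP address is a placeholder, and the check confirms reachability only; it does not validate the optimization configuration.

```python
# Sketch only: the IP address below is a placeholder for the primary interface
# of the server-side SteelHead.
import socket

SERVER_SIDE_PRIMARY_IP = "10.0.1.20"
PORT = 7810

try:
    with socket.create_connection((SERVER_SIDE_PRIMARY_IP, PORT), timeout=5):
        print(f"{SERVER_SIDE_PRIMARY_IP}:{PORT} is reachable")
except OSError as exc:
    print(f"Cannot reach {SERVER_SIDE_PRIMARY_IP}:{PORT}: {exc}")
```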
SteelHead-v Platform Models
The tables in this section list the platform models available for SteelHead-v and SteelHead-v CX (VCX). Each SteelHead-v has a primary and an auxiliary interface. Confirm that you have the resources required for the SteelHead-v model you are installing before you download and install SteelHead-v.
This table lists the SteelHead-v xx50 models.
SteelHead-v Model | Virtual CPU | Min. CPU Speed | Memory | Management Disk (VMDK1) | RiOS Data Store Disk (VMDK2) | Optimized WAN Capacity | Max. Connections |
V150M | 1 CPU | 1200 MHz | 1 GB | 30 GB | 44 GB | 1 Mbps | 20 |
V250L | 1 CPU | 1200 MHz | 1 GB | 30 GB | 44 GB | 1 Mbps | 30 |
V250M | 1 CPU | 1200 MHz | 1 GB | 30 GB | 44 GB | 4 Mbps | 125 |
V250H | 1 CPU | 1200 MHz | 1 GB | 30 GB | 44 GB | 4 Mbps | 200 |
V550M | 2 CPUs | 1200 MHz | 2 GB | 30 GB | 80 GB | 2 Mbps | 300 |
V550H | 2 CPUs | 1200 MHz | 2 GB | 30 GB | 80 GB | 4 Mbps | 600 |
V1050L | 2 CPUs | 1800 MHz | 2 GB | 30 GB | 102 GB | 8 Mbps | 800 |
V1050M | 2 CPUs | 1800 MHz | 2 GB | 30 GB | 102 GB | 10 Mbps | 1300 |
V1050H | 2 CPUs | 1800 MHz | 4 GB | 30 GB | 202 GB | 20 Mbps | 2300 |
V2050L | 4 CPUs | 2000 MHz | 6 GB | 30 GB | 400 GB | 45 Mbps | 2500 |
V2050M | 4 CPUs | 2000 MHz | 6 GB | 30 GB | 400 GB | 45 Mbps | 4000 |
V2050H | 4 CPUs | 2000 MHz | 6 GB | 30 GB | 400 GB | 45 Mbps or 90 Mbps with a separate upgrade | 6000 |
This table lists the SteelHead-v CX xx55 models.
SteelHead-v Model | Virtual CPU | Min. CPU Speed | Memory | Management Disk (VMDK1) | RiOS Data Store Disk (VMDK2+) | QoS Bandwidth | Optimized WAN Capacity | Max. Connections |
VCX255U | 1 CPU | 1000 MHz | 2 GB | 38 GB | 50 GB | 4 Mbps | 2 Mbps | 50 |
VCX255L | 1 CPU | 1000 MHz | 2 GB | 38 GB | 50 GB | 12 Mbps | 6 Mbps | 75 |
VCX255M | 1 CPU | 1000 MHz | 2 GB | 38 GB | 50 GB | 12 Mbps | 6 Mbps | 150 |
VCX255H | 1 CPU | 1000 MHz | 2 GB | 38 GB | 50 GB | 12 Mbps | 6 Mbps | 230 |
VCX555L | 1 CPU | 1200 MHz | 2 GB | 38 GB | 80 GB | 12 Mbps | 6 Mbps | 250 |
VCX555M | 1 CPU | 1200 MHz | 2 GB | 38 GB | 80 GB | 20 Mbps | 10 Mbps | 400 |
VCX555H | 1 CPU | 1200 MHz | 2 GB | 38 GB | 80 GB | 20 Mbps | 10 Mbps | 650 |
VCX755L | 2 CPUs | 1200 MHz | 2 GB | 38 GB | 102 GB | 45 Mbps | 10 Mbps | 900 |
VCX755M | 2 CPUs | 1200 MHz | 2 GB | 38 GB | 102 GB | 45 Mbps | 10 Mbps | 1500 |
VCX755H | 2 CPUs | 1200 MHz | 4 GB | 38 GB | 150 GB | 45 Mbps | 20 Mbps | 2300 |
VCX1555L | 4 CPUs | 1200 MHz | 8 GB | 38 GB | 400 GB | 100 Mbps | 50 Mbps | 3000 |
VCX1555M | 4 CPUs | 1200 MHz | 8 GB | 38 GB | 400 GB | 100 Mbps | 50 Mbps | 4500 |
VCX1555H | 4 CPUs | 1200 MHz | 8 GB | 38 GB | 400 GB | 100 Mbps | 100 Mbps | 6000 |
VCX5055M | 12 CPUs | 1200 MHz | 16 GB | 82 GB | 8 x 80 GB | No limit | 200 Mbps | 14,000 |
VCX5055H | 12 CPUs | 1200 MHz | 16 GB | 82 GB | 8 x 80 GB | No limit | 400 Mbps | 25,000 |
VCX7055L | 16 CPUs | 1200 MHz | 32 GB | 178 GB | 10 x 160 GB | No limit | 622 Mbps | 75,000 |
VCX7055M | 24 CPUs | 1200 MHz | 48 GB | 178 GB | 14 x 160 GB | No limit | 1 Gbps | 100,000 |
The platform families are independent. You cannot upgrade an xx50 model to an xx55 model. The xx55 virtual models require RiOS 8.0 or later.
The data store size per model allocates extra disk space to accommodate hypervisor overhead. As of RiOS 9.0, the size of the management disk for new open virtualization appliance (OVA) deployments of the VCX models is 38 GB. Older deployments that are upgraded still use a 50-GB management disk.
Flexible RiOS Data Store
As of RiOS 9.0, the flexible data store feature for VCX models supports a smaller data store size, down to a minimum 12 GB.
To change the disk size of a running SteelHead-v, you must first power off the VM. From the Settings section, you can expand the RiOS data store (second) disk, or remove it and replace it with a smaller disk. (Simply shrinking the existing disk does not work.) Modifying the disk size causes the RiOS data store to clear automatically.
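The same resize procedure can be scripted. The following hedged pyVmomi sketch powers off the VM and then grows the second virtual disk (the RiOS data store). The vCenter address, credentials, VM name, and target size are placeholder assumptions; as noted above, shrinking a disk this way does not work.

```python
# Sketch only: placeholder vCenter, credentials, VM name, and target size.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

VCENTER, USER, PWD = "vcenter.example.com", "administrator@vsphere.local", "secret"
VM_NAME = "steelhead-v-01"
NEW_SIZE_GB = 80   # hypothetical target size for the RiOS data store disk

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == VM_NAME)

    # Power off first, as the procedure above requires.
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        WaitForTask(vm.PowerOffVM_Task())

    # The RiOS data store is the second virtual disk (VMDK2/VHD2) on the VM.
    disks = [d for d in vm.config.hardware.device
             if isinstance(d, vim.vm.device.VirtualDisk)]
    data_disk = disks[1]
    data_disk.capacityInKB = NEW_SIZE_GB * 1024 * 1024

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=data_disk)
    WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change])))
finally:
    Disconnect(si)
```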
If you provide a disk larger than the configured RiOS data store for the model, the entire disk is partitioned, but only the allotted amount for the model is used.
Memory and CPU minimums are hard requirements for a model to run. The flexible RiOS data store is not supported on the older Vxx50 models.
Multiple RiOS Data Stores
As of RiOS 8.6, SteelHead-v models VCX5055 and VCX7055 support up to 14 RiOS data stores using Fault Tolerant Storage (FTS). Riverbed recommends that all RiOS data stores on an appliance be the same size.
To add additional data stores, you must power off the VM.
In-Path Pairing for NIC Interfaces
SteelHead-v models are not limited to a fixed number of NIC interfaces. However, the in-path pair limit is four (four LAN and four WAN interfaces), including bypass cards. If you want to use the SteelHead-v bypass feature, you are limited to the number of hardware bypass pairs the model can support.
Each SteelHead-v requires a primary and auxiliary interface, which are the first two interfaces added. If you add additional interface pairs to the VM, they are added as in-path optimization interfaces. Total bandwidth and connection limits still apply.
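Additional in-path pairs can be added through the hypervisor's VM settings or programmatically. The following hedged pyVmomi sketch adds one LAN and one WAN vNIC to a SteelHead-v VM, placing each on its own port group as the network configuration guidelines require. The vCenter address, credentials, VM name, port group names, and the choice of an e1000 adapter are assumptions for illustration only.

```python
# Sketch only: placeholder vCenter, credentials, VM name, and port group names.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

VCENTER, USER, PWD = "vcenter.example.com", "administrator@vsphere.local", "secret"
VM_NAME = "steelhead-v-01"
PORT_GROUPS = ["SH_LAN1", "SH_WAN1"]   # hypothetical, distinct port groups

def nic_add_spec(port_group_name):
    # One e1000 vNIC backed by the named standard port group.
    nic = vim.vm.device.VirtualE1000()
    nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
        deviceName=port_group_name)
    nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True, connected=True)
    return vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=nic)

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == VM_NAME)
    spec = vim.vm.ConfigSpec(deviceChange=[nic_add_spec(pg) for pg in PORT_GROUPS])
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
finally:
    Disconnect(si)
```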
NICs for SteelHead-v
Riverbed NICs provide hardware-based fail-to-wire and fail-to-block capabilities for SteelHead-v. The configured failure mode is triggered if the ESX or ESXi host loses power or is unable to run the SteelHead-v guest, if the SteelHead-v guest is powered off, or if the SteelHead-v guest experiences a significant fault (using the same logic as the physical SteelHead).
Note: Physical fail-to-wire and fail-to-block NICs in SteelHead-v are not supported on Hyper-V and KVM.
Riverbed NICs are available in two-port and four-port configurations:
Riverbed NICs for SteelHead-v | Orderable Part Number | SteelHead-v Models |
Two-Port 1-GbE TX Copper NIC | NIC-001-2TX | All |
Four-Port 1-GbE TX Copper NIC | NIC-002-4TX | 1050L, 1050M, 1050H, 2050L, 2050M, and 2050H; VCX255, VCX555, VCX755, VCX1555, VCX5055, and VCX7055 |
Two-Port 10-GbE Multimode Fiber NIC (direct I/O only) | NIC-008-2SR | VCX5055 and VCX7055 |
You must use Riverbed NICs for fail-to-wire or fail-to-block with SteelHead-v. NICs from other vendors that lack a bypass feature are supported for functionality other than fail-to-wire and fail-to-block, provided they are supported by ESX or ESXi.
Requirements for SteelHead-v Deployment with a NIC
To successfully install a NIC in an ESXi host for SteelHead-v, you need the following items:
• ESXi host with a PCIe slot.
• vSphere client access to the ESXi host.
• VMware ESXi 5.0 or later and RiOS 8.0.3 or later.
—or—
VMware ESXi 4.1 and one of the following RiOS versions:
– For V150, RiOS 7.0.3a or later.
– For V250, V550, V1050, and V2050, RiOS 7.0.2 or later.
– For VCX555, VCX755, and VCX1555, RiOS 8.0 or later.
For ESXi 4.1, you also need the following items:
• ESXi bypass driver (a .vib file) available from https://support.riverbed.com.
• Intel 82580 Gigabit Ethernet network interface driver. By default, ESXi does not include this driver, which is needed for the Riverbed bypass card. If you do not have this driver installed, you can download it from the VMware website.
For ESXi 4.1:
http://downloads.vmware.com/d/details/dt_esxi4x_intel_10g_825xx/ZHcqYnQldypiZCVodw==
• SSH and SCP access to the ESXi host.
For more information about Riverbed NIC installation, see the Network and Storage Card Installation Guide. The installation procedure in this manual assumes that you have successfully installed a Riverbed NIC by following the instructions in the Network and Storage Card Installation Guide.
The number of hardware bypass pairs (that is, one LAN and one WAN port) supported is determined by the model of the SteelHead-v:
• Models V150, V250, and V550: one bypass pair
• Models V1050 and V2050: two bypass pairs (that is, two LAN and two WAN ports)
• Models VCX555, VCX755, VCX1555, VCX5055, and VCX7055: two bypass pairs
Note: You can install a four-port card in an ESXi host for a SteelHead-v 150, 250, or 550. However, only one port pair is available because the SteelHead-v model type determines the number of pairs.
The following configurations have been tested:
• Two SteelHead-v guests, each using one physical pair on a single four-port Riverbed NIC card
• Two SteelHead-v guests connecting to separate cards
• One SteelHead-v guest connecting to bypass pairs on different NIC cards
For more information about installing and configuring SteelHead-v with a Riverbed NIC, see Completing the Preconfiguration Checklist.
SteelHead-v on the Cisco SRE
In addition to standard ESX and ESXi hosts, you can run SteelHead-v on a Cisco server blade using the Services-Ready Engine (SRE) platform, which is based on ESXi 5.0.
This table lists the SteelHead-v models supported on each Cisco SRE model, along with the required RiOS version, disk configuration, and RAM.
SRE Model | SteelHead-v Model | RiOS Version | Disk Configuration | RAM |
910 | V1050H, VCX755H | 6.5.4+, 7+, 8+ | RAID1 | 8 GB |
910 | V1050M, VCX755M | 6.5.4+, 7+, 8+ | RAID1 | 4 GB |
900 | V1050M, VCX755M | 6.5.4+, 7+, 8+ | RAID1 | 4 or 8 GB |
700/710 | V250H | 6.5.4+, 7+, 8+ | Single disk | 4 GB |
300 | Not Supported | | | |
For more information about deploying SteelHead-v on a Cisco SRE blade, search the Riverbed knowledge base at https://supportkb.riverbed.com/support/index?page=home.