Overview of SteelHead-v
This chapter provides an overview of SteelHead-v. It includes these sections:
• Product dependencies and compatibility
• Understanding SteelHead-v
• New features in version 9.6
• SteelHead-v deployment guidelines
• Deployment options
• SteelHead-v models
• NICs for SteelHead-v
• SteelHead-v on the Cisco SRE
Product dependencies and compatibility
This section provides information about product dependencies and compatibility.
Third-party software dependencies
This table summarizes the software requirements for SteelHead-v.
Component | Software requirements |
Microsoft Hyper-V Hypervisor | Legacy SteelHead-v models VCX255 through VCX1555 and performance tier-based models VCX10 through VCX90 support Hyper-V, available on Windows Server 2012 R2 and Windows Hyper-V Server. |
VMware ESX/ESXi Hypervisor | Legacy SteelHead-v models VCX255 through VCX7055 and performance tier-based models VCX10 through VCX90 support ESX/ESXi 4.0, 4.1, 5.0, 5.1, 5.5, and 6.0. Only VMware hardware versions 10 and earlier are supported. If you use the Riverbed network interface card (NIC), you must use ESXi 4.1 or later. |
Linux Kernel-based Virtual Machine (KVM) Hypervisor | Legacy SteelHead-v models VCX255 through VCX1555 and performance tier-based models VCX10 through VCX90 support KVM. SteelHead-v has been tested on RHEL 7; CentOS 7; QEMU versions 1.7.91 through 2.5.0; and Ubuntu 13.10, 14.04 LTS, and 16.04.1 LTS with paravirtualized virtio device drivers. Host Linux kernel versions 3.13.0-24-generic through 4.4.0-51 are supported. |
SteelHead-v Management Console | Same requirements as physical SteelHead. See the SteelHead Installation and Configuration Guide. |
SNMP-based management compatibility
This product supports a proprietary Riverbed MIB accessible through SNMP. SNMPv1 (RFCs 1155, 1157, 1212, and 1215), SNMPv2c (RFCs 1901, 2578, 2579, 2580, 3416, 3417, and 3418), and SNMPv3 are supported, although some MIB items might be accessible only through SNMPv2c and SNMPv3.
SNMP support enables the product to be integrated into network management systems such as Hewlett-Packard OpenView Network Node Manager, BMC Patrol, and other SNMP-based network management tools.
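For example, the following Python sketch polls an appliance over SNMPv2c using the pysnmp library. The hostname and community string are placeholders, and it queries the standard sysDescr object; objects in the proprietary Riverbed MIB are queried the same way once you know their OIDs.

```python
# Minimal SNMPv2c poll with pysnmp. The appliance address (10.0.0.5) and
# read community ("public") are placeholder values. sysDescr is a standard
# object; substitute Riverbed MIB OIDs for appliance-specific data.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),          # mpModel=1 selects SNMPv2c
    UdpTransportTarget(('10.0.0.5', 161)),
    ContextData(),
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f'{name} = {value}')
```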
SCC compatibility
To manage SteelHead 9.2 and later appliances, you must use SCC 9.2 or later. Earlier SCC versions do not support SteelHead 9.2 appliances. For details about SCC compatibility across versions, see the SteelCentral Controller for SteelHead Installation Guide.
Understanding SteelHead-v
SteelHead-v is software that delivers the benefits of WAN optimization, similar to those offered by the SteelHead hardware, while also providing the flexibility of virtualization.
Built on the same RiOS technology as the SteelHead, SteelHead-v reduces bandwidth utilization and speeds up application delivery and performance. SteelHead-v on VMware vSphere is certified for the Cisco Services-Ready Engine (SRE) module with Cisco Services-Ready Engine Virtualization (Cisco SRE-V).
SteelHead-v runs on VMware ESXi, Microsoft Hyper-V, and Linux KVM hypervisors installed on industry-standard hardware servers.
Figure: SteelHead-v and hypervisor architecture

SteelHead-v enables consolidation and high availability while providing most of the functionality of the physical SteelHead, with these exceptions:
• Virtual Services Platform (VSP) or Riverbed Services Platform (RSP)
• Proxy File Service (PFS)
• Fail-to-wire (unless deployed with a Riverbed NIC)
• Hardware reports such as the Disk Status report
• Hardware-based alerts and notifications, such as a RAID alarm
Note: Hyper-V does not currently support the Riverbed bypass NIC.
You can integrate SteelHead-v into a wide range of networks. You can deploy SteelHead-v out-of-path, virtual in-path, or by using the Discovery Agent. SteelHead-v supports both asymmetric route detection and connection forwarding. You can make SteelHead-v highly available in active-active configurations with data store synchronization, or deploy appliances as serial clusters.
After you license and obtain a serial number for SteelHead-v appliances, you can manage them across the enterprise from a Riverbed SteelCentral Controller for SteelHead (SCC) 8.0.0 or later.
SteelHead-v supports up to 24 virtual CPUs and 10 interfaces.
SteelHead-v optimization
With SteelHead-v, you can solve a range of problems affecting WANs and application performance, including:
• Insufficient WAN bandwidth
• Inefficient transport protocols in high-latency environments
• Inefficient application protocols in high-latency environments
RiOS intercepts client-server connections without interfering with normal client-server interactions, file semantics, or protocols. All client requests are passed through to the server normally, while relevant traffic is optimized to improve performance.
RiOS uses these optimization techniques:
• Data streamlining - SteelHead products (SteelHead-v, SteelHeads, and SteelCentral Controller for SteelHead Mobile) can reduce WAN bandwidth utilization by 65 to 98 percent for TCP-based applications using data streamlining. In addition to traditional techniques like data compression, RiOS uses a Riverbed proprietary algorithm called Scalable Data Referencing (SDR). SDR breaks up TCP data streams into unique data chunks that are stored on the hard disk (data store) of the device running RiOS (a SteelHead or SteelCentral Controller for SteelHead Mobile host system). Each data chunk is assigned a unique integer label (reference) before it is sent to a peer RiOS device across the WAN. When the same byte sequence is seen again in future transmissions from clients or servers, the reference is sent across the WAN instead of the raw data chunk. The peer RiOS device (SteelHead-v software, SteelHead, or SteelCentral Controller for SteelHead Mobile host system) uses this reference to find the original data chunk in its data store and to reconstruct the original TCP data stream.
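As a rough illustration of the reference-substitution idea (not Riverbed's proprietary SDR algorithm), the following Python sketch deduplicates a byte stream with fixed-size chunks and in-memory dictionaries standing in for SDR's variable-length chunking and on-disk data store:

```python
# Simplified illustration of reference-based deduplication. This is NOT
# Riverbed's SDR: real SDR uses proprietary variable-length chunking and an
# on-disk data store; fixed 4 KB chunks and dicts stand in for both here.
import hashlib

CHUNK = 4096
store = {}    # label -> chunk bytes (both peers keep synchronized copies)
index = {}    # chunk digest -> label
next_label = 0

def encode(stream: bytes):
    """Sender side: replace chunks seen before with integer references."""
    global next_label
    out = []
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest in index:                      # repeat: send the label only
            out.append(('ref', index[digest]))
        else:                                    # first sighting: send raw data
            store[next_label], index[digest] = chunk, next_label
            out.append(('raw', next_label, chunk))
            next_label += 1
    return out

def decode(tokens):
    """Peer side: rebuild the stream from raw chunks and references."""
    return b''.join(t[2] if t[0] == 'raw' else store[t[1]] for t in tokens)

data = (bytes(range(256)) * 16) * 10             # ten identical 4 KB chunks
tokens = encode(data)
refs = sum(1 for t in tokens if t[0] == 'ref')
assert decode(tokens) == data
print(f'{len(tokens)} chunks sent, {refs} as references')   # 10 chunks, 9 refs
```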
• Transport streamlining - SteelHead-v uses a generic latency optimization technique called transport streamlining. Transport streamlining uses a set of standard and proprietary techniques to optimize TCP traffic between SteelHeads (a small illustration follows this list). These techniques:
– ensure that efficient retransmission methods, such as TCP selective acknowledgments, are used.
– negotiate optimal TCP window sizes to minimize the impact of latency on throughput.
– maximize throughput across a wide range of WAN links.
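These behaviors live in the host TCP stack rather than in application code, but the following Linux-only Python sketch shows the knobs involved: it checks that selective acknowledgments are enabled and requests a large receive buffer, which (with window scaling) permits a larger advertised TCP window. It illustrates the concepts only; it is not SteelHead configuration.

```python
# Illustration only: the TCP features transport streamlining relies on
# (selective ACKs, large windows) live in the host TCP stack. This Linux
# sketch checks that SACK is on and requests a large receive buffer.
import socket

with open('/proc/sys/net/ipv4/tcp_sack') as f:   # Linux-specific path
    print('SACK enabled:', f.read().strip() == '1')

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)  # ask for 4 MB
# The kernel may cap this at net.core.rmem_max; read back the granted size.
print('granted rcvbuf:', s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()
```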
• Application streamlining - In addition to data and transport streamlining optimizations, RiOS can apply application-specific optimizations for certain application protocols: for example, CIFS, MAPI, NFS, TDS, HTTP, and Oracle Forms.
• Management streamlining - RiOS provides tools that simplify the management of optimization across the network. These tools include:
Autodiscovery process - Autodiscovery enables SteelHead-v, the SteelHead, and SteelCentral Controller for SteelHead Mobile to automatically find remote SteelHead installations and to optimize traffic using them. Autodiscovery relieves you from having to manually configure large amounts of network information. The autodiscovery process enables administrators to control and secure connections, specify which traffic is optimized, and specify peers for optimization.
Enhanced autodiscovery automatically discovers the last SteelHead in the network path of the TCP connection. In contrast, the original autodiscovery protocol automatically discovers the first SteelHead in the path. The difference is only seen in environments where there are three or more SteelHeads in the network path for connections to be optimized.
Enhanced autodiscovery works with SteelHeads running the original autodiscovery protocol, but it is not the default. When enhanced autodiscovery is enabled on a SteelHead that is peering with other appliances using the original autodiscovery method in a “mixed” environment, the determining factor for peering is whether the next SteelHead along the path uses original autodiscovery or enhanced autodiscovery (regardless of the setting on the first appliance).
If the next SteelHead along the path is using original autodiscovery, the peering terminates at that appliance (unless peering rules are configured to modify this behavior). Alternatively, if the SteelHead along the path is using enhanced autodiscovery, the enhanced probing for a peer continues a step further to the next appliance in the path. If probing reaches the final SteelHead in the path, that appliance becomes the peer.
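The peering walk described above can be modeled as a pass along the appliances in the connection path. In this Python sketch, the appliance names and mode strings are illustrative only; it mirrors the prose, not any Riverbed interface:

```python
# Toy model of the peering walk. Each hop is (name, mode), where mode is
# 'original' or 'enhanced' autodiscovery. Probing stops at the first hop
# running original autodiscovery, or at the last hop in the path.
def choose_peer(path):
    """Return the appliance that becomes the optimization peer."""
    for i, (name, mode) in enumerate(path):
        last_in_path = (i == len(path) - 1)
        if mode == 'original' or last_in_path:
            return name              # probing terminates here
        # 'enhanced': probing continues one step further along the path
    return None

# Three appliances in the path beyond the initiating SteelHead:
print(choose_peer([('A', 'enhanced'), ('B', 'enhanced'), ('C', 'original')]))  # C
print(choose_peer([('A', 'enhanced'), ('B', 'original'), ('C', 'enhanced')]))  # B
```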
SteelCentral controller - The SCC enables remote SteelHeads to be automatically configured and monitored. It also gives you a single view of the data reduction and health of the SteelHead network.
SteelHead Mobile controller - The Mobile Controller is the management appliance you use to track the individual health and performance of each deployed software client and to manage enterprise client licensing. The Mobile Controller enables you to see who is connected, view their data reduction statistics, and perform support operations such as resetting connections, pulling logs, and automatically generating traces for troubleshooting. You can perform all of these management tasks without end-user input.
SteelHead-v is typically deployed on a LAN, with communication between appliances occurring over a private WAN or VPN. Because optimization between SteelHeads typically occurs over a secure WAN, it is not necessary to configure company firewalls to support SteelHead-specific ports.
For detailed information about how SteelHead-v, the SteelHead, or SteelCentral Controller for SteelHead Mobile works and deployment design principles, see the SteelHead Deployment Guide.
Configuring optimization
You configure optimization of traffic using the Management Console or the Riverbed CLI. You configure the traffic that SteelHead-v optimizes and specify the type of action it performs using:
• In-path rules - In-path rules determine the action that a SteelHead-v takes when a connection is initiated, usually by a client; they apply only at connection setup. Because connections are usually initiated by clients, in-path rules are configured on the initiating, or client-side, SteelHead-v. In-path rules determine SteelHead-v behavior with SYN packets (see the matching sketch after the peering rules list below). You configure one of these types of in-path rule actions:
– Auto - Use the autodiscovery process to determine if a remote SteelHead is able to optimize the connection attempted by this SYN packet.
– Pass-through - Allow the SYN packet to pass through the SteelHead. No optimization is performed on the TCP connection initiated by this SYN packet.
– Fixed-target - Skip the autodiscovery process and use a specified remote SteelHead as an optimization peer. Fixed-target rules require the input of at least one remote target SteelHead; an optional backup SteelHead might also be specified.
– Deny - Drop the SYN packet and send a message back to its source.
– Discard - Drop the SYN packet silently.
• Peering rules - Peering rules determine how a SteelHead-v reacts to a probe query. Peering rules are an ordered list of fields that a SteelHead-v matches against incoming SYN packet fields—for example, source or destination subnet, IP address, VLAN, or TCP port—as well as the IP address of the probing SteelHead-v. Peering rules are useful in complex networks. These are the types of peering rule actions:
– Pass - The receiving SteelHead does not respond to the probing SteelHead and allows the SYN+ probe packet to continue through the network.
– Accept - The receiving SteelHead responds to the probing SteelHead and becomes the remote-side SteelHead (the peer) for the optimized connection.
– Auto - If the receiving SteelHead is not using enhanced autodiscovery, Auto has the same effect as Accept. If enhanced autodiscovery is enabled, the SteelHead becomes the optimization peer only if it is the last SteelHead in the path to the server.
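The sketch below models first-match evaluation of an ordered in-path rule list against a SYN packet. The miniature rule and packet fields are hypothetical stand-ins; real rules are configured in the Management Console or the Riverbed CLI:

```python
# First-match evaluation over an ordered rule list, mirroring the in-path
# rule actions above. Field names here are hypothetical stand-ins.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str                   # auto | pass-through | fixed-target | deny | discard
    src: str = '0.0.0.0/0'
    dst: str = '0.0.0.0/0'
    dst_port: Optional[int] = None
    target: Optional[str] = None  # remote SteelHead, used by fixed-target only

    def matches(self, syn: dict) -> bool:
        return (ip_address(syn['src']) in ip_network(self.src)
                and ip_address(syn['dst']) in ip_network(self.dst)
                and self.dst_port in (None, syn['dst_port']))

def dispatch(rules, syn):
    """Ordered list, first match wins; the default is autodiscovery."""
    return next((r for r in rules if r.matches(syn)), Rule('auto'))

rules = [
    Rule('pass-through', dst_port=22),                            # never optimize SSH
    Rule('fixed-target', dst='10.2.0.0/16', target='10.2.0.5'),   # skip autodiscovery
    Rule('auto'),
]
syn = {'src': '10.1.0.7', 'dst': '10.2.3.4', 'dst_port': 445}
print(dispatch(rules, syn).action)    # fixed-target
```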
For detailed information about in-path and peering rules and how to configure them, see the SteelHead Management Console User’s Guide.
New features in version 9.6
Version 9.6 of SteelHead-v provides these enhancements:
• New performance tier-based models replace legacy VCX xx55 models.
New performance tier models are adapted to a cloud environment and can be easily upgraded. Legacy VCX xx55 models cannot be upgraded to performance tier models. Model families are not interchangeable.
• New licensing for VCX10 through VCX90 models.
VCX10 through VCX90 models use a new licensing paradigm. Customers receive a customer key, which is used across purchases. Licenses are tied to the customer key and are provided for model performance tier, WAN optimization, and optional add-on features. In addition, a support identification code is provided for included or purchased support services. Licensing a non-evaluation virtual appliance still requires daily connectivity to the Riverbed Cloud Portal; however, licensing an evaluation virtual appliance does not. When a licensed, non-evaluation virtual appliance cannot connect to the Riverbed Cloud Portal, an email alert is sent daily for two weeks, or until connectivity is restored, to the address that is configured on the appliance to receive event notifications. If connectivity is not restored after two weeks, the license expires and the functionality associated with it stops.
One-, three-, and five-year subscription licenses are available. When a SteelHead-v is licensed for capacity that exceeds its hardware specification, the SteelHead-v will operate at the maximum capacity allowed by the underlying hardware.
Legacy models that upgrade to RiOS 9.6 experience no change in licensing behavior.
• Web proxy support.
Web proxy is an optional feature (disabled by default) available on all performance tier models. The web proxy cache is part of the management disk and requires additional disk space (a minimum of 5 GB more for all models). You can adjust the web proxy cache size up to the maximum allowed by the license and the underlying disk capacity. Changing the cache size purges the cache. You configure web proxy through the SteelCentral Controller for SteelHead.
• Orchestrated deployment and configuration.
Support for predeployment configuration and validation of several parameters, including in-path interfaces, licensing, auxiliary and primary IP addresses, static routes, SteelCentral Controller for SteelHead, hostname, name server, and NTP server. (A hypothetical predeployment manifest appears after this list.)
• High-performance models with 1-Gbps WAN optimization throughput for KVM and Hyper-V hypervisors.
• Support for ESXi 6.0 hypervisors and up to VMware virtual hardware version 10.
• Reduced management disk size to accommodate customer premises equipment (CPE).
• Ability to upgrade from one performance model to another.
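As an illustration of the orchestrated deployment item above, here is a hypothetical predeployment manifest and a validation pass over it. This is not the Riverbed orchestration schema; the field names and the different-subnet check are assumptions for the sketch:

```python
# Hypothetical predeployment manifest covering the parameters named above;
# NOT the Riverbed orchestration schema, only a validation illustration.
from ipaddress import ip_interface

manifest = {
    'hostname': 'branch-sh-01',
    'primary_ip': '10.1.1.10/24',
    'aux_ip': '10.1.2.10/24',
    'in_path': [{'interface': 'inpath0_0', 'ip': '10.1.3.10/24'}],
    'static_routes': [{'dest': '10.0.0.0/8', 'gateway': '10.1.1.1'}],
    'name_server': '10.1.1.53',
    'ntp_server': '10.1.1.123',
    'scc': 'scc.example.com',
    'license_key': 'XXX-PLACEHOLDER',
}

def validate(m):
    errors = [f'missing {k}' for k in
              ('hostname', 'primary_ip', 'aux_ip', 'name_server', 'ntp_server')
              if not m.get(k)]
    try:
        # Assumption for this sketch: primary and aux on separate subnets.
        if ip_interface(m['primary_ip']).network == ip_interface(m['aux_ip']).network:
            errors.append('primary and aux interfaces should be on different subnets')
    except (KeyError, ValueError):
        errors.append('primary_ip/aux_ip must be CIDR addresses, e.g. 10.1.1.10/24')
    return errors

print(validate(manifest) or 'manifest OK')
```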
SteelHead-v deployment guidelines
Riverbed requires that you follow these guidelines when deploying the SteelHead-v package on a hypervisor. If you do not follow the configuration guidelines, SteelHead-v might not function properly or might cause outages in your network.
Network configuration
When you deploy a hypervisor, follow this guideline:
• Ensure that a network loop does not form - An in-path interface is, essentially, a software connection between the lanX_Y and wanX_Y interfaces. Before deploying a SteelHead-v, we strongly recommend that you connect each LAN and WAN virtual interface to a distinct virtual switch and physical NIC (through the vSphere Networking tab).
Caution: Connecting LAN and WAN virtual NICs to the same vSwitch or physical NIC could create a loop in the system and might make your hypervisor unreachable.
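One way to guard against this caution programmatically: record how each in-path virtual NIC attaches to vSwitches and physical NICs, and flag any LAN/WAN pair that shares either. The attachment mapping and names below are hypothetical and entered by hand:

```python
# Guard against the loop described in the caution above: for each in-path
# pair, the LAN and WAN virtual NICs must not share a vSwitch or physical
# NIC. The mapping is entered by hand; all names are hypothetical.
attachments = {
    'lan0_0': {'vswitch': 'vSwitch1', 'pnic': 'vmnic1'},
    'wan0_0': {'vswitch': 'vSwitch1', 'pnic': 'vmnic2'},   # shared vSwitch: loop risk!
}

def loop_risks(att):
    risks = []
    pairs = {name[3:] for name in att if name.startswith('lan')}
    for suffix in pairs:
        lan, wan = att.get('lan' + suffix), att.get('wan' + suffix)
        if not (lan and wan):
            continue
        for key in ('vswitch', 'pnic'):
            if lan[key] == wan[key]:
                risks.append(f'pair {suffix}: LAN and WAN share {lan[key]}')
    return risks

for risk in loop_risks(attachments):
    print('WARNING:', risk)
```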
When you deploy SteelHead-v on ESX/ESXi, follow these guidelines:
• Enable promiscuous mode for the LAN/WAN vSwitch - Promiscuous mode allows the LAN/WAN SteelHead-v NICs to intercept traffic not destined for the SteelHead installation and is mandatory for traffic optimization in in-path deployments. You must set promiscuous mode to Accept for each in-path virtual NIC. You can enable promiscuous mode through the vSwitch properties in vSphere. For details, see Installing SteelHead-v.
• Use distinct port groups for each LAN or WAN virtual NIC connected to a vSwitch for each SteelHead-v - If you are running multiple SteelHead-v virtual machines (VMs) on a single virtual host, you must add the LAN (or WAN) virtual NIC from each VM into a different port group (on each vSwitch). Using distinct port groups for each LAN or WAN virtual NIC prevents the formation of network loops.
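If you script your ESXi setup, the following pyVmomi sketch shows one way to set promiscuous mode to Accept on the in-path port groups of a standard vSwitch. The host, credentials, and port group names are placeholders, and the sketch assumes a standard (not distributed) vSwitch; verify it against your environment before use.

```python
# Sketch: enable promiscuous mode on the in-path port groups of a standard
# vSwitch via pyVmomi. Host, credentials, and the port group names
# ('sh-lan-pg', 'sh-wan-pg') are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()       # lab convenience only: skips cert checks
si = SmartConnect(host='esxi.example.com', user='root', pwd='secret', sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]                      # first (or only) ESXi host found
    netsys = host.configManager.networkSystem
    for pg in host.config.network.portgroup:
        if pg.spec.name in ('sh-lan-pg', 'sh-wan-pg'):
            spec = pg.spec
            spec.policy.security = vim.host.NetworkPolicy.SecurityPolicy(
                allowPromiscuous=True)
            netsys.UpdatePortGroup(pgName=spec.name, portgrp=spec)
            print('promiscuous mode set to Accept on', spec.name)
finally:
    Disconnect(si)
```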
Network performance
Follow these configuration tips to improve performance:
• Use at least a gigabit link for LAN/WAN - For optimal performance, connect the LAN/WAN virtual interfaces to physical interfaces capable of at least 1 Gbps. For high-capacity models such as VCX80 and VCX90, use a 10-Gbps interface.
• Do not share physical NICs - For optimal performance, assign a physical NIC to a single LAN or WAN interface. Do not share physical NICs destined for LAN/WAN virtual interfaces with other VMs running on the hypervisor. Doing so can create performance bottlenecks.
• Ensure that the host has resources for overhead - In addition to reserving the CPU resources needed for the SteelHead-v model, verify that additional unclaimed resources are available. Due to hypervisor overhead, VMs can exceed their configured reservation. For details about hypervisor resource reservation and calculating overhead, see Managing legacy licenses and model upgrades.
• Do not overprovision the physical CPUs - Do not run more total vCPUs than there are physical CPU cores (see the sketch after this list). For example, if a hypervisor runs on a quad-core CPU, all the VMs on the host should use no more than four vCPUs combined.
• Use a server-grade CPU for the hypervisor - For example, use a Xeon or Opteron CPU as opposed to an Intel Atom.
• Always reserve RAM - Memory is another important factor in determining SteelHead-v performance. Reserve the RAM needed by the SteelHead-v model, and ensure that extra RAM is available for overhead. This extra RAM can provide a performance boost if the hypervisor exceeds its reserved capacity.
• Virtual RAM should not exceed physical RAM - The total virtual RAM provisioned for all running VMs should not be greater than the physical RAM on the system.
• Do not use low-quality storage for the RiOS data store disk - Make sure that the SteelHead-v disk used for the data store virtual machine disk (VMDK) for ESX or virtual hard disk (VHD) for Hyper-V resides on a disk medium that supports a high number of Input/Output Operations Per Second (IOPS). For example, use NAS, SAN, or dedicated SATA disks.
• Do not share host physical disks - To achieve near-native disk I/O performance, do not share host physical disks (such as SCSI or SATA disks) between VMs. When you deploy SteelHead-v, allocate an unshared disk for the RiOS data store disk.
• Do not use hyperthreading - Hyperthreading can cause contention among the virtual cores, resulting in significant loss of performance.
• Set BIOS power management for maximum performance - If configurable, set the power management settings in the BIOS to maximize performance.
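The CPU and RAM rules above reduce to simple arithmetic. This sketch checks a planned host against them; the host and VM figures are examples only:

```python
# Arithmetic check for the provisioning rules above: total vCPUs must not
# exceed physical cores, and provisioned RAM must leave headroom below
# physical RAM. The host and VM figures are examples.
host = {'cores': 8, 'ram_gb': 64}
vms = [
    {'name': 'steelhead-v (VCX50)', 'vcpus': 4, 'ram_gb': 8},
    {'name': 'other-vm',            'vcpus': 4, 'ram_gb': 16},
]

total_vcpus = sum(vm['vcpus'] for vm in vms)
total_ram = sum(vm['ram_gb'] for vm in vms)

if total_vcpus > host['cores']:
    print(f"overprovisioned CPUs: {total_vcpus} vCPUs > {host['cores']} cores")
if total_ram >= host['ram_gb']:
    print(f"no RAM headroom: {total_ram} GB provisioned on a {host['ram_gb']} GB host")
else:
    print(f"headroom: {host['ram_gb'] - total_ram} GB left for hypervisor overhead")
```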
Deployment options
Typically you deploy SteelHead-v on a LAN with communication between appliances taking place over a private WAN or VPN. Because optimization between SteelHeads typically takes place over a secure WAN, it is not necessary to configure company firewalls to support SteelHead-specific ports.
For optimal performance, minimize latency between SteelHead-v appliances and their respective clients and servers. Place the SteelHead-v appliances as close as possible to your network endpoints: client-side SteelHead-v appliances as close to your clients as possible, and server-side SteelHead-v appliances as close to your servers as possible.
Ideally, SteelHead-v appliances optimize only traffic that is initiated or terminated at their local sites. The best and easiest way to achieve this traffic pattern is to deploy the SteelHead-v appliances where the LAN connects to the WAN, and not where any LAN-to-LAN or WAN-to-WAN traffic can pass through (or be redirected to) the SteelHead.
For detailed information about deployment options and best practices for deploying SteelHeads, see the SteelHead Deployment Guide.
Before you begin the installation and configuration process, you must select a network deployment.
Note: You can also use the Discovery Agent to deploy the SteelHead-v. For information, see Using Discovery Agent.
In-path deployment
You can deploy SteelHead-v in the same scenarios as the SteelHead, with this exception: SteelHead-v software does not provide a failover mechanism like the SteelHead fail-to-wire. For full failover functionality, you must install a Riverbed NIC with SteelHead-v.
Riverbed bypass cards come in four-port and two-port models. For more information about NICs and SteelHead-v, see NICs for SteelHead-v.
For deployments where a Riverbed bypass card is not an option (for example, in a Cisco SRE deployment), we recommend that you do not deploy SteelHead-v in-path. If you are not using a bypass card, you can still have a failover mechanism by employing either a virtual in-path or an out-of-path deployment. These deployments allow a router using Web Cache Communication Protocol (WCCP) or policy-based routing (PBR) to handle failover.
Promiscuous mode is required for in-path deployments.
Note: SteelHead-v on Hyper-V does not support the direct in-path deployment or the Riverbed bypass NIC.
Virtual in-path deployment
In a virtual in-path deployment, SteelHead-v is virtually in the path between clients and servers. Traffic moves in and out of the same WAN interface, and the LAN interface is not used. This deployment differs from a physical in-path deployment in that a packet redirection mechanism, such as WCCP or PBR, directs packets to SteelHead-v appliances that are not in the physical path of the client or server. In this configuration, clients and servers continue to see client and server IP addresses.
On SteelHead-v models with multiple WAN ports, you can deploy WCCP and PBR with the same multiple interface options available on the SteelHead.
For a virtual in-path deployment, attach only the WAN virtual NIC to the physical NIC, and configure the router using WCCP or PBR to forward traffic to the VM to optimize. You must also enable in-path out-of-path (OOP) deployment on SteelHead-v.
Out-of-path deployment
In an out-of-path deployment, SteelHead-v is not in the direct path between the client and the server. Servers see the IP address of the server-side SteelHead installation rather than the client IP address, which might affect security policies.
For a virtual OOP deployment, connect the primary interface to the physical in-path NIC and configure the router to forward traffic to this NIC. You must also enable OOP on SteelHead-v.
These caveats apply to server-side OOP SteelHead-v configuration:
• OOP configuration does not support autodiscovery. You must create a fixed-target rule on the client-side SteelHead.
• You must create an OOP connection from an in-path or logical in-path SteelHead and direct it to port 7810 on the primary interface of the server-side SteelHead. This setting is mandatory.
• Interception is not supported on the primary interface.
• An OOP configuration provides nontransparent optimization from the server perspective. Clients connect to servers, but servers see the connections as coming from the server-side SteelHead. This affects log files, server-side ACLs, and bidirectional applications such as rsh.
• You can use OOP configurations along with in-path or logical in-path configurations.
SteelHead-v models
Starting with RiOS 9.6, SteelHead-v models are based on performance tiers. Prior to RiOS 9.6, SteelHead-v models were based on the hardware capacities of equivalent physical SteelHead appliance models.
SteelHead-v model families are independent. You cannot upgrade an xx55 model to a performance tier model. The xx55 virtual models require RiOS 8.0 or later. Performance tier models require RiOS 9.6 or later.
Confirm that you have the physical resources required for the SteelHead-v model you are installing before you download and install SteelHead-v.
This table lists the new performance tier-based models and their maximum capacities.
Note: Minimum web proxy cache size for all models is 5 GB.
Model | Connections | Optimized WAN Capacity | Network Services Capacity | Web Proxy Cache Size |
VCX10 | 50 | 2 Mbps | 10 Mbps | 200 GB |
VCX20 | 200 | 5 Mbps | 25 Mbps | 200 GB |
VCX30 | 500 | 10 Mbps | 50 Mbps | 400 GB |
VCX40 | 1,000 | 20 Mbps | 100 Mbps | 400 GB |
VCX50 | 2,000 | 50 Mbps | 250 Mbps | 800 GB |
VCX60 | 5,000 | 100 Mbps | 500 Mbps | 800 GB |
VCX70 | 12,000 | 200 Mbps | No limit | 800 GB |
VCX80 | 50,000 | 500 Mbps | No limit | 800 GB |
VCX90 | 100,000 | 1000 Mbps | No limit | 800 GB |
This table lists the new performance tier-based models and their minimum resource requirements.
Note: Minimum CPU clock speed for all models is 1200 MHz.
Model | Virtual CPUs | RAM | Management disk size (without web proxy) | Maximum data store disk size |
VCX10 | 1 CPU | 2 GB | 20 GB | 50 GB |
VCX20 | 1 CPU | 2 GB | 20 GB | 80 GB |
VCX30 | 2 CPUs | 2 GB | 20 GB | 100 GB |
VCX40 | 4 CPUs | 4 GB | 26 GB | 150 GB |
VCX50 | 4 CPUs | 8 GB | 38 GB | 400 GB |
VCX60 | 4 CPUs | 8 GB | 38 GB | 400 GB |
VCX70 | 6 CPUs | 24 GB | 70 GB | 10 x 80 GB |
VCX80 | 12 CPUs | 32 GB | 86 GB | 10 x 160 GB |
VCX90 | 24 CPUs | 48 GB | 118 GB | 14 x 160 GB |
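Taken together, the two tables above support a simple sizing exercise: pick the smallest tier that covers the expected connection count and WAN bandwidth, then provision at least the listed resources. A sketch, with capacities copied from the tables:

```python
# Pick the smallest performance-tier model that satisfies a site's needs.
# Capacities and minimum resources are copied from the two tables above.
MODELS = [
    # (model, connections, wan_mbps, vcpus, ram_gb)
    ('VCX10',     50,    2,  1,  2),
    ('VCX20',    200,    5,  1,  2),
    ('VCX30',    500,   10,  2,  2),
    ('VCX40',   1000,   20,  4,  4),
    ('VCX50',   2000,   50,  4,  8),
    ('VCX60',   5000,  100,  4,  8),
    ('VCX70',  12000,  200,  6, 24),
    ('VCX80',  50000,  500, 12, 32),
    ('VCX90', 100000, 1000, 24, 48),
]

def pick_model(connections, wan_mbps):
    for name, conns, mbps, vcpus, ram in MODELS:
        if conns >= connections and mbps >= wan_mbps:
            return name, f'requires at least {vcpus} vCPUs and {ram} GB RAM'
    return None   # the need exceeds a single VCX90

print(pick_model(connections=1500, wan_mbps=40))   # ('VCX50', ...)
```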
This table lists the SteelHead-v CX xx55 models.
Note: The data store size per model allocates extra disk space to accommodate hypervisor overhead. As of RiOS 9.0, the minimum management disk size for new open virtualization appliance (OVA) deployments of the VCX models is 38 GB. Older models that upgrade still use a 50-GB management disk.
Model | Min. virtual CPUs | Min. CPU speed | Memory | Management disk (VMDK1) | RiOS data store disk (VMDK2+) | Network Services Capacity | Optimized WAN capacity | Max. connections |
VCX255U | 1 CPU | 1000 MHz | 2 GB | 38 GB | 50 GB | 4 Mbps | 2 Mbps | 50 |
VCX255L | 1 CPU | 1000 MHz | 2 GB | 38 GB | 50 GB | 12 Mbps | 6 Mbps | 75 |
VCX255M | 1 CPU | 1000 MHz | 2 GB | 38 GB | 50 GB | 12 Mbps | 6 Mbps | 150 |
VCX255H | 1 CPU | 1000 MHz | 2 GB | 38 GB | 50 GB | 12 Mbps | 6 Mbps | 230 |
VCX555L | 1 CPU | 1200 MHz | 2 GB | 38 GB | 80 GB | 12 Mbps | 6 Mbps | 250 |
VCX555M | 1 CPU | 1200 MHz | 2 GB | 38 GB | 80 GB | 20 Mbps | 10 Mbps | 400 |
VCX555H | 1 CPU | 1200 MHz | 2 GB | 38 GB | 80 GB | 20 Mbps | 10 Mbps | 650 |
VCX755L | 2 CPUs | 1200 MHz | 2 GB | 38 GB | 102 GB | 45 Mbps | 10 Mbps | 900 |
VCX755M | 2 CPUs | 1200 MHz | 2 GB | 38 GB | 102 GB | 45 Mbps | 10 Mbps | 1500 |
VCX755H | 2 CPUs | 1200 MHz | 4 GB | 38 GB | 150 GB | 45 Mbps | 20 Mbps | 2300 |
VCX1555L | 4 CPUs | 1200 MHz | 8 GB | 38 GB | 400 GB | 100 Mbps | 50 Mbps | 3000 |
VCX1555M | 4 CPUs | 1200 MHz | 8 GB | 38 GB | 400 GB | 100 Mbps | 50 Mbps | 4500 |
VCX1555H | 4 CPUs | 1200 MHz | 8 GB | 38 GB | 400 GB | 100 Mbps | 100 Mbps | 6000 |
VCX5055M | 12 CPUs | 1200 MHz | 16 GB | 82 GB | 10 x 80 GB | No limit | 200 Mbps | 14,000 |
VCX5055H | 12 CPUs | 1200 MHz | 16 GB | 82 GB | 10 x 80 GB | No limit | 400 Mbps | 25,000 |
VCX7055L | 16 CPUs | 1200 MHz | 32 GB | 178 GB | 10 x 160 GB | No limit | 622 Mbps | 75,000 |
VCX7055M | 24 CPUs | 1200 MHz | 48 GB | 178 GB | 14 x 160 GB | No limit | 1 Gbps | 100,000 |
Flexible RiOS data store
As of RiOS 9.0, the flexible data store feature supports a smaller data store size, down to a minimum of 12 GB.
To change the disk size of a running SteelHead-v, you must first power off the VM. From the Settings page, you can expand the RiOS data store (second) disk, or remove it and replace it with a smaller disk. (You cannot shrink the existing disk in place.) Modifying the disk size automatically clears the RiOS data store.
If you provide a disk larger than the configured RiOS data store for the model, the entire disk is partitioned, but only the allotted amount for the model is used.
Memory and CPU minimums are hard requirements for a model to run. The flexible RiOS data store is not supported on the older Vxx50 models.
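The sizing behavior described above reduces to a floor and a cap. A one-function sketch, using the VCX50 allotment of 400 GB from the table above as the example cap:

```python
# Usable RiOS data store for a provisioned second disk: the whole disk is
# partitioned, but use is capped at the model's allotment (VCX50: 400 GB,
# from the table above) and floored at the 12 GB flexible-store minimum.
MODEL_CAP_GB, MIN_GB = 400, 12

def usable_datastore(provisioned_gb):
    if provisioned_gb < MIN_GB:
        raise ValueError(f'data store disk must be at least {MIN_GB} GB')
    return min(provisioned_gb, MODEL_CAP_GB)

print(usable_datastore(500))   # 400 -- extra 100 GB is partitioned but unused
print(usable_datastore(120))   # 120 -- a smaller-than-model disk is allowed
```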
Multiple RiOS data stores
SteelHead-v models VCX5055 through VCX7055 running RiOS 8.6 and later, and VCX70 through VCX90 running RiOS 9.6 and later, support up to 14 RiOS data stores using Fault Tolerant Storage (FTS). We recommend that all RiOS data stores on an appliance be the same size.
To add additional data stores, you must power off the VM.
In-path pairing for NIC interfaces
SteelHead-v models are not limited to a fixed number of NIC interfaces. However, the in-path pair limit is four (four LAN and four WAN interfaces), including bypass cards. If you want to use the SteelHead-v bypass feature, you are limited to the number of hardware bypass pairs the model can support.
Each SteelHead-v requires a primary and auxiliary interface, which are the first two interfaces added. If you add additional interface pairs to the VM, they are added as in-path optimization interfaces. Total bandwidth and connection limits still apply.
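The ordering rule above can be sketched as follows; the lanX_Y/wanX_Y names follow the convention mentioned earlier, with the slot digit simplified to a running index:

```python
# Sketch of the interface-ordering rule above: the first two virtual NICs
# become primary and aux; each later pair becomes an in-path LAN/WAN pair,
# up to the four-pair limit. Naming is simplified to lan<i>_0/wan<i>_0.
MAX_PAIRS = 4

def assign_roles(nic_count):
    roles = []
    if nic_count >= 1:
        roles.append('primary')
    if nic_count >= 2:
        roles.append('aux')
    pairs = min((nic_count - 2) // 2, MAX_PAIRS)
    for i in range(pairs):
        roles += [f'lan{i}_0', f'wan{i}_0']
    return roles

print(assign_roles(6))   # ['primary', 'aux', 'lan0_0', 'wan0_0', 'lan1_0', 'wan1_0']
```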
NICs for SteelHead-v
Riverbed NICs provide hardware-based fail-to-wire and fail-to-block capabilities for SteelHead-v. The configured failure mode is triggered if the ESX or ESXi host loses power or is unable to run the SteelHead-v guest, if the SteelHead-v guest is powered off, or if the SteelHead-v guest experiences a significant fault (using the same logic as the physical SteelHead).
Note: Physical fail-to-wire and fail-to-block NICs in SteelHead-v are not supported on Hyper-V and KVM.
Riverbed NICs are available in two-port and four-port configurations:
Riverbed NICs for SteelHead-v | Orderable part number | SteelHead-v models |
Two-Port 1-GbE TX Copper NIC | NIC-001-2TX | All |
Four-Port 1-GbE TX Copper NIC | NIC-002-4TX | V1050L, V1050M, V1050H, V2050L, V2050M, and V2050H; VCX255, VCX555, VCX755, VCX1555, VCX5055, and VCX7055; VCX10 through VCX90 |
Two-Port 10-GbE Multimode Fiber NIC (direct I/O only) | NIC-008-2SR | VCX5055 and VCX7055; VCX70 through VCX90 |
You must use Riverbed NICs for fail-to-wire or fail-to-block with SteelHead-v. NICs from other vendors without a bypass feature are supported for functionality other than fail-to-wire and fail-to-block, if they are supported by ESX or ESXi.
Requirements for SteelHead-v deployment with a NIC
To successfully install a NIC in an ESXi host for SteelHead-v, you need these items:
• ESXi host with a PCIe slot.
• vSphere client access to the ESXi host.
• VMware ESXi 5.0 or later and RiOS 8.0.3 or later.
—or—
VMware ESXi 4.1 and one of these RiOS versions:
– For V150, RiOS 7.0.3a or later.
– For V250, V550, V1050, and V2050, RiOS 7.0.2 or later.
– For VCX555, VCX755, and VCX1555, RiOS 8.0 or later.
For ESXi 4.1, you also need these items:
• ESXi bypass driver (a .vib file) available from https://support.riverbed.com.
• Intel 82580 Gigabit Ethernet network interface driver. By default, ESXi does not include this driver, which the Riverbed bypass card requires. If you do not have the driver installed, you can download it from the VMware website.
• SSH and SCP access to the ESXi host.
For more information about Riverbed NIC installation, see the Network and Storage Card Installation Guide. The installation procedure in this manual assumes that you have successfully installed a Riverbed NIC by following the instructions in the Network and Storage Card Installation Guide.
The number of hardware bypass pairs (that is, one LAN and one WAN port) supported is determined by the model of the SteelHead-v:
• Models V150, V250, and V550: one bypass pair
• Models V1050 and V2050: two bypass pairs (that is, two LAN and two WAN ports)
• Models VCX555, VCX755, VCX1555, VCX5055, and VCX7055: two bypass pairs
• Models VCX10 through VCX90: two bypass pairs
Note: You can install a four-port card in an ESXi host for a SteelHead-v 150, 250, or 550. However, only one port pair is available because the SteelHead-v model type determines the number of pairs.
These configurations have been tested:
• Two SteelHead-v guests, each using one physical pair on a single four-port Riverbed NIC card
• Two SteelHead-v guests connecting to separate cards
• One SteelHead-v guest connecting to bypass pairs on different NIC cards
For more information about installing and configuring SteelHead-v with a Riverbed NIC, see Completing the preconfiguration checklist.
SteelHead-v on the Cisco SRE
In addition to standard ESX and ESXi, you can run SteelHead-v on a Cisco server blade using the SRE platform, which is based on ESXi 5.0.
This table lists the SteelHead-v models supported on each supported Cisco SRE model and the required version of RiOS, disk configuration, and RAM.
SRE model | SteelHead-v model | RiOS version | Disk configuration | RAM |
910 | V1050H, VCX755H | 6.5.4+, 7+, 8+ | RAID1 | 8 GB |
910 | V1050M, VCX755M | 6.5.4+, 7+, 8+ | RAID1 | 4 GB |
900 | V1050M, VCX755M | 6.5.4+, 7+, 8+ | RAID1 | 4 or 8 GB |
700/710 | V250H | 6.5.4+, 7+, 8+ | Single disk | 4 GB |
300 | Not Supported | | | |
For more information about deploying SteelHead-v on a Cisco SRE blade, search the Riverbed knowledge base at https://supportkb.riverbed.com/support/index?page=home.