About Virtual Products
Riverbed offers several of its products in virtual form factors. Virtual products are easily deployed and make more efficient use of compute resources. If your enterprise has a virtualization environment, these products can help you manage and accelerate application performance across the platform and your network.
Our virtual products are software that run on virtual machines. Virtual machines run on hypervisors, which in turn run on physical hardware. Before using a virtual product from Riverbed, ensure that you understand its requirements and that your virtualization environment can supply the necessary resources. Our virtual products are compatible with many virtualization platforms, and their requirements and setup can differ depending on the environment where you’ll be using them. To ensure successful deployment and operation, take time to understand the requirements, limitations, and best practices for the product and its target virtualization platform.
The operation and functionality of hypervisors and virtual machines are beyond the scope of this document. Consult the documentation from the vendor of your virtualization tools for details about your hypervisor and setting up virtual machines.
For information about technical specifications, go to the product family specification sheet.
About virtual environment best practices
About in-path deployment
About virtual in-path deployment
About out-of-path deployment
About virtual environment best practices
These guidelines help you prepare your virtual environment for appliance deployment. Following them can help you avoid problems with your appliances and keep them running at optimal performance.
Connecting multiple LAN/WAN virtual interfaces to a single vSwitch or physical network interface card (NIC) could create a network loop that might make your hypervisor unreachable.
Host NICs, ports, and promiscuous mode
Host BIOS, CPU, RAM, and storage
ESXi hosts with Riverbed NICs
Host NICs, ports, and promiscuous mode
Use at least a gigabit link for each LAN/WAN interface. For best performance, ensure each LAN/WAN interface is backed by a dedicated NIC. Do not share physical NICs with other virtual machines. Using dedicated virtual switches and physical NICs also prevents network loops.
For all two-port NICs, ensure both ports are provided to the appliance. The number of NIC ports must always be even and in order; otherwise, the LAN/WAN pairs will not form and the appliance will not function properly.
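The pairing rule can be illustrated with a short sketch (the function and port names are hypothetical, not a Riverbed tool):

```python
def form_inpath_pairs(ports):
    """Group an ordered list of NIC ports into LAN/WAN pairs.

    Mirrors the rule above: the port count must be even and the ports
    must be in order, or the pairs cannot form. Port names are
    illustrative only.
    """
    if len(ports) % 2 != 0:
        raise ValueError("odd number of NIC ports; LAN/WAN pairs cannot form")
    # Consecutive ports pair up: (LAN, WAN), (LAN, WAN), ...
    return [(ports[i], ports[i + 1]) for i in range(0, len(ports), 2)]
```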
Enable promiscuous mode for the LAN/WAN vSwitch. Promiscuous mode allows the LAN/WAN NICs to intercept traffic not destined for the appliance. Promiscuous mode is mandatory for traffic acceleration on in-path deployments.
For VMware, use distinct port groups for LAN and WAN virtual NICs connected to a vSwitch. If you are running multiple appliances on a single virtual host, you must add the LAN/WAN virtual NIC from each virtual machine into a different port group on each vSwitch. Using distinct port groups for each LAN or WAN virtual NIC prevents the formation of network loops.
Host BIOS, CPU, RAM, and storage
When you provision the host virtual machine, account for overhead. Activity at the host level, and even at the hypervisor level, can produce overhead that causes the virtual machine to exceed its configured reservations.
Set your BIOS power management settings to maximize performance, if available.
Always use server-grade physical CPUs, and avoid overprovisioning them. For example, if a physical host has a quad-core CPU, the host’s virtual machines together should use no more than four vCPUs. We recommend you do not use hyperthreading. Hyperthreading can cause contention for the cores, resulting in significant loss of performance.
Always reserve RAM and ensure there is extra for overhead. The total amount of virtual RAM provisioned to all of the host’s virtual machines should not be greater than the host’s amount of physical RAM.
Back the appliance’s data store with high-quality physical storage devices. Physical storage must support a high number of input/output operations per second (IOPS). For best performance, avoid sharing physical storage across multiple virtual machines. Always allocate an unshared disk.
If you don’t allocate sufficient resources, the Virtual Machine Configuration alarm will be triggered and display a message relevant to the underprovisioned resource.
Resource   Alarm message
Memory     Not enough memory (available = x MB, required = x MB)
Storage    Not enough disk 2 storage (available = x MB, required = x MB)
CPU        Not enough cumulative CPU (available = x MHz, required = x MHz)
For example, you might receive the following alarm message:
Not enough cumulative CPU (available = 1861.260000 MHz, required = 2000.000000 MHz)
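The alarm logic amounts to a simple provisioning check. A sketch of it follows (the message templates come from the table above; the function and key names are illustrative, not part of the product):

```python
ALARM_TEMPLATES = {
    # Resource key -> alarm message template from the table above.
    "memory": "Not enough memory (available = {avail} MB, required = {req} MB)",
    "storage": "Not enough disk 2 storage (available = {avail} MB, required = {req} MB)",
    "cpu": "Not enough cumulative CPU (available = {avail} MHz, required = {req} MHz)",
}

def check_provisioning(available, required):
    """Return the alarm messages for any underprovisioned resources."""
    return [
        ALARM_TEMPLATES[key].format(avail=available[key], req=required[key])
        for key in ALARM_TEMPLATES
        if available[key] < required[key]
    ]
```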
Firewalls
Port requirements for SteelHead Mobile:
Ports 80 and 443 must be open for the server-side firewall management connection to the SteelHead Mobile Controller. Port 22 must be open for access to the command-line interface (CLI).
Either port 80 or port 443, plus port 7870, must be open for the connection to the SteelHead Mobile endpoints.
For SteelHead Mobile Controllers deployed behind a DMZ or screened subnet, open port 7800 for in-path deployments and port 7810 for out-of-path deployments.
If you’re using application control, you must allow these processes:
For Windows—rbtdebug.exe, rbtmon.exe, rbtsport.exe, and shmobile.exe
For Mac OS X—rbtsport, rbtmond, rbtuseragentd, and rbtdebug
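When validating these firewall rules, a quick TCP reachability probe can confirm that a port is open from a given vantage point. This is a generic sketch, not a Riverbed utility, and the hostname is a placeholder:

```python
import socket

def tcp_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: confirm the management ports toward a Mobile Controller.
# for port in (80, 443, 22):
#     print(port, tcp_port_open("controller.example.com", port))
```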
Virtual private networks
Typically, SteelHead Mobile endpoint software connects to server-side appliances through a virtual private network (VPN). If that is the case for your network, ensure that the VPN tunnel is not optimized. If the tunnel uses TCP for transport, ensure that your endpoint policies include pass-through rules for the VPN port number. Depending on your deployment scenario, you might want that rule to be the first in the rule list. If the tunnel uses UDP for transport, no rule is required.
VPNs that use IPsec as the transport protocol don't need a pass-through rule because IPsec is not a TCP-based protocol and, by default, the SteelHeads don't optimize it.
For a complete list of supported VPN software, go to Knowledge Base article S14999.
About in-path deployment
You can deploy virtual appliances in the same scenarios as physical ones, except that fail-to-wire requires the physical host to have Riverbed NICs. For deployments where a Riverbed bypass NIC is not an option, we recommend that you do not deploy your appliance in-path. If you are not using a bypass card, you can still have a failover mechanism by employing either a virtual in-path or an out-of-path deployment. These deployments allow a router using Web Cache Communication Protocol (WCCP) or policy-based routing (PBR) to handle failover.
NIC promiscuous mode is required for in-path deployments.
About virtual in-path deployment
In a virtual in-path deployment, the appliance is virtually in the path between clients and servers. Traffic moves in and out of the same WAN interface, and the LAN interface is not used. This deployment differs from a physical in-path deployment in that a packet redirection mechanism, such as WCCP or PBR, directs packets to appliances that are not in the physical path of the client or server. In this configuration, clients and servers continue to see client and server IP addresses.
On appliances with multiple WAN ports, you can deploy WCCP and PBR with the same multiple interface options available on physical appliances.
For a virtual in-path deployment, attach only the WAN virtual NIC to the physical NIC, and configure the router, using WCCP or PBR, to forward traffic to the appliance for acceleration. You must also enable virtual in-path support on the appliance. To do that, enable L4/PBR/WCCP/Interceptor Support on the General Service Settings page of the Management Console.
About out-of-path deployment
In an out-of-path (OOP) deployment, SteelHead is not in the direct path between the client and the server. Servers see the IP address of the server-side SteelHead installation rather than the client IP address, which might have an impact on security policies. For a virtual OOP deployment, connect the primary interface to the physical in-path NIC and configure the router to forward traffic to this NIC. You must also enable OOP on the appliance. Also, these caveats apply to server-side OOP appliances:
Autodiscovery is not supported. Configure fixed-target rules on client-side appliances.
Interception is not supported on the primary interface.
You must create an OOP connection from an in-path or logical in-path appliance and direct it to port 7810 on the primary interface of the server-side SteelHead. This setting is mandatory.
An OOP configuration provides nontransparent acceleration from the server's perspective. Clients connect to servers, but servers see the connections as coming from the server-side SteelHead. This affects log files, server-side ACLs, and bidirectional applications such as rsh.
You can use OOP configurations along with in-path or logical in-path configurations.
About appliance considerations
SteelHead is software that delivers acceleration similar to that of the SteelHead hardware appliance, while also providing the flexibility of virtualization. Built on the same underlying operating system as the physical SteelHead, it supports:
out-of-path deployments.
virtual in-path deployments.
physical in-path deployments with bypass cards.
fail-to-wire and fail-to-block with Riverbed NICs.
asymmetric route detection.
connection-forwarding.
high availability in active-active configurations, with data store synchronization across serial clusters.
management and reporting through SteelHead Central Controller (SCC).
SteelHead does not support:
Proxy file service (PFS).
hardware reports, such as those for disk status.
hardware-based alerts and notifications, such as a RAID alarm.
The product runs on VMware ESXi, Microsoft Hyper‑V, and Linux KVM hypervisors installed on industry-standard hardware servers. SteelHead on VMware vSphere is certified for the Cisco Service-Ready Engine (SRE) module with Cisco Services-Ready Engine Virtualization (Cisco SRE-V).
SteelHead supports up to 24 virtual CPUs and 10 interfaces.
About controller considerations
SteelHead Central Controller (SCC) and SteelHead Mobile Controller provide central management and reporting for multiple, separate deployments of acceleration products. You can run multiple virtual instances of these controller products on a single physical host, provided that the host has sufficient resources. The amount of resources the host virtual machine needs depends on the number of appliances you plan to manage through the controller.
You can access controllers through HTTP, HTTPS, or SSH. Access is configured through the controller’s Management Console.
SteelHead Central Controller
Use the following table to determine the resources needed for SCC. Use the 64-bit SCC when managing more than 100 appliances.
For SteelHead management, each controller instance requires port 80 and port 443 inbound from the SteelHeads and port 22 outbound from each controller instance to managed appliances.
Maximum managed appliances   Minimum datastore size   Minimum RAM   Minimum CPU
50                           50 GB                    4 GB          2 cores, 2 GHz
100                          100 GB                   6 GB          2 cores, 2 GHz
250                          250 GB                   6 GB          2 cores, 2 GHz
500                          400 GB                   16 GB         4 cores, 2.4 GHz
1500                         400 GB                   32 GB         4 cores, 4 GHz
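For capacity planning, the sizing table can be expressed as a simple lookup. The values below are copied from the table; the helper itself is illustrative, not part of the product:

```python
# (max appliances, min datastore, min RAM, min CPU) rows from the table above.
SCC_SIZING = [
    (50, "50 GB", "4 GB", "2 cores, 2 GHz"),
    (100, "100 GB", "6 GB", "2 cores, 2 GHz"),
    (250, "250 GB", "6 GB", "2 cores, 2 GHz"),
    (500, "400 GB", "16 GB", "4 cores, 2.4 GHz"),
    (1500, "400 GB", "32 GB", "4 cores, 4 GHz"),
]

def scc_minimums(appliance_count):
    """Return the minimum SCC resources for the given appliance count."""
    for cap, datastore, ram, cpu in SCC_SIZING:
        if appliance_count <= cap:
            return {"datastore": datastore, "ram": ram, "cpu": cpu}
    raise ValueError("more than 1500 appliances is not covered by the table")
```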
SteelHead Mobile Controller
The table below lists the recommended resources for the indicated number of managed endpoints. By default, the controller is configured to support 100 endpoints.
The controller uses disk 1 for management and disk 2 for statistics.
When the size of disk 2 is increased to accommodate additional endpoints, the controller resizes it nondestructively; however, the contents of the disk are deleted if its size is decreased.
Maximum endpoints   Minimum RAM   Minimum disk 2   Minimum CPU
100                 3 GB          3 GB             1 GHz
1,000               3 GB          15 GB            1 GHz
2,000               4 GB          50 GB            2 GHz
4,000               6 GB          100 GB           2 GHz
20,000              16 GB         500 GB SSD       9.2 GHz
You must have valid licenses installed on all managed appliances and endpoints. A missing base license on an appliance or endpoint can cause the health of the controller to become critical and can temporarily invalidate other licenses on the controller.
About flexible data stores
The flexible data store feature supports a smaller data store size, down to a minimum 12 GB.
To change the disk size or add disks, you must power off the host virtual machine. To reduce the data store size, detach the existing data store disk and attach a new, smaller one; simply attempting to shrink the existing disk will not work.
Modifying the disk size causes the data store to automatically clear.
If you provide a disk size larger than the configured data store for the model, the entire disk is partitioned, but only the allotted amount for the model is used.
SteelHead models VCX70 through VCX110 support multiple data stores using Fault Tolerant Storage (FTS). We recommend that all data stores on an appliance be the same size.
About network interface cards
Each appliance instance requires a primary and auxiliary interface. If you add additional interface pairs to the virtual machine, they are added as in-path acceleration interfaces. Total bandwidth and connection limits still apply, regardless of the number of interfaces.
The in-path limit is four LAN/WAN interface pairs, including bypass cards. If you want to use the bypass feature, you are limited to the number of hardware bypass pairs your model supports. The bypass feature is available on all supported virtualization platforms.
Riverbed NICs are available in two-port and four-port configurations. Third-party NICs without a bypass feature are supported for functionality other than fail-to-wire and fail-to-block, provided that the underlying hypervisor supports them.
ESXi hosts with Riverbed NICs
SteelHead models VCX30 through VCX110 support two bypass LAN/WAN interface pairs. These configurations have been tested:
Two SteelHead guests, each using one physical pair on a single four-port Riverbed NIC
Two SteelHead guests connecting to separate cards
One SteelHead guest connecting to bypass pairs on different NICs