About virtual environment best practices
These guidelines help you prepare your virtual environment for appliance deployment. Following them can help you avoid problems with your appliances and keep them running at optimal performance.
Connecting multiple LAN/WAN virtual interfaces to a single vSwitch or physical network interface card (NIC) could create a network loop that might make your hypervisor unreachable.
Host NICs, ports, and promiscuous mode
Use at least a gigabit link for each LAN/WAN interface. For best performance, ensure each LAN/WAN interface is backed by a dedicated NIC. Do not share physical NICs with other virtual machines. Using dedicated virtual switches and physical NICs also prevents network loops.
For all two-port NICs, ensure both ports are provided to the appliance. Provide the NICs in even numbers and in order; otherwise, the LAN/WAN pairs will not form correctly and the appliance will not function.
Enable promiscuous mode for the LAN/WAN vSwitch. Promiscuous mode allows the LAN/WAN NICs to intercept traffic not destined for the appliance. Promiscuous mode is mandatory for traffic acceleration on in-path deployments.
For VMware, use distinct port groups for LAN and WAN virtual NICs connected to a vSwitch. If you are running multiple appliances on a single virtual host, you must add the LAN/WAN virtual NIC from each virtual machine into a different port group on each vSwitch. Using distinct port groups for each LAN or WAN virtual NIC prevents the formation of network loops.
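On an ESXi host, these vSwitch settings can be applied from the host shell with esxcli. The vSwitch and port-group names below are placeholders for illustration; substitute the names used in your environment.

```shell
# Enable promiscuous mode on the vSwitch backing the LAN/WAN interfaces
# (vSwitchLANWAN is a placeholder name).
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitchLANWAN --allow-promiscuous=true

# Create a distinct port group for each LAN and WAN virtual NIC
# (Appliance1-LAN and Appliance1-WAN are placeholder names).
esxcli network vswitch standard portgroup add \
    --portgroup-name=Appliance1-LAN --vswitch-name=vSwitchLANWAN
esxcli network vswitch standard portgroup add \
    --portgroup-name=Appliance1-WAN --vswitch-name=vSwitchLANWAN
```

When running multiple appliances on one host, repeat the port-group commands with a unique pair of names per appliance so that no two appliances share a LAN or WAN port group.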
Host BIOS, CPU, RAM, and storage
When you provision the host virtual machine, account for overhead. Activity at the host level, and even at the hypervisor level, can produce overhead that causes the virtual machine to exceed its configured reservations.
Set your BIOS power management settings to maximize performance, if available.
Always use server-grade physical CPUs, and avoid overprovisioning them. For example, if a physical host has a quad-core CPU, the host’s virtual machines together should use no more than four vCPUs. We recommend that you do not use hyperthreading, because it can cause contention for the cores, resulting in a significant loss of performance.
Always reserve RAM and ensure there is extra for overhead. The total amount of virtual RAM provisioned to all of the host’s virtual machines should not be greater than the host’s amount of physical RAM.
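The CPU and RAM rules above can be expressed as a short sanity check. This is an illustrative sketch, not a product tool; the 10% overhead margin is an assumed figure, and `provisioning_ok` is a hypothetical helper name.

```python
# Sketch of the provisioning rules above: total vCPUs must not exceed
# physical cores, and total reserved vRAM (plus an overhead margin)
# must not exceed physical RAM. The 10% overhead is an illustrative
# assumption, not a product requirement.

def provisioning_ok(physical_cores, physical_ram_mb,
                    vm_vcpus, vm_ram_mb, overhead_fraction=0.10):
    """Return True if the host can back all VMs without overprovisioning."""
    cpu_ok = sum(vm_vcpus) <= physical_cores
    ram_needed = sum(vm_ram_mb) * (1 + overhead_fraction)
    ram_ok = ram_needed <= physical_ram_mb
    return cpu_ok and ram_ok

# A quad-core host with 32 GB RAM running two VMs:
print(provisioning_ok(4, 32768, [2, 2], [8192, 8192]))  # True
print(provisioning_ok(4, 32768, [2, 4], [8192, 8192]))  # False: 6 vCPUs > 4 cores
```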
Back the appliance’s data store with high-quality physical storage devices. Physical storage must support a high number of input/output operations per second (IOPS). For best performance, avoid sharing physical storage across multiple virtual machines. Always allocate an unshared disk.
If you don’t allocate sufficient resources, the Virtual Machine Configuration alarm is triggered and displays a message identifying the underprovisioned resource.
| Resource | Alarm message |
| --- | --- |
| Memory | Not enough memory (available = x MB, required = x MB) |
| Storage | Not enough disk 2 storage (available = x MB, required = x MB) |
| CPU | Not enough cumulative CPU (available = x MHz, required = x MHz) |
For example, you might receive the following alarm message:
Not enough cumulative CPU (available = 1861.260000 MHz, required = 2000.000000 MHz)
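The alarm messages share a common pattern, which the following sketch reproduces for the example above. The `alarm_message` function and its formatting are illustrative, not the appliance’s actual implementation.

```python
# Illustrative reconstruction of the alarm-message pattern from the
# table above; not the appliance's actual code.

def alarm_message(resource, available, required):
    """Format an underprovisioning alarm for Memory, Storage, or CPU."""
    names = {
        "Memory": ("Not enough memory", "MB"),
        "Storage": ("Not enough disk 2 storage", "MB"),
        "CPU": ("Not enough cumulative CPU", "MHz"),
    }
    name, unit = names[resource]
    return (f"{name} (available = {available:.6f} {unit}, "
            f"required = {required:.6f} {unit})")

print(alarm_message("CPU", 1861.26, 2000.0))
# Not enough cumulative CPU (available = 1861.260000 MHz, required = 2000.000000 MHz)
```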
Firewalls
Port requirements for Client Accelerator:
• Ports 80 and 443 must be open on the server-side firewall for the management connection to the Client Accelerator Controller. Port 22 must be open for access to the command-line interface (CLI).
• Either port 80 or port 443, plus port 7870, must be open for the connection to the Client Accelerator endpoints.
• For Client Accelerator Controllers deployed behind a DMZ or screened subnet, open port 7800 for in-path deployments and port 7810 for out-of-path deployments.
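On a Linux-based firewall, the port requirements above might translate into rules like the following iptables sketch. Adapt it to your firewall product, and open only the ports that apply to your deployment type.

```shell
# Illustrative iptables rules for the Client Accelerator ports listed
# above; adapt to your firewall product and deployment type.
iptables -A INPUT -p tcp --dport 80   -j ACCEPT  # management (HTTP)
iptables -A INPUT -p tcp --dport 443  -j ACCEPT  # management (HTTPS)
iptables -A INPUT -p tcp --dport 22   -j ACCEPT  # CLI (SSH)
iptables -A INPUT -p tcp --dport 7870 -j ACCEPT  # endpoint connection
iptables -A INPUT -p tcp --dport 7800 -j ACCEPT  # in-path, behind a DMZ
iptables -A INPUT -p tcp --dport 7810 -j ACCEPT  # out-of-path, behind a DMZ
```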
If you’re using application control, you must allow these processes:
• For Windows—rbtdebug.exe, rbtmon.exe, rbtsport.exe, and shmobile.exe
• For Mac OS X—rbtsport, rbtmond, rbtuseragentd, and rbtdebug
Virtual private networks
Typically, Client Accelerator endpoint software connects to server-side appliances through a virtual private network (VPN). If that is the case for your network, ensure that the VPN tunnel is not optimized. If the tunnel uses TCP for transport, ensure that your endpoint policies include pass-through rules for the VPN port number. Depending on your deployment scenario, you might want that rule to be the first in the rule list. If the port uses UDP, no rule is required.
VPNs that use IPsec as the transport protocol don’t need a pass-through rule because IPsec runs as its own non-TCP protocol and, by default, the SteelHeads don’t optimize it.
For a complete list of supported VPN software, go to Knowledge Base article S14999.