Transport Rules
You configure the TCP settings for the selected optimization policy in the Transport Settings page.
To properly configure transport settings for your environment, you need to understand its characteristics. For information on gathering performance characteristics for your environment, see the SteelHead Deployment Guide.
For detailed information about transport settings, see the SteelHead Management Console User’s Guide for SteelHead CX.
Enabling Congestion Control Algorithm
Complete the configuration as described in this table.
Control
Description
Congestion Control Algorithm
Select a method for congestion control from the drop-down list.
•  Standard (RFC-Compliant) - Optimizes non-SCPS TCP connections by applying data and transport streamlining for TCP traffic over the WAN. This control forces peers to use standard TCP as well. For details on data and transport streamlining, see the SteelHead Deployment Guide. This option clears any advanced bandwidth congestion control that was previously set.
•  HighSpeed - Enables high-speed TCP optimization for more complete use of long fat pipes (high-bandwidth, high-delay networks). Do not enable for satellite networks.
Riverbed recommends that you enable high-speed TCP optimization only after you have carefully evaluated whether it will benefit your network environment. For details about the trade-offs of enabling high-speed TCP, see tcp highspeed enable in the Riverbed Command-Line Interface Reference Manual.
•  Bandwidth Estimation - Uses an intelligent bandwidth estimation algorithm along with a modified slow-start algorithm to optimize performance in long, lossy networks. These networks typically include satellite and other wireless environments, such as cellular, long-range microwave, or WiMAX networks.
Bandwidth estimation is a sender-side modification of TCP and is compatible with the other TCP stacks in the RiOS system. The intelligent bandwidth estimation is based on analysis of both ACKs and latency measurements. The modified slow-start mechanism enables a flow to ramp up faster in high-latency environments than traditional TCP. The bandwidth estimation algorithm learns effective rates for use during modified slow start and differentiates bit-error-rate (BER) loss from congestion-derived loss, handling each accordingly. Bandwidth estimation is fair and friendly toward other traffic along the path.
•  SkipWare Per-Connection - Applies TCP congestion control to each SCPS-capable connection. This method is compatible with IPv6. The congestion control uses:
•  A pipe algorithm that gates when a packet should be sent after receipt of an ACK.
•  The NewReno algorithm, which includes the sender's congestion window, slow start, and congestion avoidance.
•  Time stamps, window scaling, appropriate byte counting, and loss detection.
This transport setting uses a modified slow-start algorithm and a modified congestion-avoidance approach, which enable SCPS per-connection flows to ramp up faster in high-latency environments and to handle lossy scenarios while remaining reasonably fair and friendly to other traffic. SCPS per connection efficiently fills satellite links of all sizes and is a high-performance option for satellite networks.
The Management Console dims this setting until you install a SkipWare license.
 
•  SkipWare Error-Tolerant - Enables SkipWare optimization with the error-rate detection and recovery mechanism on the SteelHead. This method is compatible with IPv6.
This method tolerates some loss due to corrupted packets (bit errors) without reducing throughput, using a modified slow-start algorithm and a modified congestion-avoidance approach. It requires significantly more retransmitted packets than the SkipWare per-connection setting to trigger congestion avoidance. Error-tolerant TCP optimization assumes that the environment has a high BER and that most retransmissions are due to poor signal quality rather than congestion. This method maximizes performance in high-loss environments without incurring the additional per-packet overhead of a forward error correction (FEC) algorithm at the transport layer.
Use caution when enabling error-tolerant TCP optimization, particularly in channels with coexisting TCP traffic, because it can be quite aggressive and adversely affect channel congestion with competing TCP flows.
The Management Console dims this setting until you install a SkipWare license.
Enable Rate Pacing
Imposes a global data transmit limit at the link rate for all SCPS connections, either between peer SteelHeads or between a SteelHead and a third-party device running a TCP Performance Enhancing Proxy (TCP-PEP).
Rate pacing combines MX-TCP and a congestion-control method of your choice for connections between peer SteelHeads and SEI connections (on a per-rule basis). The congestion-control method runs as an overlay on top of MX-TCP and probes for the actual link rate. It then communicates the available bandwidth to MX-TCP.
Enable rate pacing to prevent these problems:
•  Congestion loss while exiting the slow-start phase. Slow start is the part of TCP congestion control in which the sender gradually increases its window size as it gains confidence in the available network throughput. (A simplified sketch of this window growth appears after this table.)
•  Congestion collapse.
•  Packet bursts.
Rate pacing is disabled by default.
With no congestion, slow start ramps up to the MX-TCP rate and settles there. When RiOS detects congestion (whether from other traffic sources, a bottleneck other than the satellite modem, or a variable modem rate), the congestion-control method takes over to avoid congestion loss and to exit the slow-start phase faster.
Enable rate pacing on the client-side SteelHead along with a congestion-control method. The client-side SteelHead communicates to the server-side SteelHead that rate pacing is in effect. You must also:
•  Enable Auto-Detect TCP Optimization on the server-side SteelHead to negotiate the configuration with the client-side SteelHead.
•  Configure an MX-TCP QoS rule to set the appropriate rate cap. If an MX-TCP QoS rule is not in place, rate pacing is not applied and the congestion-control method takes effect. You cannot delete the MX-TCP QoS rule when rate pacing is enabled.
The Management Console dims this setting until you install a SkipWare license.
Rate pacing does not support IPv6.
You can also enable rate pacing for SEI connections by defining an SEI rule for each connection.
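The congestion-control options above refer repeatedly to slow start and congestion-avoidance window growth. The following Python sketch is a simplified, generic model of that behavior (exponential growth per round trip during slow start, additive increase afterward); it is not the RiOS, SCPS, or SkipWare implementation, and the round-trip count and threshold are assumptions chosen only for illustration.

```python
# Simplified illustration of TCP slow start and congestion avoidance.
# This is a generic textbook model, not the RiOS/SkipWare algorithm.

def simulate_cwnd(rtts, ssthresh=64, initial_cwnd=1):
    """Return the congestion window (in segments) after each RTT."""
    cwnd = initial_cwnd
    history = []
    for _ in range(rtts):
        if cwnd < ssthresh:
            cwnd *= 2          # slow start: exponential growth per RTT
        else:
            cwnd += 1          # congestion avoidance: additive increase
        history.append(cwnd)
    return history

if __name__ == "__main__":
    # Assumed example: 12 round trips on an otherwise idle link.
    for rtt, cwnd in enumerate(simulate_cwnd(12), start=1):
        print(f"RTT {rtt:2d}: cwnd = {cwnd} segments")
```

The modified slow-start and bandwidth-estimation variants described above change how quickly this window ramps up and how loss is interpreted, but the underlying window mechanics they build on are the same.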
Configuring Buffer Settings
The buffer settings in the Transport Settings page support high-speed TCP and are also used in data protection scenarios to improve performance. For details about data protection deployments, see the SteelHead Deployment Guide.
To properly configure buffer settings for a satellite environment, you need to understand its characteristics. For information on gathering performance characteristics for your environment, see the SteelHead Deployment Guide.
The high-speed TCP feature provides acceleration and high throughput for high-bandwidth links (also known as Long Fat Networks, or LFNs) where the WAN pipe is large but latency is high. High-speed TCP is activated for all connections that have a bandwidth-delay product (BDP) larger than 100 packets.
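Because high-speed TCP activates only when a connection's BDP exceeds 100 packets, you can estimate in advance whether a given link qualifies. The Python sketch below shows the arithmetic; the 45 Mbps bandwidth, 550 ms round-trip time, and 1460-byte segment size are assumed example values, not measurements from your environment.

```python
# Estimate the bandwidth-delay product (BDP) of a WAN link in packets.
# Example link values below are assumptions for illustration only.

def bdp_packets(bandwidth_bps, rtt_seconds, segment_bytes=1460):
    """BDP in full-sized segments: (bandwidth x RTT) / segment size."""
    bdp_bytes = (bandwidth_bps * rtt_seconds) / 8
    return bdp_bytes / segment_bytes

if __name__ == "__main__":
    bandwidth = 45_000_000   # 45 Mbps link (assumed)
    rtt = 0.550              # 550 ms round-trip time (assumed)
    packets = bdp_packets(bandwidth, rtt)
    print(f"BDP is roughly {packets:.0f} packets")
    print("High-speed TCP would activate" if packets > 100
          else "High-speed TCP would not activate")
```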
For details about using HS-TCP in data protection scenarios, see the SteelHead Deployment Guide.
Automatic HighSpeed TCP is disabled by default. For details about HighSpeed TCP, see the SteelHead Management Console User’s Guide for SteelHead CX.
Complete the configuration as described in this table.
Control
Description
LAN Send Buffer Size
Specify the send buffer size, in bytes, used to send data out of the LAN. The default value is 81920.
LAN Receive Buffer Size
Specify the receive buffer size, in bytes, used to receive data from the LAN. The default value is 32768.
WAN Default Send Buffer Size
Specify the send buffer size, in bytes, used to send data out of the WAN. The default value is 262140.
WAN Default Receive Buffer Size
Specify the receive buffer size, in bytes, used to receive data from the WAN. The default value is 262140. (A sizing sketch follows this table.)
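A common sizing heuristic for LFN and data protection deployments is to set the WAN send and receive buffers to roughly twice the link's BDP in bytes; consult the SteelHead Deployment Guide for the guidance that applies to your environment. The Python sketch below shows that arithmetic with assumed example link values and compares the result to the default of 262140 bytes.

```python
# Rough WAN buffer sizing from the bandwidth-delay product (BDP).
# The 2 x BDP multiplier is a common heuristic; confirm against the
# SteelHead Deployment Guide for your environment. Link values are assumed.

def wan_buffer_bytes(bandwidth_bps, rtt_seconds, multiplier=2):
    bdp_bytes = (bandwidth_bps * rtt_seconds) / 8
    return int(bdp_bytes * multiplier)

if __name__ == "__main__":
    bandwidth = 20_000_000   # 20 Mbps WAN link (assumed)
    rtt = 0.100              # 100 ms round-trip time (assumed)
    size = wan_buffer_bytes(bandwidth, rtt)
    print(f"Suggested WAN send/receive buffer: about {size} bytes")
    print(f"Default of 262140 bytes is "
          f"{'sufficient' if size <= 262140 else 'too small'}")
```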
Enabling and Adding Single-Ended Connection Rules
You can optionally add rules to control single-ended SCPS connections. The SteelHead uses these rules to determine whether to enable or pass through SCPS connections.
A SteelHead receiving a SCPS connection on the WAN evaluates only the single-ended connection rules table.
To pass through a SCPS connection, Riverbed recommends setting both an in-path rule and a single-ended connection rule.
Complete the configuration as described in this table.
Control
Description
Enable Single-Ended Connection Rules Table
Enables transport optimization for single-ended interception connections with no SteelHead peer. These connections appear in the rules table.
In RiOS 8.5 or later, you can impose rate pacing for single-ended interception connections with no peer SteelHead. By defining an SEI connection rule, you can enforce rate pacing even when the SteelHead is not peered with a SCPS device and SCPS is not negotiated.
To enforce rate pacing for a single-ended interception connection, create an SEI connection rule for use as a transport-optimization proxy, select a congestion method for the rule, and then configure a QoS rule (with the same client/server subnet) to use MX-TCP. RiOS 8.5 and later accelerate the WAN-originated or LAN-originated proxied connection using MX-TCP.
By default, the SEI connection rules table is disabled. When enabled, two default rules appear in the rules table. The first default rule matches all traffic with the destination port set to the interactive port label and bypasses the connection for SCPS optimization.
The second default rule matches all traffic with the destination port set to the RBT-Proto port label and bypasses the connection for SCPS optimization.
This option does not affect the optimization of SCPS connections between SteelHeads.
When you disable the table, you can still add, move, or remove rules, but the changes do not take effect until you reenable the table.
The Management Console dims the SEI rules table until you install a SkipWare license.
Enable SkipWare Legacy Compression
Enables negotiation of SCPS-TP TCP header and data compression with a remote SCPS-TP device. This feature enables interoperation with RSP SkipWare packages and TurboIP devices that have also been configured to negotiate TCP header and data compression.
Legacy compression is disabled by default.
After enabling or disabling legacy compression, you must restart the optimization service.
The Management Console dims legacy compression until you install a SkipWare license and enable the SEI rules table.
Legacy compression also works with non-SCPS TCP algorithms.
These limits apply to legacy compression:
•  This feature is not compatible with IPv6.
•  Packets with a compressed TCP header use IP protocol 105 in the encapsulating IP header; this might require changes to intervening firewalls to permit protocol 105 packets to pass.
•  This feature supports a maximum of 255 connections between any pair of end-host IP addresses. The connection limit for legacy SkipWare connections is the same as the appliance-connection limit.
•  QoS limits for the SteelHead apply to the legacy SkipWare connections.
Adding Single-Ended Connection Rules
You can optionally add rules to control single-ended SCPS connections. The SteelHead uses these rules to determine whether to enable or pass through SCPS connections.
A SteelHead receiving a SCPS connection on the WAN evaluates only the single-ended connection rules table.
To pass through a SCPS connection, Riverbed recommends setting both an in-path rule and a single-ended connection rule.
Complete the configuration as described in this table.
Control
Description
Add New Rule
Displays the controls for adding a new rule.
Position
Select Start, End, or a rule number from the drop-down list. SteelHeads evaluate rules in numerical order, starting with rule 1. If the conditions set in a rule match, the rule is applied and the system moves on to the next packet; if they do not match, the system consults the next rule. For example, if the conditions of rule 1 do not match, rule 2 is consulted; if rule 2 matches, it is applied and no further rules are consulted. (A minimal first-match evaluation sketch appears after this table.)
Source Subnet
Specify an IPv4 or IPv6 address and mask for the traffic source; otherwise, specify All-IP for all IPv4 and IPv6 traffic.
Use these formats:
xxx.xxx.xxx.xxx/xx (IPv4)
x:x:x::x/xxx (IPv6)
Destination Subnet
Specify an IPv4 or IPv6 address and mask pattern for the traffic destination; otherwise, specify All-IP for all traffic.
Use these formats:
xxx.xxx.xxx.xxx/xx (IPv4)
x:x:x::x/xxx (IPv6)
Port or Port Label
Specify the destination port number, port label, or all.
Click Port Label to go to the Networking > App Definitions: Port Labels page for reference.
VLAN Tag ID
Specify one of the following: a VLAN identification number from 1 to 4094; all to specify that the rule applies to all VLANs; or untagged to specify that the rule applies to untagged connections.
RiOS supports VLAN 802.1Q. To configure VLAN tagging, configure SCPS rules to apply to all VLANs or to a specific VLAN. By default, rules apply to all VLAN values unless you specify a particular VLAN ID. Pass-through traffic maintains any preexisting VLAN tagging between the LAN and WAN interfaces.
Web Proxy
Specify one of the following options from the drop-down list:
•  Ignore - Ignores web proxy settings for this rule.
•  Disabled - Disables web proxy settings for this rule.
•  Enabled - Enables web proxy settings for this rule.
Traffic
Specifies the action that the rule takes on a SCPS connection. To allow single-ended interception SCPS connections to pass through the SteelHead unoptimized, disable SCPS Discover and TCP Proxy.
Select one of these options:
•   SCPS Discover - Turns on SCPS and turns off TCP proxy.
•   TCP Proxy - Turns off SCPS and turns on TCP proxy.
Congestion Control Algorithm
Select a method for congestion control from the drop-down list.
•  Standard (RFC-Compliant) - Optimizes non-SCPS TCP connections by applying data and transport streamlining for TCP traffic over the WAN. This control forces peers to use standard TCP as well. For details on data and transport streamlining, see the SteelHead Deployment Guide. This option clears any advanced bandwidth congestion control that was previously set.
•  HighSpeed - Enables high-speed TCP optimization for more complete use of long fat pipes (high-bandwidth, high-delay networks). Do not enable for satellite networks.
Riverbed recommends that you enable high-speed TCP optimization only after you have carefully evaluated whether it will benefit your network environment. For details about the trade-offs of enabling high-speed TCP, see tcp highspeed enable in the Riverbed Command-Line Interface Reference Manual.
•  Bandwidth Estimation - Uses an intelligent bandwidth estimation algorithm along with a modified slow-start algorithm to optimize performance in long, lossy networks. These networks typically include satellite and other wireless environments, such as cellular, long-range microwave, or WiMAX networks.
Bandwidth estimation is a sender-side modification of TCP and is compatible with the other TCP stacks in the RiOS system. The intelligent bandwidth estimation is based on analysis of both ACKs and latency measurements. The modified slow-start mechanism enables a flow to ramp up faster in high-latency environments than traditional TCP. The bandwidth estimation algorithm learns effective rates for use during modified slow start and differentiates bit-error-rate (BER) loss from congestion-derived loss, handling each accordingly. Bandwidth estimation is fair and friendly toward other traffic along the path.
•  SkipWare Per-Connection - Applies TCP congestion control to each SCPS-capable connection. This method is compatible with IPv6. The congestion control uses:
•  A pipe algorithm that gates when a packet should be sent after receipt of an ACK.
•  The NewReno algorithm, which includes the sender's congestion window, slow start, and congestion avoidance.
•  Time stamps, window scaling, appropriate byte counting, and loss detection.
This transport setting uses a modified slow-start algorithm and a modified congestion-avoidance approach, which enable SCPS per-connection flows to ramp up faster in high-latency environments and to handle lossy scenarios while remaining reasonably fair and friendly to other traffic. SCPS per connection efficiently fills satellite links of all sizes and is a high-performance option for satellite networks.
The Management Console dims this setting until you install a SkipWare license.
 
•  SkipWare Error-Tolerant - Enables SkipWare optimization with the error-rate detection and recovery mechanism on the SteelHead. This method is compatible with IPv6.
This method tolerates some loss due to corrupted packets (bit errors) without reducing throughput, using a modified slow-start algorithm and a modified congestion-avoidance approach. It requires significantly more retransmitted packets than the SkipWare per-connection setting to trigger congestion avoidance. Error-tolerant TCP optimization assumes that the environment has a high BER and that most retransmissions are due to poor signal quality rather than congestion. This method maximizes performance in high-loss environments without incurring the additional per-packet overhead of a forward error correction (FEC) algorithm at the transport layer.
Use caution when enabling error-tolerant TCP optimization, particularly in channels with coexisting TCP traffic, because it can be quite aggressive and adversely affect channel congestion with competing TCP flows.
The Management Console dims this setting until you install a SkipWare license.
Enable Rate Pacing
Imposes a global data transmit limit at the link rate for all SCPS connections, either between peer SteelHeads or between a SteelHead and a third-party device running a TCP Performance Enhancing Proxy (TCP-PEP).
Rate pacing combines MX-TCP and a congestion-control method of your choice for connections between peer SteelHeads and SEI connections (on a per-rule basis). The congestion-control method runs as an overlay on top of MX-TCP and probes for the actual link rate. It then communicates the available bandwidth to MX-TCP.
Enable rate pacing to prevent these problems:
•  Congestion loss while exiting the slow-start phase. Slow start is the part of TCP congestion control in which the sender gradually increases its window size as it gains confidence in the available network throughput.
•  Congestion collapse.
•  Packet bursts.
Rate pacing is disabled by default.
With no congestion, slow start ramps up to the MX-TCP rate and settles there. When RiOS detects congestion (whether from other traffic sources, a bottleneck other than the satellite modem, or a variable modem rate), the congestion-control method takes over to avoid congestion loss and to exit the slow-start phase faster.
Enable rate pacing on the client-side SteelHead along with a congestion-control method. The client-side SteelHead communicates to the server-side SteelHead that rate pacing is in effect. You must also:
•  Enable Auto-Detect TCP Optimization on the server-side SteelHead to negotiate the configuration with the client-side SteelHead.
•  Configure an MX-TCP QoS rule to set the appropriate rate cap. If an MX-TCP QoS rule is not in place, rate pacing is not applied and the congestion-control method takes effect. You cannot delete the MX-TCP QoS rule when rate pacing is enabled.
The Management Console dims this setting until you install a SkipWare license.
Rate pacing does not support IPv6.
You can also enable rate pacing for SEI connections by defining an SEI rule for each connection.
Add
Adds the rule to the list. The Management Console redisplays the SCPS Rules table and applies your modifications to the running configuration, which is stored in memory.
Remove Selected Rules
Select the check box next to the name and click Remove Selected.
Move Selected Rules
Moves the selected rules. Click the arrow next to the desired rule position; the rule moves to the new position.
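To make the first-match evaluation order and the subnet formats described above concrete, the following Python sketch walks an ordered rule list and applies the first rule whose source subnet, destination subnet, and destination port all match a connection. It is an illustrative model of the documented evaluation behavior, not the SteelHead rule engine; the rule entries, field names, and helper functions are assumptions for the example.

```python
# Illustrative first-match evaluation over single-ended connection rules.
# This models the documented behavior (rules checked in order, first match
# wins); it is not the SteelHead implementation. Rule values are assumed.
import ipaddress

def matches(rule, src_ip, dst_ip, dst_port):
    """True when the connection falls inside the rule's subnets and port."""
    src_ok = ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
    dst_ok = ipaddress.ip_address(dst_ip) in ipaddress.ip_network(rule["dst"])
    port_ok = rule["port"] == "all" or rule["port"] == dst_port
    return src_ok and dst_ok and port_ok

def first_match(rules, src_ip, dst_ip, dst_port):
    for number, rule in enumerate(rules, start=1):
        if matches(rule, src_ip, dst_ip, dst_port):
            return number, rule["traffic"]
    return None, "no rule matched"

if __name__ == "__main__":
    # Assumed example rules in evaluation order (rule 1 first).
    rules = [
        {"src": "10.1.0.0/16", "dst": "10.2.0.0/16", "port": 443,
         "traffic": "TCP Proxy"},
        {"src": "0.0.0.0/0", "dst": "0.0.0.0/0", "port": "all",
         "traffic": "SCPS Discover"},
    ]
    print(first_match(rules, "10.1.5.9", "10.2.7.7", 443))  # matches rule 1
    print(first_match(rules, "10.9.9.9", "10.2.7.7", 80))   # falls to rule 2
```

As in the rules table, placing a broad catch-all rule last ensures that connections not matched by an earlier, more specific rule still receive a defined action.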