Configuring Satellite Optimization Features
This section contains the following topics:
  • Configuring Transport Optimization
  • Configuring Rate Pacing
  • Configuring Single-Ended Connection Rule Table Settings
  • Configuring Single-Ended Rules
Configuring Transport Optimization
    To properly configure transport settings for the target environment, you must understand its characteristics. This section describes how to configure, monitor, and troubleshoot the transport settings in RiOS v7.0 or later.
    To capture your performance characteristics
1. Connect to the SteelHead CLI using an account with administration rights.
2. Enter enable mode and then configure terminal mode:
    enable
    configure terminal
3. If your environment does not support path MTU discovery, use the ping command to measure the maximum transmission unit (MTU) by pinging a remote host. Start with a full-size packet and decrease the packet size in small increments until the ping successfully reaches the target host.
    Write the MTU here: ____________________________________
    Write the round trip time here: __________________________
If you are deploying your SteelHead through WCCP, you might need to account for the additional GRE overhead of WCCP. The following example ping command measures the maximum packet size along a specified path:
    ping -I inpath0_0 -s <Bytes> <target host>
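If you prefer to script this sweep from a separate management host rather than from the SteelHead CLI, the following minimal Python sketch illustrates the approach. It is an illustration only, not a Riverbed tool; it assumes a Linux ping that supports the -M do (don't-fragment) option, and the target host is a placeholder.
import subprocess

def find_mtu(host, start_payload=1472, step=8):
    # Sweep ICMP payload sizes downward until a ping with the
    # don't-fragment bit set succeeds (Linux ping syntax).
    size = start_payload
    while size > 0:
        result = subprocess.run(
            ["ping", "-M", "do", "-c", "1", "-s", str(size), host],
            capture_output=True)
        if result.returncode == 0:
            # MTU = payload + 20-byte IP header + 8-byte ICMP header
            return size + 28
        size -= step
    return None

print(find_mtu("192.0.2.10"))  # placeholder target host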
4. Use the following command with a full-size packet for a count of 1000 or more packets:
    ping -I inpath0_0 -c 1000 -s <your MTU> <target host>
    Write the percentage of loss during this test here: _______________
5. Configure the SteelHead WAN buffers for the target network. Using the data you have collected, calculate two times the bandwidth-delay product (BDP) for your satellite network using the following formula or table. For satellite networks that vary in capacity, use the maximum potential speed the network can achieve. If your satellite link speeds differ by direction, you might need different send and receive buffer sizes; in that case, the send buffer on the transmitting side should match the receive buffer on the receiving side.
SteelHead WAN Buffers = (RTT in milliseconds * 0.001) * (Circuit Speed in bps / 8) * 2
For example, ((600 ms * 0.001) * (5,000,000 bps / 8) * 2) = 750,000-byte WAN buffers.
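As a cross-check on the formula and the quick-reference table that follows, this short Python sketch performs the same calculation (the function name is illustrative, not part of any Riverbed tooling):
def wan_buffer_bytes(rtt_ms, link_bps):
    # Two times the bandwidth-delay product, per the formula above
    return int((rtt_ms * 0.001) * (link_bps / 8) * 2)

print(wan_buffer_bytes(600, 5000000))  # 750000, matching the example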
    Use the following table as a quick reference to help estimate appropriate SteelHead WAN buffers.
     
                          Link Speed (bps)
RTT (ms)     256,000     768,000   1,544,000   6,000,000  10,000,000  20,000,000  45,000,000
--------   ---------   ---------   ---------   ---------  ----------  ----------  ----------
600           38,400     115,200     231,600     900,000   1,500,000   3,000,000   6,750,000
700           44,800     134,400     270,200   1,050,000   1,750,000   3,500,000   7,875,000
800           51,200     153,600     308,800   1,200,000   2,000,000   4,000,000   9,000,000
900           57,600     172,800     347,400   1,350,000   2,250,000   4,500,000  10,125,000
1000          64,000     192,000     386,000   1,500,000   2,500,000   5,000,000  11,250,000
1100          70,400     211,200     424,600   1,650,000   2,750,000   5,500,000  12,375,000
1200          76,800     230,400     463,200   1,800,000   3,000,000   6,000,000  13,500,000
1300          83,200     249,600     501,800   1,950,000   3,250,000   6,500,000  14,625,000
1400          89,600     268,800     540,400   2,100,000   3,500,000   7,000,000  15,750,000
1500          96,000     288,000     579,000   2,250,000   3,750,000   7,500,000  16,875,000
1600         102,400     307,200     617,600   2,400,000   4,000,000   8,000,000  18,000,000
1700         108,800     326,400     656,200   2,550,000   4,250,000   8,500,000  19,125,000
1800         115,200     345,600     694,800   2,700,000   4,500,000   9,000,000  20,250,000
1900         121,600     364,800     733,400   2,850,000   4,750,000   9,500,000  21,375,000
2000         128,000     384,000     772,000   3,000,000   5,000,000  10,000,000  22,500,000
(Buffer sizes are in bytes.)
    Write the WAN buffer size, in bytes, here: ____________________________
6. Configure the satellite modem or router in the path with at least 1 BDP of buffer. This device is sometimes called the bottleneck buffer. Satellite modems might express this buffer in units of time, such as milliseconds; in that case, the RTT is the easiest measurement to use. Routers commonly express it in packets, so 1 BDP divided by the MTU gives the best approximation for the queue size. For example, using the 750,000-byte result above (which is 2 x BDP), 1 BDP is 375,000 bytes; at a 1,500-byte MTU, that is a queue of 250 packets.
    Write the LAN buffer size, in bytes, here: ____________________________
    For information about bottleneck buffer, see Potential Under Performance Due to Short Bottleneck Buffer.
    To configure transport settings
1. Configure all the SteelHead WAN buffers with the following commands:
    protocol connection wan send def-buf-size <your buffer size>
    protocol connection wan receive def-buf-size <your buffer size>
Or, choose Optimization > Network Services: Transport Settings, set the WAN and LAN receive and send buffer sizes, and click Apply.
2. Configure your remote SteelHeads with the desired transport options, using the commands in the following table.
     
Transport Optimization Option    CLI Command
Enable BW estimation             tcp cong-ctrl mode bw-est
Enable error recovery            tcp err-recovery loss-recovery mode always
Disable error recovery           tcp err-recovery loss-recovery mode disable
Enable SCPS per connection       tcp cong-ctrl mode per-conn-tcp
Enable SCPS error tolerance      tcp cong-ctrl mode err-tol-tcp
Set back to default TCP          tcp cong-ctrl mode default
    Or, choose Optimization > Network Services: Transport Settings, select the appropriate radio button, and click Apply (Figure 15‑2).
    Figure 15‑2. Transport Settings Page
3. If you have a mixed environment, configure your hub SteelHead to use automatic detect TCP optimization so that it can accommodate the various transport optimization mechanisms of your remote-site SteelHeads. You can also hard-code your hub SteelHead to the desired setting.
4. Restart the optimization service, using either the Management Console or the CLI.
    Riverbed recommends that you test a few different transport settings, such as the WAN buffer sizes, at different remote sites and determine which settings work best for your environment.
    For information about automatic detect TCP, see Configuring Automatic Detect TCP Optimization. For information about gathering performance characteristics and configuring transport settings, see the Riverbed Command-Line Interface Reference Manual and the SteelHead Management Console User’s Guide.
    Configuring Rate Pacing
The following steps are required for rate pacing to function:
1. After you choose the transport option, select the Enable Rate Pacing check box in the SteelHead Management Console, or use the CLI command tcp rate-cap enable.
2. Configure MX-TCP under Advanced QoS.
    For more information about rate pacing, see Rate Pacing. For more information about MX-TCP and QoS, see MX-TCP.
    The relationship between the overall link rate and MX-TCP rate dictates how the rate pacing mechanism operates. Rate pacing exits TCP slow start at the MX-TCP rate if the MX-TCP rate is less than the link rate. If you configure rate pacing in this way, it avoids the first loss on exiting slow start and uses MX-TCP as a scheduler for sending data while still adapting to congestion on a shared network in the congestion avoidance phase.
    Alternatively, if the MX-TCP rate is greater than the link rate, then rate pacing exits at the link rate. This exit rate can incur a loss on exiting slow start, or packets are buffered in the bottleneck queue. The sending rate during congestion avoidance is based on a calculation between the rate the transport option (TCP stack) determines and the MX-TCP rate. Over time, the rate pacing mechanism continually probes for the higher MX-TCP rate.
    In summary, the relationship works as follows:
  • Link rate greater than MX-TCP rate—Exit slow start at MX-TCP rate and maintain MX-TCP rate in congestion avoidance.
  • Link rate is greater than 50% of the MX-TCP rate but less than the MX-TCP rate—Exit slow start at MX-TCP rate and use the congestion avoidance rate determined by the underlying TCP stack selected as the transport option.
  • Link rate less than 50% of the MX-TCP rate or MX-TCP not enabled—Use the underlying transport option for exiting slow start and the congestion avoidance algorithm.
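The three cases above can be codified in a small decision sketch (illustrative Python only; this mirrors the summary in this guide and is not a RiOS API):
def rate_pacing_behavior(link_bps, mxtcp_bps=None):
    # Mirrors the three cases listed above; rates are in bps.
    if mxtcp_bps is None or link_bps < 0.5 * mxtcp_bps:
        return "transport option controls slow-start exit and congestion avoidance"
    if link_bps > mxtcp_bps:
        return "exit slow start at MX-TCP rate; maintain MX-TCP rate in congestion avoidance"
    # Link rate is between 50% and 100% of the MX-TCP rate.
    return "exit slow start at MX-TCP rate; TCP stack sets the congestion-avoidance rate"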
Because a hub site can be connected to multiple satellite networks and remote sites can use a variety of TCP stacks, it is appropriate to use automatic detect on the hub site for rate pacing. You can set up MX-TCP on a site-by-site basis to refine the data rate for each remote site. MX-TCP follows the QoS configuration for matching on a site and rule.
    The following is an example configuration for the hub site for two remote sites using rate pacing with different bandwidths:
  • Site 1 has subnet 172.16.1.0/24 and a link rate of 2 Mbps
  • Site 2 has subnet 172.16.2.0/24 and a link rate of 8 Mbps
Use the following CLI commands on the hub-site SteelHead:
    tcp cong-ctrl mode auto
    tcp rate-cap enable
    Configuring Single-Ended Connection Rule Table Settings
Use the single-ended connection rule table to manage which flows are optimized or passed through for SCPS optimization. Configuration of the single-ended connection rule table is similar to that of in-path rules, but you must enable the table before its rules are applied.
    To enable the single-ended connection rule table
  • Connect to the CLI and enter the following command:
    tcp sat-opt scps scps-table enable
    You must have RiOS v8.5 or later to enable the single-ended connection rule table and SCPS compression with third-party WAN optimizers or TCP-PEPs.
    To enable the single-ended connection rule table and SCPS compression with third-party WAN optimizers or TCP-PEPs
  • Connect to the CLI and enter the following commands:
    tcp sat-opt scps scps-table enable
    tcp sat-opt scps legacy-comp enable
You can also complete these procedures from the SteelHead Management Console Optimization > Network Services: Transport Settings page.
    Figure 15‑3. Transport Settings Page with Single-Ended Connection Rule and SCPS Compression
    Enabling the SCPS single-ended connection rule table or SCPS compression requires a service restart.
    To see the current rules in the table, use the show tcp sat-opt scps rules command. Following is an example single-ended connection rule table:
ssh (config) # show tcp sat-opt scps rules
Rule  S P VLAN Source Addr        Dest Addr          Port
----- - - ---- ------------------ ------------------ --------------
1     Y Y all  all                all                Interactive
2     Y Y all  all                all                RBT-Proto
def   Y Y all  all                all                all

(S) SCPS setting:            Y=Allow SCPS
                             N=SCPS Bypass
(P) Allow only SCPS peering: Y=Enabled
                             N=Disabled
Rules are matched from top to bottom; a flow stops at the first rule it matches and applies that rule's SCPS mode: pass-through or enable. To pass through all flows (bypassing SCPS optimization), add a rule at the start of the table that matches all sources, all destinations, all destination ports, and all VLANs.
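The first-match semantics work like the following sketch (illustrative Python; the field names are placeholders and do not reflect the RiOS implementation):
def match_scps_rule(rules, flow):
    # Rules are evaluated top to bottom; the first match wins.
    for rule in rules:
        if (rule["src"] in ("all", flow["src"]) and
                rule["dst"] in ("all", flow["dst"]) and
                rule["port"] in ("all", flow["port"]) and
                rule["vlan"] in ("all", flow["vlan"])):
            return rule["mode"]  # "enable" or "pass-through"
    return "enable"  # the def rule matches all remaining flows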
To create a pass-through rule for all flows
  • Connect to the CLI and enter the following command:
    tcp sat-opt scps rule srcaddr all dstaddr all dstport "all" allow-scps disable scps-peer-only disable rulenum start
    Figure 15‑6 shows an example of a pass-through rule in the Management Console.
    Configuring Single-Ended Rules
The following procedures describe how to configure single-ended rules. Configuration of single-ended rules in RiOS v8.5 or later differs from configuration in RiOS v7.0 and v8.0. Additional options are available in single-ended rules: for example, using a single-ended proxy, enabling SCPS discovery or third-party integration, and using rate pacing.
Figure 15‑4 shows an example of a single-ended rule configured in the SteelHead Management Console with SCPS per connection as the TCP stack and rate pacing enabled. You can use this configuration when you interoperate with a third-party WAN optimizer or TCP-PEP and your network could benefit from using rate pacing with SCPS per connection as the transport option. Rate pacing requires that you configure MX-TCP with advanced QoS, and MX-TCP supports only IPv4 traffic.
    Figure 15‑4. Single-Ended Rule with SCPS Per Connection and Rate Pacing
    For more information about single-ended rules, see SCPS Single-Ended Rules. For information about configuring single-ended before RiOS v8.5, see earlier versions of the SteelHead Deployment Guide and Riverbed Deployment Guide on the Riverbed Support site at https://support.riverbed.com.
    To edit a single-ended connection rule
1. Choose Optimization > Network Services: Transport Settings.
2. Expand the rule that you want to edit.
    Figure 15‑5. Edit Single-Ended Connection Rules
    To add a single-ended connection rule
1. Choose Optimization > Network Services: Transport Settings.
2. Select Add New Rule.
3. Populate the appropriate fields and settings.
4. Click Add.
The changes take effect immediately for all new flows.
    Figure 15‑6 shows how to configure a pass-through rule for all traffic. Clear the SCPS Discover and TCP Proxy check boxes.
    Figure 15‑6. Configure a Pass-Through Rule for All Traffic
    Figure 15‑7, Rule 1, shows an example of a single-ended optimization pass-through rule for all traffic initiated from the client-side SteelHead.
    Figure 15‑7. Single-Ended Optimization Pass-Through Rule
The Management Console passes through only locally initiated sessions arriving through the LAN interface. Inbound SCPS sessions (SYNs with SCPS negotiation headers) arriving at the WAN interface are terminated. To pass through these inbound SCPS sessions instead, use the CLI: in the single-ended connection rule table, use the option scps-peer-only disable.