Traffic Redirection
This chapter describes the redirection protocol and the redirection controls, and it provides general recommendations for redirection in Interceptor deployments:
•  Overview of traffic redirection
•  Intra-cluster latency
Overview of traffic redirection
This section includes the following topics:
•  Hardware-assisted pass-through
•  In-path rules
•  Load-balance rules
Interceptors control which traffic is redirected to SteelHeads, and which SteelHeads optimize it, using the following techniques:
•  In-path rules - Control whether locally initiated connections are redirected.
•  Hardware-assisted pass-through (HAP) rules - Control what traffic is passed through in hardware on supported network bypass cards.
•  Load-balance rules - Control what traffic is redirected and how traffic is distributed to the SteelHead cluster.
Together, these three rule types control what traffic is redirected and potentially optimized by a SteelHead. Figure: Redirection packet process overview shows how the control rules are applied when a packet arrives on the LAN or WAN interfaces of the Interceptor.
Figure: Redirection packet process overview
First, the Interceptor checks whether a packet arriving on a LAN or WAN port matches an HAP rule. If it does, the Interceptor bridges the packet in hardware to the corresponding port. If not, the Interceptor checks whether the packet belongs to a flow that is already being redirected; such a flow is either still going through autodiscovery or has previously completed autodiscovery and started optimization.
If the packet does not correspond to a redirected flow, the in-path and load-balance rules determine the next action. TCP SYN packets arriving from a LAN interface are processed by the in-path rules and can be dropped, passed through, or passed on for further processing by the load-balance rules. Interceptor in-path rules are not checked for packets that carry an autodiscovery probe, such as WAN-side or LAN-side SYN+ packets.
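The following Python sketch summarizes this decision order. It is an illustrative model only, not the Interceptor implementation; the packet fields, rule callables, and return strings are assumptions made for the example.

def next_step(pkt, hap_rules, redirected_flows, in_path_rules):
    # 1. HAP rules are checked first (in hardware on supported NICs).
    if any(rule(pkt) for rule in hap_rules):
        return "bridge in hardware to the corresponding LAN/WAN port"
    # 2. Packets for a flow that is already being redirected go to the
    #    SteelHead previously selected for that flow.
    if pkt["flow_id"] in redirected_flows:
        return "redirect to " + redirected_flows[pkt["flow_id"]]
    # 3. Only LAN-side TCP SYNs without an autodiscovery probe are matched
    #    against the in-path rules; the first matching rule decides.
    if pkt["syn"] and pkt["from_lan"] and not pkt["has_probe"]:
        for matches, action in in_path_rules:
            if matches(pkt):
                if action in ("pass", "deny", "discard"):
                    return action
                break   # a redirect action falls through to the load-balance rules
    # 4. Everything that reaches this point is handled by the load-balance rules.
    return "evaluate load-balance rules"

# Example: a LAN-side SYN that matches no HAP or in-path rule.
syn = {"flow_id": 1, "syn": True, "from_lan": True, "has_probe": False}
print(next_step(syn, hap_rules=[], redirected_flows={}, in_path_rules=[]))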
Typically, all Interceptors in a cluster have the same set of control rules. Identical rule sets are not required, but differences between them can lead to surprising behavior. For example, as shown in Figure: Redirection packet process overview, HAP rules are checked in hardware before a redirected-flow entry is checked in software. This order results in incorrect behavior if one Interceptor in a cluster begins redirecting packets for a connection while another clustered Interceptor has an HAP pass-through rule that matches the same connection.
Situations in which different rules are useful include deployments in which Interceptors and SteelHeads in the same cluster are in locations that have some small, but significant, latency separating them. For details on deploying Interceptor clusters across distances, see Intra-cluster latency.
Hardware-assisted pass-through
Interceptor 2.0.4 or later supports HAP traffic forwarding with certain NICs. HAP is currently supported on 10-Gbps Ethernet cards (part numbers NIC-10G-2LR and NIC-10G-2SR). HAP allows you to statically configure all UDP traffic, and selected TCP traffic (identified by subnet pairs, not necessarily in source-destination order, or by VLAN), to be passed through the Interceptor at close to line-rate speeds.
HAP works by programming a special network chip on the NIC card to recognize traffic as soon as it enters the LAN or WAN port. HAP then bridges the traffic in hardware to the corresponding LAN or WAN port. Because you cannot use HAP for traffic when autodiscovery must decide whether to optimize or not, HAP is only useful for traffic you know you never want redirected or optimized.
You control HAP rules using the in-path hw-assist commands or using the Interceptor Management Console. The current maximum number of HAP rules allowed is 50.
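The following Python sketch models the kind of matching that HAP performs, based on the description above. The field names, the either-direction subnet match, and the UDP flag are assumptions for illustration; they do not reflect the NIC firmware or the in-path hw-assist command syntax.

import ipaddress

def hap_match(pkt, pass_all_udp, hap_rules):
    # All UDP traffic can be statically configured for hardware pass-through.
    if pass_all_udp and pkt["proto"] == "udp":
        return True
    src = ipaddress.ip_address(pkt["src"])
    dst = ipaddress.ip_address(pkt["dst"])
    for rule in hap_rules:                      # at most 50 HAP rules
        if "vlan" in rule and rule["vlan"] == pkt.get("vlan"):
            return True
        if "subnets" in rule:
            a, b = (ipaddress.ip_network(n) for n in rule["subnets"])
            # Assume the subnet pair matches in either direction.
            if (src in a and dst in b) or (src in b and dst in a):
                return True
    return False

rules = [{"subnets": ("10.1.0.0/16", "10.2.0.0/16")}, {"vlan": 100}]
print(hap_match({"proto": "tcp", "src": "10.2.5.9", "dst": "10.1.3.3"}, True, rules))  # True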
In-path rules
The Interceptor in-path rules serve a similar purpose to the SteelHead in-path rules. They define the action to be taken when TCP SYN packets arrive on a LAN interface of an Interceptor.
The in-path rules are an ordered list of matching parameters and an action field. The matching parameters can be:
•  An IP source or destination subnet
•  An IP source or destination host
•  A destination TCP port
•  A VLAN ID
The list is processed in order, and the action from the first rule whose parameters match the packet determines the next step of the Interceptor.
In-path rules have the following actions:
•  Redirect - Continue processing the packet with the load-balance rules.
•  Pass - Bridge the SYN packet to the corresponding WAN port.
•  Deny - Drop the SYN packet and send a TCP reset (RST) to its source.
•  Discard - Silently drop the SYN packet.
The deny and discard actions are generally not used in deployments. Similar to the in-path rule actions of the SteelHead with the same names, these actions might be useful for troubleshooting or when trying to contain a worm or virus outbreak. Most in-path rules use either the redirect or pass actions.
The Interceptor default in-path rule configuration is similar to that of a SteelHead. Three pass rules are configured by default; they match secure, interactive, and Riverbed protocols by destination TCP port. All other traffic matches the built-in default rule, whose type is redirect.
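The sketch below illustrates first-match processing against a default-like in-path rule list. The rule structure and the port sets are placeholders chosen for the example; they are not the actual default port lists.

def in_path_action(syn, rules):
    # First matching rule wins; the built-in default rule redirects.
    for rule in rules:
        if syn["dst_port"] in rule["ports"]:
            return rule["action"]
    return "redirect"

# Rule list modeled on the defaults described above. The port sets are
# placeholders only, not the actual secure, interactive, or Riverbed lists.
default_like_rules = [
    {"action": "pass", "ports": {22, 443}},     # "secure" (placeholder ports)
    {"action": "pass", "ports": {23, 3389}},    # "interactive" (placeholder ports)
    {"action": "pass", "ports": {7800, 7810}},  # "Riverbed protocols" (placeholder)
]
print(in_path_action({"dst_port": 8080}, default_like_rules))  # redirect
print(in_path_action({"dst_port": 443}, default_like_rules))   # pass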
The Interceptor does not have a control mechanism that corresponds to the SteelHead peering rules. The load-balance rules provide the same type of control as the peering rules of the SteelHead.
For details on entering or viewing in-path rules, see the Riverbed Command-Line Interface Reference Manual and SteelHead Interceptor User’s Guide.
Load-balance rules
This section discusses load-balancing rules and includes the following topics:
•  Rule types and matching
•  Default rule and pool
•  Load-balance rule processing
•  SteelHead selection
You can use the load-balance rules in the following ways:
•  As a filtering mechanism to determine whether traffic is optimized or not
•  As a distribution mechanism, specifying which SteelHeads optimize particular traffic
Rule types and matching
As shown in Figure: Redirection packet process overview, the load-balance rules are processed on the TCP SYN packet for a connection. This might be a SYN packet for a connection initiated at the site where the Interceptor is deployed, or it might be a SYN packet arriving from the WAN with an embedded autodiscovery probe.
Each rule has an action type of either pass-through or redirect. Rules whose action is redirect must also specify at least one SteelHead. For SteelHeads that have multiple in-path IP addresses, use only one in-path IP address per SteelHead in the load-balance rule configuration; typically, this is the inpath0_0 interface IP address. As long as the Interceptor can reach any one of the in-path IP addresses of a SteelHead, it handles the load-balance rules the same as if the inpath0_0 IP address were reachable.
A redirect rule can also specify the fair peering flag. This flag affects how the Interceptor selects, from among the SteelHeads listed in the redirect rule, the SteelHead that optimizes a connection.
For details on the fair peering flag, and on how the Interceptor selects among the SteelHeads listed in a redirect rule, see SteelHead selection.
You can specify a SteelHead in more than one redirect-type load-balance rule, but only if none of the rules have the fair peering flag enabled.
The load-balance rules match a packet with any of the following parameters:
•  Source subnet (or host)
•  IP destination subnet (or host)
•  IP destination TCP port
•  VLAN ID
•  Neighbor information
The neighbor information parameter matches arriving SYN packets according to whether they carry an embedded autodiscovery probe. You can specify one of the following values (a combined matching sketch follows this list):
•  Non-probe - Match packets that do not have an embedded probe.
•  Probe-only - Match packets that do have an embedded probe.
•  IP address - Match packets that have an embedded probe, but only if the probing SteelHead matches the specified IP address.
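The following Python sketch combines the rule attributes described above (action, target SteelHeads, fair peering flag) with the matching parameters, including the neighbor values. All field names are illustrative assumptions, not the Interceptor configuration schema.

import ipaddress

def _in(addr, subnet):
    return ipaddress.ip_address(addr) in ipaddress.ip_network(subnet)

def lb_rule_matches(rule, syn):
    if "src" in rule and not _in(syn["src"], rule["src"]):
        return False
    if "dst" in rule and not _in(syn["dst"], rule["dst"]):
        return False
    if "dst_port" in rule and syn["dst_port"] != rule["dst_port"]:
        return False
    if "vlan" in rule and syn.get("vlan") != rule["vlan"]:
        return False
    neighbor = rule.get("neighbor")          # non-probe, probe-only, or an IP address
    if neighbor == "non-probe" and syn["probe_from"] is not None:
        return False
    if neighbor == "probe-only" and syn["probe_from"] is None:
        return False
    if neighbor not in (None, "non-probe", "probe-only") and syn["probe_from"] != neighbor:
        return False
    return True

# A redirect rule also names its target SteelHeads (one in-path IP address each,
# typically inpath0_0) and can carry the fair peering flag.
rule = {"action": "redirect", "dst": "192.0.2.0/24", "neighbor": "probe-only",
        "steelheads": ["10.0.0.11"], "fair_peering": False}
syn = {"src": "198.51.100.7", "dst": "192.0.2.20", "dst_port": 445,
       "probe_from": "203.0.113.5"}
print(lb_rule_matches(rule, syn))   # True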
Default rule and pool
The Interceptor has a built-in default rule called auto. This rule acts as the last rule in the load-balance rule list, and you cannot remove or alter it. This rule has a list of SteelHeads associated with it, but the Interceptor manages the list dynamically.
Any SteelHead that you do not specify in a configured load-balance rule is in the redirect list of the default load-balance rule. If you specify a SteelHead in a configured load-balance rule, the Interceptor treats it as if it is not present in the default load-balance rule. The SteelHeads that are targets of the default load-balance rule are called the default load-balance rule pool, or the default pool.
For example, consider an Interceptor cluster with four SteelHeads: A, B, C, and D. If you do not configure any load-balance rules on the Interceptor, all four SteelHeads are present in the default pool. If you add a redirect rule that specifies SteelHeads A and B, only SteelHeads C and D remain in the default pool. If you then change the rule to specify only SteelHead A, SteelHead B returns to the default pool, which then contains SteelHeads B, C, and D.
If every SteelHead is specified in a configured load-balance rule, the default pool is empty. In this case, only connections that match one of the configured rules are optimized. To change that behavior, you can configure a redirect rule at the end of the load-balance rule list that matches any connection and specifies which SteelHeads to use as catch-all SteelHeads.
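A minimal sketch of the default pool behavior described above, using the same hypothetical SteelHead names A through D:

def default_pool(cluster, lb_rules):
    # SteelHeads not named in any configured load-balance rule form the default pool.
    referenced = {sh for rule in lb_rules for sh in rule.get("steelheads", [])}
    return sorted(set(cluster) - referenced)

cluster = ["A", "B", "C", "D"]
print(default_pool(cluster, []))                            # ['A', 'B', 'C', 'D']
print(default_pool(cluster, [{"steelheads": ["A", "B"]}]))  # ['C', 'D']
print(default_pool(cluster, [{"steelheads": ["A"]}]))       # ['B', 'C', 'D']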
Load-balance rule processing
Load-balance rules are processed differently from the Interceptor in-path rules.
The load-balance rules form a numbered list. The list is processed in order, from the first rule to the last. If the first rule whose parameters match the packet has an action of pass-through, the packet (and all subsequent packets for the connection) is passed through the Interceptor.
If the first matching rule has an action of redirect, the Interceptor compares the list of SteelHeads specified in the rule with its knowledge of the SteelHead cluster to create a potentially smaller list of rule-specific candidate SteelHeads. Candidate SteelHeads are those that are specified by the redirect rule, are live (not paused), and have the TCP connection capacity to optimize an additional connection. From these candidates, the Interceptor selects a SteelHead to perform the optimization for the connection.
If the list of candidate SteelHeads is empty, the Interceptor continues examining the list of load-balance rules. It proceeds to the next rule until it either finds a redirect rule with at least one candidate SteelHead or a pass-through rule matches. If no configured rule matches and the default pool is empty, the connection is passed through. If no configured rule matches and there are SteelHeads in the default pool, the Interceptor selects a SteelHead from the default pool for optimization.
For details on how an Interceptor selects a SteelHead from among the candidates, see SteelHead selection.
By default, SteelHeads and Interceptors exchange heartbeat information every second. If an Interceptor does not receive the three most recent heartbeat responses from a SteelHead, it considers that SteelHead disabled.
You can put a SteelHead into paused mode with the steelhead name <name> paused command. A SteelHead in paused mode is never considered a candidate SteelHead. The Interceptor continues to redirect traffic for existing connections to a paused SteelHead, but because the SteelHead cannot be a candidate, the Interceptor does not redirect new connections to it.
SteelHeads report to the Interceptor the number of TCP connections that they are currently optimizing. The Interceptor uses this information to determine which SteelHead in a list is optimizing the fewest connections and to make sure that a SteelHead is not sent more TCP connections than it can optimize.
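The following sketch ties these behaviors together: candidate filtering (heartbeats, paused mode, connection capacity) and in-order rule processing with fall-through to the default pool. It is a simplified model with hypothetical data structures; a return value of None corresponds to passing the connection through.

def candidates(steelheads, status):
    # Live (heartbeats answered), not paused, and below TCP connection capacity.
    result = []
    for sh in steelheads:
        s = status[sh]
        if s["missed_heartbeats"] >= 3:     # three missed responses: considered disabled
            continue
        if s["paused"]:                     # paused: existing flows only, no new ones
            continue
        if s["connections"] >= s["capacity"]:
            continue
        result.append(sh)
    return result

def select_steelhead(cands, status):
    # Placeholder selection: fewest optimized connections (see "SteelHead selection").
    return min(cands, key=lambda sh: status[sh]["connections"])

def choose_target(lb_rules, pool, status, syn, matches):
    # Walk the rule list in order; fall through when a redirect rule has no candidates.
    for rule in lb_rules:
        if not matches(rule, syn):
            continue
        if rule["action"] == "pass-through":
            return None                     # pass the connection through
        cands = candidates(rule["steelheads"], status)
        if cands:
            return select_steelhead(cands, status)
        # Empty candidate list: keep examining later rules.
    cands = candidates(pool, status)        # default pool as the last resort
    return select_steelhead(cands, status) if cands else None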
SteelHead selection
When a packet matches a redirect rule, the Interceptor creates a list of candidate SteelHeads by selecting those SteelHeads that are specified in the redirect rule, are known to the Interceptor to be live (not paused), and have capacity for an additional optimized TCP connection. The Interceptor then selects a SteelHead based on the fair peering flag setting and on which clustered SteelHead has optimized connections to or from the remote site in the past, known as peer affinity.
To achieve peer affinity, the Interceptor selects a clustered SteelHead that has previously optimized connections with the remote SteelHead. Peer affinity can improve bandwidth savings and performance because it matches SteelHeads that might have segments in common in their RiOS data stores. The Interceptor tracks peer affinity by maintaining an in-memory history of past optimized connections. For connections initiated at the Interceptor site, the Interceptor keys the history on the remote server IP address and TCP port. For connections initiated at remote locations, the Interceptor keys the history on an internal identifier of the remote SteelHead.
For details on the RiOS data store, see the SteelHead Deployment Guide.
All Interceptors in a cluster share peer affinity information. If you add an Interceptor to the cluster, it receives the current peer affinity state from the other Interceptors. Peer affinity information is not kept on stable storage; it exists only in the memory of the Interceptors in a cluster. If all of the Interceptors in a cluster fail or reboot, the tracked peer affinity information is lost.
At sites with a single Interceptor, you can clear peer affinity information by restarting the Interceptor service. At sites with multiple Interceptors, you must stop the service on all clustered Interceptors and then start them all again. Because Interceptors share affinity tables, if the service is restarted on only one Interceptor, that Interceptor receives the affinity table from its peers when the service starts again.
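The sketch below models how the in-memory affinity history might be keyed, following the description above. The dictionary layout and key names are assumptions for illustration only.

# Peer affinity history is kept in memory only and shared among clustered Interceptors.
affinity = {}

def affinity_key(conn):
    if conn["initiated_locally"]:
        # Connection initiated at the Interceptor site: key on the remote
        # server IP address and TCP port.
        return ("server", conn["dst"], conn["dst_port"])
    # Connection initiated at a remote location: key on an identifier of the
    # probing remote SteelHead.
    return ("peer", conn["remote_steelhead_id"])

def record_pairing(conn, local_steelhead):
    affinity[affinity_key(conn)] = local_steelhead

def preferred_steelhead(conn):
    return affinity.get(affinity_key(conn))    # None when there is no history yet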
For connections from a new remote SteelHead, the Interceptor determines which clustered SteelHead to redirect to, based on the following selection mechanisms:
•  Peer affinity only selection - If you do not enable fair peering, the Interceptors pair a new remote SteelHead with the clustered SteelHead that is optimizing the fewest connections. Additional connections from an already known and paired remote SteelHead are always redirected to the same clustered SteelHead, unless that SteelHead is unavailable or at maximum connection capacity.
This behavior is geared toward maximum data reduction while using the least disk space across the clustered SteelHeads. The downside of using peer affinity only is that a disproportionate number of remote SteelHeads might be paired with certain clustered SteelHeads. For example, if you take a cluster of two SteelHeads and one SteelHead is removed for maintenance, all remote SteelHeads are peered to the remaining SteelHead. Even when the removed SteelHead returns to operation, the remote SteelHeads maintain pairing with the original remaining SteelHead.
You can reset peer affinity information by stopping the service on all Interceptors. Alternatively, you can add a SteelHead to the cluster and then configure load-balancing rules that force some traffic to the new SteelHead to create affinity. However, the fair peering enhancements described next address the pairing concentration of peer-affinity-only selection automatically.
•  Fair peering v1 (traditional) - Fair peering v1 was introduced in Interceptor 2.0. When you enable fair peering v1, the Interceptor selects SteelHeads so that, over time, each local clustered SteelHead is peered with an equal number of remote SteelHeads. For example, if there are 100 communicating remote SteelHeads and two clustered SteelHeads, each clustered SteelHead has 50 remote SteelHeads paired with it. If an additional clustered SteelHead is added, the Interceptor moves existing and new pairings to the new SteelHead. As a result, each clustered SteelHead has approximately 33 pairings.
Fair peering v1 does not take into account the size of the remote SteelHead. You can have much larger remote SteelHeads paired with one local clustered SteelHead, using resources on the clustered SteelHeads unevenly.
You enable fair peering v1 under the load-balancing rules. You configure fair peering v1 for each load-balancing rule with the Enable Fair Peering for this Rule check box in the Interceptor Management Console. For the default rule, use the Enable Use of Fair Peering for Default Rule option. You cannot use fair peering v1 on two different rules that have the same local SteelHead targets.
•  Fair peering v2 - Interceptor 3.0 and later introduces improvements to the original fair peering v1. Fair peering v2 provides a more intelligent pairing distribution by taking into account the remote and local SteelHead sizes. The Interceptor takes the combined size of the remote SteelHeads paired to a local SteelHead and compares it to the local SteelHead size to calculate a utilization ratio (see the sketch after this list). The Interceptor then uses the local SteelHead ratios to determine where new remote SteelHead pairings are distributed. If the utilization ratio of a local SteelHead exceeds that of the other local SteelHeads by a certain percentage, the Interceptor migrates existing pairings to a SteelHead with a lower utilization ratio. Keep in mind that the utilization ratio compares the total capacity of the paired remote SteelHeads to the total capacity of the local SteelHead, where total capacity is the total number of connections a SteelHead can optimize.
You enable fair peering v2 on Interceptor 3.0 and later by choosing Optimization > Load Balancing Rules and selecting Enable Fair Peering v2. Unlike fair peering v1, this setting applies across all rules. Fair peering v2 overrides any fair peering v1 configuration. If you enable fair peering v2, the local SteelHeads must run RiOS 6.1.3 or later. You can enable fair peering v2 only if the multi-interface (cluster) protocol is also enabled.
We recommend that you use fair peering v2 because it uses more variables for determining selection.
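The following sketch contrasts the two selection approaches for a new remote SteelHead, including an approximate fair peering v2 utilization ratio calculated as described above. The capacities, names, and tie-breaking details are assumptions for the example, not the Interceptor algorithm.

def utilization_ratio(local_sh, pairings, capacity):
    # Total capacity of the remote SteelHeads paired with a local SteelHead,
    # divided by that local SteelHead's capacity (capacity = connections the
    # appliance can optimize).
    remote_total = sum(capacity[r] for r in pairings.get(local_sh, []))
    return remote_total / capacity[local_sh]

def pick_for_new_remote(local_shs, pairings, capacity, fair_peering_v2):
    if fair_peering_v2:
        # Prefer the local SteelHead with the lowest utilization ratio.
        return min(local_shs, key=lambda sh: utilization_ratio(sh, pairings, capacity))
    # Peer-affinity-only behavior, simplified here to fewest existing pairings.
    return min(local_shs, key=lambda sh: len(pairings.get(sh, [])))

capacity = {"local-A": 6000, "local-B": 2000, "branch-1": 200, "branch-2": 2300}
pairings = {"local-A": ["branch-2"], "local-B": ["branch-1"]}
# local-A: 2300/6000 ~ 0.38, local-B: 200/2000 = 0.10, so fair peering v2
# pairs the new remote SteelHead with local-B.
print(pick_for_new_remote(["local-A", "local-B"], pairings, capacity, True))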
Interceptor 3.0 and later includes local SteelHead pressure monitoring, which enables the Interceptor to detect over-used SteelHeads and direct load-balancing decisions away from them. The monitored parameters are as follows:
•  Available memory
•  CPU use
•  Disk load
We recommend that you enable pressure monitoring only in conjunction with fair peering v2.
Pressure monitoring reports a normal, high, or severe state, and the local SteelHead is responsible for sending its pressure state changes to the Interceptors. The Interceptor never directs new connections to a SteelHead in a severe state; if no other SteelHeads are available, or all are in a severe state, the new connection is passed through. The Interceptor does not pair a new remote SteelHead with a SteelHead in a high state unless no SteelHead is in a normal state. New connections from already paired remote SteelHeads continue to be redirected to a SteelHead in a high state.
If you select Enable Capacity Reduction, you artificially and temporarily reduce the size of the SteelHead in a high state for Interceptor calculations. By reducing the size for load-balancing calculations, the Interceptor moves existing paired peers from the SteelHead in the high state to less-used SteelHeads, which reduces the number of new connections sent to the SteelHead in the high state.
The temporary reduction of the local SteelHead size continues until the SteelHead goes back to normal state, unless you select Enable Permanent Capacity Reduction. The Enable Permanent Capacity Reduction option artificially reduces the size of the SteelHead in a high state for load-balancing decisions until a service restart on the Interceptors.
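The sketch below models how pressure states might constrain pairing decisions and how capacity reduction might be approximated. The reduction factor shown is an assumption for the example; the Interceptor's actual calculation is not documented here.

def eligible_for_new_remote(steelheads, pressure):
    # severe: never receives new connections.
    # high:   receives new remote pairings only when no SteelHead is normal.
    live = [sh for sh in steelheads if pressure[sh] != "severe"]
    normal = [sh for sh in live if pressure[sh] == "normal"]
    return normal if normal else live          # an empty list means pass through

def effective_capacity(sh, capacity, pressure, capacity_reduction):
    # With Enable Capacity Reduction, a SteelHead in a high state is treated as
    # smaller for load-balancing calculations; the factor here is an assumption.
    if capacity_reduction and pressure[sh] == "high":
        return capacity[sh] // 2
    return capacity[sh]

pressure = {"A": "normal", "B": "high", "C": "severe"}
print(eligible_for_new_remote(["A", "B", "C"], pressure))   # ['A']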
For more details on capacity management and how to enable pressure monitoring, see the SteelHead Interceptor User’s Guide and https://supportkb.riverbed.com/support/index?page=content&id=S14235.
Note: We recommend that you select Enable Capacity Reduction with fair peering v2 to help distribute pairings to less-used local SteelHeads. If a local SteelHead is periodically in a high state, we recommend that you select Enable Permanent Capacity Reduction. You can view the pressure status of SteelHeads from the Interceptor Management Console Home page.
Intra-cluster latency
In general, we recommend that the maximum intra-cluster round-trip latency (the round-trip latency between any two members of a cluster, whether between two Interceptors or between an Interceptor and a SteelHead) be less than 1 millisecond.
You can deploy Interceptors so that some moderate latency exists between members of the Interceptor cluster. Such intra-cluster latency affects optimized traffic because each connection to be optimized requires communication among all Interceptors in a cluster to achieve efficient redirection.
The longest round-trip latency between any two Interceptors, or between an Interceptor and a SteelHead, should be less than one-fifth of the round-trip latency to the closest optimized remote site. This limit ensures that intra-cluster communication does not make connection setup time greater for optimized connections than for nonoptimized connections to the closest remote site. Deployments with intra-cluster round-trip latencies higher than 10 milliseconds should be implemented only after technical consultation with Riverbed.
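As a worked example of this guideline (illustrative arithmetic only):

def max_intra_cluster_rtt_ms(closest_remote_rtt_ms):
    # Recommended bound: one-fifth of the RTT to the closest optimized remote site.
    return closest_remote_rtt_ms / 5.0

# Example: with the closest optimized remote site 20 ms away (round trip), keep
# intra-cluster RTT below 4 ms. Independently of this bound, anything above
# 10 ms calls for technical consultation with Riverbed.
print(max_intra_cluster_rtt_ms(20))   # 4.0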
Figure: Data center with WAN landing points at separate locations shows a data center with WAN landing points at separate locations. If any TCP flow can be routed asymmetrically across both WAN links, then you can configure the Interceptors at the locations as redirect peers. Any latency between the sites might have an impact on the performance of optimized connections, and it definitely impacts the time it takes for an optimized connection to be established.
Figure: Data center with WAN landing points at separate locations