Configuring in-path rules
You review, add, edit, and remove in-path rules in the Optimization > Network Services: In-Path Rules page. The In-Path Rules table lists the order and properties of the rules set for the running configuration.
For an overview of in-path rules, see In-path rules overview.
For details on IPv6 deployment options, see the SteelHead Deployment Guide.
To configure in-path rules
1. Choose Optimization > Network Services: In-Path Rules to display the In-Path Rules page.
2. Configure the rules as described in this table.
 
Control
Description
Add a New In-Path Rule
Displays the controls for adding a new rule.
Type
Select one of these rule types from the drop-down list:
•  Auto Discover - Uses the autodiscovery process to determine whether a remote SteelHead can optimize the connection that this SYN packet is attempting to create. By default, Auto Discover is applied to all IP addresses and ports that are not secure, interactive, or default Riverbed ports. Defining in-path rules modifies this default setting.
•  Fixed-Target - Skips the autodiscovery process and uses a specified remote SteelHead as an optimization peer.
You must specify at least one remote target appliance (and, optionally, which ports and backup appliances), and add rules to specify the network of servers, ports, port labels, and out-of-path appliances to use.
A fixed-target rule enables you to optimize traffic end-to-end using IPv6 addresses. Either change the All-IP setting to All-IPv6 or use specific source and destination IPv6 addresses. The inner channel between the appliances forms a TCP connection using the manually assigned IPv6 address. This method is similar to an IPv4 fixed-target rule, and you configure it the same way.
•  Fixed-Target (Packet Mode Optimization) - Skips the autodiscovery process and uses a specified remote appliance as an optimization peer to perform bandwidth optimization on TCPv4, TCPv6, UDPv4, or UDPv6 connections.
Packet-mode optimization rules support both physical in-path and master/backup appliance configurations.
You must specify which TCP or UDP traffic flows need optimization, at least one remote target appliance, and, optionally, which ports and backup appliances to use.
In addition to adding fixed-target packet-mode optimization rules, you must go to Optimization > Network Services: General Service Settings, enable packet-mode optimization, and restart the optimization service.
Packet-mode optimization rules are unidirectional; a rule on the client-side appliance optimizes traffic to the server only. To optimize bidirectional traffic, define two rules:
•  A fixed-target packet-mode optimization rule on the client-side appliance to the server.
•  A fixed-target packet-mode optimization rule on the server-side SteelHead to the client.
Packet-mode optimization rules perform packet-by-packet optimization, as opposed to traffic-flow optimization. After you create the in-path rule to intercept the connection, the traffic flows enter the appliance. The appliance does not terminate the connection; instead, it rearranges the packet headers and payload for SDR, applies SDR optimization, and sends the packets through a TCPv4 or TCPv6 channel to the peer appliance. The peer appliance decodes the packets and routes them to the destination server. Optimized packets are sent through a dedicated channel to the peer, depending on which in-path rule the packet's flow matched.
To view packet-mode optimized traffic, choose Reports > Networking: Current Connections or Connection History. You can also enter the show flows command at the system prompt.
 
Requirements:
•  You must enable IPv6 on base interfaces (IPv6 is enabled by default in RiOS 9.0).
Packet-mode optimization rules do not support:
•  Automatic reflection of DSCP markings.
•  Latency optimization and preoptimization policies. Selecting this rule type automatically sets the preoptimization policy and latency optimization policies to none.
•  Autodiscovery of the peer appliance. Because this is a fixed-target rule, the appliance determines the IP address of its peer from the rule configuration.
•  Connection forwarding, simplified routing, or asymmetric routing.
•  QoS, MIP interfaces, VSP, NetFlow, transparency, or the automatic kickoff feature.
•  Automatically assigned IPv6 addresses.
•  Pass-Through - Allows the SYN packet to pass through the appliance unoptimized. No optimization is performed on the TCP connection initiated by this SYN packet. You define pass-through rules to exclude subnets from optimization. Traffic is also passed through when the appliance is in bypass mode. (Pass through of traffic might occur because of in-path rules or because the connection was established before the appliance was put in place or before the optimization service was enabled.)
•  Discard - Drops the SYN packets silently. The appliance filters out traffic that matches the discard rules. This process is similar to how routers and firewalls drop disallowed packets: the connection-initiating device has no knowledge that its packets were dropped until the connection times out.
•  Deny - Drops the SYN packets, sends a message back to its source, and resets the TCP connection being attempted. Using an active reset process rather than a silent discard allows the connection initiator to know that its connection is disallowed.
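As a rough aid, the rule types above can be summarized by the action each takes on a matching SYN packet. The following Python sketch is purely conceptual; the enum and mapping are illustrative assumptions, not RiOS code or CLI:

from enum import Enum

class RuleType(Enum):
    AUTO_DISCOVER = "auto-discover"
    FIXED_TARGET = "fixed-target"
    FIXED_TARGET_PACKET_MODE = "fixed-target-packet-mode"
    PASS_THROUGH = "pass-through"
    DISCARD = "discard"
    DENY = "deny"

# What happens to a SYN packet that matches a rule of each type.
DISPOSITION = {
    RuleType.AUTO_DISCOVER: "probe for a remote peer, then optimize the connection",
    RuleType.FIXED_TARGET: "skip autodiscovery and optimize via the configured peer",
    RuleType.FIXED_TARGET_PACKET_MODE: "skip autodiscovery; per-packet optimization to the configured peer",
    RuleType.PASS_THROUGH: "forward the SYN unoptimized",
    RuleType.DISCARD: "drop the SYN silently (the initiator eventually times out)",
    RuleType.DENY: "drop the SYN and send a TCP reset to the initiator",
}

print(DISPOSITION[RuleType.DENY])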
Source Subnet
Specify the subnet IP address and netmask for the source network:
•  All IPv4 - Maps to 0.0.0.0/0.
•  All IPv6 - Maps to ::/0.
•  All IP - Maps to all IPv4 and IPv6 networks.
Note: In a virtual in-path configuration using packet-mode optimization, do not use the wildcard All IP option for both the source and destination IP addresses on the server-side and client-side appliances. Doing so can create a loop between the appliances if the server-side SteelHead forms an inner connection with the client-side appliance before the client-side appliance forms an inner connection with the server-side SteelHead. Instead, configure the rule using the local subnet on the LAN side of the appliance.
Use this format for an individual subnet IP address and netmask:
xxx.xxx.xxx.xxx/xx (IPv4)
x:x:x::x/xxx (IPv6)
When creating a fixed-target packet-mode rule, you must configure an IPv6 address and route for each interface, unless you are optimizing UDP traffic.
Port or Port Label - Specify the source port number, port label, or All. Click Port Label to display the Networking > App Definitions: Port Labels page for reference.
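For illustration only, the sketch below uses Python's standard ipaddress module (not part of RiOS or its CLI) to show how the wildcard subnet values map to prefixes and how an address is tested against a subnet written in the format shown above:

import ipaddress

# The wildcard values in the table map to these prefixes.
ALL_IPV4 = ipaddress.ip_network("0.0.0.0/0")
ALL_IPV6 = ipaddress.ip_network("::/0")

def matches_subnet(address: str, subnet: str) -> bool:
    """Return True if the address falls inside the subnet (IPv4 or IPv6)."""
    addr = ipaddress.ip_address(address)
    net = ipaddress.ip_network(subnet, strict=False)
    return addr.version == net.version and addr in net

print(matches_subnet("10.11.10.25", "10.11.10.0/24"))  # True
print(matches_subnet("10.12.10.25", "10.11.10.0/24"))  # False
print(matches_subnet("10.12.10.25", str(ALL_IPV4)))    # True: All IPv4 matches any IPv4 address
print(matches_subnet("2001:db8::1", str(ALL_IPV6)))    # True: All IPv6 matches any IPv6 address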
Destination Subnet
Specify the subnet IP address and netmask for the destination network:
•  All IPv4 - Maps to 0.0.0.0/0.
•  All IPv6 - Maps to ::/0.
•  All IP - Maps to all IPv4 and IPv6 networks.
Note: In a virtual in-path configuration using packet-mode optimization, do not use the wildcard All IP option for both the source and destination IP addresses on the server-side and client-side appliances. Doing so can create a loop between the appliances if the server-side SteelHead forms an inner connection with the client-side SteelHead before the client-side appliance forms an inner connection with the server-side SteelHead. Instead, configure the rule using the local subnet on the LAN side of the SteelHead.
Use this format for an individual subnet IP address and netmask:
xxx.xxx.xxx.xxx/xx (IPv4)
x:x:x::x/xxx (IPv6)
When creating a fixed-target packet-mode optimization rule, you must configure an IPv6 address and route for each interface.
Port or Port Label - Specify the destination port number, port label, or All. Click Port Label to display the Networking > App Definitions: Port Labels page for reference.
Target Appliance IP Address
Specify the target appliance address for a fixed-target rule. When the protocol is TCP and you do not specify an IP address, the rule defaults to all IPv6 addresses.
Port - Specify the target port number for a fixed-target rule.
Backup Appliance IP Address
Specify the backup appliance address for a fixed-target rule.
Port - Specify the backup destination port number for a fixed-target rule.
VLAN Tag ID
Specify a VLAN identification number from 0 to 4094, enter all to apply the rule to all VLANs, or enter untagged to apply the rule to nontagged connections.
RiOS supports VLAN 802.1Q. To configure VLAN tagging, configure in-path rules to apply to all VLANs or to a specific VLAN. By default, rules apply to all VLAN values unless you specify a particular VLAN ID. Pass-through traffic maintains any preexisting VLAN tagging between the LAN and WAN interfaces.
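As a minimal sketch of the matching behavior described above (illustrative Python; the function and names are assumptions, not RiOS code):

from typing import Optional, Union

def vlan_matches(rule_vlan: Union[int, str], conn_vlan: Optional[int]) -> bool:
    """Compare a rule's VLAN setting (an ID from 0 to 4094, "all", or "untagged")
    with a connection's 802.1Q tag (None means the traffic is untagged)."""
    if rule_vlan == "all":
        return True                   # the default: rule applies to every VLAN
    if rule_vlan == "untagged":
        return conn_vlan is None      # rule applies only to untagged connections
    return conn_vlan == rule_vlan     # rule applies to one specific VLAN ID

print(vlan_matches("all", 100))        # True
print(vlan_matches("untagged", None))  # True
print(vlan_matches(200, 100))          # False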
Protocol
(Appears only for fixed-target packet-mode optimization rules.) Select a traffic protocol from the drop-down list:
•  TCP - Specifies the TCP protocol. Supports TCP-over-IPv6 only.
•  UDP - Specifies the UDP protocol. Supports UDP-over-IPv4 only.
•  Any - Specifies all TCP-based and UDP-based protocols. This is the default setting.
Preoptimization Policy
Select a traffic type from the drop-down list:
•  None - If the Oracle Forms, SSL, or Oracle Forms-over-SSL preoptimization policy is enabled and you want to disable it for a port, select None. This is the default setting.
Port 443 always uses a preoptimization policy of SSL even if an in-path rule on the client-side appliance sets the preoptimization policy to None. To disable the SSL preoptimization for traffic to port 443, you can either:
1. Disable the SSL optimization on the client-side or server-side SteelHead.
—or—
2. Modify the peering rule on the server-side SteelHead by setting the SSL Capability control to No Check.
•  Oracle Forms - Enables preoptimization processing for Oracle Forms. This policy is not compatible with IPv6.
•  Oracle Forms over SSL - Enables preoptimization processing for both the Oracle Forms and SSL encrypted traffic through SSL secure ports on the client-side appliance. You must also set the Latency Optimization Policy to HTTP. This policy is not compatible with IPv6.
If the server is running over a standard secure port—for example, port 443—the Oracle Forms over SSL in-path rule needs to be before the default secure port pass-through rule in the in-path rule list.
•  SSL - Enables preoptimization processing for SSL encrypted traffic through SSL secure ports on the client-side appliance.
Latency Optimization Policy
Select one of these policies from the drop-down list:
•  Normal - Performs all latency optimizations (HTTP is activated for ports 80 and 8080). This is the default setting.
•  HTTP - Activates HTTP optimization on connections matching this rule.
•  Outlook Anywhere - Activates RPC over HTTP(S) optimization for Outlook Anywhere on connections matching this rule. This policy is not compatible with IPv6. To automatically detect Outlook Anywhere or HTTP on a connection, select the Normal latency optimization policy and enable the Auto-Detect Outlook Anywhere Connections option in the Optimization > Protocols: MAPI page. The Auto-Detect option works best for simple configurations with only a single appliance at each site and when the IIS server also hosts Web sites. If the IIS server is used only as an RPC proxy, or in configurations with asymmetric routing, connection forwarding, or Interceptor installations, add in-path rules that identify the RPC proxy server IP addresses and select this latency optimization policy. After adding the in-path rule, disable the Auto-Detect option in the Optimization > Protocols: MAPI page.
•  Citrix - Activates Citrix-over-SSL optimization on connections matching this rule. This policy is not compatible with IPv6. Add an in-path rule to the client-side appliance that specifies the Citrix Access Gateway IP address, select this latency optimization policy on both the client-side and server-side SteelHeads, and set the preoptimization policy to SSL; the SSL preoptimization policy is required.
SSL must be enabled on the Citrix Access Gateway. On the server-side SteelHead, enable SSL and install the SSL server certificate for the Citrix Access Gateway.
The client-side and server-side appliances establish an SSL channel between themselves to secure the optimized ICA traffic. End users log in to the Access Gateway through a browser (HTTPS) and access applications through the Web Interface site. Clicking an application icon starts the Online Plug-in, which establishes an SSL connection to the Access Gateway. The ICA connection is tunneled through the SSL connection.
The appliance decrypts the SSL connection from the user device, applies ICA latency optimization, and reencrypts the traffic over the Internet. The server-side SteelHead decrypts the optimized ICA traffic and reencrypts the ICA traffic into the original SSL connection destined to the Access Gateway.
•  Exchange Autodetect - Automatically detects MAPI transport protocols (Autodiscover, Outlook Anywhere, and MAPI over HTTP) and HTTP traffic. For MAPI transport protocol optimization, enable SSL and install the SSL server certificate for the Exchange Server on the server-side appliance. To activate MAPI over HTTP bandwidth optimization, you must also choose Optimization > MAPI and select Enable MAPI over HTTP optimization on the client-side appliance. Both the client-side and server-side appliances must be running RiOS 9.1 for MAPI over HTTP bandwidth optimization.
•  None - Do not activate latency optimization on connections matching this rule. Do not select None for Oracle Forms-over-SSL encrypted traffic; that traffic requires the HTTP latency optimization policy.
Note: Setting the Latency Optimization Policy to None excludes all latency optimizations, such as HTTP, MAPI, and SMB.
Data Reduction Policy
Optionally, if the rule type is Auto Discover or Fixed-Target, you can configure these types of data reduction policies:
•  Normal - Perform LZ compression and SDR.
•  SDR-Only - Perform SDR; do not perform LZ compression.
•  SDR-M - Performs data reduction entirely in memory, which prevents the SteelHead from reading and writing to and from the disk. Enabling this option can yield high LAN-side throughput because it eliminates all disk latency. This data reduction policy is useful for
–  a very small amount of data: for example, interactive traffic.
–  point-to-point replication during off-peak hours when both the server-side and client-side SteelHeads are the same (or similar) size.
•  Compression-Only - Perform LZ compression; do not perform SDR.
•  None - Do not perform SDR or LZ compression.
To configure data reduction policies for the FTP data channel, define an in-path rule with the destination port 20 and set its data reduction policy. Setting QoS for port 20 on the client-side appliance affects passive FTP, while setting the QoS for port 20 on the server-side SteelHead affects active FTP.
To configure optimization policies for the MAPI data channel, define an in-path rule with the destination port 7830 and set its data reduction policy.
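Conceptually, the policies differ only in which reduction stages are applied. The sketch below is a loose analogy rather than the RiOS implementation: SDR is Riverbed's proprietary deduplication and is represented by a no-op placeholder, and zlib stands in for LZ compression:

import zlib

def sdr_deduplicate(data: bytes) -> bytes:
    """No-op placeholder for SDR; a real appliance replaces repeated data
    segments with references to its data store (memory only for SDR-M)."""
    return data

def reduce_data(payload: bytes, policy: str) -> bytes:
    """Apply the stages implied by each data reduction policy (illustrative only)."""
    data = payload
    if policy in ("Normal", "SDR-Only", "SDR-M"):
        data = sdr_deduplicate(data)      # SDR stage
    if policy in ("Normal", "Compression-Only"):
        data = zlib.compress(data)        # zlib stands in for LZ compression
    return data                           # "None" returns the payload unchanged

print(len(reduce_data(b"example payload " * 100, "Compression-Only")))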
Cloud Acceleration
After you subscribe to a SaaS platform and enable it, ensure that cloud acceleration is ready and enabled. When cloud acceleration is enabled, connections to the subscribed SaaS platform are optimized by the SteelHead SaaS. You do not need to add an in-path rule unless you want to optimize specific users and exclude others. Select one of these choices from the drop-down list:
•  Auto - If the in-path rule matches, the connection is optimized by the SteelHead SaaS connection.
•  Pass Through - If the in-path rule matches, the connection is not optimized by the SteelHead SaaS, but it follows the other rule parameters so that the connection might be optimized by this SteelHead with other SteelHeads in the network, or it might be passed through.
Auto Kickoff
Enables kickoff, which resets preexisting connections to force them to go through the connection creation process again. If you enable kickoff, connections that preexist when the optimization service is started are reestablished and optimized.
Generally, connections are short-lived and kickoff is not necessary. It is suitable for certain long-lived connections, such as data replication, and very challenging remote environments. For example, in a remote branch-office with a T1 and a 35 ms round-trip time, you would want connections to migrate to optimization gracefully, rather than risk interruption with kickoff.
RiOS provides three ways to enable kickoff:
•  Globally for all existing connections in the Optimization > Network Services: General Service Settings page.
•  For a single pass-through or optimized connection in the Current Connections report, one connection at a time.
•  For all existing connections that match an in-path rule and the rule has kickoff enabled.
In most deployments, you do not want to set automatic kickoff globally because it disrupts all existing connections. When you enable kickoff using an in-path rule, the appliance sends an RST packet to the client and server maintaining the connection as soon as it detects packet flow matching the IP addresses and ports specified in the rule, to try to close the connection. It then sets an internal flag to prevent any further kickoffs until the optimization service is once again restarted.
Note: If no data is being transferred between the client and server, the connection is not reset immediately. It resets the next time the client or server tries to send a message. Therefore, when the application is idle, it might take a while for the connection to reset.
By default, automatic kickoff per in-path rule is disabled.
The service applies the first matching in-path rule for an existing connection that matches the source and destination IP and port; it does not consider a VLAN tag ID when determining whether to kick off the connection. Consequently, the service automatically kicks off connections with matching source and destination addresses and ports on different VLANs.
Because the appliance did not see the initial TCP handshake, it cannot determine which end of a preexisting connection is the source and which is the destination, whereas an in-path rule specifies the source and destination IP addresses to which it applies. The appliance therefore matches the connection's IP address pair against the in-path rules twice: once as source-to-destination and once as destination-to-source.
As an example, the following in-path rule will kick off connections from 10.11.10.10/24 to 10.12.10.10/24 and 10.12.10.10/24 to 10.11.10.10/24:
Src 10.11.10.10/24 Dst 10.12.10.10/24 Auto Kickoff enabled
The first matching in-path rule will be considered during the kickoff check for a preexisting connection. If the first matching in-path rule has kickoff enabled, then that preexisting connection will be reset.
Note: Specifying automatic kickoff per in-path rule enables kickoff even when you disable the global kickoff feature. When global kickoff is enabled, it overrides this setting. You set the global kickoff feature using the Reset Existing Client Connections on Start Up feature, which appears in the Optimization > Network Services: General Service Settings page.
Note: This feature pertains only to autodiscover and fixed-target rule types and is dimmed for the other rule types.
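The bidirectional match described above can be sketched as follows (illustrative Python, not RiOS code): the connection's address pair is tested against the rule both as source-to-destination and as destination-to-source, and the VLAN tag is ignored:

import ipaddress

def in_subnet(addr: str, subnet: str) -> bool:
    return ipaddress.ip_address(addr) in ipaddress.ip_network(subnet, strict=False)

def kickoff_applies(rule_src: str, rule_dst: str, end_a: str, end_b: str) -> bool:
    """The endpoints of a preexisting connection are matched in both directions
    because the appliance never saw the original TCP handshake."""
    forward = in_subnet(end_a, rule_src) and in_subnet(end_b, rule_dst)
    reverse = in_subnet(end_b, rule_src) and in_subnet(end_a, rule_dst)
    return forward or reverse

# The rule from the example above: Src 10.11.10.10/24, Dst 10.12.10.10/24
print(kickoff_applies("10.11.10.10/24", "10.12.10.10/24", "10.11.10.50", "10.12.10.50"))  # True
print(kickoff_applies("10.11.10.10/24", "10.12.10.10/24", "10.12.10.50", "10.11.10.50"))  # True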
Neural Framing Mode
Optionally, if the rule type is Auto Discover or Fixed-Target, you can select a neural framing mode for the in-path rule. Neural framing enables the system to select the optimal packet framing boundaries for SDR. Neural framing creates a set of heuristics to intelligently determine the optimal moment to flush TCP buffers. The system continuously evaluates these heuristics and uses the optimal heuristic to maximize the amount of buffered data transmitted in each flush, while minimizing the amount of idle time that the data sits in the buffer.
Select a neural framing setting:
•  Never - Do not use the Nagle algorithm. The Nagle algorithm is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. It works by combining a number of small outgoing messages and sending them all at once. All the data is immediately encoded without waiting for timers to fire or application buffers to fill past a specified threshold. Neural heuristics are computed in this mode but are not used. In general, this setting works well with time-sensitive and chatty or real-time traffic.
•  Always - Use the Nagle algorithm. This is the default setting. All data is passed to the codec, which attempts to coalesce consume calls (if needed) to achieve better fingerprinting. A timer (6 ms) backs up the codec and causes leftover data to be consumed. Neural heuristics are computed in this mode but are not used. This mode is not compatible with IPv6.
•  TCP Hints - If data is received from a partial frame packet or a packet with the TCP PUSH flag set, the encoder encodes the data instead of immediately coalescing it. Neural heuristics are computed in this mode but are not used. This mode is not compatible with IPv6.
•  Dynamic - Dynamically adjust the Nagle parameters. In this option, the system discerns the optimum algorithm for a particular type of traffic and switches to the best algorithm based on traffic characteristic changes. This mode is not compatible with IPv6.
For different types of traffic, one algorithm might be better than others. The considerations include: latency added to the connection, compression, and SDR performance.
To configure neural framing for an FTP data channel, define an in-path rule with the destination port 20 and set its neural framing mode. To configure neural framing for a MAPI data channel, define an in-path rule with the destination port 7830 and set its neural framing mode.
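The difference between the Never and Always settings is essentially whether small writes are flushed immediately or coalesced. The following sketch is a simplified, hypothetical model of that trade-off (not the RiOS implementation; the 6 ms backup timer is modeled only loosely):

import time
from typing import Optional

class FramingBuffer:
    """Simplified model: "never" flushes each write immediately; "always"
    coalesces writes and relies on a backup timer to flush leftovers."""

    def __init__(self, mode: str, flush_interval: float = 0.006):
        self.mode = mode
        self.flush_interval = flush_interval
        self.buffer = bytearray()
        self.last_flush = time.monotonic()

    def write(self, data: bytes) -> Optional[bytes]:
        """Return a frame to hand to the encoder now, or None if data is held."""
        if self.mode == "never":
            return bytes(data)            # encode immediately; lowest added latency
        self.buffer.extend(data)          # coalesce for better SDR fingerprinting
        if time.monotonic() - self.last_flush >= self.flush_interval:
            frame = bytes(self.buffer)
            self.buffer.clear()
            self.last_flush = time.monotonic()
            return frame
        return None

buf = FramingBuffer("never")
print(buf.write(b"small write"))          # framed immediately, no coalescing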
WAN Visibility Mode
Enables WAN visibility, which pertains to how packets traversing the WAN are addressed. RiOS provides three types of WAN visibility: correct addressing, port transparency, and full address transparency.
You configure WAN visibility on the client-side SteelHead (where the connection is initiated).
Port transparency and full transparency modes are not compatible with IPv6.
Select one of these modes from the drop-down list:
•  Correct Addressing - Disables WAN visibility. Correct addressing uses appliance IP addresses and port numbers in the TCP/IP packet header fields for optimized traffic in both directions across the WAN. This is the default setting.
•  Port Transparency - Port address transparency preserves your server port numbers in the TCP/IP header fields for optimized traffic in both directions across the WAN. Traffic is optimized while the server port number in the TCP/IP header field appears to be unchanged. Routers and network monitoring devices deployed in the WAN segment between the communicating appliances can view these preserved fields.
Use port transparency if you want to manage and enforce QoS policies that are based on destination ports. If your WAN router is following traffic classification rules written in terms of client and network addresses, port transparency enables your routers to use existing rules to classify the traffic without any changes.
Port transparency enables network analyzers deployed within the WAN (between the appliances) to monitor network activity and to capture statistics for reporting by inspecting traffic according to its original TCP port number.
Port transparency does not require dedicated port configurations on your appliances.
Note: Port transparency only provides server port visibility. It does not provide client and server IP address visibility, nor does it provide client port visibility.
•  Full Transparency - Full address transparency preserves your client and server IP addresses and port numbers in the TCP/IP header fields for optimized traffic in both directions across the WAN. It also preserves VLAN tags. Traffic is optimized while these TCP/IP header fields appear to be unchanged. Routers and network monitoring devices deployed in the WAN segment between the communicating SteelHeads can view these preserved fields.
If both port transparency and full address transparency are acceptable solutions, port transparency is preferable. Port transparency avoids potential networking risks that are inherent to enabling full address transparency. For details, see the SteelHead Deployment Guide.
However, if you must see your client or server IP addresses across the WAN, full transparency is your only configuration option.
Note: Enabling full address transparency requires symmetrical traffic flows between the client and server. If any asymmetry exists on the network, enabling full address transparency might yield unexpected results, up to and including loss of connectivity. For details, see the SteelHead Deployment Guide.
RiOS supports Full Transparency with a stateful firewall. A stateful firewall examines packet headers, stores information, and then validates subsequent packets against this information. If your system uses a stateful firewall, the following option is available:
•  Full Transparency with Reset - Enables full address and port transparency and also sends a forward reset between receiving the probe response and sending the transparent inner channel SYN. This mode ensures the firewall does not block inner transparent connections because of information stored in the probe connection. The forward reset is necessary because the probe connection and inner connection use the same IP addresses and ports and both map to the same firewall connection. The reset clears the probe connection created by the appliance and allows for the full transparent inner connection to traverse the firewall.
Notes:
•  For details on configuring WAN visibility and its implications, see the SteelHead Deployment Guide.
•  WAN visibility works with autodiscover in-path rules only. It does not work with fixed-target rules or server-side out-of-path configurations.
•  To enable full transparency globally by default, create an in-path autodiscover rule, select Full Transparency as the WAN visibility mode, and place it above the default in-path rule and after the Secure, Interactive, and RBT-Proto rules.
•  You can configure an appliance for WAN visibility even if the server-side SteelHead does not support it, but the connection is not transparent.
•  You can enable full transparency for servers in a specific IP address range and you can enable port transparency on a specific server. For details, see the SteelHead Deployment Guide.
•  The Top Talkers report displays statistics on the most active, heaviest users of WAN bandwidth, providing some WAN visibility without enabling a WAN Visibility Mode.
Position
Select Start, End, or a rule number from the drop-down list. RiOS evaluates rules in numerical order starting with rule 1. If the conditions set in the rule match, then the rule is applied, and the system moves on to the next packet. If the conditions set in the rule do not match, the system consults the next rule. For example, if the conditions of rule 1 do not match, rule 2 is consulted. If rule 2 matches the conditions, it is applied, and no further rules are consulted.
In general, list rules in this order:
1. Deny
2. Discard
3. Pass Through
4. Fixed-Target
5. Auto Discover
Note: The default rule, Auto Discover, which optimizes all remaining traffic that has not been selected by another rule, cannot be removed and is always listed last.
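First-match evaluation can be sketched as a simple loop (illustrative Python with assumed data structures, not RiOS code): rules are consulted in numerical order, the first match wins, and the default Auto Discover rule catches anything that falls through:

from dataclasses import dataclass
from typing import List
import ipaddress

@dataclass
class InPathRule:
    rule_type: str   # "deny", "discard", "pass-through", "fixed-target", "auto-discover"
    src_subnet: str
    dst_subnet: str

def matches(rule: InPathRule, src_ip: str, dst_ip: str) -> bool:
    return (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule.src_subnet, strict=False)
            and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(rule.dst_subnet, strict=False))

def evaluate(rules: List[InPathRule], src_ip: str, dst_ip: str) -> str:
    for rule in rules:                  # rule 1, rule 2, ... in numerical order
        if matches(rule, src_ip, dst_ip):
            return rule.rule_type       # first match wins; later rules are not consulted
    return "auto-discover"              # the default rule, always listed last

rules = [
    InPathRule("pass-through", "10.11.10.0/24", "0.0.0.0/0"),
    InPathRule("fixed-target", "0.0.0.0/0", "10.12.10.0/24"),
]
print(evaluate(rules, "10.11.10.5", "10.12.10.5"))    # pass-through (rule 1 matches first)
print(evaluate(rules, "192.0.2.1", "10.12.10.5"))     # fixed-target (rule 2)
print(evaluate(rules, "192.0.2.1", "198.51.100.1"))   # auto-discover (default rule)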
Description
Describe the rule to facilitate administration.
Enable Rule
Select to enable the in-path rule.
Add
Adds the rule to the list. The Management Console redisplays the In-Path Rules table and applies your modifications to the running configuration, which is stored in memory.
Remove Selected Rules
Select the check box next to the name and click Remove Selected Rules.
Move Selected Rules
Moves the selected rules. Click the arrow next to the desired rule position; the rule moves to the new position.
3. The default rule, Auto Discover, which optimizes all remaining traffic that has not been selected by another rule, cannot be removed and is always listed last.
The default rule maps to all IPv4 and IPv6 addresses (All-IP:*).
The default rule for TCP traffic, either IPv4 or IPv6, attempts autodiscovery with correct addressing as the WAN visibility mode.
For details on IPv6 deployment options, see the SteelHead Deployment Guide.
To edit an in-path rule
1. Choose Optimization > Network Services: In-Path Rules to display the In-Path Rules page.
2. Select the rule number in the rule list.
3. Edit the rule.
4. Click Save to save your settings permanently.
After the Management Console has applied your settings, you can verify whether changes have had the desired effect by reviewing related reports. When you have verified appropriate changes, you can write the active configuration that is stored in memory to the active configuration file (or you can save it as any filename you choose). For details, see Managing configuration files.
Related topics
•  In-path rules overview
•  Default in-path rules
•  Configuring general service settings
•  Enabling peering and configuring peering rules
•  Configuring port labels
•  Configuring HTTP optimization
•  Secure inner channel overview
•  Viewing current connection reports
•  Viewing connection history reports