Configuring general service settings
This section describes how to configure general optimization service settings in the Optimization > Network Services: General Service Settings page. It includes these topics:
•  Enabling basic deployment options
•  Enabling failover
•  Configuring general service settings
Enabling basic deployment options
General Service Settings include controls to enable or disable in-path, out-of-path, and failover support, and to set connection limits and the maximum connection pool size.
If you have a SteelFusion Edge that contains multiple bypass cards, the Management Console displays options to enable in-path support for these ports. The number of these interface options depends on the number of pairs of LAN and WAN ports that you have enabled in your SteelFusion Edge.
The properties and values you set in this page depend on your deployment. For example, these deployment types would require different choices:
•  Physical In-Path - The SteelFusion Edge is physically in the direct path between the client and the server. The clients and servers continue to see client and server IP addresses. Physical in-path configurations are suitable for any location where the total bandwidth is within the limits of the installed SteelFusion Edge.
•  Virtual In-Path - The SteelFusion Edge is virtually in the path between the client and the server. This deployment differs from a physical in-path deployment in that a packet redirection mechanism is used to direct packets to SteelFusion Edges that are not in the physical path. Redirection mechanisms include WCCP, Layer-4 switches, and PBR. In this configuration, clients and servers continue to see client and server IP addresses.
•  Out-of-Path - The SteelFusion Edge is not in the direct path between the client and the server. Servers see the IP address of the server-side SteelFusion Edge rather than the client IP address, which might impact security policies. An out-of-path configuration is suitable for data center locations where physical in-path or virtual in-path configurations are not possible.
For an overview of in-path and out-of-path deployment options, see the SteelHead Deployment Guide.
Enabling failover
In the event of appliance failure, the SteelFusion Edge enters bypass mode to avoid becoming a single point of failure in your network. If you want optimization to continue in the event of appliance failure, you can deploy redundant appliances as failover buddies.
For details about failover redundancy, see the SteelHead Deployment Guide.
Physical in-path failover deployment
For a physical in-path failover deployment, you configure a pair of appliances: one as a master and the other as a backup. The master appliance (usually the appliance closest to the LAN) is active; the backup appliance is passive. The backup becomes active if the master fails or if the master reaches its connection limit and enters admission control status. While the master is active, the backup does not intercept traffic; it pings the master to make sure that it is alive and processing data. If the master fails, the backup takes over and starts processing all of the connections. When the master comes back up, it sends a message to the backup that it has recovered, and the backup stops processing new connections (but continues to serve existing ones until they end).
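The following minimal Python sketch is an illustration only, not Riverbed's implementation; the ping_master and process_connections callables, the heartbeat interval, and the way recovery is detected are assumptions made for the example.

```python
import time

HEARTBEAT_INTERVAL = 1.0  # seconds between health checks (illustrative value, not a product default)

def backup_loop(ping_master, process_connections):
    """Illustrative backup-appliance loop: stay passive while the master answers
    pings, take over when it stops answering, and stop accepting new connections
    once the master is seen again (the master's recovery message is modeled here
    simply by the heartbeat succeeding)."""
    active = False
    while True:
        if not ping_master():       # heartbeat: is the master alive and processing data?
            active = True           # master failed: take over and process all connections
        elif active:
            active = False          # master recovered: stop taking new connections
                                    # (existing connections are served until they end)
        if active:
            process_connections()
        time.sleep(HEARTBEAT_INTERVAL)
```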
Out-of-path failover deployment
For an out-of-path failover deployment, you deploy two server-side appliances and add a fixed-target rule to the client-side appliance to define the master and backup target appliances. When both the master and backup appliances are functioning properly, the connections traverse the master appliance. If the master appliance fails, subsequent connections traverse the backup appliance.
The master appliance uses an Out-of-Band (OOB) connection. The OOB connection is a single, unique TCP connection that communicates internal information only; it does not contain optimized data. If the master appliance becomes unavailable, it loses this OOB connection, which times out in approximately 40 to 45 seconds. After the OOB connection times out, the client-side appliance declares the master appliance unavailable and connects to the backup appliance.
During the 40- to 45-second delay before the client-side appliance declares a peer unavailable, it passes through any incoming new connections; they are not blackholed.
While the client-side appliance is using the backup appliance for optimization, it attempts to connect to the master appliance every 30 seconds. If the connection succeeds, the client-side appliance reconnects to the master appliance for any new connections. Existing connections remain on the backup appliance for their duration. This is the only time (immediately after a recovery from a master failure) that connections are optimized by both the master appliance and the backup.
If both the master and backup appliances become unreachable, the client-side appliance tries to connect to both appliances every 30 seconds. Any new connections are passed through the network unoptimized.
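As an illustration of the timing described above (not the appliance's actual logic; the service port, the socket probe, and the function names are all assumptions), a small Python sketch of the client-side peer choice might look like this:

```python
import socket

OOB_TIMEOUT = 45      # approximate OOB timeout from the text (40 to 45 seconds); not enforced in this sketch
RETRY_INTERVAL = 30   # seconds between attempts to reach an unavailable peer

def peer_reachable(addr, port=7800, timeout=3):
    """Probe a peer with a plain TCP connect; the port number is an assumption."""
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return True
    except OSError:
        return False

def new_connection_target(master_addr, backup_addr):
    """Prefer the master for new connections, fall back to the backup, and
    return None to indicate that the connection is passed through unoptimized."""
    if peer_reachable(master_addr):
        return master_addr        # master reachable: new connections go to the master
    if peer_reachable(backup_addr):
        return backup_addr        # master unavailable: optimize through the backup
    return None                   # neither peer reachable: pass through

# While a peer is unavailable, the client side retries it every RETRY_INTERVAL seconds;
# existing optimized connections stay on the appliance that is already serving them.
```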
Synchronizing master and backup failover pairs
In addition to enabling failover and configuring buddy peering, you must synchronize the RiOS data stores for the master-backup pairs to ensure optimal use of SDR for warm data transfer. With warm transfers, only new or modified data is sent, dramatically increasing the rate of data transfer over the WAN. For information on synchronizing RiOS data stores for master-backup pairs, see Synchronizing peer RiOS data stores.
Configuring general service settings
In the General Service Settings page, you can also modify the default settings for the maximum number of half-opened connections from a single source IP address and for the maximum connection pool size. The configuration descriptions in the following procedure explain each setting in detail.
To configure general optimization service settings
1. Choose Optimization > Network Services: General Service Settings to display the General Service Settings page.
2. Under In-Path Settings, complete the configuration as described in this table.
Control
Description
Enable In-Path Support
Enables optimization of traffic when the appliance is in the direct path between the client and the server.
Reset Existing Client Connections on Start Up
Enables kickoff globally. If you enable kickoff, connections that exist when the optimization service is started or restarted are disconnected. When the connections are retried, they are optimized.
Generally, connections are short-lived and kickoff is not necessary; it is suitable only for very challenging remote environments. For example, in a remote branch office with a T1 and 35-ms round-trip time, you would want connections to migrate to optimization gracefully rather than risk interruption with kickoff.
RiOS also provides a way to reset preexisting connections that match an in-path rule that has kickoff enabled. You can also reset a single pass-through or optimized connection in the Current Connections report, one connection at a time.
Do not enable kickoff for in-path appliances that use autodiscover or if you do not have an appliance on the remote side of the network. If you do not set any in-path rules, the default behavior is to autodiscover all connections. If kickoff is enabled, all connections that existed before the appliance started are reset.
Enable L4/PBR/WCCP Interceptor Support
Enables optional virtual in-path support on all the interfaces for networks that use Layer-4 switches, PBR, WCCP, and Interceptor. External traffic redirection is supported only on the first in-path interface. These redirection methods are available:
•  Layer-4 Switch - You enable Layer-4 switch support when you have multiple Edges in your network, so that you can manage large bandwidth requirements.
•  Policy-Based Routing (PBR) - PBR allows you to define policies to route packets instead of relying on routing protocols. You enable PBR to redirect traffic that you want optimized by an appliance that is not in the direct physical path between the client and server.
•  Web Cache Communication Protocol (WCCP) - If your network design requires you to use WCCP, a packet redirection mechanism directs packets to RiOS appliances that are not in the direct physical path to ensure that they are optimized.
For details about configuring Layer-4 switch, PBR, and WCCP deployments, see the SteelHead Deployment Guide.
If you enable this option on a SteelFusion Edge, you must configure subnet side rules to identify LAN-side traffic; otherwise, the appliance does not correctly support Layer-4 routers, PBR, WCCP on the client side, or SteelHead Interceptors. In the case of a client-side appliance in a WCCP environment, the appliance does not optimize client-side traffic unless you configure subnet side rules. In virtual in-path configurations, all traffic flows in and out of one physical interface, and the default subnet side rule causes all traffic to appear to originate from the WAN side of the device.
The AWS SteelHead-c does not support L4/PBR/WCCP or Interceptor; the ESX SteelHead-c does.
Enable Agent-Intercept
This feature is supported only by the SteelHead-c.
Select this check box to enable configuration of the transparency mode on the SteelHead-c and transmit it to the Discovery Agent. The Discovery Agent on the server provides these transparency modes for client connections:
•  Restricted transparent - All client connections are transparent with these restrictions:
–  If the client connection is from a NATed network, the application server sees the private IP address of the client.
–  You can use this mode only if there is no conflict between the private IP address ranges (there are no duplicate IP addresses) and ports. This is the default mode.
•  Safe transparent - If the client is behind a NAT device, the client connection to the application server is nontransparent—the application server sees the connection as a connection from the SteelHead-c IP address and not the client IP address. All connections from a client that is not behind a NAT device are transparent and the server sees the connections from the client IP address instead of the SteelHead-c IP address.
•  Non-transparent - All client connections are nontransparent—the application server sees the connections from the server-side SteelHead IP address and not the client IP address. Riverbed recommends that you use this mode as the last option.
Enable Optimizations on Interface interface_name
Enables in-path support for additional NICs. The SteelFusion Edge supports both bypass and nonbypass NICs.
If you have an appliance that contains multiple two-port or four-port bypass cards, the Management Console displays options to enable in-path support for these ports. The number of these interface options depends on the number of pairs of LAN and WAN ports that you have enabled in your appliance.
The interface names for the bypass cards are a combination of the slot number and the port pair (inpath<slot>_<pair>). For example, if a four-port bypass card is located in slot 5 of your appliance, the interface names are inpath5_0 and inpath5_1. Alternatively, if the bypass card is located in slot 6 of your appliance, the interface names are inpath6_0 and inpath6_1 (see the sketch after this table). For details about installing additional bypass cards, see the SteelFusion Edge Hardware and Maintenance Guide.
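A short Python sketch of this naming convention (illustrative only):

```python
def inpath_interface_names(slot, lan_wan_pairs):
    """Return in-path interface names following the inpath<slot>_<pair> convention."""
    return [f"inpath{slot}_{pair}" for pair in range(lan_wan_pairs)]

# A four-port bypass card (two LAN/WAN pairs) in slot 5:
print(inpath_interface_names(5, 2))   # ['inpath5_0', 'inpath5_1']

# The same card installed in slot 6:
print(inpath_interface_names(6, 2))   # ['inpath6_0', 'inpath6_1']
```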
3. Under Out-of-Path Settings, complete the configuration as described in this table.
Control
Description
Enable Out-of-Path Support
Enables out-of-path support on a server-side appliance, where only the appliance's primary interface connects to the network. The appliance can be connected anywhere in the LAN. There is no redirecting device in an out-of-path appliance deployment. You configure fixed-target in-path rules for the client-side appliance that point to the primary IP address of the out-of-path appliance. The out-of-path appliance uses its primary IP address when communicating to the server. The remote appliance must be deployed in either physical or virtual in-path mode.
If you set up an out-of-path configuration with failover support, you must set fixed-target rules that specify the master and backup appliances.
4. Under Connection Settings, complete the configuration as described in this table.
Control
Description
Half-Open Connection Limit per Source IP
Restricts the number of half-opened connections allowed from a single source IP address (that is, the client machine initiating the connections).
Set this feature to block a source IP address that is opening multiple connections to invalid hosts or ports simultaneously (for example, a virus or a port scanner).
This feature does not prevent a source IP address from connecting to valid hosts at a normal rate. Thus, a source IP address could have more established connections than the limit.
The default value is 4096.
The appliance counts the number of half-opened connections for a source IP address (connections that check if a server connection can be established before accepting the client connection). If the count is above the limit, new connections from the source IP address are passed through unoptimized.
Note: If you have a client connecting to valid hosts or ports at a very high rate, some of its connections might be passed through even though all of the connections are valid.
Maximum Connection Pool Size
Specify the maximum number of TCP connections in a connection pool.
Connection pooling enhances network performance by reusing active connections instead of creating a new connection for every request. Connection pooling is useful for protocols that create a large number of short-lived TCP connections, such as HTTP.
To optimize such protocols, a connection pool manager maintains a pool of idle TCP connections, up to the maximum pool size. When a client requests a new connection to a previously visited server, the pool manager checks the pool for unused connections and returns one if available. Thus, the client and the appliance do not have to wait for a three-way TCP handshake to finish across the WAN. If all connections currently in the pool are busy and the maximum pool size has not been reached, the new connection is created and added to the pool. When the pool reaches its maximum size, all new connection requests are queued until a connection in the pool becomes available or the connection attempt times out. A minimal sketch of this behavior appears after this table.
The default value is 20. A value of 0 specifies no connection pool.
Note: You must restart the appliance after changing this setting.
Note: Viewing the Connection Pooling report can help determine whether to modify the default setting. If the report indicates an unacceptably low ratio of pool hits per total connection requests, increase the pool size.
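The pooling behavior described in this table can be sketched as follows. This is a minimal illustration that assumes a connect callable for opening a TCP connection; it is not the appliance's implementation.

```python
import queue

class ConnectionPool:
    """Illustrative connection pool: reuse idle connections, open new ones up to
    max_size, and make further requests wait until a connection is released."""

    def __init__(self, connect, max_size=20):   # 20 mirrors the default described above
        self.connect = connect                  # callable that opens a new connection
        self.max_size = max_size
        self.idle = queue.Queue()
        self.total = 0

    def acquire(self, timeout=None):
        if self.max_size == 0:
            return self.connect()               # a pool size of 0 means no pooling at all
        try:
            return self.idle.get_nowait()       # reuse an idle connection: no new WAN handshake
        except queue.Empty:
            pass
        if self.total < self.max_size:
            self.total += 1
            return self.connect()               # pool not yet full: open a new connection
        return self.idle.get(timeout=timeout)   # pool full: wait until a connection is released

    def release(self, conn):
        if self.max_size == 0:
            conn.close()                        # pooling disabled: do not retain the connection
        else:
            self.idle.put(conn)                 # return the connection to the pool for reuse
```

In this sketch, an acquire call that reuses an idle connection corresponds to a pool hit in the Connection Pooling report.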
5. Under Failover Settings, complete the configuration as described in this table.
Control
Description
Enable Failover Support
Configures a failover deployment on either a master or backup appliance. In the event of a failure in the master appliance, the backup appliance takes its place with a warm RiOS data store, and can begin delivering fully optimized performance immediately.
The master and backup appliances must be the same appliance model.
Current Appliance is
Select Master or Backup from the drop-down list. A master appliance is the primary appliance; the backup appliance is the appliance that automatically optimizes traffic if the master appliance fails.
IP Address (peer in-path interface)
Specify the IP address for the master or backup appliance. You must specify the peer appliance's in-path IP address (inpath0_0), not its primary interface IP address.
6. Optionally, under Packet Mode Optimization Settings, complete the configuration as described in this table. For details about packet-mode optimization, see Creating in-path rules for packet-mode optimization.
Control
Description
Enable Packet Mode Optimization
Performs packet-by-packet SDR bandwidth optimization on TCP or UDP (over IPv4 or IPv6) flows. This feature uses fixed-target packet-mode optimization in-path rules to optimize bandwidth for applications over these transport protocols.
By default, packet-mode optimization is disabled.
Enabling this feature requires an optimization service restart.
7. Click Apply to apply your settings.
8. Click Save to save your settings permanently.
Note: After applying the settings, you can verify whether the changes have had the desired effect by reviewing the related reports. When you have verified that the changes are what you want, you can write the active configuration that is stored in memory to the active configuration file (or save it under a filename of your choice). For details about saving configurations, see Managing configuration files.
Related topics
•  Configuring in-path rules
•  Modifying general host settings
•  Enabling peering and configuring peering rules
•  Configuring the RiOS data store
•  Configuring service ports
•  Modifying in-path interfaces
•  Configuring connection forwarding features
•  Configuring subnet side rules