Configuring general service settings
You configure general optimization service settings in the Optimization > Network Services: General Service Settings page.
Enabling basic deployment options
General Service Settings include controls to enable or disable in-path, out-of-path, failover support, and to set connection limits and the maximum connection pooling size.
If you have a SteelHead that contains multiple bypass cards, the Management Console displays options to enable in-path support for these ports. The number of these interface options depends on the number of pairs of LAN and WAN ports that you have enabled in your SteelHead.
The properties and values you set in this page depend on your deployment. For example, these deployment types would require different choices:
Physical in-path—The SteelHead is physically in the direct path between the client and the server. The clients and servers continue to see client and server IP addresses. Physical in-path configurations are suitable for any location where the total bandwidth is within the limits of the installed SteelHead.
Virtual in-path—The SteelHead is virtually in the path between the client and the server. This deployment differs from a physical in-path in that a packet redirection mechanism is used to direct packets to SteelHeads that aren’t in the physical path. Redirection mechanisms include SteelHead Interceptor, WCCP, Layer-4 switches, and PBR. In this configuration, clients and servers continue to see client and server IP addresses.
Out-of-path—The SteelHead isn’t in the direct path between the client and the server. Servers see the IP address of the server-side SteelHead rather than the client IP address, which might impact security policies. An out-of-path configuration is suitable for data center locations where physically in-path or virtually in-path configurations aren’t possible.
For an overview of in-path and out-of-path deployment options, see the SteelHead Deployment Guide.
Enabling failover
In the event of appliance failure, the SteelHead enters bypass mode to avoid becoming a single point of failure in your network. If you want optimization to continue in the event of appliance failure, you can deploy redundant appliances as failover buddies.
For details about failover redundancy, see the SteelHead Deployment Guide.
Physical in-path failover deployment
For a physical in-path failover deployment, you configure a pair of SteelHeads: one as a master and the other as a backup. The master SteelHead in the pair (usually the SteelHead closest to the LAN) is active; the backup is passive. The backup becomes active if the master fails or if the master reaches its connection limit and enters admission control status. While the master is active, the backup doesn’t intercept traffic; instead, it pings the master to make sure that it is alive and processing data. If the master fails, the backup takes over and starts processing all of the connections. When the master comes back up, it sends a message to the backup that it has recovered, and the backup stops processing new connections (but continues to serve existing ones until they end).
Out-of-path failover deployment
For an out-of-path failover deployment, you deploy two server-side SteelHeads and add a fixed-target rule to the client-side SteelHead to define the master and backup target appliances. When both the master and backup SteelHeads are functioning properly, the connections traverse the master appliance. If the master SteelHead fails, subsequent connections traverse the backup SteelHead.
The master SteelHead uses an Out-of-Band (OOB) connection. The OOB connection is a single, unique TCP connection that communicates internal information only; it doesn’t contain optimized data. If the master SteelHead becomes unavailable, it loses this OOB connection and the OOB connection times out in approximately 40 to 45 seconds. After the OOB connection times out, the client-side SteelHead declares the master SteelHead unavailable and connects to the backup SteelHead.
During the 40 to 45 second delay before the client-side SteelHead declares a peer unavailable, it passes through any incoming new connections; they’re not blackholed.
While the client-side SteelHead is using the backup SteelHead for optimization, it attempts to connect to the master SteelHead every 30 seconds. If the connection succeeds, the client-side SteelHead reconnects to the master SteelHead for any new connections. Existing connections remain on the backup SteelHead for their duration. This is the only time (immediately after a recovery from a master failure) that connections are optimized by both the master SteelHead and the backup.
If both the master and backup SteelHeads become unreachable, the client-side SteelHead tries to connect to both appliances every 30 seconds. Any new connections are passed through the network unoptimized.
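The peer-selection behavior described above can be summarized as a small decision model. The sketch below is illustrative only: the class and method names are hypothetical (they are not a RiOS API), and only the timing values come from this section.

```python
# Illustrative model of the client-side SteelHead's choice of peer for a
# NEW connection during out-of-path failover. Names are hypothetical;
# only the timing constants (~40-45 s OOB timeout, 30 s retry interval)
# come from the documentation text.

OOB_TIMEOUT_SECS = 45      # OOB connection loss declares the master unavailable
RETRY_INTERVAL_SECS = 30   # how often the client side retries a down peer

class FailoverTarget:
    def __init__(self, master_up=True, backup_up=True):
        self.master_up = master_up
        self.backup_up = backup_up

    def choose_peer(self):
        """Return which appliance handles a new connection."""
        if self.master_up:
            return "master"        # normal case: connections traverse the master
        if self.backup_up:
            return "backup"        # master declared down after the OOB timeout
        return "pass-through"      # both unreachable: new connections unoptimized
```

Note that, as the text states, existing connections are not moved when the master recovers; only new connections go back to the master.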
Synchronizing master and backup failover pairs
In addition to enabling failover and configuring buddy peering, you must synchronize the RiOS data stores for the master-backup pairs to ensure optimal use of SDR for warm data transfer. With warm transfers, only new or modified data is sent, dramatically increasing the rate of data transfer over the WAN. For information on synchronizing RiOS data stores for master-backup pairs, see Synchronizing peer RiOS data stores.
Configuring general service settings
In the General Service Settings page, you can also modify the default limit on half-opened connections from a single source IP address and the default connection pool size. Pay careful attention to the configuration descriptions included in the following procedure.
To configure general optimization service settings
1. Choose Optimization > Network Services: General Service Settings to display the General Service Settings page.
General Service Settings page
2. Under In-Path Settings, complete the configuration as described in this table.
Control
Description
Enable In-Path Support
Enables optimization of traffic when the SteelHead is in the direct path between the client and the server.
Reset Existing Client Connections on Start Up
Enables kickoff globally. If you enable kickoff, connections that exist when the optimization service is started and restarted are disconnected. When the connections are retried they’re optimized.
Generally, connections are short-lived and kickoff is not necessary; it is suitable only for very challenging remote environments. In a remote branch office with a T1 and a 35-ms round-trip time, for example, you would want connections to migrate to optimization gracefully rather than risk interruption with kickoff.
RiOS provides a way to reset preexisting connections that match an in-path rule with kickoff enabled. You can also reset individual pass-through or optimized connections, one connection at a time, in the Current Connections report.
Do not enable kickoff for in-path SteelHeads that use autodiscovery, or if you don’t have a SteelHead on the remote side of the network. If you don’t set any in-path rules, the default behavior is to autodiscover all connections; with kickoff enabled, every connection that existed before the SteelHead started is reset.
Enable L4/PBR/WCCP Interceptor Support
Enables optional, virtual in-path support on all the interfaces for networks that use Layer-4 switches, PBR, WCCP, and SteelHead Interceptor. External traffic redirection is supported only on the first in-path interface. These redirection methods are available:
Layer-4 Switch—You enable Layer-4 switch support when you have multiple SteelHeads in your network, so that you can manage large bandwidth requirements.
Policy-Based Routing (PBR)—PBR allows you to define policies to route packets instead of relying on routing protocols. You enable PBR to redirect traffic that you want optimized by a SteelHead that is not in the direct physical path between the client and server.
Web Cache Communication Protocol (WCCP)—If your network design requires you to use WCCP, a packet redirection mechanism directs packets to RiOS appliances that aren’t in the direct physical path to ensure that they’re optimized.
For details about configuring Layer-4 switch, PBR, and WCCP deployments, see the SteelHead Deployment Guide.
The AWS Cloud Accelerator doesn’t support L4/PBR/WCCP or Interceptor redirection.
Enable Agent-Intercept
This feature is only supported by the Cloud Accelerator.
Enables configuration of the transparency mode in the Cloud Accelerator and transmits it to the Discovery Agent. The Discovery Agent in the server provides these transparency modes for client connections:
Restricted transparent—All client connections are transparent with these restrictions:
If the client connection is from a NATted network, the application server sees the private IP address of the client.
You can use this mode only if there’s no conflict between the private IP address ranges (there are no duplicate IP addresses) and ports. This is the default mode.
Safe transparent—If the client is behind a NAT device, the client connection to the application server is nontransparent—the application server sees the connection as a connection from the Cloud Accelerator IP address and not the client IP address. All connections from a client that is not behind a NAT device are transparent and the server sees the connections from the client IP address instead of the Cloud Accelerator IP address.
Non-transparent—All client connections are nontransparent—the application server sees the connections from the server-side SteelHead IP address and not the client IP address. We recommend that you use this mode as the last option.
Enable Optimizations on Interface <interface-name>
Enables in-path support for additional bypass cards.
If you have an appliance that contains multiple two-port, four-port, or six-port bypass cards, the Management Console displays options to enable in-path support for these ports. The number of these interface options depends on the number of pairs of LAN and WAN ports that you have enabled in your SteelHead.
The interface names for the bypass cards are a combination of the slot number and the port pairs (inpath<slot>_<pair>, inpath<slot>_<pair>): for example, if a four-port bypass card is located in slot 0 of your appliance, the interface names are inpath0_0 and inpath0_1. Alternatively, if the bypass card is located in slot 1 of your appliance, the interface names are inpath1_0 and inpath1_1. For details about installing additional bypass cards, see the Network and Storage Card Installation Guide.
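The naming rule above can be expressed compactly. This sketch simply derives the interface names from the slot number and the number of port pairs; the function name is ours for illustration, not a RiOS API.

```python
def inpath_interface_names(slot: int, port_pairs: int) -> list[str]:
    """Derive in-path interface names for a bypass card: inpath<slot>_<pair>.

    A four-port card has two LAN/WAN port pairs, so port_pairs=2.
    """
    return [f"inpath{slot}_{pair}" for pair in range(port_pairs)]
```

For example, a four-port card in slot 0 yields `inpath0_0` and `inpath0_1`, matching the text above.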
3. Under Out-of-Path Settings, complete the configuration as described in this table.
Control
Description
Enable Out-of-Path Support
Enables out-of-path support on a server-side SteelHead, where only a SteelHead primary interface connects to the network. The SteelHead can be connected anywhere in the LAN. There is no redirecting device in an out-of-path SteelHead deployment. You configure fixed-target in-path rules for the client-side SteelHead. The fixed-target in-path rules point to the primary IP address of the out-of-path SteelHead. The out-of-path SteelHead uses its primary IP address when communicating to the server. The remote SteelHead must be deployed either in a physical or virtual in-path mode.
If you set up an out-of-path configuration with failover support, you must set fixed-target rules that specify the master and backup SteelHeads.
4. Under Connection Settings, complete the configuration as described in this table.
Control
Description
Half-Open Connection Limit per Source IP
Restricts half-opened connections on a source IP address initiating connections (that is, the client machine).
Set this feature to block a source IP address that is opening multiple connections to invalid hosts or ports simultaneously (for example, a virus or a port scanner).
This feature doesn’t prevent a source IP address from connecting to valid hosts at a normal rate. Thus, a source IP address could have more established connections than the limit.
The default value is 4096.
The appliance counts the number of half-opened connections for a source IP address (connections that check if a server connection can be established before accepting the client connection). If the count is above the limit, new connections from the source IP address are passed through unoptimized.
If you have a client connecting to valid hosts or ports at a very high rate, some of its connections might be passed through even though all of the connections are valid.
Maximum Connection Pool Size
Specify the maximum number of TCP connections in a connection pool.
Connection pooling enhances network performance by reusing active connections instead of creating a new connection for every request. Connection pooling is useful for protocols that create a large number of short-lived TCP connections, such as HTTP.
To optimize such protocols, a connection pool manager maintains a pool of idle TCP connections, up to the maximum pool size. When a client requests a new connection to a previously visited server, the pool manager checks the pool for unused connections and returns one if available. Thus, the client and the SteelHead don’t have to wait for a three-way TCP handshake to finish across the WAN. If all connections currently in the pool are busy and the maximum pool size has not been reached, the new connection is created and added to the pool. When the pool reaches its maximum size, all new connection requests are queued until a connection in the pool becomes available or the connection attempt times out.
The default value is 20. A value of 0 specifies no connection pool.
You must restart the SteelHead after changing this setting.
Viewing the Connection Pooling report can help determine whether to modify the default setting. If the report indicates an unacceptably low ratio of pool hits per total connection requests, increase the pool size.
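The pool-manager behavior described above (reuse an idle connection when possible, create new ones until the maximum pool size, then queue requests) can be sketched as a toy model. This is an illustrative simplification, not the RiOS implementation, and all names in it are hypothetical.

```python
from collections import deque

class ConnectionPool:
    """Toy model of connection-pool logic: reuse idle connections, open new
    ones until max_size is reached, then queue requests until one frees up."""

    def __init__(self, max_size=20):   # 20 is the documented default pool size
        self.max_size = max_size
        self.idle = deque()            # warm, reusable connections
        self.busy = set()
        self.waiters = deque()         # requests queued when the pool is full
        self._next_id = 0

    def acquire(self):
        if self.idle:
            conn = self.idle.popleft() # pool hit: skip the WAN 3-way handshake
        elif len(self.busy) + len(self.idle) < self.max_size:
            conn = self._next_id       # pool miss: open a new connection
            self._next_id += 1
        else:
            self.waiters.append(None)  # pool exhausted: request is queued
            return None
        self.busy.add(conn)
        return conn

    def release(self, conn):
        self.busy.discard(conn)
        if self.waiters:               # hand the freed connection to a waiter
            self.waiters.popleft()
            self.busy.add(conn)
        else:
            self.idle.append(conn)
```

In this model, the ratio of pool hits (the `self.idle` branch) to total `acquire` calls corresponds to the hit ratio shown in the Connection Pooling report.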
5. Under Failover Settings, complete the configuration as described in this table.
Control
Description
Enable Failover Support
Configures a failover deployment on either a master or backup SteelHead. In the event of a failure in the master appliance, the backup appliance takes its place with a warm RiOS data store and can begin delivering fully optimized performance immediately.
The master and backup SteelHeads must be the same hardware model.
Current Appliance is
Select Master or Backup from the drop-down list. A master SteelHead is the primary appliance; the backup SteelHead is the appliance that automatically optimizes traffic if the master appliance fails.
IP Address (peer in-path interface)
Specify the in-path IP address of the peer (master or backup) SteelHead. You must specify the inpath0_0 in-path interface IP address, not the primary interface IP address.
6. Optionally, under Packet Mode Optimization Settings, complete the configuration as described in this table. For details about packet-mode optimization, see Creating in-path rules for packet-mode optimization.
Control
Description
Enable Packet Mode Optimization
Performs packet-by-packet SDR bandwidth optimization on TCP or UDP (over IPv4 or IPv6) flows. This feature uses fixed-target packet mode optimization in-path rules to optimize bandwidth for applications over these transport protocols.
TCPv6 or UDPv4 flows are supported. TCPv4 and UDPv6 flows require a minimum RiOS version of 8.5.
By default, packet-mode optimization is disabled.
Enabling this feature requires an optimization service restart.
7. Click Apply to apply your settings.
8. Click Save to Disk to save your settings permanently.
After applying the settings, you can verify whether your changes have had the desired effect by reviewing related reports. Once verified, you can write the active configuration that is stored in memory to the active configuration file (or save it under any filename you choose). For details about saving configurations, see Managing configuration files.
Related topics
Modifying in-path interfaces
Configuring in-path rules
Enabling peering and configuring peering rules
Configuring the RiOS data store
Configuring service ports
Configuring connection forwarding features
Configuring subnet side rules