Reference: Policy Pages Settings
This appendix provides a reference for the policy page settings. It assumes you’re familiar with configuring and managing Riverbed products and doesn’t include detailed overviews of the individual feature sets associated with the policies.
For detailed information about creating policies and configuring policy pages, see About Policies.
Networking policy settings
This section describes the Networking Policy feature set. These procedures assume you have already created a Networking Policy.
Host settings page
You can view and modify general host settings for the selected networking policy in the Host Settings page. For detailed information about host settings, see the SteelHead User Guide.
When you initially ran the installation wizard, you set required network host settings for the appliance. Use these groups of controls on this page only if modifications or additional configuration is required:
• DNS Settings—We recommend you use DNS resolution. For details, see DNS settings.
• Hosts—If you don’t use DNS resolution, or if the host doesn’t have a DNS entry, you can create a host-IP address resolution map. For details, see Hosts.
• Proxies—Configure proxy addresses for network and FTP proxy access to the appliance. For details, see Web/FTP Proxy.
DNS settings
These configuration options are available:
Primary DNS Server
Specifies the IP address for the primary name server. The IP address can be either IPv4 or IPv6. For IPv6, specify the address as eight 16-bit hexadecimal fields separated by colons (128 bits total). For example: 2001:38dc:0052:0000:0000:e9a4:00c5:6282
You don’t need to include leading zeros. For example: 2001:38dc:52:0:0:e9a4:c5:6282
You can replace consecutive zero strings with double colons (::). For example: 2001:38dc:52::e9a4:c5:6282
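The abbreviated forms above follow standard IPv6 text notation. As an informal check outside the product, Python’s standard `ipaddress` module produces the same compressed and full forms:

```python
import ipaddress

# Full-form address from the example above.
addr = ipaddress.IPv6Address("2001:38dc:0052:0000:0000:e9a4:00c5:6282")

# Canonical compressed form: leading zeros dropped and the longest
# run of zero fields collapsed to "::".
print(addr.compressed)  # 2001:38dc:52::e9a4:c5:6282

# Full eight-field form with leading zeros restored.
print(addr.exploded)    # 2001:38dc:0052:0000:0000:e9a4:00c5:6282
```

All three written forms parse to the same address, so any of them is accepted wherever an IPv6 address is called for.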
Secondary DNS Server
Specifies the IP address for the secondary name server.
Tertiary DNS Server
Specifies the IP address for the tertiary name server.
DNS Domain List
Specifies an ordered list of domain names. If you specify domains, the system automatically finds the appropriate domain for each of the hosts that you specify in the system.
Hosts
These configuration options are available:
Add a New Host
Displays the controls for adding a new host.
IP Address
Specifies the IP address for the host. The IP address can be either IPv4 or IPv6. For IPv6, specify the address as eight 16-bit hexadecimal fields separated by colons (128 bits total). For example: 2001:38dc:0052:0000:0000:e9a4:00c5:6282
You don’t need to include leading zeros. For example: 2001:38dc:52:0:0:e9a4:c5:6282
You can replace consecutive zero strings with double colons (::). For example: 2001:38dc:52::e9a4:c5:6282
Hostname
Specifies a hostname.
Add
Adds the host.
To modify the host-IP mapping, in the table row for the mapping, click the hostname to display controls you can use to modify the mapping. Complete the configuration as above.
Web/FTP Proxy
These configuration options are available:
Enable Web Proxy
Provides network and FTP proxy access to the SCC. Enables the SCC to use a proxy to contact the Riverbed Licensing Portal and fetch licenses in a secure environment. You can optionally require user credentials to communicate with the proxy, and you can specify the method used to authenticate and negotiate user credentials. Proxy access is disabled by default. RiOS supports these proxies: Squid, Blue Coat Proxy SG, Microsoft WebSense, and McAfee Web Gateway.
Web/FTP Proxy
Specifies the IP address for the web or FTP proxy. The IP address can be either IPv4 or IPv6. For IPv6, specify the address as eight 16-bit hexadecimal fields separated by colons (128 bits total). For example: 2001:38dc:0052:0000:0000:e9a4:00c5:6282
You don’t need to include leading zeros. For example: 2001:38dc:52:0:0:e9a4:c5:6282
You can replace consecutive zero strings with double colons (::). For example: 2001:38dc:52::e9a4:c5:6282
Port
Specifies the port for the web/FTP proxy.
Enable Authentication
Requires user credentials for use with network or FTP proxy traffic. Specify these items to authenticate the users:
• Username—Specify a username.
• Password—Specify a password.
• Authentication Type—Select an authentication method from the drop-down list:
– Basic—Authenticates user credentials by requesting a valid username and password. This is the default setting.
– NTLM—Authenticates user credentials based on an authentication challenge and response.
– Digest—Provides the same functionality as basic authentication; however, digest authentication improves security because the system sends the user credentials across the network as a Message Digest 5 (MD5) hash.
To modify server properties, in the table row for the server, click the server name to display controls you can use to modify the properties. Complete the configuration as above.
Proxy Whitelist
The proxy whitelist allows you to configure an exception list for optimizing HTTP traffic.
Add Domain
Specifies the domain to add to the proxy whitelist.
Remove Selected
Removes a domain from the proxy whitelist.
WCCP
WCCP Service Groups
These options are available to configure a WCCP service group:
Enable WCCP v2 Support
Enables WCCPv2 support on all groups added to the Service Group list.
Multicast TTL
Specifies the TTL boundary for the WCCP protocol packets. The default value is 16.
Under WCCP groups, these options are available to add, modify, or remove a service group.
Add a New Service Group
Displays the controls for adding a new service group.
Interface
Selects a SteelHead interface to participate in a WCCP service group. In virtual in-path configurations, all traffic flows in and out of one physical interface, and the default subnet side rule causes all traffic to appear to originate from the WAN side of the device.
RiOS allows multiple SteelHead interfaces to participate in WCCP on one or more routers for redundancy (RiOS 6.0 and earlier allow a single SteelHead interface). If one of the links goes down, the router can still send traffic to the other active links for optimization.
You must include an interface with the service group ID. More than one SteelHead in-path interface can participate in the same service group. For WCCP configuration examples, see the SteelHead Deployment Guide.
If multiple SteelHeads are used in the topology, they must be configured as neighbors.
RiOS 6.5 and later require connection forwarding in a WCCP cluster.
Service Group ID
Specifies a number from 0 to 255 to identify the service group on the router. A value of 0 specifies the standard HTTP service group. We recommend that you use WCCP service groups 61 and 62. The service group ID is local to the site where WCCP is used. The service group number is not sent across the WAN.
Protocol
Selects a traffic protocol from the drop-down list: TCP, UDP, or ICMP. The default value is TCP.
Password/Password Confirm
Assigns a password to the SteelHead interface. This password must be the same password that is on the router. WCCP requires that all routers in a service group have the same password. Passwords are limited to eight characters.
Priority
Specifies the WCCP priority for traffic redirection. If a connection matches multiple service groups on a router, the router chooses the service group with the highest priority. The range is 0 to 255. The default value is 200. The priority value must be consistent across all SteelHeads within a particular service group.
Weight
Specifies the percentage of connections that are redirected to a particular SteelHead interface, which is useful for traffic load balancing and failover support. The number of TCP, UDP, or ICMP connections a SteelHead supports determines its weight. The more connections a SteelHead model supports, the heavier the weight of that model. In RiOS 6.1 and later, you can modify the weight for each in-path interface to manually tune the proportion of traffic a SteelHead interface receives.
A higher weight redirects more traffic to that SteelHead interface. The ratio of traffic redirected to a SteelHead interface is equal to its weight divided by the sum of the weights of all the SteelHead interfaces in the same service group. For example, if there are two SteelHeads in a service group and one has a weight of 100 and the other has a weight of 200, the one with the weight 100 receives 1/3 of the traffic and the other receives 2/3 of the traffic.
However, since it’s generally undesirable for a SteelHead with two WCCP in-path interfaces to receive twice the proportion of traffic, for SteelHeads with multiple in-path interfaces connected, each interface’s weight is divided by the number of that SteelHead’s interfaces participating in the service group.
As an example, if there are two SteelHeads in a service group and one has a single interface with weight 100 and the other has two interfaces each with weight 200, the total weight will still equal 300 (100 + 200/2 + 200/2). The one with the weight 100 receives 1/3 of the traffic and each of the other's in-path interfaces receives 1/3 of the traffic.
The range is 0 to 65535. The default value corresponds to the number of TCP connections your SteelHead supports.
To enable single in-path failover support with WCCP groups, define the service group weight to be 0 on the backup SteelHead. If one SteelHead has a weight 0, but another one has a nonzero weight, the SteelHead with weight 0 doesn’t receive any redirected traffic. If all the SteelHeads have a weight 0, the traffic is redirected equally among them.
The best way to achieve multiple in-path failover support with WCCP groups is to use the same weight on all interfaces from a given SteelHead for a given service group. For example, suppose you have SteelHead A and SteelHead B with two in-path interfaces each. When you configure SteelHead A with weight 100 from both inpath0_0 and inpath0_1 and SteelHead B with weight 200 from both inpath0_0 and inpath0_1, RiOS distributes traffic to SteelHead A and SteelHead B in the ratio of 1:2 as long as at least one interface is up on both SteelHeads.
In a service group, if an interface with a nonzero weight fails, its weight transfers over to the weight 0 interface of the same service group.
For details on using the weight parameter to balance traffic loads and provide failover support in WCCP, see the SteelHead Deployment Guide.
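The weight arithmetic described above can be sketched as a short calculation. This is an illustration only (the router, not this code, performs the actual bucket allocation), using the two-SteelHead example from this section:

```python
from collections import Counter

# (SteelHead, interface, configured weight) for one service group.
interfaces = [
    ("A", "inpath0_0", 100),   # SteelHead A: one participating interface
    ("B", "inpath0_0", 200),   # SteelHead B: two participating interfaces,
    ("B", "inpath0_1", 200),   #   each configured with weight 200
]

# Number of participating interfaces per SteelHead.
per_sh = Counter(sh for sh, _, _ in interfaces)

# Each interface's weight is divided by its SteelHead's interface count,
# so a two-interface SteelHead doesn't receive double its share.
effective = {(sh, ifc): w / per_sh[sh] for sh, ifc, w in interfaces}
total = sum(effective.values())  # 100 + 100 + 100 = 300

for key, w in sorted(effective.items()):
    print(key, f"share = {w / total:.3f}")  # 1/3 each, matching the text
```

With all three effective weights equal to 100, each interface receives one third of the redirected traffic, exactly as the example in this section states.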
Encapsulation Scheme
Specifies the method for transmitting packets between a router or a switch and a SteelHead interface. Select one of these encapsulation schemes from the drop-down list:
• Either uses Layer 2 first; if Layer 2 is not supported, GRE is used. This is the default value.
• GRE uses Generic Routing Encapsulation. The GRE encapsulation method appends a GRE header to a packet before it’s forwarded. This method can cause fragmentation and imposes a performance penalty on the router and switch, especially during GRE packet de-encapsulation. This performance penalty can be too great for production deployments.
• L2 uses Layer-2 redirection. The L2 method is generally preferred from a performance standpoint because it requires fewer resources from the router or switch than the GRE does. The L2 method modifies only the destination Ethernet address. However, not all combinations of Cisco hardware and IOS revisions support the L2 method. Also, the L2 method requires the absence of L3 hops between the router or switch and the SteelHead.
Assignment Scheme
Determines which SteelHead interface in a WCCP service group the router or switch selects to redirect traffic to for each connection. The assignment scheme also determines whether the SteelHead interface or the router processes the first traffic packet. The optimal assignment scheme achieves both load balancing and failover support. Select one of these schemes from the drop-down list:
• Either uses Hash assignment unless the router doesn’t support it. When the router doesn’t support Hash, it uses Mask. This is the default setting.
• Hash redirects traffic based on a hashing scheme and the Weight of the SteelHead interface, providing load balancing and failover support. This scheme uses the CPU to process the first packet of each connection, resulting in slightly lower performance. However, this method generally achieves better load distribution. We recommend Hash assignment for most SteelHeads if the router supports it. The Cisco switches that don’t support Hash assignment are the 3750, 4000, and 4500 series, among others.
Your hashing scheme can be a combination of the source IP address, destination IP address, source port, or destination port.
• Mask redirects traffic operations to the SteelHeads, significantly reducing the load on the redirecting router. Mask assignment processes the first packet in the router hardware, using fewer CPU cycles and resulting in better performance.
Mask assignment supports load-balancing across multiple active SteelHead interfaces in the same service group.
The default mask scheme uses an IP address mask of 0x1741, which is applicable in most situations. However, you can change the IP mask by clicking the service group ID and changing the service group settings and flags.
In multiple-SteelHead environments, it’s often desirable to send all users in a subnet range to the same SteelHead. Using a mask provides a basic ability to map a branch subnet to the same SteelHead in a WCCP cluster.
For details and best practices for using assignment schemes, see the SteelHead Deployment Guide.
If you use mask assignment you must ensure that packets on every connection and in both directions (client-to-server and server-to-client), are redirected to the same SteelHead. For details, see the SteelHead Deployment Guide.
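The default mask 0x1741 sets six address bits, which (assuming the usual WCCPv2 behavior of one redirection bucket per masked-bit combination) yields up to 64 buckets. This hypothetical sketch, not product code, shows how a bucket key can be derived from a source IP:

```python
import ipaddress

MASK = 0x1741  # default service group source IP mask

def bucket_key(ip: str, mask: int = MASK) -> int:
    # The router ANDs the source IP with the mask; each distinct result
    # selects a bucket assigned to one SteelHead interface. (Sketch only.)
    return int(ipaddress.IPv4Address(ip)) & mask

bits = bin(MASK).count("1")
print(f"0x{MASK:04x} sets {bits} bits -> up to {2 ** bits} buckets")

# Two hosts in the same /24 can still map to different buckets because
# the default mask includes low-order host bits:
print(hex(bucket_key("10.1.5.20")))  # 0x500
print(hex(bucket_key("10.1.5.21")))  # 0x501
```

This is why a custom mask can help pin a whole branch subnet to one SteelHead: a mask that sets only network bits makes every host in that subnet produce the same bucket key.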
Source Mask
• IP Mask specifies the service group source IP mask. The default value is 0x1741.
• Port Mask specifies the service group source port mask.
• IP Hash specifies that the router hash the source IP address to determine traffic to redirect.
• Port Hash specifies that the router hash the source port to determine traffic to redirect.
Destination Mask
• IP Mask specifies the service group destination IP mask.
• Port Mask specifies the service group destination port mask.
• IP Hash specifies that the router hash the destination IP address to determine traffic to redirect.
• Port Hash specifies that the router hash the destination port to determine traffic to redirect.
Source Hash
Specifies the source IP hash and/or source port hash.
Destination Hash
Specifies the destination IP hash and/or destination port hash.
Ports Mode
• Ports Disabled disables the ports.
• Use Source Ports indicates the router determines traffic to redirect based on source ports.
• Use Destination Ports indicates the router determines traffic to redirect based on destination ports.
Ports
Specifies a comma-separated list of up to seven ports that the router will redirect. Use this option only after selecting either the Use Source Ports or the Use Destination Ports mode.
Router IP Address(es)
Specifies a multicast group IP address or a unicast router IP address. You can specify up to 32 routers.
Hardware Assist Rules
You configure hardware assist rules in the Hardware Assist Rules page.
This feature only appears on an appliance equipped with compatible NICs.
Appliances equipped with one or more of the following NICs can use hardware assist passthrough (HAP) rules:
• Two-Port LR4 Fiber 40 Gigabit-Ethernet PCI-E
• Two-Port SR4 Fiber 40 Gigabit-Ethernet PCI-E
• Four-Port SR Multimode Fiber 10 Gigabit-Ethernet PCI-E
• Two-Port LR Single Mode Fiber 10 Gigabit-Ethernet PCI-E
• Two-Port SR Multimode Fiber 10 Gigabit-Ethernet PCI-E
When using IPv6 only or a combination of IPv6 and IPv4, the maximum number of rules is 120. With IPv4 traffic only, a maximum of 240 HAP rules can be configured.
Hardware-assist rules can automatically bypass all UDP (User Datagram Protocol) connections. You can also configure rules to bypass specific TCP (Transmission Control Protocol) connections. Automatically bypassing these connections decreases the workload on the local SteelHeads because the traffic is immediately sent to the kernel of the host machine, or out of the other interface, before the SteelHead receives it.
To be safe, change hardware-assist rules only during a maintenance window, or during light traffic and with a full understanding of the implications. For details, go to Knowledge Base article
S12992.
For a hardware-assist rule to be applied to a specific 10G bypass card, the corresponding in-path interface must be enabled and have an IP address.
For more details about Hardware Assist Rules, see the SteelHead User Guide.
Hardware Assist Rules Settings
These configuration options are available:
Enable Hardware Passthrough of All UDP Traffic
Automatically passes through all UDP traffic. This option applies only to UDP; it doesn’t pass through any TCP traffic.
Enable Hardware Passthrough of TCP Traffic Defined in the Rules Below
Passes through TCP traffic based on the configured rules. The next section describes how to set up hardware assist rules. All hardware assist rules are ignored unless this check box is selected; if it’s cleared, no TCP traffic is passed through.
TCP hardware assist rules
These configuration options are available:
Add a New Rule
Displays the controls for adding a new rule.
Type
Specifies one of these rule types:
• Accept—Accepts rules matching the Subnet A or Subnet B IP address and mask pattern for the optimized connection.
• Pass-Through—Identifies traffic to be passed through the network unoptimized.
Position
Determines the order in which the system evaluates the rule. Select start, end, or a rule number from the drop-down list. The system evaluates rules in numerical order starting with rule 1. If the conditions set in a rule match, the rule is applied and no further rules are consulted; if they don’t match, the system moves on to the next rule. For example, if the conditions of rule 1 don’t match, rule 2 is consulted; if rule 2 matches, it is applied, and no further rules are consulted. In general, filter traffic that’s to be unoptimized, discarded, or denied before processing rules for traffic that’s to be optimized.
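The first-match evaluation order described above can be sketched as follows; the rule list and predicates are hypothetical stand-ins for the configured Subnet A/Subnet B patterns:

```python
def evaluate(rules, conn):
    """Return the name of the first matching rule; first match wins."""
    for name, matches in rules:   # rules are consulted in numerical order
        if matches(conn):
            return name           # applied; no further rules are consulted
    return "default"              # no configured rule matched

# Pass-through filtering placed before the catch-all accept rule, as the
# text recommends.
rules = [
    ("pass-through-branch", lambda c: c["subnet"] == "10.1.0.0/16"),
    ("accept-all", lambda c: True),
]

print(evaluate(rules, {"subnet": "10.1.0.0/16"}))     # pass-through-branch
print(evaluate(rules, {"subnet": "192.168.0.0/24"}))  # accept-all
```

Ordering pass-through rules first ensures traffic that should not be optimized never reaches the broader accept rule.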
Subnet A
Specifies an IP address and mask for a subnet that, paired with Subnet B, can be either the source or the destination. Use this format: xxx.xxx.xxx.xxx/xx. You can specify all or 0.0.0.0/0 as the wildcard for all traffic.
Subnet B
Specifies an IP address and mask for a subnet that, paired with Subnet A, can be either the source or the destination. Use this format: xxx.xxx.xxx.xxx/xx. You can specify all or 0.0.0.0/0 as the wildcard for all traffic.
VLAN Tag ID
Specifies a numeric VLAN tag identification number. Select all to specify the rule applies to all VLANs. Select untagged to specify the rule applies to nontagged connections.
Pass-through traffic maintains any preexisting VLAN tagging between the LAN and WAN interfaces.
To complete the implementation of VLAN tagging, you must set the VLAN tag IDs for the in-path interfaces that the appliance uses to communicate with other appliances.
Description
Includes a description of the rule.
Add
Adds the new hardware assist rules to the list.
• RiOS applies the same rule to both LAN and WAN interfaces.
• Every 10-G card has the same rule set.
The appliance refreshes the Hardware Assist Rules table and applies your modifications to the running configuration, which is stored in memory.
Simplified routing
Simplified routing collects the next-hop MAC address for each destination IP address from the packets it receives, and uses these mappings to address traffic. With simplified routing, you can use either the WAN-side or LAN-side device as a default gateway. The SteelHead learns the right gateway to use by watching where the switch or router sends the traffic, and associating the next-hop Ethernet addresses with IP addresses. Enabling simplified routing eliminates the need to add static routes when the SteelHead is in a different subnet from the client and the server.
Without simplified routing, if a SteelHead is installed in a different subnet from the client or server, you must define one router as the default gateway and static routes for the other routers so that traffic isn’t redirected back through the SteelHead. In some cases, even with the static routes defined, the ACL on the default gateway can still drop traffic that should have gone through the other router. Enabling simplified routing eliminates this issue.
Simplified routing has these constraints:
• WCCP can’t be enabled.
• The default route must exist on each SteelHead in your network.
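Conceptually, the learning behavior amounts to maintaining a destination-IP-to-next-hop-MAC table. This sketch is illustrative only; the real mechanism lives in the RiOS data path:

```python
# Learned mappings: destination IP -> MAC of the gateway the adjacent
# switch/router was actually observed using for that destination.
mappings = {}

def observe(dst_ip, next_hop_mac):
    """Record the next-hop MAC seen in a forwarded packet (sketch)."""
    mappings[dst_ip] = next_hop_mac

def next_hop(dst_ip, default_gw_mac):
    # Use the learned gateway when one exists; otherwise fall back to
    # the configured default gateway.
    return mappings.get(dst_ip, default_gw_mac)

observe("10.2.0.9", "00:11:22:33:44:55")
print(next_hop("10.2.0.9", "aa:bb:cc:dd:ee:ff"))  # learned MAC
print(next_hop("10.3.0.1", "aa:bb:cc:dd:ee:ff"))  # default gateway MAC
```

The fallback path is why the constraint above exists: a default route must be present on each SteelHead so unlearned destinations still have somewhere to go.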
Mapping data collection setting
This configuration option is available:
Collect Mappings From
Specifies one of these options from the drop-down list:
• None—Don’t collect mappings.
• Destination Only—Collects destination MAC data. Use this option in connection-forwarding deployments. This is the default setting.
• Destination and Source—Collects mappings from destination and source MAC data. Use this option in connection-forwarding deployments.
• All—Collects mappings for destination, source, and inner MAC data. Also collects data for connections that are un-NATed (that is, connections that aren’t translated using NAT).
Asymmetric routing
You enable asymmetric route detection for the selected optimization policy in the Asymmetric Routing page.
Asymmetric route detection automatically detects and reports asymmetric routing conditions and caches this information to avoid losing connectivity between a client and a server.
Asymmetric routing occurs when a packet travels from a source to a destination along one path and returns to the source along a different path. This is commonly seen in Layer-3 routed networks. Asymmetric routing is common within most networks; the larger the network, the more likely there is asymmetric routing in the network.
For details about asymmetric routing types, see the SteelHead User Guide.
You can also use the SteelHead CLI to detect and analyze asymmetric routes. For details, see the Riverbed Command-Line Interface Reference Guide or the SteelHead Deployment Guide.
Asymmetric routing settings
These configuration options are available:
Enable Asymmetric Routing Detection
Detects asymmetric routes in your network. This feature is enabled by default.
Enable Asymmetric Routing Pass-Through
Enables pass-through traffic if asymmetric routing is detected. This feature is enabled by default.
If asymmetric routing is detected, the pair of IP addresses, defined by the client and server addresses of this connection, is cached on the appliance. Further connections between these hosts are passed through unoptimized until that particular asymmetric routing cache entry times out.
Detecting and caching asymmetric routes doesn’t optimize those packets. If you want to optimize asymmetrically routed packets, you must make sure that packets going to the WAN always traverse an appliance, either by using a multiport appliance, connection forwarding, or an external redirection mechanism such as WCCP or PBR.
For detailed information, see the SteelHead Deployment Guide.
Flow statistics
You enable flow statistics settings in the Flow Statistics page. You can also enable flow export to an external collector and to a SteelFlow collector. SteelFlow collectors can aggregate information about QoS configuration and other application statistics to send to a SteelCentral NetProfiler. The Enterprise NetProfiler summarizes and displays the QoS configuration statistics.
By default, flow export is disabled.
External collectors use information about network data flows to report trends, such as the top users, peak usage times, traffic accounting, security, and traffic routing. You can export preoptimization and postoptimization data to an external collector.
The Top Talkers feature enables a report that details the hosts, applications, and host and application pairs that are either sending or receiving the most data on the network. Top Talkers doesn’t use a NetFlow Collector.
For details about flow statistics deployments, see the SteelHead User Guide.
Flow statistics settings
These configuration options are available:
Enable Application Visibility
Continuously collects detailed application-level statistics for both pass-through and optimized traffic. The Application Visibility and Application Statistics reports display these statistics. This statistic collection is disabled by default.
Enable WAN Throughput Statistics
Continuously collects WAN throughput statistics. This statistic collection is enabled by default; however, you can disable the collection to save processing power.
Enable Top Talkers
Continuously collects statistics for the most active traffic flows. A traffic flow consists of data sent and received from a single source IP address and port number to a single destination IP address and port number over the same protocol.
The most active, heaviest users of WAN bandwidth are called the Top Talkers. A flow collector identifies the top consumers of the available WAN capacity (the top 50 by default) and displays them in the Top Talkers report. Collecting statistics on the Top Talkers provides visibility into WAN traffic without applying an in-path rule to enable a WAN visibility mode.
You can analyze the Top Talkers for accounting, security, troubleshooting, and capacity planning purposes. You can also export the complete list in CSV format.
The collector gathers statistics on the Top Talkers based on the proportion of WAN bandwidth consumed by the top hosts, applications, and host and application pair conversations. The statistics track pass-through or optimized traffic, or both. Data includes TCP or UDP traffic, or both (configurable in the Top Talkers report page).
A NetFlow collector isn’t required for this feature.
You can set the Active Flow Timeout even if the option is enabled.
Optionally, select a time period to adjust the collection interval:
• 24-hour Report Period—For a five-minute granularity (the default setting).
• 48-hour Report Period—For a ten-minute granularity.
The system also uses the time period to collect SNMP top talker statistics. For top talkers displayed in the Top Talker report and SNMP top talker statistics, the system updates the top talker data ranks either every 300 seconds (for a 24-hour reporting period), or 600 seconds (for a 48-hour reporting period).
The system saves a maximum of 300 top talker data snapshots, and aggregates these to calculate the top talkers for the 24-hour or 48-hour reporting period.
The system never clears top talker data at the time of polling; however, every 300 or 600 seconds, it replaces the oldest top talker data snapshot of the 300 with the new data snapshot.
After you change the reporting period, it takes the system one day to update the top talker rankings to reflect the new reporting period. In the interim, the data used to calculate the top talkers still includes data snapshots from the original reporting period. This delay applies to Top Talker report queries and SNMP top talker statistics.
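As a quick consistency check of the figures above, both reporting periods need the same number of snapshots, which fits within the 300-snapshot buffer:

```python
BUFFER = 300  # maximum retained top-talker snapshots

needed = {}
for period_hours, interval_sec in ((24, 300), (48, 600)):
    # Snapshots required to cover one full reporting period.
    needed[period_hours] = period_hours * 3600 // interval_sec
    print(f"{period_hours}h at {interval_sec}s -> "
          f"{needed[period_hours]} snapshots")

# Both periods need 288 snapshots, so either fits in the 300-slot buffer.
assert all(n <= BUFFER for n in needed.values())
```

Doubling the period while doubling the interval keeps the snapshot count constant, which is why one fixed-size buffer serves both reporting periods.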
Flow export settings
These configuration options are available:
Enable Flow Export
Enables the SteelHead to export network statistics about the individual flows that it sees as they traverse the network. By default, this setting is disabled.
Export QoS and Application Statistics to SteelFlow Collectors
Sends application-level statistics from all sites to a SteelFlow collector on a Cascade appliance. Cascade appliances provide central reporting capabilities. The collector aggregates QoS and application statistics to provide visibility using detailed records specific to flows traversing the SteelHead.
The SteelHead sends the SteelCentral an enhanced version of NetFlow called SteelFlow. SteelFlow includes:
• NetFlow 9 extensions for round-trip time measurements that enable you to understand volumes of traffic across your WAN and end-to-end response time.
• Extensions that enable a SteelCentral NetExpress to properly measure and report on the benefits of optimization.
After the statistics are aggregated on a Cascade appliance, you can use its central reporting capabilities to:
• Analyze overall WAN use, such as traffic generated by application, most active sites, and so on.
• Troubleshoot a particular application by viewing how much bandwidth it received, checking for any retransmissions, interference from other applications, and so on.
• Compare actual application use against your outbound QoS policy configuration to analyze whether your policies are effective. For example, if your QoS policy determines that Citrix should get a minimum of 10 percent of the link, and the application statistics reveal that Citrix performance is unreliable and always stuck at 10 percent, you may want to increase that minimum guarantee.
The SteelFlow Collector collects read-only statistics on both pass-through and optimized traffic. When you use SteelFlow, the SteelHead sends four flow records for each optimized TCP session: ingress and egress for the inner channel connection, and ingress and egress for the outer channel. A pass-through connection still sends four flow records even though there are no separate inner and outer channel connections. In either case, the SteelCentral NetExpress merges these flow records together with flow data collected for the same flow from other devices.
For details, see the Riverbed Network Performance Management Deployment Guide.
Enable IPv6
Enables export of IPv6 network statistics of individual flows that traverse your network. The IPv6 network statistics can be collected only by these collectors:
• NetFlow 9
• SteelFlow 9.1/SteelFlow
By default, this setting is disabled.
Active Flow Timeout
Specifies the amount of time, in seconds, the collector retains the list of active traffic flows. The default value is 1800 seconds.
Inactive Flow Timeout
Specifies the amount of time, in seconds, the collector retains the list of inactive traffic flows. The default value is 15 seconds.
Enable interfaces
This configuration option is available:
lan/wanX_X
Specifies the interfaces to include when adding new flow collectors.
Flow collectors
These configuration options are available:
Add a New Flow Collector
Displays the controls to add a flow collector.
Collector Hostname or IP Address
Specifies the IP address or hostname for the flow collector. A NetFlow collector isn’t required for this feature.
Port
Specifies the UDP port the flow collector is listening on. The default value is 2055.
Version
Specifies one of these versions from the drop-down list:
• SteelFlow—Use with Cascade Profiler 8.4 or later.
• SteelFlow-compatible—Use with Cascade Profiler 8.3.2 or earlier, and select the LAN Address check box.
• NetFlow V5—Enables ingress flow records.
• NetFlow V9—Enables both ingress and egress flow records.
For details about using NetFlow records with Cascade, see the Riverbed Network Performance Management Deployment Guide.
SteelFlow and SteelFlow-compatible are enhanced versions of flow statistics to the SteelCentral. These versions allow automatic discovery and interface grouping for SteelHeads in a SteelCentral NetProfiler or a Flow Gateway and support WAN and optimization reports in SCC. For details, see the Riverbed NetProfiler and NetExpress User Guide and the Riverbed Flow Gateway User Guide.
Packet Source Interface
Specifies the interface to use as the source IP address of the flow packets (Primary, Aux, or MIP) from the drop-down list. NetFlow records sent from the SteelHead appear to be sent from the IP address of the selected interface.
LAN Address
Causes the TCP/IP addresses and ports reported for optimized flows to contain the original client and server IP addresses and not those of the SteelHead. The default setting displays the IP addresses of the original client and server without the IP address of the SteelHeads.
This setting is unavailable with NetFlow 9, because the optimized flows are always sent out with both the original client and server IP addresses and the IP addresses used by the SteelHead.
Capture Interface/Type
Specifies the traffic type to export to the flow collector. Select one of these types from the drop-down list:
• All—Exports both optimized and nonoptimized traffic.
• Optimized—Exports optimized traffic.
• Optimized—When WCCP is enabled, exports optimized LAN or WAN traffic.
• Passthrough—Exports pass-through traffic.
• None—Disables traffic flow statistics.
The default is All for LAN and WAN interfaces, for all four collectors. The default for the other interfaces (Primary, rios_lan, and rios_wan) is None. You can’t select a MIP interface.
Enable Filter
(SteelFlow and NetFlow 9 only.) Filters flow reports by IP and subnets or IP:ports included in the Filter list. When disabled, reports include all IP addresses and subnets.
Filter
(SteelFlow and NetFlow 9 only.) Specifies the IP and subnet or IP:port to include in the report, one entry per line, up to 25 filters maximum.
(NetFlow 9 and SteelFlow 9.1/SteelFlow only.) In SCC 9.8, you can also filter by IPv6 addresses.
If you enter a single IPv6 address, use this format: [IPv6-address]:port-number
Enter the IP address using this format: eight 16-bit hexadecimal strings separated by colons, 128-bits.
For example: [2001:38dc:0052:0000:0000:e9a4:00c5:6282]:2055
You don’t need to include leading zeros. For example: [2001:38dc:52:0:0:e9a4:c5:6282]:2055
You can replace consecutive zero strings with double colons (::). For example: [2001:38dc:52::e9a4:c5:6282]:2055
If you are entering a subnet or a group of IPv6 addresses, use this format: IPv6-address/prefix
Enter the IP address using this format: eight 16-bit hexadecimal strings separated by colons, 128-bits.
For example: 2001:38dc:0052:0000:0000:e9a4:00c5:6282
You don’t need to include leading zeros. For example: 2001:38dc:52:0:0:e9a4:c5:6282
You can replace consecutive zero strings with double colons (::). For example: 2001:38dc:52::e9a4:c5:6282
Enter the prefix. The prefix length is 0 to 128, separated from the address by a forward slash (/). In this example, 60 is the prefix: 2001:38dc:52::e9a4:c5:6282/60
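The shortening rules above are the standard IPv6 text-representation rules, so you can sanity-check an entry before adding it to the Filter list. This Python sketch is illustrative only and isn't part of the product:

```python
import ipaddress

# Full form: eight 16-bit hexadecimal groups separated by colons.
full = "2001:38dc:0052:0000:0000:e9a4:00c5:6282"

# The ipaddress module applies the same shortening rules the text
# describes: leading zeros dropped, one run of zero groups collapsed
# into double colons (::).
addr = ipaddress.IPv6Address(full)
print(addr.compressed)            # 2001:38dc:52::e9a4:c5:6282

# A filter entry combining a bracketed address and a collector port:
port = 2055
entry = f"[{addr.compressed}]:{port}"
print(entry)                      # [2001:38dc:52::e9a4:c5:6282]:2055

# A subnet written as IPv6-address/prefix (prefix length 0 to 128).
# strict=False permits nonzero host bits, as in the example above.
net = ipaddress.IPv6Network("2001:38dc:52::e9a4:c5:6282/60", strict=False)
print(net.prefixlen)              # 60
```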
Add
Adds the collector to the Collector list.
Outbound QoS (basic)
We recommend that you migrate legacy QoS profiles to QoS 9.0 or later. Advanced and basic QoS profiles have policy push restrictions. You can’t push legacy QoS profiles to SteelHeads running 8.0 or 9.0 and later. For detailed information about migrating to QoS 9.0 or later, see
Migrating legacy QoS policies or the
SteelHead User Guide.
Basic QoS simplifies QoS configuration by accurately identifying business applications and classifying traffic according to priorities. The SteelHead uses this information to control the amount of WAN resources that each application can use. This ensures that your important applications are prioritized and removes the guesswork from protecting performance of key applications. In addition, basic QoS prevents recreational applications from interfering with business applications.
Basic QoS comes with a predefined set of six classes, a list of global applications, and a predefined set of profiles. All interfaces have the same link rate.
Basic QoS includes a default site that’s tied to the predefined service profile Medium Office. The bandwidth for the default site is automatically set to the same bandwidth as the interface's WAN throughput value. You can edit the bandwidth for the default site but you can’t edit the subnet.
You can’t add or delete classes in basic QoS. For details about Basic QoS, see the SteelHead User Guide.
QoS settings
These configuration options are available:
Enable QoS Shaping
Enables QoS classification to control the prioritization of different types of network traffic and to ensure that the SteelHead gives certain network traffic (for example, Voice over IP) higher priority than other network traffic. Traffic isn’t classified until at least one WAN interface is enabled. To disable QoS, clear this check box and restart the optimization service.
WAN Bandwidth (kbps)
Specifies the interface bandwidth link rate in kilobits per second. The link rate is the bottleneck WAN bandwidth, not the interface speed out of the WAN interface into the router or switch. As an example, if your SteelHead connects to a router with a 100-Mbps link, don’t specify this value—specify the actual WAN bandwidth (for example, T1 or T3). Different WAN interfaces can have different WAN bandwidths; you must enter the bandwidth link rate correctly for QoS to function properly.
Enable QoS on <interface>
Specifies a WAN interface <X-Y> to enable.
Enable Local WAN Oversubscription
Allows the sum of remote site bandwidths to exceed the WAN uplink speed. Bandwidth oversubscription shares the bandwidth fairly when the network includes remote site bandwidths that collectively exceed the available bandwidth of the local WAN uplink interface speed. The link sharing provides bandwidth guarantees when some of the sites are partially or fully inactive.
As an example, your data center uplink can be 45 Mbps with three remote office sites each with 20 Mbps uplinks.
When disabled, you can only allocate bandwidth for the remote sites such that the total bandwidth doesn’t exceed the bandwidth of any of the interfaces on which QoS is enabled.
Enabling this option can degrade latency guarantees when the remote sites are fully active.
Enable QoS Marking
Identifies traffic using marking values. You can mark traffic using header parameters, such as VLAN, DSCP, and protocols. In RiOS 7.0, you can also use Layer-7 protocol information through AppFlow Engine (AFE) inspection to apply DSCP marking values to traffic flows.
In RiOS 7.0 and later, the DSCP or IP TOS marking has only local significance. This means you can set the DSCP or IP TOS values on the server-side appliance to values different from those set on the client-side appliance.
Global DSCP
Specifies a DSCP value from 0 to 63, or No Setting. If your existing network provides multiple classes of service based on DSCP values, and you’re integrating a SteelHead into your environment, you can use the Global DSCP feature to prevent dropped packets and other undesired effects.
Sites
These configuration options are available:
Add Site
Displays the controls to define a remote site.
Name
Specifies the site name. The site name can contain spaces.
Position
Specifies Start, End, or the rule number from the drop-down list.
Appliances evaluate rules in numerical order starting with rule 1. If the conditions set in the rule match, then the rule is applied, and the system moves on to the next packet. If the conditions set in the rule don’t match, the system consults the next rule. For example, if the conditions of rule 1 don’t match, rule 2 is consulted. If rule 2 matches the conditions, it is applied, and no further rules are consulted. The default site, which is tied to the Medium Office policy, can’t be removed and is always listed last.
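The first-match evaluation order described above can be sketched as follows. The rule conditions and packet representation here are made up for illustration; the appliance's real matching criteria (subnets, ports, VLANs, and so on) are far richer:

```python
# Minimal first-match evaluation sketch. Not product code; "site",
# "match", and the packet dict are hypothetical names.
def classify(packet, rules, default_site="Default (Medium Office)"):
    for rule in rules:                      # evaluated in numerical order
        if rule["match"](packet):
            return rule["site"]             # first match wins; stop here
    return default_site                     # the default site is always last

rules = [
    {"site": "New York", "match": lambda p: p["dst"].startswith("10.1.")},
    {"site": "London",   "match": lambda p: p["dst"].startswith("10.2.")},
]

print(classify({"dst": "10.2.0.5"}, rules))   # London
print(classify({"dst": "192.0.2.9"}, rules))  # Default (Medium Office)
```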
Subnet
Specifies a maximum of five destination subnets that represent individual sites. You can’t edit the subnet for the default site.
Remote Link Bandwidth
Specifies the maximum WAN bandwidth in kilobits per second.
Service Policy
Specifies a service policy from the drop-down list. The default policy is Large Office.
Service Class
Specifies a service class for the application from the drop-down list (highest priority to lowest):
• Realtime—Specifies real-time traffic class. Give this value to your highest priority traffic: for example, VoIP or video conferences.
• Interactive—Specifies an interactive traffic class: for example, Citrix, RDP, Telnet, and SSH.
• Business Critical—Specifies the high priority traffic class: for example, Thick Client Applications, ERPs, and CRMs.
• Normal Priority—Specifies a normal priority traffic class: for example, internet browsing, file sharing, and email.
• Low Priority—Specifies a low priority traffic class: for example, FTP, backup, other high-throughput data transfers, and recreational applications such as audio file sharing.
• Best Effort—Specifies the lowest priority.
These are minimum service class guarantees; if better service is available, it is provided: for example, if a class is specified as low priority and the higher priority classes aren’t active, then the low priority class receives the highest possible available priority for the current traffic conditions. This parameter controls the priority of the class relative to the other classes.
The service class describes only the delay sensitivity of a class, not how much bandwidth it is allocated, nor how important the traffic is compared to other classes. Typically, you configure low priority for high-throughput applications that aren’t sensitive to packet delay, such as FTP and backup.
DSCP
Specifies a DSCP value from 0 to 63, Reflect, or Inherit from Service Class for site traffic that doesn’t match any application.
Path
Specifies the default paths for site traffic that doesn’t match any application.
Relay traffic from the interface normally
Sends traffic unmodified out of the WAN side of whichever in-path it came in on. This is the default setting.
Drop traffic
Drops packets in case of failure of all three (primary, secondary, tertiary) paths. Select this option when you don’t want the traffic to pass on any of the uplinks specified in the rule, not just the primary.
Add
Adds the site to the list. The SCC redisplays the Sites table and applies your modifications to the running configuration, which is stored in memory. This button is dimmed and unavailable until you enter the WAN bandwidth.
Applications
These configuration options are available:
Add an Application
Displays the controls to define an application.
Name
Specifies the name.
Description
Specifies a description.
Position
Specifies a position from 1 to 21 or End for the lowest position.
For traffic with the following characteristics:
Local Subnet or Host Label
Specifies an IP address and mask for the traffic source, or you can specify all or 0.0.0.0/0 as the wildcard for all traffic.
Use this format: xxx.xxx.xxx.xxx/xx
-or-
Specify a host label.
Port or Port Label
Specifies all source ports, a single source port value, or a port range of port1-port2, where port1 must be less than port2. The default setting is all ports.
-or-
Specify a port label.
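The port-specification rules above (all, a single port, or port1-port2 with port1 less than port2) can be sketched with a small validator. This is an illustrative helper only, not part of the product:

```python
def parse_port_spec(spec):
    """Validate 'all', a single port, or 'port1-port2' (port1 < port2).

    Hypothetical helper mirroring the rules stated in the text.
    Returns an inclusive (low, high) tuple.
    """
    if spec == "all":
        return (1, 65535)
    if "-" in spec:
        low, high = (int(p) for p in spec.split("-", 1))
        if not low < high:
            raise ValueError("port1 must be less than port2")
        return (low, high)
    port = int(spec)          # a single port becomes a one-port range
    return (port, port)

print(parse_port_spec("all"))        # (1, 65535)
print(parse_port_spec("8080"))       # (8080, 8080)
print(parse_port_spec("5000-5010"))  # (5000, 5010)
```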
Remote Subnet or Host Label
Specifies an IP address and mask pattern for the traffic destination, or you can specify all or 0.0.0.0/0 as the wildcard for all traffic.
Use this format: xxx.xxx.xxx.xxx/xx
-or-
Specify a host label.
Protocol
Specifies All, TCP, UDP, GRE, ICMP, IPsec AH, IPsec ESP, or a protocol number from the drop-down list. The default setting is All.
VLAN Tag ID
Specifies a VLAN tag as follows:
• Specify a numeric VLAN tag identification number from 0 to 4094.
• Specify all to specify the rule applies to all VLANs.
• Specify none to specify the rule applies to untagged connections.
RiOS supports VLAN 802.1Q. To configure VLAN tagging, configure transport rules to apply to all VLANs or to a specific VLAN. By default, rules apply to all VLAN values unless you specify a particular VLAN ID. Pass-through traffic maintains any preexisting VLAN tagging between the LAN and WAN interfaces.
DSCP
Specifies a DSCP value from 0 to 63, or all to use all DSCP values.
Traffic Type
Specifies Optimized, Passthrough, or All from the drop-down list. The default setting is All.
Application
Specifies an application from the drop-down list of global applications. To narrow the search, type the first characters in the application name. You can define and add any applications that don’t appear in the list.
Selecting HTTP expands the control to include the Domain Name and Relative Path controls. Enter the domain name and relative path. The relative path is the part of the URL that follows the domain name.
To facilitate configuration, you can use wildcards in the name and relative path controls: for example, *.akamaitechnologies.com will match Anything.akamaitechnologies.com. Examples:
a.akamaitechnologies.com
a.b.akamaitechnologies.com
a.b.c.akamaitechnologies.com
a.b.c.d.akamaitechnologies.com
Using more than one wildcard (for example, *.*.akamaitechnologies.com) will match Anything.Anything.akamaitechnologies.com. You must include the second period (.). Examples:
a.b.akamaitechnologies.com
a.b.c.akamaitechnologies.com
a.b.c.d.akamaitechnologies.com
But not a.akamaitechnologies.com.
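These wildcards follow shell-style semantics, where * can span multiple labels but each literal dot in the pattern must be matched. Python's fnmatch uses the same semantics, so the examples above can be checked like this (illustrative only; not how the appliance itself evaluates patterns):

```python
from fnmatch import fnmatch

# One wildcard matches one or more labels before the domain:
assert fnmatch("a.akamaitechnologies.com", "*.akamaitechnologies.com")
assert fnmatch("a.b.c.akamaitechnologies.com", "*.akamaitechnologies.com")

# Two wildcards require at least two labels before the domain, because
# the second literal dot must also be matched, so the single-label name
# no longer matches:
assert fnmatch("a.b.akamaitechnologies.com", "*.*.akamaitechnologies.com")
assert not fnmatch("a.akamaitechnologies.com", "*.*.akamaitechnologies.com")
print("wildcard examples behave as documented")
```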
Selecting SSL expands the control to allow classification of pass-through SSL traffic matching the TLS/SSL server common name. In the Common Name control, specify the common name of a certificate.
To facilitate configuration, you can use wildcards in the name: for example, *.nbttech.com. If you have three origin servers using different certificates such as webmail.nbttech.com, internal.nbttech.com, and marketingweb.nbttech.com, on the server-side SteelHeads, all three server configurations can use the same certificate name *.nbttech.com.
You can’t classify SSL optimized traffic using the Common Name control. Instead, you can create a QoS HTTP rule to match the domain and server name.
Apply these QoS Settings:
Service Class
Indicates how delay-sensitive a traffic class is to the QoS scheduler. Select a service class for the application from the drop-down list (highest priority to lowest):
• Realtime—Specifies real-time traffic class. Give this value to your highest priority traffic: for example, VoIP or video conferences.
• Interactive—Specifies an interactive traffic class: for example, Citrix, RDP, Telnet, and SSH.
• Business Critical—Specifies the high priority traffic class: for example, Thick Client Applications, ERPs, and CRMs.
• Normal Priority—Specifies a normal priority traffic class: for example, internet browsing, file sharing, and email.
• Low Priority—Specifies a low priority traffic class: for example, FTP, backup, other high-throughput data transfers, and recreational applications such as audio file sharing.
• Best Effort—Specifies the lowest priority.
These are minimum service class guarantees; if better service is available, it is provided: for example, if a class is specified as low priority and the higher priority classes aren’t active, then the low priority class receives the highest possible available priority for the current traffic conditions. This parameter controls the priority of the class relative to the other classes.
The service class describes only the delay sensitivity of a class, not how much bandwidth it is allocated, nor how important the traffic is compared to other classes. Typically, you configure low priority for high-throughput applications that aren’t sensitive to packet delay, such as FTP and backup.
DSCP
Specifies a DSCP value from 0 to 63, Inherit from Service Class, or Reflect.
Apply these Path Selections:
Path 1, Path 2, Path 3
Specifies the path preference order (only one path will be used).
If paths are configured and all down:
Relay traffic from the interface normally
Sends traffic unmodified out of the WAN side of whichever in-path it came in on. This is the default setting.
Drop traffic
Drops packets in case of failure of all three (primary, secondary, tertiary) paths. Select this option when you don’t want the traffic to pass on any of the uplinks specified in the rule, not just the primary.
Add
Adds the application to the list.
Service policies
These configuration options are available:
Add Service Policy
Displays the controls to add a service policy.
Name
Specifies the policy name: for example, New York Office.
Realtime
Specifies the percentage to allocate for the guaranteed and maximum bandwidth. The guaranteed bandwidth is the percentage of the bandwidth that’s guaranteed to be allocated to the applications in the traffic class. A lower value indicates that the traffic in the class is more likely to be delayed. The maximum bandwidth is the maximum percentage of the bandwidth that can be allocated to the applications in the traffic class.
Interactive
Specifies the percentage to allocate for the guaranteed and maximum bandwidth.
Business-Critical
Specifies the percentage to allocate for the guaranteed and maximum bandwidth.
Normal
Specifies the percentage to allocate for the guaranteed and maximum bandwidth.
Low-Priority
Specifies the percentage to allocate for the guaranteed and maximum bandwidth. This is the default service policy.
Best Effort
Specifies the percentage to allocate for the guaranteed and maximum bandwidth.
Add
Adds the service policy to the list. The SCC redisplays the Policies table and applies your modifications to the running configuration, which is stored in memory.
Outbound QoS (advanced)
We recommend that you migrate legacy QoS profiles to QoS 9.0 or later. Advanced and basic QoS profiles have policy push restrictions. You can’t push legacy QoS profiles to SteelHeads running 8.0 or 9.0 and later. For detailed information about migrating to QoS 9.0 or later, see
Migrating legacy QoS policies or the
SteelHead User Guide.
If you have legacy basic QoS profiles and you don’t want to migrate to QoS 9.0 or later, you still must migrate to advanced QoS on both the client-side and server-side appliances before configuring advanced QoS.
• If you’re configuring QoS for the first time, you need to migrate from basic to advanced QoS.
• If you’re upgrading a SteelHead with an existing QoS configuration running RiOS 6.1.x or earlier, the system automatically upgrades to advanced QoS.
You can also migrate from basic to advanced QoS after configuring basic if you find you need more control. For details about Advanced QoS, see the SteelHead User Guide.
If you have a basic outbound QoS configuration and you’re previewing the Advanced Outbound QoS page, the page displays a preview of what the Advanced Outbound QoS page looks like after you migrate. You can’t make changes using the Advanced Outbound QoS page while you have a basic outbound QoS configuration.
Your basic outbound QoS settings will be migrated to advanced outbound QoS, which provides a greater degree of configurability. After migration has completed, you can’t revert your QoS settings in this policy back to basic outbound QoS mode. We encourage you to create a copy of this policy before you migrate to advanced outbound QoS so that you can undo the operation.
To migrate to advanced outbound QoS mode, click Migrate. The Advanced Outbound QoS page is displayed.
QoS settings
These configuration options are available:
Enable QoS Shaping
Enables QoS classification to control the prioritization of different types of network traffic and to ensure that the SteelHead gives certain network traffic (for example, Voice over IP) higher priority than other network traffic. Traffic isn’t classified until at least one WAN interface is enabled. To disable QoS, clear this check box and restart the optimization service.
Mode
Specifies Flat or Hierarchical. Changing modes while QoS is enabled can cause momentary network disruptions. Use a hierarchical tree structure to:
• segregate traffic based on flow source or destination and apply different shaping rules and priorities to each leaf-class.
• effectively manage and support remote sites with different bandwidth characteristics.
Network Interfaces:
Enable QoS on <interface> with WAN Bandwidth
Specifies the interface bandwidth link rate in kilobits per second.
The link rate is the bottleneck WAN bandwidth, not the interface speed out of the WAN interface into the router or switch. As an example, if your SteelHead connects to a router with a 100-Mbps link, don’t specify this value—specify the actual WAN bandwidth (for example, T1 or T3).
Different WAN interfaces can have different WAN bandwidths; you must enter the bandwidth link rate correctly for QoS to function properly.
Enable Local WAN Oversubscription
Allows the sum of remote site bandwidths to exceed the WAN uplink speed. Bandwidth oversubscription shares the bandwidth fairly when the network includes remote site bandwidths that collectively exceed the available bandwidth of the local WAN uplink interface speed. The link sharing provides bandwidth guarantees when some of the sites are partially or fully inactive.
As an example, your data center uplink can be 45 Mbps with three remote office sites each with 20 Mbps uplinks.
When disabled, you can only allocate bandwidth for the remote sites such that the total bandwidth doesn’t exceed the bandwidth of any of the interfaces on which QoS is enabled.
Enabling this option can degrade latency guarantees when the remote sites are fully active.
Enable QoS Marking
Identifies traffic using marking values. You can mark traffic using header parameters, such as VLAN, DSCP, and protocols. In RiOS 7.0, you can also use Layer-7 protocol information through AppFlow Engine (AFE) inspection to apply DSCP marking values to traffic flows.
In RiOS 7.0 and later, the DSCP or IP TOS marking has only local significance. This means you can set the DSCP or IP TOS values on the server-side appliance to values different from those set on the client-side appliance.
Global DSCP
Specifies a DSCP value from 0 to 63, or No Setting. If your existing network provides multiple classes of service based on DSCP values, and you’re integrating a SteelHead into your environment, you can use the Global DSCP feature to prevent dropped packets and other undesired effects.
QoS classes
These configuration options are available:
Add a New QoS Class
Displays the controls for adding a class.
Name
Specifies a name for the QoS class.
Shaping Parameters:
Class Parent
(Appears only when a QoS hierarchy is enabled.) Specifies the parent for a child class. The class inherits the parent’s definitions: for example, if the parent class has a business critical latency priority, and its child has a real-time latency priority, the child inherits the business critical priority from its parent, and uses a real-time priority only with respect to its siblings. Select a class parent from the drop-down list.
Latency Priority
Indicates how delay-sensitive a traffic class is to the QoS scheduler. Select the latency priority for the class from the drop-down list (highest priority to lowest):
• Realtime—Specifies real-time traffic class. Give this value to your highest priority traffic: for example, VoIP or video conference.
• Interactive—Specifies an interactive traffic class: for example, Citrix, RDP, Telnet and SSH.
• Business Critical—Specifies the high priority traffic class: for example, Thick Client Applications, ERPs, and CRMs.
• Normal Priority—Specifies a normal priority traffic class: for example, internet browsing, file sharing, and email.
• Low Priority—Specifies a low priority traffic class for all traffic that doesn’t fall into any other service class: for example, FTP, backup, other high-throughput data transfers, and recreational applications such as audio file sharing.
• Best Effort—Specifies the lowest priority.
These are minimum priority guarantees; if better service is available, it is provided. For example, if a class is specified as low priority and the higher priority classes aren’t active, then the low priority class receives the highest possible available priority for the current traffic conditions. This parameter controls the priority of the class relative to the other classes.
The latency priority describes only the delay sensitivity of a class, not how much bandwidth it is allocated, nor how important the traffic is compared to other classes. Typically, you configure low latency priority for high-throughput applications that aren’t sensitive to packet delay, such as FTP and backup.
Minimum Bandwidth
Specifies the minimum amount of bandwidth (as a percentage) to guarantee to a traffic class when there is bandwidth contention. All of the classes combined can’t exceed 100 percent. During contention for bandwidth, the class is guaranteed the amount of bandwidth specified. The class receives more bandwidth if there is unused bandwidth remaining.
Excess bandwidth is allocated based on the relative ratios of minimum bandwidth. The total minimum guaranteed bandwidth of all QoS classes must be less than or equal to 100 percent of the parent class.
A default class is automatically created with minimum bandwidth of 10 percent. Traffic that doesn’t match any of the rules is put into the default class. We recommend that you change the minimum bandwidth of the default class to the appropriate value. You can adjust the value as low as 0 percent. The system rounds decimal numbers to 5 points.
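The allocation rule above (guarantees first, then excess shared in proportion to the minimum-bandwidth ratios) can be sketched numerically. The class names and rates below are made up; this is an illustrative model, not product behavior in every detail:

```python
def allocate_excess(link_kbps, min_pct):
    """Split a link among classes: minimum guarantees first, then excess
    bandwidth divided using the same ratios as the minimums.
    Hypothetical helper for illustration only."""
    total_pct = sum(min_pct.values())
    assert total_pct <= 100, "minimum guarantees can't exceed 100 percent"
    guaranteed = {c: link_kbps * p / 100 for c, p in min_pct.items()}
    excess = link_kbps - sum(guaranteed.values())
    # Excess is allocated based on the relative ratios of minimum bandwidth.
    return {c: g + excess * min_pct[c] / total_pct
            for c, g in guaranteed.items()}

# A 1000-kbps link with guarantees summing to 80 percent: the remaining
# 200 kbps is split 30:50 between the two classes.
print(allocate_excess(1000, {"Business": 30, "Default": 50}))
# {'Business': 375.0, 'Default': 625.0}
```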
Maximum Bandwidth
Specifies the maximum allowed bandwidth (as a percentage) a class receives as a percentage of the parent class maximum bandwidth. The limit is applied even if there is excess bandwidth available. The system rounds decimal numbers to 5 points.
Upper Bandwidth
Specifies the maximum allowed bandwidth (as a percentage) a class receives as a percentage of the parent class guaranteed bandwidth. The limit is applied even if there is excess bandwidth available. Upper Bandwidth doesn’t apply to MX-TCP queues.
Optimized Connection Limit
Specifies the maximum number of optimized connections for the class. When the limit is reached, all new connections are passed through unoptimized.
In hierarchical mode, a parent class connection limit doesn’t affect its children. Each child class’s optimized connections are limited by the connection limit specified for that class. For example, if B is a child of A, and the connection limit for A is set to 5, while the connection limit for B is set to 10, the connection limit for B is 10.
Connection Limit is supported only in in-path configurations. It isn’t supported in out-of-path or virtual-in-path configurations. Connection Limit doesn’t apply to the packet-order queue or Citrix ICA traffic.
RiOS doesn’t support a connection limit assigned to any QoS class that’s associated with a QoS rule with an AFE component. An AFE component consists of a Layer-7 protocol specification. RiOS can’t honor the class connection limit because the QoS scheduler may subsequently reclassify the traffic flow after applying a more precise match using AFE identification.
Outbound Queue
Specifies one of these queue methods for the leaf class from the drop-down list (the queue doesn’t apply to the inner class):
• SFQ—Shared Fair Queuing (SFQ) is the default queue for all classes. Determines SteelHead behavior when the number of packets in a QoS class outbound queue exceeds the configured queue length. When SFQ is used, packets are dropped from within the queue in a round-robin fashion, among the present traffic flows. SFQ ensures that each flow within the QoS class receives a fair share of output bandwidth relative to each other, preventing bursty flows from starving other flows within the QoS class.
• FIFO—Transmits all flows in the order that they’re received (first in, first out). Bursty sources can cause long delays in delivering time-sensitive application traffic and potentially to network control and signaling messages.
• MX-TCP—Has very different use cases than the other queue parameters. MX-TCP also has secondary effects that you must understand before configuring:
– When optimized traffic is mapped into a QoS class with the MX-TCP queuing parameter, the TCP congestion-control mechanism for that traffic is altered on the SteelHead. The normal TCP behavior of reducing the outbound sending rate when detecting congestion or packet loss is disabled, and the outbound rate is made to match the guaranteed bandwidth configured on the QoS class.
– You can use MX-TCP to achieve high-throughput rates even when the physical medium carrying the traffic has high loss rates. For example, MX-TCP is commonly used for ensuring high throughput on satellite connections where a lower-layer loss-recovery technique isn’t in use. RiOS 8.5 and later support rate pacing for satellite deployments, which combines MX-TCP with a congestion-control method.
– Another use of MX-TCP is to achieve high throughput over high-bandwidth, high-latency links, especially when intermediate routers don’t have properly tuned interface buffers. Improperly tuned router buffers cause TCP to perceive congestion in the network, resulting in unnecessarily dropped packets, even when the network can support high-throughput rates.
MX-TCP is incompatible with AFE identification. A traffic flow can’t be classified as MX-TCP and then subsequently classified in a different queue. This reclassification can occur if there is a more exact match of the traffic using AFE identification. Ensure these when you enable MX-TCP:
• The QoS rule for MX-TCP is at the top of QoS rules list.
• The rule doesn’t use AFE identification.
• You only use MX-TCP for optimized traffic. MX-TCP doesn’t work for unoptimized traffic.
Use caution when specifying MX-TCP. The outbound rate for the optimized traffic in the configured QoS class immediately increases to the specified bandwidth, but it doesn’t decrease in the presence of network congestion. The SteelHead always tries to transmit traffic at the specified rate. If no QoS mechanism (either parent classes on the SteelHead, or another QoS mechanism in the WAN infrastructure) is in use to protect other traffic, that other traffic might be impacted by MX-TCP not backing off to share bandwidth fairly.
There is a maximum bandwidth setting for MX-TCP that allows traffic in the MX class to burst to the maximum level if the bandwidth is available.
Marking Parameters:
DSCP
Specifies a DSCP value from 0 to 63, or Reflect.
Add
Adds the QoS class.
QoS sites and rules
These configuration options are available:
Add a Site or Rule
Displays the controls to define a remote site.
Add a:
Specifies Site or Rule.
Name
Specifies the name.
Description
Specifies a description.
For traffic with these characteristics:
Local Subnet or Host Label
Specifies an IP address and mask for the traffic source, or you can specify all or 0.0.0.0/0 as the wildcard for all traffic.
Use this format: xxx.xxx.xxx.xxx/xx
-or-
Specify a host label.
Port or Port Label
Specifies all source ports, a single source port value, or a port range of port1-port2, where port1 must be less than port2. The default setting is all ports.
-or-
Specify a port label.
Remote Subnet or Host Label
Specifies an IP address and mask pattern for the traffic destination, or you can specify all or 0.0.0.0/0 as the wildcard for all traffic.
Use this format: xxx.xxx.xxx.xxx/xx
-or-
Specify a host label.
Protocol
Specifies All, TCP, UDP, GRE, ICMP, IPsec AH, IPsec ESP, or a protocol number from the drop-down list. The default setting is All.
VLAN Tag ID
Specifies a VLAN tag as follows:
• Specify a numeric VLAN tag identification number from 0 to 4094.
• Specify all to specify the rule applies to all VLANs.
• Specify none to specify the rule applies to untagged connections.
RiOS supports VLAN 802.1Q. To configure VLAN tagging, configure transport rules to apply to all VLANs or to a specific VLAN. By default, rules apply to all VLAN values unless you specify a particular VLAN ID. Pass-through traffic maintains any preexisting VLAN tagging between the LAN and WAN interfaces.
DSCP
Specifies a DSCP value from 0 to 63, or all to use all DSCP values.
Traffic Type
Specifies Optimized, Passthrough, or All from the drop-down list. The default setting is All.
Application
Specifies an application from the drop-down list of global applications. To narrow the search, type the first characters in the application name. You can define and add any applications that don’t appear in the list. Selecting HTTP expands the control to include the Domain Name and Relative Path controls. Enter the domain name and relative path. The relative path is the part of the URL that follows the domain name.
To facilitate configuration, you can use wildcards in the name and relative path controls: for example, *.akamaitechnologies.com will match Anything.akamaitechnologies.com. Examples:
a.akamaitechnologies.com
a.b.akamaitechnologies.com
a.b.c.akamaitechnologies.com
a.b.c.d.akamaitechnologies.com
Using more than one wildcard (for example, *.*.akamaitechnologies.com) will match Anything.Anything.akamaitechnologies.com. You must include the second period (.).
Examples:
a.b.akamaitechnologies.com
a.b.c.akamaitechnologies.com
a.b.c.d.akamaitechnologies.com
But not a.akamaitechnologies.com.
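The wildcard behavior above (each * stands for one or more whole labels, and matching is case insensitive) can be sketched in Python. This is an illustration of the documented semantics, not RiOS code:

```python
import re

# Each '*' matches one or more whole labels, so '*.example.com'
# matches a.example.com and a.b.example.com, while '*.*.example.com'
# requires at least two labels before the suffix.
_LABELS = r"[^.]+(?:\.[^.]+)*"

def domain_matches(pattern: str, name: str) -> bool:
    regex = re.escape(pattern).replace(r"\*", _LABELS)
    return re.fullmatch(regex, name, re.IGNORECASE) is not None
```

With this sketch, `domain_matches("*.*.akamaitechnologies.com", "a.akamaitechnologies.com")` is False, matching the "But not" case above.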
Selecting SSL expands the control to allow classification of pass-through SSL traffic matching the TLS/SSL server common name. In the Common Name control, specify the common name of a certificate.
To facilitate configuration, you can use wildcards in the name: for example, *.nbttech.com. If you have three origin servers using different certificates such as webmail.nbttech.com, internal.nbttech.com, and marketingweb.nbttech.com, on the server-side SteelHeads, all three server configurations can use the same certificate name *.nbttech.com.
You can’t classify SSL optimized traffic using the Common Name control. Instead, you can create a QoS HTTP rule to match the domain and server name.
Apply these QoS Settings:
Service Class
Indicates how delay-sensitive a traffic class is to the QoS scheduler. Select a service class for the application from the drop-down list (highest priority to lowest):
• Realtime—Specifies real-time traffic class. Give this value to your highest priority traffic: for example, VoIP or video conferences.
• Interactive—Specifies an interactive traffic class: for example, Citrix, RDP, Telnet, and SSH.
• Business Critical—Specifies the high priority traffic class: for example, Thick Client Applications, ERPs, and CRMs.
• Normal Priority—Specifies a normal priority traffic class: for example, internet browsing, file sharing, and email.
• Low Priority—Specifies a low priority traffic class: for example, FTP, backup, other high-throughput data transfers, and recreational applications such as audio file sharing.
• Best Effort—Specifies the lowest priority.
These are minimum service class guarantees; if better service is available, it is provided: for example, if a class is specified as low priority and the higher priority classes aren’t active, then the low priority class receives the highest possible available priority for the current traffic conditions. This parameter controls the priority of the class relative to the other classes.
The service class describes only the delay sensitivity of a class, not how much bandwidth it is allocated, nor how important the traffic is compared to other classes. Typically, you configure low priority for high-throughput, nonpacket delay sensitive applications like FTP and backup.
DSCP
Specifies a DSCP value from 0 to 63, Inherit from Service Class, or Reflect.
Apply these Path Selections:
Path 1, Path 2, Path 3
Specifies the path preference order (only one path will be used).
If paths are configured and all are down:
Relay traffic from the interface normally
Sends traffic unmodified out of the WAN side of whichever in-path interface it came in on. This is the default setting.
Drop traffic
Drops packets when all three paths (primary, secondary, tertiary) are down. Select this option when you don’t want the traffic to pass on any of the uplinks specified in the rule, not just the primary.
Add
Adds a site or rule.
Outbound QoS interfaces
You configure outbound QoS (Basic) and outbound QoS (Advanced) interfaces in the Outbound QoS Interfaces page. For details about Outbound QoS Interfaces see the SteelHead User Guide.
Outbound QoS (basic) WAN link
These configuration options are available:
WAN Bandwidth (kbps)
Specifies the bandwidth link rate in kilobits per second. The bandwidth for the default site is automatically set to this value. QoS supports in-path interfaces only; it doesn’t support primary or auxiliary interfaces.
The link rate is the bottleneck WAN bandwidth, not the interface speed out of the WAN interface into the router or switch. For example, if your appliance connects to a router with a 100-Mbps link, don’t specify this value—specify the actual WAN bandwidth (for example, T1 or T3).
Different WAN interfaces can have different WAN bandwidths; you must enter the bandwidth link rate correctly for QoS to function properly.
Enable QoS on <interface>
Specifies WAN link interfaces.
Outbound QoS (advanced) WAN Link
This configuration option is available:
Enable QoS on <interface> with WAN bandwidth <kbps>
Specifies WAN link interfaces and WAN bandwidth. Specify the bandwidth link rate in kilobits per second. The bandwidth for the default site is automatically set to this value.
QoS supports in-path interfaces only; it doesn’t support primary or auxiliary interfaces.
The link rate is the bottleneck WAN bandwidth, not the interface speed out of the WAN interface into the router or switch. For example, if your appliance connects to a router with a 100-Mbps link, don’t specify this value—specify the actual WAN bandwidth (for example, T1 or T3).
Different WAN interfaces can have different WAN bandwidths; you must enter the bandwidth link rate correctly for QoS to function properly.
Version Incompatibilities for Outbound QoS Interfaces
SCC 9.0 and later don’t support outbound QoS interface policy pushes for SteelHead 9.0.x. If you’re running this version of the software on your remote appliance, you must migrate your QoS policies to simplified QoS on the SCC.
Only SteelHead 8.x.x and lower are supported for this feature. If you’re running these versions of the software, you have limited QoS functionality. We recommend you upgrade to simplified QoS. For detailed information about migrating QoS, see
Migrating legacy QoS policies.
Inbound QoS
You configure inbound QoS in the Inbound QoS page.
We recommend that you migrate legacy QoS profiles to QoS 9.0 or later. Basic and advanced QoS profiles have policy push restrictions. You can’t push legacy QoS profiles to SteelHeads running 9.0 or later. For detailed information about migrating to QoS 9.0 or later, see
Migrating legacy QoS policies or the
SteelHead User Guide.
Inbound QoS allocates bandwidth and prioritizes traffic flowing into the LAN network behind the SteelHead appliance. This provides the benefits of QoS for environments that can’t meet their QoS requirements with outbound QoS.
For details about Inbound QoS environments and deployments, see the SteelHead User Guide.
WAN link
These configuration options are available:
Enable Inbound QoS Shaping and Enforcement
Enables QoS to control the prioritization of different types of inbound network traffic and to ensure that the SteelHead gives certain network traffic (for example, Voice over IP) higher priority than other network traffic. Traffic isn’t classified until at least one WAN interface is enabled. By default, inbound QoS classification is disabled.
To disable inbound QoS, clear this check box and restart the optimization service.
Enable QoS on <interface> with WAN bandwidth: <kbps> kbps
Enables a WAN interface <X-Y>. Specify its bandwidth link rate in kbps. The bandwidth for the default site is automatically set to this value.
Inbound QoS supports in-path interfaces only; it doesn’t support primary or auxiliary interfaces.
The link rate is the bottleneck WAN bandwidth, not the interface speed out of the WAN interface into the router or switch. For example, if your appliance connects to a router with a 100-Mbps link, don’t specify this value—specify the actual WAN bandwidth (for example, T1, T3).
Different WAN interfaces can have different WAN bandwidths; you must enter the bandwidth link rate correctly for QoS to function properly.
Inbound QoS classes
These configuration options are available:
Add a Class
Displays the controls to add a class.
Class Name
Specifies a name for the QoS class.
Priority
Specifies the priority from the drop-down list. Priority indicates how delay-sensitive a traffic class is to the QoS scheduler. Select a service class for the application from the drop-down list (highest priority to lowest):
• Realtime—Specifies real-time traffic class. Give this value to your highest priority traffic: for example, VoIP or video conferences.
• Interactive—Specifies an interactive traffic class: for example, Citrix, RDP, Telnet, and SSH.
• Business Critical—Specifies the high priority traffic class: for example, Thick Client Applications, ERPs, and CRMs.
• Normal Priority—Specifies a normal priority traffic class: for example, internet browsing, file sharing, and email.
• Low Priority—Specifies a low priority traffic class: for example, FTP, backup, other high-throughput data transfers, and recreational applications such as audio file sharing.
• Best Effort—Specifies the lowest priority.
These are minimum service class guarantees; if better service is available, it is provided: for example, if a class is specified as low priority and the higher priority classes aren’t active, then the low priority class receives the highest possible available priority for the current traffic conditions. This parameter controls the priority of the class relative to the other classes.
The service class describes only the delay sensitivity of a class, not how much bandwidth it is allocated, nor how important the traffic is compared to other classes. Typically, you configure low priority for high-throughput, nonpacket delay sensitive applications like FTP and backup.
Minimum Bandwidth
Specifies the minimum amount of bandwidth (as a percentage) to guarantee to a traffic class when there is bandwidth contention. All of the classes combined can’t exceed 100 percent. During contention for bandwidth, the class is guaranteed the amount of bandwidth specified. The class receives more bandwidth if there is unused bandwidth remaining.
Excess bandwidth is allocated based on the relative ratios of minimum bandwidth. The total minimum guaranteed bandwidth of all QoS classes must be less than or equal to 100 percent of the parent class.
A default class is automatically created with minimum bandwidth of 10 percent. Traffic that doesn’t match any of the rules is put into the default class. We recommend that you change the minimum bandwidth of the default class to the appropriate value. You can adjust the value as low as 0 percent. The system rounds decimal values to five decimal places.
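The allocation rule above (each class keeps its minimum guarantee, and leftover bandwidth is divided in the same ratios as the minimum percentages) can be sketched as follows. This is an illustrative calculation, not the QoS scheduler itself, and the function name is hypothetical:

```python
def allocate_bandwidth(link_kbps: int, minimums: dict) -> dict:
    """Sketch of minimum-guarantee allocation under contention:
    each class receives its guaranteed share of the link, and any
    excess is split in the relative ratios of the minimums."""
    total_pct = sum(minimums.values())
    assert total_pct <= 100, "combined minimums can't exceed 100 percent"
    guaranteed = {c: link_kbps * pct / 100 for c, pct in minimums.items()}
    excess = link_kbps - sum(guaranteed.values())
    return {c: kbps + excess * (minimums[c] / total_pct)
            for c, kbps in guaranteed.items()}
```

For example, on a 1000-kbps link with minimums of 30, 10, and 20 percent, the unreserved 40 percent is shared in a 30:10:20 ratio, so the 30-percent class ends up with 500 kbps when all classes are backlogged.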
Maximum Bandwidth
Specifies the maximum allowed bandwidth (as a percentage) a class receives as a percentage of the parent class maximum bandwidth. The limit is applied even if there is excess bandwidth available. The system rounds decimal values to five decimal places.
Link Share Weight
Specifies the weight for the class. Applies to flat mode only. The link share weight determines how the excess bandwidth is allocated among sibling classes. Link share doesn’t depend on the minimum guaranteed bandwidth. By default, all the link shares are equal. Classes with a larger weight are allocated more of the excess bandwidth than classes with a lower link share weight.
You can’t specify a Link Share Weight in Hierarchical QoS. In Hierarchical QoS, the link share weight is the same proportion as the guaranteed bandwidth of the class. The Link Share Weight doesn’t apply to MX-TCP queues.
Add
Adds a class.
Inbound QoS rules
These configuration options are available:
Add a Rule
Displays the controls to add a QoS rule.
Name
Specifies a rule name.
Insert Rule At
Inserts a QoS rule for a QoS class. Select Start, End, or a rule number from the drop-down list. Appliances evaluate rules in numerical order starting with rule 1. If the conditions set in the rule match, then the rule is applied, and the system moves on to the next packet. If the conditions set in the rule don’t match, the system consults the next rule: for example, if the conditions of rule 1 don’t match, rule 2 is consulted. If rule 2 matches the conditions, it is applied, and no further rules are consulted.
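The first-match evaluation described above can be sketched in a few lines. This is an illustration of the documented rule-ordering behavior, not the RiOS classifier; the rule structure shown is hypothetical:

```python
def classify(packet: dict, rules: list) -> str:
    """First-match evaluation: rules are consulted in numerical
    order, and the first rule whose conditions all hold is applied;
    later rules are not consulted for that packet."""
    for number, conditions, qos_class in rules:
        if all(packet.get(k) == v for k, v in conditions.items()):
            return qos_class
    return "default"  # unmatched traffic falls into the default class

rules = [
    (1, {"protocol": "UDP", "port": 5060}, "Realtime"),
    (2, {"protocol": "TCP", "port": 22},   "Interactive"),
]
```

Here a TCP port 22 packet skips rule 1 (no match) and is classified by rule 2, while a TCP port 80 packet matches no rule and falls to the default class.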
Description
Describes the rule to facilitate administration.
For traffic with these characteristics:
Remote Subnet or Host Label
Specifies an IP address and mask pattern for the traffic destination, or you can specify all or 0.0.0.0/0 as the wildcard for all traffic.
Use this format: xxx.xxx.xxx.xxx/xx
-or-
Specify a host label.
Port or Port Label
Specifies all source ports, a single source port value or a port range of port1-port2, where port1 must be less than port2. The default setting is all ports.
-or-
Specify a port label.
Local Subnet or Host Label
Specifies an IP address and mask for the traffic source, or you can specify all or 0.0.0.0/0 as the wildcard for all traffic.
Use this format: xxx.xxx.xxx.xxx/xx
-or-
Specify a host label.
Port or Port Label
Specifies all destination ports, a single destination port value, or a port range of port1-port2, where port1 must be less than port2. The default setting is all ports.
-or-
Specify a port label.
Protocol
Specifies All, TCP, GRE, UDP, ICMP, IPsec Authentication Header (AH), IPsec Encapsulating Security Payload (ESP), or a number from the drop-down list. All specifies all TCP and UDP-based protocols.
Traffic Type
Specifies All, Optimized, or Passthrough from the drop-down list. The system applies the QoS rules to optimized and pass-through (ingress only) traffic.
Session reliability (port 2598) isn’t supported with pass-through Citrix traffic.
DSCP
Specifies a DSCP value from 0 to 63.
VLAN Tag ID
Specifies the VLAN tag for the rule.
Application
Specifies an application from the drop-down list of global applications. To narrow the search, type the first characters in the application name. You can define and add any applications that don’t appear in the list.
Selecting HTTP expands the control to include the Domain Name and Relative Path controls. Enter the domain name and relative path. The relative path is the part of the URL that follows the domain name.
To facilitate configuration, you can use wildcards in the name and relative path controls: for example, *.akamaitechnologies.com will match Anything.akamaitechnologies.com. Examples:
a.akamaitechnologies.com
a.b.akamaitechnologies.com
a.b.c.akamaitechnologies.com
a.b.c.d.akamaitechnologies.com
Using more than one wildcard (for example, *.*.akamaitechnologies.com) will match Anything.Anything.akamaitechnologies.com. You must include the second period (.).
Examples:
a.b.akamaitechnologies.com
a.b.c.akamaitechnologies.com
a.b.c.d.akamaitechnologies.com
But not a.akamaitechnologies.com.
Selecting SSL expands the control to allow classification of pass-through SSL traffic matching the TLS/SSL server common name. In the Common Name control, specify the common name of a certificate.
To facilitate configuration, you can use wildcards in the name: for example, *.nbttech.com. If you have three origin servers using different certificates such as webmail.nbttech.com, internal.nbttech.com, and marketingweb.nbttech.com, on the server-side SteelHeads, all three server configurations can use the same certificate name *.nbttech.com.
You can’t classify SSL optimized traffic using the Common Name control. Instead, you can create a QoS HTTP rule to match the domain and server name.
Apply these QoS Settings:
Service Class Name
Specifies the latency priority for the class from the drop-down list (highest priority to lowest):
• Realtime—Specifies real-time traffic class. Give this value to your highest priority traffic: for example, VoIP or video conference.
• Interactive—Specifies an interactive traffic class: for example, Citrix, RDP, Telnet and SSH.
• Business Critical—Specifies the high priority traffic class: for example, Thick Client Applications, ERPs, and CRMs.
• Normal Priority—Specifies a normal priority traffic class: for example, internet browsing, file sharing, and email.
• Low Priority—Specifies a low priority traffic class for all traffic that doesn’t fall into any other service class: for example, FTP, backup, other high-throughput data transfers, and recreational applications such as audio file sharing.
• Best Effort—Specifies the lowest priority.
These are minimum priority guarantees; if better service is available, it is provided. For example, if a class is specified as low priority and the higher priority classes aren’t active, then the low priority class receives the highest possible available priority for the current traffic conditions. This parameter controls the priority of the class relative to the other classes.
The latency priority describes only the delay sensitivity of a class, not how much bandwidth it is allocated, nor how important the traffic is compared to other classes. Typically, you configure low latency priority for high-throughput, nonpacket delay sensitive applications like FTP and backup.
Add
Adds a rule to the inbound QoS rule list.
Inbound QoS interfaces
This feature applies only to SteelHead.
For details about Inbound QoS environments and deployments, see the SteelHead User Guide or the SteelHead Deployment Guide.
WAN link
This configuration option is available:
Enable QoS on <interface> with WAN bandwidth: <kbps> kbps
Enables a WAN interface <X-Y>. Specify its bandwidth link rate in kilobits per second. The bandwidth for the default site is automatically set to this value.
Inbound QoS supports in-path interfaces only; it doesn’t support primary or auxiliary interfaces.
The link rate is the bottleneck WAN bandwidth, not the interface speed out of the WAN interface into the router or switch. For example, if your appliance connects to a router with a 100-Mbps link, don’t specify this value—specify the actual WAN bandwidth (for example, T1 or T3).
Different WAN interfaces can have different WAN bandwidths; you must enter the bandwidth link rate correctly for QoS to function properly.
Version Incompatibilities for Inbound QoS Interfaces
SCC 9.0 and later don’t support inbound QoS interface policy pushes on SteelHead 9.0.x. If you’re running this version of the software on your remote appliance, you must migrate your QoS policies to simplified QoS on the SCC. For detailed information about QoS migration best practices, see
Managing QoS.
Only SteelHead 8.x.x and lower are supported for this feature. If you’re running these versions of the software, you have limited QoS functionality. We recommend you upgrade to simplified QoS. For detailed information about migrating QoS, see
Migrating legacy QoS policies.
Path selection
You enable legacy path selection settings in the Path Selection page.
We recommend that you migrate your path selection rules. For detailed information, see
Managing path selection.
For path selection use case examples, see the SteelHead User Guide.
These configuration options are available:
Enable Path Selection
Enables path selection configuration.
Apply
Applies your settings.
Path Definition:
Add a New Path
Displays the controls to define a path.
Name
Specifies the path.
Probe Packet Settings:
Remote IP Address
Specifies the IP address of the remote host to ping when monitoring the path status.
DSCP
Specifies the DSCP marking for the ping packet. The marking is necessary when service providers apply path selection metrics based on DSCP marking and the marking differs for each provider.
The default marking is reflect. Reflect specifies that the DSCP level or IP ToS value found on pass-through and optimized traffic is unchanged when it passes through the appliance.
Timeout
Specifies how much time, in seconds, elapses before the system considers the path to be unavailable. The default value is 2 seconds. Path selection uses ICMP pings to probe the paths. If the ping responses don’t make it back within this timeout setting and the system loses the number of packets defined by the threshold value, it considers the path to be down and triggers an alarm.
Threshold
Specifies how many timed-out probes to count before the system considers the path to be unavailable and triggers an alarm. The default is 3 packets. This value also determines how many probes the system must receive to consider the path to be available.
Path selection uses ICMP pings to monitor path availability. If the ping responses don’t make it back within the probe timeout and the system loses the number of packets defined by this threshold, it considers the path to be down and triggers an alarm.
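The probe accounting above can be sketched as a small state machine: a path is marked down after a threshold of consecutive timed-out probes, and up again after the same number of consecutive replies. This is an illustrative model of the documented behavior, not the monitoring code itself:

```python
class PathMonitor:
    """Sketch of the probe logic: the default threshold of 3
    matches the text above. record_probe returns the path's
    availability after each probe result."""
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.available = True
        self._streak = 0  # consecutive results opposing the current state

    def record_probe(self, reply_received: bool) -> bool:
        if reply_received == self.available:
            self._streak = 0  # result agrees with the current state
        else:
            self._streak += 1
            if self._streak >= self.threshold:
                # flip state; a real system would trigger an alarm here
                self.available = reply_received
                self._streak = 0
        return self.available
```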
Add
Adds the new path.
Version Incompatibilities for Path Selection
SCC 9.0 and later don’t support path selection policy pushes for SteelHead 8.0, 9.0.x, or later.
If you’re running this version of the software on your remote appliance, you must create a new set of path selection rules based on sites, networks, and uplinks to apply to simplified QoS or secure transport. For details about creating sites, networks, and uplinks, see
Managing Interceptor Clusters. For details about configuring path selection in the SCC, see
Managing path selection.
Port labels
You create port labels for the selected networking policy in the Port Labels page. Port labels are names given to sets of port numbers. You use port labels when configuring in-path rules. For example, you can use port labels to define a set of ports to which the same in-path, peering, QoS classification, and QoS marking rules apply.
When you configure QoS and path selection for RiOS 9.0, SteelHeads using host or port labels must be assigned to the Global group.
For details about the port labels, see the SteelHead User Guide.
These configuration options are available:
Add a New Port Label
Displays the controls to add a new port label.
Name
Specifies the label name. These rules apply:
• Port labels aren’t case sensitive and can be any string consisting of letters, the underscore (_), or the hyphen (-). There can’t be spaces in port labels.
• The fields in the various rule pages of the SCC that take a physical port number also take a port label.
• To avoid confusion, don’t use a number for a port label.
• Port labels that are used in in-path and other rules, such as QoS and peering rules, can’t be deleted.
• Port label changes (that is, adding and removing ports inside a label) are applied immediately by the rules that use the port labels that you have modified.
Ports
Specifies a comma-separated list of ports.
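Expanding a comma-separated port list, including port1-port2 ranges where port1 must be less than port2, can be sketched as follows. This is an illustrative parser with a hypothetical name, not the SCC's own validation:

```python
def parse_ports(spec: str) -> set:
    """Expand a comma-separated list of ports and port1-port2
    ranges into a set of individual port numbers."""
    ports = set()
    for item in spec.split(","):
        item = item.strip()
        if "-" in item:
            low, high = (int(p) for p in item.split("-"))
            if low >= high:
                raise ValueError(f"{item}: port1 must be less than port2")
            ports.update(range(low, high + 1))
        else:
            ports.add(int(item))
    return ports
```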
Add
Adds the port label.
Host labels
You configure host labels in the Host Labels page.
Host labels are names given to sets of hostnames and subnets to streamline configuration. Host labels provide flexibility because you can create a logical set of hostnames to use in place of a destination IP/subnet and then apply a rule, such as a QoS rule or an in-path rule, to the entire set instead of creating individual rules for each hostname or IP subnet.
When you define hostnames in host labels (as opposed to subnets), RiOS performs a DNS query and retrieves a set of IP addresses that correspond to that fully qualified domain name (hostname). It uses these IP addresses to match the destination IP addresses for a rule using the host label. You can also specify a set of IP subnets in a host label to use as the destination IP addresses for a rule using the host label.
Host labels are compatible with autodiscovery, pass-through, and fixed-target (not packet mode) in-path rules. RiOS 9.16.0 and later support IPv6 for host labels.
Host labels are optional.
When to use host labels
You can define a set of file servers in a host label, use that host label in a single QoS or in-path rule, and apply a policy limiting all IP traffic to and from the servers (independent of what protocol or application is in use).
Other ways to use host labels:
• List multiple dedicated application servers by hostname in a single rule and apply a policy
• List multiple business websites and servers to protect
• List recreational websites to restrict
When you configure QoS and path selection for RiOS 9.0, SteelHeads using host or port labels must be assigned to the Global group.
If you intend to use host labels with clusters, you must configure host labels in a policy before you can perform a cluster push.
For detailed information on configuring host labels, see the SteelHead User Guide.
These configuration options are available:
Add a New Host Label
Displays the controls to add a new host label.
Name
Specifies the label name: for example, YouTube. These rules apply:
• Host label names are case sensitive and can be any string consisting of letters, numbers, the underscore (_), or the hyphen (-). There can’t be spaces in host labels.
• We suggest starting the name with a letter or underscore.
• To avoid confusion, don’t use a number for a host label.
• You can’t delete host labels that a QoS or in-path rule is using.
Hostnames/Subnets
Specifies a comma-separated list of hostnames and subnets. Hostnames aren’t case sensitive. You can also separate hostnames and subnets with spaces or new lines.
Use this format:
xxx.xxx.xxx.xxx/xx where /xx is a subnet mask value between 0 and 32.
A hostname can be a fully qualified domain name.
A hostname can appear in multiple host labels. You can use up to 100 unique hostnames.
A host label can contain up to 64 subnets and hostnames.
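Splitting a host label's value into subnets and hostnames, with the entry limit described above, can be sketched like this. The function name and error handling are illustrative assumptions, not the SCC's implementation:

```python
import ipaddress

MAX_ENTRIES = 64  # a host label can hold up to 64 subnets and hostnames

def split_host_label(entries: str):
    """Classify each comma-separated entry as a subnet (x.x.x.x/x,
    mask 0-32) or a hostname; hostnames aren't case sensitive."""
    items = [e.strip() for e in entries.split(",") if e.strip()]
    if len(items) > MAX_ENTRIES:
        raise ValueError("host label exceeds 64 entries")
    subnets, hostnames = [], []
    for item in items:
        try:
            subnets.append(ipaddress.ip_network(item, strict=False))
        except ValueError:
            hostnames.append(item.lower())  # normalize case for matching
    return subnets, hostnames
```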
Add New Host Label
Adds the host label. The page updates the host label table with the new host label. Because the system resolves new hostnames through the DNS, wait a few seconds and then refresh your browser.
Domain labels
You create domain labels for the selected networking policy in the Domain Labels page.
Domain labels are names given to a group of domains to streamline configuration. You can specify an internet domain with wildcards to define a wider group. For example, you can create a domain label called Office365 and add *.microsoftonline.com, *.office365.com, or *.office.com.
Domain labels provide flexible domain and hostname-based interception through a dynamic IP address to accommodate network environments that are changing from static to dynamic IP addresses.
Domain labels are optional.
When to use domain labels
Use domain labels to:
• create a logical set of domain names. Apply an in-path rule to the entire set instead of creating individual rules for each domain name. One rule replaces many rules. For example, you can define a set of services in a domain label, use that domain label in an in-path rule, and apply an optimization policy based on the application or service being accessed.
• match a specific set of services. Domain labels can be especially useful when an IP address and subnet hosts many services and you don’t need your in-path rule to match them all.
• replace a fixed IP address for a server. Some SaaS providers and the O365 VNext architecture that serve multiple O365 applications such as SharePoint, Lync, and Exchange no longer provide a fixed IP address for the server. With many IP addresses on the same server, a single address is no longer enough to match with an in-path rule. Let’s suppose you need to select and optimize a specific SaaS service. Create a domain label and then use it with a host label and an in-path rule to intercept and optimize the traffic.
Dependencies for domain labels
Domain labels have these dependencies:
• They’re compatible with autodiscovery, pass-through, and fixed-target (not packet mode) in-path rules.
• They don’t replace the destination IP address. The in-path rule still sets the destination using IP/subnet (or uses a host label or port). The in-path rule matches the IP address and port first, and then matches the domain label second. The rule must match both the destination and the domain label.
• RiOS 9.16.0 and later support IPv6 for domain labels.
• The client-side and server-side SteelHeads must be running RiOS 9.2 or later.
• A fixed-target rule with a domain label match followed by an auto-discover rule will not use autodiscovery but will instead pass through the traffic. This happens because the matching SYN packet for a fixed-target rule with a domain label isn’t sent with a probe.
• Domain labels and cloud acceleration are mutually exclusive. When you add a domain label to an in-path rule that has cloud acceleration enabled, the system automatically sets cloud acceleration to Pass Through and connections to the subscribed SaaS platform are no longer optimized by the SteelHead SaaS. To use cloud acceleration with domain labels, place the domain label rules lower than cloud acceleration rules in your rule list so the cloud rules match before the domain label rules.
• We recommend adding domain label rules last in the list, so RiOS matches all previous rules before matching the domain label rule.
• When you add a domain label to an in-path rule with the ports set to All, the in-path rule defaults to ports HTTP (80) and HTTPS (443) for optimization. A warning states that only HTTP and HTTPS ports are in use. When you choose a specific port number or port range, the in-path rule matches those ports.
• They’re not compatible with connection forwarding.
• You can’t use domain labels with QoS rules.
You can also use the CLI to configure domain labels. For detailed information, see the SteelHead Deployment Guide.
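The two-stage match described in the dependencies (the rule's destination IP address and port match first, then the domain label, and both must hold) can be sketched as follows. This is an illustration under stated assumptions: the function and field names are hypothetical, and the domain comparison is simplified to a suffix check rather than full wildcard matching:

```python
import ipaddress

def rule_matches(conn: dict, dest_subnet: str, ports: set,
                 label_domains: list) -> bool:
    """A rule applies only when the destination IP/port matches
    AND the connection's hostname matches the domain label."""
    ip_ok = ipaddress.ip_address(conn["dst_ip"]) in \
        ipaddress.ip_network(dest_subnet)
    port_ok = conn["dst_port"] in ports
    # Simplified wildcard handling: '*.office365.com' becomes a
    # case-insensitive '.office365.com' suffix check.
    domain_ok = any(conn["host"].lower().endswith(d.lstrip("*").lower())
                    for d in label_domains)
    return ip_ok and port_ok and domain_ok
```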
Adding a domain label
These configuration options are available:
Add a New Domain Label
Displays the controls to add a new domain label.
Name
Specifies the label name. These rules apply:
• A domain label name can be up to 64 characters long.
• Domain label names are case sensitive and can be any string consisting of letters, numbers, the underscore (_), or the hyphen (-). There can’t be spaces in domain label names.
• We suggest starting the name with a letter or underscore, although the first letter can be a number.
• To avoid confusion, don’t use a number for a domain label.
Domains
Specifies a comma-separated list of domains. Keep in mind that the URL might use other domains. For example, www.box.com might also use srv1.box.net and other domains. Determine all of the domains whose traffic you want to optimize, and make an entry in the domain label for each one. Domain labels are most useful when they specify a narrow destination IP range, so use the smallest destination IP/range you can. Using a host label can help to narrow the destination IP range.
These rules apply to domain label entries:
• Matching is case insensitive.
• You must include a top-level domain: for example, .com. You can’t include a wildcard in a top-level domain.
• You must specify second-level domains: for example, *.outlook.com, but not *.com.
• You can also separate domains with spaces or new lines.
• A domain name can be up to 64 characters long.
• Characters must be alphanumeric (0-9, a-z, A-Z), periods, underscores, wildcards, and hyphens.
• Don’t use consecutive periods.
• Don’t use consecutive wildcards.
• Don’t use IP addresses.
A domain can appear in multiple domain labels. You can create up to 63 unique domain labels.
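The entry rules above can be sketched as a single validation routine. This is an illustrative approximation (for example, it treats an all-numeric name as an IP address), with a hypothetical function name:

```python
import re

def validate_domain_entry(entry: str) -> bool:
    """Check one domain label entry: at most 64 characters,
    allowed characters only, a concrete second- and top-level
    domain (so '*.outlook.com' but not '*.com'), no consecutive
    periods or wildcards, and no IP addresses."""
    if len(entry) > 64 or ".." in entry or "**" in entry:
        return False
    if not re.fullmatch(r"[0-9A-Za-z_.*-]+", entry):
        return False
    labels = entry.split(".")
    if len(labels) < 2 or "*" in labels[-1] or "*" in labels[-2]:
        return False  # top- and second-level domains must be named
    if all(label.isdigit() for label in labels):
        return False  # looks like an IP address
    return True
```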
Add New Domain Label
Adds the domain label. The page updates the domain label table with the new domain label.
Optimization policy settings
The Optimization Policy optimizes connections using scalable data reduction, compression, both, or neither.
This section describes the Optimization Policy feature set. The procedures in this section assume you have already created an Optimization Policy.
General service settings
In the General Service Settings page, you can modify default settings for the maximum half-opened connections from a single source IP address and the connection pool size. Pay careful attention to the configuration descriptions included in this procedure.
General Service Settings include controls to enable or disable in-path, out-of-path, failover support, and to set connection limits and the maximum connection pooling size.
If you have an appliance that contains multiple bypass cards, the SCC displays options to enable in-path support for these ports. The number of these interface options depends on the number of pairs of LAN and WAN ports that you have enabled in your appliance.
For detailed information about general service settings for optimization, see the SteelHead User Guide.
General service settings
These configuration options are available:
In-Path Settings
Enable In-Path Support
Enables in-path support.
• Reset Existing Client Connections on Start Up—(Not recommended for production environments).
Enables kickoff globally. If you enable kickoff, connections that exist when the optimization service is started and restarted are disconnected. When the connections are retried they’re optimized.
Generally, connections are short-lived and kickoff isn’t necessary. It is suitable for very challenging remote environments. In a remote branch office with a T1 and 35-ms round-trip time, you would want connections to migrate to optimization gracefully, rather than risk interruption with kickoff. RiOS provides a way to reset preexisting connections that match an in-path rule and the rule has kickoff enabled. You can also reset a single pass-through or optimized connection in the Current Connections report, one connection at a time.
Don’t enable kickoff for in-path SteelHeads that use autodiscover or if you don’t have a SteelHead on the remote side of the network. If you don’t set any in-path rules the default behavior is to autodiscover all connections. If kickoff is enabled, all connections that existed before the SteelHead started are reset.
• Enable L4/PBR/WCCP/Interceptor Support—Enables optional, virtual in-path support on all the interfaces for networks that use Layer-4 switches, PBR, WCCP, and Interceptor. External traffic redirection is supported only on the first in-path interface. These redirection methods are available:
– Layer-4 Switch—You enable Layer-4 switch support when you have multiple SteelHeads in your network, so that you can manage large bandwidth requirements.
– Policy-Based Routing (PBR)—PBR allows you to define policies to route packets instead of relying on routing protocols. You enable PBR to redirect traffic that you want optimized by a SteelHead that’s not in the direct physical path between the client and server.
– Web Cache Communication Protocol (WCCP)—If your network design requires you to use WCCP, a packet redirection mechanism directs packets to RiOS appliances that aren’t in the direct physical path to ensure that they’re optimized.
For details about configuring Layer-4 switch, PBR, and WCCP deployments, see the SteelHead Deployment Guide.
• Interface <inpathx_y> Present—Specify the interface upon which you want to enable optimization support.
– Enable Optimizations on Interface <inpathx_y>—Enables in-path support for additional bypass cards.
If you have an appliance that contains multiple two-port or four-port bypass cards, the Management Console displays options to enable in-path support for these ports. The number of these interface options depends on the number of pairs of LAN and WAN ports that you have enabled in your SteelHead.
The interface names for the bypass cards are a combination of the slot number and the port pairs (inpath<slot>_<pair>, inpath<slot>_<pair>): for example, if a four-port bypass card is located in slot 0 of your appliance, the interface names are inpath0_0 and inpath0_1. Alternatively, if the bypass card is located in slot 1 of your appliance, the interface names are inpath1_0 and inpath1_1. For details about installing additional bypass cards, see the Riverbed Hardware Platforms Guide.
Out-of-Path Settings:
Enable Out-of-Path Support
(Server-side appliances only.) Enables out-of-path support on a server-side SteelHead, where only a SteelHead primary interface connects to the network. The SteelHead can be connected anywhere in the LAN. There is no redirecting device in an out-of-path SteelHead deployment. You configure fixed-target in-path rules for the client-side SteelHead. The fixed-target in-path rules point to the primary IP address of the out-of-path SteelHead. The out-of-path SteelHead uses its primary IP address when communicating to the server. The remote SteelHead must be deployed either in a physical or virtual in-path mode.
If you set up an out-of-path configuration with failover support, you must set fixed-target rules that specify the primary and backup SteelHeads.
Connection Settings:
Half-Open Connection Limit per Source IP
Restricts half-opened connections on a source IP address initiating connections (that is, the client machine).
Set this feature to block a source IP address that’s opening multiple connections to invalid hosts or ports simultaneously (for example, a virus or a port scanner).
This feature doesn’t prevent a source IP address from connecting to valid hosts at a normal rate. Thus, a source IP address could have more established connections than the limit. The default value is 4096.
The appliance counts the number of half-opened connections for a source IP address (connections that check if a server connection can be established before accepting the client connection). If the count is above the limit, new connections from the source IP address are passed through unoptimized.
If you have a client connecting to valid hosts or ports at a very high rate, some of its connections might be passed through even though all of the connections are valid.
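The per-source counting behavior described above can be modeled in a few lines of Python (an illustrative sketch, not appliance code; `should_optimize` and `server_accepted` are hypothetical names):

```python
from collections import defaultdict

HALF_OPEN_LIMIT = 4096          # default Half-Open Connection Limit per Source IP

half_open = defaultdict(int)    # source IP -> current half-opened connection count

def should_optimize(src_ip: str) -> bool:
    """Decide whether a new connection from src_ip is optimized or passed through."""
    if half_open[src_ip] >= HALF_OPEN_LIMIT:
        return False            # above the limit: pass through unoptimized
    half_open[src_ip] += 1      # half-opened until the server connection succeeds
    return True

def server_accepted(src_ip: str) -> None:
    """Server connection established; the connection is no longer half-open."""
    half_open[src_ip] = max(0, half_open[src_ip] - 1)
```

Note that only half-opened connections count against the limit, which is why a well-behaved client can hold more established connections than the limit allows.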
Maximum Connection Pool Size
Specifies the maximum number of TCP connections in a connection pool.
Connection pooling enhances network performance by reusing active connections instead of creating a new connection for every request. Connection pooling is useful for protocols that create a large number of short-lived TCP connections, such as HTTP.
To optimize such protocols, a connection pool manager maintains a pool of idle TCP connections, up to the maximum pool size. When a client requests a new connection to a previously visited server, the pool manager checks the pool for unused connections and returns one if available. Thus, the client and the SteelHead don’t have to wait for a three-way TCP handshake to finish across the WAN. If all connections currently in the pool are busy and the maximum pool size hasn’t been reached, the new connection is created and added to the pool. When the pool reaches its maximum size, all new connection requests are queued until a connection in the pool becomes available or the connection attempt times out.
The default value is 20. A value of 0 specifies no connection pool.
You must restart the SteelHead after changing this setting.
Viewing the Connection Pooling report can help determine whether to modify the default setting. If the report indicates an unacceptably low ratio of pool hits per total connection requests, increase the pool size.
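The pool-hit/pool-miss behavior described above can be sketched as a minimal pool (illustrative only; the `connect` factory and class names are hypothetical, not RiOS internals):

```python
import queue

class ConnectionPool:
    """Minimal sketch of connection pooling as described above.

    `connect` is any callable that opens a new connection object with a close().
    """
    def __init__(self, connect, max_size=20):   # 20 is the documented default
        self.connect = connect
        self.idle = queue.Queue(maxsize=max_size)

    def acquire(self):
        try:
            return self.idle.get_nowait()   # pool hit: skip the WAN handshake
        except queue.Empty:
            return self.connect()           # pool miss: open a new connection

    def release(self, conn):
        try:
            self.idle.put_nowait(conn)      # keep the idle connection for reuse
        except queue.Full:
            conn.close()                    # pool already at maximum size
```

A `max_size` of 0 makes every `release` close the connection, which corresponds to disabling the pool.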
Failover Settings:
Enable Failover Support
Configures a failover deployment on either a primary or backup SteelHead. In the event of a failure in the primary appliance, the backup appliance takes its place with a warm RiOS data store, and can begin delivering fully optimized performance immediately.
The primary and backup SteelHeads must be the same hardware model.
Current Appliance is
Specifies Master or Backup from the drop-down list. A master SteelHead is the primary appliance; the backup SteelHead is the appliance that automatically optimizes traffic if the primary appliance fails.
IP Address (peer in-path interface)
Specifies the IP address, in IPv4 or IPv6 format, for the primary or backup SteelHead. You must specify the peer appliance’s in-path interface IP address (inpath0_0), not its primary interface IP address.
Packet Mode Optimization Settings:
Enable Packet Mode Optimization
Performs packet-by-packet SDR bandwidth optimization on TCP or UDP (over IPv4 or IPv6) flows. This feature uses fixed-target packet mode optimization in-path rules to optimize bandwidth for applications over these transport protocols.
Both SteelHeads must be running RiOS 8.5 or later for TCPv4 and UDPv6 flows. Both SteelHeads must be running RiOS 7.0 or later for TCPv6 or UDPv4 flows.
By default, packet-mode optimization is disabled.
Enabling this feature requires an optimization service restart.
In-path rules
We recommend you consult the SteelHead User Guide for detailed information about in-path rules.
In-path rules are used only when a connection is initiated. Because connections are usually initiated by clients, in-path rules are configured for the initiating, or client-side, SteelHead. In-path rules determine SteelHead behavior with SYN packets.
In-path rules are an ordered list of fields a SteelHead uses to match with SYN packet fields (for example, source or destination subnet, IP address, VLAN, or TCP port). Each in-path rule has an action field. When a SteelHead finds a matching in-path rule for a SYN packet, the SteelHead treats the packet according to the action specified in the in-path rule.
In-path rules are used only in these scenarios:
• TCP SYN packet arrives on the LAN interface of physical in-path deployments.
• TCP SYN packet arrives on the wan0_0 interface of virtual in-path deployments.
Both of these scenarios are associated with the first, or initiating, SYN packet of the connection. Because most connections are initiated by the client, you configure your in-path rules on the client-side SteelHead. In-path rules have no effect on connections that are already established, regardless of whether the connections are being optimized.
In-path rule configurations differ depending on the action. For example, both the fixed-target and the autodiscovery actions allow you to choose what type of optimization is applied, what type of data reduction is used, what type of latency optimization is applied, and so on.
For detailed information about in-path rules, including packet-mode optimization, see the SteelHead User Guide.
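The ordered, first-match evaluation described above can be modeled in a few lines of Python (a conceptual sketch; the rule and packet structures are hypothetical):

```python
def first_matching_action(rules, syn):
    """In-path rules are an ordered list; the first rule whose fields match the
    SYN packet determines the action. When no rule matches, the default
    behavior is to autodiscover the connection."""
    for rule in rules:
        if rule["match"](syn):          # e.g. source/destination subnet, port, VLAN
            return rule["action"]       # e.g. auto-discover, fixed-target, pass-through
    return "auto-discover"              # default when no rule matches
```

This also illustrates why rule order matters: a broad rule placed early shadows every narrower rule below it.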
These configuration options are available:
Add a New In-Path Rule
Displays the controls for adding a new rule.
Type
Specifies one of these rule types from the drop-down list:
• Auto-Discover—Uses the autodiscovery process to determine if a remote SteelHead is able to optimize the connection attempting to be created by this SYN packet. By default, Auto-Discover is applied to all IP addresses and ports that aren’t secure, interactive, or default Riverbed ports. Defining in-path rules modifies this default setting.
• Fixed-Target—Skips the autodiscovery process and uses a specified remote SteelHead as an optimization peer.
You must specify at least one remote target SteelHead (and, optionally, which ports and backup SteelHeads), and add rules to specify the network of servers, ports, port labels, and out-of-path SteelHeads to use.
In RiOS 8.5 and later, a fixed-target rule enables you to optimize traffic end-to-end using IPv6 addresses. You must change the use of All IP (IPv4 + IPv6) to All IPv6.
If you don’t change to All IPv6, use specific source and destination IPv6 addresses. The inner channel between SteelHeads forms a TCP connection using the manually assigned IPv6 address. This method is similar to an IPv4 fixed-target rule and you configure it the same way.
• Fixed-Target (Packet Mode Optimization)—Skips the autodiscovery process and uses a specified remote SteelHead as an optimization peer to perform bandwidth optimization on TCPv4, TCPv6, UDPv4, or UDPv6 connections.
Packet-mode optimization rules support both physical in-path and primary/backup SteelHead configurations.
You must specify which TCP or UDP traffic flows need optimization, at least one remote target SteelHead, and, optionally, which ports and backup SteelHeads to use.
In addition to adding fixed-target packet-mode optimization rules, you must go to Optimization > Network Services: General Service Settings, enable packet-mode optimization, and restart the optimization service.
Packet-mode optimization rules are unidirectional; a rule on the client-side SteelHead optimizes traffic to the server only. To optimize bidirectional traffic, define two rules:
– A fixed-target packet-mode optimization rule on the client-side SteelHead to the server.
– A fixed-target packet-mode optimization rule on the server-side SteelHead to the client.
Packet-mode optimization rules perform packet-by-packet optimization, as opposed to traffic-flow optimization. After you create the in-path rule to intercept the connection, the traffic flows enter the SteelHead. The SteelHead doesn’t terminate the connection; instead, it rearranges the packet headers and payload for SDR optimization, applies SDR, and sends the packets through a TCPv4 or TCPv6 channel to the peer SteelHead. The peer SteelHead decodes the packets and routes them to the destination server. The optimized packets are sent through a dedicated channel to the peer, depending on which in-path rule the packet’s flow matched.
To view packet-mode optimized traffic, choose Reports > Networking: Current Connections or Connection History. You can also enter the show flows CLI command at the system prompt.
Requirements:
– Both the client-side SteelHead and the server-side SteelHead must be running RiOS 7.0 or later.
– IPv6 is enabled by default in RiOS 8.0.x and later.
– To view the packet-mode flows in the Current Connections and Connection History reports, the SteelHead must be running RiOS 8.5 or later.
Packet-mode optimization rules don’t support:
– Automatic reflection of DSCP markings.
– Latency optimization and preoptimization policies. Selecting this rule type automatically sets the preoptimization policy and latency optimization policies to none.
– Autodiscovery of the peer SteelHead. Because this is a fixed-target rule, the SteelHead determines the IP address of its peer from the rule configuration.
– Connection forwarding, simplified routing, or asymmetric routing.
– QoS, MIP interfaces, NetFlow, transparency, or the automatic kickoff feature.
– Automatically assigned IPv6 addresses.
• Pass-Through—Allows the SYN packet to pass through the SteelHead unoptimized. No optimization is performed on the TCP connection initiated by this SYN packet. You define pass-through rules to exclude subnets from optimization. Traffic is also passed through when the SteelHead is in bypass mode. (Pass through of traffic might occur because of in-path rules or because the connection was established before the SteelHead was put in place or before the optimization service was enabled.)
• Discard—Drops the SYN packets silently. The SteelHead filters out traffic that matches the discard rules. This process is similar to how routers and firewalls drop disallowed packets: the connection-initiating device has no knowledge that its packets were dropped until the connection times out.
• Deny—Drops the SYN packets, sends a message back to its source, and resets the TCP connection being attempted. Using an active reset process rather than a silent discard allows the connection initiator to know that its connection is disallowed.
Enable Email Notification
Periodically sends an email reminder to evaluate in-path pass-through rules. Pass-through in-path rules are frequently created as a temporary workaround for an acute problem, and they often become permanent because the administrator forgets to remove them.
You must also set Send Reminder of Passthrough Rules and specify an email address in the Email policy page.
This field is active only when you specify a pass-through rule. You can’t create notifications for other types of rules.
Email is sent every 15 days.
On the SteelHead Email policy page, you must also:
• Select the Report Events via Email check box and specify an email address.
• Select the Send Reminder of Pass-through Rules via Email option.
Ignore Latency Detection
Does not perform latency detection for this in-path rule when global latency detection is enabled.
Web Proxy
Uses a single-ended Web Proxy to transparently intercept all traffic bound to the internet. Enable this option on a client-side appliance with Auto Discover and Pass-Through rules. Enabling the Web Proxy improves performance by providing optimization services such as web object caching and SSL decryption to enable content caching and logging services.
You can use the same SteelHead for optimizing dual-ended connections (between clients and servers in the data center) and web proxy connections destined for internet-based servers.
Web object caching includes all objects delivered through HTTP(S) that can be cached, including large video objects like static video on demand (VoD) objects and YouTube video. YouTube video caching is enabled by default.
The number of objects that can be cached is limited only by the total available cache space, determined by the SteelHead model. The cache sizes range from 50 GB to 500 GB.
The maximum size of a single object is unlimited. An object remains in the cache for the amount of the time specified in the cache control header. When the time limit expires, the SteelHead evicts the object from the cache.
The proxy cache is separate from the RiOS data store. When objects for a given website are already present in the cache, the system terminates the connection locally and serves the content from the cache. This saves the connection setup time and also reduces the bytes to be fetched over the WAN.
The proxy cache is persistent; its contents remain intact after service restarts and appliance reboots.
Select one of these options from the drop-down list:
• None—Don’t direct traffic through the Web Proxy.
• Force—Select with a pass-through rule to direct any private or intranet IP address and port matching this rule through the Web Proxy. You can also specify port labels to proxy. When enabled, the full and port transparency WAN visibility modes have no impact.
• Auto—Automatically directs all internet-bound traffic destined to public IP addresses on ports 80 and 443 through the Web Proxy. This is the default setting. Only IPv4 traffic is supported.
When enabled on an Auto Discover rule, and the SteelHead is prioritizing the traffic through the Web Proxy, the full or port transparency WAN visibility modes have no impact. When the traffic can’t be prioritized through the Web Proxy, autodiscovery occurs and the full or port transparency modes are used.
You can enable the Web Proxy in a single-ended or asymmetric SteelHead deployment model. A server-side SteelHead isn’t required.
The client-side SteelHead must be able to access internet traffic from the in-path interface. For the interface that’s configured to access the internet, on the SteelHead choose Networking > In-Path Interfaces: In-Path Interface Settings, and add the in-path gateway IP address.
Because this is a client-side feature, it’s controlled and managed from an SCC. You can configure the in-path rule on the client-side SteelHead running the Web Proxy or on the SCC. You must also enable the Web Proxy globally on the SCC and add domains to the global HTTPS whitelist.
Alternatively, you can enter command-line interface commands on the SteelHead to configure the Web Proxy without configuring on an SCC. For details, see the Riverbed Command-Line Interface Reference Guide.
The xx60 models of the SteelHead don’t support this feature.
You can’t enable the Web Proxy with these rule types:
• Fixed-target
• Fixed-target packet mode optimization
• Discard
• Deny
To view the connections going through the web proxy, choose Reports > Networking: Current Connections on the client-side SteelHead. The report shows the optimized HTTP (destination port 80) connections with a W under the connection type (CT) column and Web Proxy under the Application column.
Source
Subnet
Specifies the subnet IP address and netmask for the source network:
• All IP (IPv4 + IPv6)—Maps to all IPv4 and IPv6 networks.
• All IPv4—Maps to 0.0.0.0/0.
• All IPv6—Maps to ::/0.
• IPv4—Prompts you for a specific IPv4 address. Use this format for an individual subnet IP address and netmask: xxx.xxx.xxx.xxx/xx (IPv4)
• IPv6—Prompts you for a specific IPv6 address. Use this format for an individual subnet IP address and netmask: x:x:x::x/xxx (IPv6)
In a virtual in-path configuration using packet-mode optimization, don’t use the wildcard All IP option for both the source and destination IP addresses on the server-side and client-side SteelHeads. Doing so can create a loop between the SteelHeads if the server-side SteelHead forms an inner connection with the client-side SteelHead before the client-side SteelHead forms an inner connection with the server-side SteelHead. Instead, configure the rule using the local subnet on the LAN side of the SteelHead.
When creating a fixed-target packet-mode rule, you must configure an IPv6 address and route for each interface, unless you’re optimizing UDP traffic.
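The wildcard mappings above correspond to standard CIDR notation. A small Python sketch using the standard ipaddress module illustrates the matching (the helper names are hypothetical):

```python
import ipaddress

ALL_IPV4 = ipaddress.ip_network("0.0.0.0/0")    # "All IPv4"
ALL_IPV6 = ipaddress.ip_network("::/0")         # "All IPv6"

def matches_subnet(addr: str, subnet: str) -> bool:
    """True if addr falls inside subnet; works for IPv4 and IPv6 CIDR strings."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network(subnet, strict=False)

def matches_all_ip(addr: str) -> bool:
    """"All IP (IPv4 + IPv6)" is equivalent to matching either wildcard network."""
    ip = ipaddress.ip_address(addr)
    return ip in ALL_IPV4 if ip.version == 4 else ip in ALL_IPV6
```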
Destination
Subnet
Specifies the subnet IP address and netmask for the destination network:
• All IP (IPv4 + IPv6)—Maps to all IPv4 and IPv6 networks.
• All IPv4—Maps to 0.0.0.0/0.
• All IPv6—Maps to ::/0.
• IPv4—Prompts you for a specific IPv4 address. Use this format for an individual subnet IP address and netmask: xxx.xxx.xxx.xxx/xx (IPv4)
• IPv6—Prompts you for a specific IPv6 address. Use this format for an individual subnet IP address and netmask: x:x:x::x/xxx (IPv6)
• Host Label—Specify the destination host label in the text box to selectively optimize connections to specific services: for example, *.sharepoint.com or *.outlook.com. A host label includes a hostname or a set of IP addresses and subnets, allowing you to select specific hosts to optimize. Host labels are useful because some SaaS providers and the O365 VNext architecture serve multiple O365 applications (for example, SharePoint, Lync, and Exchange) through the same dynamic virtual IP address. Choose a host label as a destination to provide flexible hostname-based interception.
When you define hostnames in host labels (as opposed to subnets), RiOS performs a DNS query and retrieves a set of IP addresses that correspond to that fully qualified domain name (hostname). It uses these IP addresses to match the destination IP addresses for a rule using the host label.
Host labels replace the destination. When you add a host label, RiOS ignores any destination IP address specified within the in-path rule.
• RiOS 9.16.0 and later support IPv6 for host labels.
• You can use both host labels and domain labels within a single in-path rule.
• The rules table shows any host label, domain label, and/or port label name in use in the Destination column.
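Conceptually, a host label resolves its hostnames into an IP set, and the rule matches when the destination IP is in that set. The following Python sketch illustrates the idea (RiOS manages its own DNS queries and refresh; here a plain getaddrinfo() lookup stands in for literal, non-wildcard hostnames, and the helper names are hypothetical):

```python
import socket
import ipaddress

def resolve_host_label(hostnames):
    """Expand a host label's literal hostnames into the IP set used for matching."""
    ips = set()
    for name in hostnames:
        for info in socket.getaddrinfo(name, None):
            ips.add(ipaddress.ip_address(info[4][0]))   # collect A/AAAA results
    return ips

def matches_host_label(dest_ip, label_ips):
    """The rule matches when the destination IP is in the resolved set."""
    return ipaddress.ip_address(dest_ip) in label_ips
```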
In a virtual in-path configuration using packet-mode optimization, don’t use the wildcard All IP option for both the source and destination IP addresses on the server-side and client-side SteelHeads. Doing so can create a loop between the SteelHeads if the server-side SteelHead forms an inner connection with the client-side SteelHead before the client-side SteelHead forms an inner connection with the server-side SteelHead. Instead, configure the rule using the local subnet on the LAN side of the SteelHead. When creating a fixed-target packet mode optimization rule, you must configure an IPv6 address and route for each interface.
Port
Specifies All Ports, Specific Port, or Port Label. Select All Ports to match all ports. For Specific Port, specify the destination port number; valid port numbers are 1 through 65535, inclusive. When you select Port Label, specify the port label in the text box. The rules table shows the port label name in use under the Destination column.
Domain Label
Specifies a domain label to optimize a specific service or application with an autodiscover, pass-through, or fixed-target rule. Domain labels are names given to sets of hostnames to streamline configuration. For example, you can create domain labels and use them with an in-path rule to selectively optimize specific services. Because some SaaS providers and the O365 VNext architecture serve multiple O365 applications (for example, SharePoint, Lync, and Exchange) through the same dynamic virtual IP address, using domain labels allows flexible hostname-based interception.
An in-path rule with a domain label uses two layers of match conditions. The in-path rule still sets a destination IP address and subnet (or uses a host label or port). Any traffic that matches the destination first must also be going to a domain that matches the entries in the domain label. The connection must match both the destination and the domain label. When the entries in the domain label don’t match, the system looks to the next matching rule. There are exceptions listed in the Notes that follow.
Choose Networking > App Definitions: Domain Labels to create a domain label.
You can use both host and domain labels within a single in-path rule.
The rules table shows any host label, domain label, and/or port label name in use in the Destination column.
Notes:
• RiOS 9.16.0 and later support IPv6 for domain labels.
• Both the server-side and the client-side SteelHeads must be running RiOS 9.2 or later.
• We recommend that you position rules using a domain label below others. A fixed-target rule with a domain label match followed by an auto-discover rule will not use auto discovery but will instead pass through the traffic. This happens because the matching SYN packet isn’t sent with a probe.
• Domain labels and cloud acceleration are mutually exclusive. When you use a domain label with an in-path rule that has cloud acceleration enabled, the system automatically sets cloud acceleration to Pass Through and connections to the subscribed SaaS platform are no longer optimized by the SteelHead SaaS. Setting domain label back to n/a doesn’t reset the cloud acceleration setting back to the original setting after it has been changed to Pass Through.
• When you add a domain label to an in-path rule with the ports set to All Ports, the system interprets the request as all ports that match the domain label and uses ports HTTP (80) and HTTPS (443) for optimization. A warning states that only the HTTP and HTTPS ports are in use. When you choose a specific port number, the in-path rule honors the port.
For a complete list of domain label compatibility and dependencies, see the SteelHead User Guide.
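The two-layer match described above can be modeled as follows (a Python sketch with hypothetical names; layer 1 is the rule’s destination, layer 2 is the domain-label patterns):

```python
from fnmatch import fnmatch

def rule_matches(dest_ip, dest_matches, server_name, domain_label):
    """Two-layer in-path rule match (conceptual sketch, not RiOS code).

    dest_matches is any callable implementing the destination match (subnet,
    host label, or port); domain_label is a list of patterns such as
    ["*.sharepoint.com", "*.outlook.com"], or None when no label is set.
    """
    if not dest_matches(dest_ip):           # layer 1: destination must match first
        return False
    if domain_label is None:                # no domain label configured: done
        return True
    name = server_name.lower()              # matching is case insensitive
    return any(fnmatch(name, p.lower()) for p in domain_label)   # layer 2
```

When the second layer fails, the system moves on to the next matching rule, as described above.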
Target Appliance IP Address
Specifies the target appliance address for a fixed-target rule. When the protocol is TCP and you don’t specify an IP address, the rule defaults to all IPv6 addresses.
• Port—Specify the target port number for a fixed-target rule.
Backup Appliance IP Address
Specifies the backup appliance address for a fixed-target rule.
• Port—Specify the backup destination port number for a fixed-target rule.
VLAN Tag ID
Specifies a VLAN identification number from 0 to 4094. Enter all to apply the rule to all VLANs, or enter untagged to apply the rule to nontagged connections.
RiOS supports VLAN 802.1Q. To configure VLAN tagging, configure in-path rules to apply to all VLANs or to a specific VLAN. By default, rules apply to all VLAN values unless you specify a particular VLAN ID. Pass-through traffic maintains any preexisting VLAN tagging between the LAN and WAN interfaces.
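The accepted values for the VLAN Tag ID field can be summarized in a short sketch (illustrative Python; parse_vlan_tag is a hypothetical helper, not a product API):

```python
def parse_vlan_tag(value: str):
    """Parse the VLAN Tag ID field: 'all', 'untagged', or an ID from 0 to 4094."""
    v = value.strip().lower()
    if v in ("all", "untagged"):
        return v
    tag = int(v)                    # raises ValueError on non-numeric input
    if not 0 <= tag <= 4094:
        raise ValueError(f"VLAN ID {tag} is outside the range 0-4094")
    return tag
```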
Protocol
(Appears only for fixed-target packet-mode optimization rules.) Specifies a traffic protocol from the drop-down list:
• TCP—Specifies the TCP protocol. Supports TCP-over-IPv6 only.
• UDP—Specifies the UDP protocol. Supports UDP-over-IPv4 only.
• Any—Specifies all TCP-based and UDP-based protocols. This is the default setting.
Preoptimization Policy
Specifies a traffic type from the drop-down list:
• None—If the Oracle Forms, SSL, or Oracle Forms-over-SSL preoptimization policy is enabled and you want to disable it for a port, select None. This is the default setting.
Port 443 always uses a preoptimization policy of SSL even if an in-path rule on the client-side SteelHead sets the preoptimization policy to None. To disable the SSL preoptimization for traffic to port 443, you can either:
1. Disable the SSL optimization on the client-side or server-side SteelHead.
-or-
2. Modify the peering rule on the server-side SteelHead by setting the SSL Capability control to No Check.
• Oracle Forms—Enables preoptimization processing for Oracle Forms. This policy isn’t compatible with IPv6.
• Oracle Forms over SSL—Enables preoptimization processing for both the Oracle Forms and SSL encrypted traffic through SSL secure ports on the client-side SteelHead. You must also set the Latency Optimization Policy to HTTP. This policy isn’t compatible with IPv6.
If the server is running over a standard secure port—for example, port 443—the Oracle Forms over SSL in-path rule needs to be before the default secure port pass-through rule in the in-path rule list.
• SSL—Enables preoptimization processing for SSL encrypted traffic through SSL secure ports on the client-side SteelHead.
Latency Optimization Policy
Specifies one of these policies from the drop-down list:
• Normal—Performs all latency optimizations (HTTP is activated for ports 80 and 8080). This is the default setting.
• HTTP—Activates HTTP optimization on connections matching this rule.
HTTP optimization is unavailable on cloud appliance models. This feature may become available in future releases of those models.
• Outlook Anywhere—Activates RPC over HTTP(S) optimization for Outlook Anywhere on connections matching this rule. To automatically detect Outlook Anywhere or HTTP on a connection, on the SteelHead, select the Normal latency optimization policy and enable the Auto-Detect Outlook Anywhere Connections option in the Optimization > Protocols: MAPI page. The auto-detect option in the MAPI page is best for simple SteelHead configurations with only a single SteelHead at each site and when the Internet Information Services (IIS) server is also handling websites. If the IIS server is only used as RPC Proxy, and for configurations with asymmetric routing, connection forwarding, or Interceptor installations, add in-path rules that identify the RPC Proxy server IP addresses and select this latency optimization policy. After adding the in-path rule, disable the auto-detect option in the Optimization > Protocols: MAPI page.
• Citrix—Activates Citrix-over-SSL optimization on connections matching this rule. This policy isn’t compatible with IPv6. Add an in-path rule to the client-side SteelHead that specifies the Citrix Access Gateway IP address, select this latency optimization policy on both the client-side and server-side SteelHeads, and set the preoptimization policy to SSL. Both the client-side and the server-side SteelHeads must be running RiOS 7.0 or later.
SSL must be enabled on the Citrix Access Gateway. On the server-side SteelHead, enable SSL and install the SSL server certificate for the Citrix Access Gateway.
The client-side and server-side SteelHeads establish an SSL channel between themselves to secure the optimized ICA traffic. End users log in to the Access Gateway through a browser (HTTPS) and access applications through the Web Interface site. Clicking an application icon starts the Online Plug-in, which establishes an SSL connection to the Access Gateway. The ICA connection is tunneled through the SSL connection. The SteelHead decrypts the SSL connection from the user device, applies ICA latency optimization, and reencrypts the traffic over the internet. The server-side SteelHead decrypts the optimized ICA traffic and reencrypts the ICA traffic into the original SSL connection destined to the Access Gateway.
• Exchange Autodetect—Automatically detects MAPI transport protocols (Autodiscover, Outlook Anywhere, and MAPI over HTTP) and HTTP traffic. For MAPI transport protocol optimization, enable SSL and install the SSL server certificate for the Exchange Server on the server-side SteelHead. To activate MAPI over HTTP bandwidth and latency optimization, on the client-side SteelHead, you must also choose Optimization > Protocols: MAPI and select Enable MAPI over HTTP optimization. Both the client-side and server-side SteelHeads must be running RiOS 9.2 or later for MAPI over HTTP bandwidth and latency optimization.
HTTP optimization is unavailable on cloud appliance models. This feature may become available in future releases of those models.
• None—Don’t activate latency optimization on connections matching this rule. For Oracle Forms-over-SSL encrypted traffic, you must set the Latency Optimization Policy to HTTP. Setting the Latency Optimization Policy to None excludes all latency optimizations, such as HTTP, MAPI, and SMB.
Data Reduction Policy
Allows you to configure these types of data reduction policies if the rule type is Auto-Discover or Fixed Target:
• Normal—Perform LZ compression and SDR.
• SDR-Only—Perform SDR; don’t perform LZ compression.
• SDR-M—Performs data reduction entirely in memory, which avoids reading from and writing to the disk. Enabling this option can yield high LAN-side throughput because it eliminates all disk latency. This data reduction policy is useful for:
– a very small amount of data: for example, interactive traffic.
– point-to-point replication during off-peak hours when both the server-side and client-side SteelHeads are the same (or similar) size.
Both SteelHeads must be running RiOS 6.0.x or later.
• Compression-Only—Perform LZ compression; don’t perform SDR.
• None—Don’t perform SDR or LZ compression.
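The policies above differ only in which reduction stages they apply. As an illustrative summary (this is not RiOS code; the flag names, and the assumption that SDR-M skips LZ, are mine), the mapping can be sketched as:

```python
# Illustrative mapping of each data reduction policy to the actions it
# implies. Flag names are assumptions; SDR-M's LZ behavior isn't stated
# in the documentation and is assumed off here.
DATA_REDUCTION_POLICIES = {
    "Normal":           {"sdr": True,  "lz": True,  "in_memory": False},
    "SDR-Only":         {"sdr": True,  "lz": False, "in_memory": False},
    "SDR-M":            {"sdr": True,  "lz": False, "in_memory": True},
    "Compression-Only": {"sdr": False, "lz": True,  "in_memory": False},
    "None":             {"sdr": False, "lz": False, "in_memory": False},
}

def policy_actions(policy: str) -> dict:
    """Return the reduction actions implied by a named policy."""
    return DATA_REDUCTION_POLICIES[policy]
```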
To configure data reduction policies for the FTP data channel, define an in-path rule with the destination port 20 and set its data reduction policy. Setting QoS for port 20 on the client-side SteelHead affects passive FTP, while setting the QoS for port 20 on the server-side SteelHead affects active FTP.
To configure optimization policies for the MAPI data channel, define an in-path rule with the destination port 7830 and set its data reduction policy.
Cloud Acceleration
Ensures that cloud acceleration is ready and enabled after you subscribe to a SaaS platform and enable it. When cloud acceleration is enabled, connections to the subscribed SaaS platform are optimized by the SteelHead SaaS. You don’t need to add an in-path rule unless you want to optimize specific users and exclude others. Select one of these choices from the drop-down list:
• Auto—If the in-path rule matches, the connection is optimized by the SteelHead SaaS connection.
• Pass Through—If the in-path rule matches, the connection isn’t optimized by the SteelHead SaaS, but it follows the other rule parameters so that the connection might be optimized by this SteelHead with other SteelHeads in the network, or it might be passed through.
Domain labels and cloud acceleration are mutually exclusive. When using a domain label, the Management Console dims this control and sets it to Pass Through.
To use host labels with cloud acceleration, set this control to Auto.
SteelHead SaaS doesn’t support host labels.
Auto Kickoff
Enables kickoff, which resets preexisting connections to force them to go through the connection creation process again. If you enable kickoff, connections that preexist when the optimization service is started are reestablished and optimized.
Generally, connections are short-lived and kickoff isn’t necessary. Kickoff is suitable for certain long-lived connections, but not for very challenging remote environments: for example, in a remote branch office with a T1 and a 35-ms round-trip time, you would want connections to migrate to optimization gracefully rather than risk interruption with kickoff.
RiOS provides three ways to enable kickoff:
• Globally for all existing connections in the Optimization > Network Services: General Service Settings page.
• For a single pass-through or optimized connection in the Current Connections report, one connection at a time.
• For all existing connections that match an in-path rule and the rule has kickoff enabled.
In most deployments, you don’t want to set automatic kickoff globally because it disrupts all existing connections. When you enable kickoff using an in-path rule, once the SteelHead detects packet flow that matches the IP addresses and ports specified in the rule, it sends an RST packet to the client and the server to close the connection. Next, it sets an internal flag to prevent any further kickoffs until the optimization service is restarted again.
If no data is being transferred between the client and server, the connection isn’t reset immediately. It resets the next time the client or server tries to send a message. Therefore, when the application is idle, it might take a while for the connection to reset.
By default, automatic kickoff per in-path rule is disabled.
The service applies the first matching in-path rule for an existing connection that matches the source and destination IP and port; it doesn’t consider a VLAN tag ID when determining whether to kick off the connection. Consequently, the service automatically kicks off connections with matching source and destination addresses and ports on different VLANs.
Because the SteelHead didn’t see the initial TCP handshake, it can’t determine which endpoint of a preexisting connection is the source and which is the destination, whereas an in-path rule specifies the source and destination IP addresses to which the rule applies. The service therefore matches the connection’s IP address pair against the rules twice: once as source to destination and once as destination to source.
As an example, this in-path rule will kick off connections from 10.11.10.10/24 to 10.12.10.10/24 and 10.12.10.10/24 to 10.11.10.10/24:
Src 10.11.10.10/24 Dst 10.12.10.10/24 Auto Kickoff enabled
The first matching in-path rule will be considered during the kickoff check for a preexisting connection. If the first matching in-path rule has kickoff enabled, then that preexisting connection will be reset.
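The two-direction, first-match check described above can be sketched in Python. The rule representation and function names are illustrative (the subnets are written in network form, unlike the example rule):

```python
import ipaddress

def rule_matches(rule, src_ip, dst_ip):
    """True if (src_ip, dst_ip) falls inside the rule's source and
    destination subnets."""
    return (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
            and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(rule["dst"]))

def should_kick_off(rules, ip_a, ip_b):
    """First-match check for a preexisting connection. Because the
    SteelHead missed the handshake, the address pair is tried in both
    directions; the first matching rule decides."""
    for rule in rules:
        if rule_matches(rule, ip_a, ip_b) or rule_matches(rule, ip_b, ip_a):
            return rule["kickoff"]
    return False

rules = [{"src": "10.11.10.0/24", "dst": "10.12.10.0/24", "kickoff": True}]
should_kick_off(rules, "10.12.10.10", "10.11.10.10")  # matches in reverse direction too
```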
Specifying automatic kickoff per in-path rule enables kickoff even when you disable the global kickoff feature. When global kickoff is enabled, it overrides this setting. You set the global kickoff feature using the Reset Existing Client Connections on Start Up feature, which appears in the Optimization > Network Services: General Service Settings page.
This feature pertains only to autodiscover and fixed-target rule types and is dimmed for the other rule types.
Neural Framing Mode
Specifies a neural framing mode for the in-path rule if the rule type is Auto-Discover or Fixed Target. Neural framing enables the system to select the optimal packet framing boundaries for SDR. Neural framing creates a set of heuristics to intelligently determine the optimal moment to flush TCP buffers. The system continuously evaluates these heuristics and uses the optimal heuristic to maximize the amount of buffered data transmitted in each flush, while minimizing the amount of idle time that the data sits in the buffer.
Select a neural framing setting:
• Never—Don’t use the Nagle algorithm. The Nagle algorithm is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. It works by combining a number of small outgoing messages and sending them all at once. With this setting, all data is immediately encoded without waiting for timers to fire or application buffers to fill past a specified threshold. Neural heuristics are computed in this mode but aren’t used. In general, this setting works well with time-sensitive, chatty, or real-time traffic.
• Always—Use the Nagle algorithm. This is the default setting. All data is passed to the codec, which attempts to coalesce consume calls (if needed) to achieve better fingerprinting. A timer (6 ms) backs up the codec and causes leftover data to be consumed. Neural heuristics are computed in this mode but aren’t used. This mode isn’t compatible with IPv6.
• TCP Hints—If data is received from a partial frame packet or a packet with the TCP PUSH flag set, the encoder encodes the data instead of immediately coalescing it. Neural heuristics are computed in this mode but aren’t used. This mode isn’t compatible with IPv6.
• Dynamic—Dynamically adjust the Nagle parameters. In this option, the system discerns the optimum algorithm for a particular type of traffic and switches to the best algorithm based on traffic characteristic changes. This mode isn’t compatible with IPv6.
For different types of traffic, one algorithm might be better than others. The considerations include: latency added to the connection, compression, and SDR performance.
To configure neural framing for an FTP data channel, define an in-path rule with the destination port 20 and set its data reduction policy. To configure neural framing for a MAPI data channel, define an in-path rule with the destination port 7830 and set its data reduction policy.
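The flush behavior each mode implies can be sketched as follows. This is an illustration of the descriptions above, not RiOS internals; the function shape and flag names are assumptions:

```python
# Illustrative sketch of when each neural framing mode flushes buffered
# data to the encoder (not RiOS code; flag names are assumptions).
def should_flush(mode, *, push_flag=False, partial_frame=False, timer_fired=False):
    if mode == "Never":
        # Encode immediately; don't wait for timers or buffer thresholds.
        return True
    if mode == "Always":
        # Nagle: coalesce data until the 6 ms backup timer fires.
        return timer_fired
    if mode == "TCP Hints":
        # Flush on a partial frame packet or a packet with PUSH set.
        return push_flag or partial_frame or timer_fired
    if mode == "Dynamic":
        # Dynamic switches among the strategies above based on observed
        # traffic; modeled here as deferring to the TCP hints.
        return push_flag or partial_frame or timer_fired
    raise ValueError(mode)
```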
WAN Visibility Mode
Enables WAN visibility, which pertains to how packets traversing the WAN are addressed. RiOS provides three types of WAN visibility: correct addressing, port transparency, and full address transparency.
You configure WAN visibility on the client-side SteelHead (where the connection is initiated).
Select one of these modes from the drop-down list:
• Correct Addressing—Disables WAN visibility. Correct addressing uses SteelHead IP addresses and port numbers in the TCP/IP packet header fields for optimized traffic in both directions across the WAN. This is the default setting.
• Port Transparency—Port address transparency preserves your server port numbers in the TCP/IP header fields for optimized traffic in both directions across the WAN. Traffic is optimized while the server port number in the TCP/IP header field appears to be unchanged. Routers and network monitoring devices deployed in the WAN segment between the communicating SteelHeads can view these preserved fields. Port transparency is supported for IPv4 and IPv6 modes.
Use port transparency if you want to manage and enforce QoS policies that are based on destination ports. If your WAN router is following traffic classification rules written in terms of client and network addresses, port transparency enables your routers to use existing rules to classify the traffic without any changes.
Port transparency enables network analyzers deployed within the WAN (between the SteelHeads) to monitor network activity and to capture statistics for reporting by inspecting traffic according to its original TCP port number.
Port transparency doesn’t require dedicated port configurations on your SteelHeads.
Port transparency only provides server port visibility. It doesn’t provide client and server IP address visibility, nor does it provide client port visibility.
• Full Transparency—Full address transparency preserves your client and server IP addresses and port numbers in the TCP/IP header fields for optimized traffic in both directions across the WAN. It also preserves VLAN tags. Traffic is optimized while these TCP/IP header fields appear to be unchanged. Routers and network monitoring devices deployed in the WAN segment between the communicating SteelHeads can view these preserved fields.
If both port transparency and full address transparency are acceptable solutions, port transparency is preferable. Port transparency avoids potential networking risks that are inherent to enabling full address transparency. Full transparency is supported for IPv4 and IPv6 modes. For details, see the SteelHead Deployment Guide.
However, if you must see your client or server IP addresses across the WAN, full transparency is your only configuration option.
Enabling full address transparency requires symmetrical traffic flows between the client and server. If any asymmetry exists on the network, enabling full address transparency might yield unexpected results, up to and including loss of connectivity. For details, see the SteelHead Deployment Guide.
RiOS supports Full Transparency with a stateful firewall. A stateful firewall examines packet headers, stores information, and then validates subsequent packets against this information. If your system uses a stateful firewall, this option is available:
• Full Transparency with Reset—Enables full address and port transparency and also sends a forward reset between receiving the probe response and sending the transparent inner channel SYN. This mode ensures the firewall doesn’t block inner transparent connections because of information stored in the probe connection. The forward reset is necessary because the probe connection and inner connection use the same IP addresses and ports and both map to the same firewall connection. The reset clears the probe connection created by the SteelHead and allows for the full transparent inner connection to traverse the firewall. Both the client-side and server-side SteelHeads must be running RiOS 6.0 or later. Full transparency with reset is supported on IPv6 RiOS 9.7 or later.
Notes:
• For details on configuring WAN visibility and its implications, see the SteelHead Deployment Guide.
• WAN visibility works with autodiscover in-path rules only. It doesn’t work with fixed-target rules or server-side out-of-path SteelHead configurations.
• To enable full transparency globally by default, create an in-path autodiscover rule, select Full, and place it above the default in-path rule and after the Secure, Interactive, and RBT-Proto rules.
• You can configure a SteelHead for WAN visibility even if the server-side SteelHead doesn’t support it, but the connection isn’t transparent.
• You can enable full transparency for servers in a specific IP address range and you can enable port transparency on a specific server. For details, see the SteelHead Deployment Guide.
• The Top Talkers report displays statistics on the most active, heaviest users of WAN bandwidth, providing some WAN visibility without enabling a WAN Visibility Mode.
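The three visibility modes differ only in which TCP/IP header fields appear on the WAN for an optimized connection. A hedged sketch of one direction, based on the descriptions above (endpoint tuples and function shape are illustrative):

```python
# Illustrative sketch (not RiOS code) of the WAN-side addressing each
# visibility mode produces for one direction of an optimized connection.
def wan_headers(mode, client, server, sh_client, sh_server):
    """Return the (src, dst) addressing seen on the WAN. Each endpoint
    is an (ip, port) tuple; sh_* are the SteelHead inner-channel
    endpoints."""
    if mode == "Correct Addressing":
        return sh_client, sh_server            # SteelHead IPs and ports
    if mode == "Port Transparency":
        # Server port preserved; IP addresses are still the SteelHeads'.
        return sh_client, (sh_server[0], server[1])
    if mode == "Full Transparency":
        return client, server                  # client/server IPs and ports preserved
    raise ValueError(mode)

client, server = ("10.1.1.5", 52000), ("10.2.2.9", 443)
shc, shs = ("192.0.2.1", 7800), ("192.0.2.2", 7800)
wan_headers("Full Transparency", client, server, shc, shs)  # original addressing survives
```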
Position
Specifies Start, End, or a rule number from the drop-down list. SteelHeads evaluate rules in numerical order starting with rule 1. If the conditions set in the rule match, then the rule is applied, and the system moves on to the next packet. If the conditions set in the rule don’t match, the system consults the next rule. For example, if the conditions of rule 1 don’t match, rule 2 is consulted. If rule 2 matches the conditions, it’s applied, and no further rules are consulted.
In general, list rules in this order:
1. Deny
2. Discard
3. Pass-through
4. Fixed-Target
5. Auto-Discover
Place rules that use domain labels below others.
The default rule, Auto-Discover, which optimizes all remaining traffic that hasn’t been selected by another rule, can’t be removed and is always listed last.
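The first-match evaluation described above can be sketched as follows; the rule representation is illustrative, with the default Auto-Discover rule modeled as the fallback:

```python
# Illustrative first-match evaluation of an ordered in-path rule list
# (not RiOS code). The default Auto-Discover rule is always last and
# matches all remaining traffic.
def first_match(rules, conn):
    for rule in rules:
        if rule["match"](conn):
            return rule["action"]      # first match wins; later rules ignored
    return "auto-discover"             # default rule

rules = [
    {"action": "pass-through", "match": lambda c: c["dst_port"] == 22},
    {"action": "fixed-target", "match": lambda c: c["dst_port"] == 8080},
]
first_match(rules, {"dst_port": 22})   # rule 1 wins; rule 2 is never consulted
```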
Description
Describes the rule to facilitate administration.
Enable Rule
Enables the in-path rule. Use this option to test in-path rules.
Add
Adds the rule to the list. The Management Console redisplays the In-Path Rules table and applies your modifications to the running configuration, which is stored in memory.
If necessary, you can reorder your rules. In the In-Path Rules table, use the drop-down lists in the Rule column.
The default rule, which optimizes all remaining traffic that hasn’t been selected by another rule, can’t be removed and is always listed last.
General Service Settings (Interceptor)
You can set virtual in-path settings in the General Service Settings (Interceptor) page when the Interceptor is running in standard mode.
WCCP isn’t supported when the Interceptor is running in VLAN segregation mode.
For details, see the SteelHead Interceptor User Guide or the SteelHead Interceptor Deployment Guide.
These configuration options are available:
Enable PBR/WCCP
Enables the virtual in-path support on all the interfaces for networks that use PBR or WCCP.
• Policy-based routing (PBR)—PBR allows you to define policies to route packets instead of relying on routing protocols. You enable PBR to redirect traffic that you want optimized by an Interceptor that’s not in the direct physical path between the client and server.
• Web Cache Communication Protocol (WCCP)—A packet redirection mechanism that directs packets to RiOS appliances that aren’t in the direct physical path to ensure that they’re optimized. Use WCCP if your network design requires it.
External traffic redirection is supported on only the first in-path interface.
Enable CDP for PBR
Specifies Enable CDP for PBR for a failover deployment that uses PBR rather than WCCP to redirect traffic to a backup appliance.
• CDP Hold Time—Specify the CDP message hold time, in seconds. The default value is 180 seconds.
• CDP Interval—Specify the CDP message polling interval, in seconds. The default value is 10 seconds.
Peering rules
You configure peering rules for the selected optimization policy in the Peering Rules page.
Peering rules are an ordered list of fields that an appliance matches against incoming SYN packet fields (for example, source or destination subnet, IP address, VLAN, or TCP port), as well as against the IP address of the probing appliance. Only the first matching rule is applied. This is especially useful in complex networks. For detailed information about peering rules, see the SteelHead User Guide.
Automatic peering is disabled by default. For detailed information about enhanced autodiscovery and automatic peering, see the SteelHead User Guide.
These configuration options are available:
Enable Enhanced IPv4 Auto-Discovery
Enables enhanced autodiscovery for IPv4 and mixed (dual-stack) IPv4 and IPv6 networks.
With enhanced autodiscovery, the SteelHead automatically finds the furthest SteelHead along the connection path of the TCP connection, and optimization occurs there: for example, in a deployment with four SteelHeads (A, B, C, D), where D represents the appliance that is furthest from A, the SteelHead automatically finds D. This feature simplifies configuration and makes your deployment more scalable.
By default, enhanced autodiscovery peering is enabled. Without enhanced autodiscovery, the SteelHead uses regular autodiscovery. With regular autodiscovery, the SteelHead finds the first remote SteelHead along the connection path of the TCP connection, and optimization occurs there: for example, if you had a deployment with four SteelHeads (A, B, C, D), where D represents the appliance that is furthest from A, the SteelHead automatically finds B, then C, and finally D, and optimization takes place in each.
This option uses an IPv4 channel to the peer SteelHead over a TCP connection, and your network connection must support IPv4 for the inner channels between the SteelHead and the SteelCentral Controller for Client Accelerator. If you have an all-IPv6 (single-stack IPv6) network, select the Enable Enhanced IPv6 Auto-Discovery option.
For detailed information about deployments that require enhanced autodiscovery peering, see the SteelHead Deployment Guide.
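The difference between regular and enhanced autodiscovery reduces to which appliance on the probe path becomes the peer. A minimal illustration of the A-B-C-D example above:

```python
# Simplified illustration of the peering difference described above:
# given the remote SteelHeads a probe traverses in order, regular
# autodiscovery peers with the first remote appliance, enhanced
# autodiscovery with the furthest one.
def pick_peer(path, *, enhanced):
    """path lists the remote SteelHeads in probe order, e.g. ['B', 'C', 'D']."""
    return path[-1] if enhanced else path[0]

pick_peer(["B", "C", "D"], enhanced=True)   # the furthest appliance
```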
Enable Enhanced IPv6 Auto-Discovery
Enables enhanced autodiscovery for single-stack IPv6 networks.
Enable Extended Peer Table
Enables support for up to 20,000 peers on high-end server-side SteelHeads (models 5050, 5520, 6020, 6050, 6120, 7050, CX models 5055 and 7055) to accommodate large SteelHead client deployments. The RiOS data store maintains the peers in groups of 1,024 in the global peer table.
We recommend enabling the extended peer table if you have more than 4,000 peers.
By default, this option is disabled and it is unavailable on SteelHead models that don’t support it.
Before enabling this feature you must have a thorough understanding of performance and scaling issues. When deciding whether to use extended peer table support, you should compare it with a serial cluster deployment. For details on serial clusters, see the SteelHead Deployment Guide.
After enabling this option, you must clear the RiOS data store and stop and restart the service.
Enable Latency Detection
Enables peer appliances to pass through traffic without optimizing it when the latency between the peers is below the configured threshold. The latency threshold is in milliseconds and the default is 10 ms.
When latency between peers is low enough, simply passing through unoptimized traffic can be faster than transmitting optimized traffic.
When enabled, you can specify the Ignore Latency Detection flag in peer in-path rules to disable the feature on specific rules as needed.
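The decision latency detection makes can be sketched in a few lines; the function shape and flag name are illustrative:

```python
# Illustrative sketch of the latency detection decision: below the
# threshold, the connection is passed through unoptimized; a peer
# in-path rule can opt out via the Ignore Latency Detection flag.
DEFAULT_THRESHOLD_MS = 10  # documented default

def pass_through_for_latency(rtt_ms, threshold_ms=DEFAULT_THRESHOLD_MS,
                             ignore_latency_detection=False):
    if ignore_latency_detection:
        return False           # rule opted out; optimize as usual
    return rtt_ms < threshold_ms
```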
Add peering rules
These configuration options are available:
Add a New Peering Rule
Displays the controls for adding a new peering rule.
Rule Type
Determines which action the SteelHead takes on the connection. Select one of these rule types from the drop-down list:
• Auto—Allows built-in functionality to determine the response for peering requests (performs the best peering possible). If the receiving SteelHead isn’t using enhanced autodiscovery, this has the same effect as the Accept peering rule action. If enhanced autodiscovery is enabled, the SteelHead only becomes the optimization peer if it is the last SteelHead in the path to the server.
• Accept—Accepts peering requests that match the source-destination-port pattern. The receiving SteelHead responds to the probing SteelHead and becomes the remote-side SteelHead (that is, the peer SteelHead) for the optimized connection.
• Passthrough—Allows pass-through peering requests that match the source and destination port pattern. The receiving SteelHead doesn’t respond to the probing SteelHead, and allows the SYN+probe packet to continue through the network.
Insert Rule At
Determines the order in which the system evaluates the rule. Select Start, End, or a rule number from the drop-down list.
The system evaluates rules in numerical order starting with rule 1. If the conditions set in the rule match, then the rule is applied and the system moves on to the next rule: for example, if the conditions of rule 1 don’t match, rule 2 is consulted. If rule 2 matches the conditions, it is applied, and no further rules are consulted.
The Rule Type of a matching rule determines which action the SteelHead takes on the connection.
Source Subnet
Specifies an IP address and mask for the traffic source.
You can also specify wildcards:
• All-IPv4 is the wildcard for single-stack IPv4 networks.
• All-IPv6 is the wildcard for single-stack IPv6 networks.
• All-IP is the wildcard for all IPv4 and IPv6 networks.
Use these formats:
xxx.xxx.xxx.xxx/xx (IPv4)
x:x:x::x/xxx (IPv6)
Destination Subnet
Specifies an IP address and mask for the traffic destination.
You can also specify wildcards:
• All-IPv4 is the wildcard for single-stack IPv4 networks.
• All-IPv6 is the wildcard for single-stack IPv6 networks.
• All-IP is the wildcard for all IPv4 and IPv6 networks.
Use these formats:
xxx.xxx.xxx.xxx/xx (IPv4)
x:x:x::x/xxx (IPv6)
• Port—Specify the destination port number, port label, or all.
Peer IP Address
Specifies the in-path IPv4 or IPv6 address of the probing SteelHead. If more than one in-path interface is present on the probing SteelHead, apply multiple peering rules, one for each in-path interface.
You can also specify wildcards:
• All-IPv4 is the wildcard for single-stack IPv4 networks.
• All-IPv6 is the wildcard for single-stack IPv6 networks.
• All-IP is the wildcard for all IPv4 and IPv6 networks.
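The subnet and wildcard matching described for these fields can be sketched with Python's standard ipaddress module (the function shape is illustrative):

```python
import ipaddress

# Illustrative matching of an address against a subnet field that may
# also be one of the documented wildcards (All-IP, All-IPv4, All-IPv6).
def subnet_matches(field, address):
    addr = ipaddress.ip_address(address)
    if field == "All-IP":
        return True                      # any IPv4 or IPv6 address
    if field == "All-IPv4":
        return addr.version == 4
    if field == "All-IPv6":
        return addr.version == 6
    return addr in ipaddress.ip_network(field, strict=False)

subnet_matches("All-IPv6", "2001:38dc:52::e9a4:c5:6282")  # wildcard match
subnet_matches("10.12.10.0/24", "10.12.10.10")            # CIDR match
```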
SSL Capability
Enables an SSL capability flag, which specifies criteria for matching an incoming connection with one of the rules in the peering rules table. This flag is typically set on a server-side SteelHead.
Select one of these options from the drop-down list to determine how to process attempts to create secure SSL connections:
• No Check—The peering rule doesn’t determine whether the server SteelHead is present for the particular destination IP address and port combination.
• Capable—The peering rule determines that the connection is SSL-capable if the destination port is 443 (irrespective of the destination port value on the rule), and the destination IP and port don’t appear on the bypassed servers list. The SteelHead accepts the condition and, assuming all other proper configurations and that the peering rule is the best match for the incoming connection, optimizes SSL.
• Incapable—The peering rule determines that the connection is SSL-incapable if the destination IP and port appear in the bypassed servers list. The service adds a server to the bypassed servers list when there is no SSL certificate for the server or for any other SSL handshake failure. The SteelHead passes the connection through unoptimized without affecting connection counts.
We recommend that you use in-path rules to optimize SSL connections on non-443 destination port configurations.
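The three flag behaviors above reduce to a small decision over the destination and the bypassed servers list. An illustrative sketch (not RiOS code; the bypassed list is modeled as a set of (ip, port) pairs):

```python
# Illustrative decision logic for the SSL capability flags, following
# the rule descriptions above. Return values are labels for this sketch.
def ssl_decision(flag, dst_ip, dst_port, bypassed):
    if flag == "No Check":
        return "no-check"          # presence of the SSL server isn't checked
    if flag == "Capable" and dst_port == 443 and (dst_ip, dst_port) not in bypassed:
        return "optimize-ssl"      # connection treated as SSL-capable
    if flag == "Incapable" and (dst_ip, dst_port) in bypassed:
        return "pass-through"      # passed through without affecting counts
    return "no-match"
```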
Cloud Acceleration
Ensures that cloud acceleration is ready and enabled after you subscribe to a SaaS platform and enable it. When cloud acceleration is enabled, connections to the subscribed SaaS platform are optimized by the SteelHead SaaS. You don’t need to add an in-path rule unless you want to optimize specific users and exclude others. Select one of these choices from the drop-down list:
• Auto—If the in-path rule matches, the connection is optimized by the SteelHead SaaS connection.
• Pass Through—If the in-path rule matches, the connection isn’t optimized by the SteelHead SaaS, but it follows the other rule parameters so that the connection might be optimized by this SteelHead with other SteelHeads in the network, or it might be passed through.
Domain labels and cloud acceleration are mutually exclusive.
SteelHead SaaS doesn’t support host labels.
Description
Specifies a description to help you identify the peering relationship.
Add
Adds a peering rule to the list. The Management Console redisplays the Peering Rules table and applies your modifications to the running configuration, which is stored in memory.
Xbridge
Xbridge is a software-packet-processing enhancement supported on Interceptor appliances equipped with compatible NICs.
You must reboot managed appliances after pushing an Xbridge policy to them.
Xbridge speeds up optimized traffic handling. When it is enabled, Xbridge provides significant line-throughput improvement for optimized and pass-through traffic on an Interceptor. For details about configuring Layer-4 switch, PBR, and WCCP deployments, see the SteelHead Interceptor User Guide or the SteelHead Interceptor Deployment Guide.
You can enable or disable the Xbridge feature for Interceptor 9600 appliances in the Xbridge page. Xbridge is enabled by default on Interceptor 9800 appliances and cannot be disabled.
This configuration option is available:
Enable Xbridge
Enables the Xbridge feature.
Transport Settings
You configure the TCP settings for the selected optimization policy in the Transport Settings page.
To properly configure transport settings for your environment, you need to understand its characteristics. For information on gathering performance characteristics for your environment, see the SteelHead Deployment Guide.
For detailed information about transport settings, see the SteelHead User Guide.
Enabling congestion control algorithm
These configuration options are available:
Congestion Control Algorithm
Specifies the congestion control method from the drop-down list:
• Standard (RFC-Compliant)—Optimizes non-SCPS TCP connections by applying data and transport streamlining for TCP traffic over the WAN. This control forces peers to use standard TCP as well. For details on data and transport streamlining, see the SteelHead Deployment Guide. This option clears any advanced bandwidth congestion control that was previously set.
• HighSpeed—Enables high-speed TCP optimization for more complete use of long fat pipes (high-bandwidth, high-delay networks). Don’t enable for satellite networks.
We recommend that you enable high-speed TCP optimization only after you have carefully evaluated whether it will benefit your network environment. For details about the trade-offs of enabling high-speed TCP, see the tcp highspeed enable command in the Riverbed Command-Line Interface Reference Guide.
• Bandwidth Estimation—Uses an intelligent bandwidth estimation algorithm along with a modified slow-start algorithm to optimize performance in long lossy networks. These networks typically include satellite and other wireless environments, such as cellular networks, long-range microwave, or WiMAX networks.
Bandwidth estimation is a sender-side modification of TCP and is compatible with the other TCP stacks in the RiOS system. The intelligent bandwidth estimation is based on analysis of both ACKs and latency measurements. The modified slow-start mechanism enables a flow to ramp up faster in high latency environments than traditional TCP. The intelligent bandwidth estimation algorithm allows it to learn effective rates for use during modified slow start, and also to differentiate BER loss from congestion-derived loss and deal with them accordingly. Bandwidth estimation has good fairness and friendliness qualities toward other traffic along the path.
• SkipWare Error-Tolerant—Enables SkipWare optimization with the error-rate detection and recovery mechanism on the SteelHead. This method is compatible with IPv6.
This method tolerates some loss due to corrupted packets (bit errors), without reducing the throughput, using a modified slow-start algorithm and a modified congestion avoidance approach. It requires significantly more retransmitted packets to trigger this congestion-avoidance algorithm than the SkipWare per-connection setting. Error-tolerant TCP optimization assumes that the environment has a high BER and most retransmissions are due to poor signal quality instead of congestion. This method maximizes performance in high-loss environments, without incurring the additional per-packet overhead of a FEC algorithm at the transport layer.
Use caution when enabling error-tolerant TCP optimization, particularly in channels with coexisting TCP traffic, because it can be quite aggressive and adversely affect channel congestion with competing TCP flows.
The Management Console dims this setting until you install a SkipWare license.
Enable Rate Pacing
Imposes a global data transmit limit on the link rate for all SCPS connections between peer SteelHeads or on the link rate for a SteelHead paired with a third-party device running TCP-PEP (Performance Enhancing Proxy).
Rate pacing combines MX-TCP and a congestion-control method of your choice for connections between peer SteelHeads and SEI connections (on a per-rule basis). The congestion-control method runs as an overlay on top of MX-TCP and probes for the actual link rate. It then communicates the available bandwidth to MX-TCP.
Enable rate pacing to prevent these problems:
• Congestion loss while exiting the slow-start phase. The slow-start phase is an important part of the TCP congestion-control mechanism: the sender slowly increases its window size as it gains confidence about the network throughput.
• Congestion collapse.
• Packet bursts.
Rate pacing is disabled by default.
With no congestion, the slow start ramps up to the MX-TCP rate and settles there. When RiOS detects congestion (due to other traffic sources, a bottleneck other than the satellite modem, or a variable modem rate), the congestion-control method kicks in to avoid congestion loss and exit the slow-start phase faster.
Enable rate pacing on the client-side SteelHead along with a congestion-control method. The client-side SteelHead communicates to the server-side SteelHead that rate pacing is in effect. You must also:
• Enable Auto-Detect TCP Optimization on the server-side SteelHead to negotiate the configuration with the client-side SteelHead.
• Configure an MX-TCP QoS rule to set the appropriate rate cap. If an MX-TCP QoS rule isn’t in place, rate pacing isn’t applied and the congestion-control method takes effect. You can’t delete the MX-TCP QoS rule when rate pacing is enabled.
The Management Console dims this setting until you install a SkipWare license.
You can also enable rate pacing for SEI connections by defining an SEI rule for each connection.
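The slow-start ramp described above can be sketched with a toy model (all numbers are illustrative, not RiOS internals): the window grows exponentially each round trip until it reaches the MX-TCP rate cap and settles there.

```python
# Toy model of TCP slow start ramping toward a configured MX-TCP rate cap.
# All numbers are illustrative; this is not the RiOS implementation.

def slow_start_ramp(rate_cap_pkts, initial_cwnd=2):
    """Double the congestion window each RTT until it reaches the rate cap."""
    cwnd = initial_cwnd
    history = [cwnd]
    while cwnd < rate_cap_pkts:
        cwnd = min(cwnd * 2, rate_cap_pkts)  # exponential growth, capped
        history.append(cwnd)
    return history

# With a hypothetical cap of 64 packets per RTT, the window settles at the cap.
print(slow_start_ramp(64))  # [2, 4, 8, 16, 32, 64]
```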
Configuring buffer settings
The buffer settings in the Transport Settings page support high-speed TCP and are also used in data protection scenarios to improve performance. For details about data protection deployments, see the SteelHead Deployment Guide.
To properly configure buffer settings for a satellite environment, you need to understand its characteristics. For information on gathering performance characteristics for your environment, see the SteelHead Deployment Guide.
The high-speed TCP feature provides acceleration and high throughput for high-bandwidth links (also known as Long Fat Networks, or LFNs) where the WAN pipe is large but latency is high. High-speed TCP is activated for all connections that have a bandwidth-delay product (BDP) larger than 100 packets.
For details about using HS-TCP in data protection scenarios, see the SteelHead Deployment Guide.
Automatic HighSpeed TCP is disabled by default. For details about HighSpeed TCP, see the SteelHead User Guide.
These configuration options are available:
LAN Send Buffer Size
Specifies the send buffer size used to send data out of the LAN. The default value is 81920.
LAN Receive Buffer Size
Specifies the receive buffer size used to receive data from the LAN. The default value is 32768.
WAN Default Send Buffer Size
Specifies the send buffer size used to send data out of the WAN. The default value is 262140.
WAN Default Receive Buffer Size
Specifies the receive buffer size used to receive data from the WAN. The default value is 262140.
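A quick way to sanity-check these buffer sizes against a link is to compute its bandwidth-delay product (BDP). The sketch below uses hypothetical link figures and assumes 1500-byte packets; the 100-packet HS-TCP threshold and the 262140-byte WAN default are the values documented above.

```python
# Estimate the bandwidth-delay product (BDP) for a link and compare it with
# the HS-TCP activation threshold (100 packets) and the default WAN buffer.
# The link figures below are hypothetical examples.

PACKET_SIZE = 1500           # bytes; assumes MTU-sized packets
HSTCP_THRESHOLD_PKTS = 100   # HS-TCP activates above this BDP
WAN_DEFAULT_BUFFER = 262140  # default WAN send/receive buffer, in bytes

def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product in bytes: bits/s * s / 8."""
    return int(bandwidth_bps * rtt_seconds / 8)

# Example: a 45 Mbps link with 600 ms satellite RTT.
bdp = bdp_bytes(45_000_000, 0.6)
print(bdp)                       # 3375000 bytes
print(bdp // PACKET_SIZE)        # 2250 packets -> HS-TCP activates
print(bdp > WAN_DEFAULT_BUFFER)  # True: buffers may need raising
```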
Enabling and adding single-ended connection rules
You can optionally add rules to control single-ended SCPS connections. The SteelHead uses these rules to determine whether to enable or pass through SCPS connections.
A SteelHead receiving an SCPS connection on the WAN evaluates only the single-ended connection rules table.
To pass through an SCPS connection, we recommend setting both an in-path rule and a single-ended connection rule.
This configuration option is available:
Enable Single-Ended Connection Rules Table
Enables transport optimization for single-ended interception connections with no SteelHead peer. These connections appear in the rules table.
In RiOS 8.5 and later, you can impose rate pacing for single-ended interception connections with no peer SteelHead. By defining an SEI connection rule, you can enforce rate pacing even when the SteelHead isn’t peered with an SCPS device and SCPS isn’t negotiated.
To enforce rate pacing for a single-ended interception connection, create an SEI connection rule for use as a transport-optimization proxy, select a congestion method for the rule, and then configure a QoS rule (with the same client/server subnet) to use MX-TCP. RiOS 8.5 and later accelerate the WAN-originated or LAN-originated proxied connection using MX-TCP.
By default, the SEI connection rules table is disabled. When enabled, two default rules appear in the rules table. The first default rule matches all traffic with the destination port set to the interactive port label and bypasses the connection for SCPS optimization.
The second default rule matches all traffic with the destination port set to the RBT-Proto port label and bypasses the connection for SCPS optimization.
This option doesn’t affect the optimization of SCPS connections between SteelHeads.
When you disable the table, you can still add, move, or remove rules, but the changes don’t take effect until you reenable the table.
The Management Console dims the SEI rules table until you install a SkipWare license.
Enable SkipWare Legacy Compression—Enables negotiation of SCPS-TP TCP header and data compression with a remote SCPS-TP device.
Legacy compression is disabled by default.
After enabling or disabling legacy compression, you must restart the optimization service.
The Management Console dims legacy compression until you install a SkipWare license and enable the SEI rules table.
Legacy compression also works with non-SCPS TCP algorithms.
These limits apply to legacy compression:
• This feature isn’t compatible with IPv6.
• Packets with a compressed TCP header use IP protocol 105 in the encapsulating IP header; this might require changes to intervening firewalls to permit protocol 105 packets to pass.
• This feature supports a maximum of 255 connections between any pair of end-host IP addresses. The connection limit for legacy SkipWare connections is the same as the appliance connection limit.
• QoS limits for the SteelHead apply to the legacy SkipWare connections.
Adding single-ended connection rules
You can optionally add rules to control single-ended SCPS connections. The SteelHead uses these rules to determine whether to enable or pass through SCPS connections.
A SteelHead receiving an SCPS connection on the WAN evaluates only the single-ended connection rules table.
To pass through an SCPS connection, we recommend setting both an in-path rule and a single-ended connection rule.
These configuration options are available:
Add New Rule
Displays the controls for adding a new rule.
Position
Specifies Start, End, or a rule number from the drop-down list. SteelHeads evaluate rules in numerical order starting with rule 1. If the conditions set in the rule match, the rule is applied and the system moves on to the next packet. If they don’t match, the system consults the next rule; for example, if the conditions of rule 1 don’t match, rule 2 is consulted. If rule 2 matches, it is applied and no further rules are consulted.
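The first-match evaluation described above can be sketched as follows (the rule fields and actions are simplified, hypothetical stand-ins for the actual rule attributes):

```python
# Minimal sketch of first-match rule evaluation: rules are checked in order
# and the first rule whose conditions match is applied; later rules are
# never consulted for that connection. Fields here are illustrative only.

def evaluate(rules, conn):
    for number, rule in enumerate(rules, start=1):
        # A rule field matches if it is "all", absent, or equal to the
        # connection's value for that field.
        if all(rule.get(k) in (None, "all", conn.get(k)) for k in ("port", "vlan")):
            return number, rule["action"]  # first match wins
    return None, "default"

rules = [
    {"port": 443, "vlan": "all", "action": "tcp-proxy"},
    {"port": "all", "vlan": "all", "action": "scps-discover"},
]
print(evaluate(rules, {"port": 443, "vlan": 10}))  # (1, 'tcp-proxy')
print(evaluate(rules, {"port": 80, "vlan": 10}))   # (2, 'scps-discover')
```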
Source Subnet
Specifies an IPv4 or IPv6 address and mask for the traffic source; otherwise, specify All-IP for all IPv4 and IPv6 traffic.
You can also specify wildcards:
• All-IPv4 is the wildcard for single-stack IPv4 networks.
• All-IPv6 is the wildcard for single-stack IPv6 networks.
• All-IP is the wildcard for all IPv4 and IPv6 networks.
xxx.xxx.xxx.xxx/xx (IPv4)
x:x:x::x/xxxx (IPv6)
Destination Subnet
Specifies an IPv4 or IPv6 address and mask pattern for the traffic destination; otherwise, specify All-IP for all traffic.
Use these formats:
xxx.xxx.xxx.xxx/xx (IPv4)
x:x:x::x/xxxx (IPv6)
Port or Port Label
Specifies the destination port number, port label, or all.
Click Port Label to go to the Networking > App Definitions: Port Labels page for reference.
VLAN Tag ID
Specifies one of these:
• a VLAN identification number from 1 to 4094
• all to specify that the rule applies to all VLANs
• untagged to specify that the rule applies to untagged connections
RiOS supports VLAN 802.1Q. To configure VLAN tagging, configure SCPS rules to apply to all VLANs or to a specific VLAN. By default, rules apply to all VLAN values unless you specify a particular VLAN ID. Pass-through traffic maintains any preexisting VLAN tagging between the LAN and WAN interfaces.
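A minimal sketch of the accepted VLAN Tag ID values (an integer from 1 through 4094, or the keywords all and untagged):

```python
# Sketch of validating the VLAN Tag ID field: an integer 1-4094, or the
# keywords "all" / "untagged". Purely illustrative input checking.

def valid_vlan_tag(value):
    if value in ("all", "untagged"):
        return True
    try:
        return 1 <= int(value) <= 4094
    except (TypeError, ValueError):
        return False

print(valid_vlan_tag("all"), valid_vlan_tag(1), valid_vlan_tag(4094))  # True True True
print(valid_vlan_tag(0), valid_vlan_tag(4095), valid_vlan_tag("bogus"))  # False False False
```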
Web Proxy
Specifies one of these options from the drop-down list:
• Ignore—Ignores web proxy settings for this rule.
• Disabled—Disables web proxy settings for this rule.
• Enabled—Enables web proxy settings for this rule.
Traffic
Specifies the action that the rule takes on an SCPS connection. To allow single-ended interception SCPS connections to pass through the SteelHead unoptimized, disable SCPS Discover and TCP Proxy.
Select one of these options:
• SCPS Discover—Turns on SCPS and turns off TCP proxy.
• TCP Proxy—Turns off SCPS and turns on TCP proxy.
Congestion Control Algorithm
Specifies a method for congestion control from the drop-down list:
• Standard (RFC-Compliant)—Optimizes non-SCPS TCP connections by applying data and transport streamlining for TCP traffic over the WAN. This control forces peers to use standard TCP as well. For details on data and transport streamlining, see the SteelHead Deployment Guide. This option clears any advanced bandwidth congestion control that was previously set.
• Bandwidth Estimation—Uses an intelligent bandwidth estimation algorithm along with a modified slow-start algorithm to optimize performance in long lossy networks. These networks typically include satellite and other wireless environments, such as cellular networks, longer microwave, or Wi-Max networks.
Bandwidth estimation is a sender-side modification of TCP and is compatible with the other TCP stacks in the RiOS system. The intelligent bandwidth estimation is based on analysis of both ACKs and latency measurements. The modified slow-start mechanism enables a flow to ramp up faster in high-latency environments than traditional TCP. The intelligent bandwidth estimation algorithm lets a flow learn effective rates for use during modified slow start, and also differentiates BER loss from congestion-derived loss and handles each accordingly. Bandwidth estimation has good fairness and friendliness qualities toward other traffic along the path.
• SkipWare Per-Connection—Applies TCP congestion control to each SCPS-capable connection. This method is compatible with IPv6. The congestion control uses:
• a pipe algorithm that gates when a packet should be sent after receipt of an ACK.
• the NewReno algorithm, which includes the sender's congestion window, slow start, and congestion avoidance.
• time stamps, window scaling, appropriate byte counting, and loss detection.
This transport setting uses a modified slow-start algorithm and a modified congestion-avoidance approach, which enables SCPS per connection to ramp up flows faster in high-latency environments and to handle lossy scenarios while remaining reasonably fair and friendly to other traffic. SCPS per connection efficiently fills satellite links of all sizes, making SkipWare per connection a high-performance option for satellite networks.
The Management Console dims this setting until you install a SkipWare license.
• SkipWare Error-Tolerant—Enables SkipWare optimization with the error-rate detection and recovery mechanism on the SteelHead. This method is compatible with IPv6.
This method tolerates some loss due to corrupted packets (bit errors), without reducing the throughput, using a modified slow-start algorithm and a modified congestion avoidance approach. It requires significantly more retransmitted packets to trigger this congestion-avoidance algorithm than the SkipWare per-connection setting. Error-tolerant TCP optimization assumes that the environment has a high BER and most retransmissions are due to poor signal quality instead of congestion. This method maximizes performance in high-loss environments, without incurring the additional per-packet overhead of a FEC algorithm at the transport layer.
Use caution when enabling error-tolerant TCP optimization, particularly in channels with coexisting TCP traffic, because it can be quite aggressive and adversely affect channel congestion with competing TCP flows.
The Management Console dims this setting until you install a SkipWare license.
• Cubic—Enables the Cubic congestion control algorithm. Cubic is the local default congestion control algorithm when two peer SteelHeads are both configured to auto-detect. Cubic offers better performance and faster recovery after congestion events than NewReno, the previous local default.
Enable Rate Pacing
Imposes a global data transmit limit on the link rate for all SCPS connections between peer SteelHeads or on the link rate for a SteelHead paired with a third-party device running TCP-PEP (Performance Enhancing Proxy).
Rate pacing combines MX-TCP and a congestion-control method of your choice for connections between peer SteelHeads and SEI connections (on a per-rule basis). The congestion-control method runs as an overlay on top of MX-TCP and probes for the actual link rate. It then communicates the available bandwidth to MX-TCP.
Enable rate pacing to prevent these problems:
• Congestion loss while exiting the slow-start phase. Slow start is the part of TCP congestion control in which the sender gradually increases its window size as it gains confidence in the available network throughput.
• Congestion collapse.
• Packet bursts.
Rate pacing is disabled by default.
With no congestion, the slow start ramps up to the MX-TCP rate and settles there. When RiOS detects congestion (either due to other sources of traffic, a bottleneck other than the satellite modem, or because of a variable modem rate), the congestion-control method kicks in to avoid congestion loss and exit the slow start phase faster.
Enable rate pacing on the client-side SteelHead along with a congestion-control method. The client-side SteelHead communicates to the server-side SteelHead that rate pacing is in effect. You must also:
• Enable Auto-Detect TCP Optimization on the server-side SteelHead to negotiate the configuration with the client-side SteelHead.
• Configure an MX-TCP QoS rule to set the appropriate rate cap. If an MX-TCP QoS rule isn’t in place, rate pacing isn’t applied and the congestion-control method takes effect. You can’t delete the MX-TCP QoS rule when rate pacing is enabled.
The Management Console dims this setting until you install a SkipWare license.
Rate pacing doesn’t support IPv6.
You can also enable rate pacing for SEI connections by defining an SEI rule for each connection.
Add
Adds the rule to the list. The Management Console redisplays the SCPS Rules table and applies your modifications to the running configuration, which is stored in memory.
Service ports
Service ports are the ports used for inner connections between SteelHeads.
You can configure multiple service ports on the server-side of the network for multiple QoS mappings. You define a new service port and then map destination ports to that port, so that QoS configuration settings on the router are applied to that service port.
Configuring service port settings is optional.
For details about service ports, see the SteelHead User Guide.
Service port settings
These configuration options are available:
Service Ports
Specifies ports in a comma-separated list. The default service ports are 7800 and 7810.
Default Port
Specifies the default service port from the drop-down list. The default service ports are 7800 and 7810.
Service ports
These configuration options are available:
Add a New Service Port Mapping
Displays the controls to add a new mapping.
Destination Port
Specifies a destination port number.
Service Port
Specifies a port number.
Add
Adds the port numbers.
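Conceptually, the mapping works like a lookup table keyed on destination port, falling back to the default service port. In the sketch below, only 7800 and 7810 are the documented defaults; the destination-port mappings are hypothetical.

```python
# Sketch of a destination-port -> service-port mapping table, so router QoS
# classes keyed on the service port see the intended inner connections.
# 7800/7810 are the documented default service ports; the destination
# mappings (443, 1433) are hypothetical examples.

service_ports = {7800, 7810}        # configured service ports
default_port = 7800
port_map = {443: 7810, 1433: 7810}  # hypothetical destination mappings

def service_port_for(dest_port):
    port = port_map.get(dest_port, default_port)
    assert port in service_ports, "mapped port must be a configured service port"
    return port

print(service_port_for(443))   # 7810
print(service_port_for(8080))  # 7800 (falls back to the default port)
```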
Data Store
You can display and modify RiOS data store settings for the selected optimization policy on the Data Store page.
SteelHeads transparently intercept and analyze all of your WAN traffic. TCP traffic is segmented, indexed, and stored as segments of data, and the references representing that data are stored on the RiOS data store within SteelHeads on both sides of your WAN. After the data has been indexed, it is compared to data already on the disk. Segments of data that have been seen before aren’t transferred across the WAN again; instead a reference is sent in its place that can index arbitrarily large amounts of data, thereby massively reducing the amount of data that needs to be transmitted. One small reference can refer to megabytes of existing data that has been transferred over the WAN before.
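The segment-and-reference scheme can be illustrated with a toy sketch (fixed-size segments and truncated hashes are simplifications here; real SDR uses variable-size segments and hierarchical references that can index arbitrarily large amounts of data):

```python
import hashlib

# Toy sketch of reference-based deduplication in the spirit of SDR:
# split a byte stream into fixed-size segments, store each new segment
# once, and send a short hash reference for segments seen before.

SEG = 8  # tiny fixed segment size, for illustration only

def encode(stream, store):
    out = []
    for i in range(0, len(stream), SEG):
        seg = stream[i:i + SEG]
        ref = hashlib.sha256(seg).hexdigest()[:8]
        if ref in store:
            out.append(("ref", ref))   # seen before: send only a reference
        else:
            store[ref] = seg
            out.append(("data", seg))  # new: send the data and remember it
    return out

store = {}
first = encode(b"AAAAAAAABBBBBBBB", store)
second = encode(b"AAAAAAAACCCCCCCC", store)
print([kind for kind, _ in first])   # ['data', 'data']
print([kind for kind, _ in second])  # ['ref', 'data'] -- repeat not resent
```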
General settings
Encrypting the RiOS data store significantly limits the exposure of sensitive data in the event an appliance is compromised by loss, theft, or a security violation. The secure data is difficult for a third-party to retrieve.
Encrypting the RiOS data store can have performance implications; generally, higher security means less performance. Several encryption strengths are available to provide the right amount of security while maintaining the desired performance level. When selecting an encryption type, you must evaluate the network structure, the type of data that travels over it, and how much of a performance trade-off is worth the extra security.
For details about RiOS data store encryption and synchronization, see the SteelHead User Guide.
These configuration options are available:
Data Store Encryption Type
Specifies one of these encryption types from the drop-down list. The encryption types are listed from the least to the most secure.
• None—Turns off data encryption.
• AES_128—Encrypts data using the AES cryptographic key length of 128 bits.
• AES_192—Encrypts data using the AES cryptographic key length of 192 bits.
• AES_256—Encrypts data using the AES cryptographic key length of 256 bits.
Enable Automated Data Store Synchronization
Enables automated RiOS data store synchronization. Data store synchronization ensures that each RiOS data store in your network has warm data for maximum optimization.
All operations occur in the background and don’t disrupt operations on any of the systems.
Current Appliance
Specifies Master or Backup from the drop-down list.
Peer IP Address
Specifies the IP address for the peer appliance. Specify the IP address of the primary interface, or of the auxiliary interface if you use the auxiliary interface in place of the primary.
Synchronization Port
Specifies the destination TCP port number used when establishing a connection to synchronize data. The default value is 7744.
Reconnection Interval
Specifies the number of seconds to wait for reconnection attempts. The default value is 30.
Enable Branch Warming for Client Accelerator Clients
Enables branch warming for Client Accelerator Clients. By default, branch warming is enabled.
Enable Data Store Wrap Notifications
Enables data store wrap notifications. The default value is 1 day.
You must clear the RiOS data store and reboot the SteelHead service on the SteelHead after turning on, changing, or turning off the encryption type. After you clear the RiOS data store, the data can’t be recovered. If you don’t want to clear the RiOS data store, reselect your previous encryption type and reboot the service. The appliance uses the previous encryption type and encrypted RiOS data store.
Performance
You enable settings to improve network and RiOS data store performance in the Performance page.
For details about performance optimization, see the SteelHead User Guide.
Data store
These configuration options are available:
Segment Replacement Policy
• Riverbed LRU—Replaces the least recently used data in the RiOS data store, which improves hit rates when the data in the RiOS data store isn’t equally used. This is the default setting.
• FIFO—Replaces data in the order received (first in, first out).
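The difference between the two policies is easiest to see on a short access trace in which one segment is reused: FIFO evicts the hot segment in arrival order anyway, while LRU keeps it warm. This is a generic sketch of the two policies, not the data store implementation.

```python
from collections import OrderedDict, deque

# Sketch contrasting the two segment replacement policies. Capacity and
# segment names are illustrative; the real data store works on disk segments.

def fifo_evict(accesses, capacity):
    """Evict in arrival order, regardless of reuse."""
    q, members = deque(), set()
    for seg in accesses:
        if seg not in members:
            if len(q) == capacity:
                members.discard(q.popleft())
            q.append(seg)
            members.add(seg)
    return list(q)

def lru_evict(accesses, capacity):
    """Evict the least recently used segment; reuse refreshes recency."""
    cache = OrderedDict()
    for seg in accesses:
        if seg in cache:
            cache.move_to_end(seg)        # hit: mark as recently used
        else:
            if len(cache) == capacity:
                cache.popitem(last=False) # evict least recently used
            cache[seg] = True
    return list(cache)

accesses = ["a", "b", "a", "c"]          # "a" is the hot segment
print(fifo_evict(accesses, 2))  # ['b', 'c'] -- hot "a" was evicted
print(lru_evict(accesses, 2))   # ['a', 'c'] -- reuse kept "a" warm
```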
Adaptive data streamlining
The adaptive data streamlining mode monitors and controls the different resources available on the SteelHead and adapts the utilization of these system resources to optimize LAN throughput. Changing the default setting is optional; we recommend you select another setting only with guidance from Riverbed Support or the Riverbed Sales Team.
Generally, the default setting provides the most data reduction. When choosing an adaptive streamlining mode for your network, contact Support to help you evaluate the setting based on:
• the amount of data replication your SteelHead is processing.
• the type of data being processed and its effects on disk throughput on the SteelHeads.
• your primary goal for the project, which could be maximum data reduction or maximum throughput. Even when your primary goal is maximum throughput you can still achieve high data reduction.
These configuration options are available:
Default
Is enabled by default and works for most implementations. The default setting:
• Provides the most data reduction.
• Reduces random disk seeks and improves disk throughput by discarding very small data margin segments that are no longer necessary. This margin segment elimination (MSE) process provides network-based disk defragmentation.
• Writes large page clusters.
• Monitors the disk write I/O response time to provide more throughput.
SDR-Adaptive
• Legacy—Includes the default settings and also:
– Balances writes and reads.
– Monitors both read and write disk I/O response, and CPU load. Based on statistical trends, can employ a blend of disk-based and nondisk-based data reduction techniques to enable sustained throughput during periods of disk/CPU-intensive workloads.
Use caution with the SDR-Adaptive Legacy setting, particularly when you’re optimizing CIFS or NFS with prepopulation. Contact Support for more information.
• Advanced—Maximizes LAN-side throughput dynamically under different data workloads. This switching mechanism is governed with a throughput and bandwidth reduction goal using the available WAN bandwidth. Both SteelHeads must be running RiOS 6.0.x or later.
Upgrade notes: If you have enabled SDR-Adaptive prior to upgrading to RiOS 6.0, the default setting is SDR-Adaptive Legacy.
If you didn’t change the SDR-Adaptive setting prior to upgrading to RiOS 6.0, the default setting is SDR-Adaptive Advanced.
SDR-M
Performs data reduction entirely in memory, so the SteelHead doesn’t read from or write to the disk. Enabling this option can yield high LAN-side throughput because it eliminates all disk latency. This is typically the preferred configuration mode for SAN replication environments.
SDR-M is most efficient when used between two identical high-end SteelHead models: for example, 6050 - 6050. When used between two different SteelHead models, the smaller model limits the performance.
After enabling SDR-M on both the client-side and the server-side SteelHeads, restart both SteelHeads to avoid performance degradation.
You can’t use peer RiOS data store synchronization with SDR-M.
CPU settings
Use the CPU settings to balance throughput with the amount of data reduction and balance the connection load. The CPU settings are useful with high-traffic loads to scale back compression, increase throughput, and maximize Long Fat Network (LFN) utilization.
These configuration options are available:
Compression Level
Specifies the relative trade-off of data compression for LAN throughput speed. Generally, a lower number provides faster throughput and slightly less data reduction.
Select a RiOS data store compression value of 1 (minimum compression, uses less CPU) through 9 (maximum compression, uses more CPU) from the drop-down list. The default value is 6.
We recommend setting the compression level to 1 in high-throughput environments.
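The trade-off is the standard DEFLATE one, which you can observe directly with Python’s zlib (the input below is synthetic; actual SDR/LZ behavior differs):

```python
import zlib

# Rough illustration of the compression-level trade-off: level 1 is fastest
# with less reduction, level 9 compresses hardest at more CPU cost.
# The input data is synthetic and highly repetitive.

data = b"satellite WAN optimization " * 4000

fast = zlib.compress(data, 1)  # minimum compression, least CPU
best = zlib.compress(data, 9)  # maximum compression, most CPU

print(len(data))             # original size in bytes
print(len(fast), len(best))  # level 9 output is no larger than level 1
```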
Adaptive Compression
Detects LZ data compression performance for a connection dynamically and turns it off (sets the compression level to 0) momentarily if it isn’t achieving optimal results. Improves end-to-end throughput over the LAN by maximizing the WAN throughput. By default, this setting is disabled.
Multi-Core Balancing
Enables multicore balancing, which ensures better distribution of workload across all CPUs, thereby maximizing throughput by keeping all CPUs busy. Core balancing is useful when handling a small number of high-throughput connections (approximately 25 or fewer). By default, this setting is disabled and should be enabled only after careful consideration and consulting with Sales Engineering or Riverbed Support.
CIFS (SMB1)
You can display and modify CIFS optimization feature settings for the selected optimization policy in the CIFS page.
CIFS SMB1 optimization performs latency and SDR optimizations on SMB1 traffic. Without this feature, SteelHeads perform only SDR optimization without improving CIFS latency.
When sharing files, Windows provides the ability to sign CIFS messages to prevent man-in-the-middle attacks. Each CIFS message has a unique signature that prevents the message from being tampered with. This security feature is called SMB signing.
You can enable the RiOS SMB signing feature on a server-side SteelHead to alleviate latency in file access with CIFS acceleration while maintaining message security signatures. With SMB signing on, the SteelHead optimizes CIFS traffic by providing bandwidth optimizations (SDR and LZ), TCP optimizations, and CIFS latency optimizations—even when the CIFS messages are signed.
RiOS 8.5 and later include support for optimizing SMB3-signed traffic for native SMB3 clients and servers. You must enable SMB3 signing if the client or server uses any of these settings:
• SMB2/SMB3 signing set to required. SMB3 signing is enabled by default.
• SMB3 secure dialect negotiation (enabled by default on the Windows 8 client).
• SMB3 encryption.
RiOS 6.5 and later include support for optimizing SMB2-signed traffic for native SMB2 clients and servers. SMB2 signing support includes:
• Windows domain integration, including domain join and domain-level support.
• Authentication using transparent mode and delegation mode. Delegation mode is the default for SMB2. Transparent mode works out of the box with Windows Vista (but not Windows 7). To use transparent mode with Windows 7, you must join the server-side SteelHead as an Active Directory integrated (Windows 2003) or an Active Directory integrated (Windows 2008 and later).
• Secure inner-channel SSL support.
Domain security
The RiOS SMB signing feature works with Windows domain security and is fully compliant with the Microsoft SMB signing version 1, version 2, and version 3 protocols. RiOS supports domain security in both native and mixed modes for:
• Windows 2000
• Windows 2003 R2
• Windows 2008
• Windows 2008 R2
RiOS optimizes signed CIFS traffic even when the logged-in user or client machine and the target server belong to different domains, provided service accounts are configured in the SteelHead for the domains that need to be optimized. RiOS supports delegation for users that are in domains trusted by the server's domain. The trust relationships include:
• a basic parent and child domain relationship. Users from the child domain access CIFS/MAPI servers in the parent domain. For example, users in ENG.RVBD.COM accessing servers in RVBD.COM.
• a grandparent and child domain relationship. Users from grandparent domain access resources from the child domain. For example, users from RVBD.COM accessing resources in DEV.ENG.RVBD.COM.
• a sibling domain relationship. For example, users from ENG.RVBD.COM access resources in MARKETING.RVBD.COM.
Authentication
The process RiOS uses to authenticate domain users depends upon its version.
RiOS features these authentication modes:
• NTLM transparent mode—Uses NTLM authentication end to end between the client-side and server-side SteelHeads and the server-side SteelHead and the server. This is the default mode for SMB1 and SMB2/3 signing starting with RiOS 9.6. Transparent mode in RiOS 6.1 and later supports all Windows servers, including Windows 2008 R2, that have NTLM enabled. We recommend using this mode.
• NTLM delegation mode—Uses Kerberos delegation architecture to authenticate signed packets between the server-side SteelHead and any configured servers participating in the signed session. NTLM is used between the client-side and server-side SteelHead. SMB2 delegation mode in RiOS 6.5 and later supports Windows 7 and Samba 4 clients. Delegation mode requires additional configuration of Windows domain authentication.
• Kerberos authentication support—Uses Kerberos authentication end to end between the client-side and server-side SteelHead and the server-side SteelHead and the server. Kerberos authentication requires additional configuration of Windows domain authentication.
Transparent mode in RiOS 6.1 and later doesn’t support:
• Windows 7 clients. RiOS 7.0 and later support transparent mode when you join the server-side SteelHead as an Active Directory integrated (Windows 2003) or an Active Directory integrated (Windows 2008 and later).
• Windows 2008 R2 domains that have NTLM disabled.
• Windows servers that are in domains with NTLM disabled.
• Windows 7 clients that have NTLM disabled.
You can enable extra security using the secure inner channel. The peer SteelHeads using the secure channel encrypt signed CIFS traffic over the WAN.
For detailed information about configuring Windows domains and prerequisites for enabling SMB signing, see the SteelHead User Guide.
You must restart the client appliance optimization service after enabling SMB1 latency optimization.
Settings
These configuration options are available:
Enable Latency Optimization
Enables SMB1 optimized connections for file opens and reads. Latency optimization is the fundamental component of the CIFS module and is required for base optimized connections for file opens and reads. Although latency optimization incorporates several hundred individual optimized connection types, the most frequent case is a file open in which an exclusive opportunistic lock has been granted and read-ahead operations are initiated on the file data. RiOS optimizes the bandwidth used to transfer the read-ahead data from the server side to the client side.
This is the default setting.
Only clear this check box if you want to disable latency optimization. Typically, you disable latency optimization to troubleshoot problems with the system.
Latency optimization must be enabled (or disabled) on both SteelHeads. You must restart the optimization service on the client-side SteelHead after enabling latency optimization.
Disable Write Optimization
Prevents write optimization. If you disable write optimization, the SteelHead still provides optimization for CIFS reads and for other protocols, but you might experience a slight decrease in overall optimization.
Select this control only if you have applications that assume and require write-through in the network.
Most applications operate safely with write optimization because CIFS allows you to explicitly specify write-through on each write operation. However, if you have an application that doesn’t support explicit write-through operations, you must disable write optimization on the SteelHead.
If you don’t disable write-through, the SteelHead acknowledges writes before they’re fully committed to disk, to speed up the write operation. The SteelHead doesn’t acknowledge the file close until the file is safely written.
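As a local-filesystem analogy for explicit write-through (this does not touch CIFS or the SteelHead), opening a file with O_SYNC asks the OS to commit each write to stable storage before the call returns, much as a CIFS client can request write-through per operation:

```python
import os
import tempfile

# Local-filesystem analogy only: O_SYNC makes each write synchronous, so
# the call doesn't return until the data is committed to stable storage.
# This illustrates the idea of per-operation write-through, not CIFS itself.

path = os.path.join(tempfile.mkdtemp(), "journal.dat")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
try:
    written = os.write(fd, b"committed before the call returns")
finally:
    os.close(fd)
print(written)  # number of bytes durably written
```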
Optimize Connections with Security Signatures (that do not require signing)
Prevents Windows SMB signing. This is the default setting.
SMB signing prevents the SteelHead from applying full optimization on CIFS connections and significantly reduces the performance gain from a SteelHead deployment. Because many enterprises already take additional security precautions (such as firewalls and internal-only reachable servers), SMB signing adds minimal additional security at a significant performance cost, even without SteelHeads in the path.
Before you enable this control, consider these factors:
• If the client-side machine has Required signing, enabling this feature prevents the client from connecting to the server.
• If the server-side machine has Required signing, the client and the server connect but you can’t perform full latency optimization with the SteelHead. Domain Controllers default to Required.
If your deployment requires SMB signing, you can optimize signed CIFS messages using the Enable SMB Signing feature.
For details about SMB signing and the performance cost associated with it, see the SteelHead Deployment Guide - Protocols.
Enable Dynamic Write Throttling
Enables the CIFS dynamic throttling mechanism that replaces the current static buffer scheme. When there’s congestion on the server side of the optimized connection, dynamic write throttling provides feedback to the client side, allowing the write buffers to be used more dynamically to smooth out any traffic bursts. We recommend that you enable dynamic write throttling because it prevents clients from buffering too much file-write data.
This is the default setting.
If you enable CIFS dynamic throttling, it’s activated only when there are suboptimal conditions on the server-side causing a backlog of write messages; it doesn’t have a negative effect under normal network conditions.
Enable Applock Optimization
Enables CIFS latency optimizations to improve read and write performance for Microsoft Word (.doc) and Excel (.xls) documents when multiple users have the file open. This setting is enabled by default in RiOS 6.0 and later.
This control enhances the Enable Overlapping Open Optimization feature by identifying and obtaining locks on read-write access at the application level. The overlapping open optimization feature handles locks at the file level.
Enable the applock optimization feature on the client-side SteelHead.
Enable Print Optimization
Improves centralized print traffic performance. For example, when the print server is located in the data center and the printer is located in the branch office, enabling this option speeds the transfer of a print job spooled across the WAN to the server and back again to the printer. By default, this setting is disabled.
Enable this control on the client-side SteelHead. Enabling this control requires an optimization service restart.
This option supports Windows XP (client), Vista (client), Windows 2003 (server), and Windows 2008 (server).
Both the client-side and server-side SteelHead must be running RiOS 6.0 or later.
This feature doesn’t improve optimization for a Windows Vista client printing through a Windows 2008 server, because this client and server pair uses a different print protocol.
Overlapping open optimization (advanced)
You can configure the client-side SteelHead with overlapping open optimization.
These configuration options are available:
Enable Overlapping Open Optimization
Enables overlapping opens to obtain better performance with applications that perform multiple opens on the same file (for example, CAD applications). By default, this setting is disabled. Enable this setting on the client-side SteelHead.
With overlapping opens enabled, the SteelHead optimizes data where exclusive access is available (in other words, when locks are granted). When an oplock isn’t available, the SteelHead doesn’t perform application-level latency optimizations but still performs SDR and compression on the data as well as TCP optimizations.
If a remote user opens a file that is optimized using the overlapping opens feature and a second user opens the same file, they might receive an error if the file fails to go through a SteelHead (for example, certain applications that are sent over the LAN). If this occurs, disable overlapping opens for those applications.
Use the radio buttons to set either an include list or exclude list of file types subject to overlapping open optimization.
Optimize only these extensions
Specifies a list of extensions you want to include in overlapping open optimization.
Optimize all except these extensions
Specifies a list of extensions you don’t want to include. For example, specify any file extensions that Enable Applock Optimization is being used for.
SMB settings
These configuration options are available:
Enable SMB Signing
Enables CIFS traffic optimization by providing bandwidth optimizations (SDR and LZ), TCP optimizations, and CIFS latency optimizations, even when the CIFS messages are signed. By default, this control is disabled. You must enable this control on the server-side SteelHead.
If you enable this control without first joining a Windows domain, a message tells you that the SteelHead must join a domain before it can support SMB signing.
NTLM Transparent Mode
Provides SMB1 signing with transparent authentication. The server-side SteelHead uses NTLM to authenticate users. Select transparent mode with Vista for the simplest configuration. You can also use transparent mode with Windows 7, provided that the server-side SteelHead is joined to the domain with Active Directory integration.
NTLM Delegation Mode
Re-signs SMB signed packets using the Kerberos delegation facility. This setting is enabled by default when you enable SMB signing. Delegation mode is required for Windows 7, but works with all clients (unless the client has NTLM disabled).
Delegation mode requires additional configuration. Choose Optimization > Active Directory: Service Accounts or click the link provided in the CIFS Optimization page.
Enable Kerberos Authentication Support
Provides SMB signing with end-to-end authentication using Kerberos. The server-side SteelHead uses Kerberos to authenticate users.
We recommend integrating WinSec Controller for Kerberos-based SMB optimization to follow Microsoft's tiered security model. If WinSec Controller is not integrated, the server-side SteelHead appliance must be joined to the Windows Domain, and Windows Domain Authentication must be configured.
The server-side SteelHead must be running RiOS 7.0.x or later. The client-side SteelHead must be running RiOS 5.5 or later.
No configuration is needed on the client-side SteelHead.
SMB2/3
This section describes the SMB support changes with recent versions of RiOS.
SMB3 support
In RiOS 9.2, enabling SMB3 on a SteelHead also enables support for SMB 3.1.1 to accelerate file sharing from Windows 10 clients to Windows Server 2016 or Windows vNext (server). RiOS supports latency and bandwidth optimization for SMB 3.1.1 when SMB2/3 and SMB2 signing are enabled and configured. SMB 3.1.1 adds these encryption and security improvements:
• Encryption—The SMB 3.1.1 encryption ciphers are negotiated per-connection through the negotiate context. Windows 10 now supports the AES-128-GCM cipher in addition to AES-128-CCM for encryption. SMB 3.1.1 can negotiate down to AES-128-CCM to support older configurations.
Encryption requires that SMB2 signing is enabled on the server-side SteelHead in NTLM-transparent (preferred) or NTLM-delegation mode, and/or end-to-end Kerberos mode. Domain authentication service accounts must be configured for delegation or replication as needed.
• Preauthentication Integrity—Provides integrity checks for negotiate and session setup phases. The client and server maintain a running hash on all of the messages received until there’s a final session setup response. The hash is used as input to the key derivation function (KDF) for deriving the session secret keys.
• Extensible Negotiation—Detects man-in-the-middle attempts to downgrade the SMB2/3 protocol dialect or capabilities that the SMB client and server negotiate. SMB 3.1.1 dialect extends negotiate request/response through negotiate context to negotiate complex connection capabilities such as the preauthentication hash algorithms and the encryption algorithm.
With the exception of service accounts configuration, you can complete all of the above settings on the server-side SteelHead by using the Configure Domain Auth widget.
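The preauthentication integrity mechanism described above maintains a simple running hash over the connection-setup messages. The sketch below follows the MS-SMB2 specification (SHA-512, zero-initialized starting value); the message names are placeholders, not real protocol bytes:

```python
import hashlib

def update_preauth_hash(prev_hash: bytes, message: bytes) -> bytes:
    # SMB 3.1.1 preauthentication integrity: the new hash value is
    # SHA-512(previous hash || raw message bytes).
    return hashlib.sha512(prev_hash + message).digest()

# The running hash starts at 64 zero bytes and folds in every negotiate
# and session setup message, in order; the final value feeds the key
# derivation function (KDF) that derives the session secret keys.
h = bytes(64)
for msg in (b"NEGOTIATE_REQUEST", b"NEGOTIATE_RESPONSE", b"SESSION_SETUP_REQUEST"):
    h = update_preauth_hash(h, msg)
```

Because every message contributes to the hash, any tampering during negotiation produces mismatched hashes on the client and server, and the derived keys disagree.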
In RiOS 9.0 and later, enabling SMB3 on a SteelHead also enables support for the SMB 3.02 dialect introduced by Microsoft in Windows 8.1 and Windows Server 2012 R2. SMB 3.02 is only negotiated when systems of these operating system versions are directly connected. SMB 3.02 is qualified with SMB3.02 signed and unsigned traffic over IPv4 and IPv6, and encrypted connections over IPv4 and IPv6. Authenticated connections between a server-side SteelHead and a domain controller are only supported over IPv4.
RiOS 8.5 and later include support for SMB3 traffic latency and bandwidth optimization for native SMB3 clients and servers.
Windows 8 clients and Windows 2012 servers feature SMB3, an upgrade to the CIFS communication protocol. SMB3 adds features for greater resiliency, scalability, and improved security. SMB3 supports these features:
• Encryption—If the server and client negotiate SMB3 and the server is configured for encryption, all SMB3 packets following the session setup are encrypted on the wire, unless share-level encryption is configured. Share-level encryption marks a specific share on the server as encrypted; if a client opens a connection to the server and accesses the share, the system encrypts the data that goes to that share. The system doesn’t encrypt the data that goes to other shares on the same server.
Encryption requires that you enable SMB signing.
• New Signing Algorithm—SMB3 uses the AES-CMAC algorithm instead of the HMAC-SHA256 algorithm used by SMB2 and enables signing by default.
• Secure Dialect Negotiation—Detects man-in-the-middle attempts to downgrade the SMB2/3 protocol dialect or capabilities that the SMB client and server negotiate. Secure dialect negotiation is enabled by default in Windows 8 and Server 2012. You can use secure dialect negotiation with SMB2 when you are setting up a connection to a server running Server 2008-R2.
SMB 3.0 dialect introduces these enhancements:
– Allows an SMB client to retrieve hashes for a particular region of a file for use in branch cache retrieval, as specified in [MS-PCCRC] section 2.4.
– Allows an SMB client to obtain a lease on a directory.
– Encrypts traffic between the SMB client and server on a per-share basis.
– Uses remote direct memory access (RDMA) transports when the appropriate hardware and network are available.
– Enhances failover between the SMB client and server, including optional handle persistence.
– Allows an SMB client to bind a session to multiple connections to the server. The system can send a request through any channel associated with the session, and sends the corresponding response through the same channel previously used by the request.
To optimize signed SMB3 traffic, you must run RiOS 8.5 or later and enable SMB3 optimization on the client-side and server-side SteelHeads.
For additional details on SMB 3.0 specifications, go to
http://msdn.microsoft.com/en-us/library/cc246482.aspx.
SMB2 support
RiOS supports SMB2 traffic latency optimization for native SMB2 clients and servers. SMB2 allows more efficient access across disparate networks. It is the default mode of communication between Windows Vista and Windows Server 2008. Microsoft later revised SMB2 (as SMB 2.1) for Windows 7 and Windows Server 2008 R2.
SMB2 brought a number of improvements, including but not limited to:
• A vastly reduced set of opcodes (a total of only 18); in contrast, SMB1 has over 70 separate opcodes. Use of SMB2 doesn’t result in lost functionality, because most of the SMB1 opcodes were redundant.
• General mechanisms for data pipelining and lease-based flow control.
• Request compounding, which allows multiple SMB requests to be sent as a single network request.
• Larger reads and writes, which provide for more efficient use of networks with high latency.
• Caching of folder and file properties, where clients keep local copies of folders and files.
• Improved scalability for file sharing (the number of users, shares, and open files per server is greatly increased).
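The round-trip savings from request compounding can be illustrated with a toy latency model; the 100 ms RTT and the three-command chain are assumptions for illustration, not measured figures:

```python
# Toy model of SMB2 request compounding: a chain of k related commands
# (for example CREATE -> QUERY_INFO -> CLOSE) crosses the WAN as one
# compound request/response instead of k separate round trips.
RTT_MS = 100  # assumed WAN round-trip time, purely illustrative

def time_uncompounded(num_commands: int) -> int:
    # Each command waits for its own response: one round trip apiece.
    return num_commands * RTT_MS

def time_compounded(num_commands: int) -> int:
    # All commands travel in a single compound request/response pair.
    return RTT_MS

print(time_uncompounded(3), time_compounded(3))  # 300 100
```

On high-latency links the savings grow linearly with the length of the command chain, which is why compounding matters most over the WAN.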
For details about Protocols SMB2, see the SteelHead User Guide.
Optimization
These configuration options are available:
None
Disables SMB2 and SMB3 optimization.
Enable SMB2 Optimizations
Performs SMB2 latency optimization in addition to the existing bandwidth optimization features. These optimizations include cross-connection caching, read-ahead, write-behind, and batch prediction among several other techniques to ensure low-latency transfers. RiOS maintains the data integrity, and the client always receives data directly from the servers.
By default, SMB2 optimization is disabled.
You must enable (or disable) SMB2 latency optimization on both the client-side and server-side SteelHeads.
To enable SMB2, both SteelHeads must be running RiOS 6.5 or later. After enabling SMB2 optimization, you must restart the optimization service.
Enable SMB3 Optimizations
Performs SMB3 latency optimization in addition to the existing bandwidth optimization features. This optimization includes cross-connection caching, read-ahead, write-behind, and batch prediction among several other techniques to ensure low-latency transfers. RiOS maintains the data integrity and the client always receives data directly from the servers.
By default, SMB3 optimization is disabled.
You must enable (or disable) SMB3 latency optimization on both the client-side and server-side SteelHeads.
You must enable SMB2 optimization to optimize SMB3.
To enable SMB3, both SteelHeads must be running RiOS 8.5 or later. After enabling SMB3 optimization, you must restart the optimization service.
Enable DFS Optimizations
Enables optimization for Distributed File System (DFS) file shares.
You must upgrade both your server-side and client-side SteelHeads to RiOS 9.5 or later to enable DFS optimization. However, this box only needs to be checked on the client-side SteelHead.
Signing
These configuration options are available:
Enable SMB Signing
Enables CIFS traffic optimization by providing bandwidth optimizations (SDR and LZ), TCP optimizations, and CIFS latency optimizations, even when the CIFS messages are signed. By default, this control is disabled. You must enable this control on the server-side SteelHead.
If you enable this control without first joining a Windows domain, a message tells you that the SteelHead must join a domain before it can support SMB signing.
NTLM Transparent Mode
Provides SMB1 signing with transparent authentication. The server-side SteelHead uses NTLM to authenticate users. We recommend using this mode for the simplest configuration. Transparent mode is the default for RiOS releases 9.6 and later.
NTLM Delegation Mode
Re-signs SMB signed packets using the Kerberos delegation facility.
We recommend using transparent mode instead of delegation mode because it is easier to configure and maintain.
Delegation mode requires additional configuration. Choose Optimization > Active Directory: Service Accounts or click the link provided in the CIFS Optimization page.
Enable Kerberos Authentication Support
Provides SMB signing with end-to-end authentication using Kerberos. The server-side SteelHead uses Kerberos to authenticate users.
In addition to enabling this feature, you must also join the server-side SteelHead to a Windows domain and add replication users on the Optimization > Active Directory: Auto Config page.
The server-side SteelHead must be running RiOS 7.0.x or later. The client-side SteelHead must be running RiOS 5.5 or later.
No configuration is needed on the client-side SteelHead.
Down negotiation
These configuration options are available:
None
Prevents negotiating the CIFS session down to SMB1.
SMB2 and SMB3 to SMB1
Optimizes connections that are successfully negotiated down to SMB1 according to the settings on the Optimization > Protocols: CIFS (SMB1) page. Enable this control on the client-side SteelHead.
RiOS bypasses down-negotiation to SMB1 when the client or the server is configured to use only SMB2/3 or the client has already established an SMB2/3 connection with the server. If the client already has a connection with the server, you must restart the client.
Down-negotiation can fail if the client only supports SMB2 or if it bypasses negotiation because the system determines that the server supports SMB2. When down-negotiation fails, bandwidth optimization isn’t affected.
Auto Config
These configuration options are available:
Configure Domain Auth
Automatically configures domain authentication for CIFS, SMB, and MAPI protocols.
WinSec Configuration
Configures the SteelHead as a secure Windows endpoint within the domain, enabling it to participate in and optimize secure Windows traffic.
Configure Delegation Account
Automatically configures a delegate user in Active Directory, simplifying the setup of constrained delegation for optimizing SMB-signed or encrypted MAPI traffic.
Configure Service Account
Configures the deployed service account with AD replication privileges.
Add Delegation Servers
Adds delegation servers for either the CIFS or Exchange MDB service.
Remove Delegation Servers
Removes delegation servers.
Admin User
Username for a domain administrator account.
Password
Password for a domain administrator account.
Service Account Domain/Realm
Defines the Active Directory domain where the SteelHead will replicate user accounts for authentication purposes.
Domain Controller
A comma-separated list of domain controller hostnames or IP addresses.
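The expected shape of the Domain Controller field can be illustrated with a hypothetical parser (not part of any Riverbed API; shown only to make the comma-separated format concrete):

```python
def parse_domain_controllers(value: str) -> list[str]:
    # Hypothetical helper: split the comma-separated field into
    # individual hostnames or IP addresses, ignoring stray whitespace
    # and empty entries.
    return [host.strip() for host in value.split(",") if host.strip()]

parse_domain_controllers("dc1.example.com, dc2.example.com,10.0.0.5")
```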
CIFS prepopulation
The prepopulation operation effectively performs the first SteelHead read of the data on the prepopulation share. Later, the SteelHead handles read and write requests as effectively as with a warm data store. With a warm data store, RiOS sends data references along with new or modified data, dramatically increasing the rate of data transfer over the WAN.
The first synchronization, or the initial copy, retrieves data from the origin file server and copies it to the RiOS data store on the SteelHead. Subsequent synchronizations are based on the synchronization interval.
The RiOS 8.5 and later Management Consoles include policies and rules to provide more control over which files the system transfers to warm the RiOS data store. A policy is a group of rules that select particular files to prepopulate. For example, you can create a policy that selects all PDF files larger than 300 MB created since January 1st, 2013.
CIFS Prepopulation is disabled by default.
Prepopulation
These configuration options are available:
Enable Prepopulation
Prewarms the RiOS data store. In this setup, the primary interface of the SteelHead acts as a client and prerequests data from the share you want to use to warm the data store. This request goes through the LAN interface to the WAN interface out to the server-side SteelHead, causing the in-path interface to see the data as a normal client request.
When data is requested again by a client on the local LAN, RiOS sends only new or modified data over the WAN, dramatically increasing the rate of data transfers.
Enable Transparent Prepopulation Support
Opens port 8777 to allow manual warming of the RiOS data store using the Riverbed Copy Utility (RCU) to prepopulate your shares.
Most environments don’t need to enable RCU.
Add a New Prepopulation Share
Displays the controls for adding a new prepopulation CIFS share.
Remote Path
Specifies the path to the data on the origin server, or the UNC path of a share that you want to make available for prepopulation. Set up the prepopulation share on the remote appliance pointing to the actual share on the head-end data center server. For example: \\<origin-file-server>\<local-name>
The share and the origin-server share names can’t contain any of these characters: < > * ? | / + = ; : " , & []
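A quick way to check a candidate share name against the rejected characters listed above (illustrative only; the appliance performs its own validation):

```python
# The characters the UI rejects in share and origin-server share names.
FORBIDDEN = set('<>*?|/+=;:",&[]')

def share_name_ok(name: str) -> bool:
    # A name is acceptable when it contains none of the forbidden characters.
    return not (set(name) & FORBIDDEN)

share_name_ok("projects")   # True
share_name_ok("bad:name")   # False
```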
Account
Specifies the account used to access the CIFS prepopulation share: for example, <domain>\<username>
Password
Sets the password for accessing the CIFS share.
Password Confirm
Confirms the password.
Synchronization Enable
Enables these synchronization options:
• Sync Schedule Date, Time—Sets date (yyyy/mm/dd) and time (hh:mm:ss) for synchronizing the appliance with the server.
• Sync Interval—Set number and select Minutes, Hours, Days, or Disabled from the drop-down list.
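The date and time formats above can be sketched with a hypothetical schedule parser; `next_sync` is an assumed helper for illustration, not documented appliance behavior:

```python
from datetime import datetime, timedelta

def parse_sync_schedule(date_s: str, time_s: str) -> datetime:
    # UI formats: date as yyyy/mm/dd, time as hh:mm:ss.
    return datetime.strptime(f"{date_s} {time_s}", "%Y/%m/%d %H:%M:%S")

def next_sync(start: datetime, interval_hours: int) -> datetime:
    # Hypothetical: the next run is one interval after the scheduled start.
    return start + timedelta(hours=interval_hours)

start = parse_sync_schedule("2024/01/15", "02:30:00")
```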
Comment
Provides a comment that describes the share configuration.
Add
Adds the new CIFS share configuration to the policy definition.
HTTP
This section describes how to configure HTTP optimization features. HTTP optimization works for most HTTP and HTTPS applications, including SAP, customer relationship management, enterprise resource planning, financial, document management, and Intranet portals.
HTTP settings have been streamlined for release 10.2 and later. The SCC 10.2 settings are the same as SteelHead 10.2. Refer to the settings in the SteelHead User Guide for 10.2 for any settings not discussed in this guide. All other settings are applicable to prior versions.
Configuring HTTP optimization can be a complex task. There are many different options and it isn’t always easy to determine what settings are required for a particular application without extensive testing. HTTP automatic configuration creates an ideal HTTP optimization scheme based on a collection of comprehensive statistics per host. The host statistics create an application profile, used to configure HTTP automatically and assist with any troubleshooting.
For detailed information about configuring HTTP optimization, see the SteelHead User Guide.
You can easily change an automatically configured server subnet to override settings. All of the HTTP optimization features operate on the client-side SteelHead. You configure HTTP optimizations only on the client-side SteelHead.
HTTP optimization has been tested on Internet Explorer 6.0 and later and Firefox 2 and later. HTTP optimization has been tested on Apache 1.3, Apache 2.2, Microsoft IIS 5.0, 6.0, 7.5, and 8; Microsoft SharePoint, ASP.net, and Microsoft Internet Security and Acceleration Server (ISA).
Settings
These configuration options are available:
Enable HTTP Optimization
Prefetches and stores objects embedded in web pages to improve HTTP traffic performance. By default, HTTP optimization is enabled.
Enable SaaS User Identity (Office 365)
Enables collection of statistics by user ID, viewable in the Current Connections report. The User Identity column in the Current Connections report lists the full email address of the user. If the user email address is too long, the user ID is displayed instead.
The SteelHead collects User IDs only from Office 365 users that are authenticated with single sign-on (SSO) using Active Directory Federation Services (ADFS).
This control is disabled by default. You only need to enable this control on one SteelHead in your network. We recommend enabling it on the client-side SteelHead for Office 365 traffic and the server-side SteelHead for SMB and MoH traffic.
Starting with RiOS 9.7, user IDs extracted on Office 365 connections are propagated to other connections originating from the same source IP. Additionally, user IDs for SMB and MAPI over HTTP (MoH) connections are displayed in this field if SMB or MoH optimization is enabled. This feature is disabled by default.
Enable Object Caching
Globally enables the object caching feature, which parses the base HTML page and prefetches any embedded objects to the client-side appliance. When the browser requests an embedded object, the appliance serves the request from the cached results, eliminating the round-trip delay to the server. Cached objects can be images, style sheets, or any JavaScript files associated with the base page and located on the same host as the base URL. Requires cookies.
Object Prefetch Table Settings:
Store All Allowable Objects
Optimizes all objects in the object prefetch table. By default, Store All Allowable Objects is enabled.
Store Objects With The Following Extensions
Examines the control header to determine which objects to store. When this option is enabled, RiOS doesn’t limit the objects to those listed in Extensions to Prefetch; instead, it prefetches all objects that the control header indicates are storable. Examining the control header is useful for storing web objects whose names don’t include an object extension.
Disable the Object Prefetch Table
Stores nothing.
Minimum Object Prefetch Table Time
Sets the minimum number of seconds the objects are stored in the local object prefetch table. The default is 60 seconds.
This setting specifies the minimum lifetime of the stored object. During this lifetime, any qualified If-Modified-Since (IMS) request from the client receives an HTTP 304 response, indicating that the requested object hasn’t changed since it was stored.
Maximum Object Prefetch Table Time
Sets the maximum number of seconds the objects are stored in the local object prefetch table. The default is 86400 seconds.
This setting specifies the maximum lifetime of the stored object. During this lifetime, any qualified If-Modified-Since (IMS) request from the client receives an HTTP 304 response, indicating that the requested object hasn’t changed since it was stored.
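One plausible way to model how the minimum and maximum lifetimes interact is as a clamp on the server-advertised freshness. This sketch is an interpretation for illustration, not the documented RiOS algorithm:

```python
MIN_LIFETIME = 60      # seconds, UI default minimum
MAX_LIFETIME = 86400   # seconds, UI default maximum

def effective_lifetime(server_ttl: float) -> float:
    # Assumed interpretation: clamp the server-advertised freshness
    # into the configured [minimum, maximum] storage window.
    return max(MIN_LIFETIME, min(server_ttl, MAX_LIFETIME))

def answer_ims_locally(age: float, server_ttl: float) -> bool:
    # Within the effective lifetime, a qualified If-Modified-Since
    # request gets a local HTTP 304 instead of a WAN round trip.
    return age <= effective_lifetime(server_ttl)
```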
Extensions to Prefetch
Specifies object extensions to prefetch, separated by commas. By default the SteelHead prefetches .jpg, .gif, .js, .png, and .css object extensions.
These extensions are only for URL Learning and Parse and Prefetch.
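The default prefetch decision can be sketched as a simple extension match (illustrative; RiOS’s actual matching logic may differ):

```python
from urllib.parse import urlsplit
from pathlib import PurePosixPath

PREFETCH_EXTS = {".jpg", ".gif", ".js", ".png", ".css"}  # UI defaults

def should_prefetch(url: str) -> bool:
    # Key the decision on the object extension in the URL path,
    # case-insensitively, ignoring any query string.
    return PurePosixPath(urlsplit(url).path).suffix.lower() in PREFETCH_EXTS

should_prefetch("http://intranet/app/logo.PNG")    # True
should_prefetch("http://intranet/app/report.pdf")  # False
```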
Enable HTTP Stream Splitting
Splits Silverlight smooth streaming, Adobe Flash HTTP dynamic streams, and Apple HTTP Live Streaming (HLS). Enable this control on the client-side SteelHead. This global setting only applies to RiOS 8.6 and earlier. For RiOS 9.0 and later, use the per-host autoconfiguration setting.
This control includes support for Microsoft Silverlight video and Silverlight extensions on Internet Information Services (IIS) version 7.5 installed on Windows Server 2008 R2.
To split Adobe Flash streams, you must set up the video origin server before enabling this control. For details, see the SteelHead Deployment Guide.
Apple HLS is an HTTP-based video delivery protocol for iOS and OSX that streams video to iPads, iPhones, and Macs. HLS is part of an upgrade to QuickTime. RiOS splits both live and on-demand video streams.
Live video streaming is unavailable on cloud appliance models. This feature may become available in future releases of those models.
Use this control to support multiple branch office users from a single real-time TCP stream. The SteelHead identifies live streaming video URL fragment requests and delays any subsequent request for a fragment that’s already in progress. When the client-side SteelHead receives the response, it returns the same response to all clients requesting that URL.
As an example, when employees in branch offices simultaneously start clients (through browser plug-ins) that all request the same video fragment, many identical requests are typically made before the first request is answered; without stream splitting, the result is many hits to the server and many bytes across the WAN. When you enable stream splitting on the client-side SteelHead, it identifies live streaming video URL fragment requests and holds subsequent requests for a fragment while the first request is outstanding. When the response is received, it is delivered to all clients that requested it, so only one request and response pair for each video fragment transfers over the WAN. With stream splitting, the SteelHead replicates one TCP stream for each individual client.
Stream splitting optimization doesn’t change the number of sockets that are opened to the server, but it does reduce the number of requests made to the server. Without this optimization, each fragment is requested once per client. With this optimization, each fragment is requested once.
Stream splitting is disabled by default.
Enabling this control requires that HTTP optimization is enabled on the client-side and server-side SteelHeads. The client-side SteelHead requires an optimization service restart. No other changes are necessary on the server-side SteelHead.
In addition to splitting the video stream, you can prepopulate video at branch office locations during off-peak periods and then retrieve them for later viewing. For information, see the protocol http prepop list url command in the Riverbed Command-Line Interface Reference Guide.
To view a graph of the data reduction resulting from stream splitting, choose Reports > Optimization: Optimized Throughput.
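The request-coalescing behavior underlying stream splitting (one WAN request per fragment, shared by all waiting clients) can be sketched as follows. This is an illustrative model, not RiOS code:

```python
import threading

class FragmentSplitter:
    """Illustrative coalescing of live-stream fragment requests: the
    first request for a URL goes over the WAN; concurrent requests for
    the same URL wait for, and share, that single response."""

    def __init__(self, fetch):
        self.fetch = fetch    # fetch(url) -> response bytes (one WAN trip)
        self.inflight = {}    # url -> (done event, shared response holder)
        self.lock = threading.Lock()

    def get(self, url):
        with self.lock:
            entry = self.inflight.get(url)
            if entry is None:
                # First requester becomes the leader for this fragment.
                entry = (threading.Event(), [])
                self.inflight[url] = entry
                leader = True
            else:
                leader = False
        done, holder = entry
        if leader:
            holder.append(self.fetch(url))  # the single WAN request
            with self.lock:
                del self.inflight[url]
            done.set()                      # wake all waiting followers
        else:
            done.wait()                     # piggyback on the leader
        return holder[0]
```

Every client still gets its own TCP stream and its own copy of the response; only the upstream request is deduplicated, matching the "one request and response pair per fragment" behavior described above.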
Enable Per-Host Auto Configuration
Creates an HTTP optimization scheme automatically by evaluating HTTP traffic statistics gathered for the host or server subnet. RiOS derives the web server hostname or server subnet from the HTTP request header and collects HTTP traffic statistics for that host or subnet. RiOS evaluates hostnames and subnets that don’t match any other rules.
Automatic configurations define the optimal combination of URL Learning, Parse and Prefetch, and Object Prefetch Table for the host or subnet. After RiOS evaluates the host or subnet, it appears on the Subnet or Host list at the bottom of the page as Auto Configured. HTTP traffic is optimized automatically.
Automatic configuration is enabled by default. If you have automatically configured hostnames and then disable Per-Host Auto Configuration, the automatically configured hosts are removed from the list when the page refreshes. They’re not removed from the database. When you reenable Per-Host Auto Configuration, the hosts reappear in the list with the previous configuration settings.
We recommend that both the client-side and server-side SteelHeads are running RiOS 7.0 or later for full statistics gathering and optimization benefits.
Enable this control on the client-side SteelHead.
You can’t remove an automatically configured hostname or subnet from the list, but you can reconfigure it, save it as a static host, and then remove it.
In RiOS 8.5 and later, the default configuration appears in the list only when automatic configuration is disabled.
To allow a static host to be automatically configured, remove it from the list.
Enable Web-proxy
Enables the web proxy feature, which enhances web browsing by caching web objects locally on the appliance.
HTTP per-host autoconfiguration settings
These configuration options are available:
Basic Tuning
Strip Compression
Removes the accept-encoding lines from the HTTP request header so that the server returns uncompressed content. An accept-encoding directive requests compressed content rather than raw HTML. Enabling this option improves the performance of the SteelHead data reduction algorithms. By default, strip compression is enabled.
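Stripping compression amounts to filtering the Accept-Encoding request header; a minimal sketch (illustrative, not the RiOS implementation):

```python
def strip_compression(headers: list[str]) -> list[str]:
    # Removing Accept-Encoding makes the server return uncompressed
    # content, which byte-level data reduction (SDR) can deduplicate far
    # more effectively than a compressed byte stream.
    return [h for h in headers
            if not h.lower().startswith("accept-encoding:")]

req = ["Host: intranet", "Accept-Encoding: gzip, deflate", "Cookie: id=1"]
strip_compression(req)  # ["Host: intranet", "Cookie: id=1"]
```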
Insert Cookie
Adds a cookie to HTTP applications that don’t already have one. HTTP applications frequently use cookies to keep track of sessions. The SteelHead uses cookies to distinguish one user session from another. If an HTTP application doesn’t use cookies, the client SteelHead inserts one so that it can track requests from the same client. By default, this setting is disabled.
Insert Keep Alive
Uses the same TCP connection to send and receive multiple HTTP requests and responses, as opposed to opening a new one for every single request and response. Specify this option when using the URL Learning or Parse and Prefetch features with HTTP version 1.0 or HTTP version 1.1 applications using the Connection Close method. By default, this setting is disabled.
Caching
Object Prefetch Table
Stores HTTP object prefetches from HTTP GET requests for cascading style sheets, static images, and JavaScript files in the Object Prefetch Table. Enable this control on the client-side SteelHead. When the browser performs If-Modified-Since (IMS) checks for stored content or sends regular HTTP requests, the client-side SteelHead responds to these IMS checks and HTTP requests, cutting back on round trips across the WAN.
Stream Splitting
Splits Silverlight smooth streaming, Adobe Flash HTTP dynamic streams, and Apple HTTP Live Streaming (HLS). Enable this control on the client-side SteelHead. This control only applies to RiOS 9.0 and later.
This control includes support for Microsoft Silverlight video and Silverlight extensions on Internet Information Services (IIS) version 7.5 installed on Windows Server 2008 R2.
To split Adobe Flash streams, you must set up the video origin server before enabling this control. For details, see the SteelHead Deployment Guide.
Apple HLS is an HTTP-based video delivery protocol for iOS and OSX that streams video to iPads, iPhones, and Macs. HLS is part of an upgrade to QuickTime. RiOS splits both live and on-demand video streams.
Live video streaming is unavailable on cloud appliance models. This feature may become available in future releases of those models.
Use this control to support multiple branch office users from a single real-time TCP stream. The SteelHead identifies live streaming video URL fragment requests and delays any subsequent request for a fragment that’s already in progress. When the client-side SteelHead receives the response, it returns the same response to all clients requesting that URL.
For example, when employees in branch offices simultaneously start clients (through browser plug-ins) that all request the same video fragment, many identical requests are typically made before the first request is answered; without stream splitting, the result is many hits to the server and many bytes across the WAN. When you enable stream splitting on the client-side SteelHead, it identifies live streaming video URL fragment requests and holds subsequent requests for a fragment while the first request is outstanding. When the response is received, it is delivered to all clients that requested it, so only one request and response pair per video fragment transfers over the WAN. With stream splitting, the SteelHead replicates one TCP stream for each individual client.
RiOS 9.1 increases the cache size by up to five times, depending on the SteelHead model, and stores the video fragments for 30 seconds to keep clients watching the same live video in sync. For details, see the SteelHead Deployment Guide - Protocols.
Stream splitting optimization doesn’t change the number of sockets that are opened to the server, but it does reduce the number of requests made to the server. Without this optimization, each fragment is requested once per client. With this optimization, each fragment is requested once.
Stream splitting is disabled by default.
Enabling this control requires that HTTP optimization is enabled on the client-side and server-side SteelHeads. The client-side SteelHead doesn’t require an optimization service restart in RiOS 9.1. No other changes are necessary on the server-side SteelHead.
In addition to splitting the video stream, you can prepopulate videos at branch office locations during off-peak periods and then retrieve them for later viewing. For information, see the protocol http prepop list url command in the Riverbed Command-Line Interface Reference Guide.
To view a graph of the data reduction resulting from stream splitting, choose Reports > Optimization: Live Video Stream Splitting.
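The request-coalescing behavior described above can be sketched as follows. This is a simplified illustration, not Riverbed code; the class name, fragment URL, and client IDs are hypothetical.

```python
# Hypothetical sketch of coalescing duplicate requests for a live-video
# URL fragment so only one request/response pair crosses the WAN.
class StreamSplitter:
    def __init__(self, fetch):
        self.fetch = fetch        # function that goes to the origin server
        self.pending = {}         # url -> list of waiting client ids
        self.server_requests = 0

    def request(self, url, client_id):
        if url in self.pending:   # fragment already outstanding: hold it
            self.pending[url].append(client_id)
            return None
        self.pending[url] = [client_id]
        return url                # first request goes to the server

    def response(self, url):
        self.server_requests += 1
        body = self.fetch(url)
        waiters = self.pending.pop(url)
        return {cid: body for cid in waiters}  # one copy per waiting client

splitter = StreamSplitter(lambda url: b"video-bytes")
splitter.request("/live/seg42.ts", "client-1")  # goes to the server
splitter.request("/live/seg42.ts", "client-2")  # held: already outstanding
delivered = splitter.response("/live/seg42.ts")
print(sorted(delivered))  # ['client-1', 'client-2']
```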
Prefetch Schemes
URL Learning
Enables URL Learning, which learns associations between a base URL request and a follow-on request. Stores information about which URLs have been requested and which URLs have generated a 200 OK response from the server. This option fetches the URLs embedded in style sheets or any JavaScript associated with the base page and located on the same host as the base URL.
For example, if a web client requests /a.php?c=0 and then /b.php?c=0, and another client requests /a.php?c=1 and then /b.php?c=1, then when somebody requests /a.php?c=123, RiOS determines that /b.php?c=123 is likely to be requested next and prefetches it for the client.
URL Learning works best with nondynamic content that doesn’t contain session-specific information. URL Learning is enabled by default.
Your system must support cookies and persistent connections to benefit from URL Learning. If your system has cookies turned off and depends on URL rewriting for HTTP state management, or is using HTTP version 1.0 (with no keepalives), you can force the use of cookies using the Add Cookie option and force the use of persistent connections using the Insert Keep Alive option.
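The association learning in the /a.php → /b.php example above can be sketched as a table keyed on URL patterns with query values abstracted away. This is a simplified illustration of the idea, not the RiOS implementation.

```python
import re
from collections import defaultdict

def pattern(url):
    # Abstract query values: /a.php?c=123 -> /a.php?c=*
    return re.sub(r"=[^&]*", "=*", url)

# Hypothetical sketch of URL Learning: record which URL pattern tends to
# follow a base pattern, then predict the concrete follow-on URL.
class UrlLearner:
    def __init__(self):
        self.follow = defaultdict(lambda: defaultdict(int))

    def observe(self, base_url, next_url):
        self.follow[pattern(base_url)][pattern(next_url)] += 1

    def predict(self, base_url):
        candidates = self.follow.get(pattern(base_url))
        if not candidates:
            return None
        best = max(candidates, key=candidates.get)  # most common follow-on
        # Substitute the concrete query values from the base request.
        out = best
        for value in re.findall(r"=([^&]*)", base_url):
            out = out.replace("=*", "=" + value, 1)
        return out

learner = UrlLearner()
learner.observe("/a.php?c=0", "/b.php?c=0")
learner.observe("/a.php?c=1", "/b.php?c=1")
print(learner.predict("/a.php?c=123"))  # /b.php?c=123
```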
Parse and Prefetch
Enables Parse and Prefetch, which parses the base HTML page received from the server and prefetches any embedded objects to the client-side SteelHead. This option complements URL Learning by handling dynamically generated pages and URLs that include state information. When the browser requests an embedded object, the SteelHead serves the request from the prefetched results, eliminating the round-trip delay to the server.
The prefetched objects contained in the base HTML page can be images, style sheets, or any Java scripts associated with the base page and located on the same host as the base URL.
Parse and Prefetch requires cookies. If the application doesn’t use cookies, you can insert one using the Insert Cookie option.
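The parse step described above can be sketched with a standard HTML parser that collects embedded objects located on the same host as the base URL. This is a simplified illustration, not Riverbed code; the page content and hostnames are hypothetical.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

# Tags whose referenced objects are prefetch candidates (a subset of the
# defaults listed on this page).
TAGS = {"img": "src", "link": "href", "script": "src"}

class EmbeddedObjectParser(HTMLParser):
    """Hypothetical sketch: collect same-host embedded objects from a base page."""

    def __init__(self, base_url):
        super().__init__()
        self.base = base_url
        self.objects = []

    def handle_starttag(self, tag, attrs):
        attr = TAGS.get(tag)
        if not attr:
            return
        for name, value in attrs:
            if name == attr and value:
                url = urljoin(self.base, value)
                if urlparse(url).netloc == urlparse(self.base).netloc:
                    self.objects.append(url)  # same host: prefetch candidate

page = '<html><img src="/logo.png"><script src="http://other.example/x.js"></script></html>'
parser = EmbeddedObjectParser("http://www.example.com/index.html")
parser.feed(page)
print(parser.objects)  # ['http://www.example.com/logo.png']
```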
Authentication Tuning
Gratuitous 401
Prevents a WAN round trip by issuing the first 401 containing the realm choices from the client-side SteelHead. We recommend enabling Strip Auth Header along with this option.
This option is most effective when the web server is configured to use per-connection NTLM authentication or per-request Kerberos authentication.
If the web server is configured to use per-connection Kerberos authentication, enabling this option might cause additional delay.
SharePoint
FPSE
Enables Microsoft FrontPage Server Extensions (FPSE) protocol optimization. FPSE is one of the protocols in the FrontPage protocol suite. FPSE comprises a set of SharePoint server-side applications that let users simultaneously collaborate on the same website and web server, enabling multiuser authoring. The protocol displays site content as a file system and allows file downloading, uploading, creation, listing, and locking. FPSE uses HTTP for transport.
RiOS 8.5 and later cache and respond locally to some FPSE requests, saving at least five round trips per request and improving performance. SSL connections and files smaller than 5 MB can see significant performance improvements.
FPSE supports SharePoint Office 2007/2010 clients installed on Windows XP and Windows 7 and SharePoint Server 2007/2010.
SharePoint 2013 doesn’t use the FPSE protocol when users are editing files. It uses WebDAV when users map SharePoint drives to local machines and browse directories.
FPSE is disabled by default.
Choose Reports > Networking: Current Connections to view the HTTP-SharePoint connections. To display only HTTP-SharePoint connections, click add filter in the Query area, select for application from the drop-down menu, select HTTP-SharePoint, and click Update.
WebDAV
Enables Microsoft Web Distributed Authoring and Versioning (WebDAV) protocol optimization. WebDAV is an open-standard extension to the HTTP version 1.1 protocol that enables file management on remote web servers. Some of the many Microsoft components that use WebDAV include WebDAV redirector, Web Folders, and SMS/SCCM.
RiOS predicts and prefetches WebDAV responses, which saves multiple round trips and makes browsing the SharePoint file repository more responsive.
WebDAV optimization is disabled by default.
Choose Reports > Networking: Current Connections to view the HTTP-SharePoint connections. To display only HTTP-SharePoint connections, click add filter in the Query area, select for application from the drop-down menu, select HTTP-SharePoint, and click Update.
HTML tags to prefetch
Select which HTML tags to prefetch. By default, these tags are prefetched: base/href, body/background, img/src, link/href, and script/src.
These configuration options are available:
Add a Prefetch Tag
Displays the controls to add an HTML tag.
Tag Name
Specifies the tag name.
Attribute
Specifies the tag attribute.
Add
Adds the tag.
After you apply your settings, you can verify whether changes have had the desired effect by reviewing related reports.
Server subnet and host settings
Under Settings, you can enable URL Learning, Parse and Prefetch, and Object Prefetch Table in any combination for any host server or server subnet. You can also enable authorization optimization in RiOS 6.1 and later to tune a particular subnet dynamically, with no service restart required.
The default settings are URL Learning, Object Prefetch Table, and Strip Compression for all traffic with automatic configuration disabled. The default setting applies when HTTP optimization is enabled, regardless of whether there is an entry in the Subnet or Host list. In the case of overlapping subnets, specific list entries override any default settings.
For details, see the SteelHead User Guide.
These configuration options are available:
Add a Subnet or Host
Displays the controls for adding a server subnet or host. The server must support keepalive.
Server Subnet or Hostname
Specifies an IP address and mask pattern for the server subnet, or a hostname, on which to set up the HTTP optimization scheme.
Use this format for an individual subnet IP address and netmask:
xxx.xxx.xxx.xxx/xx (IPv4)
x:x:x::x/xxx (IPv6)
You can also specify 0.0.0.0/0 (all IPv4) or ::/0 (all IPv6) as the wildcard for either IPv4 or IPv6 traffic.
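A quick way to check that an entry matches these formats, including the wildcards, is Python's ipaddress module; anything that fails to parse as a subnet can be treated as a hostname. The hostname below is a hypothetical example.

```python
import ipaddress

def parse_entry(text):
    """Classify a Server Subnet or Hostname entry (illustrative sketch)."""
    try:
        net = ipaddress.ip_network(text, strict=False)
        if net == ipaddress.ip_network("0.0.0.0/0"):
            return ("all-ipv4", net)   # wildcard: all IPv4 traffic
        if net == ipaddress.ip_network("::/0"):
            return ("all-ipv6", net)   # wildcard: all IPv6 traffic
        return ("subnet", net)
    except ValueError:
        return ("hostname", text)      # not an IP subnet: treat as hostname

print(parse_entry("10.1.0.0/16")[0])             # subnet
print(parse_entry("0.0.0.0/0")[0])               # all-ipv4
print(parse_entry("sharepoint.example.com")[0])  # hostname
```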
Row Filters
• Static—Displays only the static subnet or hostname configurations in the subnet and hostname list. You create a static configuration manually to fine-tune HTTP optimization for a particular host or server subnet. By default, RiOS displays both automatic and static configurations.
• Auto—Displays only the automatic subnet or hostname configurations in the subnet and hostname list. RiOS creates automatic configurations when you select Enable Per-Host Auto Configuration, based on an application profile. Automatic configurations define the optimal combination of URL learning, Parse and Prefetch, and Object Prefetch Table for the host or subnet. By default, RiOS displays both automatic and static configurations.
• Auto (Eval)—Displays the automatic hostname configurations currently under evaluation. By default, the evaluation period is 1000 transactions.
Basic Tuning
Strip Compression
Strips the accept-encoding lines from the HTTP request header so compressed content isn’t returned in calls. An accept-encoding directive asks the server to compress content rather than sending raw HTML. Stripping it improves the performance of the SteelHead data reduction algorithms. By default, strip compression is enabled.
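The effect of this option on a request can be sketched as removing the Accept-Encoding header before the request reaches the server. This is a simplified illustration, not Riverbed code.

```python
def strip_compression(headers):
    """Remove the Accept-Encoding header so the server returns raw content."""
    return {k: v for k, v in headers.items()
            if k.lower() != "accept-encoding"}

request = {"Host": "www.example.com", "Accept-Encoding": "gzip, deflate"}
print(strip_compression(request))  # {'Host': 'www.example.com'}
```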
Insert Cookie
Adds a cookie to HTTP applications that don’t already have one. HTTP applications frequently use cookies to keep track of sessions. The SteelHead uses cookies to distinguish one user session from another. If an HTTP application doesn’t use cookies, the client SteelHead inserts one so that it can track requests from the same client. By default, this setting is disabled.
Insert Keep Alive
Uses the same TCP connection to send and receive multiple HTTP requests and responses, as opposed to opening a new one for every single request and response. Specify this option when using the URL Learning or Parse and Prefetch features with HTTP version 1.0 or HTTP version 1.1 applications using the Connection Close method. By default, this setting is disabled.
Prefetch Schemes
URL Learning
Enables URL Learning, which learns associations between a base URL request and a follow-on request. Stores information about which URLs have been requested and which URLs have generated a 200 OK response from the server. This option fetches the URLs embedded in style sheets or any JavaScript associated with the base page and located on the same host as the base URL.
For example, if a web client requests /a.php?c=0 and then /b.php?c=0, and another client requests /a.php?c=1 and then /b.php?c=1, then when somebody requests /a.php?c=123, RiOS determines that /b.php?c=123 is likely to be requested next and prefetches it for the client.
URL Learning works best with nondynamic content that doesn’t contain session-specific information. URL Learning is enabled by default.
Your system must support cookies and persistent connections to benefit from URL Learning. If your system has cookies turned off and depends on URL rewriting for HTTP state management, or is using HTTP version 1.0 (with no keepalives), you can force the use of cookies using the Add Cookie option and force the use of persistent connections using the Insert Keep Alive option.
Parse and Prefetch
Enables Parse and Prefetch, which parses the base HTML page received from the server and prefetches any embedded objects to the client-side SteelHead. This option complements URL Learning by handling dynamically generated pages and URLs that include state information. When the browser requests an embedded object, the SteelHead serves the request from the prefetched results, eliminating the round-trip delay to the server.
The prefetched objects contained in the base HTML page can be images, style sheets, or any Java scripts associated with the base page and located on the same host as the base URL.
Parse and Prefetch requires cookies. If the application doesn’t use cookies, you can insert one using the Insert Cookie option.
Object Prefetch Table
Enables the Object Prefetch Table, which stores HTTP object prefetches from HTTP GET requests for cascading style sheets, static images, and Java scripts in the Object Prefetch Table. When the browser performs If-Modified-Since (IMS) checks for stored content or sends regular HTTP requests, the client-side SteelHead responds to these IMS checks and HTTP requests, cutting back on round trips across the WAN.
Authentication Tuning
Reuse Auth
Allows an unauthenticated connection to serve prefetched objects, as long as the connection belongs to a session whose base connection is already authenticated.
This option is most effective when the web server is configured to use per-connection NTLM or Kerberos authentication.
Force NTLM
In the case of negotiated Kerberos and NTLM authentication, forces NTLM. Kerberos is less efficient over the WAN because the client must contact the Domain Controller to answer the server authentication challenge and tends to be employed on a per-request basis.
We recommend enabling Strip Auth Header along with this option.
Strip Auth Header
Removes all credentials from the request on an already authenticated connection. This method works around Internet Explorer behavior that reauthorizes connections that have previously been authorized.
This option is most effective when the web server is configured to use per-connection NTLM authentication.
If the web server is configured to use per-request NTLM authentication, enabling this option might cause authentication failure.
Gratuitous 401
Prevents a WAN round trip by issuing the first 401 containing the realm choices from the client-side SteelHead.
We recommend enabling Strip Auth Header along with this option.
This option is most effective when the web server is configured to use per-connection NTLM authentication or per-request Kerberos authentication.
If the web server is configured to use per-connection Kerberos authentication, enabling this option might cause additional delay.
FPSE
Enables Microsoft FrontPage Server Extensions (FPSE) protocol optimization. FPSE is one of the protocols in the FrontPage protocol suite. FPSE comprises a set of SharePoint server-side applications that let users simultaneously collaborate on the same website and web server, enabling multiuser authoring. The protocol displays site content as a file system and allows file downloading, uploading, creation, listing, and locking. FPSE uses HTTP for transport.
RiOS 8.5 and later cache and respond locally to some FPSE requests, saving at least five round trips per request and improving performance. SSL connections and files smaller than 5 MB can see significant performance improvements.
FPSE supports SharePoint Office 2007/2010 clients installed on Windows XP and Windows 7 and SharePoint Server 2007/2010.
SharePoint 2013 doesn’t use the FPSE protocol when users are editing files. It uses WebDAV when users map SharePoint drives to local machines and browse directories.
FPSE is disabled by default.
Choose Reports > Networking: Current Connections to view the HTTP-SharePoint connections. To display only HTTP-SharePoint connections, click add filter in the Query area, select for application from the drop-down menu, select HTTP-SharePoint, and click Update.
WebDAV
Enables Microsoft Web Distributed Authoring and Versioning (WebDAV) protocol optimization. WebDAV is an open-standard extension to the HTTP version 1.1 protocol that enables file management on remote web servers. Some of the many Microsoft components that use WebDAV include WebDAV redirector, Web Folders, and SMS/SCCM.
RiOS predicts and prefetches WebDAV responses, which saves multiple round trips and makes browsing the SharePoint file repository more responsive.
WebDAV optimization is disabled by default.
Choose Reports > Networking: Current Connections to view the HTTP-SharePoint connections. To display only HTTP-SharePoint connections, click add filter in the Query area, select for application from the drop-down menu, select HTTP-SharePoint, and click Update.
Add
Adds the subnet or hostname.
MAPI
MAPI optimization is included with the BASE license and is enabled by default.
RiOS uses the SteelHead secure inner channel to ensure all MAPI traffic sent between the client-side and the server-side SteelHeads is secure. You must set the secure peering traffic type to All. For detailed information on configuring secure peers, see the SteelHead User Guide.
You must enable MAPI optimization on all SteelHeads optimizing MAPI in your network, not just the client-side SteelHead.
You can display and modify MAPI optimization settings for the selected optimization policy on the Protocols MAPI page. For detailed information about the MAPI optimization, see the SteelHead User Guide.
These configuration options are available:
Enable MAPI Exchange Optimization
Enables the fundamental component of the MAPI optimization module, which includes optimization for read, write (receive, send), and sync operations.
By default, MAPI Exchange optimization is enabled.
Only clear this check box to disable MAPI optimization. Typically, you disable MAPI optimization to troubleshoot problems with the system. For example, if you’re experiencing problems with Outlook clients connecting with Exchange, you can disable MAPI latency acceleration (while continuing to optimize with SDR for MAPI).
Exchange Port
Specifies the MAPI Exchange port for optimization. Typically, you don’t need to modify the default value, 7830.
Enable Outlook Anywhere Optimization
Enables Outlook Anywhere latency optimization. Outlook Anywhere is a feature of Microsoft Exchange Server 2003, 2007, and 2010 that allows Microsoft Office Outlook 2003, 2007, and 2010 clients to connect to their Exchange Servers over the internet using the Microsoft RPC tunneling protocol. Outlook Anywhere allows for a VPN-less connection as the MAPI RPC protocol is tunneled over HTTP or HTTPS. RPC over HTTP can transport regular or encrypted MAPI. If you use encrypted MAPI, the server-side SteelHead must be a member of the Windows domain.
Enable this feature on the client-side and server-side SteelHeads. Both SteelHeads must be running RiOS 6.5 or later.
By default, this feature is disabled.
To use this feature, you must also enable HTTP Optimization on the client-side and server-side SteelHeads (HTTP optimization is enabled by default).
HTTP optimization is unavailable in cloud appliance models. This feature may become available in future releases of those models.
If you’re using Outlook Anywhere over HTTPS, you must enable SSL and the IIS certificate must be installed on the server-side SteelHead:
• When using HTTP, Outlook can only use NTLM proxy authentication.
• When using HTTPS, Outlook can use NTLM or Basic proxy authentication.
• When using encrypted MAPI with HTTP or HTTPS, you must enable and configure encrypted MAPI in addition to this feature.
Outlook Anywhere optimized connections can’t start MAPI prepopulation.
After you apply your settings, you can verify that the connections appear in the Current Connections report as a MAPI-OA or an eMAPI-OA (encrypted MAPI) application. The Outlook Anywhere connection entries appear in the system log with an RPCH prefix.
Outlook Anywhere creates twice as many connections on the SteelHead as regular MAPI does. Enabling Outlook Anywhere latency optimization therefore causes the SteelHead to enter admission control twice as fast as with regular MAPI.
For details and troubleshooting information, see the SteelHead Deployment Guide - Protocols.
For details about enabling Outlook Anywhere, see http://technet.microsoft.com/en-us/library/bb123513(EXCHG.80).aspx.
Auto-Detect Outlook Anywhere Connections
Automatically detects the RPC over HTTPS protocol used by Outlook Anywhere. This option is dimmed until you enable Outlook Anywhere optimization. By default, this option is enabled.
You can enable automatic detection of RPC over HTTPS using this option or you can set in-path rules. Autodetect is best for simple SteelHead configurations with only a single SteelHead at each site and when the IIS server is also handling websites.
If the IIS server is used only as an RPC proxy, or for configurations with asymmetric routing, connection forwarding, or Interceptor installations, add in-path rules that identify the RPC proxy server IP addresses and select the Outlook Anywhere latency optimization policy. After adding the in-path rule, disable the autodetect option.
On an Interceptor, add load-balancing rules to direct traffic for RPC Proxy to the same SteelHead.
In-path rules interact with autodetect as follows:
• When autodetect is enabled and the in-path rule doesn’t match, RiOS optimizes Outlook Anywhere if it detects the RPC over HTTPS protocol.
• When autodetect isn’t enabled and the in-path rule doesn’t match, RiOS doesn’t optimize Outlook Anywhere.
• When autodetect is enabled and the in-path rule matches with HTTP only, RiOS doesn’t optimize Outlook Anywhere (even if it detects the RPC over HTTPS protocol).
• When autodetect isn’t enabled and the in-path rule matches with HTTP only, RiOS doesn’t optimize Outlook Anywhere.
• When autodetect is enabled and the in-path rule matches with an Outlook Anywhere latency optimization policy, RiOS optimizes Outlook Anywhere (even if it doesn’t detect the RPC over HTTPS protocol).
• When autodetect isn’t enabled and the in-path rule matches with Outlook Anywhere, RiOS optimizes Outlook Anywhere.
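The six bullets above reduce to a small decision function. This sketch only restates the matrix and is not Riverbed code; the rule_match values are hypothetical labels.

```python
def optimize_outlook_anywhere(autodetect, rule_match, rpc_over_https_detected):
    """rule_match: None (no match), 'http-only', or 'outlook-anywhere'."""
    if rule_match == "outlook-anywhere":
        return True    # rule wins, with or without autodetect or detection
    if rule_match == "http-only":
        return False   # never optimized, even if RPC over HTTPS is detected
    # No matching rule: optimize only if autodetect sees RPC over HTTPS.
    return autodetect and rpc_over_https_detected

print(optimize_outlook_anywhere(True, None, True))                  # True
print(optimize_outlook_anywhere(True, "http-only", True))           # False
print(optimize_outlook_anywhere(False, "outlook-anywhere", False))  # True
```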
Enable Secured Traffic Optimization
Enables encrypted MAPI RPC traffic optimization between Outlook and Exchange. By default, this option is disabled.
The basic steps to enable encrypted optimization are:
1. Choose Networking > Active Directory: Domain Join and join the server-side SteelHead to the same Windows Domain that the Exchange Server belongs to and operates as a member server. An adjacent domain can be used (through cross-domain support) if the SteelHead is running RiOS 6.1 or later. It isn’t necessary to join the client-side SteelHead to the domain.
2. Verify that Outlook is encrypting traffic.
3. Enable this option on all SteelHeads involved in optimizing MAPI encrypted traffic.
4. RiOS supports both NTLM and Kerberos authentication. To use Kerberos authentication, select Enable Kerberos Authentication support on both the client-side and server-side SteelHeads. Both SteelHeads must be running RiOS 7.0 or later. Windows 7 clients must not be configured to use NTLM only.
In RiOS 7.0 and later, Windows 7 MAPI clients must use Delegation mode unless you join the server-side SteelHead using Active Directory integration for Windows 2003 or 2008. Transparent mode is the default in RiOS 6.5 and later. Use Transparent mode for all other clients, and for Windows 7 MAPI clients when the server-side SteelHead is joined in Active Directory integrated mode.
5. Restart the service on all SteelHeads that have this option enabled.
In deployments running RiOS 6.1.x with MAPI encryption enabled, Windows 7 clients can’t connect to a Microsoft Exchange cluster even after auto or manual delegation mode is configured. You must configure the Active Directory delegate user with the Exchange cluster node service exchangeMDB. By default, Exchange 2003 and 2007 cluster nodes don’t have the exchangeMDB service; these clusters must be defined manually in a domain controller. If your configuration includes an Exchange cluster working with encrypted MAPI optimization, you must use manual delegation mode. For details, see the SteelHead Deployment Guide - Protocols.
Both the server-side and client-side SteelHeads must be running RiOS 5.5.x or later.
When this option is enabled and Enable MAPI Exchange 2007 Acceleration is disabled on either SteelHead, MAPI Exchange 2007 acceleration remains in effect for unencrypted connections.
NTLM Transparent Mode
Provides encrypted MAPI optimization with transparent authentication. The server-side SteelHead uses NTLM to authenticate users. We recommend using this mode for the simplest configuration. Transparent mode is the default in RiOS 9.6 and later.
Enable Kerberos Authentication Support
Provides encrypted MAPI optimization with end-to-end authentication using Kerberos. The server-side SteelHead uses Kerberos to authenticate users.
The server-side SteelHead must be running RiOS 7.0.x or later.
Enable NTLM Support
Provides encrypted MAPI optimization for connections that use NTLM authentication.
NTLM Transparent Mode
Provides encrypted MAPI optimization using transparent authentication.
NTLM Delegation Mode
Provides encrypted MAPI optimization using the Kerberos delegation architecture. Select this mode if you’re encrypting MAPI traffic for Windows 7 or earlier client versions. The server-side SteelHead must be running RiOS 6.1 or later.
Enable Transparent Prepopulation
Enables a mechanism for sustaining Microsoft Exchange MAPI connections between the client and server even after the Outlook client has shut down. This method allows email data to be delivered between the Exchange Server and the client-side SteelHead while the Outlook client is offline or inactive. When a user logs into their Outlook client, the mail data is already prepopulated on the client-side SteelHead. This accelerates the first access of the client’s email, which is retrieved with LAN-like performance.
Transparent prepopulation creates virtual MAPI connections to the Exchange Server for Outlook clients that are offline. When the remote SteelHead detects that an Outlook client has shut down, the virtual MAPI connections are triggered. The remote SteelHead uses these virtual connections to pull mail data from the Exchange Server over the WAN link.
You must enable this control on the server-side and client-side SteelHeads. By default, MAPI transparent prepopulation is enabled.
MAPI prepopulation doesn’t use any additional Client Access Licenses (CALs). The SteelHead holds open an existing authenticated MAPI connection after Outlook is shut down. No user credentials are used or saved by the SteelHead when performing prepopulation.
The client-side SteelHead controls MAPI v2 prepopulation, which allows for a higher rate of prepopulated sessions and enables MAPI prepopulation to take advantage of the read-ahead feature in the MAPI optimization blade.
MAPI v2 prepopulation is supported in RiOS 6.0.4 and later, 6.1.2 and later, and 6.5 and later. The client-side and server-side SteelHead can be running any of these code train levels and provide prepopulation v2 capabilities. For example, a client-side SteelHead running RiOS 6.0.4 connecting to a server-side SteelHead running RiOS 6.5 provides MAPI v2 prepopulation capabilities. In contrast, a 6.0.1a client-side SteelHead connecting to a RiOS 6.5 server-side SteelHead supports MAPI v1 prepopulation, but doesn’t provide MAPI v2 prepopulation.
If a user starts a new Outlook session, the MAPI prepopulation session terminates. If for some reason the MAPI prepopulation session doesn’t terminate (for example, the user starts a new session in a location that’s different than the SteelHead that has the MAPI prepopulation session active), the MAPI prepopulation session eventually times out per the configuration setting.
MAPI transparent prepopulation isn’t started with Outlook Anywhere connections.
Max Connections Percentage (%)
Specifies, as a percentage, the maximum number of virtual MAPI connections to the Exchange Server for Outlook clients that have shut down. Setting the maximum connections percentage limits the aggregate load on all Exchange Servers through the configured SteelHead. The default value is 25%.
You must configure the maximum connections on both the client-side and server-side of the network. In RiOS 7.0 and later, the maximum connections setting is only used by the client-side SteelHead.
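The percentage cap can be illustrated with simple arithmetic against an assumed per-model optimized-connection limit; the 2300 figure below is hypothetical, chosen only for illustration.

```python
def max_virtual_connections(model_connection_limit, percentage=25):
    """Cap on virtual MAPI prepopulation connections (illustrative sketch)."""
    return model_connection_limit * percentage // 100

# Assuming a hypothetical model limit of 2300 optimized connections:
print(max_virtual_connections(2300))      # 575 with the 25% default
print(max_virtual_connections(2300, 10))  # 230
```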
Poll Interval (minutes)
Sets how often, in minutes, the appliance checks the Exchange Server for newly arrived email on each of its virtual connections. The default value is 20.
Time Out (hours)
Specifies the number of hours after which to time-out virtual MAPI connections. When this threshold is reached, the virtual MAPI connection is terminated. The time-out is enforced on a per-connection basis. Time-out prevents a buildup of stale or unused virtual connections over time. The default value is 96.
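The poll interval and time-out defaults on this page can be sketched as two simple per-connection checks. This is a simplified illustration, not Riverbed code.

```python
from datetime import datetime, timedelta

POLL_INTERVAL = timedelta(minutes=20)  # default Poll Interval
TIME_OUT = timedelta(hours=96)         # default Time Out

def should_poll(last_poll, now):
    """True when a virtual connection is due to check for new email."""
    return now - last_poll >= POLL_INTERVAL

def expired(opened_at, now):
    """True when a virtual connection has aged out and must terminate."""
    return now - opened_at >= TIME_OUT

now = datetime(2024, 1, 5, 12, 0)
print(should_poll(datetime(2024, 1, 5, 11, 30), now))  # True (30 minutes)
print(expired(datetime(2024, 1, 1, 12, 0), now))       # True (96 hours)
```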
Enable MAPI over HTTP Optimization
HTTP optimization is unavailable in cloud appliance models. This feature may become available in future releases of those models.
Enables bandwidth and latency optimization for the MAPI over HTTP transport protocol on a client-side SteelHead. You must also create an in-path rule using the Exchange Autodetect latency optimization policy to differentiate and optimize MAPI over HTTP traffic.
Microsoft implements the MAPI over HTTP transport protocol in Outlook 2010 update, Outlook 2013 SP1, and Exchange Server 2013 SP1.
You must enable SSL optimization and install the server SSL certificate on the server-side SteelHead.
Both the client-side and server-side SteelHeads must be running RiOS 9.2 or later to receive full bandwidth and latency optimization.
RiOS 9.7 and later support admission control for MAPI over HTTP. Admission control maximizes the value of SteelHeads by allowing MAPI over HTTP optimization to degrade gracefully as a SteelHead nears capacity. Admission control helps maintain optimization for as many users as possible without pushing the SteelHead over capacity. Admission control for MAPI over HTTP is disabled by default. You enable this feature in the CLI with this command:
[no] admission control mapi enable
For details, see the Riverbed Command-Line Interface Reference Guide.
To view the MAPI over HTTP optimized connections, choose Reports > Networking: Current Connections. A successful connection appears as MAPI-HTTP in the Application column.
Enable Exchange 2003 Support
Enables MAPI 2003 support. This feature increases optimization of traffic between Exchange 2003 and Outlook 2003. This feature does not apply to RiOS 7.0.0 and above.
Enable Exchange 2007+ Support
Enable MAPI NSPI
NSPI is the address book subcomponent of the Exchange protocol. Enable this feature to perform latency optimization for the connection when using the Exchange 2000 Server or when the client is not using Cached Exchange mode.
Specify the Name Service Provider Interface (NSPI) port for MAPI in the NSPI Port. The default value is 7840.
NFS
You can display and modify NFS optimization settings for the selected optimization policy on the Protocols NFS page.
NFS optimization provides latency optimization improvements for NFS operations by prefetching data, storing it on the client appliance for a short amount of time, and using it to respond to client requests. You enable NFS optimization in high-latency environments.
You can configure NFS settings globally for all servers and volumes or you can configure NFS settings that are specific to particular servers or volumes. When you configure NFS settings for a server, the settings are applied to all volumes on that server unless you override settings for specific volumes.
NFS optimization isn’t supported in an out-of-path deployment.
NFS optimization is supported only for NFSv3. When a transaction using NFSv2 or NFSv4 is optimized, the NFS latency module can’t be used and an alarm is triggered. Bandwidth optimizations (SDR and LZ compression) still apply.
For detailed information about NFS optimization, see the SteelHead User Guide.
Settings
These configuration options are available:
Enable NFS Optimization
On the client-side SteelHead, optimizes NFS where NFS performance over the WAN is impacted by a high-latency environment. By default, this control is enabled.
These controls are ignored on server-side SteelHeads. When you enable NFS optimization on a server-side SteelHead, RiOS uploads the NFS configuration information for a connection from the client-side SteelHead to the server-side SteelHead when it establishes the connection.
NFS v2 and v4 Alarms
Enables an alarm when RiOS detects NFSv2 and NFSv4 traffic. When the alarm triggers, the SteelHead displays the Needs Attention health state. The alarm provides a link to this page and a button to reset the alarm.
Default Server Policy
Specifies one of these server policies for NFS servers:
• Custom—Specifies a custom policy for the NFS server.
• Global Read-Write—Specifies a policy that provides data consistency rather than performance. All of the data can be accessed from any client, including LAN-based NFS clients (which don’t go through the SteelHeads) and clients using other file protocols such as CIFS. This option severely restricts the optimization that can be applied without introducing consistency problems. This is the default configuration.
• Read-only—Specifies that the clients can read the data from the NFS server or volume but can’t make changes.
The default server policy is used to configure any connection to a server that doesn’t have a policy.
Default Volume Policy
Specifies one of these volume policies for NFS volumes:
• Custom—Specifies a custom policy for the NFS volume.
• Global Read-Write—Specifies a policy that provides data consistency rather than performance. All of the data can be accessed from any client, including LAN-based NFS clients (which don’t go through the SteelHeads) and clients using other file protocols such as CIFS. This option severely restricts the optimization that can be applied without introducing consistency problems. This is the default configuration.
• Read-only—Specifies that the clients can read the data from the NFS server or volume but can’t make changes.
The default volume policy is used to configure a volume that doesn’t have a policy.
Override NFS protocol settings
You can add server configurations to override your default settings. You can also modify or remove these configuration overrides. If you don’t override settings for a server or volume, the appliance uses the global NFS settings.
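The override behavior described above amounts to a simple lookup: a volume-specific setting wins over a server-specific setting, which wins over the global default. The sketch below illustrates this precedence with hypothetical function and variable names; it is not RiOS code.

```python
# Illustrative sketch of NFS policy resolution. A volume-level override
# wins over a server-level override, which wins over the global default.
# All names here are hypothetical, not RiOS internals.

DEFAULT_POLICY = "Global Read-Write"   # the documented default


def effective_policy(server, volume, server_overrides, volume_overrides):
    """Return the policy applied to a given server/volume pair."""
    if (server, volume) in volume_overrides:
        return volume_overrides[(server, volume)]
    if server in server_overrides:
        return server_overrides[server]
    return DEFAULT_POLICY


# Example: one server forced read-only, one of its volumes customized.
servers = {"nfs1.example.com": "Read-only"}
volumes = {("nfs1.example.com", "/exports/scratch"): "Custom"}

print(effective_policy("nfs1.example.com", "/exports/home", servers, volumes))
# → Read-only (server override applies)
print(effective_policy("nfs2.example.com", "/data", servers, volumes))
# → Global Read-Write (no override, global default applies)
```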
These configuration options are available:
Add a New NFS Server
Displays the controls to add an NFS server configuration.
Server Name
Specifies the name of the server.
Server IP Addresses
Specifies the IP addresses of the server, separated by commas. If you have configured IP aliasing (multiple IP addresses) for an NFS server, you must specify all of the server IP addresses. Click Add when finished.
Add
Adds the configuration to the NFS Servers list.
To modify server properties, in the table row for the server, click the NFS Server Name to display controls you can use to modify server properties. Complete the configuration as above.
Windows domain authentication
This section describes how to configure an appliance to optimize in an environment where there are:
• Microsoft Windows file servers using signed SMB for file sharing to Microsoft Windows clients.
• Microsoft Exchange Servers providing an encrypted MAPI communication to Microsoft Outlook clients.
• Microsoft Internet Information Services (IIS) web servers running HTTP or HTTP-based web applications.
There are also CLI commands available that serve as a troubleshooting tool to identify, diagnose, and report possible problems with an appliance within a Windows domain environment. For details, see the Riverbed Command-Line Interface Reference Guide, the SteelHead Deployment Guide, and the SteelHead User Guide.
Active Directory Service Accounts
Kerberos end-to-end authentication relies on Active Directory replication to obtain machine credentials for any servers that require secure protocol optimization. Joining the managed appliance to a Windows domain is no longer necessary to enable Kerberos authentication.
The RiOS replication mechanism requires a domain user with AD replication privileges, and involves the same AD protocols used by Windows domain controllers. These procedures explain how to configure replication to use Kerberos authentication for these features:
• SMB signing
• SMB2 signing
• Encrypted MAPI and encrypted Outlook Anywhere
• HTTP or HTTP-based traffic
These configuration options are available:
Add a New User
Displays the controls to add a user with replication privileges to a domain.
You can add one replication user per forest.
Active Directory Domain Name
Specifies the AD domain in which you want to make the replication user a trusted member. For example, SIGNING.TEST. The SteelHead replicates accounts from this domain. To facilitate configuration, you can use wildcards in the domain name. For example, *.nbttech.com. You can’t specify a single-label domain name (a name without anything after the dot), as in riverbed instead of riverbed.com.
Service Account Domain
Specifies the domain the user belongs to, if different from the Active Directory domain name. We recommend that you configure the user domain as close to the root as possible.
Service Account Name
Specifies the replication username. The user must have Active Directory replication privileges. The username can be an administrator; a replication user that’s an administrator already has the necessary replication privileges. The maximum username length is 20 characters. The username can’t contain any of these characters:
/ \ [ ] : ; | = , + * ? < > @ "
The system translates the username into uppercase to match the registered server realm information.
Password
Specifies the account password.
Password Confirm
Confirms the account password.
Add
Adds the user.
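The naming rules above (20-character limit, forbidden characters, uppercase translation, no single-label domains) can be sketched as simple checks. This is an illustrative sketch only; RiOS performs its own validation, and the function names are hypothetical.

```python
# Illustrative sketch of the documented account-name rules. Not RiOS
# code; function names are hypothetical.

# The characters the documentation forbids in a username:
FORBIDDEN = set('/\\[]:;|=,+*?<>@"')


def valid_service_account_name(name):
    """At most 20 characters and no forbidden characters."""
    return 0 < len(name) <= 20 and not (set(name) & FORBIDDEN)


def registered_form(name):
    """The system translates the name to uppercase to match the realm."""
    return name.upper()


def valid_domain_name(domain):
    """Wildcards are allowed; single-label names are not."""
    labels = [p for p in domain.rstrip(".").split(".") if p]
    return len(labels) >= 2


print(valid_service_account_name("repl_user"))   # → True
print(valid_service_account_name("bad@user"))    # → False (@ is forbidden)
print(registered_form("repl_user"))              # → REPL_USER
print(valid_domain_name("*.nbttech.com"))        # → True (wildcard allowed)
print(valid_domain_name("riverbed"))             # → False (single-label)
```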
Enable Kerberos support for restricted trust environments
Enables Kerberos support for restricted trust environments. Kerberos restricted trust includes trust models with split resource and management Active Directory domains such as Office 365 or other managed service providers. For details about restricted trust configurations, see the SteelHead Deployment Guide - Protocols.
Windows XP clients must use TCP for Kerberos in a one-way trust configuration. By default, Kerberos uses UDP. You must change UDP to TCP in a Registry setting.
NTLM
These configuration options are available:
Add a New User
Displays the controls to add a user with trusted delegation rights to a domain. You can only add one delegate user per domain. A delegate user is required in each of the domains where a server is going to be optimized.
Active Directory Domain Name
Specifies the delegation domain in which you want to make the delegate user a trusted member. For example, SIGNING.TEST. You can’t specify a single-label domain name (a name without anything after the dot), as in riverbed instead of riverbed.com.
Service Account Domain
Specifies the domain the user belongs to, if different from the Active Directory domain name. We recommend that you configure the user domain as close to the root as possible.
User Name
Specifies the username. The maximum length is 20 characters. The username can’t contain any of these characters: / \ [ ] : ; | = , + * ? < > @ "
The system translates the username into uppercase to match the registered server realm information.
Password
Specifies the account password.
Password Confirm
Confirms the account password.
Add
Adds the account.
Delegation mode
These configuration options are available:
Delegation Mode: Manual
Enables transparent authentication using NTLM and provides more control by letting you specify the exact servers on which to perform optimization. When you select this mode, you must specify each server on which to delegate and sign for each domain using the Delegate-Only and Delegate-All-Except controls. This is the default setting.
Delegation Mode: Auto
Enables delegate user authentication and automatically discovers the servers on which to delegate and sign. Automatic discovery eliminates the need to set up the servers on which to delegate and sign for each domain. This mode requires additional configuration. For details, see autodelegation mode.
A delegate user is required in each of the domains where a server is going to be optimized.
Allow delegated authentication to these servers (Delegate-Only)
Intercepts the connections destined for the servers in this list. By default, this setting is enabled. Specify the file server IP addresses for SMB signed or MAPI encrypted traffic in the text box, separated by commas.
You can switch between the Delegate-Only and Delegate-All-Except controls without losing the list of IP addresses for the control. Only one list is active at a time.
Allow delegated authentication to all servers except the following (Delegate-All-Except)
Intercepts all of the connections except those destined for the servers in this list. Specify the file server IP addresses that don’t require SMB signing or MAPI encryption in the text box, separated by commas. By default, this setting is disabled. Only the file servers that don’t appear in the list are signed or encrypted.
You must register any servers not on this list with the domain controller, or use autodelegation mode.
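The two controls above implement opposite filters over the same kind of IP list. The sketch below illustrates the decision as described; it is hypothetical code, not RiOS internals.

```python
# Illustrative sketch of the Delegate-Only / Delegate-All-Except
# decision. Names are hypothetical; not RiOS code.


def intercept(server_ip, mode, ip_list):
    """Return True if delegated authentication applies to this server.

    mode is "delegate-only" (intercept only listed servers) or
    "delegate-all-except" (intercept everything except listed servers).
    """
    if mode == "delegate-only":
        return server_ip in ip_list
    if mode == "delegate-all-except":
        return server_ip not in ip_list
    raise ValueError("unknown mode: " + mode)


file_servers = {"10.0.0.5", "10.0.0.6"}

print(intercept("10.0.0.5", "delegate-only", file_servers))        # → True
print(intercept("10.0.0.9", "delegate-only", file_servers))        # → False
print(intercept("10.0.0.9", "delegate-all-except", file_servers))  # → True
```

Note that, as the documentation states, both lists are retained when you switch modes, but only one list is active at a time.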
SSL main settings
You can display and modify SSL Main optimization settings for the selected optimization policy on the SSL Main Settings page. Enabling SSL allows you to accelerate encrypted traffic (for example, HTTPS).
For detailed information, see the SteelHead User Guide.
TLS Blade Configuration
These configuration options are available:
Enable TLS Optimization
Enables optimization of secure traffic, which accelerates applications that use TLS for encryption. Must be enabled on both the client-side and server-side SteelHeads. Using in-path rules, you can choose to enable TLS optimization only on certain sessions (based on source and destination addresses, subnets, and ports), on all sessions, or on no sessions at all. A TLS session that is not optimized simply passes through the appliance unmodified. Disabled by default.
OCSP Stapling Support
Enables Online Certificate Status Protocol (OCSP) stapling. OCSP is an alternative approach to obtain certificate status from the OCSP servers instead of the origin server’s Public Key Infrastructure (PKI). Enable on server-side appliances.
• Off disables OCSP. Disabled by default.
• Strict bypasses the connection if the origin server does not support OCSP.
• Strict AIA bypasses the connection if the certificate included an Authority Information Access (AIA) field but the origin server failed to send an OCSP response. If the certificate did not include an AIA field and the origin server failed to send an OCSP response, the connection is not dropped because the server-side appliance does not expect an OCSP response.
• Loose does not bypass the connection if the origin server does not support OCSP.
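The four stapling modes above reduce to a small decision table over two facts: whether the origin server returned an OCSP response, and whether the certificate includes an AIA field. The sketch below illustrates this table; it is hypothetical code, not RiOS internals.

```python
# Illustrative sketch of the documented OCSP stapling modes. Names are
# hypothetical; not RiOS code.


def bypass_connection(mode, server_sent_ocsp, cert_has_aia):
    """Return True when the connection is bypassed (not optimized)."""
    if mode == "off" or server_sent_ocsp:
        return False          # OCSP disabled, or a response was received
    if mode == "strict":
        return True           # any missing response bypasses
    if mode == "strict-aia":
        return cert_has_aia   # bypass only if the cert promised OCSP
    if mode == "loose":
        return False          # missing OCSP never bypasses
    raise ValueError("unknown mode: " + mode)


print(bypass_connection("strict", False, False))      # → True
print(bypass_connection("strict-aia", False, False))  # → False (no AIA field)
print(bypass_connection("loose", False, True))        # → False
```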
Server Certificates (SSL)
In the Add a New SSL Certificate section, configure the following options:
Import Certificate and Private Key
Imports the certificate and key. The page displays controls for browsing to and uploading the certificate and key files. You can also use the text box to copy and paste a PEM file. The private key is required regardless of whether you’re adding or updating the certificate.
Certificate
Specifies the action:
• Upload—Browse to the local file in PKCS-12, PEM, or DER formats.
• Paste it here (PEM only)—Copy and then paste the contents of a PEM file.
Private Key
Specifies the private key origin.
• The Private Key is in a separate file (see below)—You can either upload it or copy and paste it.
• This file includes the Certificate and Private Key
• The Private Key for this Certificate was created with a CSR generated on this appliance.
Separate Private Key
Specifies the action:
• Upload (PEM or DER formats)—Browse to the local file in PEM or DER formats.
• Paste it here (PEM only)—Paste the contents of a PEM file.
Decryption Password
Specifies the decryption password, if necessary. Passwords are required for PKCS-12 files, optional for PEM files, and never needed for DER files.
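The password rules above vary by file format: required for PKCS-12, optional for PEM, never used for DER. A minimal sketch of that rule table, with hypothetical names:

```python
# Illustrative sketch of the documented decryption-password rules.
# Names are hypothetical; not RiOS code.

PASSWORD_RULE = {
    "pkcs12": "required",  # PKCS-12 files always need a password
    "pem": "optional",     # PEM files may or may not be encrypted
    "der": "never",        # DER files never need one
}


def password_ok(file_format, password):
    """Return True if the supplied password satisfies the format's rule."""
    rule = PASSWORD_RULE[file_format.lower()]
    if rule == "required":
        return bool(password)
    return True  # optional or unused


print(password_ok("pkcs12", ""))        # → False (password required)
print(password_ok("pem", ""))           # → True (optional)
print(password_ok("der", "ignored"))    # → True (never needed)
```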
Import Certificate and Key
Imports the certificate and key.
Generate Self-Signed Certificate and New Private Key
Generates a new private key and self-signed public certificate:
• Common name—Specify the common name (hostname) of the peer.
• Organization Name—Specify the organization name (for example, the company).
• Organization Unit Name—Specify the organization unit name (for example, the section or department).
• Locality—Specify the city.
• State (no abbreviations)—Specify the state.
• Country (2-letter code)—Specify the country (2-letter code only).
• Email Address—Specify the email address of the contact person.
• Validity Period (Days)—Specify how many days the certificate is valid. The default value is 730.
Private Key Cipher Bits
Specifies the key length from the drop-down list. The default value is 1024.
Generate SCC CA Signed Certificate and New Private Key
Generates a private key and CSR.
To generate a CSR, under Web Certificate, select the Generate CSR tab and complete these configuration options:
• Common Name (required)—Specify the common name (hostname) of the peer.
• Organization Name—Specify the organization name (for example, the company).
• Organization Unit Name—Specify the organization unit name (for example, the section or department).
• Locality—Specify the city.
• State—Specify the state. Don’t abbreviate.
• Country (2-letter code)—Specify the country (2-letter code only).
• Email Address—Specify the email address of the contact person.
• Validity Period (Days)—Specify how many days the certificate is valid. The default value is 730.
Generate Certs for Bypassed Servers
Generates SCC CA signed certificates for bypassed servers. The SteelHead passes the connection through unoptimized without affecting connection counts.
Secure peering (SSL)
You configure SSL peers for the selected optimization policy in the Secure Peering (SSL) page.
Secure, encrypted peering extends beyond traditional SSL traffic encryption. In addition to SSL-based traffic like HTTPS that always needs a secure connection between the client-side and the server-side appliance, you can also secure other types of traffic such as:
• MAPI-encrypted, SMB1, and SMB2-signed traffic.
• Citrix traffic (RiOS 7.0 and later).
• All other traffic that inherently doesn’t require a secure connection.
In RiOS 9.0 and later, SSL secure peering and secure transport traffic can co-exist. For details about SSL, see the SteelHead User Guide.
The Secure Peering (SSL) page contains these groups of settings:
SSL secure peering settings
These configuration options are available:
Enable Quantum-Safe Support
Enables post-quantum cryptography (PQC) protection for inner-channel connections between server-side and client-side appliances. This feature provides protection against “harvest now, decrypt later” attacks by utilizing a hybrid key exchange method that employs classical and quantum safe module lattice-based key encapsulation (ML-KEM) standards available in OpenSSL 3.5 and later. Disabled by default.
Traffic Type
Specifies one of these traffic types from the drop-down list:
• SSL Only—The peer client-side appliance and the server-side appliance authenticate each other and then encrypt and optimize all SSL traffic: for example, HTTPS traffic on port 443. This is the default setting.
• SSL and Secure Protocols—The peer client-side appliance and the server-side appliance authenticate each other and then encrypt and optimize all traffic traveling over these secure protocols: SSL, SMB signed, and encrypted MAPI. When you select this traffic type, SMB-signing and MAPI encryption must be enabled. Enabling this option requires an optimization service restart.
SMB-signing, MAPI encryption, or Secure ICA encryption must be enabled on both the client-side and server-side appliances when securing SMB-signed traffic, encrypted MAPI traffic, or encrypted Citrix ICA traffic (RiOS 7.0).
Enabling this option requires an optimization service restart.
• All—The peer client-side appliance and the server-side appliance authenticate each other and then encrypt and optimize all traffic. Only the optimized traffic is secure; pass-through traffic isn’t. Enabling this option requires an optimization service restart.
Selecting All can cause up to a 10 percent performance decline in higher-capacity appliances. Take this performance metric into account when sizing a complete secure appliance peering environment.
Fallback to No Encryption
Specifies that the appliance optimizes but doesn’t encrypt the connection when it is unable to negotiate a secure, encrypted inner channel connection with the peer. This is the default setting. Enabling this option requires an optimization service restart.
We recommend enabling this setting on both the client-side and the server-side appliances, especially in mixed deployments where one appliance is running RiOS 6.0 or later and the other SteelHead is running an earlier RiOS version.
This option applies only to non-SSL traffic and is unavailable when you select SSL Only as the traffic type.
Clear the check box to pass through connections that can’t establish a secure encrypted inner channel with the peer. Use caution when disabling this setting, because doing so specifies that traffic between nonsecure peers is strictly not optimized, so configurations with this setting disabled risk dropped connections. For example, consider a configuration with a client-side SteelHead running RiOS 5.5.x or earlier and a server-side SteelHead running RiOS 6.0 or later. When this setting is disabled on the server-side SteelHead and All is selected as the traffic type, the server-side SteelHead doesn’t optimize the connection when a secure channel is unavailable, and can drop it.
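The fallback behavior described above can be summarized as a three-way decision. The sketch below illustrates it with hypothetical names; it is not RiOS code.

```python
# Illustrative sketch of the Fallback to No Encryption decision.
# Names are hypothetical; not RiOS code.


def handle_connection(secure_channel_ok, fallback_enabled):
    """Decide what happens when negotiating the inner channel."""
    if secure_channel_ok:
        return "optimize-encrypted"
    if fallback_enabled:
        return "optimize-unencrypted"   # the default behavior
    return "pass-through-or-drop"       # risk of dropped connections


print(handle_connection(True, True))    # → optimize-encrypted
print(handle_connection(False, True))   # → optimize-unencrypted
print(handle_connection(False, False))  # → pass-through-or-drop
```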
Trusted peering CAs and peer certificates
You can add and view these types of entities:
• Certificates of trusted peers.
• Certificates of trusted Certificate Authorities (CAs) that may sign certificates for peers.
These configuration options are available:
Add a New Trusted Entity
Displays the controls for adding trusted entities.
Trust Existing CA
Specifies an existing CA from the drop-down list.
Trust New Certificate
Adds a new CA or peer certificate. The appliance supports RSA and DSA for peering trust entities.
Optional Local Name
Specifies a local name for the entity (for example, the fully qualified domain name).
Local File
Browses to the local file.
Cert Text
Pastes the content of the certificate text file into the text box.
Add
Adds the trusted entity (or peer) to the trusted peers list.
Mobile trust
You can add and view trusted SteelHead Mobile entities that may sign certificates for SteelHead Mobile Clients.
These configuration options are available:
Add a New Mobile Entity
Displays the controls for adding a trusted Client Accelerator entity.
Optional Local Name
Specifies a local name for the entity (for example, the fully qualified domain name).
Local File
Browses to the local file.
Cert Text
Pastes the content of the certificate text file into the text box.
Add
Adds the trusted entity (or peer) to the trusted peers list.
Trusted peers
The first time a client-side appliance attempts to connect to the server, the optimization service detects peers and populates the peer entry tables. On both appliances, an entry appears in a peering list with the information and certificate of the other peer. A peer list provides you with the option of accepting or declining the trust relationship with each appliance requesting a secure inner channel.
These configuration options are available:
Trust Selected Peers
Trusts the selected peers. (Only SSL-capable or disconnected appliances are shown.)
Trust All Peers
Trusts all peers.
Update
Updates the policy to reflect the new settings.
Certificate authorities (SSL)
SSL is a cryptographic protocol that provides secure communications between two parties over the internet.
Typically, in a web-based application, it is the client that authenticates the server. To identify itself, an SSL certificate is installed on a web server and the client checks the credentials of the certificate to make sure it is valid and signed by a trusted third party. Trusted third parties that sign SSL certificates are called certificate authorities (CA). For detailed information about how SSL works, see the SteelHead User Guide.
A CA is a third-party entity in a network that issues digital certificates and manages security credentials and public keys for message encryption. A CA issues a public key certificate that states that the CA attests that the public key contained in the certificate belongs to the person, organization, server, or other entity noted in the certificate. The CA verifies applicant credentials, so that relying parties can trust the information in the CA certificates. If you trust the CA and can verify the CA signature, then you can also verify that a certain public key does indeed belong to whomever is identified in the certificate.
Before adding a CA, it is critical to verify that it is genuine; a malicious CA can compromise network security by signing fake certificates.
You may need to add a new CA in these situations:
• Your organization has an internal CA that signs the certificates or peering certificates for the back-end server.
• The server certificates are signed by an intermediate or root CA unknown to the appliance (perhaps external to the organization).
• The CA certificate included in the trusted list of the appliance has expired or has been revoked and needs replacing.
You can copy certificates from an existing policy. On the Certificate Authorities (SSL) policy page, select an option from the Copy Page Contents from Policy menu and click OK.
These configuration options are available:
SSL Certificate Authorities Update
Updates the appliance’s Trusted Root Store. Click Update.
Add a New Certificate Authority
• Optional Local Name—Specify the local filename.
• Local File—Browse to the local certificate authority file.
• Cert Text—Paste the certificate authority into the text box and click Add.
Add
Adds the certificate authority.
Certificate Authority
Displays the certificate details.
CRL management (SSL)
You can configure Certificate Revocation Lists (CRLs) for an automatically discovered CA using the Management Console. CRLs allow CAs to revoke issued certificates (for example, when the private key of the certificate has been compromised). By default, CRLs aren’t used in the appliance. For detailed information, see the SteelHead User Guide.
A CRL is a database that contains a list of digital certificates that have been invalidated before their expiration date, including the reasons for the revocation and the names of the issuing certificate signing authorities. The CRL is issued by the CA that issues the corresponding certificates. All CRLs have a lifetime during which they’re valid (often 24 hours or less).
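A CRL's validity window runs from its issue time to its next scheduled update. The sketch below illustrates that window check; it is hypothetical code, not RiOS internals, and the field names loosely follow the thisUpdate/nextUpdate fields of an X.509 CRL.

```python
# Illustrative sketch of a CRL validity-window check. Not RiOS code;
# names loosely follow the X.509 thisUpdate/nextUpdate CRL fields.

from datetime import datetime, timedelta


def crl_is_current(this_update, next_update, now):
    """True while the CRL is inside its validity window."""
    return this_update <= now < next_update


issued = datetime(2024, 1, 1, 0, 0)
expires = issued + timedelta(hours=24)   # a typical 24-hour lifetime

print(crl_is_current(issued, expires, datetime(2024, 1, 1, 12, 0)))  # → True
print(crl_is_current(issued, expires, datetime(2024, 1, 2, 1, 0)))   # → False
```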
The two types of CAs issuing CRLs are:
• conventional CAs, which are listed in the Certificate Authorities page.
• peering CAs, which are listed in the Trusted Entities list in the Secure Peering page.
You configure each type of CA separately.
Under CRL Settings, these configuration options are available:
Enable Automatic CRL Polling for CAs
Enables CRL polling and use of a CRL in handshake verifications of CA certificates. Currently, the SteelHead only supports downloading CRLs from Lightweight Directory Access Protocol (LDAP) servers.
Enable Automatic CRL Polling For Peering CAs
Configures a CRL for an automatically discovered peering CA.
Fail Handshakes If A Relevant CRL Cannot Be Found
Configures handshake behavior for a CRL. Fails the handshake verification if a relevant CRL for either a peering or server certificate can’t be found.
Advanced settings (SSL)
You configure SSL advanced settings for the selected optimization policy in the SSL Advanced Settings page.
For details about SSL, see the SteelHead User Guide.
The SSL Advanced Settings page contains these main groups:
SSL advanced options
These configuration options are available:
General SSL Settings
Select the Enable SSL Optimization check box to turn on SSL optimization.
Chain Discovery: Enable SSL Server Certificate Chain Discovery
Synchronizes the chain certificate configuration on the server-side SteelHead with the chain certificate configuration on the back-end server. The synchronization occurs after a handshake fails between the client-side and server-side SteelHead. By default, this option is disabled.
Enable this option when you replace an existing chain certificate on the back-end server with a new chain to ensure that the certificate chain remains in sync on both the server-side SteelHead and the back-end server.
This option never replaces the server certificate. It updates the chain containing the intermediate certificates and the root certificate in the client context.
SteelHead Mobile Security Mode
Specifies one of these security modes on the server-side SteelHead:
• High Security Mode—Enforces the advanced SSL protocol on the Client Accelerators for increased security.
• Mixed Security Mode—Allows Client Accelerator Controller clients to run in any SSL mode. This mode is required to optimize with mobile clients running on VMware Fusion.
This option doesn’t affect SteelHead-to-SteelHead operation.
Client Side Session Reuse: Enable Distributed SSL Termination
Enables reuse of the original session on a client-side SteelHead when the client reconnects to an SSL server. Reusing the session provides two benefits: it lessens the CPU load because it eliminates expensive asymmetric key operations and it shortens the key negotiation process by avoiding WAN roundtrips to the server. By default, this option is enabled. Both the client-side and server-side SteelHeads must be configured to optimize SSL traffic.
• Timeout—Specify the amount of time the client can reuse a session with an SSL server after the initial connection ends. The range is from 6 minutes to 24 hours. The default value is 10 hours.
Enabling this option requires an optimization service restart.
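The timeout bounds above (6 minutes to 24 hours, 10-hour default) can be expressed as a simple range check. This is an illustrative sketch with hypothetical names, not RiOS code.

```python
# Illustrative sketch of the documented session-reuse timeout bounds.
# Names are hypothetical; not RiOS code.

MIN_TIMEOUT = 6 * 60           # 6 minutes, in seconds
MAX_TIMEOUT = 24 * 60 * 60     # 24 hours
DEFAULT_TIMEOUT = 10 * 60 * 60 # the 10-hour default


def valid_timeout(seconds):
    """True if the timeout falls inside the documented range."""
    return MIN_TIMEOUT <= seconds <= MAX_TIMEOUT


print(valid_timeout(DEFAULT_TIMEOUT))  # → True
print(valid_timeout(5 * 60))           # → False (below the 6-minute floor)
```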
Client Authentication: Client Certificate Support
Enables support for client certificates during the SSL authentication. You can choose from these three modes: On, Off, and Peering.
The existing configuration for any managed appliances where this feature has been individually configured will be replaced with the configuration you select here.
• Off—By default, client certificate support is off and the SSL handshake relies only on a server certificate.
• On—This mode relies on passive key derivation.
The On setting enables acceleration of SSL traffic to those SSL servers that authenticate SSL clients. The SSL server verifies the SSL client certificate. In the client authentication SSL handshake, each client has a unique client certificate and the SSL server, in most cases, maintains the state that is specific to each client when answering the client's requests. The SSL server must receive exactly the same certificate that is originally issued for a client on all the connections between the client and the server. Typically, the client's unique certificate and private key are stored on a smart card, such as a Common Access Card (CAC), or on a similar location that is inaccessible to other devices on the network.
Setting the client authentication to On allows SteelHeads to compute the encryption key while the SSL server continues to authenticate the original SSL client exactly as it would without the SteelHeads. The server-side SteelHead observes the SSL handshake messages as they go back and forth. With access to the SSL server's private key, the SteelHead computes the session key exactly as the SSL server does. The SSL server continues to perform the actual verification of the client, so any dependencies on the uniqueness of the client certificate for correct operation of the application are met. Because the SteelHead doesn’t modify any of the certificates (or the handshake messages) exchanged between the client and the server, there’s no change to their trust model. The client and server continue to trust the same set of certificate authorities as they did without the SteelHeads accelerating their traffic.
If the data center has a mixed environment with a few SSL servers that authenticate clients along with those that don’t authenticate clients, we recommend enabling client authentication.
Requirements:
• Enable client certificate support on the server-side SteelHead.
• The server-side SteelHead must have access to the exact private key used by the SSL server.
• The SSL server must be configured to ask for client certificates.
• The SteelHead must have a compatible cipher chosen by the server.
• SSL sessions that reuse previous secrets that are unknown to the SteelHead can’t be decrypted.
• Client-side certificates with renegotiation handshakes aren’t supported.
• Client certificate supports the RSA key exchange only. It doesn’t support the Diffie-Hellman key exchange.
Basic steps to enable client authentication:
1. Perform the basic steps to enable SSL optimization.
2. On the server-side SteelHead, choose Optimization > SSL: Advanced Settings, select On for Client Certificate Support, and click Apply.
3. Choose Optimization > SSL: SSL Main Settings, import the private key and certificate used by the SSL server to the server-side SteelHead, and click Save to Disk to save the configuration. You don’t need to restart the optimization service.
Verification:
To verify client authentication, on the server-side SteelHead, check the Discovered Server (Optimizable) table in the Optimization > SSL: SSL Main Settings page. Optimizable servers that are using client authentication appear as optimizable.
• Peering—In peering mode, the SteelHead needs a proxy certificate for the connection, but it does not need the origin server’s private key. This mode lets the SteelHead respond to client authentication requests by using its peering certificate.
This is a more traditional implementation where the SteelHead acts as a trusted “man-in-the-middle.” When a client certificate request arrives from the server:
1. The server-side SteelHead replies to the server’s client certificate request with its own peering certificate.
2. The client-side SteelHead requests a client certificate in response to the client hello.
3. The client-side SteelHead authenticates the client certificate using the existing trusted CA repository.
This mode supports the Ephemeral Diffie-Hellman key exchange.
Proxies: Enable SSL Proxy Support
Enables SSL proxy support. Enable this control on both the client-side and server-side SteelHeads when clients are communicating with SSL to a server through one or more proxies. Proxy support allows the SteelHead to optimize traffic to a proxy server.
SSL traffic communication with a proxy initiates with an HTTP CONNECT message. The SteelHead recognizes the HTTP CONNECT message in the connection, extracts the hostname, and then optimizes the SSL connection that follows into the proxy state machine (expecting an SSL handshake following the CONNECT message).
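The hostname extraction described above works on the standard HTTP CONNECT request line. The sketch below illustrates the parsing step; it is a hypothetical parser, not RiOS code.

```python
# Illustrative sketch of extracting the target host from an HTTP
# CONNECT request line. A hypothetical parser, not RiOS code.


def connect_target(request_line):
    """Parse 'CONNECT host:port HTTP/1.1' and return (host, port)."""
    method, target, _version = request_line.split()
    if method.upper() != "CONNECT":
        raise ValueError("not a CONNECT request")
    host, _, port = target.rpartition(":")
    return host, int(port)


print(connect_target("CONNECT secure.example.com:443 HTTP/1.1"))
# → ('secure.example.com', 443)
```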
In addition to enabling this feature on both SteelHeads, you must:
• create an in-path rule on the client-side SteelHead to identify the proxy server IP address and port number. Select the SSL preoptimization policy for the rule.
• enable SSL optimization on both the client-side and server-side SteelHeads.
• ensure both the client-side and server-side SteelHeads are running RiOS 7.0 or later.
• restart the optimization service on both SteelHeads.
By default, SSL proxy support is disabled.
When the SteelHead connects, the proxy servers appear in the SSL Main Settings page on the server-side SteelHead in the Discovered SSL Server (Optimizable) list. The same IP address appears on multiple lines, followed by the word “proxy.” The hostname of the back-end server appears in the Server Common Name field. All subsequent connections to the proxy servers are optimized.
When an error occurs, the proxy servers appear in the SSL Main Settings page on the server-side SteelHead in the Discovered Servers (bypassed, not optimized) list. The same IP address appears on multiple lines, followed by the word “proxy.” The hostname of the back-end server appears in the Server Common Name field. All subsequent connections to the servers aren’t optimized.
If you disable proxy support, you must delete the corresponding in-path rule and restart the optimization service.
Midsession SSL: Enable Midsession SSL
Enables midsession SSL. Enable this control on both the client-side and server-side SteelHeads when there’s a delayed start to the Transport Layer Security (TLS) handshake because clients are transitioning into SSL after the initial handshake occurs. This feature optimizes connections that transition into SSL.
Client examples include SMTP/POP/IMAP-over-TLS and Microsoft .NET Windows Communication Foundation (WCF)-based TLS applications. This feature also enables SSL communication with protocols like Exchange-Hub to Exchange-Hub replications (for example, the SMTP-over-TLS protocol).
For details on SMTP over TLS Optimization, see the SteelHead Deployment Guide - Protocols.
The SteelHead looks for an SSL handshake for the life of the connection, and then optimizes the SSL connection that follows (except for an SSL handshake following the HTTP CONNECT message, in which case the SSL proxy support feature needs to be enabled).
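The midstream handshake detection just described can be sketched as a byte-level check: a TLS record begins with content type 22 (handshake) followed by protocol version major byte 3. This Python sketch shows the general technique only; it is an assumption about how such detection can work, not Riverbed's implementation:

```python
def looks_like_tls_hello(data: bytes) -> bool:
    # A TLS record starts with content type 22 (0x16, handshake) followed
    # by the protocol version major byte 3 (SSL 3.0 / TLS 1.x).
    return len(data) >= 3 and data[0] == 0x16 and data[1] == 0x03

# A plaintext SMTP banner vs. the first bytes sent after STARTTLS:
print(looks_like_tls_hello(b"220 mail.example.com ESMTP\r\n"))  # False
print(looks_like_tls_hello(b"\x16\x03\x03\x00\x2f\x01"))        # True
```

A middlebox watching for this pattern can let the plaintext SMTP/POP/IMAP commands pass and switch to SSL optimization the moment the client hello appears.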
After enabling this feature on both SteelHeads, you must restart the optimization service.
When the SteelHead connects, the servers appear in the SSL Main Settings page on the server-side SteelHead in the Discovered SSL Server (Optimizable) list. All subsequent connections to the servers are optimized.
TLS 1.2 support is enabled by default in RiOS 9.2. To disable TLS 1.2, enter the no protocol ssl backend client-tls-1.2 CLI command.
Requirements:
• Both the client-side and server-side SteelHeads must be running RiOS 7.0 or later.
• The SSL client must be the same as the TCP client.
• SSL messages can’t be wrapped with any other non-SSL or non-TCP protocol headers or footers.
• SSL optimization must be enabled on both the client-side and server-side SteelHeads.
TLS Extensions
Server Name Indication: Enable SNI support for Virtual Hosting
Enable this control on the server-side SteelHead when using name-based virtual hosts with SSL. Server name indication (SNI) is a transport layer security extension to the SSL protocol. With SNI, the first SSL client hello handshake message sent to the HTTPS server includes the requested virtual hostname to which the client is connecting. Because the server is aware of the hostname, it returns a host-specific security certificate.
Without SNI, an HTTPS server returns a default certificate that satisfies hostnames for all virtual hosts. The SSL connection setup uses the default virtual host configuration for the address where the connection was received. Browser messages warn that certificates have the wrong hostname.
With SNI enabled, RiOS provides the hostname. Knowing the hostname enables the server to determine the correct named virtual host for the request and set up the connection accordingly from the start.
The browser validates the certificate names against the requested URL, and the server-side SteelHead verifies that the selected proxy certificate is compatible with the client hostname. This verification ensures that the browser doesn’t reject the proxy certificate for the server-side SteelHead.
If SNI provides a hostname that doesn’t exactly match the common name or any of the subject alternate names for the certificate on the server-side SteelHead, the system determines that a valid certificate is not present and bypasses that hostname.
No configuration is necessary on the client-side SteelHead.
The client browser must also support SNI.
By default, RiOS enables the following SNI support on the server-side SteelHead, regardless of whether this SNI control is enabled:
• Adds the SNI extension from the client Hello to the server-side SteelHead client Hello.
• Uses the SNI extension to match and select the proxy certificate to return to the client.
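The certificate check described above, comparing the SNI hostname against the certificate's common name and subject alternative names, can be sketched as follows. This Python sketch is illustrative only; the function name and the one-label wildcard rule are assumptions, not the RiOS code:

```python
def matches_certificate(hostname, common_name, subject_alt_names=()):
    """Return True if hostname matches the CN or any SAN.
    A leading '*.' wildcard matches exactly one label, per common TLS practice."""
    def match(pattern):
        if pattern.startswith("*."):
            host_labels = hostname.split(".")
            return len(host_labels) > 1 and ".".join(host_labels[1:]) == pattern[2:]
        return hostname.lower() == pattern.lower()
    return any(match(p) for p in (common_name, *subject_alt_names))

print(matches_certificate("www.example.com", "*.example.com"))       # True
print(matches_certificate("www.shop.example.com", "*.example.com"))  # False
```

In the behavior the text describes, a hostname that fails this kind of match is treated as having no valid certificate, and the connection is bypassed.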
Peer ciphers
These configuration options are available:
Add a New Peer Cipher
Displays the controls for adding a new peer cipher.
Cipher
Specifies the cipher type for communicating with peers from the drop-down list. The Hint text box displays information about the cipher. You must specify at least one cipher for peers, clients, and servers for SSL to function properly. The default cipher setting is DEFAULT, which represents a variety of high-strength ciphers that allow for compatibility with many browsers and servers.
Insert Cipher At
Specifies start, end, or the cipher number from the drop-down list. The default cipher, if used, must be rule number 1.
Hint
Displays information about the cipher.
Add
Adds the cipher to the list.
Show Effective Overall Cipher List
Displays the effective overall cipher list.
Client ciphers
These configuration options are available:
Add a New Client Cipher
Displays the controls for adding a new client cipher.
Cipher
Specifies the cipher type for communicating with clients from the drop-down list. You must specify at least one cipher for peers, clients, and servers for SSL to function properly. The default cipher setting is DEFAULT, which represents a variety of high-strength ciphers that allow for compatibility with many browsers and servers.
Insert Cipher At
Specifies start, end, or a cipher number from the drop-down list. The default cipher, if used, must be rule number 1.
Hint
Displays information about the cipher.
Add
Adds the cipher to the list.
Show Effective Overall Cipher List
Displays the effective overall cipher list.
Server ciphers
These configuration options are available:
Add a New Server Cipher
Displays the controls for adding a new server cipher.
Cipher
Specifies the cipher type for communicating with servers from the drop-down list. You must specify at least one cipher for peers, clients, and servers for SSL to function properly. The default cipher setting is DEFAULT, which represents a variety of high-strength ciphers that are compatible with many browsers and servers.
Insert Cipher At
Specifies start, end, or a cipher number from the drop-down list. The default cipher, if used, must be rule number 1.
Hint
Displays information about the cipher.
Add
Adds the cipher to the list.
Show Effective Overall Cipher List
Displays the effective overall cipher list.
Effective overall cipher list
Click Show Effective Overall Cipher List to display a list of ciphers.
Secure peering (IPSEC)
You configure secure peering for the selected optimization policy in the Secure Peering (IPSEC) page.
Enabling IPsec encryption makes it difficult for a third party to view your data or pose as a computer you expect to receive data from. To enable IPsec, you must specify at least one encryption and authentication algorithm. Only optimized data is protected; pass-through traffic isn't.
If the WinSec Controller is deployed, enabling IPsec support is required.
RiOS doesn’t support IPsec over IPv6.
In RiOS 9.0 and later, IPsec secure peering and the secure transport service are mutually exclusive. The secure transport service is enabled by default. Before you enable IPsec secure peering, you must disable the secure transport service by entering the no stp-client enable command at the system prompt.
You must set IPsec support on each peer SteelHead in your network for which you want to establish a secure connection. You must also specify a shared secret on each peer SteelHead.
If you NAT traffic between SteelHeads, you can’t use the IPsec channel between the SteelHeads because the NAT changes the packet headers, causing IPsec to reject them.
For details about secure peering, see the SteelHead User Guide.
The Secure Peering (IPSEC) page contains these groups of settings:
General settings
These configuration options are available:
Enable Authentication and Encryption
Enables authentication between appliances. By default, this option is disabled.
Enable Perfect Forward Secrecy
Enables additional security by renegotiating keys at specified intervals. If one key is compromised, subsequent keys are secure because they’re not derived from previous keys. By default, this option is enabled.
IKE Encryption Policy
Specifies the Internet Key Exchange (IKE) encryption policy:
• DES—The Data Encryption Standard. This is the default value.
• 3DES—Triple DES encryption algorithm.
• AES—The AES 128-bit encryption algorithm.
• AES256—The AES 256-bit encryption algorithm.
Internet Key Exchange (IKE) is the protocol that ensures a secure, authenticated communications channel between two peers.
IKE Authentication Policy
Specifies the IKE authentication policy:
• MD5—Enables MD5 security protocol.
• SHA-1—Enables SHA security protocol.
• SHA-256—Enables the SHA-256 cryptographic hash function.
• SHA-384—Enables the SHA-384 cryptographic hash function.
• SHA-512—Enables the SHA-512 cryptographic hash function.
ESP Encryption Policy
Specifies one of these Encapsulating Security Payload (ESP) encryption methods from the drop-down list:
• DES—Encrypts data using the Data Encryption Standard algorithm. DES is the default value.
• NULL—Specifies the null encryption algorithm.
• None—Doesn’t apply an encryption policy.
• 3DES—Appears when a valid Enhanced Cryptography License Key is installed on the appliance. Encrypts data using the Triple Digital Encryption Standard with a 168-bit key length. This standard is supported for environments where AES hasn’t been approved, but is both slower and less secure than AES.
• AES—Appears when a valid Enhanced Cryptography License Key is installed on the appliance. Encrypts data using the Advanced Encryption Standard (AES) cryptographic key length of 128 bits.
• AES256—Appears when a valid Enhanced Cryptography License Key is installed. Encrypts data using the Advanced Encryption Standard (AES) cryptographic key length of 256 bits. Provides the highest security.
Optionally, select an algorithm from the method 2, 3, 4, or 5 drop-down lists to create a prioritized list of encryption policies for negotiating between peers.
Both peer appliances must have a valid Enhanced Cryptography License Key installed to use 3DES, AES, or AES256. When an appliance has a valid Enhanced Cryptography License Key installed and an IPsec encryption level of 3DES or AES is set, but a peer appliance doesn't have a valid Enhanced Cryptography License Key installed, the appliances use the highest encryption level set on the appliance without the key.
ESP Authentication Policy
Specifies one of these ESP authentication methods from the drop-down list:
• MD5—Specifies the Message-Digest 5 algorithm, a widely-used cryptographic hash function with a 128-bit hash value. This is the default value.
• SHA-1—Specifies the Secure Hash Algorithm, a set of related cryptographic hash functions. SHA-1 is considered to be the successor to MD5.
• SHA-256—Enables the SHA-256 cryptographic hash function.
• SHA-384—Enables the SHA-384 cryptographic hash function.
• SHA-512—Enables the SHA-512 cryptographic hash function.
Optionally, select an algorithm from the method 2 drop-down list to create a secondary policy for negotiating the authentication method to use between peers. If the first authentication policy negotiation fails, the peer appliances use the secondary policy to negotiate authentication.
Time Between Key Renegotiations
Specifies the number of minutes between quick-mode renegotiation of keys using the Internet Key Exchange (IKE) protocol.
IKE uses public key cryptography to provide the secure transmission of a secret key to a recipient so that the encrypted data can be decrypted at the other end. The default value is 240 minutes.
IKE Authentication Mode
Specifies the IKE authentication mode:
• shared-secret—Enables shared secret mode. All the SteelHeads in a network for which you want to use IPsec must have the same shared secret.
• certificate—Enables certificate mode.
Enter the Shared Secret
Specifies the shared secret. All the appliances in a network for which you want to use IPsec must have the same shared secret.
Confirm the Shared Secret
Confirms the shared secret.
Secure peers
Add a New Secure Peer
Displays the controls to add a new secure peer.
Peer IP Address
Specifies the IP address for the peer appliance (in-path interface) for which you want to make a secure connection.
For WinSec Controller communication, the SteelHead typically uses the primary interface, not the in-path interface.
Add
Adds the peer specified in the Peer IP Address text box. If a connection hasn’t been established between the two appliances that are configured to use IPsec security, the peers list doesn’t display the peer appliance status as mature.
Adding a peer causes a short service disruption (3-4 seconds) to the peer that’s configured to use IPsec security.
SaaS Accelerator
SaaS Accelerator improves performance for many SaaS applications.
Accelerating SaaS applications requires configuration on the SteelHead SaaS Manager (SSM); see the SteelHead SaaS Manager User Guide for more information.
In the SaaS Acceleration Settings section you can enable SaaS Acceleration and allow Proxy Chaining Interoperability.
Legacy Cloud Accelerator
This feature is no longer supported. For information on this feature, please refer to previous versions of your product's user guide.
WinSec Controllers
Riverbed WinSec Controller is a standalone, dedicated tier-0 appliance for security-specific functions. WinSec Controller provides a highly secure method for performing security functions for server-side SteelHead appliances without compromising Microsoft-recommended security best practices. WinSec Controller appliances are purpose-built to operate in highly secure environments. For details, see the WinSec Controller User Guide.
These configuration options are available:
Add a New WinSec Controller
Displays the controls for adding a new WinSec Controller.
Server
Specifies the IP address or the Fully Qualified Domain Name (FQDN) for the WinSec Controller.
Port
Specifies the WinSec Controller service port for this host. The value must be 7890.
Priority
Specifies the priority for this host. The value must be a number between 1 and 255. The default value is 1.
Add
Adds the WinSec Controller to your deployment.
About NAT IP address mapping
For cloud appliances, you can set up NAT IP address mapping. These settings are located under Optimization > Cloud > NAT IP Address Mapping.
NAT address mapping enables you to map private IP addresses to public IP addresses. You can enable and disable the mapping feature, and you can create multiple mappings. This feature enhances security, because private IP addresses are hidden behind the public address, and it streamlines configuration, because you can map multiple private IP addresses to a single public one.
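The many-to-one mapping can be pictured as a simple lookup table. This Python sketch is purely illustrative (the addresses and names are invented examples, not a configuration format):

```python
# Illustrative NAT address map: several private addresses can share
# one public address. All addresses here are documentation examples.
nat_map = {
    "10.0.1.10": "203.0.113.5",
    "10.0.1.11": "203.0.113.5",   # many-to-one mapping
    "10.0.2.20": "203.0.113.6",
}

def translate(private_ip: str) -> str:
    # In this sketch, unmapped addresses pass through unchanged.
    return nat_map.get(private_ip, private_ip)

print(translate("10.0.1.11"))  # 203.0.113.5
```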
Branch services settings
This section describes the Branch Services feature set.
Caching DNS
You configure DNS caching in the Branch Services page. By default, the DNS cache is disabled.
A DNS name server resolves hostnames to IP addresses and stores them locally in a single appliance. Any time your browser requests a URL, the service first looks in the local cache to see if the answer is there before querying the external name server. If it finds the resolved hostname locally, it uses that IP address.
This is a nontransparent DNS caching service. All client machines must point to the client-side appliance as their DNS server.
Hosting the DNS name server function provides:
• Improved performance for applications by saving the round trips previously needed to resolve names. Whenever the name server receives address information for another host or domain, it stores that information for a specified period of time. That way, if it receives another name resolution request for that host or domain, the name server has the address information ready, and doesn’t need to send another request across the WAN.
• Improved performance for services by saving round trips previously required for updates.
• Continuous DNS service locally when the WAN is disconnected, with no local administration needed, eliminating the need for DNS servers at branch offices.
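The cache-first behavior described above can be sketched as follows. This is an illustrative Python model of a caching resolver front end, not the RiOS code; the class and function names are invented:

```python
import time

class DnsCache:
    """Sketch: answer from the local cache while the entry is fresh,
    otherwise forward to the upstream resolver and store the reply."""
    def __init__(self, upstream):
        self.upstream = upstream   # callable: name -> (ip, ttl_seconds)
        self.entries = {}          # name -> (ip, expires_at)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        hit = self.entries.get(name)
        if hit and hit[1] > now:
            return hit[0]          # served locally: no WAN round trip
        ip, ttl = self.upstream(name)
        self.entries[name] = (ip, now + ttl)
        return ip

calls = []
def fake_upstream(name):
    calls.append(name)
    return "192.0.2.7", 300

cache = DnsCache(fake_upstream)
cache.resolve("example.com", now=0)
cache.resolve("example.com", now=100)  # within TTL: answered from cache
print(len(calls))  # 1
```

The second lookup never crosses the WAN, which is the round-trip saving the bullets above describe.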
For details about DNS caching, see the SteelHead User Guide.
The Branch Services page contains these groups of settings:
General settings
These configuration options are available:
Enable Caching DNS
• Enabled—Forwards name resolution requests to a DNS name server, then stores the address information locally in the SCC. By default, the requests go to the root name server, unless you specify another name server.
• Disabled—Stops the SCC from acting as the DNS name server.
DNS Cache Size (bytes)
Specifies the cache size, in bytes. The default value is 1048576. The range is from 524288 to 2097152.
Primary Interface Responding to DNS Requests
• Enabled—Enables the name server to listen for name resolution requests on the primary interface.
• Disabled—Stops the name server from using the primary interface.
Aux Interface Responding to DNS Requests
• Enabled—Enables the name server to listen for name resolution requests on the auxiliary interface.
• Disabled—Stops the name server from using the auxiliary interface.
Apply
Applies your settings.
DNS forwarding name servers
These configuration options are available:
Add a New DNS Server Name
Displays the controls to add a DNS name server to which the SCC forwards requests and caches the responses. By default, if you enable caching DNS without specifying any name servers to forward requests to, the SCC forwards requests only to the internet root name servers. You can add multiple name servers; the SCC fails over to the next name server in the list if one isn't responding.
Name Server IP Address
Specifies an IP address for the name server.
Position
Specifies the order in which the name servers are queried (when using more than one). If the first name server, or forwarder, doesn't respond, the SteelHead queries each remaining forwarder in sequence until it receives an answer or until it exhausts the list.
Add
Adds the name server.
Advanced cache
These configuration options are available:
Caching of Forwarded Responses
Enables the cache. The cache is enabled by default; however, nothing is actually cached until you select Enable Caching DNS under General Settings.
Maximum Cache Time (seconds)
Specifies the maximum number of seconds the name server stores the address information. The default setting is one week (604,800 seconds). The minimum is two seconds and the maximum is thirty days (2,592,000 seconds). You can adjust this setting to reflect how long the cached addresses remain up-to-date and valid.
Changes to this setting affect new address information and don’t change responses already in the cache.
Minimum Cache Time (seconds)
Specifies the minimum number of seconds that the name server stores the address entries. The default value is 0. The maximum value is the current value of Maximum Cache Time. Typically, there is no need to adjust this setting.
Changes to this setting affect new responses and don’t change any responses already in the cache.
Neg DNS Maximum Cache Time (seconds)
Specifies the maximum number of seconds that an unresolved negative address is cached. The valid range is from two seconds to thirty days (2,592,000 seconds). The default value is 10,800 seconds.
A negative entry occurs when a DNS request fails and the address remains unresolved. When a negative entry is in the cache, the appliance doesn’t request it again until the cache expires, the maximum cache time is reached, or the cache is cleared.
Neg DNS Minimum Cache Time (seconds)
Specifies a lower bound on the TTL of a negative entry; the cached TTL is always this value or above, even if the server returns a smaller TTL value. For example, when this value is set to 300 seconds and the client queries aksdfjh.com, the DNS service returns a negative answer with a TTL of 100 seconds, but the DNS cache stores the entry as having a TTL of 300 seconds. The default value is 0, which means the SteelHead still caches negative responses but doesn't place a lower bound on the TTL value for the entry.
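The minimum and maximum cache-time settings above amount to clamping the server-supplied TTL between a floor and a ceiling, and the negative-cache bounds work the same way with their own limits. A minimal Python sketch (illustrative; the function name is invented, the defaults shown match the documented defaults):

```python
def effective_ttl(server_ttl, min_cache=0, max_cache=604_800):
    # The stored TTL is bounded below by the Minimum Cache Time and
    # above by the Maximum Cache Time.
    return max(min_cache, min(server_ttl, max_cache))

print(effective_ttl(100, min_cache=300))  # 300: raised to the floor
print(effective_ttl(10_000_000))          # 604800: capped at one week
```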
Freeze Cache
Freezes the cache contents. When the cache is frozen, entries don’t automatically expire from the cache. They’re still returned in response to DNS queries. This is useful to keep local services available when the WAN is disconnected. By default, this setting is disabled.
When the cache is frozen and full, entries can still be pushed out of the cache by newer entries.
Minimum TTL of a Frozen Entry (seconds)
Specifies the minimum TTL in seconds that a response from a frozen cache has when sent to a branch office client. The default value is 10. For example, suppose this value is set to 60 seconds. At the time the cache is frozen, the cache entry for riverbed.com has a TTL of 300 seconds. For subsequent client requests for riverbed.com, the service responds with a TTL of 300 seconds minus however much time has lapsed since the cache freeze. After 240 seconds have elapsed, the service responds to all subsequent requests with a TTL of 60 seconds regardless of how much time elapses, until the cache is unfrozen.
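The frozen-cache arithmetic in the example above reduces to counting the TTL down normally but never letting the response drop below the configured floor. A minimal Python sketch (illustrative only; the function name is invented):

```python
def frozen_response_ttl(original_ttl, elapsed, floor=10):
    # While the cache is frozen, responses count down normally but
    # never drop below the configured minimum (default 10 seconds).
    return max(original_ttl - elapsed, floor)

# The worked example from the text: floor 60 s, entry frozen with TTL 300 s.
print(frozen_response_ttl(300, 100, floor=60))  # 200
print(frozen_response_ttl(300, 290, floor=60))  # 60
```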
Advanced name servers
These configuration options are available:
For Unresponsive Name Servers
Detects when one of the name servers isn't responding and sends requests to a responsive name server instead.
Forwarder Down After (seconds)
Specifies how many seconds can pass without a response from a name server until the appliance considers it unresponsive. The default value is 120. When the name server receives a request but doesn’t respond within this time and doesn’t respond after the specified number of failed requests, the appliance determines that it is down. It then queries each remaining forwarder in sequence until it receives an answer or it exhausts the list. When the list is exhausted and the request is still unresolved, you can specify that the SteelHead try the root name server.
Forwarder Down After (requests)
Specifies how many requests a name server can ignore before the appliance considers it unresponsive. The default value is 30. When the name server doesn’t respond to this many requests and doesn’t respond within the specified amount of time, the appliance determines that it is down. It then queries each remaining forwarder in sequence until it receives an answer or it exhausts the list. When the list is exhausted and the request is still unresolved, you can specify that the SteelHead try the root name server.
Retry Forwarder After (seconds)
Specifies the time limit, in seconds, during which the appliance forwards name resolution requests to name servers that are responding instead of name servers that are down. The appliance also sends a single query to name servers that are down using this time period; if they respond, the appliance considers them back up again. The default value is 300. The single query occurs at intervals of this value: if the value is set to 300, a request is allowed to go to a forwarder that's considered down about every 300 seconds, until it responds to one.
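The down-detection and retry behavior of the three settings above can be sketched as a small state machine. This Python sketch is an illustrative model with invented names, not the RiOS implementation; the defaults shown are the documented ones:

```python
class ForwarderState:
    """Sketch of the unresponsive-forwarder logic: a forwarder is marked
    down only when BOTH the ignored-request count and the silence time
    are exceeded, and one probe query is allowed per retry interval."""
    def __init__(self, down_after_secs=120, down_after_reqs=30, retry_secs=300):
        self.down_after_secs = down_after_secs
        self.down_after_reqs = down_after_reqs
        self.retry_secs = retry_secs
        self.unanswered = 0
        self.first_unanswered_at = None
        self.down_since = None

    def record_timeout(self, now):
        self.unanswered += 1
        if self.first_unanswered_at is None:
            self.first_unanswered_at = now
        if (self.unanswered >= self.down_after_reqs
                and now - self.first_unanswered_at >= self.down_after_secs):
            self.down_since = now

    def usable(self, now):
        if self.down_since is None:
            return True
        # A single probe query is allowed once per retry interval.
        return now - self.down_since >= self.retry_secs

fwd = ForwarderState(down_after_secs=0, down_after_reqs=2, retry_secs=300)
fwd.record_timeout(now=0)
fwd.record_timeout(now=10)
print(fwd.usable(now=20))   # False: marked down
print(fwd.usable(now=400))  # True: probe allowed after the retry interval
```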
Fallback to Root Name Servers
Forwards the request to a root name server when all other name servers haven't responded to a request. This is the default setting; either this option must be enabled or at least one forwarding name server must be present. When the fallback to root name servers option is disabled, the SteelHead forwards a request only to the forwarding name servers listed above. If it exhausts these name servers without getting a response, it doesn't forward the request to a root name server and returns a server failure.
If the name servers used by the SteelHead are internal name servers (that is, they can resolve hostnames that external name servers, such as the internet DNS root servers, can't), you must disable this option. Otherwise, if the internal name servers all fail, the root name servers can inform the SteelHead that a host visible only to internal name servers doesn't exist; the SteelHead can cache that response and return it to clients until it expires. This prolongs the period of time until service comes back up after the name servers are down.
Common branch storage settings
You configure common branch storage settings in the Common Branch Storage Settings page.
For details about common branch storage settings, see the SteelHead User Guide.
These configuration options are available:
Alternate IP: Port, separate multiple by “,”
Specifies the alternate IP address and port. Separate multiple entries with commas.
Local Interfaces
Specifies the local interface from the list.
Local Interfaces for MPIO
Specifies the local interface for MPIO from the list.
System settings policies
This section describes the System Settings Policy feature set.
Alarms
You can change alarm settings for the selected system settings policy in the Alarms page. Enabling alarms is optional.
The SCC checks for alarms every five minutes.
For details about alarms, see Configuring alarm parameters.
Admission Control
Enables an alarm and sends an email notification if the appliance enters admission control. When this occurs, the appliance is optimizing traffic beyond its rated capability and is unable to handle the amount of traffic passing through the WAN link. During this event, the appliance continues to optimize existing connections, but new connections are passed through without optimization.
• Connection Limit—Indicates the system connection limit has been reached. Additional connections are passed through unoptimized. The alarm clears when the appliance moves out of this condition.
• CPU—The appliance has entered admission control due to high CPU use. During this event, the appliance continues to optimize existing connections, but new connections are passed through without optimization. The alarm clears automatically when the CPU usage has decreased.
• MAPI—The total number of MAPI optimized connections has exceeded the maximum admission control threshold. By default, the maximum admission control threshold is 85 percent of the total maximum optimized connection count for the client-side appliance. The appliance reserves the remaining 15 percent so that the MAPI admission control doesn’t affect the other protocols. The 85 percent threshold is applied only to MAPI connections. RiOS is now passing through MAPI connections from new clients but continues to intercept and optimize MAPI connections from existing clients (including new MAPI connections from these clients). RiOS continues optimizing non-MAPI connections from all clients. The alarm clears automatically when the MAPI traffic has decreased; however, it can take one minute for the alarm to clear.
In RiOS 7.0 and later, RiOS preemptively closes MAPI sessions to reduce the connection count in an attempt to bring the appliance out of admission control by bringing the connection count below the 85 percent threshold. RiOS closes the MAPI sessions in this order:
– MAPI prepopulation connections
– MAPI sessions with the largest number of connections
– MAPI sessions with the most idle connections
– Most recently optimized MAPI sessions or oldest MAPI session
– MAPI sessions exceeding the memory threshold
• Memory—The appliance has entered admission control due to memory consumption. The appliance is optimizing traffic beyond its rated capability and is unable to handle the amount of traffic passing through the WAN link. During this event, the appliance continues to optimize existing connections, but new connections are passed through without optimization. No other action is necessary; the alarm clears automatically when the traffic has decreased.
• TCP—The appliance has entered admission control due to high TCP memory use. During this event, the appliance continues to optimize existing connections, but new connections are passed through without optimization. The alarm clears automatically when the TCP memory pressure has decreased.
By default, this alarm is enabled.
Application Consistent Snapshot
Enables an alarm to detect if there is an issue with the consistent snapshot of running applications.
Asymmetric Routing
Enables an alarm if asymmetric routing is detected on the network. This is usually due to a failover event of an inner router or VPN. By default, this alarm is enabled.
Blockstore
Enables an alarm to monitor the blockstore performance.
Connection Forwarding
Enables an alarm if the system detects a problem with a connection-forwarding neighbor. The connection-forwarding alarms are inclusive of all connection-forwarding neighbors. For example, if an appliance has three neighbors, the alarm triggers if any one of the neighbors is in error. In the same way, the alarm clears only when all three neighbors are no longer in error.
These events trigger this alarm:
• Cluster IPv6 Incompatible
• Cluster Neighbor Incompatible
• Connection Forwarding Ack Timeout
• Connection Forwarding Connection Failure
• Connection Forwarding Keepalive Timeout
• Connection Forwarding Latency Exceeded
• Connection Forwarding Lost Connection Error
• Connection Forwarding Lost Due To End Of Stream
• Connection Forwarding Read Information Timeout
• Multiple Interface Connection Forwarding
• Single Interface Connection Forwarding
By default, this alarm is enabled.
CPU Utilization
Enables an alarm and sends an email notification if the average and peak thresholds for CPU utilization are exceeded. When CPU utilization reaches the rising threshold, the alarm is activated; when it falls to the reset threshold, the alarm is reset. After an alarm is triggered, it isn’t triggered again until CPU utilization has fallen below the reset threshold. By default, this alarm is enabled, with a rising threshold of 90 percent and a reset threshold of 80 percent.
• Rising Threshold—Specify the rising threshold. When CPU utilization reaches the rising threshold, the alarm is activated. The default value is 90 percent.
• Reset Threshold—Specify the reset threshold. When CPU utilization falls to the reset threshold, the alarm is reset. After an alarm is triggered, it isn’t triggered again until CPU utilization has fallen below the reset threshold. The default value is 80 percent.
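The rising/reset threshold pair is a standard hysteresis pattern: the alarm triggers at the rising threshold and can't re-trigger until the value drops below the reset threshold. A minimal Python sketch (illustrative, with invented names; the defaults shown match the text):

```python
def update_alarm(active, cpu_pct, rising=90, reset=80):
    """Return the new alarm state given the current CPU utilization."""
    if not active and cpu_pct >= rising:
        return True          # crossed the rising threshold: trigger
    if active and cpu_pct < reset:
        return False         # fell below the reset threshold: clear
    return active            # in the hysteresis band: no change

state = False
for sample in (70, 92, 85, 79, 95):
    state = update_alarm(state, sample)
print(state)  # True
```

Note that the sample of 85 does not clear the alarm even though it is below the rising threshold, because it has not yet fallen below the reset threshold.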
Data Store
• Corruption—Enables an alarm and sends an email notification if the RiOS data store is corrupt or has become incompatible with the current configuration. To clear the RiOS data store of data, restart the optimization service and click Clear the Data Store.
If the alarm was caused by an unintended change to the configuration, the configuration can be changed to match the old RiOS data store settings again and then a service restart (without clearing) will clear the alarm.
• Data Store Clean Required—Enables an alarm and sends an email notification if you need to clear the RiOS data store.
• Encryption Level Mismatch—Enables an alarm and sends an email notification if a data store error, such as an encryption, header, or format error occurs.
• Synchronization Error—Enables an alarm if RiOS data store synchronization has failed. The RiOS data store synchronization between two SteelHeads has been disrupted and the RiOS data stores are no longer synchronized.
By default, this alarm is enabled.
Disk Full
Enables an alarm if the system partitions (not the RiOS data store) are full or almost full. For example, RiOS monitors the available space on /var that’s used to hold logs, statistics, system dumps, TCP dumps, and so on. By default, this alarm is enabled.
Domain Authentication Alert
Enables an alarm when the system is either unable to communicate with the domain controller, or has detected an SMB signing error, or that delegation has failed. CIFS-signed and Encrypted-MAPI traffic is passed through without optimization. By default, this alarm is enabled.
Domain Join Error
Enables an alarm if an attempt to join a Windows domain has failed. The number one cause of failing to join a domain is a significant difference in the system time on the Windows domain controller and the appliance. A domain join can also fail when the DNS server returns an invalid IP address for the domain controller. By default, this alarm is enabled.
Duplex
Enables an alarm when an interface wasn’t configured for duplex negotiation but has negotiated duplex mode. By default, this alarm is enabled.
Edge HA Service
Monitors the SteelFusion Edge High Availability (HA) service. Two SteelFusion Edge machines can be put into an HA configuration in which the active and standby Edges are kept in sync at all times: everything in the blockstore of the active Edge is mirrored to the blockstore of the standby Edge.
Flash Protection Failure
Enables an alarm when the USB flash drive hasn’t been backed up because there isn’t enough available space in the /var filesystem directory. By default, this alarm is enabled.
Hardware
• Disk Error—Enables an alarm when one or more disks is offline. To see which disk is offline, enter this CLI command from the system prompt:
show raid diagram
By default, this alarm is enabled.
This alarm applies only to the appliance RAID Series 3000, 5000, and 6000.
• Fan Error—Enables an alarm and sends an email notification if a fan is failing or has failed and needs to be replaced. By default, this alarm is enabled.
• Flash Error—Enables an alarm when the system detects an error with the flash drive hardware. By default, this alarm is enabled.
• IPMI—Enables an alarm and sends an email notification if an Intelligent Platform Management Interface (IPMI) event is detected. (Not supported on all appliance models.)
This alarm triggers when there has been a physical security intrusion. These events trigger this alarm:
– Chassis intrusion (physical opening and closing of the appliance case).
– Memory errors (correctable or uncorrectable ECC memory errors).
– Hard drive faults or predictive failures.
– Power cycle events, such as turning the power switch on or off, physically unplugging and replugging the cable, or issuing a power cycle from the power switch controller.
By default, this alarm is enabled.
• Memory Error—Enables an alarm and sends an email notification if a memory error is detected. For example, when a system memory stick fails.
• Other Hardware Error—Enables an alarm if a hardware error is detected. These issues trigger the hardware error alarm:
– The appliance doesn’t have enough disk, memory, CPU cores, or NIC cards to support the current configuration.
– The appliance is using a memory Dual In-line Memory Module (DIMM), a hard disk, or a NIC that’s not qualified by Riverbed.
– Other hardware issues.
By default, this alarm is enabled.
• Power Supply—Enables an alarm and sends an email notification if an inserted power supply cord doesn’t have power, as opposed to a power supply slot with no power supply cord inserted. By default, this alarm is enabled.
• RAID—Enables an alarm and sends an email notification if the system encounters an error with the RAID array (for example, missing drives, pulled drives, drive failures, and drive rebuilds). An audible alarm can also sound. To see if a disk has failed, enter this CLI command from the system prompt:
show raid diagram
For drive rebuilds, if a drive is removed and then reinserted, the alarm continues to be triggered until the rebuild is complete.
Rebuilding a disk drive can take four to six hours.
This alarm applies only to the appliance RAID Series 3000, 5000, and 6000.
By default, this alarm is enabled.
• SSD Write Cycle Level Exceeded—Enables an alarm if the accumulated SSD write cycles exceed 95 percent of the predefined write cycle limit on appliance models 7050L and 7050M. If the alarm is triggered, the administrator can swap out the disk before any problems arise. By default, this alarm is enabled.
Inbound QoS WAN Bandwidth Configuration
LAN-WAN Loop Alarm
Licensing
Enables an alarm and sends an email notification if a license on the SCC is removed, is about to expire, has expired, or is invalid. This alarm triggers if the SCC has no MSPEC license installed for its currently configured model.
• Appliance Unlicensed—This alarm triggers if the SCC has no BASE or MSPEC license installed for its currently configured model.
• Autolicense critical event—This alarm triggers if the SCC autolicense has a critical event.
• Autolicense information event—This alarm triggers if the SCC autolicense has event information.
• Licenses Expired—This alarm triggers if one or more features has at least one license installed, but all of them are expired.
• Licenses Expiring—This alarm triggers if the license for one or more features is going to expire within two weeks.
The licenses expiring and licenses expired alarms are triggered per feature. For example, if you install two license keys for a feature, LK1-FOO-xxx (expired) and LK1-FOO-yyy (not expired), the alarms don’t trigger, because the feature has one valid license.
By default, this alarm is enabled.
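The per-feature rule above can be sketched as follows. This is a hypothetical illustration, not the SCC's implementation; the feature names and dates are invented, and the "expiring" branch assumes the two-week window applies per feature:

```python
from datetime import date

def license_alarms(licenses, today):
    """Evaluate per-feature license alarms.

    licenses: dict mapping feature name -> list of license expiry dates.
    Returns a dict of feature -> 'expired', 'expiring', or None.
    A feature alarms only when none of its licenses is valid well
    into the future: one valid license keeps the alarm silent.
    """
    alarms = {}
    for feature, expiries in licenses.items():
        if all(d < today for d in expiries):
            alarms[feature] = "expired"
        elif all((d - today).days <= 14 for d in expiries):
            alarms[feature] = "expiring"   # everything expires within two weeks
        else:
            alarms[feature] = None         # at least one valid license
    return alarms

today = date(2024, 6, 1)
# Feature FOO has one expired key and one valid key: no alarm triggers.
result = license_alarms({"FOO": [date(2024, 1, 1), date(2025, 1, 1)]}, today)
# result["FOO"] is None: the valid 2025 key keeps the alarm silent.
```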
Link State
Enables an alarm and sends an email notification if an Ethernet link is lost due to an unplugged cable or dead switch port. Depending on which link is down, the system might no longer be optimizing, and a network outage could occur.
This is often caused by the interfaces of surrounding devices, such as routers or switches, transitioning. This alarm also accompanies service or system restarts on the SCC.
For WAN/LAN interfaces, the alarm triggers if in-path support is enabled for that WAN/LAN pair.
By default, this alarm is disabled.
Load Balancing Alerts
Enables an alarm with either status:
• Load Balance Service—Indicates whether the load-balancing service is properly configured.
• Oversubscription Alert—Indicates when the total capacity of the remote SteelHeads is much greater than the total capacity of the local SteelHeads (oversubscription).
For detailed information, see the SteelHead Interceptor User Guide.
Local Cluster Alerts
Enables an alarm when the selected local cluster conditions are met:
• Local SteelHead Interceptor Disconnection Alert—If a local Interceptor is disconnected from the cluster.
• SteelHead Admission Control Alert—If a local appliance is under admission control.
• SteelHead Capacity Alert—If a local appliance is near to or has reached capacity.
• SteelHead Permanent Capacity Adjustment Alert—If capacity reduction has been triggered for a local appliance.
• Version Incompatibility Alert—If version incompatibility exists among cluster appliances.
For detailed information, see the SteelHead Interceptor User Guide.
Memory Paging
Enables an alarm and sends an email notification if memory paging is detected. If 100 pages are swapped every couple of hours, the system is functioning properly. If thousands of pages are swapped every few minutes, contact Support. By default, this alarm is enabled.
Neighbor Incompatibility
Enables an alarm if the system has encountered an error in reaching a SteelHead configured for connection forwarding. By default, this alarm is enabled.
Network Bypass
Enables an alarm and sends an email notification if the system is in bypass failover mode. By default, this alarm is enabled.
NFS v2/v4 Alarm
Enables an alarm and sends an email notification if the SCC detects that either NFSv2 or NFSv4 is in use. The SteelHead supports only NFSv3 and passes through all other versions. By default, this alarm is enabled.
Optimization Service
• Internal Error—Enables an alarm and sends an email notification if the RiOS optimization service encounters a condition that can degrade optimization performance. By default, this alarm is enabled.
• Service Status—Enables an alarm and sends an email notification if the RiOS optimization service encounters a service condition. By default, this alarm is enabled. The message indicates the reason for the condition.
• Unexpected Halt—Enables an alarm and sends an email notification if the RiOS optimization service halts due to a serious software error. By default, this alarm is enabled.
Outbound QoS WAN Bandwidth Configuration
Path Selection Path Down
Path Selection Path Probing Error
Process Dump Creation Error
Enables an alarm and sends an email notification if the system detects an error while trying to create a process dump. This alarm indicates an abnormal condition where RiOS can’t collect the core file after three retries. It can be caused by the /var directory reaching capacity, among other conditions. When the alarm is raised, the directory is blacklisted. By default, this alarm is enabled.
Proxy File Service
Indicates that there has been a PFS operation or configuration error:
• Proxy File Service Configuration—Indicates that a configuration attempt has failed. If the system detects a configuration failure, attempt the configuration again.
• Proxy File Service Operation—Indicates that a synchronization operation has failed. If the system detects an operation failure, attempt the operation again.
By default, this alarm is enabled.
Riverbed Host Tools version
Secure Transport
Enables an alarm and sends an email notification if the system encounters a problem with secure transport.
Secure Vault
Enables an alarm and sends an email notification if the system encounters a problem with the secure vault:
• Secure Vault Locked—Indicates that the secure vault is locked. To optimize SSL connections or to use RiOS data store encryption, the secure vault must be unlocked.
• Secure Vault New Password Recommended—Indicates that the secure vault requires a new, nondefault password. Reenter the password.
• Secure Vault Not Initialized—Indicates that an error has occurred while initializing the secure vault. When the vault is locked, SSL traffic isn’t optimized and you can’t encrypt the RiOS data store.
Software Compatibility
Enables an alarm if there is a peer (Peer Mismatch) or software version mismatch (Software Version Mismatch) in the Riverbed system. By default, this alarm is enabled for both.
SSL
Enables an alarm if an error is detected in your SSL configuration.
• Non-443 SSL Servers—Indicates that during a RiOS upgrade (for example, from 5.5 to 6.0), the system has detected a preexisting SSL server certificate configuration on a port other than the default SSL port 443. SSL traffic can’t be optimized. To restore SSL optimization, you can add an in-path rule to the client-side SCC to intercept the connection and optimize the SSL traffic on the nondefault SSL server port.
After adding an in-path rule, you must clear this alarm manually by entering this CLI command:
stats alarm non_443_ssl_servers_detected_on_upgrade clear
• SSL Certificates Error (SSL CAs)—Indicates that an SSL peering certificate has failed to reenroll automatically within the Simple Certificate Enrollment Protocol (SCEP) polling interval.
• SSL Certificates Error (SSL Peering CAs)—Indicates that an SSL peering certificate has failed to reenroll automatically within the Simple Certificate Enrollment Protocol (SCEP) polling interval.
• SSL Certificates Expiring—Indicates that an SSL certificate is about to expire.
• SSL Certificates SCEP—Indicates that an SSL certificate has failed to reenroll automatically within the SCEP polling interval.
By default, this alarm is enabled.
SteelFusion Blockstore
Enables an alarm if the system encounters any of these issues with the SteelFusion Edge blockstore:
• The blockstore is running out of space.
• The blockstore is out of space.
• The blockstore is running out of memory.
• The blockstore couldn’t read data that was already replicated to the DC.
• The blockstore couldn’t read data that’s not yet replicated to the DC.
• The blockstore fails to start due to disk errors or an incorrect configuration.
• The Granite Edge software version is incompatible with the blockstore version on disk.
• The blockstore couldn’t save data to disk due to a media error.
By default, this alarm is enabled.
SteelFusion Core
Enables an alarm if the system encounters any of these issues with the SteelFusion Core:
• The Edge device has connected to a Granite Core that doesn’t recognize the Edge device.
• The Edge doesn’t have an active connection with the Granite Core.
• The data channel between Granite Core and the Edge is down.
• The connection between the Granite Core and the Edge has stalled.
By default, this alarm is enabled.
SteelFusion iSCSI
Enables an alarm if the iSCSI module encounters an error. By default, this alarm is enabled.
SteelFusion LUN
Enables an alarm if a LUN becomes unavailable. By default, this alarm is enabled.
SteelFusion Snapshot
Enables an alarm if a snapshot fails to be committed to the SAN, or a snapshot fails to complete because Windows times out. By default, this alarm is enabled.
SteelFusion Uncommitted Data
Enables an alarm if a large amount of data in the block store needs to be committed to SteelFusion Core. By default, this alarm is enabled.
Storage Profile Switch Failed
Enables an alarm when an error occurs while repartitioning the disk drives during a storage profile switch. A profile switch changes the disk space allocation on the drives, clears the SteelFusion data stores, and repartitions the data stores to the appropriate sizes.
By default, this alarm is enabled.
System Detail Report
Enables an alarm if a system component has encountered a problem. By default, this alarm is disabled (in RiOS 7.0.3 and later).
Temperature
• Critical Temperature—Enables an alarm and sends an email notification if the CPU temperature exceeds the rising threshold. When the CPU returns to the reset threshold, the critical alarm is cleared. The default value for the rising threshold temperature is 70°C; the default reset threshold temperature is 67°C.
• Warning Temperature—Enables an alarm and sends an email notification if the CPU temperature approaches the rising threshold. When the CPU returns to the reset threshold, the warning alarm is cleared.
• Rising Threshold—Specifies the rising threshold. The alarm activates when the temperature exceeds the rising threshold. The default value is 70°C.
• Reset Threshold—Specifies the reset threshold. The alarm clears when the temperature falls below the reset threshold. The default value is 67°C.
After the alarm triggers, it can’t trigger again until after the temperature falls below the reset threshold and then exceeds the rising threshold again.
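The rising/reset threshold pair is a standard hysteresis pattern. A minimal sketch, using the default thresholds above (in °C); the class and method names are illustrative:

```python
class TemperatureAlarm:
    """Hysteresis alarm: trips above the rising threshold, clears below
    the reset threshold, and can't re-trigger until it has cleared."""

    def __init__(self, rising=70, reset=67):
        self.rising = rising
        self.reset = reset
        self.active = False

    def update(self, temp):
        """Return True exactly when a new alarm event fires."""
        if not self.active and temp > self.rising:
            self.active = True
            return True          # alarm triggers
        if self.active and temp < self.reset:
            self.active = False  # alarm clears
        return False

alarm = TemperatureAlarm()
events = [alarm.update(t) for t in [65, 71, 72, 68, 66, 71]]
# Fires at 71, stays silent at 72 and 68, clears at 66, fires again at 71.
```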
Web Proxy
• Web Proxy Service - Configuration
• Web Proxy Service - Service Status
Announcements
You can create or modify a login message or a message of the day. The login message appears in the SCC Login page. The message of the day appears in the Dashboard and when you first log in to the CLI.
You can change announcement settings for the selected system settings policy in the Announcements page.
Login Message
Allows you to type a message in the text box to appear on the Login page.
MOTD
Allows you to type a message in the text box to appear on the Home page.
Email
You can change email notification settings for the selected system settings policy in the Email page as follows:
SMTP Server
Specifies the SMTP server. You must have external DNS and external access for SMTP traffic for this feature to function.
Make sure you provide a valid SMTP server to ensure that the users you specify receive email notifications for events and failures.
SMTP Port
Specifies the port number for the SMTP server. The default value is 25.
Send Reminder of Pass-through Rules via Email
Select this option to periodically send an email reminder to evaluate in-path and load-balancing rules. Frequently, pass-through in-path and load-balancing rules are created as a temporary workaround for an acute problem, and they often end up becoming permanent because the administrator forgets to remove them. To activate this feature, you must also specify Enable Email Notification in the In-Path Rules policy page. Reminder emails are sent every 15 days.
Report Events via Email
Reports events through email. Specify a list of email addresses to receive the notification messages. Separate addresses by commas.
Report Failures via Email
Reports failures through email. Specify a list of email addresses to receive the notification messages. Separate addresses by commas.
Override Default Sender’s Address
Configures the sender address that the SMTP protocol uses for outgoing error or event notification messages.
You can also configure the outgoing email address sent to the client recipients. The default outgoing address is do-not-reply@hostname.domain. If you don’t specify a domain the default outgoing email is do-not-reply@hostname.
You can configure the host and domain settings in the Host Settings page.
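The default outgoing address described above can be sketched as a simple derivation from the host and domain settings; this is an illustration of the stated rule, not the appliance's code:

```python
def default_sender(hostname, domain=None):
    """Build the default outgoing address: do-not-reply@hostname.domain,
    or do-not-reply@hostname when no domain is configured."""
    if domain:
        return f"do-not-reply@{hostname}.{domain}"
    return f"do-not-reply@{hostname}"

default_sender("scc01", "example.com")  # do-not-reply@scc01.example.com
default_sender("scc01")                 # do-not-reply@scc01
```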
Report Failures to Technical Support
Reports serious failures, such as system crashes, to Support. Specify the email addresses to which to send notification messages. Separate addresses by spaces, semicolons, commas, or vertical bars. We recommend that you activate this feature so that problems are promptly corrected.
This option doesn’t automatically report a disk drive failure. In the event of a disk drive failure, contact Support.
Logging
You can configure remote logging servers, log rotation and filtering, and log viewing preferences for the selected system settings policy in the Logging page.
By default, the system rotates each log file every 24 hours or when the file size reaches one gigabyte uncompressed. You can change the rotation to weekly or monthly, or rotate the files based on file size.
The automatic rotation of system logs deletes your oldest log file, labeled as Archived log #10, pushes the current log to Archived log #1, and starts a new current-day log file.
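That rotation shifts each archive up by one and drops the oldest. A sketch of the renaming, with hypothetical file contents standing in for the log files:

```python
def rotate(logs, max_archives=10):
    """Rotate logs: drop archive #max_archives, shift each remaining
    archive up by one, move the current log to archive #1, and start
    a new, empty current log.

    logs: dict like {"current": "...", 1: "...", 2: "..."}.
    """
    rotated = {}
    for n in sorted(k for k in logs if k != "current"):
        if n + 1 <= max_archives:      # the oldest archive is deleted
            rotated[n + 1] = logs[n]
    rotated[1] = logs["current"]       # current log becomes archive #1
    rotated["current"] = ""            # new current-day log
    return rotated

state = rotate({"current": "day3", 1: "day2", 2: "day1"})
# state now maps: current -> '', 1 -> 'day3', 2 -> 'day2', 3 -> 'day1'
```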
The Logging page contains these groups of settings:
Logging configuration
These configuration options are available under Logging Configuration:
Minimum Severity
Specifies the minimum severity level for the system log messages. The log contains all messages with this severity level or higher. Select one of these levels from the drop-down list:
• Emergency—Unusable system.
• Alert—Action must be taken immediately.
• Critical—Conditions that affect the functionality of the SCC.
• Error—Conditions that probably affect the functionality of the SCC.
• Warning—Conditions that could affect the functionality of the SCC, such as authentication failures.
• Notice—Normal but significant conditions, such as a configuration change. This is the default setting.
• Info—Informational messages that provide general information about system operations.
This control applies to the system log only. It doesn’t apply to the user log.
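These levels follow the standard syslog severity ordering. A sketch of the "this severity level or higher" filter; the function name is illustrative:

```python
# Standard syslog severity ordering: lower index = more severe.
SEVERITIES = ["emergency", "alert", "critical", "error",
              "warning", "notice", "info"]

def passes_filter(message_severity, minimum_severity="notice"):
    """A message is logged when its severity is at least as severe
    as the configured minimum (at or above it in the ordering)."""
    return (SEVERITIES.index(message_severity)
            <= SEVERITIES.index(minimum_severity))

passes_filter("error")   # True: error is more severe than the notice default
passes_filter("info")    # False: info is below the notice default
```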
Maximum Number of Log Files
Specifies the maximum number of logs to store. The default value is 10.
Lines Per Log Page
Specifies the number of lines per log page. The default value is 100.
Rotate Based On
Specifies one of these rotation options:
• Time—Select Day, Week, or Month from the drop-down list.
• Disk Space—Specify how much disk space, in megabytes, the log uses before it rotates. The default value is 16MB.
The size of the log file is checked only at 10-minute intervals.
Adding a remote log server
You can add a remote log server that uses a secure TLS connection on this page. You must first add a certificate and key on the Logging appliance page for each log server being configured. These configuration options are available under Remote Log Servers.
Add a New Log Server:
Server IP or Hostname
Specifies the server IP address or hostname of the remote log server.
Port
Specifies the remote log server port. If you are upgrading from a release that didn’t include a port number option, you’ll need to remove the remote log server and then add it back, specifying a port. The default value is 514.
Minimum Severity
Specifies the minimum severity level for the log messages. The log contains all messages with this severity level or higher. Select one of these levels from the drop-down list:
• Emergency—Unusable system.
• Alert—Action must be taken immediately.
• Critical—Conditions that affect the functionality of the SCC.
• Error—Conditions that probably affect the functionality of the SCC.
• Warning—Conditions that could affect the functionality of the SCC, such as authentication failures.
• Notice—Normal but significant conditions, such as a configuration change. This is the default setting.
• Info—Informational messages that provide general information about system operations.
Enable Secure Connection
Enables secure remote logging. A log certificate must be installed before a secure remote logging server can be enabled.
Adding a new process logging filter
These configuration options are available:
Add a New Process Logging Filter
Displays the controls to add a new process logging filter.
Process
Specifies one of these settings from the drop-down list:
• alarmd—Alarm manager
• cifs—CIFS Optimization
• cmcfc—SCC Auto-registration Utility
• rgp—SCC Connector
• rgpd—SCC Connection Manager
• cli—Command-line interface
• mgmtd—Device Control and Management
• http—HTTP Optimization
• hald—Hardware Abstraction Daemon
• notes—Lotus Notes Optimization
• mapi—MAPI Optimization
• nfs—NFS Optimization
• pm—Process Manager
• qosd—QoS Classification
• sched—Process Scheduler
• ssl—SSL optimization
• statsd—Statistics Collector
• wdt—Watchdog Timer
• webasd—Web Application Process
• domain_auth—Windows Domain Authentication
Minimum Severity
Specifies one of these settings from the drop-down list:
• Emergency—Unusable system.
• Alert—Action must be taken immediately.
• Critical—Conditions that affect the functionality of the SCC.
• Error—Conditions that probably affect the functionality of the SCC.
• Warning—Conditions that could affect the functionality of the SCC, such as authentication failures.
• Notice—Normal but significant conditions, such as a configuration change. This is the default setting.
• Info—Informational messages that provide general information about system operations.
Add
Adds a new process.
Monitored ports
You can specify monitored ports for the selected system settings policy in the Monitored Ports page.
The appliance automatically discovers all the ports in the system that have traffic. Discovered ports, with a label (if one exists), are added to the Traffic Summary report. If a label doesn’t exist, an unknown label is added to the discovered port. To change the unknown label to a name representing the port, you must add the port with a new label. All statistics for this new port label are preserved from the time the port was discovered.
By default, traffic is monitored on ports 21 (FTP), 80 (HTTP), 135 (EPM), 139 (CIFS:NetBIOS), 443 (SSL), 445 (CIFS:TCP), 1352 (Lotus Notes), 1433 (SQL:TDS), 1748 (SRDF), 3225 (FCIP), 3226 (FCIP), 3227 (FCIP), 3228 (FCIP), 7830 (MAPI), 7919 (Packet Mode), 8777 (RCU), 8778 (SMB Signed), 8779 (SMB2), 8780 (SMB2 Signed) and 10566 (SnapMirror).
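The discovery-and-label behavior can be sketched as a lookup with an "unknown" fallback. The default labels below are a subset of the list above; the custom-label parameter is illustrative:

```python
DEFAULT_PORT_LABELS = {
    21: "FTP", 80: "HTTP", 135: "EPM", 139: "CIFS:NetBIOS",
    443: "SSL", 445: "CIFS:TCP", 1352: "Lotus Notes",
    7830: "MAPI", 10566: "SnapMirror",   # subset of the full default list
}

def label_for(port, custom_labels=None):
    """Return the label for a discovered port: a custom label wins,
    then a default label, otherwise 'unknown'."""
    if custom_labels and port in custom_labels:
        return custom_labels[port]
    return DEFAULT_PORT_LABELS.get(port, "unknown")

label_for(443)                    # 'SSL'
label_for(9999)                   # 'unknown'
label_for(9999, {9999: "MyApp"})  # 'MyApp' after adding the port with a label
```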
These configuration options are available:
Add Port
Displays the controls to add a new port.
Port Number
Specifies the port to be monitored.
Port Description
Specifies a description of the type of traffic on the port.
Add
Displays the controls for adding a port.
SNMP ACLs
The SNMP ACLs page contains these groups of settings:
Security names
The security names identify an individual user (v1 or v2c only).
These configuration options are available:
Add a New Security Name
Displays the controls to add a security name.
Security Name
Specifies a name to identify a requestor (allowed to issue gets and sets). The security name can make changes to the View Based Access Control Model (VACM) security name configuration.
Traps for v1 and v2c are independent of the security name.
Community String
Specifies the password-like community string to control access. Use a combination of uppercase, lowercase, and numerical characters to reduce the chance of unauthorized access to the appliance.
If you specify a read-only community string (located on the SNMP Basic page under SNMP Server Settings), it takes precedence over this community name and enables users to access the entire MIB tree from any source host. If this isn’t desired, delete the read-only community string.
Source IP Address and Mask Bits
Specifies the host IP address and mask bits to which you permit access using the security name and community string.
You can access the entire MIB tree from any source host using the Read-Only Community String on the SNMP Basic page. For detailed information about the SNMP Basic page, see
SNMP basic.
Add
Adds the security name.
Groups
Groups map a security name and security model pair to a group name, which access policies then reference.
These configuration options are available:
Add a New Group
Displays the controls to add a new group.
Group Name
Specifies a group name.
Security Model and Name Pairs
Specifies a security model. Click the plus (+) button and select a security model from the drop-down list:
• v1 or v2c displays another drop-down list; select a security name.
• usm displays another drop-down list; select a user.
To add another Security Model and Name pair, click the plus (+) button.
Add
Adds the group name and security model and name pairs.
Views
These configuration options are available:
Add a New View
Displays the controls to add a new view.
View Name
Specifies a descriptive view name to facilitate administration.
Includes
Specifies the Object Identifiers (OIDs) to include in the view, separated by commas: for example, .1.3.6.1.2.1.1. By default, the view excludes all OIDs. You can specify .iso or any subtree or subtree branch. You can specify an OID number or use its string form: for example, .iso.org.dod.internet.private.enterprises.rbt.products.SteelHead.system.model.
Excludes
Specifies the OIDs to exclude from the view, separated by commas. By default, the view excludes all OIDs.
Add
Adds the view.
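A view is a set of include/exclude subtree prefixes. The sketch below models the matching rule on standard VACM behavior, where the most specific (longest) matching prefix wins and anything unmatched is excluded; it is an assumption about the semantics, not the appliance's code:

```python
def oid_visible(oid, includes, excludes):
    """Return True if oid falls inside the view. An OID matches a
    subtree prefix if it equals it or extends it; the longest matching
    prefix decides, and by default (no match) the OID is excluded."""
    def matches(prefix):
        return oid == prefix or oid.startswith(prefix + ".")

    best, visible = -1, False
    for prefix in includes:
        if matches(prefix) and len(prefix) > best:
            best, visible = len(prefix), True
    for prefix in excludes:
        if matches(prefix) and len(prefix) > best:
            best, visible = len(prefix), False
    return visible

oid_visible(".1.3.6.1.2.1.1.5", includes=[".1.3.6.1.2.1.1"], excludes=[])
# True: the OID is under the included system subtree
oid_visible(".1.3.6.1.4.1", includes=[".1.3.6.1.2.1.1"], excludes=[])
# False: no include matches, so the OID is excluded by default
```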
Access policies
Access policies define who gets access to which information. An access policy comprises <group-name, security-level, read-view-name>.
These configuration options are available:
Add a New Access Policy
Displays the controls to add a new access policy.
Group Name
Specifies a group name from the drop-down list.
Security Level
Determines whether a single atomic message exchange is authenticated. Select one of these from the drop-down list:
• No Auth—Doesn’t authenticate packets and doesn’t use privacy. This is the default setting.
• Auth—Authenticates packets but doesn’t use privacy.
A security level applies to a group, not to an individual user.
Read View
Specifies a view from the drop-down list.
Add
Adds the policy to the policy list.
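An access policy ties the group, security level, and read view together. The lookup can be sketched as below; the policy entries are hypothetical, and the rule that a request's level must be at least as strong as the policy's level is an assumption modeled on standard VACM behavior:

```python
# Security levels in increasing strength.
LEVELS = ["NoAuth", "Auth"]

# Hypothetical policies: (group-name, security-level, read-view-name).
POLICIES = [
    ("operators", "NoAuth", "system-view"),
    ("admins", "Auth", "full-view"),
]

def read_view(group, level):
    """Return the read view for a request, or None if no policy allows it.
    The request's level must be at least as strong as the policy's."""
    for g, required, view in POLICIES:
        if g == group and LEVELS.index(level) >= LEVELS.index(required):
            return view
    return None

read_view("admins", "Auth")    # 'full-view'
read_view("admins", "NoAuth")  # None: the policy requires authentication
```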
SNMP basic
You can configure SNMP contact and trap receiver settings to allow events to be reported to an SNMP entity in the SNMP Basic page.
Traps are messages sent by an SNMP entity that indicate the occurrence of an event. The default system configuration doesn’t include SNMP traps.
RiOS 7.0 and later provide support for these SNMP versions:
• SNMPv1
• SNMPv2
• SNMPv3, which provides authentication through the User-based Security Model (USM).
• View-Based Access Control Model (VACM), which provides richer access control.
• SNMPv3 authentication using AES 128 and DES encryption privacy.
The SNMP page contains these groups of settings:
SNMP server settings
These configuration options are available:
Enable SNMP Traps
Enables event reporting to an SNMP entity.
System Contact
Specifies the username for the SNMP contact.
System Location
Specifies the physical location of the SNMP system.
Read-Only Community String
Specifies a string to identify the read-only community: for example, Read-only. Community strings can’t contain the # (hash) character. The default value is public.
Adding a new trap receiver
These configuration options are available:
Add New Trap Receiver
Displays the controls for configuring new trap receivers.
Receiver
Specifies the hostname, IPv4 or IPv6 address. For IPv6, specify an IP address using this format: eight 16-bit hexadecimal strings separated by colons, 128-bits. For example: 2001:38dc:0052:0000:0000:e9a4:00c5:6282
You don’t need to include leading zeros. For example: 2001:38dc:52:0:0:e9a4:c5:6282
You can replace consecutive zero strings with double colons (::). For example: 2001:38dc:52::e9a4:c5:6282
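Python's standard ipaddress module applies the same zero-compression rules, which is a convenient way to check the formats shown above:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:38dc:0052:0000:0000:e9a4:00c5:6282")

# Leading zeros are dropped and the run of zero groups is collapsed
# to "::" in the compressed form; .exploded restores the full form.
str(addr)      # '2001:38dc:52::e9a4:c5:6282'
addr.exploded  # '2001:38dc:0052:0000:0000:e9a4:00c5:6282'
```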
Destination Port
Specifies the destination port.
Receiver Type
Specifies SNMPv1, v2c, or v3 (user-based security mode) from the drop-down list.
Remote User
(Appears only when you select v3.) Specifies a remote username.
Authentication
(Appears only when you select v3.) Specifies either Supply a Password or Supply a Key to use while authenticating users.
Authentication Protocol
(Appears only when you select v3.) Specifies an authentication method from the drop-down list:
• MD5—Specifies the Message-Digest 5 algorithm, a widely used cryptographic hash function with a 128-bit hash value. This is the default value.
• SHA—Specifies the Secure Hash Algorithm, a set of related cryptographic hash functions.
Password/Password Confirm
(Appears only when you select v3.) Specifies a password. The password must have a minimum of eight characters. Confirm the password in the Password Confirm text box.
Security Level
(Appears only when you select v3.) Determines whether a single atomic message exchange is authenticated. Select one of these levels from the drop-down list:
• No Auth—Doesn’t authenticate packets and doesn’t use privacy. This is the default setting.
• Auth—Authenticates packets but doesn’t use privacy.
• AuthPriv—Authenticates packets using AES 128 and DES to encrypt messages for privacy.
A security level applies to a group, not to an individual user.
Community
Specifies the SNMP community name.
Enable Receiver
Enables the new trap receiver.
Add
Adds the new configuration to the Trap Receiver list.
SNMPv3
You can change SNMPv3 settings for the selected system settings policy in the SNMP v3 page.
SNMPv3 provides additional authentication and access control for message security: for example, you can verify the identity of the SNMP entity (manager or agent) sending the message.
RiOS 7.0 and later support SNMPv3 message encryption for increased security.
Using SNMPv3 is more secure than SNMPv1 or SNMPv2; however, it requires more configuration steps to provide the additional security features. For detailed information about SNMPv3, see the SteelHead User Guide.
These configuration options are available:
Add a New User
Displays the controls to add a new user.
User Name
Specifies the username.
Authentication Protocol
Specifies an authentication method from the drop-down list:
• MD5—Specifies the Message-Digest 5 algorithm, a widely used cryptographic hash function with a 128-bit hash value. This is the default value.
• SHA-1—Specifies the Secure Hash Algorithm, a set of related cryptographic hash functions. SHA-1 is considered to be the successor to MD5.
Authentication
Specifies either Supply a Password or Supply a Key to use while authenticating users.
SHA Key
(Appears only when you select Supply a Key.) Specifies a unique authentication key. The key is an MD5 or SHA-1 digest created using md5sum or sha1sum.
Password
Specifies a password. The password must have a minimum of eight characters.
Password Confirm
Confirms the password.
Use Privacy Option
Specifies SNMPv3 encryption:
• Privacy Protocol—Select either the AES or DES protocol from the drop-down list. AES uses the AES128 algorithm.
• Privacy—Select same as Authentication, Supply a Password, or Supply a Key to use while authenticating users. The default setting is Same as Authentication.
Add
Adds the user.
NTP settings
You configure NTP settings in the NTP Settings page.
For details, see the SteelHead User Guide.
These configuration options are available:
Use NTP Time Synchronization
Enables NTP time synchronization.
Add a New NTP Server
Displays the controls to add a server.
Hostname or IP Address
Specifies the hostname or IP address for the NTP server. For IPv6, specify an IP address using this format: eight 16-bit hexadecimal strings separated by colons, 128-bits. For example: 2001:38dc:0052:0000:0000:e9a4:00c5:6282
You don’t need to include leading zeros. For example: 2001:38dc:52:0:0:e9a4:c5:6282
You can replace consecutive zero strings with double colons (::). For example: 2001:38dc:52::e9a4:c5:6282
Version
Specifies the NTP server version from the drop-down list: 3 or 4.
Enabled/Disabled
Select Enabled from the drop-down list to connect to the NTP server, or Disabled to disconnect from it.
Key ID
Specifies the MD5 key identifier to use to authenticate the NTP server. The valid range is from 1 to 65534. The key ID must appear on the trusted keys list.
Add a New NTP Authentication Key
Displays the controls to add an authentication key to the key list. Both trusted and untrusted keys appear on the list.
Key ID
Specifies the secret MD5 key identifier for the NTP server. The valid range is from 1 to 65534.
Secret (Text)
Specifies the shared secret. You must configure the same shared secret for both the NTP server and the NTP client to use MD5-based cryptography. The shared secret:
• is limited to 16 characters or fewer
• can’t include white space or the # character
• can’t be empty
• is case sensitive
The secret appears in the key list as its MD5 hash value.
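As noted, the secret itself isn't shown in the key list; only its MD5 hash is. A sketch of that transformation with Python's hashlib, using a made-up secret that satisfies the constraints above:

```python
import hashlib

secret = "ntpSecret1"  # hypothetical shared secret: <= 16 chars, no white space or #
assert len(secret) <= 16 and " " not in secret and "#" not in secret

# The key list displays the MD5 hash of the secret, not the secret itself.
displayed = hashlib.md5(secret.encode()).hexdigest()
print(displayed)  # a 32-character hex digest
```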
Time zone
You configure the time zone in the Time Zone page.
For details, see the SteelHead User Guide.
This configuration option is available:
Time Zone
Specifies a time zone from the drop-down list. The default value is GMT.
If you change the time zone, log messages retain the previous time zone until you reboot the appliance.
Security policy settings
This section describes the Security Policy feature set.
General Security Settings
You can prioritize local, RADIUS, and TACACS+ authentication methods for the system and set the authorization policy and default user for RADIUS and TACACS+ authorization systems in the General Settings page.
Make sure to put the authentication methods in the order in which you want authentication to occur. If authentication fails on the first method, the next method is attempted, and so on, until all of the methods have been attempted.
To set TACACS+ authorization levels (admin or read-only) to allow certain members of a group to log in, add this attribute to users on the TACACS+ server:
service = rbt-exec {
local-user-name = "monitor"
}
where you replace monitor with admin for write access.
For details about general security settings, see the SteelHead User Guide.
These configuration options are available:
Authentication Methods
Specifies an authentication method from the drop-down list. The methods are listed in the order in which they occur. If authentication fails on the first method, the next method is attempted, and so forth, until all the methods have been attempted.
For RADIUS/TACACS+, fallback only when servers are unavailable
Select this check box to prevent local login if the RADIUS or TACACS+ server denies access, but allow local login if the RADIUS or TACACS+ server isn’t available.
When checked, the appliance falls back to the next authentication method only when all of the RADIUS or TACACS+ servers haven’t responded. This is the default setting.
When this feature is disabled, the appliance doesn’t fall back to the RADIUS or TACACS+ servers. If it exhausts the other servers and doesn’t get a response, it returns a server failure.
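The ordered-method behavior described above can be sketched as a loop over the configured methods; the function and the "accept"/"deny"/"unavailable" result values are illustrative, not an actual RiOS API:

```python
def authenticate(methods, user, password):
    """Try each configured method in order; fall back to the next one
    only when the current method's servers don't respond."""
    for method in methods:
        result = method(user, password)  # "accept", "deny", or "unavailable"
        if result == "accept":
            return True
        if result == "deny":
            return False  # an explicit denial stops further fallback
        # "unavailable": no server for this method responded,
        # so consult the next method in the configured order
    return False  # every method was unavailable: report failure

# Usage: RADIUS denies, so the local method is never consulted.
radius = lambda u, p: "deny"
local = lambda u, p: "accept"
print(authenticate([radius, local], "jane", "pw"))  # False
```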
Safety Account
Creates a safety account so that admin/sys admin users can log in to the SCC even if remote authentication servers are unreachable. A safety account increases security and conforms to Defense Information Systems Agency (DISA) requirements.
Only the selected safety account is allowed to log in when the AAA server isn’t reachable. (Only one user can be assigned to the safety account.)
You can create a system administrator user in the Administrator > Security: User Permissions page.
Safety Account User
Specifies the user account from the drop-down list. The default value is admin.
Authorization Policy
Specifies Remote First, Remote Only, or Local Only.
User Permissions
You can change the administrator or monitor passwords and define role-based users for the selected security policy in the User Permissions page.
For details about user permissions, see
About user permissions.
Capability-Based Accounts
The system uses these accounts based on what actions the user can take:
• Admin—The system administrator user has full privileges. For example, as an administrator you may set and modify configuration settings, add and delete users, restart the optimization service, reboot the SteelHead, and create and view performance and system reports. The system administrator role allows you to add or remove a system administrator role for any other user, but not for yourself.
• Monitor—A monitor user may view reports, view user logs, and change their own password. A monitor user can’t make configuration changes, modify private keys, view system logs, or manage cryptographic modules in the system.
• Max Web Login Limit—You can configure the maximum number of logins to the web UI for the specified user. The default value is -1, which allows unlimited logins.
• Max CLI Login Limit—Configures the maximum number of logins to the CLI for this user. The default value is -1, which allows unlimited logins.
• Enable Account—Check this box to enable the specified account.
Role-Based Accounts
Allows you to create users, assign passwords to the user, and assign varying permissions based on the roles of the user.
An administrator role configures a system administrator role. Read-only permission isn’t allowed for this role. This role allows permission for all other RBM roles, including creating, editing and removing user accounts. The system administrator role allows you to add or remove a system administrator role for any other user, but not for yourself.
A user role grants one of these permission levels:
• Read-only—With read-only privileges you can view current configuration settings but you can’t change them.
• Read/write—With read and write privileges you can view settings and make configuration changes for a feature.
• Deny—With deny privileges you can’t view settings or save configuration changes for a feature.
As an example, user Jane might make configuration changes to QoS and SSL, user John might only view these configuration settings, and user Joe might be unable to view, change, or save the settings for these features.
Available menu items reflect the privileges of the user. For example, any menu items that a user doesn’t have permission to use are unavailable. When a user selects an unavailable link, the User Permissions page appears.
Combining permissions by feature
RiOS 9.0 and later require additional user permissions for path selection and QoS. For example, to change a QoS rule, a user needs read/write permission for the Network Settings role in addition to read/write permission for QoS.
This table summarizes the changes to the user permission requirements for RiOS 9.0 and later.
| Management Console page | To configure this feature or change this section | Required read permission | Required read/write permission |
|---|---|---|---|
| Networking > Topology: Sites & Networks | Networks | Network Settings Read-Only | Network Settings read/write |
| | Sites | Network Settings Read-Only, QoS Read-Only, Path Selection Read-Only | Network Settings read/write, QoS read/write, Path Selection read/write |
| Networking > App Definitions: Applications | Applications | Network Settings Read-Only | Network Settings read/write |
| Networking > Network Services: Quality of Service | Enable QoS | Network Settings Read-Only | Network Settings read/write |
| | Manage QoS Per Interface | Network Settings Read-Only | Network Settings read/write |
| | QoS Profile | QoS Read-Only | QoS read/write |
| | QoS Remote Site Info | Network Settings Read-Only, QoS Read-Only | — |
| Networking > Network Services: QoS Profile Details | Profile Name | QoS Read-Only | QoS read/write |
| | QoS Classes | QoS Read-Only | QoS read/write |
| | QoS Rules | QoS Read-Only | Network Settings read/write, QoS read/write |
| Path Selection | Enable Path Selection | Network Settings Read-Only | Network Settings read/write |
| | Path Selection Rules | Network Settings Read-Only, Path Selection Read-Only | Network Settings read/write, Path Selection read/write |
| | Uplink Status | Network Settings Read-Only, Path Selection Read-Only, Reports read/write | — |
| Outbound QoS Report | — | QoS Read-Only | QoS read/write |
| Inbound QoS Report | — | QoS Read-Only | QoS read/write |
| Host Labels | — | Network Settings Read-Only or QoS Read-Only | Network Settings read/write or QoS read/write |
| Port Labels | — | Network Settings Read-Only or QoS Read-Only | Network Settings read/write or QoS read/write |
These configuration options are available:
admin/monitor
Changes the password or creates a default user account. Click the right arrow.
Change Password
Enables password protection. Password protection is an account control feature that allows you to select a password policy for more security. When you enable account control on the Administration > Security: Password Policy page, a user must use a password.
When a user has a null password to start with, the administrator can still set the user password with account control enabled. However, once the user or administrator changes the password, it can’t be reset to null as long as account control is enabled.
Password
Specifies a password in the text box.
Password Confirm
Confirms the new administrator password.
Enable Account
Enables or clears the administrator or monitor account.
When enabled, you may make the account the default user for RADIUS and TACACS+ authorization. You may designate only one account as the default user. Once enabled, the default user account may not be disabled or removed. The Accounts table displays the account as permanent.
Adding a new account
A role-based account can’t modify another role-based or capability-based account.
These configuration options are available:
Add a New Account
Displays the controls for creating a new account.
Account Name
Specifies a name for the account.
Password
Specifies a password in the text box. Retype the password to confirm it.
Enable Account
Enables the new account.
Administrator
Configures a system administrator role. This role allows permission for all other RBM roles, including creating, editing, and removing user accounts. The system administrator role allows you to add or remove a system administrator role for any other user, but not for yourself. Read-only permission isn’t allowed for this role.
User
Configures a role that determines whether the user:
• has permission to view current configuration settings but not change them (Read-Only).
• has permission to view settings and make configuration changes for a feature (read/write).
• is prevented from viewing or saving settings or configuration changes for a feature (Deny).
General Settings
Configures per-source IP connection limit and the maximum connection pooling size.
Network Settings
Configures these features:
• Topology definitions
• Site and network definitions
• Application definitions
• Host interface settings
• Network interface settings
• DNS cache settings
• Hardware assist rules
• Host labels
• Port labels
You must include this role for users configuring path selection or enforcing QoS policies in addition to the QoS and Path Selection roles.
QoS
Enforces QoS policies. You must also include the Network Settings role.
Path Selection
Configures path selection. You must also include the Network Settings role.
Optimization Service
Configures alarms, performance features, SkipWare, HS-TCP, and TCP optimization.
SteelHead In-Path Rules
Configures TCP traffic for optimization and how to optimize traffic by setting in-path rules. This role includes WAN visibility to preserve TCP/IP address or port information. For details about WAN visibility, see the SteelHead Deployment Guide.
CIFS Optimization
Configures CIFS optimization settings (including SMB signing) and overlapping open optimization.
HTTP Optimization
HTTP optimization is unavailable in cloud appliance models. This feature may become available in future releases of those models.
Configures enhanced HTTP optimization settings: URL learning, Parse and Prefetch, Object Prefetch Table, keepalive, insert cookie, file extensions to prefetch, and the ability to set up HTTP optimization for a specific server subnet.
Oracle Forms Optimization
Optimizes Oracle E-business application content and forms applications.
MAPI Optimization
Optimizes MAPI and sets Exchange and NSPI ports.
NFS Optimization
Configures NFS optimization.
Notes Optimization
Configures Lotus Notes optimization.
Citrix Optimization
Configures Citrix optimization.
SSL Optimization
Configures SSL support and the secure inner channel.
Replication Optimization
Configures the SRDF/A, FCIP, and SnapMirror storage optimization modules.
Storage Service
Configures branch storage services on SteelFusion Edge appliances (the branch storage services are only available on a SteelFusion Edge).
Security Settings
Configures security settings, including RADIUS and TACACS authentication settings and the secure vault password.
Basic Diagnostics
Customizes system diagnostic logs, including system and user log settings, but doesn’t include TCP dumps.
TCP Dumps
Customizes TCP dump settings.
Reports
Sets system report parameters.
Domain Authentication
Allows joining a Windows domain and configuring Windows domain authentication.
Citrix Acceleration
Configures Citrix optimization.
Add
Adds your settings to the system.
Password Policy
Choose one of these password policy templates, depending on your security requirements:
• Strong—Sets the password policy to more stringent enforcement settings. Selecting this template automatically prepopulates the password policy with stricter settings commonly required by higher security standards such as for the Department of Defense.
• Basic—Reverts the password policy to its predefined settings so you can customize your policy.
For details about password policy, see the SteelHead User Guide.
Under Password Management, these configuration options are available:
Login Attempts Before Lockout
Specifies the maximum number of unsuccessful login attempts before temporarily blocking user access to the appliance. The user is prevented from further login attempts when the number is exceeded. The lockout expires after the amount of time specified in Timeout for User Login After Lockout elapses.
Timeout for User Login After Lockout
Specifies the amount of time, in seconds, that must elapse before a user can attempt to log in after an account lockout due to unsuccessful login attempts. The default for the strong security template is 300.
Days Before Password Expires
Specifies the number of days the current password remains in effect. The default for the strong security template is 60. To set the password expiration to 24 hours, specify 0. To set the password expiration to 48 hours, specify 1. Leave blank to turn off password expiration.
Days to Warn User of an Expiring Password
Specifies the number of days the user is warned before the password expires. The default for the strong security template is 7.
Days to Keep Account Active After Password Expires
Specifies the number of days the account remains active after the password expires. The default for the strong security template is 305. When the time elapses, RiOS locks the account permanently, preventing any further logins.
Minimum Interval for Password Reuse
Specifies the number of password changes allowed before a password can be reused. The default for the strong security template is 0.
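The lockout controls above interact as follows; this class is an illustrative model of the documented behavior, not RiOS source, and the names are made up:

```python
import time

class LoginGuard:
    """Track failed logins and enforce a temporary lockout."""

    def __init__(self, max_attempts=3, lockout_seconds=300):
        self.max_attempts = max_attempts        # Login Attempts Before Lockout
        self.lockout_seconds = lockout_seconds  # Timeout for User Login After Lockout
        self.failures = 0
        self.locked_at = None

    def is_locked(self, now=None):
        now = now if now is not None else time.time()
        if self.locked_at is None:
            return False
        if now - self.locked_at >= self.lockout_seconds:
            # The lockout expires after the configured timeout elapses.
            self.locked_at = None
            self.failures = 0
            return False
        return True

    def record_failure(self, now=None):
        self.failures += 1
        if self.failures >= self.max_attempts:
            # Exceeding the attempt limit blocks further logins temporarily.
            self.locked_at = now if now is not None else time.time()

guard = LoginGuard(max_attempts=3, lockout_seconds=300)
for _ in range(3):
    guard.record_failure(now=1000.0)
print(guard.is_locked(now=1001.0))  # True
print(guard.is_locked(now=1301.0))  # False: the 300-second lockout has elapsed
```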
Under Password Characteristics, these configuration options are available:
Minimum Password Length
Specifies the minimum password length. The default for the strong security template is 14 alphanumeric characters.
Minimum Uppercase Characters
Specifies the minimum number of uppercase characters required in a password. The default for the strong security template is 1.
Minimum Lowercase Characters
Specifies the minimum number of lowercase characters required in a password. The default for the strong security template is 1.
Minimum Numerical Characters
Specifies the minimum number of numerical characters required in a password. The default for the strong security template is 1.
Minimum Special Characters
Specifies the minimum number of special characters required in a password. The default for the strong security template is 1.
Minimum Character Differences Between Passwords
Specifies the minimum number of characters that must be changed between the old and new password. The default for the strong security template is 4.
Maximum Consecutively Repeating Characters
Specifies the maximum number of consecutively repeating characters allowed in a password. The default value is 3.
Prevent Dictionary Words
Prevents the use of any word that’s found in a dictionary as a password. By default, this control is enabled.
Enable Session Management
Allows you to limit the number of logins when you specify a Global Maximum login limit. The default value is -1, which allows unlimited logins.
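The character-based checks above can be composed into one validator. The thresholds below mirror the strong-template defaults listed in this section; the function itself, its name, and the positional counting of character differences are assumptions for illustration, not the appliance's actual implementation:

```python
import itertools
import string

def check_password(pw, old_pw=""):
    """Apply the strong-template characteristic checks to a candidate password."""
    problems = []
    if len(pw) < 14:                                   # Minimum Password Length
        problems.append("shorter than 14 characters")
    if not any(c.isupper() for c in pw):               # Minimum Uppercase Characters
        problems.append("needs an uppercase character")
    if not any(c.islower() for c in pw):               # Minimum Lowercase Characters
        problems.append("needs a lowercase character")
    if not any(c.isdigit() for c in pw):               # Minimum Numerical Characters
        problems.append("needs a numerical character")
    if not any(c in string.punctuation for c in pw):   # Minimum Special Characters
        problems.append("needs a special character")
    # Maximum Consecutively Repeating Characters (default 3)
    longest_run = max((len(list(g)) for _, g in itertools.groupby(pw)), default=0)
    if longest_run > 3:
        problems.append("more than 3 consecutive repeats")
    # Minimum Character Differences Between Passwords (strong default 4),
    # counted here as differing positions plus any length difference (assumption)
    diff = sum(a != b for a, b in zip(pw, old_pw)) + abs(len(pw) - len(old_pw))
    if old_pw and diff < 4:
        problems.append("too similar to the old password")
    return problems

print(check_password("Str0ng&Secure!42"))  # [] -- passes every check
```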
RADIUS
You set up RADIUS server authentication for the selected security policy in the RADIUS page.
RADIUS is an access control protocol that uses a challenge and response method for authenticating users. Setting up RADIUS server authentication is optional.
For details about the RADIUS feature, see the SteelHead User Guide.
The RADIUS page contains these groups of settings:
Default RADIUS settings
These configuration options are available:
Set a Global Default Key
Enables a global server key for the RADIUS server.
Global Key
Specifies the global server key.
Leave it blank to leave the global key unchanged.
Confirm Global Key
Confirms the global server key.
Timeout (seconds)
Specifies the time-out period in seconds (1 to 60). The default value is 3.
Retries
Specifies the number of times you want to allow the user to retry authentication. The default value is 1.
RADIUS servers
These configuration options are available:
Add a RADIUS Server
Displays the controls for defining a new RADIUS server.
Hostname or IP Address
Specifies the server IPv4 or IPv6 address. For IPv6, specify the address as eight 16-bit hexadecimal groups separated by colons (128 bits total). For example: 2001:38dc:0052:0000:0000:e9a4:00c5:6282
You don’t need to include leading zeros. For example: 2001:38dc:52:0:0:e9a4:c5:6282
You can replace consecutive zero strings with double colons (::). For example: 2001:38dc:52::e9a4:c5:6282
Authentication Port
Specifies the port for the server. The default value is 1812.
Authentication Type
Specifies the authentication type: PAP, CHAP, or MS-CHAPv2.
Override the Global Default Key
Overrides the global server key for the server.
• Server Key—Specify the override server key.
• Confirm Server Key—Confirm the override server key.
Timeout (seconds)
Specifies the time-out period in seconds (1 to 60). The default value is 3.
Retries
Specifies the number of times you want to allow the user to retry authentication. Valid values are 0 to 5. The default value is 1.
Enabled
Enables the new server.
Add
Adds the RADIUS server to the list.
If you add a new server to your network and you don’t specify these fields at that time, the global settings are applied automatically.
TACACS+
You set up TACACS+ server authentication for the selected security policy in the TACACS+ page.
Enabling this feature is optional.
TACACS+ is an authentication protocol that enables a remote access server to forward a login password for a user to an authentication server to determine whether access is allowed to a given system.
For details about TACACS+, see the SteelHead User Guide.
The TACACS+ page contains these groups of settings:
Default TACACS+ settings
These configuration options are available:
Set a Global Default Key
Enables a global server key for the server.
Global Key
Specifies the global server key.
Leave it blank to leave the global key unchanged.
Confirm Global Key
Confirms the global server key.
Timeout (seconds)
Specifies the time-out period in seconds (1 to 60). The default value is 3.
Retries
Specifies the number of times you want to allow the user to retry authentication. Valid values are 0 to 5. The default is 1.
TACACS+ servers
These configuration options are available:
Add a TACACS+ Server
Displays the controls for defining a new TACACS+ server.
Hostname or IP Address
Specifies the server IP address.
Authentication Port
Specifies the port for the server. The default value is 49.
Authentication Type
Specifies the authentication type. Click either PAP or ASCII.
Override the Global Default Key
Overrides the global server key for the server.
• Server Key—Specify the override server key.
• Confirm Server Key—Confirm the override server key.
Timeout (seconds)
Specifies the time-out period in seconds (1 to 60). The default is 3.
Retries
Specifies the number of times you want to allow the user to retry authentication. Valid values are 0 to 5. The default is 1.
Enabled
Enables the new server.
Add
Adds the TACACS+ server to the list.
SAML
You set up SAML server authentication for the selected security policy in the SAML page.
Enabling this feature is optional.
SAML is an XML standard that acts as an authentication interface between an SCC and an identity provider (IdP). You can use the IdP to provide additional requirements for authentication, which can be multi-factor authentication methods such as common access card (CAC) or personal identity verification (PIV). For more information, see
Configuring SAML.
These configuration options are available:
IdP Configuration
Pastes the IdP metadata you copied or received from the IdP website.
Security Settings
• Sign Authentication Request—Select this option to have SCC sign the SAML authentication request sent to the identity provider. Signing the initial login request sent by SCC allows the identity provider to verify that all login requests originate from a trusted service provider.
• Requires Signed Assertions—Select if SAML assertions must be signed. Some SAML configurations require signed assertions to improve security.
• Requires Encrypted Assertions—Select this option to indicate to the SAML identity provider that SCC requires encrypted SAML assertion responses. When this option is selected, the identity provider encrypts the assertion section of the SAML responses. Even though all SAML traffic to and from SCC is already encrypted by the use of HTTPS, this option adds another layer of encryption.
Attribute
• User Name Attribute—Enter the name of the IdP variable that carries the username of the user. The user name attribute is mandatory and must be sent by your identity provider in the SAML response to align the login with a configured SteelHead account. The default value is samlNameId.
• Member of Attribute—Enter the name of the IdP variable that carries the role of the user. The default value is memberOf.
Enable SAML
Enables SAML authentication. Select this option and click Apply.
Management ACL
You configure management ACL for the selected security policy in the Management ACL page.
Appliances are subject to the network policies defined by a corporate security policy, particularly in large networks. Using an internal management ACL, you can:
• restrict access to certain interfaces or protocols of an appliance.
• restrict inbound IP access to an appliance, protecting it from access by hosts that don’t have permission without using a separate device (such as a router or firewall).
• specify that hosts or groups of hosts can access and manage an appliance by IP address, simplifying the integration of appliances into your network.
The Management ACL provides these safeguards to prevent accidental disconnection from the SCC:
• detects the IP address you’re connecting from and displays a warning if you add a rule that denies connections to that address.
• always enables the default appliance ports 7800, 7801, 7810, 7820, and 7850.
• always enables a previously connected SCC to connect and tracks any changes to the IP address of the SCC to prevent disconnection.
• converts well-known port and protocol combinations, such as SSH, Telnet, HTTP, HTTPS, SNMP, and SOAP into their default management service and protects these services from disconnection. For example, if you specify protocol 6 (TCP) and port 22, the management ACL converts this port and protocol combination into SSH and protects it from denial.
• tracks changes to default service ports and automatically updates any references to changed ports in the access rules.
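The well-known port/protocol conversion described above can be sketched as a lookup table. The protocol numbers and port assignments below are standard IANA values; the function and dictionary names are illustrative, not the appliance's internals:

```python
# Map (IP protocol number, destination port) to the management service
# the ACL protects. Protocol 6 is TCP, 17 is UDP (standard IANA numbers).
WELL_KNOWN = {
    (6, 22): "SSH",
    (6, 23): "Telnet",
    (6, 80): "HTTP",
    (6, 443): "HTTPS",
    (17, 161): "SNMP",
}

def normalize_rule(protocol, port):
    """Return the named service for a well-known combination,
    or keep the raw protocol/port pair when no conversion applies."""
    return WELL_KNOWN.get((protocol, port), (protocol, port))

print(normalize_rule(6, 22))    # SSH -- protected as the SSH management service
print(normalize_rule(6, 8443))  # (6, 8443) -- no conversion, stays a raw rule
```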
For details about management ACL, see the SteelHead User Guide.
The Management ACL page contains these groups of settings:
Management ACL settings
The management ACL contains rules that define a match condition for an inbound IP packet. You set a rule to allow or deny access to a matching inbound IP packet. When you add a rule on an SCC, the destination specifies the SCC, and the source specifies a remote host.
This configuration option is available:
Enable Management ACL
Secures access to an appliance using a management ACL.
Adding a new rule
The management ACL contains rules that define a match condition for an inbound IP packet. You set a rule to allow or deny access to a matching inbound IP packet. When you add a rule on an appliance, the destination specifies the appliance, and the source specifies a remote host.
The ACL rules list contains default rules that allow you to use the management ACL with the RiOS features PFS and DNS caching. These default rules allow access to certain ports required by these features. The list also includes a default rule that enables access to the SCC.
These configuration options are available:
Add a New Rule
Displays the controls for adding a new rule.
Action
Specifies one of these rule types from the drop-down list:
• Allow—Enables a matching packet access to the SCC. This is the default action.
• Deny—Denies access to any matching packets.
Service
Specifies All, HTTP, HTTPS, SOAP, SNMP, SSH, or Telnet. When specified, the Destination Port is dimmed and unavailable.
Protocol
(Appears only when Service is set to Specify Protocol.) Specifies All, TCP, UDP, ICMP, or a protocol number (1, 6, or 17). The default value is All. When set to All or ICMP, the Service and Destination Port fields are dimmed and unavailable.
Destination Port
Specifies the destination port number.
Source Network
Specifies the source network of the inbound packet.
Interface
Specifies an interface name from the drop-down list. Select All to specify all interfaces.
Description
Describes the rule to facilitate administration.
Rule Number
Specifies a rule number from the drop-down list. By default, the rule goes to the end of the table (just above the default rule).
Appliances evaluate rules in numerical order starting with rule 1. If the conditions set in the rule match, then the rule is applied, and the system moves on to the next packet. If the conditions set in the rule don’t match, the system consults the next rule. For example, if the conditions of rule 1 don’t match, rule 2 is consulted. If rule 2 matches the conditions, it is applied, and no further rules are consulted.
The default rule, Allow, enables all remaining traffic from everywhere that hasn’t been selected by another rule. It can’t be removed and is always listed last.
Log Packets
Tracks denied packets in the log. By default, packet logging is enabled.
Add
Adds the rule to the list.
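The first-match evaluation described under Rule Number can be sketched as follows; the rule and packet shapes are simplified for illustration and don't reflect the appliance's internal representation:

```python
def evaluate(rules, packet):
    """Consult rules in numerical order; the first matching rule decides
    the packet. The implicit final rule allows anything not matched earlier."""
    for rule in rules:  # rules are already sorted by rule number
        if rule["match"](packet):
            return rule["action"]  # apply this rule; consult no further rules
    return "allow"  # default rule: allow all remaining traffic

rules = [
    {"match": lambda p: p["src"].startswith("10.1."), "action": "deny"},
    {"match": lambda p: p["port"] == 443, "action": "allow"},
]
print(evaluate(rules, {"src": "10.1.2.3", "port": 443}))  # deny (rule 1 wins)
print(evaluate(rules, {"src": "192.0.2.9", "port": 22}))  # allow (default rule)
```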
REST API Access
You enable access to the Riverbed REST API in the REST API Access page.
REST (Representational State Transfer) is a framework for API design. REST builds a simple API on top of the HTTP protocol. It is based on generic facilities of the standard HTTP protocol, including the six basic HTTP methods (GET, POST, PUT, DELETE, HEAD, OPTIONS) and the full range of HTTP return codes. You can discover REST APIs by navigating links embedded in the resources provided by the REST API, which follow common encoding and formatting practices.
You can invoke the REST API to enable communication from one Riverbed appliance to another through REST API calls, for example:
• A SteelCentral NetProfiler communicating with a SteelCentral NetShark.
• A SteelCentral NetProfiler retrieving a QoS configuration from an appliance.
For all uses you must preconfigure an access code to authenticate communication between parties and to authorize access to protected resources.
The REST API calls are based on the trusted application flow, a scenario where you download and install an application on some host, such as your own laptop. You trust both the application and the security of the host on which the application is installed.
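Once the preconfigured access code has been exchanged for a token, REST calls carry the token on each request. A minimal sketch of assembling such request headers; the function name, token value, and bearer-header shape are illustrative assumptions rather than the documented SCC API surface:

```python
def build_auth_header(access_token):
    """Attach a bearer token to a REST call. The token value here is a
    placeholder; obtain a real one by exchanging the preconfigured access
    code with the appliance (see the SteelHead User Guide)."""
    return {
        "Authorization": "Bearer " + access_token,
        "Accept": "application/json",
    }

headers = build_auth_header("example-token")  # hypothetical token
print(headers["Authorization"])  # Bearer example-token
```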
For detailed information about REST API, see the SteelHead User Guide.
These configuration options are available:
Enable REST API Access
Enables REST API access. Before an appliance can access the REST API, you must preconfigure an access code for the system to use to authenticate access.
Apply
Applies your settings.
Description of Use
Specifies a description.
Generate New Access Code
Creates a code.
Import Existing Access Code
Uses an existing code.
Add Access Code
Adds the access code.
Maintenance policy settings
This section describes the Image Signing feature.
Image signature verification
Before you install a software upgrade, you can verify the integrity and authenticity of the software image and ensure it is an unmodified and approved Riverbed image. The feature requires a public key in a certificate to verify the digital signature of the software image. The certificate is automatically installed on the SteelHead, but you can import an updated version, if needed.
You can download the public key for Riverbed images from Knowledge Base article S33657.
During the verification process, the appliance compares the signature in the image with the Riverbed public certificate. If they match, the installation continues. If not, the system alerts you to a potential problem. As long as image signature verification is enabled, you can’t continue with an installation whose image signature can’t be verified.
By default, image signature verification is on and should be disabled only when necessary. We recommend keeping the feature enabled at all times.
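The verify-then-install decision can be sketched as follows. This is a deliberately simplified stand-in: a plain SHA-256 digest comparison replaces the real public-key signature check against the Riverbed certificate, and all names are illustrative:

```python
import hashlib
import hmac

def verify_image(image_bytes, expected_digest, enforce=True):
    """Simplified illustration of the verification flow: compute a digest
    of the image and compare it with the expected value. The real feature
    checks a cryptographic signature against Riverbed's public certificate;
    a plain digest stands in for that here."""
    actual = hashlib.sha256(image_bytes).hexdigest()
    if hmac.compare_digest(actual, expected_digest):
        return "install continues"
    if enforce:  # verification enabled: a mismatch blocks the install
        raise ValueError("image signature could not be verified")
    return "warning: unverified image"  # verification disabled: alert only

image = b"example image contents"  # hypothetical image bytes
good = hashlib.sha256(image).hexdigest()
print(verify_image(image, good))  # install continues
```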
This configuration option is available:
Image Signature Verification
Enables the verification of the integrity and authenticity of the software image to ensure it is an unmodified and approved Riverbed image.