Troubleshooting SteelHead Deployment Problems
This chapter describes common deployment problems and solutions.
Duplex mismatches
This section describes common problems that can occur in networks in which duplex settings do not match. A duplex mismatch occurs when the speed and duplex settings of a network interface connected to the SteelHead do not match those of the SteelHead interface.
Duplex mismatch is the most common cause of poor performance in SteelHead installations. A duplex mismatch can cause performance degradation and packet loss.
Signs of duplex mismatch:
• You cannot connect to an attached device.
• You can connect with a device when you choose auto-negotiation, but you cannot connect with the same device when you manually set the speed or duplex.
• Minimal or no performance gains.
• Loss of network connectivity.
• Intermittent application or file errors.
• All of your applications are slower after you have installed in-path SteelHeads.
Perform this task to determine whether the slowness is caused by a duplex mismatch:
1. Create a pass-through rule for the application on the client-side SteelHead and ensure that the rule is at the top of the in-path rules list. You can add a pass-through rule with the in-path rule pass-through command (see the example after this procedure) or in the Management Console.
2. Restart the application.
3. Check that all connections related to the application are being passed through. If all connections related to the application are being passed through and the performance of the application does not return to the original levels, the slowness is most likely due to duplex mismatch.
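For example, the following rule, added from the CLI on the client-side SteelHead, passes through all traffic destined for a hypothetical application server subnet on port 445 (the subnet, port, and rule number are illustrative; verify the exact syntax in the Riverbed Command-Line Interface Reference Manual):
in-path rule pass-through dstaddr 10.10.10.0/24 dstport 445 rulenum 1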
The following sections describe several possible solutions to duplex mismatch.
Solution: Manually set matching speed and duplex
One solution for mismatched speed and duplex settings is to manually configure the settings.
1. Manually set (that is, hard set) matching speed and the duplex settings for the following four ports:
– Devices (switches) connected on the SteelHead LAN port
– Devices (routers) connected on the SteelHead WAN port
– The SteelHead LAN port
– The SteelHead WAN port
We recommend the following speeds:
– Fast Ethernet Interfaces: 100 megabits full duplex
– Gigabit Interfaces: 1000 megabits full duplex
For details, go to Knowledge Base article S14623.
We recommend that you avoid half-duplex mode whenever possible. If a modern interface appears not to support full duplex, double-check the duplex settings on both sides; it is likely that one side is set to auto-negotiate and the other is set to a fixed speed and duplex. To manually change interface speed and duplex settings, use the interface command.
2. Verify that each of these devices:
– has settings that match in acceleration mode.
– has settings that match in bypass mode.
– is not showing any errors or collisions.
– does not have a half-duplex configuration (forced or negotiated) on either the WAN or the LAN.
– has at least 100 Mbps speed, forced or negotiated, on the LAN.
– has network connectivity in acceleration and in failure mode.
To see interface speed and duplex settings, use the show configuration command. By default, the SteelHead automatically negotiates speed and duplex mode for all data rates and supports full duplex mode and flow control. To change interface speed and duplex settings, use the interface command, as shown in the following example.
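For example, the following commands hard set a SteelHead's in-path LAN and WAN ports to 100 megabits full duplex (the interface names are illustrative for a single in-path pair; verify the exact syntax in the Riverbed Command-Line Interface Reference Manual):
interface lan0_0 speed 100
interface lan0_0 duplex full
interface wan0_0 speed 100
interface wan0_0 duplex full
show configuration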
3. Test connectivity with the SteelHead powered off to ensure that the SteelHead does not sever the network in the event of a hardware or software problem. Perform this step last, especially after making any duplex changes on the connected devices.
4. If the SteelHead is powered off and you cannot pass traffic through it, verify that you are using the correct cables for all devices connected to the SteelHead. The type of cable is determined by the device connecting to the SteelHead:
– Router to SteelHead: use a crossover cable.
– Switch to SteelHead: use a straight-through cable.
– Do not rely on Auto MDI/MDI-X to determine which cables you are using.
5. Use a cable tester to verify that the SteelHead in-path interface is functioning properly: turn off the SteelHead, and connect the cable tester to the LAN and WAN port. The test result must show a crossover connection.
6. Use a cable tester to verify that all of the cables connected to the SteelHead are functioning properly.
Solution: Use an intermediary switch
If you have tried to manually set matching speed and duplex settings, and duplex mismatch still causes slow performance and lost packets after you deploy in-path SteelHeads, introduce an intermediary switch that is compatible with both existing network interfaces. We recommend this option only as a last resort.
To use an intermediary switch, you must also change your network cables appropriately.
Network asymmetry
If some of the connections in a network are accelerated and some are passed through unaccelerated, it might be due to network asymmetry. Network asymmetry causes a client request to traverse a different network path than the server response. Network asymmetry can also break connections.
If SYN packets that traverse from one side of the network are accelerated, but SYN packets that traverse from the opposite side of the network are passed through unaccelerated, it is a symptom of network asymmetry.
Figure: Server-side asymmetric network shows an asymmetric server-side network in which a server response can traverse a path (the bottom path) in which a SteelHead is not installed.
Server-side asymmetric network

The following sections describe several possible solutions to network asymmetry.
You can configure your SteelHeads to automatically detect and report asymmetric routes within your network. Whether asymmetric routing is automatically detected by SteelHeads or is detected in some other way, use the solutions described in the following sections to work around it.
Solution: Use connection forwarding
For a network connection to be accelerated, packets traveling in both network directions (from server to client and from client to server) must pass through the same client-side and server-side SteelHead. In networks in which asymmetric routing occurs because client requests or server responses can traverse different paths, you can solve it by:
• ensuring that there is a SteelHead installed on every possible path a packet can traverse. In the example in Figure: Server-side asymmetric network, you would install a second server-side SteelHead to cover the bottom path.
• setting up connection forwarding so that packets that traverse one SteelHead in one direction traverse the same SteelHead in the opposite direction. Connection forwarding can be configured on the client side or the server side of a network.
To set up connection forwarding, use the Management Console or CLI as described in the SteelHead User Guide and the Riverbed Command-Line Interface Reference Manual.
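For example, the following commands, entered on each neighbor SteelHead, enable connection forwarding and name the other appliance as a neighbor (the neighbor name and in-path IP address are illustrative; confirm the exact syntax in the Riverbed Command-Line Interface Reference Manual):
steelhead communication enable
steelhead name sh-server-2 main-ip 10.0.2.6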
Solution: Use virtual in-path deployment
Because a connection cannot be accelerated unless packets traveling in both network directions pass through the same client-side SteelHead and the same server-side SteelHead, you can use a virtual in-path deployment to solve network asymmetry.
In the example network shown in Figure: Server-side asymmetric network, changing the server-side SteelHead that is deployed in-path on the top server-side path to a virtual in-path deployment ensures that all server-side traffic passes through the server-side SteelHead.
Virtual in-path deployment to solve network asymmetry

A virtual in-path deployment differs from a physical in-path deployment in that a packet redirection mechanism directs packets to SteelHeads that are not in the physical path of the client or server. Redirection mechanisms include a Layer-4 switch (or server load balancer), WCCP, and PBR.
Solution: Deploy a four-port SteelHead
If you have a SteelHead that supports a Four-Port Copper Gigabit-Ethernet PCI-X card, you can deploy it to solve network asymmetry in cases in which a two-port SteelHead or the solutions described in the previous sections are not successful.
For example, instead of the two-port SteelHead deployed on one server-side path as shown in Figure: Server-side asymmetric network, you deploy a four-port SteelHead on the server side of the network. All server-side traffic passes through the four-port SteelHead, and asymmetric routing is eliminated.
Rogue SteelHead appears on current connections list
Enhanced autodiscovery greatly reduces the complexities and time it takes to deploy SteelHeads. It works so seamlessly that occasionally it has the undesirable effect of peering with SteelHeads on the internet that are not in your organization's management domain or your corporate business unit. When an unknown (or unwanted) SteelHead appears connected to your network, you can create a peering rule to prevent it from peering and remove it from your list of peers. The peering rule defines what to do when a SteelHead receives an autodiscovery probe from the unknown SteelHead.
Perform this task to prevent an unknown SteelHead from peering:
1. Choose Configure > Optimization > Peering Rules.
2. Click Add a New Peering Rule.
3. Select Passthrough as the rule type.
4. Specify the source and destination subnets. The source subnet is the remote location network subnet (in the format XXX.XXX.XXX.XXX/XX). The destination subnet is your local network subnet (in the format XXX.XXX.XXX.XXX/XX).
5. Click Add.
In this example, the peering rule passes through traffic from the unknown SteelHead in the remote location.
When you use this method and add a new remote location in the future, you must create a new peering rule that accepts traffic from the remote location. Place this new Accept rule before the Pass-through rule.
If you do not know the network subnet for the remote location, there is another option: you can create a peering rule that allows peering from your corporate network subnet and denies it otherwise. For example, create a peering rule that accepts peering from your corporate network subnet and place it as the first rule in the list.
Next, create a second peering rule to pass through all other traffic. In this example, when the local SteelHead receives an autodiscovery probe, it checks the peering rules first (from top to bottom). If it matches the first Accept rule, the local SteelHead peers with the other SteelHead. If it does not match the first Accept rule, the local SteelHead checks the next peering rule, which is the pass-through rule for all other traffic. In this case, the local SteelHead just passes through the traffic and does not peer with the other SteelHead.
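From the CLI, an equivalent pair of rules might look like the following sketch, which accepts peering from a hypothetical 10.0.0.0/8 corporate subnet and passes through everything else (the subnet and rule numbers are illustrative; verify the exact syntax in the Riverbed Command-Line Interface Reference Manual):
in-path peering rule accept src 10.0.0.0/8 rulenum 1
in-path peering rule pass rulenum 2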
After you add the peering rule, the unknown SteelHead appliance appears in the Current Connections report as a Connected Appliance until the connection times out. After the connection becomes inactive, it appears dimmed. To remove the unknown appliance completely, restart the acceleration service.
Outdated antivirus software
After installing SteelHeads, if application access over the network does not speed up or certain operations on files (such as dragging and dropping) speed up greatly but application access does not, it might be due to old antivirus software installed on a network client.
Solution: Upgrade antivirus software
If it is safe to do so, temporarily disable the antivirus software and try opening files. If performance improves with antivirus software disabled, we recommend that you upgrade the antivirus software.
If performance does not improve with antivirus software disabled or after upgrading antivirus software, contact Support.
Packet ricochets
Signs of packet ricochet are:
• Network connections fail on their first attempt but succeed on subsequent attempts.
• The SteelHead on one or both sides of the network has an in-path interface on a different subnet from the local host.
• You have not defined any in-path routes in your network.
• Connections between the SteelHead and the clients or servers are routed through the WAN interface to the WAN gateway, and then back through the SteelHead to the next-hop LAN gateway.
• The WAN router drops SYN packets from the SteelHead before it issues an ICMP redirect.
Solution: Add in-path routes
To prevent packet ricochet, add in-path routes to local destinations.
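For example, the following command adds an in-path route to a local destination subnet through a LAN-side next-hop gateway (the interface, subnet, and gateway address are illustrative):
ip in-path route inpath0_0 192.168.10.0 255.255.255.0 10.1.1.2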
Solution: Use simplified routing
You can also use simplified routing to prevent packet ricochet. To configure simplified routing, use the in-path simplified routing command or the Management Console.
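For example, the following command enables simplified routing based on learned destination MAC addresses (dest-only is one of several options; see the command reference for the full list):
in-path simplified routing dest-only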
Router CPU spikes after WCCP configuration
If the CPU usage of the router spikes after WCCP configuration, it might be because you are not using a WCCP‑compatible Cisco IOS release or because you must use inbound redirection.
The following sections describe several possible solutions to router CPU spike after WCCP configuration.
Solution: Use mask assignment instead of hash assignment
The major difference between the hash and mask assignment methods lies in the way traffic is processed within the router or switch. With mask assignment, traffic is processed entirely in hardware, so the load on the switch CPU is minimal. Hash assignment uses the switch CPU for part of the load-distribution calculation and hence places a significant load on it. The mask assignment method was specifically designed for hardware-based switches and routers (such as the Cisco 3560, 3750, 4500, 6500, and 7600).
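On the SteelHead, the assignment scheme is set as part of the WCCP service-group configuration. The following line is a sketch only (the service group, router address, and exact syntax must be verified against the Riverbed Command-Line Interface Reference Manual):
wccp service-group 61 routers 10.0.0.1 assign-scheme mask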
Solution: Check internetwork operating system compatibility
Because WCCP is not fully integrated in every IOS release and on every platform, ensure that you are running a WCCP-compatible IOS release. If you have questions about the WCCP compatibility of your IOS release, contact Support.
If you are certain that you are running a WCCP-compatible IOS release and you experience router CPU spike after WCCP configuration, review the remaining sections for possible solutions.
Solution: Use inbound redirection
One possible solution to router CPU spike after WCCP configuration is to use inbound redirection instead of outbound redirection. Inbound redirection ensures that the router does not waste CPU cycles consulting the routing table before handling the traffic for WCCP redirection.
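For example, on a Cisco router or switch, inbound redirection applies the WCCP service group to an interface with the redirect in keyword (the service group number and interface name are illustrative):
ip wccp 61
interface GigabitEthernet0/1
 ip wccp 61 redirect in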
Solution: Use inbound redirection with fixed-target rules
If inbound redirection does not solve router CPU spike after WCCP is configured, try using inbound redirection with a fixed-target rule between SteelHeads. The fixed-target rule can eliminate one redirection interface.
Fixed-target rules directly specify server-side SteelHeads near the target server that you want to accelerate. You determine which servers you would like the SteelHead to accelerate (and, optionally, which ports), and you add fixed-target rules to specify the network of servers, ports, and out-of-path SteelHeads to use.
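For example, the following rule on the client-side SteelHead targets the in-path IP address of a server-side SteelHead (port 7800 is the default SteelHead inner port; the addresses, destination port, and rule number are illustrative):
in-path rule fixed-target target-addr 10.0.2.5 target-port 7800 dstaddr 10.0.2.0/24 dstport 445 rulenum 2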
Solution: Use inbound redirection with fixed-target rules and redirect list
If the solutions described in the previous sections do not solve router CPU spike after WCCP is configured, try using inbound redirection with a fixed-target rule and a redirect list. A redirect list can reduce the load on the router by limiting the amount of unnecessary traffic that is redirected by the router.
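For example, on a Cisco router, an extended ACL attached to the service group limits redirection to the traffic you actually want to accelerate (the ACL name, subnet, and port are illustrative):
ip access-list extended STEELHEAD-REDIRECT
 permit tcp 10.0.1.0 0.0.0.255 any eq 445
 deny ip any any
ip wccp 61 redirect-list STEELHEAD-REDIRECT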
Solution: Base redirection on ports rather than ACLs
If the solutions described in the previous sections do not solve router CPU spike after WCCP configuration, consider basing traffic redirection on specific port numbers rather than using ACLs.
Solution: Use PBR
If the solutions described in the previous sections do not solve router CPU spike after WCCP configuration, consider using PBR instead of WCCP.
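As a sketch, a minimal PBR configuration on a Cisco router matches the traffic of interest and sets the next hop to the SteelHead in-path IP address (the ACL number, addresses, and interface name are illustrative):
access-list 101 permit tcp any any
route-map STEELHEAD permit 10
 match ip address 101
 set ip next-hop 10.0.3.5
interface GigabitEthernet0/1
 ip policy route-map STEELHEAD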
Server Message Block signed sessions
This section provides a brief overview of problems that can occur with Windows Server Message Block (SMB) signing.
If network connections appear to be accelerated but there is no performance difference between a cold and warm transfer, it might be due to SMB-signed sessions.
SMB-signed sessions support compression and RiOS SDR, but they render latency acceleration (for example, read-ahead and write-behind) unavailable.
Signs of SMB signing:
• Access to some Windows file servers across a WAN is slower than access to other Windows file servers across the WAN.
• Connections are shown as accelerated in the Management Console.
• The results of a TCP dump show low WAN utilization for files whose contents do not match existing segments in the data store.
• Copying files via FTP from the slow server is much faster than copying the same files via mapped network drives (CIFS).
When FTP copies from a server are much faster than copies of the same files from the same server over a mapped network drive, other network problems with the server (such as duplex mismatch or network congestion) are ruled out.
• Log messages in the Management Console such as:
error=SMB_SHUTDOWN_ERR_SEC_SIG_ENABLED
The following sections describe possible solutions to SMB-signed sessions.
Unavailable opportunistic locks
If a file is not accelerated for more than one user at a time, it might be because an application lock on it prevents other applications and the SteelHead from obtaining exclusive access to it. Without an exclusive lock, the SteelHead cannot perform latency (for example, read-ahead and write-behind) acceleration on the file.
Without opportunistic locks (oplocks), RiOS SDR and compression are performed on file contents, but the SteelHead cannot perform latency acceleration because data integrity cannot be ensured without exclusive access to file data.
The following behaviors are signs of unavailable oplocks:
• Within a WAN:
– A client, PC1, in a remote office across the WAN can open a file it previously opened in just a few seconds.
– Another client, PC2, on the WAN has also previously opened the file but cannot open it quickly because PC1 has it open. While PC1 has the file open, it takes PC2 significantly longer to open the file.
– When PC1 closes the file, PC2 can once again open it quickly. The reverse is also true: while PC2 has the file open, it takes PC1 significantly longer to open it.
– If no client has the file open, and PC1, PC2, and a third client on the WAN (PC3) simultaneously copy but do not open the file, each client can copy the file quickly and in nearly the same length of time.
• The results of a tcpdump show that WAN utilization is low for files that take a long time to open.
• In the Management Console, slow connections appear accelerated.
You can check connection bandwidth reduction in the Bandwidth Reduction report in the Management Console.
Solution: None needed
To prevent any compromise to data integrity, the SteelHead only accelerates access to data when exclusive access is available. When unavailable oplocks prevent the SteelHead from performing latency acceleration, the SteelHead still performs RiOS SDR and compression on the data. Therefore, even without the benefits of latency acceleration, SteelHeads might still increase WAN performance, but not as effectively as when latency acceleration is available.
Underutilized fat pipes
A fat pipe is a network that can carry large amounts of data without significantly degrading transmission speed. If you have a fat pipe that is not being fully utilized and you are experiencing WAN congestion, latency, and packet loss as a result of the limitations of regular TCP, consider the solutions outlined in this section.
Solution: Enable high-speed TCP
To better utilize fat pipes such as GigE WANs, consider enabling high-speed TCP (HS-TCP). HS-TCP is a feature that you can enable on SteelHeads to ease WAN congestion caused by limitations of regular TCP that result in packet loss. Enabling HS-TCP allows more complete utilization of long fat pipes (high-bandwidth, high-delay networks).
We recommend that you enable HS-TCP only after you have carefully evaluated whether it will benefit your network environment.
To display HS-TCP settings, use the show tcp highspeed command. To configure HS-TCP, use the tcp highspeed enable command. Alternatively, you can configure HS-TCP in the Management Console.
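For example, from the CLI:
tcp highspeed enable
show tcp highspeed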
MTU sizing
This section describes how SteelHeads use Path MTU Discovery, as specified in RFC 1191, to negotiate Maximum Transmission Unit (MTU) sizing.
The MTU specifies the largest datagram packet (Layer-3 packet) that a device supports. In SteelHead accelerated environments, MTU sizing is typically automatic and not a concern. The default MTU size for a SteelHead is 1500 bytes, which is the standard for many client and networking devices, especially across WAN circuits.
For pass-through traffic on an in-path SteelHead without RSP, the SteelHead passes packets up to the configured in-path MTU size. The in-path MTU supports jumbo frame configuration: for 1-Gbps interface cards, the supported MTU is 9216 or 16110, and all 10-Gbps cards support 16110.
For accelerated traffic, the SteelHeads act as a proxy. A separate inner TCP connection is established between SteelHeads, with a potentially different MTU size from the original client-to-server connection.
When a SteelHead detects that a session can be accelerated, it initiates a TCP session to the remote SteelHead with the IP don't fragment (DF) flag set, using packet sizes up to the value configured as the interface MTU (default 1500 bytes). In line with RFC 1191, if a router or device along the TCP path of the session (possibly originating a GRE tunnel) does not support the packet size, and because it is not allowed to fragment the packet, it can request the originator (the SteelHead) to reduce the packet size. It does this with an ICMP type 3, code 4 (fragmentation needed and DF set) packet that carries the desired maximum size and the sequence number of the packet that exceeded the router's interface MTU.
A common reason devices support less than 1500 bytes is the presence of GRE tunnels used to establish VPNs. The 24-byte overhead that GRE incurs effectively gives the tunnel interface an MTU of 1476 bytes.
Similar to the Path MTU Discovery (PMTUD) behavior of most clients and servers, the SteelHead reduces the packet size for a given session after it receives the ICMP message from the device with the lower MTU. According to RFC 1191, Section 6.3, the SteelHead tries to send larger packets again every 10 minutes.
In environments that support PMTUD, we recommend that you leave the SteelHead MTU configuration to its default setting of 1500 bytes.
In environments that support PMTUD, use the ip rt-cache rebuilt-count 0 command on communicating SteelHeads.
MTU issues
In most cases, two hosts dynamically negotiate the path MTU. Networks that contain firewalls or tunnels (VPN, GRE, IPsec transport mode) between SteelHeads sometimes require manual tuning of the MTU values. Firewalls and tunnels interfere in the following ways:
• Firewalls can contain rules that explicitly prevent path MTU discovery by blocking or not sending ICMP type 3 packets, causing all attempts at dynamically negotiating the MTU to fail.
• Tunnels require additional per-packet overhead to encapsulate data, reducing the possible MTU size for the connections they carry.
SteelHeads set the DF bit for inner-channel communication to peer SteelHeads. If a device in the network path does not support the SteelHead packet size and also does not send an ICMP type 3 message to notify the SteelHead to reduce the packet size, the packet is dropped without the SteelHead knowing to reduce future packet sizes. This can result in poor acceleration performance.
Determining MTU size in deployments
A simple method of determining the MTU size across a path is to send don't-fragment ping requests with varying packet sizes from the client PC, or from the client-side SteelHead, to a remote server or SteelHead. The following procedure shows the method from a Windows client.
In the following example, a ping with the don't fragment (-f) flag and a length (-l) of 1500 bytes is sent to the remote server or SteelHead. This results in 100% loss, with Packet needs to be fragmented but DF set in the reply.
C:\>ping -f -l 1500 10.0.0.1
Pinging 10.0.0.1 with 1500 bytes of data:
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Ping statistics for 10.0.0.1:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss)
Decrease the size of the packet to 1400 and repeat:
C:\> ping -f -l 1400 10.0.0.1
Pinging 10.0.0.1 with 1400 bytes of data:
Reply from 10.0.0.1: bytes=1400 time=222 ms TTL=251
Reply from 10.0.0.1: bytes=1400 time=205 ms TTL=251
Reply from 10.0.0.1: bytes=1400 time=204 ms TTL=251
Reply from 10.0.0.1: bytes=1400 time=218 ms TTL=251
Ping statistics for 10.0.0.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss)
Approximate round-trip times in milliseconds:
Minimum = 204 ms, Maximum = 222 ms, Average = 212 ms
This packet size succeeds. Repeat the ping test, increasing or decreasing the size in increments of 10 or 20 bytes, until you find the largest size that succeeds. A packet size of 1400 is shown only as an example; typical values range from 1280 upward.
When you specify -l <size> on a Windows machine, you are actually specifying the data payload, not the full IP datagram, which also includes the IP and ICMP headers. To calculate the appropriate MTU, add the IP header (20 bytes) and ICMP header (8 bytes) to the Windows ping size. For example, for a 1400-byte payload, the SteelHead in-path MTU should be set to 1428 bytes. When you specify ping sizes on Cisco routers, however, the specified size already includes the IP and ICMP headers (28 bytes). If you use a Cisco device to test, set the MTU to the specified size; adding the 28 bytes manually is not necessary.
How to configure the in-path MTU value
1. Choose Networking > Networking: In-Path Interfaces, and expand the interface you want to edit.
2. Change the MTU value and apply the setting.
The SteelHead does not pass through packets larger than the MTU value of its interfaces, nor does it send ICMP notifications of the dropped packets to the sending host. In environments in which the in-path MTU is lowered to account for a smaller MTU in the WAN network, we recommend that you use the interface mtu-override enable command, as shown in the following example.
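For example, the following commands set the in-path MTU for a 1428-byte path and enable the override (the interface name and value are illustrative; verify the syntax in the Riverbed Command-Line Interface Reference Manual):
interface inpath0_0 mtu 1428
interface inpath0_0 mtu-override enable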
Connection-forwarding MTU considerations
In networks in which SteelHeads are connection-forwarding neighbors, it is critical that you configure the LAN or WAN links that are expected to carry forwarded traffic so they can support the configured in-path MTU. Connection-forwarded traffic does not support PMTUD.
When forwarded packets are too large, ICMP type 3, code 4 messages are generated on intermediate routers and sent back to the sending client or server. These ICMP messages do not match any TCP connection on the client or server, which causes poor acceleration or failed connections. To prevent this, make sure that you configure the interface MTUs on links carrying forwarded traffic to the same size as the SteelHead in-path interface MTU.