MTU Sizing
This section describes how SteelHeads use Path MTU (PMTU) Discovery, as defined in RFC 1191, to negotiate Maximum Transmission Unit (MTU) sizing. This section includes the following topics:
  • MTU Issues
  • Determining MTU Size in Deployments
  • Connection-Forwarding MTU Considerations
    The MTU specifies the largest datagram packet (Layer-3 packet) that a device supports. In SteelHead optimized environments, MTU sizing is typically automatic and not a concern. The default MTU size for a SteelHead is 1500 bytes. This is the standard for many client and networking devices, especially across WAN circuits.
    For pass-through traffic on an in-path SteelHead without RSP, the SteelHead passes packets up to the configured in-path MTU size. The in-path MTU supports jumbo frame configuration. For 1-Gbps interface cards, the supported MTU is 9216 or 16110, and all 10-Gbps cards support 16110. For a full list of interface cards and their MTU support, go to https://supportkb.riverbed.com/support/index?page=content&id=s14344.
    For optimized traffic, the SteelHeads act as a proxy. A separate inner TCP connection is established between SteelHeads, with a potentially different MTU size from the original client-to-server connection.
    When a SteelHead detects that a session can be optimized, it initiates a TCP session to the remote SteelHead with the IP don't fragment (DF) flag set and packet sizes up to the value configured as the interface MTU (default 1500 bytes). In line with RFC 1191, if a router or device along the TCP path of the session (possibly one originating a GRE tunnel) does not support the packet size, and because it is not allowed to fragment the packet, it can request the originator (the SteelHead) to reduce the packet size. It does this with an ICMP type 3, code 4 packet that carries the desired maximum size and the sequence number of the packet that exceeded the router's interface MTU.
    A common reason devices support less than 1500 bytes is the presence of GRE tunnels used to establish VPNs. The 24-byte overhead that GRE incurs effectively gives the tunnel interface an MTU of 1476 bytes.
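    The arithmetic is simply the physical MTU minus the encapsulation overhead. The following minimal Python sketch illustrates the calculation for the GRE case described above; the overhead figure comes from this section, so verify it against your own tunnel configuration.

# Effective MTU left for the inner packet after tunnel encapsulation.
# The 24-byte GRE figure is the value described in this section; verify the
# overhead of your own tunnel configuration before relying on it.
physical_mtu = 1500
gre_overhead = 24  # 20-byte outer IP header + 4-byte GRE header

effective_mtu = physical_mtu - gre_overhead
print(f"GRE tunnel interface MTU: {effective_mtu} bytes")  # prints 1476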
    Similar to the Path MTU Discovery (PMTUD) behavior of most clients and servers, the SteelHead reduces the packet size for a given session after it receives the ICMP message from the device with the lower MTU. According to RFC 1191, Section 6.3, the SteelHead tries to send larger packets again every 10 minutes. For details, go to http://www.faqs.org/rfcs/rfc1191.html.
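    The per-destination logic amounts to remembering the lowest MTU reported for a path, clamping packet sizes to it, and retrying the full interface MTU after the RFC 1191 aging interval. The following Python sketch models that behavior; the class and method names are illustrative only and are not RiOS code.

import time

class PathMtuEstimate:
    # Minimal model of RFC 1191-style PMTU tracking for one destination.
    RETRY_INTERVAL = 600  # seconds; RFC 1191 section 6.3 retries larger packets roughly every 10 minutes

    def __init__(self, interface_mtu=1500):
        self.interface_mtu = interface_mtu
        self.path_mtu = interface_mtu
        self.last_reduced = None

    def on_icmp_frag_needed(self, next_hop_mtu):
        # An ICMP type 3, code 4 message reports the MTU of the restricting link.
        self.path_mtu = min(self.path_mtu, next_hop_mtu)
        self.last_reduced = time.monotonic()

    def next_packet_size(self):
        # After the aging interval, probe with the full interface MTU again.
        if self.last_reduced and time.monotonic() - self.last_reduced > self.RETRY_INTERVAL:
            return self.interface_mtu
        return self.path_mtu

    For example, a GRE hop reporting 1476 bytes causes next_packet_size() to return 1476 until the retry interval elapses, after which the full 1500 bytes is attempted again.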
    In environments that support PMTUD, Riverbed recommends that you leave the SteelHead MTU configuration to its default setting of 1500 bytes.
    In environments that support PMTUD, use the command ip rt-cache rebuild-count 0 on communicating SteelHeads (RiOS v8.0 and later).
    For information about MTU and path selection, see MTU and MSS Adjustment When Using Firewall Path Traversal.
    MTU Issues
    In most cases, two hosts dynamically negotiate the path MTU. Networks that contain firewalls or tunnels (VPN, GRE, IPsec transport mode) between SteelHeads sometimes require manual tuning of the MTU values. Firewalls and tunnels interfere in the following ways:
  • Firewalls can contain rules that explicitly prevent path MTU discovery by blocking or not sending ICMP type 3 packets, causing all attempts to dynamically negotiate the MTU to fail.
  • Tunnels require additional per-packet overhead to encapsulate data, reducing the possible MTU size for the connections they carry.
  • SteelHeads set the DF bit for inner-channel communication to peer SteelHeads. If a device in the network path does not support the SteelHead packet size and also does not send an ICMP type 3 message to notify the SteelHead to reduce the packet size, the packet is dropped without the SteelHead knowing to reduce future packet sizes. This can result in poor optimization performance. The probe sketch after this list shows one way to test for this condition.
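    One way to distinguish a working PMTUD exchange from a black hole is to send both a small and a large don't-fragment ping and compare the outcomes. The following Python sketch wraps the Windows ping command used later in this section; the target address and probe sizes are placeholders to adjust for your environment.

import subprocess

def df_ping(target, payload_bytes):
    # Send one Windows "ping -f -l <size>" probe and classify the result.
    result = subprocess.run(
        ["ping", "-f", "-n", "1", "-l", str(payload_bytes), target],
        capture_output=True, text=True,
    )
    if "Packet needs to be fragmented" in result.stdout:
        return "icmp-frag-needed"   # PMTUD feedback is working; reduce the size
    if "Reply from" in result.stdout:
        return "reply"              # this size fits the path
    return "no-response"            # possible PMTUD black hole at this size

# A small probe that gets a reply while a large probe gets no response at all
# suggests a device is silently dropping oversized DF packets.
for size in (1272, 1472):
    print(size, df_ping("10.0.0.1", size))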
    Determining MTU Size in Deployments
    A simple method of determining the MTU size across a path is to send do-not-fragment ping requests with varying packet sizes from the client PC, or from the client-side SteelHead, to a remote server or SteelHead. The following procedure shows the method from a Windows client.
     
    In the following example, a ping with the don't fragment flag (-f) and a length (-l) of 1500 bytes is sent to the remote server or SteelHead. This results in 100% loss, with Packet needs to be fragmented but DF set in the reply.
    C:\>ping -f -l 1500 10.0.0.1
    Pinging 10.0.0.1 with 1500 bytes of data:
    Packet needs to be fragmented but DF set.
    Packet needs to be fragmented but DF set.
    Packet needs to be fragmented but DF set.
    Packet needs to be fragmented but DF set.
    Ping statistics for 10.0.0.1:
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
    Decrease the size of the packet to 1400 bytes and repeat the test.
    C:\> ping -f -l 1400 10.0.0.1
    Pinging 10.0.0.1 with 1400 bytes of data:
    Reply from 10.0.0.1: bytes=1400 time=222ms TTL=251
    Reply from 10.0.0.1: bytes=1400 time=205ms TTL=251
    Reply from 10.0.0.1: bytes=1400 time=204ms TTL=251
    Reply from 10.0.0.1: bytes=1400 time=218ms TTL=251
    Ping statistics for 10.0.0.1:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    Approximate round trip times in milliseconds:
    Minimum = 204ms, Maximum = 222ms, Average = 212ms
    This 1400-byte ping succeeds. Repeat the ping test, increasing or decreasing the size in increments of 10 or 20 bytes, until you find the largest size that gets replies.
    The packet size of 1400 bytes is shown only as an example; typical values range from 1280 bytes upward.
    When you specify the -l <size> option on a Windows machine, you are specifying the ICMP data payload, not the full IP datagram, which also includes the IP and ICMP headers. To calculate the appropriate MTU, add the IP header (20 bytes) and the ICMP header (8 bytes) to the Windows ping size. In the example, with a 1400-byte payload delivered successfully, the in-path MTU on the SteelHeads should be set to 1428 bytes. Note that when you specify ping sizes on Cisco routers, the specified size already includes the IP and ICMP headers (28 bytes). If you use a Cisco device to test, set the MTU to the specified size; you do not need to add the 28 bytes manually.
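    If you prefer to automate the manual test above, the following Python sketch performs a binary search on the don't-fragment ping payload size and then adds the 28 bytes of IP and ICMP headers to arrive at a suggested in-path MTU. It assumes a Windows ping command, and the target address and size bounds are placeholders for your environment.

import subprocess

IP_ICMP_OVERHEAD = 28  # 20-byte IP header + 8-byte ICMP header

def df_ping_ok(target, payload_bytes):
    # Returns True when a Windows "ping -f -l <size>" probe gets an echo reply.
    result = subprocess.run(
        ["ping", "-f", "-n", "1", "-l", str(payload_bytes), target],
        capture_output=True, text=True,
    )
    return "Reply from" in result.stdout and "fragmented" not in result.stdout

def suggest_in_path_mtu(target, low=1272, high=1472):
    # Binary search for the largest payload that succeeds; assumes the lower
    # bound succeeds and the upper bound does not exceed the local interface MTU.
    while low < high:
        mid = (low + high + 1) // 2
        if df_ping_ok(target, mid):
            low = mid
        else:
            high = mid - 1
    return low + IP_ICMP_OVERHEAD

print("Suggested in-path MTU:", suggest_in_path_mtu("10.0.0.1"))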
    To configure the in-path MTU value
    1. Go to Networking > Networking: In-Path Interfaces, and expand the interface that you want to edit.
    2. Change the MTU value and apply the setting.
    In RiOS v8.0 and later, the SteelHead does not pass through packets larger than the MTU value of its interfaces, nor does it send ICMP notifications of the dropped packets to the sending host. In environments in which the in-path MTU is lowered to account for a smaller MTU in the WAN network, Riverbed recommends that you use the command interface mtu-override enable.
    Connection-Forwarding MTU Considerations
    In networks in which SteelHeads are connection-forwarding neighbors, it is critical that you configure the LAN or WAN links that are expected to carry forwarded traffic so that they can support the configured in-path MTU. Connection-forwarded traffic does not support PMTUD.
    When forwarded packets are too large, ICMP type 3, code 4 messages generated on intermediate routers are sent back to the sending client or server. The ICMP message does not match any TCP connection on the client or server, which causes poor optimization or failed connections. To prevent this, make sure that the interface MTUs on links carrying forwarded traffic are configured to the same size as the SteelHead in-path interface MTU.
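    As a quick sanity check, you can compare the MTUs configured on the links that carry forwarded traffic against the SteelHead in-path MTU. The interface names and values in the following Python sketch are placeholders, not output collected from any real device.

# Hypothetical inventory of configured MTUs on links that carry forwarded traffic.
configured_mtus = {
    "steelhead-a inpath0_0": 1500,
    "steelhead-b inpath0_0": 1500,
    "wan-router GigabitEthernet0/1": 1400,
}

in_path_mtu = 1500
mismatched = {name: mtu for name, mtu in configured_mtus.items() if mtu < in_path_mtu}
if mismatched:
    print("Links too small for forwarded packets at the in-path MTU:", mismatched)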