Path Selection Implementation
This section includes the following topics:
  • Path Selection Work Flow
  • Example of a Path Selection Implementation
  • Identifying Traffic Flow Candidates
    Path Selection Work Flow
    Path selection configuration depends heavily on the network, site, and uplink configurations defined in the topology. You must complete the topology configuration according to your physical network design.
    Path selection configuration in RiOS v9.0 and later differs considerably from previous RiOS versions. As such, configuration migration from previous versions is neither compatible nor supported.
    To avoid repetitive configuration steps on individual SteelHeads, Riverbed strongly recommends that you use the SCC v9.0 or later to configure path selection on your SteelHeads. The SCC enables you to configure path selection once and push it out to multiple SteelHeads, instead of connecting to each SteelHead, performing the configuration, and repeating the same configuration on every SteelHead in the network.
    To configure path selection, complete the following tasks:
    Configure the different WAN networks in your environment on the Networking > Topology > Sites & Networks page. Although this step is not strictly required, Riverbed recommends that you complete it to simplify the configuration.
    The network topology configuration is a logical construct that binds different SteelHeads over a common connection. Path selection is typically deployed with WAN designs composed of two or more circuits; each circuit provides a distinct path for path selection to use. In the example shown in Figure 8‑1, there are three distinct circuits: MPLS, VPN, and Internet.
    Figure 8‑1. Example Network Topology
    Figure 8‑2 shows the SteelHead configuration for the topology in Figure 8‑1, in which each WAN path is labeled.
    Figure 8‑2. WAN Networks
    Configure the sites on the Networking > Topology > Sites & Networks page (Figure 8‑3).
    Figure 8‑3. Site Configuration
    The sites configuration is integral to the path selection feature. You must create every remote destination site that you want to build a path to. Path selection takes into account the remote destination IP subnets that you configure as a parameter of the site. The IP subnet property is how a SteelHead directs traffic toward a specific destination site: the SteelHead matches the destination IP address in the packet header against the configured subnets.
    A site configuration contains the SteelHead peers property. SteelHead peers are the distinct IP addresses that the SteelHead polls to verify path availability. Riverbed highly recommends that you use the remote SteelHead in-path IP address as a peer address when possible; for example, a remote peer SteelHead in-path IP address is required for the firewall traversal GRE feature. You can enter additional addresses that are probed for path availability status. Each entered IP address is treated as a separate, independent path.
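    The following minimal sketch illustrates the destination-site lookup described above: the destination IP address in the packet header is matched against the configured site subnets, preferring the most specific match. The site names, subnets, and peer addresses are illustrative placeholders, not values from a RiOS configuration.

```python
# Minimal sketch of destination-site identification, assuming hypothetical
# site names, subnets, and peer addresses.
import ipaddress

SITES = {
    "RemoteBranch1": {"subnets": ["10.1.0.0/16"], "peers": ["10.1.1.5"]},
    "RemoteBranch2": {"subnets": ["10.2.0.0/16"], "peers": ["10.2.1.5"]},
    "default-site":  {"subnets": ["0.0.0.0/0"],   "peers": ["192.0.2.1"]},
}

def destination_site(dst_ip: str) -> str:
    """Return the site whose configured subnet contains dst_ip, longest prefix first."""
    addr = ipaddress.ip_address(dst_ip)
    best_site, best_len = "default-site", -1
    for name, cfg in SITES.items():
        for subnet in cfg["subnets"]:
            net = ipaddress.ip_network(subnet)
            if addr in net and net.prefixlen > best_len:
                best_site, best_len = name, net.prefixlen
    return best_site

print(destination_site("10.2.40.7"))    # RemoteBranch2
print(destination_site("203.0.113.9"))  # default-site (for example, Internet-bound)
```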
    As part of the site configuration, a default site is preconfigured. You need the default site because it catches traffic that does not belong to any configured site; for example, Internet-bound traffic. Depending on the traffic flow pattern, you must enter a value in the SteelHead Peers field. Riverbed generally recommends that you edit the existing default site (Figure 8‑4) to use:
  • the data center SteelHead IP address for Internet-bound traffic that is backhauled through the data center.
  • the local router gateway when the Internet-bound traffic is exiting directly out of the branch.
    Figure 8‑4. Existing Default Site
    Edit the existing local site to configure it for your design.
    A local site is always created by default and cannot be deleted. However, you can rename the local site to reflect your network design.
    The uplinks configuration is integral to configuring the local site. Uplinks dictate the egress path out of the SteelHead and hence are critical to the path selection configuration. You can rename uplinks to a more meaningful description, and you can tie them to a network that you have already defined.
    The uplink name appears on the path selection page as a selection for directing traffic. Default uplinks are created automatically and are tied to each in-path interface available to the SteelHead. Local-site uplinks require that you configure a gateway IP address. By default, the gateway IP address is identical to the in-path gateway IP address already configured, but you can change it. Riverbed recommends that you point the uplink gateway at the WAN-facing IP address if the in-path gateway is configured toward the LAN.
    The gateway IP address is a WAN-side IP address of the next hop device you want to direct the traffic to. You do not need to configure remote site uplinks for path selection. The GRE tunneling option is enabled for certain designs that require tunneling.
    For more information about GRE tunneling, see Firewall Path Traversal Deployment.
    Previous RiOS versions required you to configure a path and a probe IP address to poll for path availability. Starting with RiOS v9.0, this is no longer required. The SteelHead automatically probes through each uplink that you configure at the local site. This probe is the mechanism by which the SteelHead automatically determines which paths are available. The SteelHead probes from each uplink toward the peer addresses that you configure in the SteelHead Peers field of each remote site.
    Figure 8‑5. Example Uplink Configuration
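    As a rough illustration of the uplink properties discussed above, the following sketch models local-site uplinks tied to in-path interfaces with WAN-side gateways. The uplink names, networks, and addresses are hypothetical examples, not the RiOS data model.

```python
# Minimal sketch of local-site uplinks, assuming hypothetical names, networks,
# and addresses. Each uplink is bound to an in-path interface and points at a
# WAN-side next-hop gateway, mirroring the defaults described above.
from dataclasses import dataclass

@dataclass
class Uplink:
    name: str                 # label recalled in the path selection rules
    network: str              # WAN network defined under Sites & Networks
    interface: str            # in-path interface the uplink is tied to
    gateway_ip: str           # WAN-side next-hop gateway for probes and steered traffic
    gre_tunnel: bool = False  # enabled only for firewall-traversal designs

LOCAL_UPLINKS = [
    Uplink("MPLS_uplink", "MPLS", "inpath0_0", "192.0.2.1"),
    Uplink("VPN_uplink", "VPN", "inpath0_1", "198.51.100.1"),
]
```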
    Configure path selection as described in Configuring Path Selection.
    Path selection is a global function that influences all traffic traversing the SteelHead. You cannot configure path selection to intercept traffic only on certain LAN interfaces. This is unlike QoS traffic enforcement, in which you can select the desired interfaces on which to enforce traffic shaping. In RiOS v9.0 and later, path selection introduces the concept of site identification. For example, you can identify a certain application destined for a certain site and then take an action on the exit paths it uses.
    For example, Figure 8‑6 shows an application named RDP. Depending on the original destination, the traffic can follow two different uplinks. RDP traffic destined for RemoteBranch1 is steered onto the VPN path, whereas traffic destined for RemoteBranch2 is steered onto the PTP uplink. If the PTP path is not available, the traffic is configured to be dropped.
    Figure 8‑6. Path Selection Rules
    RiOS v9.0 and later includes the following destination concepts:
  • Default Site - The Default Site contains the IP subnet property of the default site configured in the Topology section. For destination identification, this matches the 0.0.0.0/0 setting. You select the default-site as the destination for connections typically oriented towards unknown areas, such as Internet-bound destinations.
  • Any - The Any setting combines the identifications of all known configured sites, including the Default Site. Rather than configuring a separate, identical path selection rule for every known site, choose the Any setting to match the destination address of every configured site. This setting ensures that the configured application, when it matches any configured site or the default site, is steered onto the selected uplink. The Any destination concept is important to understand: it serves as a means to reduce the configuration steps required while still providing a common application steering design.
    These settings are available when you add a rule on the Networking > Network Services: Path Selection page.
    Note the order of the path selection rules in relation to the applications they refer to, and how the site definitions come into play. Because RiOS v9.0 introduces application groups, a configuration can consist of a path selection rule for a specific single application followed by a rule for an application group that includes that same application.
    When applications overlap in this way, rule order is the deciding factor in which rule the path selection logic enacts. With respect to sites, RiOS identifies and selects the rule with the longest site prefix match first; therefore, a rule specific to a site takes effect before the Any rule.
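    As an illustration of this ordering logic, the following sketch selects, for a given application and destination, the rule whose site yields the longest prefix match, with earlier rules winning ties. The applications, sites, subnets, and uplink names are hypothetical placeholders that reuse the RDP example above; this is not a RiOS configuration.

```python
# Minimal sketch of rule selection: the longest site prefix match wins, and
# earlier rules win ties. All names and subnets are illustrative placeholders.
import ipaddress

SITE_SUBNETS = {
    "RemoteBranch1": ["10.1.0.0/16"],
    "RemoteBranch2": ["10.2.0.0/16"],
    "default-site":  ["0.0.0.0/0"],
}
# The Any setting matches the subnets of every configured site, including the default site.
SITE_SUBNETS["Any"] = [s for subnets in list(SITE_SUBNETS.values()) for s in subnets]

RULES = [  # evaluated in order; an earlier rule wins when prefix lengths tie
    {"app": "RDP", "site": "RemoteBranch1", "uplinks": ["VPN_uplink"]},
    {"app": "RDP", "site": "RemoteBranch2", "uplinks": ["PTP_uplink"]},
    {"app": "RDP", "site": "Any",           "uplinks": ["MPLS_uplink"]},
]

def site_match_len(site, dst_ip):
    """Longest prefix (in bits) of the site's subnets containing dst_ip, or -1."""
    addr = ipaddress.ip_address(dst_ip)
    lengths = [ipaddress.ip_network(s).prefixlen
               for s in SITE_SUBNETS[site]
               if addr in ipaddress.ip_network(s)]
    return max(lengths, default=-1)

def select_rule(app, dst_ip):
    best, best_len = None, -1
    for rule in RULES:
        if rule["app"] != app:
            continue
        length = site_match_len(rule["site"], dst_ip)
        if length > best_len:   # strictly longer only, so earlier rules win ties
            best, best_len = rule, length
    return best

print(select_rule("RDP", "10.2.5.9")["uplinks"])      # ['PTP_uplink'] (RemoteBranch2)
print(select_rule("RDP", "198.51.100.7")["uplinks"])  # ['MPLS_uplink'] (Any / default site)
```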
    Example of a Path Selection Implementation
    Figure 8‑7 shows an example of a WAN design in which path selection is implemented. The example shows multiple dual-homed sites in which the first link is an MPLS WAN link provided by a carrier, labeled MPLS, and the second link is a private VPN circuit. A third site, Remote Branch3, has a connection back to the MPLS cloud and a secondary link through an Internet-based firewall. All traffic, including public Internet traffic, is backhauled through the main headquarters site and egresses directly through the firewall connection.
    Figure 8‑7. Path Selection Design Example
    Each path is probed for availability according to the probe schedule, which has a default interval of 2 seconds. The probe transmits an ICMP request from the configured in-path interface toward the probe destination IP address; after receipt of an ICMP response, the path is declared available for use. A path is determined to be down after the count of consecutive probe failures surpasses the configured probe threshold. The default threshold is three probe packets.
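    As a rough sketch of this availability logic (not the SteelHead implementation), the loop below probes a destination on a fixed interval and marks the path down once consecutive failures surpass the threshold. The send_icmp_probe callable is a hypothetical placeholder for the actual ICMP exchange.

```python
# Minimal sketch of per-path availability tracking, assuming a hypothetical
# send_icmp_probe(dst) callable that returns True when an ICMP reply arrives.
import time

PROBE_INTERVAL = 2    # seconds between probes (default described above)
PROBE_THRESHOLD = 3   # consecutive failures tolerated before the path is marked down

def monitor_path(send_icmp_probe, probe_dst: str):
    """Track the availability of a single path by probing probe_dst forever."""
    failures = 0
    available = True
    while True:
        if send_icmp_probe(probe_dst):
            failures = 0
            available = True
        else:
            failures += 1
            if failures > PROBE_THRESHOLD:   # failures surpass the threshold: path is down
                available = False
        print(f"path via {probe_dst}: {'up' if available else 'down'}")
        time.sleep(PROBE_INTERVAL)
```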
    Note that the SteelHead looks only for an ICMP response from the probe destination to determine path availability. Even if the ICMP response traverses unintended devices or WANs, the path is considered available as long as the configured in-path interface receives the ICMP response. This can result in false-positive path availability. The example shown in Figure 8‑7 assumes that the MPLS path is configured with inpath0_0, along with a probe destination of 2R1. Even if the MPLS network fails, the path remains up as long as 2R1 continues to send ICMP responses to the SteelHead inpath0_0. Likewise, assume that the MPLS path, inpath0_0, is configured to send probes to 3R1 as the probe destination.
    In this failure scenario, in which the MPLS network fails, 1R1 forwards the ICMP request to 2R1, across the VPN, through 3FW, and on to 3R1. 3R1 can respond, sending ICMP responses back over the VPN and reaching the SteelHead inpath0_0. In either case, the MPLS path remains marked as available, although the likely intention is for the path to show as unavailable when the MPLS WAN is down.
    Riverbed recommends that you choose a probe address on the remote side of the path and make sure that devices in the path treat the probe as expected during a failure. You can best verify this behavior by conducting traceroute operations to confirm the path the probe traverses during outages. If the MPLS path has inpath0_0 configured with a probe destination of 3R1 and a next-hop gateway of 1R1, then configure 1R1 so that traffic to 3R1 can travel only over the MPLS network. If the MPLS network fails, 1R1, or another device, must drop the ICMP request probe from inpath0_0 to 3R1. An appropriate probe destination for a path can be a remote router loopback address or one of the remote SteelHead in-path interfaces.
    In RiOS v8.6 and later, you can configure the SteelHead on a per-uplink basis to perform firewall traversal using GRE encapsulation to a remote SteelHead peer. When you set the tunnel mode of a path to GRE, the SteelHeads use the configured probe destination IP address as the other tunnel endpoint. You must also configure GRE tunneling on the remote side, with similar path selection rules, if you intend to maintain traffic symmetry.
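    To make the encapsulation concrete, the following sketch builds only the 4-byte base GRE header defined in RFC 2784 and prepends it to an inner IP packet; the outer IP header (protocol 47) that carries the tunnel endpoint addresses is omitted. This illustrates GRE framing in general, not the SteelHead implementation.

```python
# Minimal sketch of GRE framing (RFC 2784 base header only); illustrative,
# not the SteelHead implementation.
import struct

GRE_PROTO_IPV4 = 0x0800  # protocol type of the encapsulated payload

def gre_encapsulate(inner_ip_packet: bytes) -> bytes:
    """Prepend a base GRE header (no checksum, key, or sequence number)."""
    flags_and_version = 0x0000  # all optional fields absent, version 0
    gre_header = struct.pack("!HH", flags_and_version, GRE_PROTO_IPV4)
    # An outer IP header with protocol 47 (GRE) and the tunnel endpoint
    # addresses would be added in front of this by the encapsulating device.
    return gre_header + inner_ip_packet
```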
    Each path has its own independent IP address to probe, yet this address can be identical for every path; therefore, each path can poll the same probe destination. Note that the ICMP request must use whichever physical interface is selected for the path. In Figure 8‑7, the MPLS path egresses its packets using the inpath0_0 interface, so all of its traffic uses the corresponding WAN0_0 interface. Meanwhile, the VPN path egresses its packets using inpath0_1, so all of its traffic uses the corresponding WAN0_1 interface.
    The next-hop gateway serves the following purposes for the path selection design because the gateway provides the new routing path for packets to travel through to their destination:
  • Replaces the destination MAC address of packets with the MAC address of the alternate gateway. The gateway MAC address is learned by the SteelHead in-path interface. As part of steering packets, the destination MAC address of the packets is altered to match the learned MAC address of the configured new next-hop gateway.
  • Path selection requires a Layer-2 connection between the SteelHead and the gateway; the connection between the SteelHead and the next-hop gateway cannot be a routed link. This is referred to as Layer-2 redirect by next hop MAC.
  • Switches the outbound interface from the original in-path interface to the desired primary path.
  • The path selection solution is implemented completely transparently, regardless of existing routing metrics. The gateway of the selected path accepts the steered packets and forwards them onto the corresponding WAN.
  • The SteelHead does not reconfigure the Layer-3 routing parameters of the routers in the network.
  • The SteelHead never injects routes or alters routing instances. The traffic source whose packets are steered to the path selection gateway has no visibility into the changes the SteelHead applies. Therefore, the client (or server) continues to send all packets to the gateway address it is configured with. This is referred to as Layer-2 redirection by interface. (A conceptual sketch of this redirection follows the list.)
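    The sketch below illustrates the Layer-2 redirect described in this list: steering rewrites only the frame's destination MAC address to the learned MAC of the chosen next-hop gateway and selects the egress interface, leaving the IP header and host routing untouched. The gateway addresses, MAC entries, and uplink names are hypothetical placeholders.

```python
# Minimal sketch of Layer-2 redirection by next-hop MAC and by interface,
# assuming hypothetical learned MAC entries and uplink bindings.
from dataclasses import dataclass

@dataclass
class Frame:
    dst_mac: str
    src_mac: str
    payload: bytes   # original IP packet; left untouched by steering

GATEWAY_MAC = {"192.0.2.1": "00:11:22:33:44:01",      # learned on the in-path interface
               "198.51.100.1": "00:11:22:33:44:02"}
UPLINKS = {"MPLS_uplink": {"gateway": "192.0.2.1", "egress_if": "wan0_0"},
           "VPN_uplink":  {"gateway": "198.51.100.1", "egress_if": "wan0_1"}}

def steer(frame: Frame, uplink_name: str):
    """Rewrite the destination MAC and choose the egress interface for an uplink."""
    uplink = UPLINKS[uplink_name]
    frame.dst_mac = GATEWAY_MAC[uplink["gateway"]]   # Layer-2 redirect by next-hop MAC
    return frame, uplink["egress_if"]                # Layer-2 redirect by interface
```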
    WAN interface selection is based on identified traffic type and availability of the end-to-end path, depending on how you configure your SteelHead. Note that path selection remains functional even if you pause the optimization service, or if the optimization service becomes unavailable. If the SteelHead fails completely, then path selection is no longer applicable and traffic proceeds as normal, following its default gateway.
     
    Identifying Traffic Flow Candidates
    The critical step for path selection is to identify traffic flows and to associate these traffic flow candidates with a configured uplink. In this step, the AFE interacts with the path selection feature. Use the following methods to identify traffic that can benefit from path selection:
  • The AFE can help you to identify the traffic and steer the traffic along a configured path. For more information about AFE, see Application Flow Engine.
  • Some limitations exist when you use AFE in conjunction with path selection. AFE, or any deep packet inspection technique, must examine several of the beginning packets of a connection before it can identify the traffic. This means that, until identification completes, the beginning packets of a connection can be sent on a path different from the one you chose. This midstream switching has implications in various environments involving firewalls and dual Internet egress. For more details, see Firewall Path Traversal Deployment and Design Considerations.
  • You can also use IP header information as an alternative method for identifying traffic. Header-based identification can use any combination of the following:
  • Source IP
  • Destination IP
  • Source port address
  • Destination port address
  • DSCP mark
  • VLAN tag
  • Optimized/Unoptimized traffic
  • Layer-4 protocols (TCP, UDP, GRE, and so on)
    For each path selection rule you configure, you can add a maximum of three different uplinks. The uplinks you choose cascade from one to the next, based on availability. RiOS v9.0 and later includes the concept of application groups and enables you to reference multiple application types using a single path selection rule designating the application group.
    During the brief period when traffic classification is not yet complete, traffic is treated according to a matching header-based rule or the site default rule. This period can have implications during a path failure: if such a rule does not specify an uplink preference, the traffic is sent down the original path, and new connections, or connections for which application identification has not yet completed, might never become established. Riverbed recommends that you specify an uplink preference for all path selection rules, both application-based and header-based, as well as for the site default rule.
    You have the option to drop the traffic if no alternate path is available. Dropping traffic is useful when you prefer not to consume bandwidth on another available path after a failure of the primary selected path. If you choose not to override the original intended route, the traffic is relayed normally: it continues to flow along the original intended path, following the default gateway.
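    The following sketch pulls together the fallback behavior described above: a rule lists up to three uplinks in preference order, the first available uplink is used, and when none is available the traffic is either dropped or relayed along its original route, depending on the rule. The rule contents and availability map are illustrative placeholders.

```python
# Minimal sketch of uplink cascading and the drop/relay fallback, assuming
# hypothetical uplink names and availability state.
RULE = {"uplinks": ["MPLS_uplink", "VPN_uplink", "Internet_uplink"],  # preference order (max three)
        "on_failure": "relay"}                                        # or "drop"

def choose_uplink(rule, available):
    """Return the first available uplink, or apply the rule's failure action."""
    for name in rule["uplinks"]:
        if available.get(name, False):
            return name
    if rule["on_failure"] == "drop":
        return None                 # traffic is dropped rather than rerouted
    return "original-route"         # traffic is relayed normally via the default gateway

print(choose_uplink(RULE, {"MPLS_uplink": False, "VPN_uplink": True}))  # VPN_uplink
```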
    Traffic identification and path steering is independent of optimized versus pass-through traffic. Path selection takes action on the configured traffic, no matter the optimization status of the traffic.
    Path selection configuration is also independent of any QoS settings. This means that you can apply path selection rules with or without enabling QoS marking or shaping. Path selection uses its own rule sets, independent of QoS; therefore, path selection rules do not count against the QoS rule limit specified for the model.
    Remember that path selection on the sending SteelHead does not influence or manipulate return traffic in any way to make it take the steered path. To steer the return traffic onto the same path, you must install and configure a remote SteelHead with the appropriate path selection configuration.