VPN Routing and Forwarding
This chapter describes how to deploy SteelHeads in an MPLS/VRF environment using Not-So-VRF (NSV) and VRF-aware WCCP. It includes the following sections:
•  NSV with VRF Select
•  VRF-Aware WCCP
NSV is a Riverbed network design option that leverages the Riverbed WAN optimization solution by deploying SteelHeads in an existing MPLS deployment using virtual routing and forwarding (VRF). Use NSV when the WCCP router or Layer-3 switch operating system does not support VRF-aware WCCP; when it does, VRF-aware WCCP is an alternative deployment option to NSV.
NSV with VRF Select
This section provides an overview of NSV. This section includes the following topics:
•  Virtual Routing and Forwarding
•  NSV with VRF Select
•  IOS Requirements
•  Prerequisites for NSV
•  Example NSV Network Deployment
•  Configuring NSV
Virtual Routing and Forwarding
Virtual routing and forwarding (VRF) is a technology used in computer networks that allows multiple instances of a routing table to coexist within the same router at the same time. VRF partitions a router by creating multiple routing tables and multiple forwarding instances. Because the routing instances are independent, you can use the same or overlapping IP addresses without conflict.
Note: The VRF table is also referred to as the VPNv4 routing table.
Figure: Partitioned Router Using Two Routing Tables
You can implement VRF in a network device by having distinct routing tables, one per VRF. Dedicated interfaces are bound to each VRF.
In Figure: Partitioned Router Using Two Routing Tables, the red table can forward packets between interfaces E1/0, E1/2, and S2/0.102. The green table, on the other hand, forwards between interfaces E4/2, S2/0.103, and S2/1.103.
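For example, a minimal IOS sketch (illustrative only; the VRF names match the figure colors, and the route distinguishers are hypothetical) that creates the two tables and binds an interface to each might look like this:
ip vrf red
rd 1:102
!
ip vrf green
rd 1:103
!
interface Ethernet1/0
ip vrf forwarding red
!
interface Ethernet4/2
ip vrf forwarding green
After an interface is bound to a VRF with ip vrf forwarding, its connected routes populate that VRF table only.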
The simplest form of VRF implementation is VRF Lite, as shown in Figure: VRF Lite. VRF Lite uses VRFs without multiprotocol label switching (MPLS). In this implementation, each router within the network participates in the virtual routing environment in a peer-based fashion. This implementation extends multiple VPNs from a provider edge (PE) device onto non-MPLS customer edge (CE) devices, which support multiple VRFs. It also removes the requirement for separate physical CE devices.
Figure: VRF Lite
NSV with VRF Select
NSV is a Riverbed network design option that leverages the Riverbed WAN optimization solution by deploying SteelHeads in an existing MPLS deployment using VRF. Riverbed recommends using NSV in an MPLS/VRF environment to deploy SteelHeads while retaining existing overlapping address spaces.
The concept of NSV originates in an MPLS VPN environment with multiple hosts in the same source VPN. The hosts require access to different servers in various destination VPNs. This deployment is difficult to implement if a particular subinterface is VRF-attached. A subinterface is a way to partition configuration information for certain subsets of traffic that arrive or leave a physical interface.
NSV uses the IOS MPLS VPN VRF Select feature, which essentially eases the requirement of a VRF-attached subinterface.
The VRF Select feature uses policy-based routing (PBR) at the ingress interface of the VRF router to determine which VRF to forward traffic to. In most cases, the VRF router is a PE device. In a VRF-Lite implementation, the VRF router is a CE device. The VRF router determines the routing and forwarding of packets coming from the customer networks (or VPNs). The access control list (ACL) defined in the PBR route map matches the source IP address of the packet. If it finds a match, it sends the packet to the appropriate MPLS VPN (the VRF table).
For more information about PBR, see Policy-Based Routing Virtual In-Path Deployments.
The VRF table contains the virtual routing and forwarding information for the specified VPN. It forwards the selected VPN traffic to the correct MPLS label switched path (LSP), based upon the destination IP address of the packet.
NSV with VRF Select removes the association between the VRF and the subinterface. Decoupling the VRF and the subinterface allows you to associate more than one MPLS VPN with the subinterface. The subinterface remains in the IPv4 dimension in VRF Select (as compared to the VPNv4 address space, in which it resides when it is VRF-attached). The subinterface is still IPv4-based, but it becomes aware of VRF Select when you replace the ip vrf forwarding Cisco command with the ip vrf receive command.
The result is that the subinterface becomes Not-So-VRF. The subinterface still resides in the global IPv4 table, but it now uses PBR for the VRF switch. The PBR route map matches criteria based on traffic flows to be optimized.
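Conceptually, the change on the subinterface is a one-command swap. The following before-and-after sketch uses the subinterface that is configured in full later in this chapter:
! VRF-attached (the subinterface resides in the VPNv4 space):
interface FastEthernet0/0.49
ip vrf forwarding custa
!
! Not-So-VRF (the subinterface stays in the global IPv4 table and PBR selects the VRF):
interface FastEthernet0/0.49
ip vrf receive custa
ip policy route-map wds_a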
IOS Requirements
Cisco recommends the following minimum IOS releases for an MPLS VPN VRF Select using PBR deployment:
•  Most router platforms: 12.3(7)T or later
•  C76xx: 12.2(33)SRB1, 12.2(33)SRB2, 12.2(33)SRC, 12.2(33)SRC1, or 12.2(33)SRC2
•  ASR 1000 Series Router: IOS XE 2.1.0, 2.1.1, 2.1.2, or 2.2.1
Note: Regardless of how you configure a SteelHead, if the Cisco IOS version on the router or switch is below the current Cisco minimum recommendations, it might be impossible to have a functioning NSV implementation, or the implementation might not have optimal performance.
Prerequisites for NSV
Before configuring NSV, review the following information:
•  A detailed network diagram illustrating the logical connectivity between the data centers and branch offices
•  A running configuration of the multiple VRF CE devices
•  The exact IOS versions and hardware platforms in use
Example NSV Network Deployment
The examples in this section show the following deployment:
•  One SteelHead, configured as a logical in-path (data center)
•  One SteelHead, configured as a physical in-path (branch office)
•  Both SteelHeads are running RiOS 5.0.3 or later
•  The router operating system is IOS 12.3(15)T7
•  Two Cisco 3640 series routers
•  Two Windows XP VM hosts
•  IP Service Level Agreement (SLA)
•  Static routes with tracking
This deployment is the basis for the configuration shown in Configuring NSV.
Figure: Sample NSV Network Setup shows a logical in-path NSV deployment in a VRF network environment.
Figure: Sample NSV Network Setup
Figure: NSV Deployment with Intercepted and Optimized Flows shows the NSV deployment shown in Figure: Sample NSV Network Setup with intercepted and optimized flows.
Figure: NSV Deployment with Intercepted and Optimized Flows
Figure: NSV Deployment with Bypassed Flows shows the NSV deployment with bypassed flows in the event that the data center SteelHead fails.
Figure: NSV Deployment with Bypassed Flows
Configuring NSV
This section describes how to configure NSV. This section includes the following topics:
•  Basic Steps for Configuring NSV
•  Configuring the Data Center Router
•  Configuring the PBR Route Map
•  Decoupling VRF from the Subinterface to Implement NSV
•  Configuring Static Routes
•  Configuring the Branch Office Router
•  Configuring the Data Center SteelHead
•  Configuring the Branch Office SteelHead
The configuration in this section uses the parameters set up in Example NSV Network Deployment.
Basic Steps for Configuring NSV
This section provides an overview of the basic steps to configure NSV with VRF Select. The following sections describe each step in detail.
To configure basic NSV with VRF Select
1. Configure the data center PE or CE router, which includes defining:
•  the VRF tables.
•  the subinterfaces.
•  the PBR route map.
•  PBR.
•  static routes.
•  parameters for monitoring the SteelHead availability.
2. Configure the branch office router.
3. Configure the data center SteelHead.
4. Configure the branch office SteelHead.
Configuring the Data Center Router
The data center PE or CE router determines the routing and forwarding of packets coming from the customer networks or VPNs. This device requires the most configuration.
The first step is to define the VRF tables for the SteelHead. For example, you define two VRF tables for SteelHead 40: custa for the customer and wds_a to use as a dummy VRF table. The dummy VRF table is not tied to any interface. It redirects traffic with a corresponding default route, which points to or exits at the subinterface to the SteelHead.
Note: You cannot enter the set ip next-hop Cisco command on a PBR route map configured for VRF Select.
The next step is to configure the subinterfaces and the VRF routing protocol. In this example, you configure the following subinterfaces and define the OSPF VRF routing protocol:
•  f0/0.40 (the LAN-to-SteelHead 40)
•  e1/0 (the WAN)
Note: This example uses OSPF as the routing protocol, but you can use other protocols, such as RIP, EIGRP, ISIS, and BGP, as well. OSPF uses a different routing process for each VRF. For the other protocols, a single process can manage all the VRFs.
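For example, whereas OSPF requires a process per VRF (as shown in the following procedure), a single BGP process can carry all VRFs under per-VRF address families. The following sketch is illustrative only; the autonomous system number 65000 and the second VRF custb are hypothetical:
router bgp 65000
address-family ipv4 vrf custa
redistribute connected
exit-address-family
!
address-family ipv4 vrf custb
redistribute connected
exit-address-family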
To define the VRF tables and subinterfaces
1. Define the VRF tables for the SteelHead. On the data center router (in this example, P4R1), enter the following commands:
hostname p4R1
!
ip cef
!
ip vrf custa
rd 4:1
!
ip vrf wds_a
rd 4:9
!
2. Configure the VRF subinterfaces and corresponding VRF routing protocol. On the data center router, at the system prompt, enter the following commands:
interface FastEthernet0/0.40
encapsulation dot1Q 40
ip vrf forwarding custa
ip address 10.4.40.1 255.255.255.0
!
interface Ethernet1/0
ip vrf forwarding custa
ip address 10.254.4.1 255.255.255.0
half-duplex
!
router ospf 4 vrf custa
redistribute static subnets
network 10.4.40.0 0.0.0.255 area 0
network 10.254.4.0 0.0.0.255 area 0
This example configures the LAN subinterface f0/0.40, which interconnects SteelHead 40 to use VRF custa. Later, you point the dummy VRF wds_a to a default route (in this example, f0/0.40). This enables a PBR route map at f0/0.49 to redirect incoming traffic from Server 49 to Client 42 to SteelHead 40 for optimization.
In this example, because Client 42 is in the VPN custa (VRF custa), the traffic must return to the VRF custa routing path after optimization. For this redirection to work, the SteelHead 40 must reside in VRF custa and not VRF wds_a.
Configuring the PBR Route Map
VRF Select requires a control mechanism such as PBR to select which particular VRF table a data packet goes to. The next step is to configure a PBR route map, which provides matching criteria for incoming traffic and sets the VRF table.
To configure the PBR route map
•  On the data center router, enter the following commands:
route-map wds_a permit 10
match ip address 104
set vrf wds_a
!
route-map wds_a permit 20
set vrf custa
!
access-list 104 permit tcp host 10.4.49.88 host 10.4.42.99
The route map wds_a matches incoming traffic from Server 49 to Client 42. When it finds a match, it sets the VRF to wds_a, which, in turn, points to default route f0/0.40, where SteelHead 40 resides. Binding f0/0.40 with VRF custa ensures that the returning optimized traffic eventually reaches Client 42. The route map sets all other incoming traffic (everything except Server 49 to Client 42) to VRF custa.
Note: Ensure that the PBR route map contains a default set vrf entry to match any packet that does not match the previous criteria.
Note: Because BGP control packets are required to remain categorized as global-IPv4, use an ACL to ensure that these packets do not get forwarded to a VRF table.
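For example, the following sketch (illustrative only; the ACL number 100 and the route map sequence number 5 are hypothetical) adds a deny entry ahead of the existing wds_a entries. Packets that match a deny entry in a PBR route map are not policy-routed and are forwarded normally in the global table, which keeps BGP (TCP port 179) control packets out of the VRF tables:
access-list 100 permit tcp any any eq bgp
access-list 100 permit tcp any eq bgp any
!
route-map wds_a deny 5
match ip address 100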
Decoupling VRF from the Subinterface to Implement NSV
The following procedure decouples the association between VRF and a subinterface. It implements NSV by replacing the ip vrf forwarding Cisco command with the ip vrf receive command.
The result is that the subinterface becomes Not-So-VRF. The subinterface still resides in the global IPv4 table, but it now uses PBR for the VRF switch. The PBR route map matches criteria based on traffic flows to be optimized.
Note: You must have already defined the PBR route map as described in Configuring the PBR Route Map before completing the next step.
To implement VRF Select and PBR
•   On the data center router, enter the following commands:
interface FastEthernet0/0.49
encapsulation dot1Q 49
ip vrf receive custa
ip address 10.4.49.1 255.255.255.0
ip policy route-map wds_a
The absence of the ip vrf forwarding command in this example configuration implies that f0/0.49 is not associated with any particular VRF and remains in the IPv4 global address space. This makes it possible for the SteelHeads to communicate with the subinterface.
Configuring Static Routes
Static routes play a crucial role in an NSV deployment, because you use them to fine-tune the routing. The primary, default static route points to the in-path interface to redirect incoming traffic for optimization. (In the following example, traffic is redirected to 10.4.40.101 of SteelHead 40).
The command keyword track 1 determines whether the in-path IP address of the SteelHead is reachable. The primary, default static route is used only while the in-path IP address for the SteelHead is reachable. If it becomes unreachable, the primary route is removed from the routing table. The second, floating route serves as a backup to avoid blackholing traffic and to ensure flow continuity.
In this example, when the primary route is removed from the routing table because the SteelHead is unreachable, the second route, at an administrative distance of 250, takes effect. It points to the WAN interface e1/0 and avoids blackholing traffic to ensure flow continuity.
Also, in this example, because FastEthernet0/0.49 (where Server 49 is connected) is still in the IPv4 global address space, you must make it visible in VRF custa. To do this, you assign a third static route, associating it with VRF custa. The third static route points to Server 49 (10.4.49.88) in VRF custa and redistributes it into OSPF.
To define static routes
•   On the data center router, enter the following commands:
ip route vrf wds_a 0.0.0.0 0.0.0.0 FastEthernet0/0.40 10.4.40.101 track 1
ip route vrf wds_a 0.0.0.0 0.0.0.0 Ethernet1/0 10.254.4.2 250
ip route vrf custa 10.4.49.88 255.255.255.255 FastEthernet0/0.49 10.4.49.88
!
To monitor SteelHead availability
•   On the P4R1, at the system prompt, enter the following commands:
ip sla monitor 1
type echo protocol ipIcmpEcho 10.4.40.101
vrf custa
frequency 5
!
ip sla monitor schedule 1 life forever start-time now
!
track 1 rtr 1 reachability
IP SLA uses the ICMP echo protocol to monitor the availability status of the SteelHead in-path IP address every 5 seconds (in this example, IP address 10.4.40.101 for SteelHead 40:custa). This is tied to the primary default route through the tracking mechanism. The tracking mechanism prevents routing to an unavailable IP destination when the in-path IP address for the SteelHead is down (in this example, SteelHead 40:custa).
Configuring the Branch Office Router
A typical branch office router is a PE VRF or CE VRF-Lite device. Its configuration is minimal and standard. In most environments you probably do not need to configure this device.
To configure the P4R2
•  On the P4R2, enter the following commands:
hostname P4R2
ip cef
ip vrf custa
rd 4:1
interface FastEthernet0/0.42
encapsulation dot1Q 42
ip vrf forwarding custa
ip address 10.4.42.1 255.255.255.0
interface FastEthernet0/0.254
encapsulation dot1Q 254
ip vrf forwarding custa
ip address 10.254.4.2 255.255.255.0
router ospf 4 vrf custa
network 10.4.42.0 0.0.0.255 area 0
network 10.254.4.0 0.0.0.255 area 0
Configuring the Data Center SteelHead
The data center SteelHead (in this example, SteelHead 40:custa) is another vital component of an NSV deployment. Its configuration is simple: you enable the logical in-path interface.
To configure the data center SteelHead
•  On the data center SteelHead, connect to the CLI and enter the following commands:
hostname "SH40"
interface inpath0_0 ip address 10.4.40.101 /24
ip in-path-gateway inpath0_0 "10.4.40.1"
in-path enable
in-path oop enable
write memory
restart
Note: You must save your changes or they are lost upon reboot. Restart the optimization service for the changes to take effect.
Configuring the Branch Office SteelHead
The SteelHead deployed at the branch office needs slightly more configuration than the data center SteelHead. Because you are only implementing VRF Select for redirecting the data center LAN-side traffic, you must define fixed-target rules for the WAN-side traffic.
The following example uses SteelHead 42:custa.
To configure the branch office SteelHead
•  On the branch office SteelHead, connect to the CLI and enter the following commands:
hostname "SH42"
interface inpath0_0 ip address 10.4.42.101 /24
ip default-gateway "10.4.42.1"
in-path enable
in-path rule fixed-target target-addr 10.4.40.101 target-port 7800 dstaddr 10.4.49.88/32 dstport "all" srcaddr 10.4.42.99/32 rulenum 4
Note: You must save your changes or they are lost upon reboot. Restart the optimization service for the changes to take effect.
You can also use autodiscovery to eliminate configuring fixed-target rules if you disassociate the WAN interface (in this example, P4R1 e1/0) from the VRF (in this example, custa) the same way you disassociated the LAN interface using VRF Select as described in Decoupling VRF from the Subinterface to Implement NSV.
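For example, a hedged sketch of decoupling e1/0 on P4R1 (the route map name wds_a_wan is hypothetical; its match criteria and set vrf entries must mirror the optimized flows in the WAN-to-LAN direction):
interface Ethernet1/0
ip vrf receive custa
ip address 10.254.4.1 255.255.255.0
ip policy route-map wds_a_wan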
The branch office SteelHead could also be a SteelHead Mobile. In this deployment, you could use the Mobile Controller to facilitate configuring the fixed-target rules.
For information about the Mobile Controller, see the SteelCentral Controller for SteelHead Mobile User’s Guide.
VRF-Aware WCCP
This section describes how to deploy SteelHeads in an MPLS and VRF environment using VRF-aware WCCP. VRF-aware WCCP is an alternative deployment option to NSV when the WCCP router or Layer-3 switch operating system supports VRF-aware WCCP.
This section includes the following topics:
•  VRF-Aware WCCP Design Examples
•  VRF-Aware WCCP Best Practices
The following list shows, by platform, the minimum Cisco software (IOS, IOS-XE, or NX-OS) required to support VRF-aware WCCP for IPv4.
•  ISR G1: 15.0(1)M or later
•  ISR G2: 15.2(3)T or later
•  C7200: 15.0(1)M or later
•  C7206VX: 12.2SRC5 or 12.2SRE1
•  Catalyst 6500 with Sup2T: 15.1(1)SY or later
•  C7600: 15.1(3)S or later
•  ASR 1000: IOS-XE 3.1.0S or later
•  Nexus 7000: NX-OS 4.2(1); 5.1(5) or later; 6.x or later
Note: The table is subject to change over time. For the latest information, see the platform release notes.
VRF allows multiple instances of a routing table to coexist within the same router or Layer-3 switch at the same time. This increases functionality by allowing network (or Layer-3) paths to be segmented without using multiple devices. Because traffic is automatically segregated, you can use VRF to create separate virtual private networks (VPNs), and the design can be extrapolated to support multitenant architectures.
Multitenancy is an architecture in which a single instance of a software application serves multiple users. Each user is called a tenant. Tenants can be given the ability to customize some parts of the application.
VRF also increases network security and can eliminate the need for encryption and authentication. Because the routing instances are independent, you can use the same or overlapping IP addresses without conflict.
Riverbed recommends a separate SteelHead per VRF (or per tenant). Riverbed does not recommend joining a SteelHead to WCCP service groups that could redirect overlapping IP segments.
From the network design perspective, VRF-aware WCCP associates a separate WCCP instance (WCCP router ID) on each defined VRF. This configuration enables you to configure WCCP service groups on a per-VRF basis. The WCCP service group definition is local to the VRF on which it is defined. The WCCP service groups defined within a VRF are logically separated from each other.
Because the WCCP service group is locally significant to the VRF on which it is defined, you can reuse the service group IDs across different VRFs. This reuse is particularly useful for deployments in which the same addressing scheme is adopted for different tenants. In this case, you can reuse the same WCCP configurations (including redirect ACLs) for the various tenants.
By associating a separate WCCP instance for each VRF, you can allocate independent WCCP service group IDs to different VRFs residing within the same router or Layer-3 switch. This is analogous to server virtualization in which multiple virtual machines (VMs) running different applications are hosted on the same physical server.
In short, you can view the VRF-aware WCCP feature as a separate router (or Layer-3 switch) with its own WCCP instance. On the corresponding SteelHead, you define the corresponding WCCP service group IDs as if you were connecting to a single router (or Layer-3 switch).
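For example, on an IOS or IOS-XE platform, the same service group ID can be defined independently in two VRFs. This sketch is illustrative only; the VRF names and ACL names are hypothetical:
ip wccp vrf TenantA 61 redirect-list ACL-A
ip wccp vrf TenantB 61 redirect-list ACL-B
Each definition creates a separate WCCP instance that is local to its VRF, so the two service groups numbered 61 never interact.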
VRF-Aware WCCP Design Examples
This section shows design examples pertaining to VRF-aware WCCP with virtual in-path SteelHead deployment:
•  XYZ Data Services Design Study (Nexus 7000/NX-OS Example)
•  IJK Enterprise Design Study (ASR 1000/IOS-XE Example)
•  ABC Retail Design Study (ISR/IOS Example)
For brevity, only the VRF-aware WCCP configurations on the WCCP router/Layer-3 switch are shown in these examples. The WCCP client (SteelHead) configurations are not shown.
For information on configuring WCCP and the SteelHead, see WCCP Virtual In-Path Deployments.
Note: XYZ Data Services, IJK Enterprise, and ABC Retail are fictitious names. Any resemblance to actual companies is purely coincidental.
XYZ Data Services Design Study (Nexus 7000/NX-OS Example)
XYZ Data Services is a small IT hosting company that provides premium data services to enterprise customers. One of its data centers is located in the San Francisco Bay Area. The data center core switches consist of Cisco Nexus 7010 (NX7K) switches running NX-OS version 6.2(1).
Note: The Nexus 7000 (NX7K) switch is used as the example platform in this design study to further illustrate the VRF-aware WCCP configurations running on NX-OS in a multitenant environment.
Figure: WAN Optimization with VRF-Aware WCCP on a Nexus 7000 Switch shows a portion of the WAN optimization setup on an NX7K switch (CORE-11) in the XYZ Data Services Bay Area data center. CORE-11 resides at the data center core layer. The basic design is summarized as follows:
•  WAN optimization is provisioned for both Tenant-B and Tenant-C.
•  CORE-11 is functioning as a multi-VRF Layer-3 switch, with VRF Tenant-B and VRF Tenant-C created for the corresponding tenants.
•  Because there are now two different VRFs in CORE-11, VRF-aware WCCP (or two separate instances of WCCP) has to be implemented on CORE-11 for proper interception and redirection of TCP traffic to the correct SteelHeads:
–  TCP traffic to and from server S11 is intercepted by the WCCP instance for VRF Tenant-B and redirected to SteelHead B for optimization.
–  TCP traffic to and from server S21 is intercepted by the WCCP instance for VRF Tenant-C and redirected to SteelHead C for optimization.
•  The SteelHeads in this example can be in physical or virtual form factor.
Figure: WAN Optimization with VRF-Aware WCCP on a Nexus 7000 Switch
To configure the scenario shown in Figure: WAN Optimization with VRF-Aware WCCP on a Nexus 7000 Switch, use the following procedures.
The Layer-2 configurations are summarized as follows:
•  Five ports are used; four as access ports and the remaining one as an 802.1Q trunk port.
•  Out of the four access ports:
–  Two (Eth1/13 and Eth1/15) are for Tenant-B connecting SteelHead B and server S11.
–  The other two access ports (Eth1/14 and Eth1/16) are for Tenant-C connecting SteelHead C and server S21.
•  The trunk port (Eth1/17) connects to a WAN edge/peering-layer router (not shown in the diagram).
To configure the Layer-2 baseline configurations for NX7K/CORE-11
vlan 11,21,111-112,211-212
 
interface Ethernet1/13
description "Connects SteelHead B"
switchport
switchport access vlan 11
no shutdown
 
interface Ethernet1/14
description "Connects SteelHead C"
switchport
switchport access vlan 21
no shutdown
 
interface Ethernet1/15
description "Connects Server S11"
switchport
switchport access vlan 112
no shutdown
 
interface Ethernet1/16
description "Connects Server S21"
switchport
switchport access vlan 212
no shutdown
 
interface Ethernet1/17
description "802.1Q Trunk from Peering Layer"
switchport
switchport mode trunk
switchport trunk allowed vlan 111,211
no shutdown
The Layer-3 configurations are summarized as follows:
•  Based on the six VLANs created in the Layer-2 baseline configuration, six corresponding Layer-3 switch virtual interfaces (SVIs) are also created:
–  SVIs Vlan11 (in IP subnet 10.1.11.0/24), Vlan111 (in IP subnet 10.1.111.0/24), and Vlan112 (in IP subnet 10.1.112.0/24) are associated with VRF Tenant-B.
–  SVIs Vlan21 (in IP subnet 10.2.21.0/24), Vlan211 (in IP subnet 10.2.211.0/24), and Vlan212 (in IP subnet 10.2.212.0/24) are associated with VRF Tenant-C.
•  The default VDC is used for VRF Tenant-B and VRF Tenant-C.
•  Two OSPFv2 processes are instantiated:
–  One (router ospf 100) for VRF Tenant-B.
–  The other (router ospf 200) for VRF Tenant-C.
To configure the NX7K/CORE-11 Layer-3 baseline configurations
feature ospf
feature interface-vlan
 
vrf context Tenant-B
vrf context Tenant-C
 
interface Vlan11
no shutdown
vrf member Tenant-B
ip address 10.1.11.1/24
ip router ospf 100 area 0.0.0.0
 
interface Vlan111
no shutdown
vrf member Tenant-B
ip address 10.1.111.1/24
ip router ospf 100 area 0.0.0.0
 
interface Vlan112
no shutdown
vrf member Tenant-B
ip address 10.1.112.1/24
ip router ospf 100 area 0.0.0.0
 
interface Vlan21
no shutdown
vrf member Tenant-C
ip address 10.2.21.1/24
ip router ospf 200 area 0.0.0.0
 
interface Vlan211
no shutdown
vrf member Tenant-C
ip address 10.2.211.1/24
ip router ospf 200 area 0.0.0.0
 
interface Vlan212
no shutdown
vrf member Tenant-C
ip address 10.2.212.1/24
ip router ospf 200 area 0.0.0.0
 
router ospf 100
vrf Tenant-B
 
router ospf 200
vrf Tenant-C
The VRF-aware WCCP configurations are summarized as follows:
•  To facilitate the WCCP mask assignment scheme on the NX7K switch, two WCCP service groups are created for each tenant:
–  WCCP service group IDs 111 and 112 for VRF Tenant-B.
–  WCCP service group IDs 211 and 212 for VRF Tenant-C.
•  The WCCP interception is based on the inbound direction for each service group.
To configure NX7K/CORE-11 VRF-aware WCCP configurations
feature wccp
 
vrf context Tenant-B
ip wccp 111 redirect-list 111
ip wccp 112 redirect-list 112
 
vrf context Tenant-C
ip wccp 211 redirect-list 211
ip wccp 212 redirect-list 212
 
interface Vlan111
no shutdown
vrf member Tenant-B
ip address 10.1.111.1/24
ip router ospf 100 area 0.0.0.0
ip wccp 111 redirect in
 
interface Vlan112
no shutdown
vrf member Tenant-B
ip address 10.1.112.1/24
ip router ospf 100 area 0.0.0.0
ip wccp 112 redirect in
 
interface Vlan211
no shutdown
vrf member Tenant-C
ip address 10.2.211.1/24
ip router ospf 200 area 0.0.0.0
ip wccp 211 redirect in
 
interface Vlan212
no shutdown
vrf member Tenant-C
ip address 10.2.212.1/24
ip router ospf 200 area 0.0.0.0
ip wccp 212 redirect in
 
ip access-list 111
10 permit tcp 192.168.100.0/24 10.1.112.100/32
 
ip access-list 112
10 permit tcp 10.1.112.100/32 192.168.100.0/24
 
ip access-list 211
10 permit tcp 192.168.200.0/24 10.2.212.100/32
 
ip access-list 212
10 permit tcp 10.2.212.100/32 192.168.200.0/24
Note: HA and other more complex WCCP configurations are omitted from this example.
For Tenant-B:
•  Inbound TCP traffic from remote branch B (192.168.100.0/24) to server S11 (10.1.112.100/32) is intercepted by service group 111 and redirected to SteelHead B for optimization.
•  Inbound TCP traffic from server S11 (10.1.112.100/32) to remote branch B (192.168.100.0/24) is intercepted by service group 112 and redirected to SteelHead B for optimization.
•  Service groups 111 and 112 are local to VRF Tenant-B.
For Tenant-C:
•  Inbound TCP traffic from remote branch C (192.168.200.0/24) to server S21 (10.2.212.100/32) is intercepted by service group 211 and redirected to SteelHead C for optimization.
•  Inbound TCP traffic from server S21 (10.2.212.100/32) to remote branch C (192.168.200.0/24) is intercepted by service group 212 and redirected to SteelHead C for optimization.
•  Service groups 211 and 212 are local to VRF Tenant-C.
IJK Enterprise Design Study (ASR 1000/IOS-XE Example)
IJK Enterprise specializes in automotive component manufacturing. The enterprise headquarters is based in Stuttgart, Germany. The headquarters WAN routers consist of Cisco ASR 1004 routers running IOS-XE version 3.13S.
Note: The ASR 1000 (ASR1K) router is used as the example platform in this design study to further illustrate the VRF-aware WCCP configurations running on IOS-XE in a multi-VRF environment.
Figure: WAN Optimization with VRF-Aware WCCP on an ASR 1000 Router shows a portion of the WAN optimization setup on an ASR1K router (ASR-18) at the IJK Enterprise Stuttgart headquarters. ASR-18 resides at the enterprise HQ WAN edge. The basic design is summarized as follows:
•  WAN optimization is provisioned for both the marketing and engineering departments in IJK Enterprise.
•  ASR-18 is functioning as a multi-VRF router (implementing VRF Lite), with VRF Mktg and VRF Engr created for the corresponding departments.
•  Because there are now two different VRFs in ASR-18, VRF-aware WCCP (or two separate instances of WCCP) has to be implemented on ASR-18 for proper interception and redirection of TCP traffic to the correct SteelHeads:
–  TCP traffic to and from server S10 is intercepted by the WCCP instance for VRF Mktg and redirected to SteelHead F for optimization.
–  TCP traffic to and from server S20 is intercepted by the WCCP instance for VRF Engr and redirected to SteelHead G for optimization.
•  The SteelHeads in this example can be in physical or virtual form factor.
Note: The deployment of VRFs without MPLS is known as VRF Lite. In VRF Lite, each routed interface (whether physical or virtual) typically belongs to one VRF.
Figure: WAN Optimization with VRF-Aware WCCP on an ASR 1000 Router
To configure the scenario shown in Figure: WAN Optimization with VRF-Aware WCCP on an ASR 1000 Router, use the following procedure.
The baseline configurations for ASR-18 are summarized as follows:
•  Two physical interfaces (G0/0/0 and G0/0/1) are used:
–  G0/0/0 is WAN facing and connects to a WAN-layer router (not shown in the diagram).
–  G0/0/1 is LAN facing and connects to LAN switch, SW-18.
–  Two 802.1Q VLAN subinterfaces (G0/0/0.10 and G0/0/0.20) are derived from G0/0/0.
–  Two 802.1Q VLAN subinterfaces (G0/0/1.10 and G0/0/1.20) are derived from G0/0/1.
•  Out of the four subinterfaces created:
–  Two (G0/0/0.10 and G0/0/1.10) are for the Marketing department (marketing server S10 and SteelHead F).
–  The other two (G0/0/0.20 and G0/0/1.20) are for the engineering department (engineering server S20 and SteelHead G).
•  Subinterfaces G0/0/0.10 (in IP subnet 10.1.9.0/24) and G0/0/1.10 (in IP subnet 10.1.10.0/24) are associated with VRF Mktg.
•  Subinterfaces G0/0/0.20 (in IP subnet 10.2.19.0/24) and G0/0/1.20 (in IP subnet 10.2.20.0/24) are associated with VRF Engr.
•  Two OSPFv2 processes are instantiated:
–  One (router ospf 10) for VRF Mktg.
–  The other (router ospf 20) for VRF Engr.
Note: For brevity, the baseline configurations in this example are not illustrated.
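That said, based on the summary above, a hedged sketch of what the two OSPFv2 processes might look like (area 0 is an assumption, not part of the original example):
router ospf 10 vrf Mktg
network 10.1.9.0 0.0.0.255 area 0
network 10.1.10.0 0.0.0.255 area 0
!
router ospf 20 vrf Engr
network 10.2.19.0 0.0.0.255 area 0
network 10.2.20.0 0.0.0.255 area 0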
The VRF-aware WCCP configurations are summarized as follows:
•  To facilitate the WCCP mask assignment scheme on the ASR1K router, two WCCP service groups are created for each department:
–  WCCP service group IDs 101 and 102 for VRF Mktg.
–  WCCP service group IDs 201 and 202 for VRF Engr.
•  The WCCP interception is based on the inbound direction for each service group.
To configure the ASR1K/ASR-18 VRF-aware WCCP configurations
vrf definition Engr
rd 20:20
address-family ipv4
exit-address-family
 
vrf definition Mktg
rd 10:10
address-family ipv4
exit-address-family
 
ip wccp vrf Engr 201 redirect-list ACL201
ip wccp vrf Engr 202 redirect-list ACL202
ip wccp vrf Mktg 101 redirect-list ACL101
ip wccp vrf Mktg 102 redirect-list ACL102
 
interface GigabitEthernet0/0/0.10
encapsulation dot1Q 10
vrf forwarding Mktg
ip address 10.1.9.1 255.255.255.0
ip wccp vrf Mktg 101 redirect in
 
interface GigabitEthernet0/0/0.20
encapsulation dot1Q 20
vrf forwarding Engr
ip address 10.2.19.1 255.255.255.0
ip wccp vrf Engr 201 redirect in
 
interface GigabitEthernet0/0/1.10
encapsulation dot1Q 10
vrf forwarding Mktg
ip address 10.1.10.1 255.255.255.0
ip wccp vrf Mktg 102 redirect in
 
interface GigabitEthernet0/0/1.20
encapsulation dot1Q 20
vrf forwarding Engr
ip address 10.2.20.1 255.255.255.0
ip wccp vrf Engr 202 redirect in
 
 
ip access-list extended ACL101
permit tcp 192.168.10.0 0.0.0.255 host 10.1.10.100
 
ip access-list extended ACL102
permit tcp host 10.1.10.100 192.168.10.0 0.0.0.255
 
ip access-list extended ACL201
permit tcp 192.168.20.0 0.0.0.255 host 10.2.20.100
 
ip access-list extended ACL202
permit tcp host 10.2.20.100 192.168.20.0 0.0.0.255
Note: HA and other more complex WCCP configurations are omitted from this example for brevity.
Marketing department TCP traffic flow and session interception:
•  Inbound TCP traffic from remote branch F (192.168.10.0/24) to server S10 (10.1.10.100/32) is intercepted by service group 101 and redirected to SteelHead F for optimization.
•  Inbound TCP traffic from server S10 (10.1.10.100/32) to remote branch F (192.168.10.0/24) is intercepted by service group 102 and redirected to SteelHead F for optimization.
•  Service groups 101 and 102 are local to VRF Mktg.
Engineering department TCP traffic flow and session interception:
•  Inbound TCP traffic from remote branch G (192.168.20.0/24) to server S20 (10.2.20.100/32) is intercepted by service group 201 and redirected to SteelHead G for optimization.
•  Inbound TCP traffic from server S20 (10.2.20.100/32) to remote branch G (192.168.20.0/24) is intercepted by service group 202 and redirected to SteelHead G for optimization.
•  Service groups 201 and 202 are local to VRF Engr.
ABC Retail Design Study (ISR/IOS Example)
ABC Retail is a convenience store chain that retails a range of everyday items such as groceries, snack foods, candy, toiletries, soft drinks, tobacco products, and newspapers. The main store office (headquarters) is based in Akron, Ohio. The headquarters WAN routers consist of Cisco 3945 Integrated Services Routers running IOS version 15.4(2)T1.
Note: The Cisco 3900 Integrated Services Router (ISR) is used as the example platform in this design study to further illustrate the VRF-aware WCCP configurations running on IOS in a multi-VRF environment.
Figure: WAN Optimization with VRF-Aware WCCP on a 3900 ISR shows a portion of the WAN optimization setup on an ISR router (ISR-71) at the ABC Retail HQ office. ISR-71 resides at the WAN edge of the headquarters office. The basic design is summarized as follows:
•  WAN optimization is provisioned for both the inventory and accounting servers in ABC Retail.
•  ISR-71 is functioning as a multi-VRF router (implementing VRF Lite), with VRF Invt and VRF Acct created for the corresponding servers.
•  Because there are now two different VRFs in ISR-71, VRF-aware WCCP (or two separate instances of WCCP) will have to be implemented on ISR-71 for proper interception and redirection of TCP traffic to the correct SteelHeads:
–  TCP traffic to and from server S50 is intercepted by the WCCP instance for VRF Invt and redirected to SteelHead X for optimization.
–  TCP traffic to and from server S60 is intercepted by the WCCP instance for VRF Acct and redirected to SteelHead Y for optimization.
•  The SteelHeads in this example can be in physical or virtual form factor.
Note: The deployment of VRFs without MPLS is known as VRF Lite. In VRF Lite, each routed interface (whether physical or virtual) typically belongs to one VRF.
Figure: WAN Optimization with VRF-Aware WCCP on a 3900 ISR
To configure the scenario shown in Figure: WAN Optimization with VRF-Aware WCCP on a 3900 ISR, use the following procedure.
The baseline configurations for ISR-71 are summarized as follows:
•  Two physical interfaces (G0/0 and G0/1) are utilized:
–  G0/0 is WAN facing and connects to a WAN layer router (not shown in the diagram for brevity).
–  G0/1 is LAN facing and connects to LAN switch, SW-71.
–  Two 802.1Q VLAN subinterfaces (G0/0.50 and G0/0.60) are derived from G0/0.
–  Two 802.1Q VLAN subinterfaces (G0/1.50 and G0/1.60) are derived from G0/1.
•  Out of the four subinterfaces created:
–  Two (G0/0.50 and G0/1.50) are for inventory purposes (inventory server S50 and SteelHead X).
–  The other two (G0/0.60 and G0/1.60) are for accounting purposes (accounting server S60 and SteelHead Y).
•  Subinterfaces G0/0.50 (in IP subnet 10.1.49.0/24) and G0/1.50 (in IP subnet 10.1.50.0/24) are associated with VRF Invt.
•  Subinterfaces G0/0.60 (in IP subnet 10.2.59.0/24) and G0/1.60 (in IP subnet 10.2.60.0/24) are associated with VRF Acct.
•  Two OSPFv2 processes are instantiated:
–  One (router ospf 50) for VRF Invt.
–  The other (router ospf 60) for VRF Acct.
Note: For brevity, the baseline configurations in this example are not illustrated.
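That said, based on the summary above, a hedged sketch of what the two OSPFv2 processes might look like (again, area 0 is an assumption):
router ospf 50 vrf Invt
network 10.1.49.0 0.0.0.255 area 0
network 10.1.50.0 0.0.0.255 area 0
!
router ospf 60 vrf Acct
network 10.2.59.0 0.0.0.255 area 0
network 10.2.60.0 0.0.0.255 area 0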
The VRF-aware WCCP configurations are summarized as follows:
•  To facilitate the WCCP mask assignment scheme on the ISR router, two WCCP service groups are created for each division:
–  WCCP service group IDs 151 and 152 for VRF Invt.
–  WCCP service group IDs 161 and 162 for VRF Acct.
•  The WCCP interception is based on the inbound direction for each service group.
To configure the ISR/ISR-71 VRF-aware WCCP configurations
ip vrf Acct
rd 60:60
 
ip vrf Invt
rd 50:50
 
ip wccp vrf Acct 161 redirect-list ACL161
ip wccp vrf Acct 162 redirect-list ACL162
ip wccp vrf Invt 151 redirect-list ACL151
ip wccp vrf Invt 152 redirect-list ACL152
 
interface GigabitEthernet0/0.50
encapsulation dot1Q 50
ip vrf forwarding Invt
ip address 10.1.49.1 255.255.255.0
ip wccp vrf Invt 151 redirect in
 
interface GigabitEthernet0/0.60
encapsulation dot1Q 60
ip vrf forwarding Acct
ip address 10.2.59.1 255.255.255.0
ip wccp vrf Acct 161 redirect in
 
interface GigabitEthernet0/1.50
encapsulation dot1Q 50
ip vrf forwarding Invt
ip address 10.1.50.1 255.255.255.0
ip wccp vrf Invt 152 redirect in
 
interface GigabitEthernet0/1.60
encapsulation dot1Q 60
ip vrf forwarding Acct
ip address 10.2.60.1 255.255.255.0
ip wccp vrf Acct 162 redirect in
 
 
ip access-list extended ACL151
permit tcp 192.168.50.0 0.0.0.255 host 10.1.50.100
 
ip access-list extended ACL152
permit tcp host 10.1.50.100 192.168.50.0 0.0.0.255
 
ip access-list extended ACL161
permit tcp 192.168.60.0 0.0.0.255 host 10.2.60.100
 
ip access-list extended ACL162
permit tcp host 10.2.60.100 192.168.60.0 0.0.0.255
Note: HA and other more complex WCCP configurations are omitted from this example for brevity.
Inventory division's TCP traffic flow and session interception:
•  Inbound TCP traffic from remote branch X (192.168.50.0/24) to server S50 (10.1.50.100/32) is intercepted by service group 151 and redirected to SteelHead X for optimization.
•  Inbound TCP traffic from server S50 (10.1.50.100/32) to remote branch X (192.168.50.0/24) is intercepted by service group 152 and redirected to SteelHead X for optimization.
•  Service groups 151 and 152 are local to VRF Invt.
Accounting division's TCP traffic flow and session interception:
•  Inbound TCP traffic from remote branch Y (192.168.60.0/24) to server S60 (10.2.60.100/32) is intercepted by service group 161 and redirected to SteelHead Y for optimization.
•  Inbound TCP traffic from server S60 (10.2.60.100/32) to remote branch Y (192.168.60.0/24) is intercepted by service group 162 and redirected to SteelHead Y for optimization.
•  Service groups 161 and 162 are local to VRF Acct.
VRF-Aware WCCP Best Practices
VRF-aware WCCP best practices include the following:
•  In VRF-aware WCCP implementations, you can view each VRF as a separate router (or Layer-3 switch) with its own WCCP instance. On the corresponding SteelHead you define the corresponding WCCP service group IDs as if you were connecting to a single router (or Layer-3 switch).
•  Riverbed does not recommend that you join a SteelHead to WCCP service groups that could redirect overlapping IP segments. Instead, Riverbed recommends a separate SteelHead (physical or virtual form factor) per VRF (or per tenant).
•  In VRF-aware WCCP, a service group definition is local to the VRF in which it is defined. The WCCP service IDs defined within a VRF are logically separated from each other, implying that you can reuse the service group IDs across VRFs. Riverbed recommends this configuration in scenarios in which cloud service providers adopt the same addressing scheme for different tenants. In this case, you can implement the same WCCPv2 configurations (including redirect ACLs) for the various tenants.
•  Although VRF-aware WCCP is supported on the NX7K switch, it is not required if virtual device contexts (VDCs) are configured on the switch instead of VRFs. In this case, the WCCP configurations use the usual global IPv4 routing table. Nevertheless, the Nexus 7000 switch supports only a maximum of four VDCs, including the default VDC, so you still need VRF when the number of tenants exceeds four. Therefore, on the NX7K switch, Riverbed recommends that you use VDC and VRF together, with VRF as a subset of the VDC. Each VDC can be further virtualized to support multiple VRFs (see the VDC sketch after this list).
•  The NX7K switch or NX-OS (version 5.1(1) or later) also supports WCCP variable timers. The configuration command is ip wccp <service-group-number> hia-timeout <here_i_am-timeout-value>. The default hia-timeout is 10 seconds (implying the WCCP variable timers are not enabled by default on NX-OS). Riverbed does not recommend that you change the default HIA-timeout value when configuring WCCP on the NX7K switch.
•  In a physical in-path deployment, all traffic traverses the respective WAN optimizers and there is no traffic fan-in control. Virtual in-path deployment with WCCP supports traffic fan-in control through the use of redirect access control lists (ACLs). Riverbed recommends the use of redirect ACLs (redirect lists, for short) on the WCCP router (or Layer-3 switch) because they provide an explicit fan-in control mechanism that minimizes unnecessary routing, redirection, and packet processing. In most cases, Riverbed recommends that you refine the ACLs according to the number of TCP applications that require WAN optimization for better fan-in control.
•  Riverbed does not recommend using a virtual gateway address (derived from HSRP, VRRP, or GLBP) as the WCCP router ID.
•  The in-path module wccp-adjust-mss enable command adjusts the MSS to an appropriate size for the SteelHead during WCCP operation. Riverbed recommends that you enable this command on the SteelHead to avoid unnecessary packet fragmentation or drops due to oversize packets when WCCP GRE redirection and return are selected.
•  The SteelHead supports WCCP redirection of ICMP messages to support Path MTU Discovery (PMTUD). PMTUD uses ICMP type 3, code 4 (Fragmentation needed and DF set) for notification. This is equivalent to the packet-too-big ICMP message defined in ICMP extended ACLs on the router (or Layer-3 switch). Riverbed recommends that you enable this feature on the SteelHead and the router (or Layer-3 switch) when end-to-end PMTUD support is required in the WCCP deployment (see the ACL sketch after this list).
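The following NX-OS sketch illustrates the VDC-plus-VRF recommendation above. It is illustrative only: the VDC name Pod2, its ID, and the interface allocation are hypothetical, and switchto vdc is an exec command rather than a configuration command:
vdc Pod2 id 2
allocate interface Ethernet1/13-16
 
switchto vdc Pod2
 
vrf context Tenant-B
vrf context Tenant-C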
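For the PMTUD recommendation above, a hedged sketch of a router-side extended ACL entry that matches the packet-too-big notifications (the ACL name ACL-PMTUD is hypothetical, and in practice the entry would be merged into the existing redirect list):
ip access-list extended ACL-PMTUD
permit icmp any any packet-too-big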