Storage Area Network Replication
Storage area network (SAN) data protection deployments include SAN replication products such as EMC Symmetrix Remote Data Facility/Asynchronous (SRDF/A), EMC RecoverPoint, IBM Global Mirror, IBM XIV replication, and Hitachi Universal Replicator. This chapter includes the following sections:
•  Overview of SAN Replication
•  Storage Optimization Modules
•  Best Practices for SAN Replication Using TCP/IP
•  Best Practices for SAN Replication Using Cisco MDS FCIP
SAN replication is one of the common deployments for data protection. For information about the other options, see Common Data Protection Deployments.
Overview of SAN Replication
In SAN replication deployments, WAN links are typically large, often ranging from T3 (45 Mbps) to OC-48 (2.5 Gbps) or more. Often SAN replication solutions require dedicated links used exclusively by the SAN replication solution.
As a best practice for high-speed SAN replication solutions, use SteelHeads that are dedicated to optimizing only high-speed SAN replication workloads and that do not also optimize large amounts of general application or end-user traffic. Dedicating SteelHeads in this way provides the following benefits:
•  It increases both the level and predictability of performance delivered by the SteelHeads, leading to consistent delivery of recovery point and recovery time objectives (RPO/RTO).
•  The large data sets commonly associated with high-speed replication do not compete for SteelHead data store resources with other user-based traffic, and vice versa.
•  You can tune each set of SteelHeads optimally for its respective workload.
Disable any data compression on the SAN array (for example, EMC Symmetrix Gigabit Ethernet connectivity) and on the FCIP or iFCP gateways (for example, Cisco MDS and QLogic) so that the data enters the SteelHead in raw form. Disabling data compression gives the SteelHeads the opportunity to perform additional bandwidth reduction using RiOS SDR.
Use dedicated SteelHeads of the same model for this type of data protection scenario. Consult your SAN vendor's customer service representative for the best-practice configuration of its arrays for use with SteelHeads.
For information about the EMC qualification matrix for Riverbed Technology, see the Riverbed Knowledge Base article Deploying SteelHeads with EMC Storage, at https://supportkb.riverbed.com/support/index?page=content&id=s13363. To ensure a successful integration, Riverbed requires EMC involvement in any SRDF deployment.
Storage Optimization Modules
This section describes the storage optimization module options. This section includes the following topics:
•  FCIP Optimization Module
•  SRDF Optimization Module
RiOS 6.0.1 or later includes storage optimization modules for the FCIP and SRDF protocols. These modules provide enhanced data reduction capabilities. The modules use explicit knowledge of where protocol headers appear in the storage replication data stream to separate the headers from the payload data that was written to storage. In the absence of a module, these headers represent an interruption to the network stream, reducing the ability of RiOS SDR to match on large, contiguous data patterns.
The modules must be configured based on the types of storage replication traffic present in the network environment. The following sections describe these options and when to apply them.
FCIP Optimization Module
This section describes the storage optimization for FCIP and how to configure it. This section includes the following topics:
•  Configuring Base FCIP Module
•  Configuring FCIP Module Rules
The module for FCIP is appropriate for environments using storage technology that originates traffic as fibre channel (FC) and then uses a Cisco MDS gateway to convert the FC traffic to TCP for WAN transport.
For information about storage technologies that originate traffic via FC, see Storage Area Network Replication. For configuration best-practice details for Cisco MDS deployments, see Best Practices for SAN Replication Using Cisco MDS FCIP.
All configuration for FCIP must be applied on the SteelHead closest to the FCIP gateway that opens the FCIP TCP connection by sending the initial SYN packet. If you are unsure which gateway initiates the SYN in your environment, Riverbed recommends that you apply the module configuration to the SteelHeads on both ends of the WAN.
Configuring Base FCIP Module
By default, the FCIP module is disabled. When only the base FCIP module has been enabled, all traffic on the well-known FCIP TCP destination ports 3225, 3226, 3227, and 3228 is directed through the module for enhanced FCIP header isolation. In most environments, no further FCIP module configuration is required beyond enabling the base module.
To enable the base FCIP module
1. Connect to the SteelHead CLI and enter the following command:
protocol fcip {enable | disable}
2. If an environment uses one or more nonstandard TCP ports for FCIP traffic, the module can be configured to handle traffic on additional ports by entering the following command:
protocol fcip ports <port-list>
Where <port-list> is a comma-separated list of TCP ports. Prefix this command with no to remove one or more TCP ports from the list of those currently directed to the FCIP module.
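For example, the following minimal sketch enables the base module and also directs a hypothetical nonstandard port (3230 is illustrative only) to it:
protocol fcip enable
protocol fcip ports 3230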
You can check whether the module is currently enabled or disabled, and you can determine on which TCP ports the module is looking for FCIP traffic.
To show current base FCIP module settings
•  Connect to the SteelHead CLI and enter the following command:
show protocol fcip settings
If the base FCIP module is enabled and connections are established, the Current Connections report shows the App label for each optimized connection as FCIP. If the report shows a connection's App as TCP, the module is not being used and you must check the configuration.
To observe the current base FCIP module connections
•  Connect to the SteelHead CLI and enter the following command:
show connections
T Source Destination App Rdn Since
--------------------------------------------------------------------------------
O 10.12.254.2 4261 10.12.254.34 3225 FCIP 18% 2010/03/09 18:50:02
O 10.12.254.2 4262 10.12.254.34 3226 FCIP 86% 2010/03/09 18:50:02
O 10.12.254.142 4315 10.12.254.234 3225 FCIP 2% 2010/03/09 18:50:02
O 10.12.254.142 4316 10.12.254.234 3226 FCIP 86% 2010/03/09 18:50:02
--------------------------------------------------------------------------------
Configuring FCIP Module Rules
An environment that has RF-originated SRDF traffic between VMAX arrays requires additional configuration beyond enabling the FCIP base module. Specifically, the SRDF protocol implementation used to replicate between two VMAX arrays uses an additional Data Integrity Field (DIF) header, which further interrupts the data stream. For Open Systems environments (such as Windows and UNIX/Linux), the DIF header is injected into the data stream after every 512 bytes of storage data. For IBM iSeries (AS/400) environments, the DIF header is injected after every 520 bytes. Do not add a module rule isolating DIF headers in mainframe environments, because SRDF environments that replicate mainframe traffic do not currently include DIF headers.
Note: FCIP module rules are only required for VMAX-to-VMAX traffic.
If your environment includes RF-originated SRDF traffic between VMAX arrays, the module can be configured to look for DIF headers.
To configure the FCIP module to look for DIF headers in the FCIP data stream
•  Connect to the SteelHead CLI and enter the following command:
protocol fcip rule src-ip <ip-address> dst-ip <ip-address> dif {enable | disable} dif-blocksize <number-of-bytes>
For example, if the only FCIP traffic in your environment is RF-originated SRDF between VMAX arrays, you can allow for isolation of DIF headers on all FCIP traffic by modifying the default rule as follows:
protocol fcip rule src-ip 0.0.0.0 dst-ip 0.0.0.0 dif enable
Environments that have a mix of VMAX-to-VMAX RF-originated SRDF traffic along with other FCIP traffic require additional configuration, because SteelHeads must be informed where DIF headers are expected. This configuration is made based on IP addresses of the FCIP gateways. In such a mixed environment, SAN zoning needs to be applied to ensure that DIF and non-DIF traffic are not carried within the same FCIP tunnel.
Assume your environment consists mostly of regular, non-DIF FCIP traffic but also carries some RF-originated SRDF between a pair of VMAX arrays. Assume a pair of FCIP gateways is configured with a tunnel to carry the traffic between these VMAX arrays, and that the source IP address of the tunnel is 10.0.0.1 and the destination IP address is 10.5.5.1. The preexisting default rule tells the module not to expect DIF headers on FCIP traffic. This setting allows for correct handling of all the non-VMAX FCIP traffic. To obtain the desired configuration, enter the following command to override the default behavior and perform DIF header isolation on the FCIP tunnel carrying the VMAX-to-VMAX SRDF traffic:
protocol fcip rule src-ip 10.0.0.1 dst-ip 10.5.5.1 dif enable
When configured, the FCIP module looks for a DIF header after every 512 bytes of storage data, which is typical for an Open Systems environment. If your environment uses IBM iSeries (AS/400) hosts, use the dif-blocksize option to inform the module to look for a DIF header after every 520 bytes of storage data. Enter the following command to modify the default rule to look for DIF headers on all FCIP traffic in a VMAX-based, IBM iSeries (AS/400) environment:
protocol fcip rule src-ip 0.0.0.0 dst-ip 0.0.0.0 dif enable dif-blocksize 520
To observe the current FCIP rule settings
•  Connect to the SteelHead CLI and enter the following command:
show protocol fcip rules
This command displays each rule currently configured, whether DIF header isolation is enabled or disabled for that rule, and how much storage data is expected before each DIF header in traffic matching that rule.
SRDF Optimization Module
This section describes the storage optimization for SRDF and how to configure it. This section includes the following topics:
•  Configuring the Base SRDF Module
•  Detecting Symmetrix VMAX Microcode
•  Configuring SRDF Module Rules
•  Configuring SRDF Selective Optimization
•  Viewing SRDF Reports
The module for SRDF is appropriate for environments using EMC Symmetrix Remote Data Facility (SRDF) with DMX and VMAX storage arrays when the traffic originates directly from Gigabit Ethernet ports on the arrays (also referred to as RE ports). In this configuration, the SRDF traffic appears on the network immediately as TCP. The SRDF protocol injects headers into the data stream; these headers interrupt the continuity of RiOS SDR. The SRDF module removes these headers from the data stream before performing data reduction, and then it reinjects them before sending the data to the receiving EMC Symmetrix. In addition, the SRDF module automatically disables native EMC SRDF compression for SRDF transfers, which avoids the need for a BIN file change to disable compression. In the event of a SteelHead or network failure, the Symmetrix arrays fall back on native compression instead of transmitting at uncompressed bandwidth rates.
Note: Environments with SRDF traffic originated through Symmetrix fibre channel ports (RF ports) require configuration of the RiOS FCIP module, not the SRDF module. For information about RF ports, see FCIP Optimization Module.
All configuration for SRDF must be applied on the SteelHead closest to the Symmetrix array that opens the SRDF TCP connection by sending the initial SYN packet. If you are unsure which array initiates the SYN in your environment, Riverbed recommends that you apply module configuration to the SteelHeads on both ends of the WAN.
Configuring the Base SRDF Module
By default, the SRDF module is disabled. When only the base SRDF module has been enabled, all traffic on the well-known SRDF TCP destination port 1748 is directed through the module for enhanced header isolation. In most environments using SRDF only between DMX arrays or VMAX-to-DMX, no further SRDF module configuration is required beyond enabling the base module.
To enable the base SRDF module
1. Connect to the SteelHead CLI and enter the following command:
protocol srdf enable
To disable SRDF, use the no protocol srdf enable command.
2. If an environment uses one or more nonstandard TCP ports for RE-originated SRDF traffic, the module can be configured to handle traffic on additional ports by entering the following command:
protocol srdf ports <port-list>
Where <port-list> is a comma-separated list of TCP ports. Use the no command option to remove one or more TCP ports from the list of those currently directed to the SRDF module.
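For example, the following minimal sketch enables the base module and also directs a hypothetical nonstandard port (1749 is illustrative only) to it:
protocol srdf enable
protocol srdf ports 1749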
You can see whether the module is currently enabled or disabled, and you can determine on which TCP ports the module is looking for SRDF traffic.
To observe current base SRDF module settings
•  Connect to the SteelHead CLI and enter the following command:
show protocol srdf settings
If the base SRDF module is enabled and connections are established, the Current Connections report shows the App label for each optimized connection as SRDF. If the report shows a connection's App as TCP, the module is not being used and you must check the configuration.
To observe the current base SRDF module connections
•  Connect to the SteelHead CLI and enter the following command:
show connections
T Source Destination App Rdn Since
--------------------------------------------------------------------------------
O 10.12.254.80 4249 10.12.254.102 1748 SRDF 82% 2010/03/09 16:35:40
O 10.12.254.80 4303 10.12.254.202 1748 SRDF 83% 2010/03/09 16:35:40
O 10.12.254.180 4250 10.12.254.102 1748 SRDF 85% 2010/03/09 16:35:40
O 10.12.254.180 4304 10.12.254.202 1748 SRDF 86% 2010/03/09 16:35:40
--------------------------------------------------------------------------------
Detecting Symmetrix VMAX Microcode
For Symmetrix VMAX running Enginuity microcode levels newer than 5874, you do not need to configure SRDF module rules (for details, see Configuring SRDF Selective Optimization). For Symmetrix VMAX running Enginuity level 5874 or older, configure SRDF module rules as described in the Configuring SRDF Module Rules section.
To detect the Symmetrix microcode level for an Open Systems-connected Symmetrix, use the symcfg command in EMC Solutions Enabler software. Solutions Enabler is EMC software that is typically used for managing Symmetrix storage arrays. The following example shows sample output:
# symcfg -sid 000194900363 list -v
Symmetrix ID: 000194900363
Time Zone : PST
Product Model : VMAX-1SE
Symmetrix ID : 000194900363
 
Microcode Version (Number) : 5875 (16F30000)
Microcode Registered Build : 0
Microcode Date : 11.22.2010
 
Microcode Patch Date : 11.22.2010
Microcode Patch Level : 122
Configuring SRDF Module Rules
An environment that has RE-originated SRDF traffic between VMAX arrays requires additional configuration beyond enabling the base module. Specifically, the SRDF protocol implementation used to replicate between two VMAX arrays employs an additional Data Integrity Field (DIF) header, which further interrupts the data stream. For Open Systems environments (such as Windows and UNIX/Linux), the DIF header is injected into the data stream after every 512 bytes of storage data. For IBM iSeries (AS/400) environments the DIF header is injected after every 520 bytes. Do not add a module rule isolating DIF headers in mainframe environments, because SRDF environments that replicate mainframe traffic do not currently include DIF headers.
Note: SRDF module rules are only required for VMAX-to-VMAX traffic.
If your environment includes RE-originated SRDF traffic between VMAX arrays, the module can be configured to look for DIF headers.
To configure the SRDF module to look for DIF headers
•  Connect to the SteelHead CLI and enter the following command:
(config) # protocol srdf rule src-ip <ip-address> dst-ip <ip-address> dif {enable | disable} dif-blocksize <number-of-bytes>
For example, if the only RE-originated SRDF traffic in your environment is between VMAX arrays, you can allow for isolation of DIF headers on all SRDF traffic by modifying the default rule as follows:
(config) # protocol srdf rule src-ip 0.0.0.0 dst-ip 0.0.0.0 dif enable
Environments that have a mix of VMAX-to-VMAX and DMX-based SRDF traffic require additional configuration, because SteelHeads must be informed where DIF headers are expected. This configuration is made based on RE port IP addresses.
Assume your environment contains RE-originated SRDF traffic mostly between DMX arrays but also some between a pair of VMAX arrays. Assume the VMAX array in the primary location has RE ports with IP addresses 10.0.0.1 and 10.0.0.2, and the VMAX array in the secondary location has RE ports with IP addresses 10.5.5.1 and 10.5.5.2. The preexisting default rule tells the module not to expect DIF headers on any RE-originated SRDF traffic. This behavior allows for correct handling of the main DMX-based SRDF traffic. To obtain the desired configuration, enter the following commands to override the default behavior and perform DIF header isolation on the VMAX SRDF connections:
(config) # protocol srdf rule src-ip 10.0.0.1 dst-ip 10.5.5.1 dif enable
(config) # protocol srdf rule src-ip 10.0.0.1 dst-ip 10.5.5.2 dif enable
(config) # protocol srdf rule src-ip 10.0.0.2 dst-ip 10.5.5.1 dif enable
(config) # protocol srdf rule src-ip 10.0.0.2 dst-ip 10.5.5.2 dif enable
When configured, the SRDF module looks for a DIF header after every 512 bytes of storage data, which is typical for an Open Systems environment. If your environment uses IBM iSeries (AS/400) hosts, use the dif-blocksize option in your rules to inform the module to look for a DIF header after every 520 bytes of storage data. Enter the following command to modify the default rule to look for DIF headers on all SRDF traffic in a VMAX-based, IBM iSeries (AS/400) environment:
(config) # protocol srdf rule src-ip 0.0.0.0 dst-ip 0.0.0.0 dif enable dif-blocksize 520
To observe the current SRDF rule settings
•  Connect to the SteelHead CLI and enter the following command:
show protocol srdf rules
This command displays each rule currently configured, whether DIF header isolation is enabled or disabled for that rule, and how much storage data is expected before each DIF header in traffic matching that rule.
Configuring SRDF Selective Optimization
RiOS 6.1.2 or later features selective optimization. Selective optimization applies different types of optimization to different RDF groups, allowing you to tune the optimization setting for each RDF group and make the best use of the SteelHead. Selective optimization also requires Symmetrix VMAX Enginuity microcode levels newer than 5874.
Consider an example with three types of data:
•  Oracle logs (RDF group 1)
•  Encrypted check images (RDF group 2)
•  Virtual machine images (RDF group 3)
In this example, assign LZ-only compression to the Oracle logs, no optimization to the encrypted check images, and default SDR to the virtual machine images. To assign these levels of optimization, configure the SteelHead to associate specific RE port IP addresses with specific Symmetrix arrays, and then assign rules to specific RDF groups for different optimization policies.
To configure the SteelHead to associate RE ports with a specific Symmetrix
1. Connect to the SteelHead CLI and enter the following command:
(config) # protocol srdf symm id <symm-id> address <ip-address>
The Symmetrix ID is an alphanumeric string that can contain hyphens and underscores (for example, a standard Symmetrix serial number is 000194900363). Do not use spaces or special characters.
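For example, assuming a SYMMID of 123 and reusing the RE port addresses 10.0.0.1 and 10.0.0.2 from the earlier DIF example purely for illustration, the association might look like the following sketch:
(config) # protocol srdf symm id 123 address 10.0.0.1
(config) # protocol srdf symm id 123 address 10.0.0.2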
2. Add a rule to affect traffic coming from the RE ports associated with this SYMMID:
(config) # protocol srdf symm id <symm-id> rdf_group <rdf-group> optimization <opt-policy> [description <description>]
Where <opt-policy> is one of none, lz-only, or sdr-default.
RiOS specifies <rdf-group> as a decimal number; however, some EMC utilities report RDF group numbers in hexadecimal.
When Symmetrix arrays serve Open Systems hosts and you are using EMC Solutions Enabler, RDF group numbers are reported in decimal, ranging from 1 to 255. By default, this is how <rdf-group> is entered in RiOS and shown in reports. For mainframe-attached Symmetrix arrays, tools report RDF group numbers in hexadecimal, starting from 0. If your SteelHeads serve only mainframe-attached Symmetrix arrays and you want <rdf-group> to be represented in the range 0 to 254, use the following command:
(config) # protocol srdf symm id base_rdf_group 0
To configure the three RDF groups in this example (assuming a SYMMID of 123), enter the following commands:
(config) # protocol srdf symm id 123 rdf_group 1 optimization lz-only description Oracle1_DB
(config) # protocol srdf symm id 123 rdf_group 2 optimization none description Checkimages
(config) # protocol srdf symm id 123 rdf_group 3 optimization sdr-default description VMimages
The following example shows sample output:
(config) # show protocol srdf symm stats
Time SYMM RDF group opt policy Reduction LAN Mbps WAN Mbps LAN KB WAN KB description
------------------ ---- --------- ---------- --------- -------- -------- ------ ------ -----------
10/2/2010 10:14:49 0123 1 lz-only 68% 222 71.04 222,146 71,040 Oracle1 DB
10/2/2010 10:14:49 0123 2 none 0% 79 79 79,462 79,462 Checkimages
10/2/2010 10:14:49 0123 3 sdr-default 94% 299 17.94 299,008 17,943 VMimages
Note: Data reduction is highest for RDF Group 3, which is treated with default SDR.
For more information about commands that show the current SRDF configuration (show protocol srdf symm id) or show reduction statistics for all or specific SYMMIDs (show protocol srdf symm stats), see the Riverbed Command-Line Interface Reference Manual.
To fine-tune SRDF optimization on a per-RDF-group basis
1. Check the level of data reduction currently achieved on each RDF group with the show protocol srdf symm stats command.
2. For RDF groups achieving low data reduction (for example, less than 20%), change the optimization policy to LZ-only.
3. For RDF groups achieving no data reduction (0%), first check to determine whether the RDF groups contain information that is intentionally encrypted. If so, change the optimization policy to none. If not, Riverbed recommends investigating whether source encryption can be disabled.
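For example, continuing with the hypothetical SYMMID of 123, if the stats report showed an additional RDF group 4 achieving only 15 percent reduction on unencrypted data, you could change just that group to LZ-only compression (the group number and description are illustrative):
(config) # show protocol srdf symm stats
(config) # protocol srdf symm id 123 rdf_group 4 optimization lz-only description LowRednData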
Viewing SRDF Reports
In RiOS 7.0 or later, you can report SRDF statistics on a per-RDF-group basis. The following command displays statistics for all RDF groups being optimized:
(config) # show stats protocol srdf [interval <interval>] | [start-time <date> end-time <date>]
This command shows statistics for a specific Symmetrix machine (indicated by <symm-id>):
(config) # show stats protocol srdf symm id <symm-id> [interval <interval>] | [start-time <date> end-time <date>]
This command shows statistics for a specific RDF group (indicated by <rdf-group>):
(config) # show stats protocol srdf symm id <symm-id> rdf-group <rdf-group> [interval <interval>] | [start-time <date> end-time <date>]
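For example, to display statistics for the hypothetical SYMMID 123 and RDF group 1 configured earlier, omitting the optional time-range arguments:
(config) # show stats protocol srdf symm id 123 rdf-group 1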
To view these reports, open the Management Console, and choose Reports > Optimization: SRDF.
Best Practices for SAN Replication Using TCP/IP
Many SAN arrays support replication using direct connectivity over TCP/IP. In this case, SteelHeads optimize connections that are initiated directly between the SAN arrays participating in the replication. The following table shows a best-practice configuration for RiOS 5.5.3 or later with TCP/IP connectivity directly from the storage array.
Feature
CLI Commands
Enable RiOS SDR-M.
datastore sdr-policy sdr-m
Set compression level (LZ1)
datastore codec compression level 1
Multicore Balancing
datastore codec multi-core-bal
Enable MX-TCP class covering replication traffic.
qos classification class add class-name "blast" priority realtime min-pct 99.0000000 link-share 100.0000000 upper-limit-pct 100.0000000 queue-type mxtcp queue-length 100 parent "root"
qos classification rule add class-name "blast" traffic-type optimized destination port <port-number> rulenum 1
qos classification rule add class-name "blast" traffic-type optimized source port <port-number> rulenum 1
Replace <port-number> with the port used by the replication application.
Set WAN TCP buffers (see the sizing example after this table)
protocol connection wan receive def-buf-size <2*BDP>
protocol connection wan send def-buf-size <2*BDP>
Set LAN TCP buffers
protocol connection lan send buf-size 1048576
tcp adv-win-scale -1
Note: The tcp adv-win-scale -1 command is for RiOS 5.5.6c or later.
Reset existing connections on startup
in-path kickoff
in-path kickoff-resume
Note: The in-path kickoff-resume command is for RiOS 6.0.1a or later.
Never pass-through SYN packets
in-path always-probe enable
Increase encoder buffer sizes
datastore codec multi-codec encoder max-ackqlen 20
datastore codec multi-codec encoder global-txn-max 128
SRDF/A optimization
Note: Use only with SRDF/A Replication and RiOS 6.0.1 or later.
protocol srdf enable
VMAX DIF header optimization
protocol srdf rule src-ip <x.x.x.x> dst-ip <y.y.y.y> dif enable
Replace <x.x.x.x> and <y.y.y.y> with IP address pairs for RE ports. For details, see Storage Optimization Modules.
Note: Use only with EMC VMAX and RiOS 6.0.1 or later.
Restart the optimization service
restart
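In the preceding table, <2*BDP> refers to twice the bandwidth-delay product (BDP) of the WAN link, in bytes. As a worked example under assumed conditions, a hypothetical 155-Mbps (OC-3) link with a 50-ms round-trip time has a BDP of 155,000,000 bits/s × 0.05 s ÷ 8 ≈ 968,750 bytes, so you would set the WAN buffers to roughly twice that value:
protocol connection wan receive def-buf-size 1937500
protocol connection wan send def-buf-size 1937500
The same sizing applies to the WAN buffer commands in the Cisco MDS FCIP table later in this chapter.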
Best Practices for SAN Replication Using Cisco MDS FCIP
This section describes the key concepts and recommended settings in the MDS. This section includes the following topics:
•  FCIP Profiles
•  FCIP Tunnels
•  Configuring a Cisco MDS FCIP Deployment
•  Best Practices for a RiOS 5.5.3 or Later with Cisco MDS FCIP Configuration
FCIP Profiles
An FCIP profile defines the characteristics of FCIP tunnels that are created through a particular MDS Gigabit Ethernet interface. Profile characteristics include the following:
•  IP address of the MDS Gigabit Ethernet interface that originates the tunnel
•  TCP port number
•  Bandwidth and latency characteristics of the WAN link
•  Advanced settings that are typically left at their default values
The MDS enables you to define up to three FCIP profiles per physical MDS Gigabit Ethernet interface. Because a tunnel can be created for each profile, a Cisco MDS switch with two physical Gigabit Ethernet ports can have up to six profiles. Most configurations have only one profile per Gigabit Ethernet interface; however, Riverbed recommends maximizing the number of profiles configured for each Gigabit Ethernet port to increase the total number of TCP connections.
In the profile settings, the default maximum and minimum bandwidth values per FCIP profile are 1000 Mbps and 500 Mbps, respectively. You can achieve better performance for both unoptimized and optimized traffic by using 1000 Mbps and 800 Mbps. These bandwidth settings govern the rate of the LAN-side TCP connection entering the SteelHead, so setting them aggressively high has no downside: the SteelHead terminates TCP locally on the LAN side and can slow the MDS down, by advertising a smaller TCP window, if the MDS tries to go too fast.
Similarly, leave the round-trip setting at its default (1000 µs in the Management Console, 1 ms in the CLI), because the network in this context is effectively the LAN connection between the MDS and the SteelHead.
If you are doing unoptimized runs, configure the bandwidth and latency settings in the MDS to reflect the actual network conditions of the WAN link. These settings help the MDS fill the pipe during unoptimized runs in the presence of latency.
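When SteelHeads are optimizing the traffic, these profile recommendations correspond to MDS settings like the following sketch (the IP address is illustrative and matches the full gateway configuration shown later in this section):
fcip profile 1
ip address 10.12.254.15
tcp max-bandwidth-mbps 1000 min-available-bandwidth-mbps 800 round-trip-time-ms 1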
FCIP Tunnels
An FCIP tunnel configuration is attached to a profile and defines the IP address and TCP port number of a far-side MDS to which an FCIP connection is established. You can keep the tunnel configuration default settings, with the following key exceptions:
•  In the Advanced tab of the MDS GUI:
–  Turn on the Write Accelerator option. Always use this option when testing with SteelHeads in the presence of latency. This is an optimization in the MDS (and similar features exist in other FCIP/iFCP products) to reduce round trips.
–  Set the FCIP configuration for each tunnel to Passive on one of the MDS switches. By default, when first establishing FCIP connectivity, each MDS constantly tries to initiate new connections in both directions, and it is difficult to determine which side ends up with the well-known destination port (for example, 3225). This behavior can make it difficult to interpret SteelHead reports. When you set one side to Passive, the nonpassive side always initiates connections, so the behavior is deterministic.
FCIP settings allow you to specify the number of TCP connections associated with each FCIP tunnel. By default, this setting is 2: one connection for control traffic and one for data traffic. Do not change the default value. The single-TCP mode exists only to maintain compatibility with older FCIP implementations, and keeping the control and data traffic separate matters for performance because FC is highly sensitive to jitter.
You can set whether the MDS compresses the FCIP data within the FCIP tunnel configuration. On the MDS, the default setting is off, and you must leave compression disabled when the SteelHead is optimizing. The best practices of common SAN replication vendors (for example, EMC) recommend turning this setting on when no WAN optimization controller (WOC) systems are present; however, when you add SteelHeads to an existing environment, it must remain disabled.
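On the MDS CLI, these tunnel recommendations map to interface settings like the following sketch, taken from the passive-side example later in this section (compression stays at its default of off, so no compression line appears):
interface fcip1
use-profile 1
passive-mode
peer-info ipaddr 10.12.254.15
write-accelerator
no shutdown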
Configuring a Cisco MDS FCIP Deployment
The following example shows a Cisco MDS FCIP gateway configuration. Cisco-style configurations typically do not show default values (for example, compression is off by default, so it is not present in this configuration dump). Also, this configuration does not show any non-FCIP elements (such as the FC ports that connect to the SAN storage array and VSANs). This example shows a standard and basic topology that includes an MDS FCIP gateway at each end of a WAN link: MDS1 and MDS2.
To configure a standard and basic topology that includes an MDS FCIP gateway
1. Configure MDS1.
fcip profile 1
ip address 10.12.254.15
tcp max-bandwidth-mbps 1000 min-available-bandwidth-mbps 800 round-trip-time-ms 1
fcip profile 2
ip address 10.12.254.145
tcp max-bandwidth-mbps 1000 min-available-bandwidth-mbps 800 round-trip-time-ms 1
interface fcip1
use-profile 1
peer-info ipaddr 10.12.254.45
write-accelerator
no shutdown
interface fcip2
use-profile 2
peer-info ipaddr 10.12.254.245
write-accelerator
no shutdown
ip route 10.12.254.32 255.255.255.224 10.12.254.30
ip route 10.12.254.224 255.255.255.224 10.12.254.130
interface GigabitEthernet1/1
ip address 10.12.254.15 255.255.255.224
switchport description LAN side of mv-emcsh1
no shutdown
interface GigabitEthernet1/2
ip address 10.12.254.145 255.255.255.224
switchport description LAN side of mv-emcsh1
no shutdown
2. Configure MDS2.
fcip profile 1
ip address 10.12.254.45
tcp max-bandwidth-mbps 1000 min-available-bandwidth-mbps 800 round-trip-time-ms 1
fcip profile 2
ip address 10.12.254.245
tcp max-bandwidth-mbps 1000 min-available-bandwidth-mbps 800 round-trip-time-ms 1
interface fcip1
use-profile 1
passive-mode
peer-info ipaddr 10.12.254.15
write-accelerator
no shutdown
interface fcip2
use-profile 2
passive-mode
peer-info ipaddr 10.12.254.145
write-accelerator
no shutdown
ip route 10.12.254.0 255.255.255.224 10.12.254.60
ip route 10.12.254.128 255.255.255.224 10.12.254.230
interface GigabitEthernet1/1
ip address 10.12.254.45 255.255.255.224
switchport description LAN side of mv-emcsh2
no shutdown
interface GigabitEthernet1/2
ip address 10.12.254.245 255.255.255.224
switchport description LAN side of mv-emcsh2
no shutdown
Best Practices for a RiOS 5.5.3 or Later with Cisco MDS FCIP Configuration
Riverbed recommends the following best practices regarding a Cisco MDS FCIP configuration:
•  Enable the RiOS 5.5 or later multicore balancing feature due to the small number of data connections.
•  Use an in-path rule to specify the neural-mode as never for FCIP traffic.
•  Set the always-probe port to 3225 to ensure that MDS aggressive SYN-sending behavior does not result in unwanted pass-through connections.
The following table summarizes the CLI commands for RiOS 5.5.3 or later with Cisco MDS FCIP.
Feature
CLI Commands
Enable RiOS SDR-M.
datastore sdr-policy sdr-m
Set compression level (LZ1)
datastore codec compression level 1
Multicore Balancing
datastore codec multi-core-bal
Turn Off Nagle
in-path rule auto-discover srcaddr all-ip dstaddr all-ip dstport "3225" preoptimization "none" optimization "normal" latency-opt "normal" vlan -1 neural-mode "never" wan-visibility "correct" description "" rulenum start
MX-TCP class covering FCIP traffic
qos classification class add class-name "blast" priority realtime min-pct 99.0000000 link-share 100.0000000 upper-limit-pct 100.0000000 queue-type mxtcp queue-length 100 parent "root"
qos classification rule add class-name "blast" traffic-type optimized destination port 3225 rulenum 1
qos classification rule add class-name "blast" traffic-type optimized source port 3225 rulenum 1
Set WAN TCP buffers
protocol connection wan receive def-buf-size <2*BDP>
protocol connection wan send def-buf-size <2*BDP>
Set LAN TCP buffers
protocol connection lan send buf-size 1048576
tcp adv-win-scale -1
Note: tcp adv-win-scale -1 is for RiOS 5.5.6c or later.
Reset existing connections on startup
in-path kickoff
in-path kickoff-resume
Note: in-path kickoff-resume is for RiOS 6.0.1a or later.
Never pass-through SYN packets
in-path always-probe enable
Change always-probe port to FCIP
in-path always-probe port 3225
Increase encoder buffer sizes
datastore codec multi-codec encoder max-ackqlen 20
datastore codec multi-codec encoder global-txn-max 128
FCIP optimization
protocol fcip enable
Note: Use only with RiOS 6.0.1 or later.
DIF header optimization
protocol fcip rule src-ip <x.x.x.x> dst-ip <y.y.y.y> dif enable
Replace <x.x.x.x> and <y.y.y.y> with IP address pairs for MDS Gigabit Ethernet ports. For details, see Storage Optimization Modules.
Note: Use only with EMC VMAX and RiOS 6.0.1 or later.
Restart the optimization service
restart
If you increase the number of FCIP profiles, you must also create separate in-path rules to disable Nagle for other TCP ports (for example, 3226 and 3227).
Similarly, if you decide to set QoS rules to focus on port 3225 to drive traffic into a particular class, you must create rules for both ports 3226 and 3227.
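For example, if ports 3226 and 3227 are also in use, the additional rules might look like the following sketch, which reuses the syntax from the table above (the class name blast and the rule numbers are illustrative and must fit your existing rule ordering):
in-path rule auto-discover srcaddr all-ip dstaddr all-ip dstport "3226" preoptimization "none" optimization "normal" latency-opt "normal" vlan -1 neural-mode "never" wan-visibility "correct" description "" rulenum start
in-path rule auto-discover srcaddr all-ip dstaddr all-ip dstport "3227" preoptimization "none" optimization "normal" latency-opt "normal" vlan -1 neural-mode "never" wan-visibility "correct" description "" rulenum start
qos classification rule add class-name "blast" traffic-type optimized destination port 3226 rulenum 2
qos classification rule add class-name "blast" traffic-type optimized source port 3226 rulenum 2
qos classification rule add class-name "blast" traffic-type optimized destination port 3227 rulenum 3
qos classification rule add class-name "blast" traffic-type optimized source port 3227 rulenum 3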