Core best practices
This section describes best practices for deploying the Core.
Deploy on Gigabit Ethernet networks
The iSCSI protocol enables block-level traffic over IP networks. However, iSCSI is sensitive to both latency and bandwidth. To optimize performance and reliability, deploy Core and the storage array on Gigabit Ethernet networks.
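As a basic sanity check of the network path toward the storage array, you can measure TCP connect latency to the standard iSCSI port (3260) from a host on the same network segment as the Core. The following Python sketch is illustrative only; the portal address is a placeholder for your environment.

import socket
import statistics
import time

PORTAL = ("192.0.2.10", 3260)   # placeholder iSCSI portal address on the storage array
SAMPLES = 10

latencies_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection(PORTAL, timeout=5):   # open and close one TCP connection
        pass
    latencies_ms.append((time.perf_counter() - start) * 1000)
    time.sleep(0.5)

print(f"median connect latency: {statistics.median(latencies_ms):.2f} ms")
print(f"max connect latency:    {max(latencies_ms):.2f} ms")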
Use CHAP
For additional security, use CHAP between Core and the storage array, and between Edge and the server. One-way CHAP is also supported.
Configure initiators and storage groups or LUN masking
To prevent unwanted hosts from accessing LUNs mapped to Core, configure initiator and storage groups between Core and the storage system. This practice is also known as LUN masking or Storage Access Control.
When mapping Fibre Channel LUNs to the Core appliances, ensure that the ESXi servers in the cluster that host the Core appliances have access to these LUNs, and configure the ESXi servers that do not host the Core appliances so that they cannot access them.
Segregate storage traffic from management traffic
To increase overall security, minimize congestion, minimize latency, and simplify the overall configuration of your storage infrastructure, segregate storage traffic from regular LAN traffic using VLANs.
When to pin and prepopulate the LUN
The product technology has built-in file system awareness for NTFS and VMFS file systems. You will likely need to pin and prepopulate the LUN if it contains other file systems or unstructured data, or if frequent or prolonged WAN outages are expected.
LUNs containing file systems other than NTFS and VMFS and LUNs containing unstructured data
Pin and prepopulate the LUN for unoptimized file systems such as FAT, FAT32, and ext3. You can also pin the LUN for applications, such as databases, that use a raw disk format or a proprietary file system.
Data availability at the branch during a WAN link outage
When the WAN link between the remote branch office and the data center is down, data no longer travels through the WAN link, so the product technology and its intelligent prefetch mechanisms no longer function. Pin and prepopulate the LUN if frequent or prolonged WAN outages are expected.
By default, the Edge keeps a write reserve that is 10 percent of the blockstore size. If prolonged WAN outages are expected, increase the write reserve space accordingly.
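There is no single correct value for the write reserve. One rough way to reason about it is to compare the default reserve with the amount of new data the branch is expected to write during the longest outage you want to ride through. The following sketch is an illustrative calculation only, not a product formula, and all input values are assumptions.

blockstore_gib = 2000                          # assumed Edge blockstore size
default_reserve_gib = blockstore_gib * 0.10    # default write reserve is 10 percent

avg_write_rate_mib_per_s = 5                   # assumed sustained write rate at the branch
outage_hours = 12                              # longest outage to plan for

writes_during_outage_gib = avg_write_rate_mib_per_s * 3600 * outage_hours / 1024

print(f"default write reserve: {default_reserve_gib:.0f} GiB")
print(f"estimated writes during a {outage_hours}-hour outage: {writes_during_outage_gib:.0f} GiB")
if writes_during_outage_gib > default_reserve_gib:
    print("consider increasing the write reserve above the 10 percent default")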
Core configuration export
Store and back up the configuration on an external server in case of system failure. Enter the following CLI commands to export the configuration:
enable
configure terminal
configuration bulk export scp://username:password@server/path/to/config
Repeat this export each time you perform a configuration operation or make any other change to the configuration.
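If you prefer to run this export on a schedule rather than manually, one approach is to script it over SSH. The following Python sketch (using the paramiko library) simply sends the CLI commands shown above; the Core hostname, credentials, and scp destination are placeholders, and prompt handling is deliberately simplified.

import time
import paramiko

CORE_HOST = "core1.example.com"                # placeholder Core hostname
EXPORT_URL = "scp://backupuser:password@backup.example.com/configs/core1-config"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(CORE_HOST, username="admin", password="password")

shell = client.invoke_shell()                  # interactive shell for the appliance CLI
for command in ("enable", "configure terminal",
                f"configuration bulk export {EXPORT_URL}"):
    shell.send((command + "\n").encode())
    time.sleep(2)                              # crude pacing; a robust script should read prompts

print(shell.recv(65535).decode(errors="replace"))
client.close()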
Core replacement in HA configuration
If the configuration has been saved on an external server, a failed Core can be seamlessly replaced. Enter the following CLI commands to retrieve the previously saved configuration:
enable
configure terminal
no service enable
configuration bulk import scp://username:password@server/path/to/config
service enable
Hardware upgrade of Core appliances in HA configuration
Due to expansion of your deployment, it might be necessary to replace existing Core appliances with models that provide greater capacity. If you already have existing Cores deployed in an HA configuration, it is possible to perform the hardware replacement with minimal, or even zero, impact to normal data service operations. However, before replacing the hardware, we strongly recommend using Riverbed Professional Services to help plan and implement this process.
The following steps outline the required tasks:
1. Verify failover configuration of both the Core and Edge devices by reviewing the current settings in the Core Management Console pages for failover and storage.
2. Install both of the new Cores and apply a basic jumpstart configuration including a temporary IP address on the primary interface of each Core to allow for management access.
3. Check that the new Cores have the correct software version and licenses installed. Apply any updates if needed.
4. Trigger a failover of one of the existing production Core devices by stopping product services.
5. Check that the surviving (active) production Core is continuing to provide storage services to all Edges as expected.
6. Using either the Management Console or CLI, export the current configuration of the passive failover Core to an external device: for example, the workstation you are performing these tasks with.
7. Shut down the passive failover Core.
8. Stop the product services on the first of the new Core devices.
9. Swap over all of the network cables from the passive (old) Core to the first of the new Core devices, making sure that ports and cables match.
10. Check that the new Core is still accessible via its temporary IP address on the primary interface (for example, with a simple reachability check such as the sketch that follows this procedure).
11. Connect a serial console cable to the new Core and perform a configuration jumpstart, applying the IP address and hostname of the old passive Core.
12. Connect to the Management Console of the new Core and import the configuration file of the old Core.
13. Once the configuration for the new Core is correctly imported and applied, start the product services.
14. Verify that the new Core is peered with the remaining active production Core that has yet to be upgraded.
15. Using either the Management Console or CLI, export the current configuration of the active failover Core to an external device: for example, the workstation you are performing these tasks with.
16. Shut down the active failover Core that has yet to be upgraded.
17. Check that the surviving (newly upgraded) production Core is continuing to provide storage services to all Edges as expected.
18. Stop the product services on the second new Core device.
19. Swap over all of the network cables from the remaining (old) Core to the second new Core device, making sure that ports and cables match.
20. Check that the second new Core is still accessible via its temporary IP address on the primary interface.
21. Connect a serial console cable to the second new Core and perform a configuration jumpstart, applying the IP address and hostname of the remaining (old) Core.
22. Connect to the Management Console of the second new Core and import the configuration file of the remaining (old) Core.
23. Once the configuration for the second new Core is correctly imported and applied, start the product services.
24. Verify that the second new Core is peered with the first new Core that was upgraded and that all data services to the Edge device are operating as normal.
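Steps 10 and 20 call for confirming that each new Core still answers on its temporary primary-interface IP address. A minimal reachability sketch follows; the address is a placeholder, and the choice of SSH (port 22) and HTTPS management (port 443) as the services to test is an assumption for this example.

import socket

TEMP_IP = "192.0.2.50"                         # placeholder temporary IP applied during jumpstart

for port in (22, 443):                         # SSH and HTTPS management (assumed enabled)
    try:
        with socket.create_connection((TEMP_IP, port), timeout=5):
            print(f"{TEMP_IP}:{port} reachable")
    except OSError as exc:
        print(f"{TEMP_IP}:{port} NOT reachable: {exc}")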
LUN-based data protection limits
When using LUN-based data protection, be aware that each snapshot/backup operation takes approximately 2 minutes to complete. At that rate, 30 LUNs take roughly an hour to process, so if the hourly option is configured for more than 30 LUNs, a full pass cannot finish before the next one is due and nonreplicated snapshots can accumulate on the Edges.
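The arithmetic behind the 30-LUN guideline is shown below; the 2-minute figure comes from the text above, and the LUN count is an example value.

MINUTES_PER_OPERATION = 2                      # approximate time per snapshot/backup operation
lun_count = 35                                 # example: hourly snapshots configured for 35 LUNs

cycle_minutes = lun_count * MINUTES_PER_OPERATION
print(f"{lun_count} LUNs x {MINUTES_PER_OPERATION} min = {cycle_minutes} min per pass")
if cycle_minutes > 60:
    print("a full pass exceeds the hourly schedule; nonreplicated snapshots can build up on the Edges")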
WAN usage for a Core-to-Edge VMDK data migration
When provisioning VMs as part of a data migration, you might see high traffic across the WAN link, depending on the type of VMDK being migrated. The following table gives an example of WAN usage for a Core-to-Edge migration of a 100-GiB VMDK with 20 GiB used.
VMDK type                  WAN traffic usage           Space used on array   Space used on array   VMDK fragmentation
                                                       (thick LUNs)          (thin LUNs)
Thin                       20 GiB                      20 GiB                20 GiB                High
Thick eager zero           100 GiB + 20 GiB = 120 GiB  100 GiB               100 GiB               None (flat)
Thick lazy zero (default)  20 GiB + 20 GiB = 40 GiB    100 GiB               100 GiB               None (flat)
For more details, go to Knowledge Base article S23357.
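The per-type pattern can be summarized as in the sketch below. It only encodes the example figures from the table; the rules are inferred from that example and are not a product formula.

def wan_usage_gib(vmdk_type: str, provisioned_gib: float, used_gib: float) -> float:
    """Approximate WAN traffic for migrating one VMDK, following the table above."""
    if vmdk_type == "thin":
        return used_gib                        # only the used blocks travel
    if vmdk_type == "thick eager zero":
        return provisioned_gib + used_gib      # full provisioned size plus the used data
    if vmdk_type == "thick lazy zero":
        return used_gib + used_gib             # the used data is effectively sent twice
    raise ValueError(f"unknown VMDK type: {vmdk_type}")

for vmdk in ("thin", "thick eager zero", "thick lazy zero"):
    print(f"{vmdk:>18}: ~{wan_usage_gib(vmdk, 100, 20):.0f} GiB over the WAN")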
Reserve memory and CPU resources when deploying Core
We strongly recommend that you allocate and reserve the correct amount of resources. Reserving these resources ensures that the recommended memory and CPU are dedicated to the Core instance, enabling it to perform as expected.
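If the Core runs as a virtual instance on ESXi, the reservations can be set in the vSphere client or programmatically. The following pyVmomi sketch is illustrative only; the vCenter address, credentials, VM name, and reservation values are placeholders, and the actual values should follow the sizing guidance for your Core model.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "core-va-01")       # placeholder VM name

spec = vim.vm.ConfigSpec()
spec.cpuAllocation = vim.ResourceAllocationInfo(reservation=8000)       # MHz (placeholder)
spec.memoryAllocation = vim.ResourceAllocationInfo(reservation=32768)   # MB (placeholder)

task = vm.ReconfigVM_Task(spec=spec)           # apply the CPU and memory reservations
print(f"reconfiguration task submitted: {task.info.key}")

Disconnect(si)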