About data store performance
Data store performance settings are under Optimization > Data Replication: Performance.
Segment Replacement Policy selects the technique used to replace data in the data store. The default setting works best for most appliances, but occasionally changing the policy can improve performance. We recommend that the segment replacement policy match on both the client-side and server-side SteelHeads.
• LRU replaces the least recently used data in the data store, which improves hit rates when data segments aren’t accessed equally. This is the default.
• FIFO replaces data in the order received (first in, first out).
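The difference between the two policies can be sketched in a few lines. This is a conceptual illustration of LRU versus FIFO eviction, not SteelHead's internal data store implementation; the class and method names are hypothetical.

```python
from collections import OrderedDict, deque

# Conceptual sketch only (not SteelHead internals): both stores hold a fixed
# number of segments and differ solely in which segment they evict when full.

class LRUStore:
    """Evicts the least recently used segment; hits refresh a segment's age."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.segments = OrderedDict()

    def access(self, key, data=None):
        if key in self.segments:
            self.segments.move_to_end(key)       # mark as recently used
            return self.segments[key]
        if data is not None:
            if len(self.segments) >= self.capacity:
                self.segments.popitem(last=False)  # evict least recently used
            self.segments[key] = data
        return data

class FIFOStore:
    """Evicts segments in arrival order; hits do not change eviction order."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.order = deque()
        self.segments = {}

    def access(self, key, data=None):
        if key in self.segments:
            return self.segments[key]            # hit, but order is unchanged
        if data is not None:
            if len(self.segments) >= self.capacity:
                self.segments.pop(self.order.popleft())  # evict the oldest
            self.order.append(key)
            self.segments[key] = data
        return data
```

With a hot segment that is re-read often, LRU keeps it resident while FIFO eventually evicts it anyway, which is why LRU improves hit rates when data aren't equally used.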
You can also optimize the data store for high-throughput data replication (DR) or data center workloads over a high-bandwidth WAN. DR and SAN replication workloads at these high throughputs might benefit from settings that enhance data store performance while still receiving data reduction benefits from SDR. For DR workloads, we recommend using separate appliances from those used for acceleration of other application traffic to maintain consistent levels of performance for DR workloads.
Adaptive Data Streamlining Modes monitor and control the different resources available on the appliance, and adapt their utilization to optimize LAN throughput. Generally, the default setting provides the most data reduction; select another setting only with guidance from Riverbed. When choosing an adaptive streamlining mode for your network, contact Riverbed Support to help you evaluate the available settings.
Use caution with the SDR-Adaptive Legacy setting, particularly when you are optimizing CIFS or NFS with prepopulation.
You can’t use data store synchronization with SDR-M.
After changing this setting, you must restart the service on the client-side and server-side appliances.
After changing the streamlining setting, you can verify whether changes have had the desired effect by reviewing the Optimized Throughput report.
• Default is enabled by default and works for most implementations. It provides the most data reduction while reducing random disk seeks and improving disk throughput by discarding very small data margin segments that are no longer necessary. This margin segment elimination (MSE) process provides network-based disk defragmentation, writes large page clusters, and monitors the disk write I/O response time to provide more throughput.
• SDR-adaptive legacy includes the default settings and also balances reads and writes, monitoring both read and write disk I/O response times and CPU load. Based on statistical trends, it employs a blend of disk-based and non-disk-based data reduction techniques to sustain throughput during periods of disk- or CPU-intensive workloads.
• SDR-adaptive advanced maximizes LAN-side throughput dynamically under different data workloads. This switching mechanism is governed by a throughput and bandwidth-reduction goal based on the available WAN bandwidth.
• SDR-M performs data reduction entirely in memory, which prevents the SteelHead from reading from and writing to the disk. Enabling this option can yield high LAN-side throughput because it eliminates all disk latency. This is typically the preferred configuration mode for SAN replication environments. SDR-M is most efficient when used between two identical high-end SteelHead models. When used between two different SteelHead models, the smaller model limits the performance. After enabling SDR-M on both the client-side and the server-side appliances, restart both appliances to avoid performance degradation. If you select SDR-M as the adaptive data streamlining mode, the Clear the Data Store option isn’t available when you restart the optimization service because the SDR-M mode has no effect on the data store disk.
CPU settings allow you to adjust the compression level, enable adaptive compression, and enable multicore balancing of connection loads. These features are useful under high-traffic loads for scaling back compression, increasing throughput, and maximizing Long Fat Network (LFN) utilization.
• Compression Level specifies the relative trade-off of data compression for LAN throughput speed. Generally, a lower number provides faster throughput and slightly less data reduction. Level 1 sets minimum compression and uses less CPU; level 9 sets maximum compression and uses more CPU. The default value is 6. We recommend setting the compression level to 1 in high-throughput environments such as data center-to-data center replication.
• Adaptive Compression detects LZ data compression performance for a connection dynamically and disables it (sets the compression level to 0) momentarily if it isn’t achieving optimal results. This improves end-to-end throughput over the LAN by maximizing WAN throughput. By default, this setting is disabled.
• Multi-Core Balancing ensures better distribution of the workload across all CPUs, maximizing throughput by keeping every CPU busy. Core balancing is useful when handling a small number of high-throughput connections (approximately 25 or fewer). By default, this setting is disabled; enable it only after careful consideration.