Overview of Riverbed QoS
Riverbed QoS is complementary to RiOS WAN optimization. Whereas SDR, transport streamlining, and application streamlining perform best with bandwidth-hungry or thick applications (such as email, file sharing, and backup), QoS improves the performance of latency-sensitive or thin applications (such as VoIP and interactive applications). QoS depends on accurate traffic classification, bandwidth reservation, and proper traffic priorities.
Another way to look at it: WAN optimization speeds up some traffic by reducing its bandwidth needs and accelerating it, while QoS deliberately slows down some traffic to guarantee latency and bandwidth to other traffic. A combination of both techniques is ideal.
Because the SteelHead acts as a TCP proxy, the appliance already works with traffic flows. Riverbed QoS is an extensive flow-based QoS system that queues traffic on a per-flow basis and uses standard TCP mechanics for traffic shaping, avoiding packet loss on a congested link. SteelHeads support both inbound and outbound QoS.
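To make the flow-based model concrete, the following minimal Python sketch (illustrative only; the FlowQueues class and packet field names are assumptions, not RiOS internals) keys traffic on the classic 5-tuple so that each flow gets its own queue:

    from collections import defaultdict, deque

    def flow_key(pkt):
        # A flow is identified by its 5-tuple: both endpoints plus protocol.
        return (pkt["src_ip"], pkt["src_port"],
                pkt["dst_ip"], pkt["dst_port"], pkt["proto"])

    class FlowQueues:
        """Per-flow FIFO queues: the basic building block of flow-based QoS."""
        def __init__(self):
            self.queues = defaultdict(deque)

        def enqueue(self, pkt):
            self.queues[flow_key(pkt)].append(pkt)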
The major functionalities of QoS are as follows:
  • Classification - Identifies and groups traffic. Riverbed QoS identifies and groups traffic using TCP/UDP header information, VLAN ID, or AFE. Identified traffic is grouped into classes, QoS marked (differentiated services code point [DSCP] or Type of Service [ToS]), or both. A classification and policing sketch follows this list.
  • For more information about AFE, see Application Flow Engine.
  • Policing - Defines the action taken on classified traffic. Riverbed QoS can define a minimum and maximum bandwidth per class, the priority of a class relative to other classes, and a weight governing how the class uses excess bandwidth (bandwidth left unused by other classes).
  • Enforcement - Determines how the action takes place. Enforcement is performed using the SteelHead QoS scheduler, which is based on the Hierarchical Fair Service Curve (HFSC) algorithm.
  • You can perform policing and enforcement on a downstream networking device when classification and QoS marking are performed on the SteelHead. This is useful if the SteelHead must integrate with an existing QoS implementation.
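    The following Python sketch illustrates classification and policing together. The rule and class fields are assumptions for illustration, not the SteelHead data model: ordered rules map a flow to a class, and each class carries the policing parameters described above.

        # Ordered classification rules: first match wins.
        RULES = [
            (lambda f: f["proto"] == "udp" and f["dst_port"] == 5060, "voip"),
            (lambda f: f.get("vlan_id") == 100,                       "branch-vlan"),
        ]

        CLASSES = {
            # Policing parameters: guaranteed and maximum bandwidth (kbps),
            # a latency priority, and a weight for sharing excess bandwidth.
            "voip":        {"min_kbps": 512,  "max_kbps": 1024, "priority": "real-time",   "weight": 10},
            "branch-vlan": {"min_kbps": 2048, "max_kbps": 8192, "priority": "normal",      "weight": 5},
            "default":     {"min_kbps": 0,    "max_kbps": None, "priority": "best-effort", "weight": 1},
        }

        def classify(flow):
            # Flows that match no rule fall into the default class.
            for match, cls in RULES:
                if match(flow):
                    return cls
            return "default"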
    Riverbed QoS is flow based. For TCP, the SteelHead must detect the three-way handshake or it cannot classify a traffic flow. If a traffic flow is not classified, it falls into the default class.
    After a traffic flow is classified, it is registered and cannot be reclassified. If a flow changes to a different application and the connection is not reset by the application, the classification stays the same as before the change.
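    A sketch of this classify-once behavior, reusing the hypothetical flow_key and classify helpers from the earlier sketches:

        # A flow is classified when its handshake is seen and keeps that
        # class for its lifetime, even if the application changes mid-flow.
        flow_table = {}

        def class_for(flow):
            key = flow_key(flow)
            if key not in flow_table:      # first packets of the flow
                flow_table[key] = classify(flow)
            return flow_table[key]         # cached; never reclassified

        def on_reset(flow):
            # A reset ends the flow; a new connection is classified afresh.
            flow_table.pop(flow_key(flow), None)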
    SteelHeads allocate a packet buffer 1024 packets deep for each configured class, regardless of packet size. You can adjust the packet buffer depths using the CLI.
    For information about adjusting packet buffers, see the SteelHead Management Console User’s Guide.
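    The key point is that buffer depth is counted in packets, not bytes. A minimal Python sketch of such a bounded per-class buffer (the ClassBuffer name is hypothetical):

        from collections import deque

        DEFAULT_DEPTH = 1024    # per-class depth, counted in packets

        class ClassBuffer:
            """Per-class packet buffer; depth ignores packet size."""
            def __init__(self, depth=DEFAULT_DEPTH):
                self.q = deque()
                self.depth = depth

            def enqueue(self, pkt):
                if len(self.q) >= self.depth:
                    return False    # buffer full: the packet is dropped
                self.q.append(pkt)
                return True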
    Riverbed QoS takes effect as soon as traffic congestion occurs on a link. Congestion occurs when multiple flows are sending data at the same time and the packets of the flows are not forwarded immediately, forming a queue.
    There are two types of congestion: long term, which lasts a second or longer, and short term, which lasts less than a second. Both types of congestion signal that one or more applications are being slowed down by other traffic.
    You can best manage long-term congestion by shaping traffic: reserving bandwidth for the more important traffic. This is the best-known form of QoS.
    Short-term congestion is what can cause an application to hang for a second or reduce the quality of a VoIP conversation. You can manage short-term congestion by prioritizing latency-sensitive packets so that they move ahead in the queue.
    A well-designed QoS environment uses both prioritization and traffic shaping to guarantee bandwidth and latency for applications.
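    The two techniques map to two classic mechanisms. A generic Python sketch (a token bucket and a priority queue, shown for illustration; this is not the HFSC scheduler RiOS actually uses) makes the distinction concrete:

        import heapq
        import time

        class TokenBucket:
            """Shaping: caps a class's long-term rate, which manages
            long-term congestion."""
            def __init__(self, rate_bps, burst_bytes):
                self.rate = rate_bps / 8.0          # bytes per second
                self.burst = burst_bytes
                self.tokens = burst_bytes
                self.last = time.monotonic()

            def allow(self, nbytes):
                now = time.monotonic()
                self.tokens = min(self.burst,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return True
                return False                        # over rate: hold or drop

        # Prioritization: latency-sensitive packets move ahead in the queue,
        # which manages short-term congestion (lower value dequeues first).
        q = []
        heapq.heappush(q, (0, "voip packet"))       # real-time priority
        heapq.heappush(q, (5, "backup packet"))     # low priority
        assert heapq.heappop(q)[1] == "voip packet"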
    Many QoS implementations use some form of packet fair queueing (PFQ), such as weighted fair queueing (WFQ) or class-based weighted fair queueing (CBWFQ). As long as high-bandwidth traffic requires a high priority (or vice versa), PFQ systems perform adequately. However, problems arise for PFQ systems when the traffic mix includes high-priority, low-bandwidth traffic (such as VoIP), or high-bandwidth traffic that does not require a high priority (such as e-mail), particularly when both of these traffic types occur together.
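    To see why the traffic mix matters, consider the textbook WFQ virtual finish-time computation (shown for illustration only; this is not RiOS code). Delay and bandwidth are coupled through a single weight, so a low-bandwidth VoIP flow can get low delay only by being granted a large weight, and therefore a large bandwidth share it never uses:

        def wfq_finish(virtual_time, last_finish, pkt_len, weight):
            # The scheduler always transmits the packet with the smallest
            # virtual finish time next; a larger weight shrinks the finish
            # time, buying both more bandwidth and less delay at once.
            start = max(virtual_time, last_finish)
            return start + pkt_len / weight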
    Additional features such as low-latency queueing (LLQ) attempt to address these concerns by introducing a separate system of strict priority queueing for high-priority traffic. However, LLQ is not a principled way of handling bandwidth and latency trade-offs; it is a separate queueing mechanism meant as a workaround for PFQ limitations.
    The Riverbed QoS system is based on a patented version of HFSC. HFSC allows bandwidth allocation for multiple applications of varying latency sensitivity. HFSC explicitly considers delay and bandwidth at the same time. Latency is described in six priority levels (real-time, interactive, business critical, normal, low, and best-effort) that you assign to classes.
    The priority that you assign to a class expresses how much delay the class can tolerate. At the same time, bandwidth guarantees are respected. This enables Riverbed QoS to deliver low latency to delay-sensitive traffic without wasting bandwidth, and high bandwidth to delay-insensitive traffic without disrupting delay-sensitive traffic.
    The Riverbed QoS system achieves the benefits of LLQ without the complexity and potential configuration errors of separate, parallel queueing mechanisms.
    For example, you can enforce a mix of high-priority, low-bandwidth traffic (SSH, Telnet, Citrix, RDP, CRM systems, and so on) alongside lower-priority, high-bandwidth traffic (FTP, backup, replication, and so on). This enables you to protect delay-sensitive traffic such as VoIP, video conferencing, RDP, and Citrix without having to reserve large amounts of bandwidth for those traffic classes.
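    HFSC expresses such guarantees as a service curve per class. A common form is two-piece linear: a steep initial slope buys low delay, and a gentler long-term slope sets sustained bandwidth. A sketch, with assumed parameter names:

        def service_curve(t, m1, d, m2):
            """Service guaranteed after t seconds of backlog:
            slope m1 (bits/s) for the first d seconds, then slope m2.
            m1 > m2 gives a class low delay without a high long-term rate."""
            if t < d:
                return m1 * t
            return m1 * d + m2 * (t - d)

        # A VoIP-like class: 2 Mbps for the first 10 ms, then 128 kbps
        # sustained; low latency without a large bandwidth reservation.
        print(service_curve(0.005, 2_000_000, 0.010, 128_000))   # 10000.0 bits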
    Additionally, HFSC provides a framework for the following:
  • Link sharing - Specifies how excess bandwidth is allocated among sibling classes. By default, all link shares are equal. QoS classes with a larger link-share weight are allocated more of the excess bandwidth than QoS classes with a lower link-share weight (see the sketch after this list).
  • Class hierarchy - Lets you create QoS classes as children of QoS classes other than the root class. This allows you to build a class tree with overall parameters for a certain traffic type and specific parameters for subtypes of that traffic.
  • For more information about QoS classes, see QoS Classes.
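    A sketch of the link-sharing rule (the names and fields are assumptions for illustration): excess bandwidth is divided among sibling classes in proportion to their link-share weights.

        def share_excess(excess_kbps, siblings):
            # Each sibling receives excess bandwidth proportional to its weight.
            total = sum(c["weight"] for c in siblings.values())
            return {name: excess_kbps * c["weight"] / total
                    for name, c in siblings.items()}

        siblings = {"voip": {"weight": 10}, "mail": {"weight": 5}, "bulk": {"weight": 1}}
        print(share_excess(1600, siblings))
        # -> {'voip': 1000.0, 'mail': 500.0, 'bulk': 100.0}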
    You can apply Riverbed QoS to both pass-through and optimized traffic, and it does not require the optimization service. For optimized traffic, QoS classification occurs during connection setup, before optimization and compression; QoS shaping and enforcement occur after optimization and compression. Pass-through traffic has QoS shaping and enforcement applied appropriately. However, with the introduction of the SteelHead CX and SteelHead EX, there are platform-specific limits defined for the following outbound QoS settings:
  • Maximum configurable root bandwidth
  • Maximum number of classes
  • Maximum number of rules
  • Maximum number of sites
  • There are no platform-specific limits for inbound QoS.
    For more information about limits, see Guidelines for the Maximum Number of QoS Classes, Sites, and Rules.
    You can perform differentiated services code point (DSCP) marking and QoS enforcement on the same traffic. First mark the traffic, and then perform QoS classification and management on the post-marked traffic.
    For information about marking traffic, see QoS Marking.
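    For comparison, an application can also set the DSCP value itself through standard socket options (this is ordinary socket usage, not a SteelHead API); any downstream device can then classify and enforce on the post-marked traffic:

        import socket

        EF_DSCP = 46    # Expedited Forwarding, commonly used for VoIP

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # The DSCP value occupies the upper six bits of the legacy ToS byte.
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)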