June 29, 2012
The Real Story about Random Early Discard (RED)
What is RED and its Effect on Queue Control?
Random Early Discard (RED) is a queue control mechanism designed to improve link utilization during network congestion. Some radio vendors have made exaggerated claims about its capacity to improve “radio link utilization.”
To help control network congestion (i.e., overloading), the Internet’s Transmission Control Protocol (TCP) uses a mechanism known as the TCP sliding window, which is designed to maximize bandwidth usage while avoiding congestion. Under control of the sliding window, a TCP connection’s window size (i.e., its share of the bandwidth) increases as acknowledgements (ACKs) are received. With multiple connections in play, this can reach the point where all bandwidth is consumed, causing congestion and the dropping (i.e., tail dropping) of frames. At that point, the sliding window mechanism triggers a simultaneous reduction in window size for all TCP connections; once the network stabilizes, it ramps the windows back up, creating an oscillating traffic pattern. This oscillating behavior, termed “TCP global synchronization,” results in inefficient use of the available bandwidth.
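The synchronized sawtooth can be illustrated with a toy simulation. This is a simplified sketch, not a real TCP stack: the link capacity, flow count, and window units are all assumed values chosen for illustration. Each flow grows its window by one segment per round (additive increase); when the shared link overflows, tail dropping makes every flow halve its window at the same time (multiplicative decrease), so the aggregate rate oscillates in lockstep.

```python
LINK_CAPACITY = 50        # link capacity in arbitrary segment units (assumed)
flows = [10, 10, 10, 10]  # initial congestion windows for four TCP flows

history = []
for tick in range(12):
    history.append(sum(flows))
    if sum(flows) > LINK_CAPACITY:
        # Tail drop: every flow sees loss and halves its window at once,
        # producing the synchronized sawtooth ("TCP global synchronization").
        flows = [w // 2 for w in flows]
    else:
        # Each flow grows its window by one segment per round trip.
        flows = [w + 1 for w in flows]

print(history)  # aggregate demand per round: ramps to 52, collapses to 24, repeats
```

Note that the aggregate never settles near the capacity of 50; it repeatedly overshoots and collapses, which is the throughput waste the article describes.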
Counteracting the Problem
To counteract this problem, different queue control mechanisms have been devised. A common one is Random Early Discard, also called Random Early Detection (RED). In RED, when the queue exceeds a certain size, the network component marks or drops each arriving packet with a probability that depends on the queue size. When the buffer is full, the probability reaches 1 and all incoming packets are dropped. The chance that the network component notifies a particular sender to reduce its data transmission rate is proportional to the sender’s share of the bandwidth of the link—an improvement over tail dropping.
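The classic RED drop curve can be sketched as follows. The threshold and probability values here are illustrative assumptions, not vendor defaults: below a minimum threshold nothing is dropped, above a maximum threshold everything is dropped, and in between the drop probability rises linearly.

```python
def red_drop_probability(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """Linear RED drop/mark probability as a function of average queue depth.

    min_th, max_th, and max_p are illustrative values, not recommended
    settings; real deployments tune them per link (as the studies below note).
    """
    if avg_queue < min_th:
        return 0.0   # queue is short: accept every packet
    if avg_queue >= max_th:
        return 1.0   # buffer effectively full: drop every arrival
    # Between the thresholds, the probability rises linearly toward max_p.
    # Because heavier senders contribute more arrivals, they are
    # proportionally more likely to have a packet dropped.
    return max_p * (avg_queue - min_th) / (max_th - min_th)

print(red_drop_probability(4))   # below min_th: never dropped
print(red_drop_probability(10))  # midway between thresholds
print(red_drop_probability(20))  # above max_th: always dropped
```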
An issue recognized early on with the RED algorithm is that it cannot differentiate between traffic types. A variation of RED that addresses this problem is called Weighted Random Early Detection (WRED). In WRED, the probability of dropping packets is based on both the size of the queue and the traffic flow type (IP precedence).
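WRED can be sketched as RED with per-class parameters. The profile table below is hypothetical (the precedence values, thresholds, and probabilities are assumptions for illustration): higher-precedence traffic gets a higher minimum threshold and a lower maximum drop probability, so at the same queue depth it is dropped far less often than best-effort traffic.

```python
# Hypothetical drop profiles keyed by IP precedence (illustrative values only).
WRED_PROFILES = {
    0: {"min_th": 5,  "max_th": 15, "max_p": 0.2},   # best effort
    5: {"min_th": 10, "max_th": 15, "max_p": 0.05},  # higher precedence
}

def wred_drop_probability(avg_queue, precedence):
    """RED drop curve selected by the packet's IP precedence class."""
    p = WRED_PROFILES[precedence]
    if avg_queue < p["min_th"]:
        return 0.0
    if avg_queue >= p["max_th"]:
        return 1.0
    return p["max_p"] * (avg_queue - p["min_th"]) / (p["max_th"] - p["min_th"])

# At the same average queue depth, best-effort traffic is much more
# likely to be dropped than high-precedence traffic.
print(wred_drop_probability(12, precedence=0))
print(wred_drop_probability(12, precedence=5))
```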
Improvements Using RED and WRED Algorithms Are Modest
Although some microwave vendors claim to obtain up to 25 percent improvement in “radio link utilization” with the RED and WRED algorithms, independent studies show that RED improvement for real data applications is more modest. Also, bear in mind that RED is only beneficial where the bulk of the traffic is TCP/IP.
The first study, from AT&T Labs and Stanford University, used a simple analytical model to show that although RED may prevent traffic rates from moving in lockstep, the algorithm is not enough to keep them from oscillating and wasting throughput across all traffic flows. The study suggests that if buffer sizes are small, randomized policies are unlikely to reduce aggregate periodic behavior.
A second study, from the University of North Carolina, concludes that below the saturation point (90 percent utilization) there is little difference in performance between RED and tail dropping. For loads from 90 to 100 percent, RED can be tuned to outperform tail dropping, but only with careful RED parameter settings and at the cost of degraded latency. Analyzing the results from these and other studies, it becomes clear that the claim of 25 percent link improvement is not realistic.
Marketing Engineering Specialist