TCP congestion control

Transmission Control Protocol (TCP) uses a congestion control algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, along with other schemes including slow start[1] and congestion window (CWND), to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet.[2][3][4] Per the end-to-end principle, congestion control is largely a function of internet hosts, not the network itself. There are several variations and versions of the algorithm implemented in protocol stacks of operating systems of computers that connect to the Internet.

To avoid congestive collapse, TCP uses a multi-faceted congestion-control strategy. For each connection, TCP maintains a CWND, limiting the total number of unacknowledged packets that may be in transit end-to-end. This is somewhat analogous to TCP's sliding window used for flow control.

Additive increase/multiplicative decrease

The additive increase/multiplicative decrease (AIMD) algorithm is a closed-loop control algorithm. AIMD combines linear growth of the congestion window with an exponential reduction when congestion takes place. Multiple flows using AIMD congestion control will eventually converge to use equal amounts of a contended link.[5]

This is the algorithm that is described in RFC 5681 for the "congestion avoidance" state.[6]

Congestion window

In TCP, the congestion window (CWND) is one of the factors that determines the number of bytes that can be sent out at any time. The congestion window is maintained by the sender and is a means of stopping a link between the sender and the receiver from becoming overloaded with too much traffic. This should not be confused with the sliding window maintained by the sender which exists to prevent the receiver from becoming overloaded. The congestion window is calculated by estimating how much congestion there is on the link.

When a connection is set up, the congestion window, a value maintained independently at each host, is set to a small multiple of the maximum segment size (MSS) allowed on that connection. Further variance in the congestion window is dictated by an additive increase/multiplicative decrease (AIMD) approach. This means that if all segments are received and the acknowledgments reach the sender on time, some constant is added to the window size; the details of the increase depend on the algorithm in use.

A system administrator may adjust the maximum window size limit, or adjust the constant added during additive increase, as part of TCP tuning.

The flow of data over a TCP connection is also controlled by the receive window advertised by the receiver. A sender may send no more data than the minimum of its own congestion window and the receive window.
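
For illustration, a minimal sketch (in Python, with hypothetical names) of how these two windows jointly bound transmission:

```python
# Minimal sketch: the sender may have at most min(cwnd, rwnd) bytes
# unacknowledged, so the usable window is what remains of that bound.
def usable_window(cwnd: int, rwnd: int, bytes_in_flight: int) -> int:
    """Bytes the sender may still transmit without overrunning either
    the network path (cwnd) or the receiver's buffer (rwnd)."""
    return max(0, min(cwnd, rwnd) - bytes_in_flight)
```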

Slow start

Slow start, defined by RFC 5681,[7] is part of the congestion control strategy used by TCP in conjunction with other algorithms to avoid sending more data than the network is capable of forwarding, that is, to avoid causing network congestion.

Slow start begins with a congestion window size (CWND) of 1, 2, 4 or 10 MSS.[8][3]: 1  The congestion window is increased by 1 MSS with each acknowledgement (ACK) received, effectively doubling the window size each RTT.[a]

The transmission rate is increased by the slow-start algorithm until either a packet loss is detected, the receiver's advertised window (rwnd) becomes the limiting factor, or the slow start threshold (ssthresh) is reached. ssthresh is a value set to limit slow start; it determines whether the slow start or the congestion avoidance algorithm is used.

If the CWND reaches ssthresh, TCP switches to the congestion avoidance algorithm, under which the CWND is increased by up to 1 MSS for each RTT. A common formula is that each new ACK increases the CWND by MSS * MSS / CWND. This grows the CWND almost linearly and provides an acceptable approximation of additive increase.
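
The two growth rules can be condensed into a short sketch. This is illustrative only, not any particular stack's implementation; the byte-counted CWND and the MSS value are assumptions:

```python
MSS = 1460  # assumed segment size in bytes, for illustration

def on_new_ack(cwnd: int, ssthresh: int) -> int:
    """Grow cwnd on each ACK of new data: exponentially below ssthresh
    (slow start), by roughly one MSS per RTT above it (congestion avoidance)."""
    if cwnd < ssthresh:
        return cwnd + MSS              # slow start: +1 MSS per ACK
    return cwnd + MSS * MSS // cwnd    # congestion avoidance: ~+1 MSS per RTT
```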

If a loss event occurs, TCP assumes that it is due to network congestion and takes steps to reduce the offered load on the network. These measures depend on the exact TCP congestion avoidance algorithm used.

When a TCP sender detects segment loss using the retransmission timer and the given segment has not yet been resent, the value of ssthresh must be set to no more than half of the amount of data that has been sent but not yet cumulatively acknowledged or 2 * MSS, whichever value is greater.

TCP Tahoe
When a loss occurs, a retransmission is sent, half of the current CWND is saved as ssthresh, and slow start begins again from its initial CWND.
TCP Reno
A fast retransmit is sent, half of the current CWND is saved as ssthresh and as new CWND, thus skipping slow start and going directly to the congestion avoidance algorithm. The overall algorithm here is called fast recovery.
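
A hedged sketch of these two reactions, following the RFC 5681 formulas quoted above (variable names are illustrative; fast-recovery window inflation is omitted):

```python
MSS = 1460  # assumed segment size in bytes, for illustration

def on_timeout(flight_size: int) -> tuple[int, int]:
    """Both Tahoe and Reno on an RTO: ssthresh = max(FlightSize/2, 2*MSS),
    then slow start restarts from a one-segment loss window."""
    ssthresh = max(flight_size // 2, 2 * MSS)
    return MSS, ssthresh               # (cwnd, ssthresh)

def on_three_dup_acks(cwnd: int, flavor: str = "reno") -> tuple[int, int]:
    """Reaction to three duplicate ACKs, after the fast retransmit."""
    ssthresh = max(cwnd // 2, 2 * MSS)
    if flavor == "tahoe":
        return MSS, ssthresh           # Tahoe: fall back to slow start
    return ssthresh, ssthresh          # Reno: halve and enter fast recovery
```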

Slow start assumes that unacknowledged segments are due to network congestion. While this is an acceptable assumption for many networks, segments may be lost for other reasons, such as poor data link layer transmission quality. Thus, slow start can perform poorly in situations with poor reception, such as wireless networks.

The slow start protocol also performs badly for short-lived connections. Older web browsers would create many consecutive short-lived connections to the web server, and would open and close the connection for each file requested. This kept most connections in the slow start mode, which resulted in poor response time. To avoid this problem, modern browsers either open multiple connections simultaneously or reuse one connection for all files requested from a particular web server. Connections, however, cannot be reused for the multiple third-party servers used by web sites to implement web advertising, sharing features of social networking services,[9] and counter scripts of web analytics.

Fast retransmit

Fast retransmit is an enhancement to TCP that reduces the time a sender waits before retransmitting a lost segment. A TCP sender normally uses a simple timer to recognize lost segments. If an acknowledgment is not received for a particular segment within a specified time (a function of the estimated round-trip delay time), the sender will assume the segment was lost in the network and will retransmit the segment.

Duplicate acknowledgment is the basis for the fast retransmit mechanism. After receiving a packet, an acknowledgement is sent for the last in-order byte of data received. For an in-order packet, this is effectively the last packet's sequence number plus the current packet's payload length. If the next packet in the sequence is lost but a third packet in the sequence is received, then the receiver can only acknowledge the last in-order byte of data, which is the same value as was acknowledged for the first packet. The second packet is lost and the third packet is not in order, so the last in-order byte of data remains the same as before. Thus a duplicate acknowledgment occurs. The sender continues to send packets, and a fourth and fifth packet are received by the receiver. Again, the second packet is missing from the sequence, so the last in-order byte has not changed. Duplicate acknowledgments are sent for both of these packets.

When a sender receives three duplicate acknowledgments, it can be reasonably confident that the segment carrying the data that followed the last in-order byte specified in the acknowledgment was lost. A sender with fast retransmit will then retransmit this packet immediately without waiting for its timeout. On receipt of the retransmitted segment, the receiver can acknowledge the last in-order byte of data received. In the above example, this would acknowledge to the end of the payload of the fifth packet. There is no need to acknowledge intermediate packets since TCP uses cumulative acknowledgments by default.
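
A sender-side sketch of the rule just described, with hypothetical names; real stacks additionally check that the duplicates carry no data and do not change the advertised window:

```python
DUP_ACK_THRESHOLD = 3  # duplicates needed to trigger fast retransmit

class FastRetransmitState:
    """Counts consecutive ACKs repeating the same cumulative ACK number."""
    def __init__(self):
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_no, retransmit):
        if ack_no == self.last_ack:
            self.dup_count += 1
            if self.dup_count == DUP_ACK_THRESHOLD:
                retransmit(ack_no)  # resend the segment starting at ack_no
        else:
            self.last_ack = ack_no
            self.dup_count = 0
```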

Algorithms

The naming convention for congestion control algorithms (CCAs) may have originated in a 1996 paper by Kevin Fall and Sally Floyd.[10][failed verification]

The following is one possible classification according to these properties:

  1. the type and amount of feedback received from the network
  2. incremental deployability on the current Internet
  3. the aspect of performance it aims to improve: high bandwidth-delay product networks (B); lossy links (L); fairness (F); advantage to short flows (S); variable-rate links (V); speed of convergence (C)
  4. the fairness criterion it uses

Some well-known congestion avoidance mechanisms are classified by this scheme as follows:

| Variant | Feedback | Required changes | Benefits | Fairness |
|---|---|---|---|---|
| (New) Reno | Loss | — | — | Delay |
| Vegas | Delay | Sender | Less loss | Proportional |
| High Speed | Loss | Sender | High bandwidth | |
| BIC | Loss | Sender | High bandwidth | |
| CUBIC | Loss | Sender | High bandwidth | |
| C2TCP[11][12] | Loss/Delay | Sender | Ultra-low latency and high bandwidth | |
| NATCP[13] | Multi-bit signal | Sender | Near Optimal Performance | |
| Elastic-TCP | Loss/Delay | Sender | High bandwidth/short & long-distance | |
| Agile-TCP | Loss | Sender | High bandwidth/short-distance | |
| H-TCP | Loss | Sender | High bandwidth | |
| FAST | Delay | Sender | High bandwidth | Proportional |
| Compound TCP | Loss/Delay | Sender | High bandwidth | Proportional |
| Westwood | Loss/Delay | Sender | Lossy links | |
| Jersey | Loss/Delay | Sender | Lossy links | |
| BBR[14] | Delay | Sender | BLVC, Bufferbloat | |
| CLAMP | Multi-bit signal | Receiver, Router | Variable-rate links | Max-min |
| TFRC | Loss | Sender, Receiver | No Retransmission | Minimum delay |
| XCP | Multi-bit signal | Sender, Receiver, Router | BLFC | Max-min |
| VCP | 2-bit signal | Sender, Receiver, Router | BLF | Proportional |
| MaxNet | Multi-bit signal | Sender, Receiver, Router | BLFSC | Max-min |
| JetMax | Multi-bit signal | Sender, Receiver, Router | High bandwidth | Max-min |
| RED | Loss | Router | Reduced delay | |
| ECN | Single-bit signal | Sender, Receiver, Router | Reduced loss | |

TCP Tahoe and Reno

TCP Tahoe and Reno algorithms were retrospectively named after the versions or flavours of the 4.3BSD operating system in which each first appeared (which were themselves named after Lake Tahoe and the nearby city of Reno, Nevada). The Tahoe algorithm first appeared in 4.3BSD-Tahoe (which was made to support the CCI Power 6/32 "Tahoe" minicomputer), and was later made available to non-AT&T licensees as part of the 4.3BSD Networking Release 1; this ensured its wide distribution and implementation. Improvements were made in 4.3BSD-Reno and subsequently released to the public as Networking Release 2 and later 4.4BSD-Lite.

While both consider retransmission timeout (RTO) and duplicate ACKs as packet loss events, Tahoe and Reno differ primarily in how they react to duplicate ACKs:

  • Tahoe: if three duplicate ACKs are received (i.e. four ACKs acknowledging the same packet, which are not piggybacked on data and do not change the receiver's advertised window), Tahoe performs a fast retransmit, sets the slow start threshold to half of the current congestion window, reduces the congestion window to 1 MSS, and resets to slow start state.[15]
  • Reno: if three duplicate ACKs are received, Reno performs a fast retransmit and skips the slow start phase by instead halving the congestion window (rather than setting it to 1 MSS like Tahoe), setting ssthresh equal to the new congestion window, and entering a phase called fast recovery.[16]

In both Tahoe and Reno, if an ACK times out (RTO timeout), slow start is used, and both algorithms reduce the congestion window to 1 MSS.[citation needed]

TCP New Reno

TCP New Reno, defined by RFC 6582 (which obsoletes the previous definitions in RFC 3782 and RFC 2582), improves retransmission during the fast-recovery phase of TCP Reno.

During fast recovery, to keep the transmit window full, for every duplicate ACK that is returned, a new unsent packet from the end of the congestion window is sent.

The difference from Reno is that New Reno does not halve ssthresh immediately on each loss indication, which could reduce the window too much when multiple packets are lost; it does not exit fast recovery and reset ssthresh until all of the outstanding data has been acknowledged.

After a retransmission, there are two cases for newly acknowledged data:

  • Full acknowledgments: the ACK acknowledges all the intermediate segments sent; ssthresh is left unchanged and cwnd can be set to ssthresh.
  • Partial acknowledgments: the ACK does not acknowledge all outstanding data, meaning another loss may have occurred; the first unacknowledged segment is retransmitted if permitted.

It uses a variable called "recover" to record how much data needs to be recovered. After a retransmit timeout, it records the highest sequence number transmitted in the recover variable and exits the fast recovery procedure. If this sequence number is acknowledged, TCP returns to the congestion avoidance state.
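
A hedged sketch of the full/partial-acknowledgment test during fast recovery (names are illustrative; window inflation and timers are omitted):

```python
def newreno_ack_in_recovery(ack_no, recover, cwnd, ssthresh, retransmit):
    """Handle an ACK while in New Reno fast recovery.
    `recover` is the highest sequence number sent when loss was detected."""
    if ack_no > recover:
        # Full acknowledgment: everything outstanding at loss time is ACKed;
        # deflate the window and leave fast recovery.
        return ssthresh, False          # (new cwnd, still_in_recovery)
    # Partial acknowledgment: another segment was lost in the same window.
    retransmit(ack_no)                  # resend first unacknowledged segment
    return cwnd, True                   # stay in fast recovery
```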

A problem occurs with New Reno when there are no packet losses but instead, packets are reordered by more than 3 packet sequence numbers. In this case, New Reno mistakenly enters fast recovery. When the reordered packet is delivered, duplicate and needless retransmissions are immediately sent.

New Reno performs as well as SACK at low packet error rates and substantially outperforms Reno at high error rates.[17]

TCP Vegas

Until the mid-1990s, the timeouts set and round-trip delays measured by TCP were based upon only the last transmitted packet in the transmit buffer. University of Arizona researchers Larry Peterson and Lawrence Brakmo introduced TCP Vegas, in which timeouts were set and round-trip delays were measured for every packet in the transmit buffer. In addition, TCP Vegas uses additive increases in the congestion window. In a comparison study of various TCP CCAs, TCP Vegas appeared to be the smoothest, followed by TCP CUBIC.[18]

TCP Vegas was not widely deployed outside Peterson's laboratory but was selected as the default congestion control method for DD-WRT firmware v24 SP2.[19]

TCP Hybla

TCP Hybla[20][21] aims to eliminate penalties to TCP connections that use high-latency terrestrial or satellite radio links. Hybla improvements are based on analytical evaluation of the congestion window dynamics.[22]

TCP BIC

Binary Increase Congestion control (BIC) is a TCP implementation with an optimized CCA for high-speed networks with high latency, known as long fat networks (LFNs).[23] BIC is used by default in Linux kernels 2.6.8 through 2.6.18.[citation needed]

TCP CUBIC

CUBIC is a less aggressive and more systematic derivative of BIC, in which the window is a cubic function of time since the last congestion event, with the inflection point set to the window prior to the event. CUBIC is used by default in Linux kernels since version 2.6.19.
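
The cubic growth function can be written directly. This sketch uses the constants from RFC 8312 (C = 0.4, multiplicative decrease factor β = 0.7); those defaults are the RFC's, not necessarily a given kernel's:

```python
C = 0.4      # scaling constant from RFC 8312
BETA = 0.7   # window is reduced to BETA * w_max on a congestion event

def cubic_window(t: float, w_max: float) -> float:
    """CUBIC window (in MSS) as a cubic function of the time t, in seconds,
    since the last congestion event; the inflection point sits at w_max."""
    k = ((w_max * (1.0 - BETA)) / C) ** (1.0 / 3.0)  # time to regain w_max
    return C * (t - k) ** 3 + w_max
```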

Agile-SD TCP

Agile-SD is a Linux-based CCA designed for the Linux kernel. It is a receiver-side algorithm that employs a loss-based approach using a novel mechanism, called the agility factor (AF), to increase bandwidth utilization over high-speed and short-distance networks (low bandwidth-delay product networks) such as local area networks or fiber-optic networks, especially when the applied buffer size is small.[24] It has been evaluated by comparing its performance to Compound TCP (the default CCA in MS Windows) and CUBIC (the default in Linux) using the NS-2 simulator. It improves total performance by up to 55% in terms of average throughput.

TCP Westwood+

Westwood+ is a sender-only modification of TCP Reno that optimizes the performance of TCP congestion control over both wired and wireless networks. TCP Westwood+ is based on end-to-end bandwidth estimation to set the congestion window and slow-start threshold after a congestion episode, that is, after three duplicate acknowledgments or a timeout. The bandwidth is estimated by averaging the rate of returning acknowledgment packets. In contrast with TCP Reno, which blindly halves the congestion window after three duplicate ACKs, TCP Westwood+ adaptively sets a slow-start threshold and a congestion window that takes into account an estimate of bandwidth available at the time congestion is experienced. Compared to Reno and New Reno, Westwood+ significantly increases throughput over wireless links and improves fairness in wired networks.[citation needed]
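
A minimal sketch of the idea, assuming a bandwidth estimate (BWE) is already maintained from the ACK stream; the names and the segment-counted windows are illustrative:

```python
def westwood_on_congestion(bwe_bytes_per_s: float, rtt_min_s: float,
                           mss: int, timeout: bool) -> tuple[int, int]:
    """Set ssthresh to the estimated bandwidth-delay product, in segments,
    after three duplicate ACKs (timeout=False) or an RTO (timeout=True)."""
    ssthresh = max(2, int(bwe_bytes_per_s * rtt_min_s / mss))
    cwnd = 1 if timeout else ssthresh   # an RTO falls back to slow start
    return cwnd, ssthresh
```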

Compound TCP

Compound TCP is a Microsoft implementation of TCP which maintains two different congestion windows simultaneously, with the goal of achieving good performance on LFNs while not impairing fairness. It has been widely deployed in Windows versions since Microsoft Windows Vista and Windows Server 2008 and has been ported to older Microsoft Windows versions as well as Linux.

TCP Proportional Rate Reduction

TCP Proportional Rate Reduction (PRR)[25] is an algorithm designed to improve the accuracy of data sent during recovery. The algorithm ensures that the window size after recovery is as close as possible to the slow start threshold. In tests performed by Google, PRR resulted in a 3–10% reduction in average latency and recovery timeouts were reduced by 5%.[26] PRR is available in Linux kernels since version 3.2.[27]
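
RFC 6937's core rule paces transmissions during recovery in proportion to newly delivered data. A simplified sketch (the RFC's slow-start branch is abbreviated here):

```python
import math

def prr_sndcnt(prr_delivered: int, prr_out: int, ssthresh: int,
               recover_fs: int, pipe: int) -> int:
    """Bytes that may be sent on this ACK during recovery.
    recover_fs is the amount of data in flight when recovery began;
    prr_delivered / prr_out track bytes delivered / sent since then."""
    if pipe > ssthresh:
        # Proportional rate reduction: shrink in step with deliveries.
        return max(0, math.ceil(prr_delivered * ssthresh / recover_fs) - prr_out)
    # Below ssthresh, grow back toward it (simplified from the RFC).
    return max(0, ssthresh - pipe)
```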

TCP BBR

Bottleneck Bandwidth and Round-trip propagation time (BBR) is a CCA developed at Google in 2016.[28] While most CCAs are loss-based, in that they rely on packet loss to detect congestion and lower rates of transmission, BBR, like TCP Vegas, is model-based. The algorithm uses the maximum bandwidth and round-trip time at which the network delivered the most recent flight of outbound data packets to build a model of the network. Each cumulative or selective acknowledgment of packet delivery produces a rate sample that records the amount of data delivered over the time interval between the transmission of a data packet and the acknowledgment of that packet.[29]
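
A sketch of that sampling, with hypothetical field names: each packet remembers the connection's delivered-byte count and timestamp at send time, and the matching acknowledgment closes the interval:

```python
def delivery_rate_sample(delivered_now: int, delivered_at_send: int,
                         ack_time: float, send_time: float) -> float:
    """Bytes per second delivered over the interval between transmitting a
    packet and receiving its (cumulative or selective) acknowledgment."""
    interval = ack_time - send_time
    if interval <= 0:
        return 0.0
    return (delivered_now - delivered_at_send) / interval
```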

When implemented at YouTube, BBRv1 yielded an average of 4% higher network throughput and up to 14% in some countries.[30] BBR has been available for Linux TCP since Linux 4.9.[31] It is also available for QUIC.[32]

BBR version 1 (BBRv1) fairness to non-BBR streams is disputed. While Google's presentation shows BBRv1 co-existing well with CUBIC,[28] researchers like Geoff Huston and Hock, Bless and Zitterbart found it unfair to other streams and not scalable.[33] Hock et al. also found "some severe inherent issues such as increased queuing delays, unfairness, and massive packet loss" in the BBR implementation of Linux 4.9.[34] Soheil Abbasloo et al. (authors of C2TCP) show that BBRv1 does not perform well in dynamic environments such as cellular networks.[11][12] They have also shown that BBR has an unfairness issue. For instance, when a CUBIC flow (which is the default TCP implementation in Linux, Android, and MacOS) coexists with a BBR flow in the network, the BBR flow can dominate the CUBIC flow and get the whole link bandwidth from it (see figure 16 in [11]).

Version 2 attempts to deal with the issue of unfairness when operating alongside loss-based congestion management such as CUBIC.[35] In BBRv2 the model used by BBRv1 is augmented to include information about packet loss and information from Explicit Congestion Notification (ECN).[36] Whilst BBRv2 may at times have lower throughput than BBRv1, it is generally considered to have better goodput.[citation needed]

Version 3 (BBRv3) fixes two bugs in BBRv2 (premature end of bandwidth probing, bandwidth convergence) and performs some performance tuning. There is also a variant, termed BBR.Swift, optimized for datacenter-internal links: it uses network_RTT (excluding receiver delay) as the main congestion control signal.[36]

C2TCP

Cellular Controlled Delay TCP (C2TCP)[11][12] was motivated by the lack of a flexible end-to-end TCP approach that can satisfy various QoS requirements for different applications without requiring any changes in the network devices. C2TCP aims to satisfy the ultra-low latency and high-bandwidth requirements of applications such as virtual reality, video conferencing, online gaming and vehicular communication systems in highly dynamic environments such as current LTE and future 5G cellular networks. C2TCP works as an add-on on top of loss-based TCP (e.g. Reno, NewReno, CUBIC, BIC, ...). It only needs to be installed on the server side, and it bounds the average delay of packets to the desired delays set by the applications.

Researchers at NYU[37] showed that C2TCP outperforms the delay and delay-variation performance of various state-of-the-art TCP schemes. For instance, they showed that compared to BBR, CUBIC, and Westwood on average, C2TCP decreases the average delay of packets by about 250%, 900%, and 700% respectively on various cellular network environments.[11]

Elastic-TCP

Elastic-TCP was proposed in February 2019 to increase bandwidth utilization over high-BDP networks in support of cloud computing. It is a Linux-based CCA that is designed for the Linux kernel. It is a receiver-side algorithm that employs a loss-delay-based approach using a novel mechanism called a window-correlated weighting function (WWF). It has a high level of elasticity to deal with different network characteristics without the need for human tuning. It has been evaluated by comparing its performance to Compound TCP (the default CCA in MS Windows), CUBIC (the default for Linux) and TCP-BBR (the default of Linux 4.9 used by Google) using the NS-2 simulator and testbed. Elastic-TCP significantly improves the total performance in terms of average throughput, loss ratio, and delay.[38]

NATCP

Soheil Abbasloo et al. proposed NATCP (Network-Assisted TCP)[13] a controversial[according to whom?] TCP design targeting multi-access edge computing (MEC). The key idea of NATCP is that if the characteristics of the network were known beforehand, TCP would have been designed differently. Therefore, NATCP employs the available features and properties in the current MEC-based cellular architectures to push the performance of TCP close to the optimal performance. NATCP uses out-of-band feedback from the network to the servers located nearby. The feedback from the network, which includes the capacity of the cellular access link and the minimum RTT of the network, guides the servers to adjust their sending rates. As preliminary results show, NATCP outperforms the state-of-the-art TCP schemes.[13][39]

Other TCP congestion avoidance algorithms

  • FAST TCP
  • Generalized FAST TCP[40]
  • H-TCP
  • Data Center TCP
  • High Speed TCP
  • HSTCP-LP[41]
  • TCP-Illinois
  • TCP-LP[41]
  • TCP SACK
  • Scalable TCP
  • TCP Veno[42]
  • Westwood
  • XCP[43]
  • YeAH-TCP[44]
  • TCP-FIT[45]
  • Congestion Avoidance with Normalized Interval of Time (CANIT)[46]
  • Non-linear neural network congestion control based on genetic algorithm for TCP/IP networks[47]
  • D-TCP[48]
  • NexGen D-TCP[49]
  • Copa[50]

TCP New Reno was the most commonly implemented algorithm;[citation needed] SACK support is very common[citation needed] and is an extension to Reno/New Reno. Most others are competing proposals that still need evaluation. Starting with 2.6.8, the Linux kernel switched the default implementation from New Reno to BIC. The default was changed again to CUBIC in version 2.6.19. FreeBSD uses CUBIC as the default algorithm from version 14.X onwards;[51] previous versions used New Reno. However, FreeBSD supports a number of other choices.[52]

When the per-flow product of bandwidth and latency increases, regardless of the queuing scheme, TCP becomes inefficient and prone to instability. This becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links.

TCP Interactive (iTCP)[53] allows applications to subscribe to TCP events and respond accordingly, enabling various functional extensions to TCP from outside the TCP layer. Most TCP congestion schemes work internally; iTCP additionally enables advanced applications to directly participate in congestion control, such as controlling the source generation rate.

Zeta-TCP detects congestion from both latency and loss-rate measures. To maximize goodput, Zeta-TCP applies different congestion window backoff strategies based on the likelihood of congestion. It also has other improvements to detect packet losses accurately, avoiding retransmission timeouts, and to accelerate and control the inbound (download) traffic.[54]

Classification by network awareness

CCAs may be classified in relation to network awareness, meaning the extent to which these algorithms are aware of the state of the network. This classification consists of three primary categories: black box, grey box, and green box.[55]

Black box algorithms offer blind methods of congestion control. They operate only on the binary feedback received upon congestion and do not assume any knowledge concerning the state of the networks which they manage.

Grey box algorithms use time-instances[clarification needed] in order to obtain measurements and estimations of bandwidth, flow contention, and other knowledge of network conditions.

Green box algorithms offer bimodal methods of congestion control which measure the fair share of total bandwidth that should be allocated to each flow at any point during the system's execution.

Black box

  • Highspeed-TCP[56]
  • BIC TCP (Binary Increase Congestion Control Protocol) uses a concave increase of the source's rate after each congestion event until the window is equal to that before the event, in order to maximize the time that the network is fully utilized. After that, it probes aggressively.
  • CUBIC TCP – a less aggressive and more systematic derivative of BIC, in which the window is a cubic function of time since the last congestion event, with the inflection point set to the window prior to the event.
  • AIMD-FC (additive increase multiplicative decrease with fast convergence), an improvement of AIMD.[57]
  • Binomial Mechanisms
  • SIMD Protocol
  • GAIMD

Grey box

  • TCP Vegas – estimates the queuing delay, and linearly increases or decreases the window so that a constant number of packets per flow are queued in the network. Vegas implements proportional fairness.
  • FAST TCP – achieves the same equilibrium as Vegas, but uses proportional control instead of linear increase, and intentionally scales the gain down as the bandwidth increases with the aim of ensuring stability.
  • TCP BBR – estimates the queuing delay but uses exponential increase. Intentionally slows down periodically for fairness and decreased delay.
  • TCP-Westwood (TCPW) – a loss causes the window to be reset to the sender's estimate of the bandwidth-delay product (the smallest measured RTT multiplied by the observed rate of receiving ACKs).[58]
  • C2TCP[12][11]
  • TFRC[59]
  • TCP-Real
  • TCP-Jersey

Green box

  • Bimodal Mechanism – Bimodal Congestion Avoidance and Control mechanism.
  • Signalling methods implemented by routers
    • Random Early Detection (RED) randomly drops packets in proportion to the router's queue size, triggering multiplicative decrease in some flows.
    • Explicit Congestion Notification (ECN)
  • Network-Assisted Congestion Control
    • NATCP[13] - Network-Assisted TCP uses out-of-band explicit feedback indicating minimum RTT of the network and capacity of the cellular access link.
    • The variable-structure congestion control protocol (VCP) uses two ECN bits to explicitly feed back the network's state of congestion. It includes an end-host side algorithm as well.[citation needed]

The following algorithms require custom fields to be added to the TCP packet structure:

  • Explicit Control Protocol (XCP) – XCP packets carry a congestion header with a feedback field, indicating the increase or decrease of the sender's congestion window. XCP routers set the feedback value explicitly for efficiency and fairness.[60]
  • MaxNet – Uses a single header field, which carries the maximum congestion level of any router on a flow's path. The rate is set as a function of this maximum congestion, resulting in max-min fairness.[61]
  • JetMax, like MaxNet, responds only to the maximum congestion signal, but also carries other overhead fields.

Linux usage

  • BIC is used by default in Linux kernels 2.6.8 through 2.6.18. (August 2004 – September 2006)
  • CUBIC is used by default in Linux kernels since version 2.6.19. (November 2006)
  • PRR is incorporated in Linux kernels to improve loss recovery since version 3.2. (January 2012)
  • BBRv1 is incorporated in Linux kernels to enable model-based congestion control since version 4.9. (December 2016)

Notes

  1. ^ In practice, the receiver may delay its ACKs, typically sending one ACK for every two segments that it receives.[2]

References

  1. ^ Jacobson & Karels 1988.
  2. ^ a b W. Stevens (January 1997). TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms. doi:10.17487/RFC2001. RFC 2001.
  3. ^ a b M. Allman; S. Floyd; C. Partridge (October 2002). Increasing TCP's Initial Window. doi:10.17487/RFC3390. RFC 3390.
  4. ^ "TCP Congestion Avoidance Explained via a Sequence Diagram" (PDF). eventhelix.com.
  5. ^ Chiu, Dah-Ming; Raj Jain (1989). "Analysis of increase and decrease algorithms for congestion avoidance in computer networks". Computer Networks and ISDN Systems. 17: 1–14. CiteSeerX 10.1.1.136.8108. doi:10.1016/0169-7552(89)90019-6.
  6. ^ Allman, M.; Paxson, V. (September 2009). TCP Congestion Control. IETF. sec. 3.1. doi:10.17487/RFC5681. RFC 5681. Retrieved 4 March 2021.
  7. ^ Blanton, Ethan; Paxson, Vern; Allman, Mark (September 2009). "TCP Congestion Control". IETF.
  8. ^ Corbet, Jonathan. "Increasing the TCP initial congestion window". LWN. Retrieved 10 October 2012.
  9. ^ Nick O'Neill. "What's Making Your Site Go Slow? Could Be The Like Button". AllFacebook, 10 November 2010. Retrieved on 12 September 2012.
  10. ^ Fall, Kevin; Sally Floyd (July 1996). "Simulation-based Comparisons of Tahoe, Reno and SACK TCP" (PDF). Computer Communications Review. 26 (3): 5–21. CiteSeerX 10.1.1.586.2403. doi:10.1145/235160.235162. S2CID 7459148.
  11. ^ a b c d e f Abbasloo, S.; Xu, Y.; Chao, H. J. (2019). "C2TCP: A Flexible Cellular TCP to Meet Stringent Delay Requirements". IEEE Journal on Selected Areas in Communications. 37 (4): 918–932. arXiv:1810.13241. doi:10.1109/JSAC.2019.2898758. ISSN 0733-8716. S2CID 53107038.
  12. ^ a b c d Abbasloo, S.; Li, T.; Xu, Y.; Chao, H. J. (May 2018). "Cellular Controlled Delay TCP (C2TCP)". 2018 IFIP Networking Conference (IFIP Networking) and Workshops. pp. 118–126. arXiv:1807.02689. Bibcode:2018arXiv180702689A. doi:10.23919/IFIPNetworking.2018.8696844. ISBN 978-3-903176-08-9. S2CID 49650788.
  13. ^ a b c d Abbasloo et al. 2019.
  14. ^ Cardwell, Neal; Cheng, Yuchung; Gunn, C. Stephen; Yeganeh, Soheil Hassas; Jacobson, Van (2016). "BBR: Congestion-Based Congestion Control". Queue. 14 (5): 20–53. doi:10.1145/3012426.3022184.
  15. ^ Kurose & Ross 2008, p. 284.
  16. ^ Kurose & Ross 2012, p. 277.
  17. ^ VasanthiN., V.; SinghM., Ajith; Kumar, Romen; Hemalatha, M. (2011). "Evaluation of Protocols and Algorithms for Improving the Performance of TCP over Wireless/Wired Network". In Das, Vinu V; Thankachan, Nessy (eds.). Computational Intelligence and Information Technology. Communications in Computer and Information Science. Vol. 250. Springer. pp. 693–697. doi:10.1007/978-3-642-25734-6_120. ISBN 978-3-642-25733-9.
  18. ^ "Performance Analysis of TCP Congestion Control Algorithms" (PDF). Retrieved 26 March 2012.
  19. ^ "DD-WRT changelog". Retrieved 2 January 2012.
  20. ^ . Archived from the original on 11 October 2007. Retrieved 4 March 2007.
  21. ^ Caini, Carlo; Firrincieli, Rosario (2004). "TCP Hybla: a TCP enhancement for heterogeneous networks". International Journal of Satellite Communications and Networking. 22 (5): 547–566. doi:10.1002/sat.799. ISSN 1542-0973. S2CID 2360535.
  22. ^ Caini, C.; Firrincieli, R.; Lacamera, D. (2009). "Comparative Performance Evaluation of TCP Variants on Satellite Environments". 2009 IEEE International Conference on Communications. pp. 1–5. doi:10.1109/ICC.2009.5198834. S2CID 8352762.
  23. ^ Jacobson, V.; Braden, R.T. TCP extensions for long-delay paths. doi:10.17487/RFC1072. RFC 1072.
  24. ^ Alrshah, M.A.; Othman, M.; Ali, B.; Hanapi, Z.M. (September 2015). "Agile-SD: A Linux-based TCP congestion control algorithm for supporting high-speed and short-distance networks". Journal of Network and Computer Applications. 55: 181–190. arXiv:1601.05908. doi:10.1016/j.jnca.2015.05.011. S2CID 2645016.
  25. ^ Mathis, M.; Dukkipati, N.; Cheng, Y. (2013). Proportional Rate Reduction for TCP. doi:10.17487/RFC6937. RFC 6937.
  26. ^ Corbet, Jonathan. "LPC: Making the net go faster". Retrieved 6 June 2014.
  27. ^ "Linux 3.2 - Linux Kernel Newbies". Retrieved 6 June 2014.
  28. ^ a b "BBR: Congestion-Based Congestion Control". Retrieved 25 August 2017.
  29. ^ Cheng, Yuchung; Cardwell, Neal; Yeganeh, Soheil Hassas; Jacobson, Van (3 July 2017). "Delivery Rate Estimation". IETF. Retrieved 25 August 2017.
  30. ^ "TCP BBR congestion control comes to GCP – your Internet just got faster". Retrieved 25 August 2017.
  31. ^ "BBR congestion control [LWN.net]". lwn.net.
  32. ^ "BBR update". IETF.
  33. ^ "TCP and BBR" (PDF). Retrieved 27 May 2018.
  34. ^ "Experimental Evaluation of BBR Congestion Control" (PDF). Retrieved 27 May 2018.
  35. ^ "A Performance Evaluation of TCP BBRv2". Retrieved 12 January 2021.
  36. ^ a b Google TCP BBR team; Google QUIC BBR team (26 July 2023). BBRv3: Algorithm Bug Fixes and Public Internet Deployment. IETF 117: San Francisco.
  37. ^ "Cellular Controlled Delay TCP (C2TCP)". wp.nyu.edu. Retrieved 27 April 2019.
  38. ^ Alrshah, M.A.; Al-Maqri, M.A.; Othman, M. (June 2019). "Elastic-TCP: Flexible Congestion Control Algorithm to Adapt for High-BDP Networks". IEEE Systems Journal. 13 (2): 1336–1346. arXiv:1904.13105. Bibcode:2019ISysJ..13.1336A. doi:10.1109/JSYST.2019.2896195.
  39. ^ Abbasloo, Soheil (3 June 2019), GitHub - Soheil-ab/natcp, retrieved 5 August 2019
  40. ^ Yuan, Cao; Tan, Liansheng; Andrew, Lachlan L. H.; Zhang, Wei; Zukerman, Moshe (6 June 2008). "A generalized FAST TCP scheme". Computer Communications. 31 (14): 3242–3249. doi:10.1016/j.comcom.2008.05.028. hdl:1959.3/44051. S2CID 17988768.
  41. ^ a b "Rice Networks Group".
  42. ^ "TCP Veno: TCP Enhancement for Transmission over Wireless Access Networks" (PDF). IEEE Journal on Selected Areas in Communication.
  43. ^ "XCP @ ISI".
  44. ^ "High speed TPC" (PDF). www.csc.lsu.edu.
  45. ^ . Archived from the original on 3 April 2011. Retrieved 5 March 2011.{{cite web}}: CS1 maint: archived copy as title (link)
  46. ^ Benaboud, H.; Berqia, A.; Mikou, N. (2002). "An analytical study of CANIT algorithm in TCP protocol". ACM SIGMETRICS Performance Evaluation Review. 30 (3): 20. doi:10.1145/605521.605530. S2CID 6637174.
  47. ^ Rouhani, Modjtaba (2010). "Nonlinear Neural Network Congestion Control Based on Genetic Algorithm for TCP/IP Networks". 2010 2nd International Conference on Computational Intelligence, Communication Systems and Networks. pp. 1–6. doi:10.1109/CICSyN.2010.21. ISBN 978-1-4244-7837-8. S2CID 15126416.
  48. ^ Kanagarathinam, Madhan Raj; Singh, Sukhdeep; Sandeep, Irlanki; Roy, Abhishek; Saxena, Navrati (January 2018). "D-TCP: Dynamic TCP congestion control algorithm for next generation mobile networks". 2018 15th IEEE Annual Consumer Communications & Networking Conference (CCNC). pp. 1–6. doi:10.1109/CCNC.2018.8319185. ISBN 978-1-5386-4790-5. S2CID 3991163.
  49. ^ Kanagarathinam, Madhan Raj; Singh, Sukhdeep; Sandeep, Irlanki; Kim, Hanseok; Maheshwari, Mukesh Kumar; Hwang, Jaehyun; Roy, Abhishek; Saxena, Navrati (2020). "NexGen D-TCP: Next Generation Dynamic TCP Congestion Control Algorithm". IEEE Access. 8: 164482–164496. Bibcode:2020IEEEA...8p4482K. doi:10.1109/ACCESS.2020.3022284. ISSN 2169-3536. S2CID 221846931.
  50. ^ Arun, Venkat; Balakrishnan, Hari (2018). "Copa: Practical Delay-Based Congestion Control for the Internet". 15th USENIX Symposium on Networked Systems Design and Implementation (NSDI 18): 329–342. ISBN 978-1-939133-01-4.
  51. ^ "tcp: make CUBIC the default congestion control mechanism". 13 September 2022.
  52. ^ "Summary of Five New TCP Congestion Control Algorithms Project". 8 March 2011.
  53. ^ "iTCP - Interactive Transport Protocol - Medianet Lab, Kent State University".
  54. ^ "Whitepaper: Zeta-TCP - Intelligent, Adaptive, Asymmetric TCP Acceleration" (PDF). Retrieved 6 December 2019.
  55. ^ Lefteris Mamatas; Tobias Harks; Vassilis Tsaoussidis (January 2007). (PDF). Journal of Internet Engineering. 1 (1). Archived from the original (PDF) on 21 February 2014.
  56. ^ "HighSpeed TCP". www.icir.org.
  57. ^ . neu.edu. Archived from the original on 13 January 2009. Retrieved 13 March 2016.
  58. ^ "Welcome to Network Research Lab". www.cs.ucla.edu.
  59. ^ "Equation-Based Congestion Control for Unicast Applications". www.icir.org.
  60. ^ Katabi, Dina; Handley, Mark; Rohrs, Charlie (2002). "Congestion control for high bandwidth-delay product networks". Proceedings of the 2002 conference on Applications, technologies, architectures, and protocols for computer communications. New York, New York, USA: ACM Press. p. 89. doi:10.1145/633025.633035. ISBN 1-58113-570-X.
  61. ^ "MaxNet -- Max-Min Fair, Stable Explicit Signalling Congestion Control". netlab.caltech.edu.

Sources

  • Kurose, James; Ross, Keith (2008). Computer Networking: A Top-Down Approach (4th ed.). Addison Wesley. ISBN 978-0-13-607967-5.
  • Kurose, James; Ross, Keith (2012). Computer Networking: A Top-Down Approach (6th ed.). Pearson. ISBN 978-0-13-285620-1.
  • Abbasloo, Soheil; Xu, Yang; Chao, H. Jonathon; Shi, Hang; Kozat, Ulas C.; Ye, Yinghua (2019). "Toward Optimal Performance with Network Assisted TCP at Mobile Edge". 2nd USENIX Workshop on Hot Topics in Edge Computing (HotEdge 19). Renton, WA: USENIX Association.
  • Afanasyev, A.; N. Tilley; P. Reiher; L. Kleinrock (2010). "Host-to-host congestion control for TCP" (PDF). IEEE Communications Surveys and Tutorials. 12 (3): 304–342. CiteSeerX 10.1.1.228.3080. doi:10.1109/SURV.2010.042710.00114. S2CID 8638824.
  • Jacobson, Van; Karels, Michael J. (November 1988). "Congestion avoidance and control" (PDF). ACM SIGCOMM Computer Communication Review. 18 (4): 314–329. doi:10.1145/52325.52356.

External links

  • Approaches to Congestion Control in Packet Networks
  • Papers in Congestion Control
  • Allman, Mark; Paxson, Vern; Stevens, W. Richard (April 1999). "Fast Retransmit/Fast Recovery". TCP Congestion Control. IETF. sec. 3.2. doi:10.17487/RFC2581. RFC 2581. Retrieved 1 May 2010.
  • TCP Congestion Handling and Congestion Avoidance Algorithms – The TCP/IP Guide

congestion, control, transmission, control, protocol, uses, congestion, control, algorithm, that, includes, various, aspects, additive, increase, multiplicative, decrease, aimd, scheme, along, with, other, schemes, including, slow, start, congestion, window, c. Transmission Control Protocol TCP uses a congestion control algorithm that includes various aspects of an additive increase multiplicative decrease AIMD scheme along with other schemes including slow start 1 and congestion window CWND to achieve congestion avoidance The TCP congestion avoidance algorithm is the primary basis for congestion control in the Internet 2 3 4 Per the end to end principle congestion control is largely a function of internet hosts not the network itself There are several variations and versions of the algorithm implemented in protocol stacks of operating systems of computers that connect to the Internet To avoid congestive collapse TCP uses multi faceted congestion control strategy For each connection TCP maintains a CWND limiting the total number of unacknowledged packets that may be in transit end to end This is somewhat analogous to TCP s sliding window used for flow control Contents 1 Additive increase multiplicative decrease 2 Congestion window 3 Slow start 4 Fast retransmit 5 Algorithms 5 1 TCP Tahoe and Reno 5 2 TCP New Reno 5 3 TCP Vegas 5 4 TCP Hybla 5 5 TCP BIC 5 6 TCP CUBIC 5 7 Agile SD TCP 5 8 TCP Westwood 5 9 Compound TCP 5 10 TCP Proportional Rate Reduction 5 11 TCP BBR 5 12 C2TCP 5 13 Elastic TCP 5 14 NATCP 5 15 Other TCP congestion avoidance algorithms 6 Classification by network awareness 6 1 Black box 6 2 Grey box 6 3 Green box 7 Linux usage 8 See also 9 Notes 10 References 10 1 Sources 11 External linksAdditive increase multiplicative decrease editThe additive increase multiplicative decrease AIMD algorithm is a closed loop control algorithm AIMD combines linear growth of the congestion window with an exponential reduction when a congestion takes place Multiple flows using AIMD congestion control will eventually converge to use equal amounts of a contended link 5 This is the algorithm that is described in RFC 5681 for the congestion avoidance state 6 Congestion window editIn TCP the congestion window CWND is one of the factors that determines the number of bytes that can be sent out at any time The congestion window is maintained by the sender and is a means of stopping a link between the sender and the receiver from becoming overloaded with too much traffic This should not be confused with the sliding window maintained by the sender which exists to prevent the receiver from becoming overloaded The congestion window is calculated by estimating how much congestion there is on the link When a connection is set up the congestion window a value maintained independently at each host is set to a small multiple of the maximum segment size MSS allowed on that connection Further variance in the congestion window is dictated by an additive increase multiplicative decrease AIMD approach This means that if all segments are received and the acknowledgments reach the sender on time some constant is added to the window size It will follow different algorithms A system administrator may adjust the maximum window size limit or adjust the constant added during additive increase as part of TCP tuning The flow of data over a TCP connection is also controlled by the use of the receive window advertised by the receiver A sender can send data less than its own congestion window and the receive window Slow start 
editSlow start defined by RFC 5681 7 is part of the congestion control strategy used by TCP in conjunction with other algorithms to avoid sending more data than the network is capable of forwarding that is to avoid causing network congestion Slow start begins initially with a congestion window size CWND of 1 2 4 or 10 MSS 8 3 1 The value for the congestion window size can be increased by 1 MSS with each acknowledgement ACK received effectively doubling the window size each RTT a The transmission rate will be increased by the slow start algorithm until either a packet loss is detected the receiver s advertised window rwnd becomes the limiting factor or slow start threshold ssthresh is reached which is used to determine whether the slow start or congestion avoidance algorithm is used a value set to limit slow start If the CWND reaches ssthresh TCP switches to the congestion avoidance algorithm It should be increased by up to 1 MSS for each RTT A common formula is that each new ACK increases the CWND by MSS MSS CWND It increases almost linearly and provides an acceptable approximation If a loss event occurs TCP assumes that it is due to network congestion and takes steps to reduce the offered load on the network These measures depend on the exact TCP congestion avoidance algorithm used When a TCP sender detects segment loss using the retransmission timer and the given segment has not yet been resent the value of ssthresh must be set to no more than half of the amount of data that has been sent but not yet cumulatively acknowledged or 2 MSS whichever value is greater TCP Tahoe When a loss occurs retransmit is sent half of the current CWND is saved as ssthresh and slow start begins again from its initial CWND TCP Reno A fast retransmit is sent half of the current CWND is saved as ssthresh and as new CWND thus skipping slow start and going directly to the congestion avoidance algorithm The overall algorithm here is called fast recovery Slow start assumes that unacknowledged segments are due to network congestion While this is an acceptable assumption for many networks segments may be lost for other reasons such as poor data link layer transmission quality Thus slow start can perform poorly in situations with poor reception such as wireless networks The slow start protocol also performs badly for short lived connections Older web browsers would create many consecutive short lived connections to the web server and would open and close the connection for each file requested This kept most connections in the slow start mode which resulted in poor response time To avoid this problem modern browsers either open multiple connections simultaneously or reuse one connection for all files requested from a particular web server Connections however cannot be reused for the multiple third party servers used by web sites to implement web advertising sharing features of social networking services 9 and counter scripts of web analytics Fast retransmit editFast retransmit is an enhancement to TCP that reduces the time a sender waits before retransmitting a lost segment A TCP sender normally uses a simple timer to recognize lost segments If an acknowledgment is not received for a particular segment within a specified time a function of the estimated round trip delay time the sender will assume the segment was lost in the network and will retransmit the segment Duplicate acknowledgment is the basis for the fast retransmit mechanism After receiving a packet an acknowledgement is sent for the last in order byte of 
data received For an in order packet this is effectively the last packet s sequence number plus the current packet s payload length If the next packet in the sequence is lost but a third packet in the sequence is received then the receiver can only acknowledge the last in order byte of data which is the same value as was acknowledged for the first packet The second packet is lost and the third packet is not in order so the last in order byte of data remains the same as before Thus a Duplicate acknowledgment occurs The sender continues to send packets and a fourth and fifth packet are received by the receiver Again the second packet is missing from the sequence so the last in order byte has not changed Duplicate acknowledgments are sent for both of these packets When a sender receives three duplicate acknowledgments it can be reasonably confident that the segment carrying the data that followed the last in order byte specified in the acknowledgment was lost A sender with fast retransmit will then retransmit this packet immediately without waiting for its timeout On receipt of the retransmitted segment the receiver can acknowledge the last in order byte of data received In the above example this would acknowledge to the end of the payload of the fifth packet There is no need to acknowledge intermediate packets since TCP uses cumulative acknowledgments by default Algorithms editThe naming convention for congestion control algorithms CCAs may have originated in a 1996 paper by Kevin Fall and Sally Floyd 10 failed verification The following is one possible classification according to the following properties the type and amount of feedback received from the network incremental deployability on the current Internet the aspect of performance it aims to improve high bandwidth delay product networks B lossy links L fairness F advantage to short flows S variable rate links V speed of convergence C the fairness criterion it usesSome well known congestion avoidance mechanisms are classified by this scheme as follows Variant Feedback Required changes Benefits Fairness New Reno Loss DelayVegas Delay Sender Less loss ProportionalHigh Speed Loss Sender High bandwidthBIC Loss Sender High bandwidthCUBIC Loss Sender High bandwidthC2TCP 11 12 Loss Delay Sender Ultra low latency and high bandwidthNATCP 13 Multi bit signal Sender Near Optimal PerformanceElastic TCP Loss Delay Sender High bandwidth short amp long distanceAgile TCP Loss Sender High bandwidth short distanceH TCP Loss Sender High bandwidthFAST Delay Sender High bandwidth ProportionalCompound TCP Loss Delay Sender High bandwidth ProportionalWestwood Loss Delay Sender Lossy linksJersey Loss Delay Sender Lossy linksBBR 14 Delay Sender BLVC BufferbloatCLAMP Multi bit signal Receiver Router Variable rate links Max minTFRC Loss Sender Receiver No Retransmission Minimum delayXCP Multi bit signal Sender Receiver Router BLFC Max minVCP 2 bit signal Sender Receiver Router BLF ProportionalMaxNet Multi bit signal Sender Receiver Router BLFSC Max minJetMax Multi bit signal Sender Receiver Router High bandwidth Max minRED Loss Router Reduced delayECN Single bit signal Sender Receiver Router Reduced lossTCP Tahoe and Reno edit TCP Tahoe and Reno algorithms were retrospectively named after the versions or flavours of the 4 3BSD operating system in which each first appeared which were themselves named after Lake Tahoe and the nearby city of Reno Nevada The Tahoe algorithm first appeared in 4 3BSD Tahoe which was made to support the CCI Power 6 32 Tahoe minicomputer 
and was later made available to non AT amp T licensees as part of the 4 3BSD Networking Release 1 this ensured its wide distribution and implementation Improvements were made in 4 3BSD Reno and subsequently released to the public as Networking Release 2 and later 4 4BSD Lite While both consider retransmission timeout RTO and duplicate ACKs as packet loss events the behavior of Tahoe and Reno differ primarily in how they react to duplicate ACKs Tahoe if three duplicate ACKs are received i e four ACKs acknowledging the same packet which are not piggybacked on data and do not change the receiver s advertised window Tahoe performs a fast retransmit sets the slow start threshold to half of the current congestion window reduces the congestion window to 1 MSS and resets to slow start state 15 Reno if three duplicate ACKs are received Reno will perform a fast retransmit and skip the slow start phase by instead halving the congestion window instead of setting it to 1 MSS like Tahoe setting the ssthresh equal to the new congestion window and enter a phase called fast recovery 16 In both Tahoe and Reno if an ACK times out RTO timeout slow start is used and both algorithms reduce the congestion window to 1 MSS citation needed TCP New Reno edit TCP New Reno defined by RFC 6582 which obsolesces previous definitions in RFC 3782 and RFC 2582 improves retransmission during the fast recovery phase of TCP Reno During fast recovery to keep the transmit window full for every duplicate ACK that is returned a new unsent packet from the end of the congestion window is sent The difference from Reno is that New Reno does not halve ssthresh immediately which may reduce the window too much if multiple packet losses occur It does not exit fast recovery and reset ssthresh until it acknowledges all of the data After retransmission newly acknowledged data have two cases Full acknowledgments The ACK acknowledges all the intermediate segments sent the ssthresh can be not changed cwnd can be set to ssthresh Partial acknowledgments The ACK does not acknowledge all data It means another loss may occur retransmit the first unacknowledged segment if permittedIt uses a variable called recover to record how much data needs to be recovered After a retransmit timeout it records the highest sequence number transmitted in the recover variable and exits the fast recovery procedure If this sequence number is acknowledged TCP returns to the congestion avoidance state A problem occurs with New Reno when there are no packet losses but instead packets are reordered by more than 3 packet sequence numbers In this case New Reno mistakenly enters fast recovery When the reordered packet is delivered duplicate and needless retransmissions are immediately sent New Reno performs as well as SACK at low packet error rates and substantially outperforms Reno at high error rates 17 TCP Vegas edit Main article TCP Vegas Until the mid 1990s all of TCP s set timeouts and measured round trip delays were based upon only the last transmitted packet in the transmit buffer University of Arizona researchers Larry Peterson and Lawrence Brakmo introduced TCP Vegas in which timeouts were set and round trip delays were measured for every packet in the transmit buffer In addition TCP Vegas uses additive increases in the congestion window In a comparison study of various TCP CCAs TCP Vegas appeared to be the smoothest followed by TCP CUBIC 18 TCP Vegas was not widely deployed outside Peterson s laboratory but was selected as the default congestion control method for 
DD WRT firmware v24 SP2 19 TCP Hybla edit TCP Hybla 20 21 aims to eliminate penalties to TCP connections that use high latency terrestrial or satellite radio links Hybla improvements are based on analytical evaluation of the congestion window dynamics 22 TCP BIC edit Main article BIC TCP Binary Increase Congestion control BIC is a TCP implementation with an optimized CCA for high speed networks with high latency known as long fat networks LFNs 23 BIC is used by default in Linux kernels 2 6 8 through 2 6 18 citation needed TCP CUBIC edit Main article CUBIC TCP CUBIC is a less aggressive and more systematic derivative of BIC in which the window is a cubic function of time since the last congestion event with the inflection point set to the window prior to the event CUBIC is used by default in Linux kernels since version 2 6 19 Agile SD TCP edit Agile SD is a Linux based CCA which is designed for the real Linux kernel It is a receiver side algorithm that employs a loss based approach using a novel mechanism called agility factor AF to increase the bandwidth utilization over high speed and short distance networks low bandwidth delay product networks such as local area networks or fiber optic network especially when the applied buffer size is small 24 It has been evaluated by comparing its performance to Compound TCP the default CCA in MS Windows and CUBIC the default of Linux using NS 2 simulator It improves the total performance up to 55 in term of average throughput TCP Westwood edit Main article TCP Westwood plus Westwood is a sender only modification of TCP Reno that optimizes the performance of TCP congestion control over both wired and wireless networks TCP Westwood is based on end to end bandwidth estimation to set the congestion window and slow start threshold after a congestion episode that is after three duplicate acknowledgments or a timeout The bandwidth is estimated by averaging the rate of returning acknowledgment packets In contrast with TCP Reno which blindly halves the congestion window after three duplicate ACKs TCP Westwood adaptively sets a slow start threshold and a congestion window that takes into account an estimate of bandwidth available at the time congestion is experienced Compared to Reno and New Reno Westwood significantly increases throughput over wireless links and improves fairness in wired networks citation needed Compound TCP edit Main article Compound TCP Compound TCP is a Microsoft implementation of TCP which maintains two different congestion windows simultaneously with the goal of achieving good performance on LFNs while not impairing fairness It has been widely deployed in Windows versions since Microsoft Windows Vista and Windows Server 2008 and has been ported to older Microsoft Windows versions as well as Linux TCP Proportional Rate Reduction edit TCP Proportional Rate Reduction PRR 25 is an algorithm designed to improve the accuracy of data sent during recovery The algorithm ensures that the window size after recovery is as close as possible to the slow start threshold In tests performed by Google PRR resulted in a 3 10 reduction in average latency and recovery timeouts were reduced by 5 26 PRR is available in Linux kernels since version 3 2 27 TCP BBR edit Bottleneck Bandwidth and Round trip propagation time BBR is a CCA developed at Google in 2016 28 While most CCAs are loss based in that they rely on packet loss to detect congestion and lower rates of transmission BBR like TCP Vegas is model based The algorithm uses the maximum bandwidth and round 
trip time at which the network delivered the most recent flight of outbound data packets to build a model of the network Each cumulative or selective acknowledgment of packet delivery produces a rate sample that records the amount of data delivered over the time interval between the transmission of a data packet and the acknowledgment of that packet 29 When implemented at YouTube BBRv1 yielded an average of 4 higher network throughput and up to 14 in some countries 30 BBR has been available for Linux TCP since Linux 4 9 31 It is also available for QUIC 32 BBR version 1 BBRv1 fairness to non BBR streams is disputed While Google s presentation shows BBRv1 co existing well with CUBIC 28 researchers like Geoff Huston and Hock Bless and Zitterbart found it unfair to other streams and not scalable 33 Hock et al also found some severe inherent issues such as increased queuing delays unfairness and massive packet loss in the BBR implementation of Linux 4 9 34 Soheil Abbasloo et al authors of C2TCP show that BBRv1 doesn t perform well in dynamic environments such as cellular networks 11 12 They have also shown that BBR has an unfairness issue For instance when a CUBIC flow which is the default TCP implementation in Linux Android and MacOS coexists with a BBR flow in the network the BBR flow can dominate the CUBIC flow and get the whole link bandwidth from it see figure 16 in 11 Version 2 attempts to deal with the issue of unfairness when operating alongside loss based congestion management such as CUBIC 35 In BBRv2 the model used by BBRv1 is augmented to include information about packet loss and information from Explicit Congestion Notification ECN 36 Whilst BBRv2 may at times have lower throughput than BBRv1 it is generally considered to have better goodput citation needed Version 3 BBRv3 fixes two bugs in BBRv2 premature end of bandwidth probing bandwidth convergence and performs some performance tuning There is also a variant termed BBR Swift optimized for datacenter internal links it uses network RTT excluding receiver delay as the main congestion control signal 36 C2TCP edit Cellular Controlled Delay TCP C2TCP 11 12 was motivated by the lack of a flexible end to end TCP approach that can satisfy various QoS requirements for different applications without requiring any changes in the network devices C2TCP aims to satisfy ultra low latency and high bandwidth requirements of applications such as virtual reality video conferencing online gaming vehicular communication systems etc in a highly dynamic environment such as current LTE and future 5G cellular networks C2TCP works as an add on on top of loss based TCP e g Reno NewReno CUBIC BIC it is only required to be installed on the server side and makes the average delay of packets bounded to the desired delays set by the applications Researchers at NYU 37 showed that C2TCP outperforms the delay and delay variation performance of various state of the art TCP schemes For instance they showed that compared to BBR CUBIC and Westwood on average C2TCP decreases the average delay of packets by about 250 900 and 700 respectively on various cellular network environments 11 Elastic TCP edit Elastic TCP was proposed in February 2019 to increase bandwidth utilization over high BDP networks in support of cloud computing It is a Linux based CCA that is designed for the Linux kernel It is a receiver side algorithm that employs a loss delay based approach using a novel mechanism called a window correlated weighting function WWF It has a high level of elasticity to 
When implemented at YouTube, BBRv1 yielded an average of 4% higher network throughput, and up to 14% in some countries.[30] BBR has been available for Linux TCP since Linux 4.9.[31] It is also available for QUIC.[32]

BBR version 1 (BBRv1) fairness to non-BBR streams is disputed. While Google's presentation shows BBRv1 co-existing well with CUBIC,[28] researchers such as Geoff Huston and Hock, Bless and Zitterbart found it unfair to other streams and not scalable.[33] Hock et al. also found some severe inherent issues, such as increased queuing delays, unfairness, and massive packet loss, in the BBR implementation of Linux 4.9.[34] Soheil Abbasloo et al. (authors of C2TCP) show that BBRv1 does not perform well in dynamic environments such as cellular networks.[11][12] They have also shown that BBR has an unfairness issue: for instance, when a CUBIC flow (the default TCP implementation in Linux, Android and macOS) coexists with a BBR flow in the network, the BBR flow can dominate the CUBIC flow and take the whole link bandwidth from it (see figure 16 in [11]).

Version 2 attempts to deal with the issue of unfairness when operating alongside loss-based congestion management such as CUBIC.[35] In BBRv2, the model used by BBRv1 is augmented to include information about packet loss and information from Explicit Congestion Notification (ECN).[36] While BBRv2 may at times have lower throughput than BBRv1, it is generally considered to have better goodput.[citation needed]

Version 3 (BBRv3) fixes two bugs in BBRv2 (premature end of bandwidth probing, bandwidth convergence) and performs some performance tuning. There is also a variant, termed BBR.Swift, optimized for datacenter-internal links: it uses the network RTT (excluding receiver delay) as the main congestion control signal.[36]

C2TCP

Cellular Controlled Delay TCP (C2TCP)[11][12] was motivated by the lack of a flexible end-to-end TCP approach that can satisfy various QoS requirements of different applications without requiring any changes in the network devices. C2TCP aims to satisfy the ultra-low-latency and high-bandwidth requirements of applications such as virtual reality, video conferencing, online gaming and vehicular communication systems in highly dynamic environments such as current LTE and future 5G cellular networks. C2TCP works as an add-on on top of loss-based TCP (e.g. Reno, NewReno, CUBIC, BIC), needs to be installed only on the server side, and bounds the average delay of packets to the desired delays set by the applications.

Researchers at NYU[37] showed that C2TCP outperforms the delay and delay-variation performance of various state-of-the-art TCP schemes. For instance, they showed that compared with BBR, CUBIC and Westwood, C2TCP decreases the average delay of packets by about 250%, 900% and 700% respectively, on average, across various cellular network environments.[11]

Elastic-TCP

Elastic-TCP was proposed in February 2019 to increase bandwidth utilization over high-BDP networks in support of cloud computing. It is a Linux-based congestion control algorithm (CCA) designed for the Linux kernel. It is a receiver-side algorithm that employs a loss-delay-based approach using a novel mechanism called a window-correlated weighting function (WWF). It has a high level of elasticity to deal with different network characteristics without the need for human tuning. It has been evaluated by comparing its performance to Compound TCP (the default CCA in MS Windows), CUBIC (the default for Linux) and TCP BBR (the default of Linux 4.9, used by Google) using the NS-2 simulator and a testbed. Elastic-TCP significantly improves the total performance in terms of average throughput, loss ratio, and delay.[38]

NATCP

Soheil Abbasloo et al. proposed NATCP (Network-Assisted TCP),[13] a controversial[according to whom?] TCP design targeting multi-access edge computing (MEC). The key idea of NATCP is that if the characteristics of the network were known beforehand, TCP would have been designed differently. Therefore, NATCP employs the available features and properties in current MEC-based cellular architectures to push the performance of TCP close to the optimum. NATCP uses out-of-band feedback from the network to servers located nearby. The feedback from the network, which includes the capacity of the cellular access link and the minimum RTT of the network, guides the servers to adjust their sending rates. As preliminary results show, NATCP outperforms the state-of-the-art TCP schemes.[13][39]
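The feedback loop just described can be sketched in a few lines. This is a hypothetical illustration of how out-of-band network feedback might drive a server's sending rate, with invented names and simplified logic; it is not the authors' implementation. The network reports the access-link capacity and its minimum RTT; the server caps its pacing rate at the link capacity and sizes its window near the bandwidth-delay product (BDP) to keep queues short.

```python
from dataclasses import dataclass

@dataclass
class Sender:
    pacing_rate: float  # bytes/sec the server currently paces at
    cwnd_limit: float   # cap on the congestion window, in bytes

def on_network_feedback(conn: Sender,
                        capacity_bps: float,
                        min_rtt_s: float) -> None:
    """Apply NATCP-style out-of-band feedback (illustrative only)."""
    capacity_Bps = capacity_bps / 8           # bits/sec -> bytes/sec
    # Never try to push more than the access link can carry.
    conn.pacing_rate = min(conn.pacing_rate, capacity_Bps)
    # A window near capacity * min RTT (the BDP) keeps queues short.
    conn.cwnd_limit = capacity_Bps * min_rtt_s

# Example: 50 Mbit/s access link, 20 ms minimum RTT
conn = Sender(pacing_rate=1e9, cwnd_limit=1e9)
on_network_feedback(conn, capacity_bps=50e6, min_rtt_s=0.020)
print(conn.pacing_rate, conn.cwnd_limit)  # 6250000.0 125000.0
```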
Other TCP congestion avoidance algorithms

FAST TCP
Generalized FAST TCP[40]
H-TCP
Data Center TCP
High Speed TCP
HSTCP-LP[41]
TCP-Illinois
TCP-LP[41]
TCP SACK
Scalable TCP
TCP Veno[42]
Westwood
XCP[43]
YeAH-TCP[44]
TCP-FIT[45]
Congestion Avoidance with Normalized Interval of Time (CANIT)[46]
Non-linear neural network congestion control based on genetic algorithm for TCP/IP networks[47]
D-TCP[48]
NexGen D-TCP[49]
Copa[50]

TCP New Reno was the most commonly implemented algorithm;[citation needed] SACK support is very common[citation needed] and is an extension to Reno/New Reno. Most others are competing proposals that still need evaluation. Starting with 2.6.8, the Linux kernel switched the default implementation from New Reno to BIC. The default was changed again to CUBIC in version 2.6.19. FreeBSD uses CUBIC as the default algorithm from version 14.X onwards;[51] previous versions used New Reno. However, FreeBSD supports a number of other choices.[52]

When the per-flow product of bandwidth and latency increases, regardless of the queuing scheme, TCP becomes inefficient and prone to instability. This becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links.

TCP Interactive (iTCP)[53] allows applications to subscribe to TCP events and respond accordingly, enabling various functional extensions to TCP from outside the TCP layer. Most TCP congestion schemes work internally; iTCP additionally enables advanced applications to directly participate in congestion control, for example to control the source generation rate.

Zeta-TCP detects congestion from both latency and loss-rate measures. To maximize goodput, Zeta-TCP applies different congestion window backoff strategies based on the likelihood of congestion. It also has other improvements to accurately detect packet losses, avoiding retransmission-timeout retransmissions, and to accelerate and control the inbound (download) traffic.[54]

Classification by network awareness

CCAs may be classified in relation to network awareness, meaning the extent to which these algorithms are aware of the state of the network. This classification consists of three primary categories: black box, grey box, and green box.[55]

Black box algorithms offer blind methods of congestion control: they operate only on the binary feedback received upon congestion and do not assume any knowledge concerning the state of the networks which they manage. Grey box algorithms use time instances[clarification needed] in order to obtain measurements and estimations of bandwidth, flow contention, and other knowledge of network conditions. Green box algorithms offer bimodal methods of congestion control, which measure the fair share of total bandwidth that should be allocated to each flow at any point during the system's execution.

Black box

Highspeed TCP[56]
BIC TCP (Binary Increase Congestion Control Protocol) uses a concave increase of the source's rate after each congestion event until the window is equal to that before the event, in order to maximize the time that the network is fully utilized. After that, it probes aggressively.
CUBIC TCP, a less aggressive and more systematic derivative of BIC, in which the window is a cubic function of time since the last congestion event, with the inflection point set to the window prior to the event (see the formula after this list).
AIMD-FC (additive increase multiplicative decrease with fast convergence), an improvement of AIMD.[57]
Binomial Mechanisms
SIMD Protocol
GAIMD
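The cubic growth mentioned in the CUBIC item above can be written explicitly. In the standard formulation (as given in RFC 8312), the window t seconds after the last congestion event is

\[
W_{\mathrm{cubic}}(t) = C\,(t-K)^3 + W_{\max},
\qquad
K = \sqrt[3]{\frac{W_{\max}\,(1-\beta)}{C}}
\]

where C is a scaling constant, W_max is the window just before the last reduction, and β is the multiplicative-decrease factor (the window is reduced to βW_max on a loss). The inflection point of the cubic, at t = K, is exactly where the window regains W_max, which is why growth is fast when far from the previous maximum, flat near it, and aggressive again when probing beyond it.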
Grey box

TCP Vegas estimates the queuing delay and linearly increases or decreases the window so that a constant number of packets per flow are queued in the network. Vegas implements proportional fairness.
FAST TCP achieves the same equilibrium as Vegas, but uses proportional control instead of linear increase, and intentionally scales the gain down as the bandwidth increases, with the aim of ensuring stability.
TCP BBR estimates the queuing delay but uses exponential increase. It intentionally slows down periodically for fairness and decreased delay.
TCP-Westwood (TCPW): a loss causes the window to be reset to the sender's estimate of the bandwidth-delay product, which is the smallest measured RTT multiplied by the observed rate of receiving ACKs.[58]
C2TCP[12][11]
TFRC[59]
TCP-Real
TCP-Jersey

Green box

Bimodal Mechanism, the Bimodal Congestion Avoidance and Control mechanism.
Signalling methods implemented by routers:
  Random Early Detection (RED) randomly drops packets in proportion to the router's queue size, triggering multiplicative decrease in some flows.
  Explicit Congestion Notification (ECN)
Network-Assisted Congestion Control:
  NATCP[13]: Network-Assisted TCP uses out-of-band explicit feedback indicating the minimum RTT of the network and the capacity of the cellular access link.
  The variable-structure congestion control protocol (VCP) uses two ECN bits to explicitly feed back the network state of congestion. It includes an end-host-side algorithm as well.[citation needed]

The following algorithms require custom fields to be added to the TCP packet structure:

Explicit Control Protocol (XCP): XCP packets carry a congestion header with a feedback field indicating the increase or decrease of the sender's congestion window. XCP routers set the feedback value explicitly for efficiency and fairness.[60]
MaxNet uses a single header field, which carries the maximum congestion level of any router on a flow's path. The rate is set as a function of this maximum congestion, resulting in max-min fairness.[61]
JetMax, like MaxNet, responds only to the maximum congestion signal, but also carries other overhead fields.

Linux usage

BIC is used by default in Linux kernels 2.6.8 through 2.6.18 (August 2004 – September 2006).
CUBIC is used by default in Linux kernels since version 2.6.19 (November 2006).
PRR is incorporated in Linux kernels to improve loss recovery since version 3.2 (January 2012).
BBRv1 is incorporated in Linux kernels to enable model-based congestion control since version 4.9 (December 2016).
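On Linux, the active algorithm can be inspected or changed system-wide through the net.ipv4.tcp_congestion_control sysctl, and selected per socket with the TCP_CONGESTION socket option. A minimal sketch follows (Linux-only; socket.TCP_CONGESTION is available in Python 3.6+ on Linux, and non-default algorithms such as BBR require the corresponding kernel module to be loaded and permitted):

```python
import socket

# Select the congestion control algorithm for a single socket.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")

# Read back the algorithm actually in effect; the kernel returns the
# name as a NUL-padded byte string.
name = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print(name.split(b"\x00", 1)[0].decode())  # -> "cubic"
```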
See also

Transmission Control Protocol, sections Congestion control and Development
Network congestion, section Mitigation
Low Extra Delay Background Transport (LEDBAT)

Notes

a. Even if, actually, the receiver may delay its ACKs, typically sending one ACK for every two segments that it receives.[2]

References

1. Jacobson & Karels 1988.
2. W. Stevens (January 1997). TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms. doi:10.17487/RFC2001. RFC 2001.
3. M. Allman; S. Floyd; C. Partridge (October 2002). Increasing TCP's Initial Window. doi:10.17487/RFC3390. RFC 3390.
4. "TCP Congestion Avoidance Explained via a Sequence Diagram" (PDF). eventhelix.com.
5. Chiu, Dah-Ming; Raj Jain (1989). "Analysis of increase and decrease algorithms for congestion avoidance in computer networks". Computer Networks and ISDN Systems. 17: 1–14. CiteSeerX 10.1.1.136.8108. doi:10.1016/0169-7552(89)90019-6.
6. Allman, M.; Paxson, V. (September 2009). TCP Congestion Control. IETF. sec. 3.1. doi:10.17487/RFC5681. RFC 5681. Retrieved 4 March 2021.
7. Blanton, Ethan; Paxson, Vern; Allman, Mark (September 2009). "TCP Congestion Control". IETF.
8. Corbet, Jonathan. "Increasing the TCP initial congestion window". LWN. Retrieved 10 October 2012.
9. Nick O'Neill. "What's Making Your Site Go Slow? Could Be The Like Button". AllFacebook, 10 November 2010. Retrieved 12 September 2012.
10. Fall, Kevin; Sally Floyd (July 1996). "Simulation-based Comparisons of Tahoe, Reno and SACK TCP" (PDF). Computer Communications Review. 26 (3): 5–21. CiteSeerX 10.1.1.586.2403. doi:10.1145/235160.235162. S2CID 7459148.
11. Abbasloo, S.; Xu, Y.; Chao, H. J. (2019). "C2TCP: A Flexible Cellular TCP to Meet Stringent Delay Requirements". IEEE Journal on Selected Areas in Communications. 37 (4): 918–932. arXiv:1810.13241. doi:10.1109/JSAC.2019.2898758. ISSN 0733-8716. S2CID 53107038.
12. Abbasloo, S.; Li, T.; Xu, Y.; Chao, H. J. (May 2018). "Cellular Controlled Delay TCP (C2TCP)". 2018 IFIP Networking Conference (IFIP Networking) and Workshops. pp. 118–126. arXiv:1807.02689. Bibcode:2018arXiv180702689A. doi:10.23919/IFIPNetworking.2018.8696844. ISBN 978-3-903176-08-9. S2CID 49650788.
13. Abbasloo et al. 2019.
14. Cardwell, Neal; Cheng, Yuchung; Gunn, C. Stephen; Yeganeh, Soheil Hassas; Jacobson, Van (2016). "BBR: Congestion-Based Congestion Control". Queue. 14 (5): 20–53. doi:10.1145/3012426.3022184.
15. Kurose & Ross 2008, p. 284.
16. Kurose & Ross 2012, p. 277.
17. Vasanthi, N. V.; Singh, M. Ajith; Kumar, Romen; Hemalatha, M. (2011). "Evaluation of Protocols and Algorithms for Improving the Performance of TCP over Wireless/Wired Network". In Das, Vinu V.; Thankachan, Nessy (eds.). Computational Intelligence and Information Technology. Communications in Computer and Information Science. Vol. 250. Springer. pp. 693–697. doi:10.1007/978-3-642-25734-6_120. ISBN 978-3-642-25733-9.
18. "Performance Analysis of TCP Congestion Control Algorithms" (PDF). Retrieved 26 March 2012.
19. "DD-WRT changelog". Retrieved 2 January 2012.
20. "Hybla home page". Archived from the original on 11 October 2007. Retrieved 4 March 2007.
21. Caini, Carlo; Firrincieli, Rosario (2004). "TCP Hybla: a TCP enhancement for heterogeneous networks". International Journal of Satellite Communications and Networking. 22 (5): 547–566. doi:10.1002/sat.799. ISSN 1542-0973. S2CID 2360535.
22. Caini, C.; Firrincieli, R.; Lacamera, D. (2009). "Comparative Performance Evaluation of TCP Variants on Satellite Environments". 2009 IEEE International Conference on Communications. pp. 1–5. doi:10.1109/ICC.2009.5198834. S2CID 8352762.
23. V. Jacobson; R. T. Braden. TCP extensions for long-delay paths. doi:10.17487/RFC1072. RFC 1072.
24. Alrshah, M. A.; Othman, M.; Ali, B.; Hanapi, Z. M. (September 2015). "Agile-SD: A Linux-based TCP congestion control algorithm for supporting high-speed and short-distance networks". Journal of Network and Computer Applications. 55: 181–190. arXiv:1601.05908. doi:10.1016/j.jnca.2015.05.011. S2CID 2645016.
25. Mathis, M.; Dukkipati, N.; Cheng, Y. (2013). Proportional Rate Reduction for TCP. doi:10.17487/RFC6937. RFC 6937.
26. Corbet, Jonathan. "LPC: Making the net go faster". Retrieved 6 June 2014.
27. "Linux 3.2". Linux Kernel Newbies. Retrieved 6 June 2014.
28. "BBR: Congestion-Based Congestion Control". Retrieved 25 August 2017.
29. Cheng, Yuchung; Cardwell, Neal; Yeganeh, Soheil Hassas; Jacobson, Van (3 July 2017). "Delivery Rate Estimation". IETF. Retrieved 25 August 2017.
30. "TCP BBR congestion control comes to GCP – your Internet just got faster". Retrieved 25 August 2017.
31. "BBR congestion control". LWN.net.
32. "BBR update". IETF.
33. "TCP and BBR" (PDF). Retrieved 27 May 2018.
34. "Experimental Evaluation of BBR Congestion Control" (PDF). Retrieved 27 May 2018.
35. "A Performance Evaluation of TCP BBRv2". Retrieved 12 January 2021.
36. Google TCP BBR team; Google QUIC BBR team (26 July 2023). BBRv3: Algorithm Bug Fixes and Public Internet Deployment. IETF 117, San Francisco.
37. "Cellular Controlled Delay TCP (C2TCP)". wp.nyu.edu. Retrieved 27 April 2019.
38. Alrshah, M. A.; Al-Maqri, M. A.; Othman, M. (June 2019). "Elastic-TCP: Flexible Congestion Control Algorithm to Adapt for High-BDP Networks". IEEE Systems Journal. 13 (2): 1336–1346. arXiv:1904.13105. Bibcode:2019ISysJ..13.1336A. doi:10.1109/JSYST.2019.2896195.
39. Abbasloo, Soheil (3 June 2019). "GitHub – Soheil-ab/natcp". Retrieved 5 August 2019.
40. Yuan, Cao; Tan, Liansheng; Andrew, Lachlan L. H.; Zhang, Wei; Zukerman, Moshe (6 June 2008). "A generalized FAST TCP scheme". Computer Communications. 31 (14): 3242–3249. doi:10.1016/j.comcom.2008.05.028. hdl:1959.3/44051. S2CID 17988768.
41. "Rice Networks Group".
42. "TCP Veno: TCP Enhancement for Transmission over Wireless Access Networks" (PDF). IEEE Journal on Selected Areas in Communication.
43. "XCP @ ISI".
44. "High speed TPC" (PDF). www.csc.lsu.edu.
45. "Archived copy". Archived from the original on 3 April 2011. Retrieved 5 March 2011.
46. Benaboud, H.; Berqia, A.; Mikou, N. (2002). "An analytical study of CANIT algorithm in TCP protocol". ACM SIGMETRICS Performance Evaluation Review. 30 (3): 20. doi:10.1145/605521.605530. S2CID 6637174.
47. Rouhani, Modjtaba (2010). "Nonlinear Neural Network Congestion Control Based on Genetic Algorithm for TCP/IP Networks". 2010 2nd International Conference on Computational Intelligence, Communication Systems and Networks. pp. 1–6. doi:10.1109/CICSyN.2010.21. ISBN 978-1-4244-7837-8. S2CID 15126416.
48. Kanagarathinam, Madhan Raj; Singh, Sukhdeep; Sandeep, Irlanki; Roy, Abhishek; Saxena, Navrati (January 2018). "D-TCP: Dynamic TCP congestion control algorithm for next generation mobile networks". 2018 15th IEEE Annual Consumer Communications & Networking Conference (CCNC). pp. 1–6. doi:10.1109/CCNC.2018.8319185. ISBN 978-1-5386-4790-5. S2CID 3991163.
49. Kanagarathinam, Madhan Raj; Singh, Sukhdeep; Sandeep, Irlanki; Kim, Hanseok; Maheshwari, Mukesh Kumar; Hwang, Jaehyun; Roy, Abhishek; Saxena, Navrati (2020). "NexGen D-TCP: Next Generation Dynamic TCP Congestion Control Algorithm". IEEE Access. 8: 164482–164496. Bibcode:2020IEEEA...8p4482K. doi:10.1109/ACCESS.2020.3022284. ISSN 2169-3536. S2CID 221846931.
50. Arun, Venkat; Balakrishnan, Hari (2018). "Copa: Practical Delay-Based Congestion Control for the Internet". 15th USENIX Symposium on Networked Systems Design and Implementation (NSDI 18): 329–342. ISBN 978-1-939133-01-4.
51. "tcp: make CUBIC the default congestion control mechanism". 13 September 2022.
52. "Summary of Five New TCP Congestion Control Algorithms Project". 8 March 2011.
53. "iTCP – Interactive Transport Protocol – Medianet Lab, Kent State University".
54. "Whitepaper: Zeta-TCP – Intelligent, Adaptive, Asymmetric TCP Acceleration" (PDF). Retrieved 6 December 2019.
55. Lefteris Mamatas; Tobias Harks; Vassilis Tsaoussidis (January 2007). "Approaches to Congestion Control in Packet Networks" (PDF). Journal of Internet Engineering. 1 (1). Archived from the original (PDF) on 21 February 2014.
56. "HighSpeed TCP". www.icir.org.
57. "AIMD-FC Homepage". neu.edu. Archived from the original on 13 January 2009. Retrieved 13 March 2016.
58. "Welcome to Network Research Lab". www.cs.ucla.edu.
59. "Equation-Based Congestion Control for Unicast Applications". www.icir.org.
60. Katabi, Dina; Handley, Mark; Rohrs, Charlie (2002). "Congestion control for high bandwidth-delay product networks". Proceedings of the 2002 conference on Applications, technologies, architectures, and protocols for computer communications. New York, NY, USA: ACM Press. p. 89. doi:10.1145/633025.633035. ISBN 1-58113-570-X.
61. "MaxNet – Max-Min Fair, Stable Explicit Signalling Congestion Control". netlab.caltech.edu.

Sources

Kurose, James; Ross, Keith (2008). Computer Networking: A Top-Down Approach (4th ed.). Addison Wesley. ISBN 978-0-13-607967-5.
Kurose, James; Ross, Keith (2012). Computer Networking: A Top-Down Approach (6th ed.). Pearson. ISBN 978-0-13-285620-1.
Abbasloo, Soheil; Xu, Yang; Chao, H. Jonathon; Shi, Hang; Kozat, Ulas C.; Ye, Yinghua (2019). "Toward Optimal Performance with Network Assisted TCP at Mobile Edge". 2nd USENIX Workshop on Hot Topics in Edge Computing (HotEdge 19). Renton, WA: USENIX Association.
Afanasyev, A.; N. Tilley; P. Reiher; L. Kleinrock (2010). "Host-to-host congestion control for TCP" (PDF). IEEE Communications Surveys and Tutorials. 12 (3): 304–342. CiteSeerX 10.1.1.228.3080. doi:10.1109/SURV.2010.042710.00114. S2CID 8638824.
Jacobson, Van; Karels, Michael J. (November 1988). "Congestion avoidance and control" (PDF). ACM SIGCOMM Computer Communication Review. 18 (4): 314–329. doi:10.1145/52325.52356.

External links

Approaches to Congestion Control in Packet Networks
Papers in Congestion Control
Allman, Mark; Paxson, Vern; Stevens, W. Richard (April 1999). "Fast Retransmit/Fast Recovery". TCP Congestion Control. IETF. sec. 3.2. doi:10.17487/RFC2581. RFC 2581. Retrieved 1 May 2010.
"TCP Congestion Handling and Congestion Avoidance Algorithms". The TCP/IP Guide.
