TCP carries roughly 90% of network traffic, and most of today's popular applications run on top of it. But as a protocol designed decades ago, TCP is becoming a bottleneck amid the rapid evolution of mobile devices, cloud computing, and web and network access technologies. The core problem is that TCP adapts poorly to significant packet loss and latency.
To date, there are three generations of TCP optimization:
- Loss-based – improves on standard TCP while still using packet loss to govern speed
- Delay-based – uses queuing delay rather than loss to govern speed (a sketch contrasting these first two approaches follows the list)
- Learning-based – uses analytics and learned session observations to govern speed in real time
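To make the contrast concrete, here is a minimal toy sketch of the two window-update rules, loosely modeled on Reno-style AIMD and Vegas-style delay control. The function names, constants, and thresholds are illustrative assumptions, not taken from any particular implementation:

```python
def loss_based_update(cwnd: float, packet_lost: bool) -> float:
    """Reno-style AIMD: grow slowly, halve on loss."""
    if packet_lost:
        return cwnd / 2           # multiplicative decrease on loss
    return cwnd + 1.0 / cwnd      # additive increase per ACK (~1 MSS per RTT)

def delay_based_update(cwnd: float, base_rtt: float, rtt: float,
                       alpha: float = 2.0, beta: float = 4.0) -> float:
    """Vegas-style: use queuing delay (rtt - base_rtt) instead of loss."""
    # Gap between expected and actual throughput, expressed as
    # an estimate of packets queued in the network.
    diff = cwnd * (1.0 - base_rtt / rtt)
    if diff < alpha:
        return cwnd + 1.0         # little queuing: speed up
    if diff > beta:
        return cwnd - 1.0         # queue building: back off before loss occurs
    return cwnd                   # in the target band: hold steady
```

The key difference: the loss-based rule only slows down after packets have already been dropped, while the delay-based rule backs off as soon as rising delay suggests a queue is forming.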
Both loss-based and delay-based TCP optimizations are static in nature: they apply a "one-size-fits-all" algorithm to adjust transmission speed and retransmission frequency across all sessions, regardless of the differing characteristics of each network path. While the delay-based approach is an improvement over loss-based optimization, it can be disrupted by small queues in the data path or by frequent latency changes, both of which are common in mobile networks.
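To illustrate that failure mode, the toy loop below feeds a delay-based controller (the same Vegas-style rule from the sketch above, repeated here so the snippet runs on its own) RTT samples that jitter for radio-link reasons rather than because of queuing. All values are hypothetical:

```python
import random

def delay_based_update(cwnd, base_rtt, rtt, alpha=2.0, beta=4.0):
    # Same Vegas-style rule as in the earlier sketch.
    diff = cwnd * (1.0 - base_rtt / rtt)   # estimated packets queued
    if diff < alpha:
        return cwnd + 1.0
    if diff > beta:
        return cwnd - 1.0
    return cwnd

random.seed(1)
cwnd, base_rtt = 40.0, 0.050               # 50 ms RTT floor, path uncongested
for _ in range(50):
    # Jitter from the radio link, not from queuing: samples swing 50-70 ms.
    rtt = base_rtt + random.uniform(0.0, 0.020)
    cwnd = delay_based_update(cwnd, base_rtt, rtt)
print(cwnd)  # well below the starting 40: the controller kept backing off
             # even though no queue ever formed
```

Because the controller cannot distinguish self-induced queuing delay from ambient jitter, it repeatedly backs off on a path with spare capacity, which is exactly the kind of path-specific behavior a per-session, learning-based approach aims to account for.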