Why is the Internet so slow?!

Latency is a critical determinant of the quality of experience for many Internet applications. Google and Bing report that a few hundred milliseconds of additional latency in delivering search results causes significant reduction in search volume, and hence, revenue. In online gaming, tens of milliseconds make a huge difference, thus driving gaming companies to build specialized networks targeted at reducing latency.

Present efforts at reducing latency, however, still leave us far short of the lower bound dictated by the speed of light in vacuum. What if the Internet worked at the speed of light? Ignoring, for the moment, the technical challenges and cost of designing for that goal, let us briefly consider its implications.

A speed-of-light Internet would not only dramatically enhance Web browsing, gaming, and various forms of “tele-immersion”, but could also open the door for new, creative applications to emerge. We therefore set out to understand and quantify the gap between the latencies we typically observe today and what is theoretically achievable.

Our largest set of measurements was performed between popular Web servers and PlanetLab nodes, a set of generally well-connected machines at academic and research institutions across the world. We evaluated the measured latencies against the c-latency lower bound; that is, the time needed to traverse the geodesic distance between the two endpoints at the speed of light in vacuum.
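As a rough illustration of this baseline (a sketch of ours, not the measurement code from the paper), the c-latency between two endpoints with known coordinates can be computed from the great-circle distance and the speed of light in vacuum. The function names and example coordinates below are hypothetical:

```python
import math

SPEED_OF_LIGHT_KM_S = 299_792.458  # speed of light in vacuum, km/s
EARTH_RADIUS_KM = 6371.0

def geodesic_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, via the haversine formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def c_latency_ms(lat1, lon1, lat2, lon2):
    """Time to traverse the geodesic distance at the speed of light, in ms."""
    return geodesic_km(lat1, lon1, lat2, lon2) / SPEED_OF_LIGHT_KM_S * 1000

# Hypothetical endpoints: a node in Urbana, IL and a server in Amsterdam.
print(round(c_latency_ms(40.11, -88.24, 52.37, 4.90), 1), "ms")
```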

Our measurements reveal that the Internet is much slower than it could be: fetching just the HTML of the landing page of a popular website takes, in the median, ~37 times longer than the c-latency. Note that these pages are typically only tens of kilobytes of data, making bandwidth constraints largely irrelevant in this context.
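To make the inflation figure concrete, here is one way such a median can be computed: divide each measured fetch time by the corresponding c-latency and take the median across all (client, server) pairs. The numbers below are invented purely for illustration:

```python
import statistics

# Hypothetical measurements: (page-fetch time in ms, c-latency in ms)
# for a handful of (client, server) pairs. Invented values, for illustration only.
measurements = [
    (840.0, 22.4),
    (1310.0, 31.0),
    (205.0, 7.8),
    (960.0, 40.2),
]

# Inflation of each fetch over the speed-of-light baseline, then the median.
inflations = [fetch_ms / c_ms for fetch_ms, c_ms in measurements]
print("median inflation: %.1fx" % statistics.median(inflations))
```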

Where does this huge slowdown come from?

The figure below shows a breakdown of the latency inflation of HTTP connections. As expected, the network protocol stack (DNS resolution, the TCP handshake, and TCP slow-start) contributes to the Internet’s latency inflation. Note also, however, that the infrastructure itself is much slower than it could be: even the ping time is inflated more than 3x over the c-latency.
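For readers who want a feel for where the time goes, the sketch below times the coarse phases of a single plain-HTTP fetch (DNS resolution, TCP handshake, request plus transfer). It is a simplified illustration rather than the measurement harness used in the study, and it ignores TLS entirely:

```python
import socket
import time
import http.client

def fetch_breakdown(host, path="/", port=80):
    """Rough per-phase wall-clock timing for one plain-HTTP fetch."""
    t0 = time.monotonic()
    addr = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
    t_dns = time.monotonic()

    sock = socket.create_connection((addr, port))  # TCP three-way handshake
    t_connect = time.monotonic()

    conn = http.client.HTTPConnection(host, port)
    conn.sock = sock  # reuse the already-connected socket
    conn.request("GET", path)
    conn.getresponse().read()  # includes server processing and slow-start-limited transfer
    t_done = time.monotonic()
    conn.close()

    return {
        "dns_ms": (t_dns - t0) * 1000,
        "tcp_handshake_ms": (t_connect - t_dns) * 1000,
        "request_and_transfer_ms": (t_done - t_connect) * 1000,
    }

print(fetch_breakdown("example.com"))
```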

In light of these measurements, how should the networking research community reduce the Internet’s large latency inflation? Improvements to the protocol stack are certainly necessary, and are addressed by many efforts across industry and academia. What is often ignored, however, is the infrastructural factor.

If the 3x slowdown in the infrastructure were eliminated, every round-trip time would shrink by the same factor, and since all the protocols above ride on those round trips, the median latency inflation would immediately drop from ~37x to around 10x, without any protocol modifications.

Further, for applications such as gaming, infrastructural improvements are the only way to reduce the network’s contribution to large latencies. Hence, we believe that reducing latency at the lowest layer is of the utmost importance for the goal of a speed-of-light Internet.

We encourage interested readers to read our paper to understand the details of our measurement work and results, and visit our website to learn more about our ongoing work towards building a speed-of-light Internet.


This article originally appeared at: https://blog.apnic.net/2017/06/19/why-is-the-internet-so-slow/.


