HTTP/2 : Winds Of Change – Part IV

This article is part 4 of a 5 part series by Mark B. Friedman (LinkedIn).

To read the previous posts in this series, please see the links to Parts 1, 2 and 3 below.

  • Click here to read part 1 of the series – Link
  • Click here to read part 2 of the series – Link
  • Click here to read part 3 of the series – Link

HTTP/2 and the TCP Protocol  – The latest set of changes to the HTTP protocol provides support for multiplexing across a single TCP connection, with the goal of pushing as many bytes into the TCP Send Window as early as possible. This goal of HTTP/2 conflicts with standard TCP congestion control policies, which are very conservative about overloading a connection. These congestion control mechanisms include slow start, the small initial size of the congestion window (cwin), and ramping up the size of the cwin slowly using Additive Increase/Multiplicative Decrease. These standard TCP policies may all need to be modified for web sites that want to take advantage of running the HTTP/2 protocol. In this section we will look briefly at the standard TCP behavior that conflicts with HTTP/2 web server multiplexing.

The basic congestion control strategy is for TCP to begin conservatively, widen the session’s congestion window gradually until it reaches the full size of the session’s advertised Receive Window, and back off the transmission rate sharply whenever a congestion signal is detected. The TCP congestion control mechanism defines a congestion window (or cwin) that overlays the Send Window. By default, the cwin is initially a single TCP segment. Using slow start, the cwin increases incrementally until TCP detects a congestion signal, the most frequent being a Send Window full condition that forces TCP to pause and wait for an Acknowledgement packet from the Receiver. Upon detecting a congestion signal, standard TCP also immediately cuts the size of the cwin in half.
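The grow-and-backoff cycle described above can be sketched as a toy model in Python. This is illustrative only, not a real TCP implementation, and it follows the article's simplified description of cwin growing by one segment per round trip; the function name is ours:

```python
# Toy model of the cwin behavior described in the text: start at one segment,
# grow by one segment per round trip, halve on a congestion signal.
def simulate_cwin(round_trips, congestion_at=None):
    """Return the cwin (in segments) in effect during each round trip."""
    cwin = 1
    history = []
    for rt in range(round_trips):
        history.append(cwin)
        if congestion_at is not None and rt == congestion_at:
            cwin = max(1, cwin // 2)   # multiplicative decrease
        else:
            cwin += 1                  # additive increase

    return history

# Ten round trips with a congestion signal detected on the sixth:
print(simulate_cwin(10, congestion_at=5))  # [1, 2, 3, 4, 5, 6, 3, 4, 5, 6]
```

The halving on the sixth round trip is the multiplicative decrease at work; the window then has to climb back additively from the reduced size.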

To understand why these standard TCP congestion control mechanisms conflict with the HTTP/2 goal of maximizing throughput over a single TCP connection, let’s return to the Facebook example GET Request that was described earlier. If you remember, the initial Response message that Facebook generates dynamically based on the identity of the Requester is quite large, approximately 550 KB. This initial Response message frames the web page and contains numerous links to many additional static files – JavaScript files, style sheets, various images, plus advertising content. Let’s look at how TCP on the web server transmits the initial Response message first, and then we will look into the page composition process that gathers all the additional content referenced in the original Response message.

As part of establishing the TCP connection, the web server and the web client negotiate an AdvertisedWindow, the name of the sixteen-bit field in the TCP header that is used to advertise a Receive Window size. The 16-bit AdvertisedWindow field can specify a maximum Receive Window of 64 KB; an additional scaling factor can be added to the TCP Options to increase the size of the sliding Receive Window up to 1 GB. Windows uses a dynamic approach called Receive Window auto-tuning to optimize the size of the Receive Window based on measurement feedback and congestion signals. To improve the page load time, the web server might attempt to negotiate a very large Send Window, but the web client is likely to reject a Send Window larger than 64 KB for the connection, which is the Windows default. So, let’s assume a negotiated value of 64 KB for the TCP AdvertisedWindow.
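The arithmetic behind the 64 KB ceiling and the scaling factor can be shown in a short sketch. The cap of 14 on the shift comes from the TCP Window Scale option; the function name is ours:

```python
# The 16-bit AdvertisedWindow field caps the Receive Window at 64 KB.
# The TCP Window Scale option multiplies the advertised value by 2**scale,
# which takes the maximum window to roughly 1 GB.
def effective_receive_window(advertised, scale=0):
    """Effective window in bytes: 16-bit advertised value shifted by the scale factor."""
    assert 0 <= advertised <= 0xFFFF, "AdvertisedWindow is a 16-bit field"
    assert 0 <= scale <= 14, "the Window Scale shift count is capped at 14"
    return advertised << scale

print(effective_receive_window(0xFFFF))      # 65535 bytes: the unscaled 64 KB max
print(effective_receive_window(0xFFFF, 14))  # 1073725440 bytes: about 1 GB
```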

As part of the standard slow start mechanism in force at the beginning of the session, TCP initially sends one 1,460-byte segment of the 550 KB Response message and then awaits the ACK from the client. The slow start mechanism then increments the congestion window by one segment, so TCP next sends two packets to the client and pauses to wait for the ACK. Then three packets, then four packets, and so on.

Consider a connection with an RTT of 100 ms. TCP can send 1/0.1, or ten cwin-sized transmissions per second. If the size of the cwin increases by one segment per round trip, then during the first second of the Response message transmission, TCP can only send 55 packets, or only about the first 80 KB of the full 550 KB message.
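The 55-packet figure is just the sum 1 + 2 + … + 10, and the byte count follows from the segment size; a quick back-of-the-envelope check:

```python
# With a 100 ms RTT and a cwin that grows by one 1,460-byte segment per round
# trip, how much of the 550 KB Response message goes out in the first second?
MSS = 1460                     # bytes per segment
rtt = 0.1                      # seconds
round_trips = int(1 / rtt)     # ten cwin-sized transmissions per second

packets = sum(range(1, round_trips + 1))   # 1 + 2 + ... + 10
bytes_sent = packets * MSS

print(packets)                  # 55 packets
print(round(bytes_sent / 1024)) # 78, i.e. roughly 80 KB of the 550 KB message
```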

Windows provides several TCP options that can significantly improve the throughput of the initial Response message transmission, and a similar set of tuning options is available for Linux Apache web servers. These options are useful for improving throughput for any web site that frequently serves up large HTTP objects, but they are especially important in HTTP/2 because the protocol changes have the effect of boosting the potential throughput of each connection. The first is an option to increase the size of the initial cwin: InitialCongestionWindowMss. The second option is to change the IIS web server’s default CongestionProvider to the Compound TCP policy because Compound TCP tries to negotiate a larger Send Window and uses bigger increments to ramp up the cwin more aggressively.

For example, the following Windows PowerShell command:

Set-NetTCPSetting –SettingName Custom –CongestionProvider CTCP –InitialCongestionWindowMss 16

sets the initial size of the cwin to 16 segments (about 23 KB) and switches to the Compound TCP congestion policy, which increases the size of the cwin faster than the normal policy. These are more aggressive settings than the very conservative TCP congestion control defaults, but are consistent with customers accessing your web site over mainly high-bandwidth broadband connections.

Another TCP option setting is especially important in HTTP/2, where the web client and web server communicate over a single TCP connection. By default, TCP uses Additive Increase/Multiplicative Decrease to reduce the size of the CongestionWindow and slow down the rate of data transmission when a congestion signal is detected. The most common congestion signal is a Send Window full condition, when the Sender has filled the AdvertisedWindow with unacknowledged data and is forced to pause and wait for an ACK from the Receiver before it can send more data. As the name implies, Additive Increase/Multiplicative Decrease cuts the size of the current CongestionWindow in half when a congestion signal is detected.

Returning to the monolithic Facebook web application for a moment, you may recall that the initial HTTP Response message referenced as many as 200 external resources, most of which are files consolidated on just two domains. With HTTP/2 multiplexing, the web browser establishes one session for each of those two domains and starts firing off GET Requests to them without pausing to wait for Response messages. On the web server side at a facility like Facebook, these GET Requests are processed in parallel on the massive web server back-end, like the one illustrated in Figure 3. A congestion signal that shrinks the size of the cwin on the single TCP connection that HTTP/2 clients and servers use to communicate has a major impact on potential throughput for that connection.

This is one of the places where a federated HTTP/1.x site often outperforms HTTP/2. To understand why, consider what happens when a congestion signal is detected for one of the many parallel TCP connections under HTTP/1.x. Instead of funneling the requested content through the front-end web proxy server that consolidates all the Response messages into a set of interleaved streams across a single connection, in HTTP/1.x it is a Best Practice to distribute content across multiple physical domains, which can be accessed using parallel sessions. Figure 4 illustrates the web client in HTTP/1.x opening up six parallel TCP connections in order to initiate six concurrent GET Requests to each of these sharded domains. Note that because HTTP/1.x is sessionless, the parallel connections can be handled independently by separate front-end proxy servers, which adds another element of parallelism to HTTP version 1 web server processing.

Figure 4. Under HTTP/1.x, which supports as many as six parallel sessions per domain, a congestion signal that reduces the size of the cwin in one connection has limited impact on the overall throughput of the active TCP sessions established between the web server and web client.

Figure 4 illustrates what happens when one of the six parallel TCP sessions detects a congestion signal that causes TCP to shrink the current size of the congestion window by 50%. Since the congestion signal only impacts one of the parallel TCP sessions, throughput to the web server, given the six concurrent sessions, is only reduced by 1/12 in this HTTP/1.x example.
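The 1/12 figure can be checked in a couple of lines (abstract units: each connection contributes one "cwin share" of throughput, and the congestion signal halves exactly one of them; the function name is ours):

```python
# Fraction of aggregate throughput remaining after one connection's cwin is
# cut in half, for a given number of parallel connections.
from fractions import Fraction

def throughput_after_loss(connections):
    """One connection drops to half its share; the rest are untouched."""
    remaining = (connections - 1) + Fraction(1, 2)
    return remaining / connections

print(throughput_after_loss(6))  # 11/12 -> only a 1/12 reduction under HTTP/1.x
print(throughput_after_loss(1))  # 1/2   -> a full 50% cut on a single HTTP/2 connection
```

The comparison makes the disproportionate impact on HTTP/2 plain: the same congestion signal that trims aggregate HTTP/1.x throughput by 1/12 halves the throughput of the single multiplexed connection.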

The other congestion signal that causes TCP to shrink the size of the cwin is an unacknowledged packet that needs to be retransmitted. On the likely assumption that the unACKed packet is lost due to congestion encountered somewhere along the route to its destination, when TCP retransmits a packet, it not only reduces the cwin by 50%, it also reverts to slow start. Again, comparing HTTP/2 network traffic that relies on a single connection to HTTP/1.x that has multiple parallel connections, this congestion control policy reduces the overall throughput on the multiplexed HTTP/2 connection disproportionately. In Windows, setting the CwndRestart TCP option to True directs TCP to widen the cwin normally following a retransmit, instead of reverting to slow start.
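The cost of reverting to slow start, versus merely halving the cwin as CwndRestart allows, can be compared with a toy simulation. It reuses the article's simplified one-segment-per-round-trip growth model and is not a real TCP stack:

```python
# Total segments sent over a fixed number of round trips when a packet is lost
# partway through, comparing a revert to slow start (cwin back to one segment)
# with a CwndRestart-style recovery (cwin merely cut in half).
def segments_sent(rounds, loss_at, revert_to_slow_start):
    cwin, total = 1, 0
    for rt in range(rounds):
        total += cwin
        if rt == loss_at:
            cwin = 1 if revert_to_slow_start else max(1, cwin // 2)
        else:
            cwin += 1   # simplified growth: one segment per round trip
    return total

# Twenty round trips with a packet lost on the tenth:
print(segments_sent(20, 10, revert_to_slow_start=True))   # 111 segments
print(segments_sent(20, 10, revert_to_slow_start=False))  # 147 segments
```

Even in this simplified model, avoiding the revert to slow start recovers roughly a third more data in the same number of round trips, and on a single multiplexed HTTP/2 connection that difference is borne by every outstanding stream.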

Setting the CwndRestart option is important in HTTP/2, as a recent study of SPDY performance conducted by researchers at Lancaster University in the UK, headed by Yehia Elkhatib, showed. In the study, Elkhatib and his team systematically investigated the effect of bandwidth, latency and packet loss on SPDY performance, using a range of web applications that varied in size and complexity. On low latency links with significant packet loss – the type of connections many cell phone users get – Elkhatib’s research found that the HTTP/1.x protocol outperformed SPDY. The more significant issue for cell phone web access is that the type of complex web pages served up from the monolithic web sites that HTTP/2 is optimized for do not display well on portable devices with small screens. These portable devices are better served by one of the following strategies: redirecting the GET Requests to a mobile version of the site, responsive web development that queries the screen size and customizes the content accordingly, or native cell phone apps that communicate using web services. As noted earlier, the HTTP/2 multiplexing changes have no impact on web services.

Summary – In summary, HTTP/2 is the first change to the protocol used by all web applications in over 15 years. The major new features that HTTP/2 supports include multiplexing and server push. Multiplexing frees the web client from the serial sequence of Request:Response messages that the original sessionless HTTP protocol required. Along with server push technology, under HTTP/2 web servers acquire greater flexibility to interleave Response messages and maximize throughput across a single TCP connection. Multiplexing and server push should also eliminate the need to inline resources or to aggregate many smaller HTTP objects into one consolidated file in order to reduce the number of GET Requests required to compose the page; together with the fact that domain sharding is no longer necessary, these changes should make web site administration simpler under HTTP/2. Interestingly, the new header compression feature, as well as some other aspects of the HTTP/2 changes, introduces additional session-oriented behavior into the protocol web applications utilize.

Based on performance testing with Google’s experimental SPDY protocol, which introduced similar web application protocol changes impacting both the browser and the web server, it appears that HTTP/2 will benefit monolithic web sites that generate large, complex pages. HTTP/2 multiplexing will have less of an impact on HTTP/1.x sites that are already configured to take advantage of the web client’s capability to establish up to six concurrent sessions with a single domain and download content in parallel. HTTP/1.x web sites that are configured today to maximize parallelism by spreading web page content across many domains, using either a federated model or domain sharding, may need to be re-configured to take better advantage of HTTP/2.

Finally, the standard TCP congestion control policies that are initially quite conservative about overloading a TCP connection come into conflict with the HTTP/2 goal of maximizing throughput across a single TCP connection. Web site administrators should consider setting TCP options that increase the initial size of the TCP congestion window and enable the Compound TCP congestion policy in Windows, which increases the size of the cwin more rapidly. Another setting that allows TCP to be more aggressive in the wake of a lost-packet congestion signal should also be considered. These specific TCP performance options tend to improve throughput over high bandwidth, long latency connections, so they also apply to HTTP/1.x sites that need to serve up large HTTP objects.

Mark B. Friedman (LinkedIn) is a Principal and the CTO at Demand Technology Software, which develops Windows performance tools for computer professionals. As a professional software developer, he is also the author of two well-regarded books on Windows performance, as well as numerous other articles on storage and related performance topics. He was a recipient of the Computer Measurement Group’s A. A. Michelson lifetime achievement award in 2005. He currently lives near Seattle, WA and blogs at
