An introduction to HTTP/2

SUMMARY: This article discusses the changes brought by HTTP/2, along with their implications. Read on for an introduction to the protocol, and a brief discussion of how it improves Internet user experience.

HTTP/2 is the latest revision of the HyperText Transfer Protocol or HTTP [01], which is used by browsers to communicate with web servers. Derived from the older SPDY [02] protocol, HTTP/2 is the first new version of HTTP since the standardization of HTTP/1.1 in RFC 2068 in 1997.

It was developed by the Internet Engineering Task Force (IETF) HTTP working group httpbis (where “bis” means “twice”), and published as RFC 7540 [03] in May 2015.

HTTP/2 adoption

HTTP/2 has been increasingly adopted since its official publication. The web technology survey service W3Techs [04] notes that from September 2017 to September 2018, HTTP/2 support rose from 16% to 30% of all monitored web sites.

Furthermore, the major browsers (e.g. Chrome, Firefox, Edge) already provide full support for HTTP/2 [05]. (Some even shipped experimental implementations before HTTP/2 was accepted as a standard.)

This widespread adoption means that HTTP/2 has the potential to become the de facto communications protocol of the Web.

Motivation behind HTTP/2

The httpbis charter [06] lists several components of HTTP/1.1 that could be improved as motivation for HTTP/2. However, the group’s primary goal was to decrease the latency perceived by the end user.

To do this, httpbis considered minimizing bandwidth overhead via header compression, and cutting round trips via aggressive prefetching techniques (e.g. server push), while at the same time trying to systematically address known performance issues such as connection congestion and the Head-of-Line (HoL) blocking problem [07].

Moreover, HTTP/2 was required to be backwards-compatible, meaning that it had to use the same method verbs, status codes, URIs, and (most) header fields found in HTTP/1.1. HTTP/2 also had to be designed to support common HTTP use cases, such as desktop and mobile web browsers, programming interfaces, proxies and firewalls.

To maintain this compatibility, the working group developed a protocol negotiation mechanism which would allow clients and servers to select among HTTP/1.1, HTTP/2, or even non-HTTP protocols.
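In practice this negotiation happens in two ways: over TLS, via the ALPN extension, in which the client advertises the protocols it supports during the handshake; and over clear text, via the HTTP/1.1 Upgrade header. A minimal sketch of the TLS side, using Python’s standard ssl module:

```python
import ssl

# Client-side TLS context advertising HTTP/2 ("h2") and HTTP/1.1 via ALPN;
# the server selects one of the offered protocols during the TLS handshake.
context = ssl.create_default_context()
context.set_alpn_protocols(["h2", "http/1.1"])

# After wrapping a socket and completing the handshake, the negotiated
# protocol is reported by ssl_sock.selected_alpn_protocol(), which
# returns "h2" if the server agreed to speak HTTP/2.
```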

So what’s new in HTTP/2?

HTTP/2 still uses the same URI schemes and port numbers used in HTTP/1.1 (i.e. port 80 for http URIs, and port 443 for https URIs), but many things are done differently under the hood.

The most fundamental change is the introduction of frames as the basic data unit of HTTP/2.

HTTP/1.1 is a text-based protocol. A client constructs a plain-text request message with a method verb (e.g. GET or POST), a list of headers describing the request, and an optional body that contains application data.

Upon receiving a request, an HTTP/1.1 server responds with a similarly structured response message containing the requested information. Although HTTP/1.1 supports persistent connections, each connection can serve only one request/response exchange at a time, which is why browsers typically open several parallel connections to the same server.
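To make the contrast concrete, a complete HTTP/1.1 request is nothing more than CRLF-delimited ASCII text (the host and path below are illustrative):

```python
# An HTTP/1.1 request message: request line, header lines, blank line.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"   # hypothetical host, for illustration
    "Accept: text/html\r\n"
    "\r\n"                        # the empty line ends the header section
)
print(request)
```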

Conversely, HTTP/2 clients establish a single network connection with the server, which they use for all subsequent network communications. Headers, user data, error messages, and any such information are packed into distinct binary data structures called frames, before being transmitted over the network.

This seems like a small change, but it carries significant implications.
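Concretely, every HTTP/2 frame begins with a fixed 9-octet header: a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a 31-bit stream identifier (RFC 7540, section 4.1). A minimal parser sketch:

```python
def parse_frame_header(header: bytes):
    """Parse the fixed 9-octet HTTP/2 frame header (RFC 7540, section 4.1)."""
    length = int.from_bytes(header[0:3], "big")                  # 24-bit payload length
    frame_type = header[3]                                       # e.g. 0x0 DATA, 0x1 HEADERS
    flags = header[4]                                            # e.g. 0x4 END_HEADERS
    stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF  # clear the reserved bit
    return length, frame_type, flags, stream_id

# A HEADERS frame (type 0x1) with the END_HEADERS flag (0x4) on stream 1,
# announcing a 16-octet payload:
example = (16).to_bytes(3, "big") + bytes([0x1, 0x4]) + (1).to_bytes(4, "big")
print(parse_frame_header(example))  # → (16, 1, 4, 1)
```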

Header compression

A great benefit of using frames is that HTTP/2 headers are packed into HEADERS frames and compressed with HPACK, a compression scheme designed specifically for HTTP header fields. Headers must be transferred before any data, so header compression decreases the bandwidth overhead imposed by HTTP/2.

Header compression, along with the performance-improving HTTP/2 features described below, can be especially useful in mobile or Internet of Things (IoT) applications, where network usage must be kept to a minimum.
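As a small illustration of why HPACK is so effective: common header fields live in a fixed static table, and a field that exactly matches a table entry is transmitted as a single byte (0x80 ORed with the index). A sketch, using a few entries of the real static table:

```python
# A few entries of HPACK's static table (RFC 7541, Appendix A).
STATIC_TABLE = {
    2: (":method", "GET"),
    3: (":method", "POST"),
    7: (":scheme", "https"),
    8: (":status", "200"),
}

def encode_indexed(index: int) -> bytes:
    """Encode a fully-indexed header field; indexes below 127 fit in one byte."""
    return bytes([0x80 | index])

# ":method: GET" (static index 2) compresses from 12 ASCII characters to 1 byte:
print(encode_indexed(2).hex())  # → "82"
```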

Streams and multiplexing

An independent sequence of semantically related frames is called a stream. Each stream is assigned a unique identifier by the endpoint (i.e. client or server) that created it, so that the two endpoints can distinguish among concurrent streams.

Endpoints can interleave frames from several streams over the same HTTP/2 connection, allowing a single network connection to support multiple concurrently open streams. This process is called multiplexing [08].

Reusing the same connection mitigates problems such as connection congestion and the HoL problem mentioned before, and offers better performance and smoother user experience than previous HTTP versions.
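The interleaving can be pictured as each frame carrying its stream identifier, with the receiver reassembling streams by ID. A toy demultiplexer sketch (the stream numbers and payloads are illustrative; in the real protocol, client-initiated streams use odd identifiers):

```python
from collections import defaultdict

# Frames from two streams arrive interleaved on a single connection,
# each tagged with (stream_id, payload).
wire = [(1, b"index"), (3, b"style"), (1, b".html"), (3, b".css")]

# The receiver demultiplexes them by stream identifier.
streams = defaultdict(bytes)
for stream_id, payload in wire:
    streams[stream_id] += payload

print(dict(streams))  # → {1: b'index.html', 3: b'style.css'}
```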

Stream dependency and prioritization

Managing multiple concurrent streams means that some streams will be processed before others. HTTP/2 allows the developer (or administrator) to fine-tune this behavior with a feature called stream dependency.

A stream can be marked as dependent on another stream, indicating that it should only be handled once the stream it depends on has been processed. For example, on a site where the main content of a web page should load before any recommendations for similar content, HTTP/2 allows the recommendation stream to be created as dependent on the main content stream.

HTTP/2 also supports stream prioritization. That is, each stream can be assigned a priority to suggest how urgently the endpoints should allocate resources to handle the stream’s frames.

Prioritization and stream dependency help developers and web site owners optimize their site’s network usage, which can significantly improve their site’s user experience.
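Both hints travel in a PRIORITY frame, whose 5-octet payload holds an exclusive bit, a 31-bit stream dependency, and a weight transmitted on the wire as weight - 1 (RFC 7540, section 6.3). A minimal encoder sketch:

```python
def encode_priority(stream_dep: int, weight: int, exclusive: bool = False) -> bytes:
    """Encode the 5-octet payload of a PRIORITY frame (RFC 7540, section 6.3).

    `weight` is in the range 1..256 and is transmitted as weight - 1.
    """
    dep = stream_dep | (0x80000000 if exclusive else 0)  # exclusive bit is the MSB
    return dep.to_bytes(4, "big") + bytes([weight - 1])

# Make stream 5 depend on stream 3 with weight 16:
print(encode_priority(3, 16).hex())  # → "000000030f"
```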

Server Push

Finally, HTTP/2 can improve a web site’s performance through “push” functionality. An HTTP/2 web server can respond with data for more requests than the client originally made. This allows the server to supply data it knows a web browser will need to render a page, without waiting for the browser to examine the first response, and thus without the overhead of an additional request cycle.

Server push gives developers complete control over the number of requests required for a browser to render their web site. When used correctly, this feature can minimize network overhead.

Naturally, misuse of the push feature can also waste more bandwidth than is actually necessary. For this reason, HTTP/2 allows a client to request that server push be disabled when first negotiating a connection.
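This opt-out is carried in a SETTINGS frame: each setting is a 16-bit identifier followed by a 32-bit value, and sending SETTINGS_ENABLE_PUSH (identifier 0x2) with value 0 disables push for the connection (RFC 7540, section 6.5.2). A sketch:

```python
def encode_setting(identifier: int, value: int) -> bytes:
    """Encode one SETTINGS entry: 16-bit identifier + 32-bit value (RFC 7540, 6.5.1)."""
    return identifier.to_bytes(2, "big") + value.to_bytes(4, "big")

SETTINGS_ENABLE_PUSH = 0x2

# A client disables server push by sending SETTINGS_ENABLE_PUSH = 0:
print(encode_setting(SETTINGS_ENABLE_PUSH, 0).hex())  # → "000200000000"
```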

HTTP/2 Security

If you have read this far, it should be clear that the developers of HTTP/2 put real effort into improving performance. It should be noted, however, that HTTP/2 can also improve browser users’ overall security.

More specifically, HTTP/2 is defined for both HTTP URIs (i.e. without encryption) and HTTPS URIs (over TLS encrypted channels). Although the standard itself does not require the use of encryption, all major browser implementations (i.e. Firefox [09], Chrome, Safari, Opera, IE, Edge) have decided that they will only support HTTP/2 over TLS.

In fact, browsers treat clear-text HTTP/2 and HTTP/2 over encrypted TLS as two distinct protocols: encrypted HTTP/2 is called h2 and clear-text HTTP/2 is called h2c. As of this writing, none of the major browsers supports h2c, which means that TLS encryption is mandatory for a web site to take advantage of HTTP/2’s other features. Hence, when HTTP/2 becomes the default web protocol, owners of legacy web sites that have not yet upgraded to SSL/TLS will be strongly motivated to finally do so.

If you don’t know why protecting a web site with an SSL/TLS certificate is really important for user security, please take a look at this article.

Conclusion

Widespread adoption of HTTP/2 will bring about a new and improved Web: faster, lighter on bandwidth, and more secure. Its mainstream adoption is sure to make the overall web user experience smoother and safer.

Get a certificate today and join us in the future.

References

  1. HTTP protocol
  2. SPDY protocol
  3. HTTP/2 specification
  4. W3Techs HTTP/2 adoption survey
  5. HTTP/2 adoption in browsers
  6. httpbis charter
  7. HOL Blocking
  8. Multiplexing
  9. Firefox on HTTP/2