HTTP/2 is the first major upgrade to the Hypertext Transfer Protocol since 1999. Its goal is to improve website performance by optimizing how HTTP is expressed “on the wire.” It doesn’t change the semantics of HTTP, which means header fields, status codes, and cookies work exactly the same way as in HTTP/1.1.
HTTP/2 provides the same tools that HTTP/1.1 does, but it adds some sweet new capabilities on top. Unlike HTTP/1.1, HTTP/2 is a multiplexed protocol, which means it can handle parallel requests over a single connection; it is a binary protocol, which enables better optimization techniques; it compresses headers; and it allows a server to populate data in a client's cache before it is required, through a mechanism called server push. On top of that, all major browsers only support HTTP/2 over TLS, so connections are always encrypted.
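To make the TLS point concrete, here is a minimal Go sketch of an HTTP/2-capable server, assuming placeholder certificate files (cert.pem, key.pem): Go's net/http enables HTTP/2 automatically on TLS listeners via ALPN, and the handler simply reports which protocol version the client negotiated.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// HTTP semantics are unchanged: the same headers and status codes apply,
		// but r.Proto reports the negotiated version, e.g. "HTTP/2.0".
		fmt.Fprintf(w, "Hello over %s\n", r.Proto)
	})

	// Go's net/http negotiates HTTP/2 automatically on TLS listeners (via ALPN),
	// matching the browser requirement that HTTP/2 run over an encrypted connection.
	// cert.pem and key.pem are placeholder certificate/key file names.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```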
Benefits of HTTP/2
HTTP/2 introduces several new features, all designed to improve page load times for your website visitors. The short Go sketches after the list illustrate each of them.
1. Multiplexing:
This is the most significant feature of HTTP/2. HTTP/1.1 requires each request to use its own TCP connection, but multiplexing allows the browser to issue multiple requests over a single TCP connection.
2. Header Compression:
HTTP/2 requires all HTTP headers to be sent in a compressed format, reducing the amount of information that needs to be exchanged between the browser and the server. HTTP/1.1 does not provide any header compression.
3. Server Push:
This lets the server send web assets to your browser before it knows it needs them, which speeds up page load time by eliminating unnecessary round trips.
4. Stream Priority:
Stream priority is a mechanism for browsers to specify which assets they would like to receive first. For example, an HTTP/2-aware browser can use stream priority to load the HTML for a page first, followed by CSS, then JavaScript, and finally image assets. This order allows the browser to render the page as quickly as possible.
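Here is a rough sketch of multiplexing (item 1) from the client side, assuming a made-up page at https://example.com with invented asset paths: Go's default transport negotiates HTTP/2 over TLS and multiplexes concurrent requests as separate streams on one connection, and resp.Proto shows which version was actually used.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

func main() {
	// One shared client: for an HTTP/2 server, Go reuses a single TCP connection
	// per host and multiplexes concurrent requests as separate streams on it.
	client := &http.Client{}
	paths := []string{"/", "/style.css", "/app.js", "/logo.png"} // hypothetical assets

	var wg sync.WaitGroup
	for _, p := range paths {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			resp, err := client.Get("https://example.com" + p)
			if err != nil {
				fmt.Println(p, "error:", err)
				return
			}
			defer resp.Body.Close()
			io.Copy(io.Discard, resp.Body)
			// resp.Proto reports the negotiated protocol, e.g. "HTTP/2.0".
			fmt.Println(p, resp.Proto, resp.Status)
		}(p)
	}
	wg.Wait()
}
```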
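Header compression (item 2) is done with HPACK. As an illustration, the sketch below uses the golang.org/x/net/http2/hpack package to encode a few invented request headers and compares the encoded size with the plain-text size.

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2/hpack"
)

func main() {
	// Invented headers, roughly what a browser request carries.
	headers := []hpack.HeaderField{
		{Name: ":method", Value: "GET"},
		{Name: ":path", Value: "/index.html"},
		{Name: "accept-encoding", Value: "gzip, deflate, br"},
		{Name: "cookie", Value: "session=abc123"},
	}

	plain := 0
	for _, h := range headers {
		plain += len(h.Name) + len(h.Value)
	}

	// HPACK indexes common header names/values and Huffman-codes the rest.
	var buf bytes.Buffer
	enc := hpack.NewEncoder(&buf)
	for _, h := range headers {
		if err := enc.WriteField(h); err != nil {
			panic(err)
		}
	}
	fmt.Printf("plain text: %d bytes, HPACK encoded: %d bytes\n", plain, buf.Len())
}
```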
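For server push (item 3), Go's standard library exposes the http.Pusher interface on HTTP/2 connections. The handler below is a sketch with placeholder paths and certificate file names; clients are free to refuse or disable pushes, so the error is only logged.

```go
package main

import (
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// On an HTTP/2 connection the ResponseWriter also implements http.Pusher,
	// so the stylesheet can be pushed before the browser asks for it.
	if pusher, ok := w.(http.Pusher); ok {
		if err := pusher.Push("/static/style.css", nil); err != nil {
			log.Println("push failed:", err)
		}
	}
	w.Write([]byte(`<html><head><link rel="stylesheet" href="/static/style.css"></head><body>Hello</body></html>`))
}

func main() {
	http.HandleFunc("/", handler)
	http.Handle("/static/", http.FileServer(http.Dir(".")))
	// Push only works over HTTP/2, which Go enables on TLS listeners;
	// cert.pem and key.pem are placeholder file names.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```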
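Stream priority (item 4) is expressed at the frame layer rather than in the high-level net/http API. As a rough illustration only, the sketch below uses the Framer from golang.org/x/net/http2 to serialize a PRIORITY frame into a buffer; the stream IDs and weight are invented for the example.

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2"
)

func main() {
	var buf bytes.Buffer
	fr := http2.NewFramer(&buf, nil) // write-only framer backed by a buffer

	// Invented example: declare that stream 5 (say, the page's CSS) depends on
	// stream 3 (the HTML) with a high weight. On the wire the effective weight
	// is Weight+1, so 219 means 220 out of a maximum of 256.
	err := fr.WritePriority(5, http2.PriorityParam{
		StreamDep: 3,
		Weight:    219,
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("raw PRIORITY frame: % x\n", buf.Bytes())
}
```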
What's Wrong with HTTP/1.1
HTTP/1.1 was limited to processing only one outstanding request per TCP connection, forcing browsers to open multiple TCP connections in order to process multiple requests simultaneously.
However, using too many TCP connections in parallel leads to TCP congestion and an unfair monopolization of network resources. Web browsers that open multiple connections to process additional requests occupy a greater share of the available network resources, degrading network performance for other users.
[Image: HTTP requests]
Issuing many requests from the browser also duplicates data on the transmission wires, which in turn requires additional processing at the end-nodes to extract the desired information free of errors.
The internet industry was naturally forced to work around these constraints with practices such as domain sharding, concatenation, data inlining and spriting, among others. Ineffective use of the underlying TCP connections with HTTP/1.1 also leads to poor resource prioritization, and performance degrades sharply as web applications grow in complexity, functionality and scope.
[Image: Domain sharding]
The web has evolved well beyond the capacity of legacy HTTP-based networking technologies. The core design of HTTP/1.1, developed over a decade ago, has opened the door to several embarrassing performance and security loopholes.
The cookie hack, for instance, allows cybercriminals to reuse a previous working session to compromise an account, because HTTP/1.1 provides no session endpoint-identity facilities. While similar security concerns will continue to haunt HTTP/2, the new application protocol is designed with better security capabilities, such as an improved implementation of new TLS features.
Resources: Cloudflare, Wikipedia