On Dev
Programming: tools and languages
From a web developer’s perspective, I’d like to see a few changes to the HyperText Transfer Protocol. In some areas of the protocol, I’d love to just start over with a new design from scratch. But since HTTP has been around for so long, I don’t think we’re going to get rid of it that easily (just like Java).
What could be improved in HTTP?
The first rule in Yahoo’s “Best Practices for Speeding Up Your Web Site” is to minimize the number of HTTP requests your web site requires. This means combining all your CSS into one file, all your JS into one file, etc. We can take this further:
Most front-end developers probably know about CSS sprites: a way to reduce the number of image files downloaded from a server by packaging multiple images into one image file. Using CSS background positioning, you specify which region of the combined image should show in each part of the page. This has advantages and disadvantages.
By combining media into one file, performance improves on the server side and in bandwidth, because only one response needs to be sent for that group of images. However, complexity increases: the developer has to do extra work to support this, combining JS and CSS files, creating the sprites, and writing the CSS that places them correctly on the page (though there are tools to help with this).
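To make the build-time side of this concrete, here is a minimal sketch of the "combine your CSS/JS into one file" step; the file names are hypothetical and the concatenation is deliberately naive (no minification):

```python
from pathlib import Path

def bundle(sources, target):
    """Concatenate several text assets (CSS or JS files) into one
    file, so a page triggers one HTTP request instead of many."""
    combined = "\n".join(Path(src).read_text() for src in sources)
    Path(target).write_text(combined)
    return combined

# Hypothetical usage at build time:
# bundle(["reset.css", "layout.css", "theme.css"], "site.css")
```

A real build tool would also minify and version the output, but the request-count saving comes from the concatenation alone.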
Is it possible to build this kind of functionality into the protocol itself? Could the client make one request and get all the data it needs for that request in a single response?
When a client requests HTML from an HTTP server (whether dynamic or static), the server could work out what else the client will need, in terms of CSS, JS and media, by processing the response it’s about to send. For example, it could parse the HTML and look for image tags, stylesheets and JS files; or the related files could be specified in a configuration file. The server could then start sending the related files immediately after the requested file is sent.
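The parsing step above could look something like this sketch, which uses Python’s standard-library `html.parser` to pull out the URLs of scripts, stylesheets and images (the HTML sample is invented for illustration):

```python
from html.parser import HTMLParser

class ResourceFinder(HTMLParser):
    """Collect the URLs of assets a page depends on, so a server
    could send them along with the HTML response."""
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # <img src=...> and <script src=...> reference assets directly.
        if tag in ("img", "script") and "src" in attrs:
            self.resources.append(attrs["src"])
        # Stylesheets arrive via <link rel="stylesheet" href=...>.
        elif tag == "link" and attrs.get("rel") == "stylesheet" and "href" in attrs:
            self.resources.append(attrs["href"])

finder = ResourceFinder()
finder.feed('<link rel="stylesheet" href="site.css">'
            '<script src="app.js"></script>'
            '<img src="logo.png">')
# finder.resources is now ["site.css", "app.js", "logo.png"]
```

A production server would cache this result rather than re-parse the page on every request, which is also why the configuration-file alternative is attractive.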
There are a few approaches one could take:
1. A design like multipart messages could be used: many files are sent at once under one content type (an ugly solution, in my opinion).
2. Many files could be sent sequentially, as if they had been requested over a persistent connection.
3. An archive of all the necessary files could be sent as one file.
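To make approach (3) concrete, here is a sketch using Python’s `tarfile` module to pack a page’s assets into a single in-memory archive that could be sent as one response body (the file names and contents are hypothetical):

```python
import io
import tarfile

def pack_assets(files):
    """Bundle several (name, bytes) assets into one tar archive
    held in memory, suitable for sending as a single response."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files:
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

# Hypothetical assets for one page:
archive = pack_assets([("site.css", b"body { margin: 0; }"),
                       ("app.js", b"init();")])
```

The obvious downside, as noted below, is that the archive is all-or-nothing: the client cannot easily receive just the one member it is missing.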
Approach (2) seems the most efficient because the browser can start parsing the HTML first and then apply the related files as they come in from the wire.
But what if the client only needs a single file from that bundle because it has the rest cached? In approaches (2) and (3), it would be complex for the client to specify which files it needs when only some are missing. And what if the user clicks a link on the first page that leads to a very similar page on the same server? How does the client request only the files it lacks? There is no easy solution to this problem.
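One conceivable negotiation scheme would have the client list the bundled files it already has cached, so the server can prune the bundle. The sketch below builds such a request; the `X-Cached-Assets` header is entirely made up, nothing like it exists in HTTP:

```python
# Hypothetical: the client advertises which bundled assets it
# already holds, so the server can omit them from the response.
cached = ["site.css", "app.js"]

request_lines = [
    "GET /article/2 HTTP/1.1",
    "Host: example.com",
    # Invented header; not part of any HTTP specification.
    "X-Cached-Assets: " + ", ".join(cached),
    "",
    "",
]
request = "\r\n".join(request_lines)
```

Even this sketch shows the cost: every request grows with the size of the cache manifest, which is exactly the kind of trade-off that makes the problem hard.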
For this post, I’ve focused on one way the protocol could be improved in terms of performance. Is it smart to focus on performance when improving HTTP? What other areas of the protocol need improvement: functionality, security, caching? If you were designing a new protocol, how would you improve the web? How do you imagine the web working in 20 years? Leave a comment…