HTTP Pipelining and Efficient Proxies
It has taken about a week longer than I expected, and three total rewrites, but I now have a functioning distributed, bi-directional caching proxy written in Erlang. I am still settling on a name and a license for the project, but it will be open source and available on GitHub. After a year of running Varnish in production, longer with HAProxy, and an ever-increasing number of key-value data stores, web frameworks, and over a dozen programming languages, I decided to build a new piece of infrastructure that bridges the gap between caching proxies like Squid and Varnish, TCP/IP proxies like HAProxy, and RESTful web services.
The principal design decisions sought to do the following:
- Make URI routing easy, allowing for canonical URIs to map to multiple logical entities over time
- Make it easy to manage cache state across multiple clusters, so that a PUT to a URI is reflected across all caches as soon as possible
- Allow for multiple applications to implement segments of a URL
- Support pipelining and HTTP tunneling over HTTP
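To make the pipelining point concrete, here is a minimal sketch (in Python, with helper names of my own invention, not from the project) of what pipelined requests look like on the wire: the client writes several requests back-to-back on one connection, and the other end must split that single byte stream back into individual messages. For simplicity this handles only bodiless requests like GET, where each message ends at the blank line.

```python
def build_request(method: str, path: str, host: str) -> bytes:
    """Serialize a minimal bodiless HTTP/1.1 request."""
    return (f"{method} {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n\r\n").encode("ascii")

def split_pipelined(stream: bytes) -> list:
    """Split a pipelined byte stream into individual requests.
    Each bodiless request ends at the blank line (CRLF CRLF)."""
    requests = []
    while stream:
        end = stream.find(b"\r\n\r\n")
        if end == -1:
            break  # partial request still in flight; wait for more bytes
        requests.append(stream[:end + 4])
        stream = stream[end + 4:]
    return requests

# Two requests share one connection's byte stream:
wire = build_request("GET", "/a", "example.com") + \
       build_request("GET", "/b", "example.com")
reqs = split_pipelined(wire)
print(len(reqs))                   # 2
print(reqs[0].split(b"\r\n")[0])   # b'GET /a HTTP/1.1'
```

Note that responses must come back in the same order the requests were sent, which is part of why a proxy that supports pipelining has to do careful bookkeeping per connection.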
This last bit is what I really want to talk about today. HTTP is a rather expensive protocol to parse. But it also is a very