The SPDY protocol illustrated with diagrams
Posted on: 2012-07-26 21:21:24

Google is trying to make web browsers faster by inventing a new protocol called SPDY. The acronym doesn't mean anything, but it sounds similar to the word "speedy". SPDY has a few features to help download web pages faster, but it also carries a lot of hidden complexity that implementors have to worry about.

The driving force behind SPDY is to save TCP setup time. Previously, to retrieve a web page, the browser would make a single connection to the web server and then request the resources one at a time: for example, first the HTML page, then the scripts, then the graphics. The problem is that the web server may take a long time to generate some parts of the web page while it could serve other parts immediately. If the browser happens to request the slow parts first, every other resource on the page has to wait, and the page feels slower.

The new SPDY protocol is simply a way of allowing different resources to be sent over the same connection, in parallel. Here is how it works.

First, the browser opens a TCP connection to the server. This is when SPDY takes over, and where the first bit of complexity begins. Instead of sending an HTTP request, the browser will send a special control frame. SPDY is a framed protocol. That is, each message is prefixed by its length.

  |C| Version(15bits) | Type(16bits) |
  | Flags (8)  |  Length (24 bits)   |
  |               Data               |
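Under the layout above, a control frame can be packed in a few lines of Python. This is my own sketch: the helper name is invented, and the version number 2 (from the SPDY/2 draft) is assumed.

```python
import struct

def pack_control_frame(version, frame_type, flags, data):
    # First 16 bits: the control bit (always 1) plus the 15-bit version.
    # Next 16 bits: the frame type.  Then one 32-bit word holding the
    # 8-bit flags in its top byte and the 24-bit length below it.
    header = struct.pack(">HHI",
                         0x8000 | version,
                         frame_type,
                         (flags << 24) | len(data))
    return header + data
```

Because the control bit is always set, the first byte of every control frame has its high bit on.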

The first bit of complexity, then, is how the browser knows that it may use the SPDY protocol. Maybe it is talking to one of the millions of servers that do not speak SPDY. On the server side, the server must detect that the client supports SPDY. At the beginning of an HTTP connection, the first bytes the client sends are always an ASCII method name such as "GET" or "POST". With SPDY, the first bytes carry the control bit, version, and type. The server could, perhaps, assume that the client is using SPDY if the first message is a semantically valid SPDY message. In practice, however, connections are made over SSL, which includes a negotiation step (the Next Protocol Negotiation extension) to agree on the protocol to be used.
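A server sniffing the protocol without SSL could exploit the framing itself. The heuristic below is my own sketch, not part of any spec: the control bit makes the first byte of a SPDY control frame non-ASCII, while an HTTP request starts with plain ASCII.

```python
def looks_like_spdy(first_bytes):
    # A SPDY control frame begins with the control bit set, so its first
    # byte is >= 0x80.  An HTTP request begins with an ASCII method name
    # ("GET", "POST", ...), whose bytes are all below 0x80.
    return len(first_bytes) > 0 and first_bytes[0] >= 0x80
```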


At any time in the connection, either side may send the HELLO frame. The HELLO frame may contain four numbers:
  • uplink bandwidth,
  • downlink bandwidth,
  • maximum number of streams (which should be at least 100),
  • round trip time.

However, finding out what these values are is difficult and unreliable. Your bandwidth could change from one moment to the next. At best, they are an approximation. No response is necessary for the HELLO packet.


Either the server or the client may send a PING packet. Its payload is a 32-bit number, which is expected to be unique. Whenever the client or server receives a PING packet, it should echo it back as soon as possible, with a priority higher than all other activities. This helps in calculating the round trip time. However, there are some issues with this priority scheme. What seems to be a simple protocol becomes more complex if both the client and the server must maintain separate queues of data to be sent, each with a different priority.
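A round-trip-time measurement with PING could look like the following sketch. The frame type number 6 comes from the SPDY/2 draft; the helper names and the bookkeeping dictionary are my own.

```python
import struct
import time

sent = {}  # ping id -> time the ping was sent

def make_ping(ping_id):
    # PING is a control frame (version 2, type 6 in the SPDY/2 draft)
    # whose 4-byte payload is just the unique 32-bit id.
    sent[ping_id] = time.monotonic()
    return struct.pack(">HHI", 0x8000 | 2, 6, 4) + struct.pack(">I", ping_id)

def on_ping_echo(ping_id):
    # The peer echoed our id back; the elapsed time is one round trip.
    return time.monotonic() - sent.pop(ping_id)
```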


To create a stream, send a SYN_STREAM control message. (A flag on the SYN_STREAM may also be used to half-close the stream immediately.) The SYN_STREAM contains a stream id, compressed HTTP headers, and a flag indicating whether the stream is bidirectional or unidirectional. If it is bidirectional, the other party replies with a SYN_REPLY message, which can also contain headers.

Part of what makes SPDY speedy is that the headers are compressed. Although a general zlib compressor is used, it is initialized with a standardized string of text prior to compression. This text string contains many common HTTP headers so that it performs best when these are used.
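Python's zlib exposes this mechanism through the zdict parameter. The short dictionary below is only a stand-in of my own; the real SPDY dictionary is a much longer, standardized string of common header names and values.

```python
import zlib

# Illustrative stand-in for the standardized SPDY dictionary.
HEADER_DICT = (b"optionsgetheadpostputdeletetrace"
               b"acceptaccept-charsetaccept-encodingaccept-language"
               b"hostuser-agentcontent-lengthcontent-type")

def compress_headers(headers):
    # Initializing the compressor with a preset dictionary lets it
    # back-reference common header names it has never actually seen.
    c = zlib.compressobj(zdict=HEADER_DICT)
    return c.compress(headers) + c.flush()

def decompress_headers(blob):
    d = zlib.decompressobj(zdict=HEADER_DICT)
    return d.decompress(blob) + d.flush()
```

For typical headers, the dictionary-primed stream comes out smaller than plain zlib compression of the same bytes.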

How data is transmitted

After a stream is set up, and if its SYN_STREAM did not carry the FINAL flag, the sender transmits the rest of the data in DATA frames. The final DATA frame carries the FINAL flag. Although that stream is then considered done, other streams may still be started.
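Assuming the data frame layout from the SPDY draft (a cleared control bit, a 31-bit stream id, then 8 bits of flags and a 24-bit length), the frames of a stream could be built like this. The flag value 0x01 is the draft's FLAG_FIN, which this article calls the FINAL flag; the helper name is mine.

```python
import struct

FLAG_FINAL = 0x01  # marks the last DATA frame of a stream

def pack_data_frame(stream_id, data, final=False):
    flags = FLAG_FINAL if final else 0
    # The control bit is 0 for data frames, so the top bit of the
    # stream-id word stays clear; flags and length share one word.
    return struct.pack(">II",
                       stream_id & 0x7FFFFFFF,
                       (flags << 24) | len(data)) + data
```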

Ending the connection

Either side may end the TCP connection at any time. However, any active streams that have not been closed by receipt of the FINAL flag are deemed to be aborted.

Buffer Bloat, the enemy of SPDY

SPDY attempts to make browsing faster by multiplexing different resources over a single TCP connection. However, a problem with this approach is that TCP is not designed to multiplex resources at different priority levels. Consider the case where the server is sending a large web page over a fast but high-latency connection. During the download of the web page, the client notices that it must download some JavaScript to continue processing the page. However, because TCP delivers bytes strictly in order, although the client requests the JavaScript right away, it must download the entire web page before it begins to receive the script file. Here is a sequence diagram depicting this condition. We have used an additional participant in the diagram to show how the various servers buffer data and cause high latency.

The interesting thing is that if the browser had not used SPDY, and had instead used an additional TCP connection to request the script file, it could have arrived much sooner. A single TCP connection is like a highway that can clog up with traffic: all the cars ahead of you have to get to the end before you can. Having multiple TCP connections is like having an extra lane over which higher-priority traffic can arrive.
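The head-of-line effect is easy to put rough numbers on. The link speed and file sizes below are invented purely for illustration.

```python
# Back-of-the-envelope model: one ordered byte stream vs. two.
# All numbers are made up for illustration.
LINK_BYTES_PER_SEC = 1_000_000
PAGE_BYTES = 900_000    # large HTML page, already queued
SCRIPT_BYTES = 50_000   # script requested mid-download

# One multiplexed connection: the script's bytes sit behind the
# rest of the page, so the script arrives only after everything else.
one_connection = (PAGE_BYTES + SCRIPT_BYTES) / LINK_BYTES_PER_SEC

# A separate connection: the script shares the link (assume a fair
# half-share of bandwidth) but is not stuck behind the page.
two_connections = SCRIPT_BYTES / (LINK_BYTES_PER_SEC / 2)

print(one_connection, two_connections)  # 0.95 vs. 0.1 seconds
```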

Perhaps servers can avoid this condition by carefully managing the transfer rate of all streams, and by assigning streams priorities and bandwidth. But this management adds great complexity to what should be a simple protocol. Hence, implementers have to deal with a lot of hidden complexity.


