Series: Experiences with CoralCDN

Over the next few weeks, I’ll be posting a number of my “experiences” from the design and deployment of CoralCDN.  For those who aren’t familiar with CoralCDN, it’s a semi-open, self-organizing content distribution network (CDN) that I’ve been operating on PlanetLab for the past five years.

Our goal with CoralCDN was to democratize content distribution:  to make desired content available to everybody, regardless of the publisher’s own resources or dedicated hosting services.  It provides an open infrastructure that any publisher is free to use, without any prior registration. Publishing through CoralCDN is as simple as appending a suffix to a URL’s hostname, e.g., http://www.cnn.com.nyud.net/.  Clients accessing such Coralized URLs are transparently directed by CoralCDN’s network of DNS servers to nearby participating proxies. These proxies, in turn, coordinate to serve content and thus minimize load on origin servers.
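To give a sense of how simple that rewriting is, here is a small helper that applies the hostname-suffix rule.  This is only an illustrative sketch of the rule described above, not code from CoralCDN itself, and it ignores corner cases such as non-standard ports:

```python
from urllib.parse import urlsplit, urlunsplit

def coralize(url: str) -> str:
    """Rewrite a URL so requests are served through CoralCDN proxies.

    Illustrative sketch only: it simply appends ".nyud.net" to the
    hostname and skips URLs that are already Coralized.
    """
    parts = urlsplit(url)
    if parts.hostname is None or parts.hostname.endswith(".nyud.net"):
        return url  # nothing to rewrite, or already Coralized
    return urlunsplit(parts._replace(netloc=parts.hostname + ".nyud.net"))

# Example: http://www.cnn.com/ -> http://www.cnn.com.nyud.net/
print(coralize("http://www.cnn.com/"))
```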

CoralCDN was designed to automatically and scalably handle sudden spikes in traffic for new content. It can efficiently discover cached content anywhere in its network, and it dynamically replicates content in proportion to its popularity.  Both techniques help minimize origin requests and satisfy changing traffic demands.
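To make that concrete, a proxy’s request path looks roughly like the sketch below.  The names (local_cache, peer_index, fetch_from_peer) and the dictionary stand-ins for the distributed index are mine, not CoralCDN’s; the actual mechanism is summarized in the upcoming architecture post and detailed in the NSDI ’04 paper:

```python
import urllib.request

local_cache = {}   # URL -> object body held by this proxy
peer_index = {}    # stand-in for the distributed index: URL -> list of proxies

MY_ADDRESS = "proxy.example.org"   # hypothetical identity of this proxy

def fetch_from_peer(proxy, url):
    """Placeholder for fetching a cached copy from another proxy."""
    return None    # in this sketch, assume no peer responds

def handle_request(url):
    # 1. Serve from the local cache if this proxy already holds the object.
    if url in local_cache:
        return local_cache[url]

    # 2. Otherwise, consult the (distributed) index for proxies that have
    #    recently cached the object, and try to fetch from one of them.
    body = None
    for proxy in peer_index.get(url, []):
        body = fetch_from_peer(proxy, url)
        if body is not None:
            break

    # 3. Fall back to the origin server only if no cached copy is found.
    if body is None:
        with urllib.request.urlopen(url) as resp:
            body = resp.read()

    # 4. Cache locally and advertise the copy, so the number of replicas
    #    grows with the object's popularity.
    local_cache[url] = body
    peer_index.setdefault(url, []).append(MY_ADDRESS)
    return body
```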

While originally designed for decentralized and unmanaged settings, CoralCDN was deployed on the PlanetLab research network in March 2004.  It has since remained publicly available at hundreds of PlanetLab sites world-wide.  Accounting for the majority of public PlanetLab traffic and users, CoralCDN typically serves several terabytes of data per day, in response to 10–25 million HTTP requests from more than a million users (unique client IP addresses).

Over the course of its deployment, I’ve come to recognize several surprising realities.  On a positive note, CoralCDN’s notably simple interface led to widespread and innovative uses.  Sites began using CoralCDN as an elastic infrastructure, dynamically redirecting traffic to CoralCDN at times of high resource contention and pulling back as traffic levels abated.  These usage patterns perhaps presaged some of the “elastic computing” arguments and deployments for cloud infrastructure such as Amazon’s AWS.

On the flip side, fundamental parts of CoralCDN’s design were wrong.  Considering its various usage patterns—supporting random surfing, resurrecting unavailable content, or distributing popular content—CoralCDN’s design is unnecessary for the first, insufficient for the second, and overkill for the third.

But also interesting were the unexpected challenges it encountered while operating on a deployment platform (PlanetLab) that was heterogeneous, shared, virtualized, loosely managed, and itself oversubscribed.

Over the next set of posts, I’ll be running through a series of frustrations, realizations, sometimes amusing anecdotes, and hopefully some core lessons that I experienced while operating CoralCDN.  While these will be focused on CoralCDN’s architecture and deployment, they hopefully will prove to be a bit more broadly applicable.  On the platform side, there are important points for operators deploying services atop virtualized hosting platforms—not merely restricted to PlanetLab—as well as those managing these very platforms.  From the perspective of open content delivery, they argue for a step toward a more peer-to-peer design (part of the reason we’re now building FireCoral), as opposed to CoralCDN’s design point of working with unmodified browsers.

Before we go into lessons, however, I’ll be providing a brief post that summarizes CoralCDN’s system architecture.  A more detailed description can be found in our original NSDI’04 paper.
