Our geek adventures
I’m writing a series of posts about how we scaled our product at Floorplanner.com. We started with a single server, then we created application servers in the cloud, and after that we moved our databases to the cloud. This post is about our next step: how we improved the delivery of our static files.
The floorplan drawing tool (the Editor) is a very important part of our product. It’s the place where all the floorplans are created. It comes with a 3D view, which is actually a separate application. Both apps are Flash applications, so they run client-side in the browser via the Flash Player.
Inside the 2D editor a floorplan can be decorated with furniture items and floor textures. These are all separate files in our system, and we have thousands of them. We use separate files to keep the system flexible. A disadvantage is that it takes a lot of HTTP requests to load a floorplan with 50+ furniture items, but that’s another subject. Let’s move on.
Our file server
All these files were stored on the one machine we had. Being a file server was just another task it had to do. After we moved the website to the application nodes and the databases to the cloud, the old machine was still serving our files. The problem with this setup was that it was slow and it was a single point of failure.
The machine was located in the east of the US. For visitors from inside the US the time it took to download the 2D editor and the related files was acceptable. In Europe it was a bit slower, but still good enough. South America (especially Brazil) and Asia were a completely different story: it took 10-20 times longer to get the files to our users there. Distance matters. There was only one solution: we had to make sure that the files would be closer to our users.
CloudFront & S3
Luckily this problem was already solved by others: we had to use a content delivery network, aka CDN. Amazon had its own, called CloudFront. Before we could use it, we had to put our files on S3 (Simple Storage Service), the file storage solution from AWS.
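In practice, that migration boils down to copying the assets into an S3 bucket and pointing a CloudFront distribution at that bucket as its origin. A minimal sketch of the upload step using today's AWS CLI (the bucket name and local path are hypothetical, not our actual setup):

```shell
# Bulk-copy local static assets into an S3 bucket (name is hypothetical).
# --acl public-read makes objects readable by anyone, which a public CDN
# origin needs; a long Cache-Control header lets edge servers cache
# aggressively, so most requests never reach S3 at all.
aws s3 sync ./public/assets s3://example-static-assets \
    --acl public-read \
    --cache-control "public, max-age=31536000"
```

After that, a CloudFront distribution is created (in the AWS console or via the API) with the bucket as its origin, and the application serves asset URLs from the distribution's domain instead of the old file server.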
“Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 Region.” This directly solved our single-point-of-failure issue. While this has held true in our experience, there have been some reports of S3 outages. “The number of objects you can store is unlimited.” That’s a nice plus, no need to worry about that anymore: the number one reason for a crashing server used to be a full hard drive.
However, we were seeing rapid growth in Brazil and Australia, and CloudFront didn’t have any edge servers there (it now has one in São Paulo). So we still had a latency problem in those countries. Eventually that made us switch to CDNetworks, which has 100 edge locations worldwide on six continents.