
A Year Without a Byte

One of the largest cost drivers in running a service like Flickr is storage. We’ve described multiple techniques to get this cost down over the years: use of COS, creating sizes dynamically on GPUs and perceptual compression. These projects have been very successful, but our storage cost is still significant.
At the beginning of 2016, we challenged ourselves to go further — to go a full year without needing new storage hardware. Using multiple techniques, we got there.

The Cost Story

A little back-of-the-envelope math shows storage costs are a real concern. On a very high-traffic day, Flickr users upload as many as twenty-five million photos. These photos require an average of 3.25 megabytes of storage each, totalling over 80 terabytes of data. Stored naively in a cloud service similar to S3, this day’s worth of data would cost over $30,000 per year, and continue to incur costs every year.

And a very large service will have over two hundred million active users. At a thousand images each, storage in a service similar to S3 would cost over $250 million per year (or $1.25 / user-year) plus network and other expenses. This compounds as new users sign up and existing users continue to take photos at an accelerating rate. Thankfully, our costs, and every large service’s costs, are different than storing naively at S3, but remain significant.
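
To make the back-of-the-envelope numbers above concrete, here is a rough sketch of the arithmetic in Python. It assumes the $0.03 per gigabyte-month S3-style price mentioned below and ignores replication, network, and request costs, so the totals are illustrative rather than exact.

# Rough storage cost arithmetic (illustrative; assumes a flat $0.03/GB-month
# S3-style price and ignores replication, network and request costs).
PRICE_PER_GB_MONTH = 0.03
MB_PER_PHOTO = 3.25

def yearly_cost_usd(num_photos, mb_per_photo=MB_PER_PHOTO):
    gigabytes = num_photos * mb_per_photo / 1000.0
    return gigabytes * PRICE_PER_GB_MONTH * 12

# One high-traffic day: twenty-five million uploads.
print(yearly_cost_usd(25_000_000))            # roughly $29,000 per year, every year

# A large service: two hundred million users with a thousand photos each.
print(yearly_cost_usd(200_000_000 * 1_000))   # on the order of $230 million per year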



Cost per byte has decreased, but bytes per image from iPhone-type platforms have increased. Cost per image hasn’t changed significantly.

Storage costs do drop over time. For example, S3 costs dropped from $0.15 per gigabyte-month in 2009 to $0.03 per gigabyte-month in 2014, and cloud storage vendors have added low-cost options for data that is infrequently accessed. NAS vendors have also delivered large price reductions.

Unfortunately, these lower costs per byte are counteracted by other forces. On iPhones, increasing camera resolution, burst mode and the addition of short animations (Live Photos) have increased bytes-per-image rapidly enough to keep storage cost per image roughly constant. And iPhone images are far from the largest.

In response to these costs, photo storage services have pursued a variety of product options. To name a few: storing lower quality images or re-compressing, charging users for their data usage, incorporating advertising, selling associated products such as prints, and tying storage to purchases of handsets.

There are also a number of engineering approaches to controlling storage costs. We sketched out a few and cover three that we implemented below: adjusting thresholds on our storage systems, rolling out existing savings approaches to more images, and deploying lossless JPG compression.

Adjusting Storage Thresholds

As we dug into the problem, we looked at our storage systems in detail. We discovered that our settings were based on assumptions about high write and delete loads that didn’t hold. Our storage is pretty static. Users only rarely delete or change images once uploaded. We also had two distinct areas of just-in-case space. 5% of our storage was reserved space for snapshots, useful for undoing accidental deletes or writes, and 8.5% was held free in reserve. This resulted in about 13% of our storage going unused. Trade lore states that disks should remain 10% free to avoid performance degradation, but we found 5% to be sufficient for our workload. So we combined our two just-in-case areas into one and reduced our free-space threshold to 5%. This was our simplest approach to the problem (by far), but it resulted in a large gain. With a couple of simple configuration changes, we freed up more than 8% of our storage.
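
The arithmetic behind that gain is simple enough to sketch. The percentages below come from the text above; the absolute capacity is invented purely for illustration.

# Free-space math for the threshold change above. The 35 PB total is a
# made-up illustration; only the percentages are from the post.
TOTAL_CAPACITY_PB = 35.0

before = 0.05 + 0.085      # 5% snapshot reserve + 8.5% free-space reserve
after = 0.05               # single combined just-in-case area

freed = before - after
print(f"freed {freed:.1%} of capacity, about {freed * TOTAL_CAPACITY_PB:.1f} PB "
      f"on a {TOTAL_CAPACITY_PB:.0f} PB footprint")
# freed 8.5% of capacity, about 3.0 PB on a 35 PB footprint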



Adjusting storage thresholds

Extending Existing Approaches

In our earlier posts, we have described dynamic generation of thumbnail sizes and perceptual compression. Combining the two approaches decreased thumbnail storage requirements by 65%, though we hadn’t applied these techniques to many of our images uploaded prior to 2014. One big reason for this: large-scale changes to older files are inherently risky, and require significant time and engineering work to do safely.

Because we were concerned that further rollout of dynamic thumbnail generation would place a heavy load on our resizing infrastructure, we targeted only thumbnails from less-popular images for deletes. Using this approach, we were able to handle our complete resize load with just four GPUs. The process put a heavy load on our storage systems; to minimize the impact we randomized our operations across volumes. The entire process took about four months, resulting in even more significant gains than our storage threshold adjustments.



Decreasing the number of thumbnail sizes

Lossless JPG Compression

Flickr has had a long-standing commitment to keeping uploaded images byte-for-byte intact. This has placed a floor on how much storage reduction we can do, but there are tools that can losslessly compress JPG images. Two well-known options are PackJPG and Lepton, from Dropbox. These tools work by decoding the JPG, then very carefully compressing it using a more efficient approach. This typically shrinks a JPG by about 22%. At Flickr’s scale, this is significant. The downside is that these re-compressors use a lot of CPU. PackJPG compresses at about 2MB/s on a single core, or about fifteen core-years for a single petabyte’s worth of JPGs. Lepton uses multiple cores and, at 15MB/s, is much faster than PackJPG, but uses roughly the same amount of CPU time.

This CPU requirement also complicated on-demand serving. If we recompressed all the images on Flickr, we would need potentially thousands of cores to handle our decompress load. We considered putting some restrictions on access to compressed images, such as requiring users to login to access original images, but ultimately found that if we targeted only rarely accessed private images, decompressions would occur only infrequently. Additionally, restricting the maximum size of images we compressed limited our CPU time per decompress. We rolled this out as a component of our existing serving stack without requiring any additional CPUs, and with only minor impact to user experience.

Running our users’ original photos through lossless compression was probably our highest-risk approach. We can recreate thumbnails easily, but a corrupted source image cannot be recovered. Key to our approach was a re-compress-decompress-verify strategy: every recompressed image was decompressed and compared to its source before removing the uncompressed source image.
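
A minimal sketch of that recompress-decompress-verify loop is below. The compress and decompress commands are hypothetical placeholders for a lossless recompressor such as PackJPG or Lepton (exact binaries and flags vary by tool), and the real rollout ran inside our storage pipeline rather than a standalone script.

# Sketch of a re-compress / decompress / verify pass over a single JPG.
# COMPRESS_CMD and DECOMPRESS_CMD are placeholders for a lossless JPG
# recompressor such as PackJPG or Lepton; exact tools and flags vary.
import filecmp
import os
import subprocess
import tempfile

COMPRESS_CMD = ["recompress_jpg"]      # hypothetical: jpg in, compressed file out
DECOMPRESS_CMD = ["decompress_jpg"]    # hypothetical: compressed file in, jpg out

def safely_recompress(jpg_path):
    compressed_path = jpg_path + ".cmp"
    subprocess.run(COMPRESS_CMD + [jpg_path, compressed_path], check=True)

    # Round-trip the compressed file and compare byte-for-byte with the source.
    with tempfile.NamedTemporaryFile(suffix=".jpg") as roundtrip:
        subprocess.run(DECOMPRESS_CMD + [compressed_path, roundtrip.name], check=True)
        if not filecmp.cmp(jpg_path, roundtrip.name, shallow=False):
            os.remove(compressed_path)
            raise ValueError(f"round-trip mismatch, keeping original: {jpg_path}")

    # Only after a successful byte-for-byte verify do we drop the original.
    os.remove(jpg_path)
    return compressed_path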

This is still a work in progress. We have compressed many images, but compressing our entire corpus is a lengthy process, and we had already reached our zero-new-storage-gear goal by mid-year.

On The Drawing Board

We have several other ideas which we’ve investigated but haven’t implemented yet.

In our current storage model, we have originals and thumbnails available for every image, each stored in two datacenters. This model assumes that the images need to be viewable relatively quickly at any point in time. But private images belonging to accounts that have been inactive for more than a few months are unlikely to be accessed. We could “freeze” these images, dropping their thumbnails and recreating them when the dormant user returns. This “thaw” process would take under thirty seconds for a typical account. Additionally, for photos that are private (but not dormant), we could go to a single uncompressed copy of each thumbnail, storing a compressed copy in a second datacenter that would be decompressed as needed.

We might not even need two copies of each dormant original image available on disk. We’ve pencilled out a model where we place one copy on a slower, but underutilized, tape-based system while leaving the other on disk. This would decrease availability during an outage, but as these images belong to dormant users, the effect would be minimal and users would still see their thumbnails. The delicate piece here is the placement of data, as seeks on tape systems are prohibitively slow. Depending on the details of what constitutes a “dormant” photo these techniques could comfortably reduce storage used by over 25%.

We’ve also looked into de-duplication, but we found our duplicate rate is in the 3% range. Users do have many duplicates of their own images on their devices, but these are excluded by our upload tools.  We’ve also looked into using alternate image formats for our thumbnail storage.    WebP can be much more compact than ordinary JPG but our use of perceptual compression gets us close to WebP byte size and permits much faster resize.  The BPG project proposes a dramatically smaller, H.265 based encoding but has IP and other issues.

There are several similar optimizations available for videos. Although Flickr is primarily image-focused, videos are typically much larger than images and consume considerably more storage.

Conclusion



Optimization over several releases

Since 2013 we’ve optimized our usage of storage by nearly 50%.  Our latest efforts helped us get through 2016 without purchasing any additional storage,  and we still have a few more options available.

Peter Norby, Teja Komma, Shijo Joy and Bei Wu formed the core team for our zero-storage-budget project. Many others assisted the effort.

Perceptual Image Compression at Flickr

Archie Russell, Peter Norby, Saeideh Bakhshi

At Flickr our users really care about image quality.  They also care a lot about how responsive our apps are.  Addressing both of these concerns simultaneously is challenging;  higher quality images have larger file sizes and are slower to transfer.   Slow transfers are especially noticeable on mobile devices.   Flickr had historically aimed for high quality at the expense of larger files, but in late 2014 we implemented a method to both maintain image quality and decrease the byte-size of the images we serve to users.   As image appearance is very important to our users,  we performed an extensive user test before rolling this change out.   Here’s how we did it.

Background:  JPEG Quality Settings

Fig 1.    JPEG settings vs file size for a test image.

JPEG compression has several tuneable knobs. The q-value is the best known of these; it controls how much spatial detail is preserved, and a higher q-value typically keeps more detail. However, as the q-value gets very close to 100, file size increases dramatically, usually without improving image appearance.

If file size and app performance aren’t an issue, dialing up the q-value is an easy way to get really nice-looking images; this is what Flickr has done in the past. And if appearance isn’t very important, dialing down the q-value is a viable option. But if you want both, you’re kind of stuck. Additionally, q-value isn’t one-size-fits-all: some images look great at q-value 80 while others don’t.

Another commonly adjusted setting is chroma subsampling, which alters the amount of color information stored in a JPEG file. With a setting of 4:4:4, the two chroma (color) channels in a JPG have as much information as the luminance channel. In an image with a setting of 4:2:0, each chroma channel has only a quarter as much information as in a 4:4:4 image.

Upper left: q=96, chroma=4:4:4 (125KB)    Upper right: q=70, chroma=4:4:4 (67KB)
Lower left: q=96, chroma=4:2:0 (62KB)     Lower right: q=70, chroma=4:2:0 (62KB)

Table 1:   JPEG stored at different quality and chroma levels.   The upper left image is saved at high quality and chroma level; notice the color and detail in the folds of the red flag.   The lower right image has the lowest quality;  notice artifacts along the right edges of the red flag.
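
The variants in Table 1 are easy to reproduce with an off-the-shelf encoder. Here is a small sketch using the Pillow library, which is not the encoder Flickr uses in production; the test image path is a placeholder.

# Re-encode a test image at the quality and chroma settings shown in Table 1.
# Pillow is used for illustration only; it is not Flickr's production encoder.
import os
from PIL import Image

img = Image.open("test_image.jpg").convert("RGB")   # placeholder path

for quality in (96, 70):
    for subsampling in ("4:4:4", "4:2:0"):
        out = f"q{quality}_{subsampling.replace(':', '')}.jpg"
        img.save(out, "JPEG", quality=quality, subsampling=subsampling)
        print(out, os.path.getsize(out) // 1024, "KB")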

Perceptual JPEG Compression

Ideally we’d have an algorithm which automatically tuned all JPEG parameters to make a file smaller, but which would limit perceptible changes to the image.  Technology exists that attempts to do this and can decrease image file size by 30-50%. This compression ratio is highly dependent on image content and dimensions.

compressed: 112KB non-compressed: 224KB

Fig 2. Compressed cropped JPEG is 50% smaller than not-compressed cropped JPEG, above, with no obvious defects.  Compression ratio is similar for a compressed 2048-pixel wide JPEG (475KB) of the entire scene and its corresponding not-compressed JPEG (897KB). 

We were pleased with perceptually compressed images in non-structured examinations. The compressed images were smaller and nearly indistinguishable from their sources. But we wanted to really quantify how well the technology worked before considering incorporating it into Flickr. The standard computational tools for evaluating compression, such as SSIM, are fairly simplistic and don’t do a great job at modeling how a user sees things. To really evaluate this technology, we had to use a better measure of perceptibility: human minds.

The Gamified Taste Test

To test whether our image compression would impact user perception of image quality, we put together a “taste test.” The taste test is constructed as a game with multiple rounds where users look at both compressed and uncompressed images. Users accumulate points the longer they play, and get more points for doing well at the game. We maintained a leaderboard to encourage participation and used only internal testers. The game’s test images came from a diverse collection of 250 images contributed by Flickr staff. The images came from a variety of cameras and included a range of subjects from photographers with varying skill levels.

Fig 3. A sampling of images used in our taste test.

In each round, our test code randomly selects a test image and presents two variants of it side by side. 50% of the time we present the user two identical images; the rest of the time we present one compressed image and one uncompressed image. We ask the tester whether the two images look the same or different; we’d expect a user choosing randomly, or a user unable to distinguish the two cases, to answer correctly about half the time. We randomly swap the location of the compressed image to compensate for any bias toward the left or the right. If testers choose correctly, they are presented with a second question: “Which image did you prefer, and why?”

Fig 4. Screenshot of taste test.

Our test displays images simultaneously to prevent testers noticing a longer load time for the larger, non-compressed image. The images are presented with either 320, 640, or 1600 pixels on their longest side. The 320px and 640px images are shown for 12 seconds before being dimmed out. The intent behind this detail is to represent how real users interact with our images. The 1600px images stay on screen for 20 seconds, as we expect larger images to be viewed for longer periods of time by real users. We award 100 points per round, regardless of whether a tester chose correctly, and award a bonus of 400 points when a tester correctly identifies whether the images were identical or different. We update the tester’s score every five tests so that the user perceives an increasing score without being rewarded immediately for any particular behavior.
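
A condensed sketch of the round logic described above is shown below; the data structures and helper names are assumptions, since the real test was a timed web app with a leaderboard.

# Condensed sketch of one taste-test round (hypothetical data structures;
# the real test was a web app with timed display and a leaderboard).
import random

def build_round(test_images):
    """Pick an image and decide what the tester will see this round."""
    image = random.choice(test_images)      # dict with 'original' and 'compressed' paths
    identical = random.random() < 0.5       # 50% of rounds show two identical images

    if identical:
        left, right = image["original"], image["original"]
    else:
        left, right = image["original"], image["compressed"]
        if random.random() < 0.5:           # randomize which side is compressed
            left, right = right, left

    return {"left": left, "right": right, "identical": identical}

def score_round(round_info, tester_said_identical):
    """100 points for playing, plus a 400-point bonus for a correct call."""
    correct = (tester_said_identical == round_info["identical"])
    return 100 + (400 if correct else 0)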

Taste Test Outcome and Deployment

We ran our taste test for two weeks and analyzed our results. Although we let users play as long as they liked, we skipped the first result per user as a “warm-up” and considered only the subsequent ten results; this limited the potential for users to train themselves to spot compression artifacts. We disregarded users that had fewer than eleven results.

Images shown                          Total results    # labeled “identical” by tester    % labeled “identical” by tester
Two identical images                  368              253                                68.8%
One compressed, one non-compressed    352              238                                67.6%

Table 2.   Taste test results.   Testers select “identical” at nearly the same rate, whether the input is identical or not.

When our testers were presented with two identical images, they thought the images were identical only 68.8% of the time(!), and when presented with a compressed image next to a non-compressed image, our testers thought the images were identical slightly less often: 67.6% of the time. This difference was small enough for us, and our statisticians confirmed it was not statistically significant. Our image pairs were so similar that multiple testers thought all images were identical and reported that the test system was buggy. We inspected the images most often labeled different, and found no significant artifacts in the compressed versions.
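
For the curious, the counts in Table 2 can be checked with a standard two-proportion z-test. This sketch uses plain Python; it is a stand-in for, not a record of, the analysis our statisticians actually performed.

# Two-proportion z-test on the Table 2 counts: 253/368 "identical" calls on
# identical pairs vs. 238/352 on compressed-vs-original pairs.
from math import erf, sqrt

def two_proportion_z(successes1, n1, successes2, n2):
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided
    return z, p_value

z, p = two_proportion_z(253, 368, 238, 352)
print(f"z = {z:.2f}, p = {p:.2f}")   # z is about 0.3, p about 0.7: no significant difference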

So even in this side-by-side test, perceptual image compression is just barely noticeable. As the Flickr website wouldn’t ever show compressed and uncompressed images at the same time, and the use of compression had large benefits in storage footprint and site performance, we elected to go forward.

At the beginning of 2014 we silently rolled out perceptual-based compression on our image thumbnails (we don’t alter the “original” images uploaded by our users).  The slight changes to image appearance went unnoticed by users, but user interactions with Flickr became much faster,  especially for users with slow connections, while our storage footprint became much smaller.  This was a best-case scenario for us.

Evaluating perceptual compression was a considerable task, but it gave us the confidence we needed to apply this compression in production to our users. This marked the first time Flickr had adjusted image settings in years, and it was fun.
High Score List
Fig 5.  Taste test high score list

Epilogue

After eighteen months of perceptual compression at Flickr, we adjusted our settings slightly to shrink images an additional 15%. For our users on mobile devices, 15% fewer bytes per image makes for a much more responsive experience. We ran a taste test on this newer setting, and users were able to spot our compression slightly more often than with our original settings. When presented with a pair of identical images, our testers declared the images identical 65.2% of the time; when presented with different images, our testers declared the images identical 62% of the time. It wasn’t as imperceptible as our original approach, but we decided it was close enough to roll out.

Boy were we wrong! A few very vocal users spotted the compression and didn’t like it at all. The Flickr Help Forum had a very lively thread which Petapixel picked up. We beat our heads against the wall, considered our options, and came up with a middle path between our initial and follow-on approaches, giving us smaller, faster-to-load files while still maintaining the appearance our users expect.

Through our use of perceptual compression,  combined with our use of on-the-fly resize and COS,  we’ve been able to decrease our storage footprint dramatically, while simultaneously improving user experience. It’s a win all around but we’re not done yet — we still have a few tricks up our sleeves.

Real-time Resizing of Flickr Images Using GPUs

At Flickr we work with a huge number of photos. Our users upload over 27 million photos a day, and our total collection has over 12 billion photos. This is fantastic! As usage grows, we are always looking for ways to use our storage more efficiently. Recently our storage team wrote about some new commodity storage technology now in use at Flickr which increases efficiency. But we also looked into how much data we store for each photo. In the past we stored many sizes of every photo to make serving fast. We wanted to challenge that model and find the minimal set of data to store.

Thumbnail Footprint Reduction

One of our biggest opportunities for byte-per-photo improvement is through reduction in the footprint of Flickr’s “thumbnails”. Thumbnail is a bit of a misnomer at Flickr; our thumbnails are as large as 2048 pixels on their longest side, so we usually refer to them as resizes. We create these resizes in order to provide a consistent, fast experience for our users over a variety of use cases.



Different sizes used in different contexts. From left to right: Cameraroll uses small thumbnails, to enable fast navigation through many images. Our Photo Page uses our largest, most detailed sizes. Search uses sizes in between these two extremes. Red panda photos by Mathias Appel.

The selection of sizes has grown semi-organically over the years, and all told, we serve eleven different resizes per photo which, in sum, use nearly as much storage as the original photo. Almost 90% of this storage is held in the handful of resizes 640px and larger, so we targeted our efforts at eliminating some of these sizes.



Left: Distribution of byte size by resize dimension. Storage is concentrated in images with largest dimensions. Right: size distribution after largest sizes eliminated.

A Few Approaches

A simple approach to this problem would be just to cease offering some of the larger sizes.    For instance, we could drop the 1600px image from our API and require the design to adjust.   However, this requires compromises that we didn’t want to take on. Instead we took on a pretty ambitious goal: maintain our largest resize, usually 2048px wide, as a source image and create any other moderate or large-sized resizes on-the-fly from this source, without sacrificing image quality or significantly affecting performance. Using the original uploaded photo as a resize source image was impractical, as these can be very large and exist in a variety of formats.

Sounds easy, right? We already resize images when users upload, so why not just use that same technology for serving? Well, almost. The problem with the naive approach is that high-quality resizing of JPEGs is a lot slower than is widely known. A tool we use frequently, GraphicsMagick, produces beautiful images but takes over 225ms to resize a 2048px JPEG down to 1600px, depending on quality settings. This is slow enough that this method would impact user experience, and would require many CPUs to handle our load. Ymagine, a high-performance CPU-based tool we’ve open sourced, is twice as fast as GraphicsMagick(!). We use Ymagine extensively on smaller images, but for the large sizes we’re targeting we needed even more performance. A GPU-based solution ultimately filled our needs.

Our GPU-based Solution

We created a tier of dedicated resize servers, each with a GPU co-processor. Each of these boards has two GPUs, each with 1500+ “cores”, running at just under 1GHz. These cores aren’t anywhere near as performant as a CPU core, but there are many of them. We tested a range of server-grade boards to find the best-performing type for our workload. Many manufacturers offer consumer-grade boards with incredible specifications and lower price points, but these lack server-grade cooling and other features such as ECC RAM. One member of our team had experience using these lower-grade boards in a previous application and recommended against it.



Resize system architecture

On these resize servers we run a fairly vanilla Apache with a plugin written in C++. This server responds to resize requests, reads our source image from disk into shared memory, and hands requests off to persistent resize daemons that do all communication with our GPUs. A daemon-type approach is necessary due to a somewhat lengthy initialization process with our GPUs.

Our resize daemons transfer JPEGs from shared memory to GPU device memory. Once here,  the real image processing takes place. The JPEGs are decoded, cropped, sharpened, resized, re-sharpened as needed, re-encoded as JPEGs, and finally transferred back to shared memory.    From shared memory, our Apache module returns the resized JPEG to the caller.



A simple resize pipeline. Post-sharpening overcomes fuzziness introduced when downscaling.

There are several accepted resize algorithms, but to retain the Flickr “look”, we implemented in CUDA the same Lanczos resize and kernel sharpening algorithms that we’ve used for years. This had the added benefit of allowing us to directly compare images generated through GraphicsMagick and our GPU-based code.
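
As a rough CPU-side illustration of that pipeline shape (decode, Lanczos downscale, post-sharpen, re-encode), here is a sketch using Pillow. Our production path runs equivalent steps in CUDA with our own tuned kernels, so the filter parameters below are placeholders, not our settings.

# CPU reference for the resize pipeline shape: decode, Lanczos downscale,
# post-sharpen, re-encode. Filter parameters are illustrative only; the
# production version of these steps runs in CUDA.
from PIL import Image, ImageFilter

def resize_and_sharpen(src_path, dst_path, target_long_edge=1600, quality=85):
    img = Image.open(src_path).convert("RGB")

    # Scale so the longest side equals target_long_edge.
    scale = target_long_edge / max(img.size)
    new_size = (round(img.width * scale), round(img.height * scale))
    resized = img.resize(new_size, Image.LANCZOS)

    # Post-sharpening to overcome fuzziness introduced by downscaling.
    sharpened = resized.filter(ImageFilter.UnsharpMask(radius=1, percent=60, threshold=2))

    sharpened.save(dst_path, "JPEG", quality=quality)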

Performance

With significant optimization, this code is able to resize our 2048px JPEGs to 1600px in under 16ms. This is more than 15x faster than GraphicsMagick and nearly 10x faster than Ymagine.  Resizes from 2048px to 640px take under 10ms. Equally noteworthy,  at peak load,  each resize server can perform over 300 resizes per second.



Performance of different resize approaches.

Although these timings are quite fast, the source image for our resizes is larger, byte-wise, than the images we resize it to, requiring additional I/O. For example, a typical 2048px source JPEG is roughly 600kB, while our typical 1024px JPEGs are just under 200kB. This difference in size leads to roughly 35ms of additional I/O time per resize.

Taking it slow

As our GPU code is new and images are our most important product, this change carries some risk. We’ve addressed this with extensive testing, progressive rollout and provisions for rollback. We also used some insights into our user behavior to roll this solution out in a very controlled manner.

Conclusion

This system is currently in production and as we roll it out more fully, has the potential to cut the resize footprint of the majority of our photos by 50%, with negligible impact on performance and image appearance. We also have the ability to apply this same footprint reduction technique to images uploaded in the past, which has the potential to reduce our storage growth to zero for a significant period of time.

Credits

This project would not have been possible without the hard work of Peter Norby, Tague Griffith, John Ko and many others.

Performance improvements for photo serving

We’ve been working to make Flickr faster for our users around the world. Since the primary photo storage locations are in the US, and information on the internet travels at a finite speed, the farther away a Flickr user is located from the US, the slower Flickr’s response time will be. Recently, we looked at opportunities to improve this situation. One of the improvements involves keeping temporary copies of recently viewed photos in locations nearer to users.  The other improvement aims to get a benefit from these caches even when a user views a photo that is not already in the cache.

Regional Photo Caches

For a few years, we’ve deployed regional photo caches located in Switzerland and Singapore. Here’s how this works. When one of our users in Vietnam requests a photo, we copy it temporarily to Singapore. When a second user requests the same photo, from, say, Kuala Lumpur, the photo is already present in Singapore. Flickr can respond much faster using this copy (only a few hundred kilometers away) instead of using the original file back in the US (over 8,000 km away).

The first piece of our solution has been to create additional caches closer to our users. We expanded our regional cache footprint around two months ago. Our Australian users, among others, should now see dramatically faster load times: the average image now loads about twice as fast as it did in March.

We’re happy with this improvement and we’re planning to add more regional caches over the next several months to help users in other regions.

Cache Prefetch

When users in locations far from the US view photos that are already in the cache, the speedup can be up to 10x, but only for the second and subsequent viewers. The first viewer still has to wait for the file to travel all the way from the US. This is important because there are so many photos on Flickr that are viewed infrequently. It’s likely that a given photo will not be present in the cache. One example is a user looking at their Auto Upload album. Auto uploaded photos are all private initially. Scrolling through this album, it’s likely that very few of the photos will be in their regional cache, since no other users would have been able to see them yet.

It turns out that we can even help the first viewer of a photo using a trick called cache warming.

To understand how cache warming works, you need to understand a bit about how we serve images. For example, say that I’m a user in Spain trying to access the photostream of a user, Martin Brock, in the US. When my request for Martin Brock’s Photostream at https://www.flickr.com/photos/martinbrock/ hits our backend servers, our code quickly determines the most recent photos Martin has uploaded that are visible to me, which sizes will fit best in my browser, and the URLs of those images. It then sends me the list of those URLs in an HTML response. The user’s web browser reads the HTML, finds the image URLs and starts loading them from the closest regional cache.


Standard image fetch

So you’re probably already guessing how to speed things up.  The trick is to take advantage of the time in between when the server knows which images will be needed and the time when the browser starts loading them from the closest cache. This period of time can be in the range of hundreds of milliseconds. We saw an opportunity during this time to send the needed images over to the viewer’s regional cache in advance of their browser requesting the images. If we can “win the race” to do this, the viewer’s experience will be much faster, since images will load from the local cache instead of loading from the US.

To take advantage of this opportunity, we created a new “cache warming” process called The Warmer. Once we’ve determined which images will be requested (the first few photos in Martin’s photostream) we send  a message from the API servers to The Warmer.

The Warmer listens for messages and, based on the user’s location, it determines from which of the Flickr regional caches the user will likely request the image. It then pushes the image out to this cache.



Optimized image fetch, with cache warming path indicated in red

Getting this to work well required a few optimizations.

Persistent connections

Yahoo encrypts all traffic between our data centers. This is great for security, but the time to set up a secure connection can be considerable. In our first iteration of The Warmer, this setup time was so long that we rarely got the photo to the cache in time to benefit a user. To eliminate this cost, we used an Nginx proxy which maintains persistent connections to our remote data centers. When we need to push an image out, a secure connection is already set up and waiting to be used.

Transport layer

The next optimization we made helped us reduce the cost of sending messages to The Warmer.  Since the data we’re sending always fits in one datagram, and we also don’t care too much if a small percentage of these messages are never received, we don’t need any of the socket and connection features of TCP. So instead of using HTTP, we created a simple JSON format for sending messages using UDP datagrams. Another reason we chose to use UDP is that if The Warmer is not available or is reacting slowly, we don’t want that to cause slowdowns in the API.
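
A sketch of what the API-side send might look like is below. The host, port, and exact message format are assumptions; the field names mirror the pseudocode later in this post.

# Fire-and-forget notification to The Warmer over UDP. Host, port and the
# sent_at field are assumptions for illustration; 'jpg' and 'http_endpoint'
# mirror the Cache Warmer pseudocode below.
import json
import socket
import time

WARMER_ADDR = ("warmer.example.internal", 9999)   # hypothetical host and port
_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def notify_warmer(jpg_path, http_endpoint):
    """Best effort only: the API never blocks or retries on this path."""
    message = json.dumps({
        "jpg": jpg_path,
        "http_endpoint": http_endpoint,
        "sent_at": time.time(),   # lets The Warmer drop stale requests
    })
    try:
        _sock.sendto(message.encode("utf-8"), WARMER_ADDR)
    except OSError:
        pass   # dropping a warm-up message is acceptable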

Queue management

Naturally, some images are quite popular and it would waste resources to push them to the same cache repeatedly. So, the third optimization we applied was to maintain a list of recently pushed images in The Warmer. This simple de-duplication cut the number of requests made by The Warmer by 60%. Similarly, The Warmer drops any incoming requests that are more than fifty milliseconds old. This “time-to-live” provides a safety valve in case The Warmer has fallen behind and can’t catch up.


import json
import os
import socket
from multiprocessing.pool import ThreadPool

# determine_colo_to_warm, recently_warmed and request_too_old are defined
# elsewhere in the real service; values marked "illustrative" are not
# Flickr's actual configuration.
keepalive_proxy = "localhost:8080"   # Nginx proxy holding persistent connections (illustrative)
WARMER_PORT = 9999                   # UDP port The Warmer listens on (illustrative)

def warm_up_url(params):
    requested_jpg = params['jpg']
    colo_to_warm = params['colo_to_warm']
    # Fetch the image through the keepalive proxy; the Host header selects
    # the regional cache, which stores a copy as a side effect.
    curl = "curl -H 'Host: " + colo_to_warm + "' '" + keepalive_proxy + "/" + requested_jpg + "'"
    os.system(curl)

if __name__ == '__main__':
    # create the worker pool
    worker_pool = ThreadPool(processes=100)

    # listen for warm-up requests from the API servers
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('', WARMER_PORT))

    while True:
        # receive requests
        json_data, addr = sock.recvfrom(2048)
        params = json.loads(json_data)

        requested_jpg = params['jpg']
        colo_to_warm = determine_colo_to_warm(params['http_endpoint'])

        # skip images we pushed recently and requests too stale to help
        if recently_warmed(colo_to_warm, requested_jpg):
            continue
        if request_too_old(params):
            continue

        # warm up urls
        params['colo_to_warm'] = colo_to_warm
        warm_result = worker_pool.apply_async(warm_up_url, (params,))
Cache Warmer pseudocode
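
For completeness, here is one possible shape for the de-duplication and time-to-live checks the pseudocode calls. The data structures, the TTL on the recently-pushed list, and the sent_at timestamp field are assumptions for illustration.

# Possible shapes for the helpers referenced above. The recently-pushed map,
# its TTL, and the 'sent_at' field are assumptions, not Flickr's actual code.
import time

RECENT_TTL_SECONDS = 60          # how long a pushed image counts as "recent" (assumed)
MAX_REQUEST_AGE = 0.050          # drop requests older than fifty milliseconds

_recently_pushed = {}            # (colo, jpg) -> time we last pushed it

def recently_warmed(colo_to_warm, requested_jpg):
    """True if we already pushed this image to this cache recently."""
    key = (colo_to_warm, requested_jpg)
    pushed_at = _recently_pushed.get(key)
    if pushed_at is not None and time.time() - pushed_at < RECENT_TTL_SECONDS:
        return True
    _recently_pushed[key] = time.time()
    return False

def request_too_old(params):
    """True if the request has been waiting for more than 50 ms."""
    return time.time() - params.get("sent_at", time.time()) > MAX_REQUEST_AGE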

Java

Our initial implementation of the Warmer was in Python, using a ThreadPool. This allowed very rapid prototyping and worked great — up to a point. Profiling the Python code, we found a large portion of time spent in socket calls. Since there is so little code in The Warmer, we tried porting to Java. A nearly line-for-line translation resulted in a greater than 10x increase in capacity.

Results

When we began this process, we weren’t sure whether The Warmer would be able to populate caches before the user requests came in. We were pleasantly surprised when we first enabled it at scale. In the first region where we’ve deployed The Warmer (Western Europe), we observed a reduction in median latency of more than 200 ms, 95% of photo requests sped up by at least 100 ms, and for a small percentage of photos we saw over 400 ms reduction in latency. As we continue to deploy The Warmer in additional regions, we expect to see similar improvements.

Next Steps

In addition to deploying more regional photo caches and continuing to improve prefetching performance, we’re looking at a few more techniques to make photos load faster.

Compression

Overall Flickr uses a light touch on compression. This results in excellent image quality at the cost of relatively large file sizes. This translates directly into longer load times for users. With a growing number of our users connecting to Flickr with wireless devices, we want to make sure we can give users a good experience regardless of whether they have a high-speed LTE connection or two-bars of 3G in the countryside. An important goal will be to make these changes with little or no loss in image quality.

We are also testing alternative image encoding formats (like WebP). Under certain conditions, WebP compression may offer better image quality than JPEG can achieve at the same compression ratio.

Geolocation and routing

It turns out it’s not straightforward to know which photo cache is going to give the best performance for a user. It depends on a lot of factors, many of which change over time — sometimes suddenly. We think the best way to do this is with a system that adapts dynamically to “Internet weather.”

Cache intelligence

Today, if a user needs to see a medium sized version of an image, and that version is not already present in the cache, the user will need to wait to retrieve the image from the US, even if a larger version of the image is already in the cache. In this case, there is an opportunity to create the smaller version at the cache layer and avoid the round-trip to the US.

Overall we’re happy with these improvements and we’re excited about the additional opportunities we have to continue to make the Flickr experience super fast for our users. Thanks for following along.

Flickr flamily floto

Like what you’ve read and want to make the jump with us? We’re hiring engineers, designers and product managers in our San Francisco office. Find out more at flickr.com/jobs.