Real-time Resizing of Flickr Images Using GPUs

At Flickr we work with a huge number of photos. Our users upload over 27 million photos a day, and our total collection has over 12 billion photos. This is fantastic! As usage grows, we are always looking for ways to use our storage more efficiently. Recently our storage team wrote about some new commodity storage technology now in use at Flickr which increases efficiency. But we also looked into how much data we store for each photo. In the past we stored many sizes of every photo to make serving fast. We wanted to challenge that model and find the minimal set of data to store.

Thumbnail Footprint Reduction

One of our biggest opportunities for byte-per-photo improvement is through reduction in the footprint of Flickr’s “thumbnails”. Thumbnail is a bit of a misnomer at Flickr; our thumbnails are as large as 2048 pixels on their longest side, so at Flickr we usually refer to these as resizes. We create these resizes in order to provide a consistent, fast experience for our users over a variety of use cases.



Different sizes used in different contexts. From left to right: Camera Roll uses small thumbnails to enable fast navigation through many photos. Our Photo Page uses our largest, most detailed sizes. Search uses sizes in between these two extremes. Red panda photos by Mathias Appel.

The selection of sizes has grown semi-organically over the years, and all told, we serve eleven different resizes per photo which, in sum, use nearly as much storage as the original photo. Almost 90% of this storage is held in the handful of resizes 640px and larger, so we targeted our efforts at eliminating some of these sizes.



Left: Distribution of byte size by resize dimension. Storage is concentrated in images with largest dimensions. Right: size distribution after largest sizes eliminated.

A Few Approaches

A simple approach to this problem would be just to stop offering some of the larger sizes. For instance, we could drop the 1600px image from our API and require the design to adjust. However, this would have required compromises we didn’t want to make. Instead we set a pretty ambitious goal: maintain our largest resize, usually 2048px wide, as a source image and create any other moderate or large-sized resize on the fly from this source, without sacrificing image quality or significantly affecting performance. Using the original uploaded photo as the resize source was impractical, as originals can be very large and exist in a variety of formats.

Sounds easy, right? We already resize images when users upload, so why not just use that same technology when serving? Well, almost. The problem with the naive approach is that high-quality resizing of JPEGs is a lot slower than is widely known. A tool we use frequently, GraphicsMagick, produces beautiful images but takes over 225ms to resize a 2048px JPEG down to 1600px, depending on quality settings. This is slow enough that it would impact user experience and would require many CPUs to handle our load. Ymagine, a high-performance CPU-based tool we’ve open sourced, is twice as fast as GraphicsMagick(!). We use Ymagine extensively on smaller images, but for the large sizes we’re targeting we needed even more performance. A GPU-based solution ultimately filled our needs.

Our GPU-based Solution

We created a tier of dedicated resize servers, each with a GPU co-processor. Each of these boards has two GPUs, each with 1500+ “cores”, running at just under 1GHz. These cores aren’t anywhere near as performant as a CPU core, but there are many of them. We tested a range of server-grade boards to find the best-performing type for our workload. Many manufacturers offer consumer-grade boards with incredible specifications and lower price points, but these lack server-grade cooling and other features such as ECC RAM. One member of our team had experience using these lower-grade boards in a previous application and recommended against it.



Resize system architecture

On these resize servers we run a fairly vanilla Apache with a plugin written in C++. This server responds to resize requests, reads our source image from disk into shared memory, and hands requests off to persistent resize daemons that do all communication with our GPUs. A daemon-type approach is necessary due to a somewhat lengthy initialization process with our GPUs.

Our resize daemons transfer JPEGs from shared memory to GPU device memory. Once here, the real image processing takes place. The JPEGs are decoded, cropped, sharpened, resized, re-sharpened as needed, re-encoded as JPEGs, and finally transferred back to shared memory. From shared memory, our Apache module returns the resized JPEG to the caller.



A simple resize pipeline. Post-sharpening overcomes fuzziness introduced when downscaling.

There are several accepted resize algorithms, but to retain the Flickr “look”, we implemented in CUDA the same Lanczos resize and kernel sharpening algorithms that we’ve used for years. This had the added benefit of letting us directly compare images generated by GraphicsMagick and by our GPU-based code.
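For reference, the standard Lanczos kernel looks like this (the post doesn’t say which window size a Flickr uses; 2 and 3 are the common choices):

\[
L(x) =
\begin{cases}
\operatorname{sinc}(x)\,\operatorname{sinc}(x/a) & |x| < a \\
0 & \text{otherwise}
\end{cases}
\qquad \text{where } \operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}.
\]

Each output pixel is a weighted sum of nearby input pixels using these weights, which is part of what makes a high-quality resize so much more expensive than a simple box or bilinear filter.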

Performance

With significant optimization, this code is able to resize our 2048px JPEGs to 1600px in under 16ms. This is more than 15x faster than GraphicsMagick and nearly 10x faster than Ymagine. Resizes from 2048px to 640px take under 10ms. Equally noteworthy, at peak load, each resize server can perform over 300 resizes per second.



Performance of different resize approaches.

Although these timings are quite fast, the source image for our resizes is larger, byte-wise, than the resizes generated from it, requiring additional I/O. For example, a typical 2048px source JPEG is roughly 600kB and our typical 1024px JPEGs are just under 200kB. This difference in size leads to roughly 35ms of additional I/O time per resize.
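As a back-of-the-envelope check (the throughput figure is an inference from these numbers, not something we measured separately), the extra bytes and the extra time line up at roughly

\[
\frac{600\ \text{kB} - 200\ \text{kB}}{35\ \text{ms}} \approx 11\ \text{MB/s} \approx 90\ \text{Mbit/s}
\]

of effective per-request read throughput.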

Taking it slow

As our GPU code is new and images are our most important product, this change carries some risk. We’ve addressed this with extensive testing, progressive rollout and provisions for rollback. We also used some insights into our user behavior to roll this solution out in a very controlled manner.

Conclusion

This system is currently in production and as we roll it out more fully, has the potential to cut the resize footprint of the majority of our photos by 50%, with negligible impact on performance and image appearance. We also have the ability to apply this same footprint reduction technique to images uploaded in the past, which has the potential to reduce our storage growth to zero for a significant period of time.

Credits

This project would not have been possible without the hard work of Peter Norby, Tague Griffith, John Ko and many others.

Much Photos!

Introducing the New! Shiny! Photolist framework

Blue skies. Mostly.

Here at Flickr, we have photos. Lots of photos. Like, billions and billions of photos. So, it’s pretty important for us to be able to show you more than one at once.

We have used what we call the “justified algorithm” to lay out photos for a while now, but as we move more and more pages onto our new-ish isomorphic node.js stack, we determined it was time to revisit the algorithm and create an updated implementation.

A few of us here in Frontend-landia got together to figure out all the things this new shiny should be able to do. With a lot of projects in full swing and on the near horizon, we came up with a pretty significant list, including but not limited to:
– Easy for developers to use
– Fit into any kind of container
– Support pagination (in both directions!) and infinite scroll
– Jank-free, butter-silky-baby-smooth scrolling
– Support layouts other than justified, like square thumbnails and grid layout with native aspect ratio

After some brainstorming, drawing of diagrams, and gummi bear consumption, we got to work building out the framework and the underlying algorithm.

Rejustification

Drawing of diagrams

The basics of the justified algorithm aren’t too complex. The goal is for the layout module to accept a list of photo aspect ratios, and return a list of rectangles. A layout consists of a number of rows of items (photos), each with a target height and allowable height deviation above and below. This, along with the container width, gives us a minimum and maximum row aspect ratio.
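Spelled out with symbols of our own (not from the original post): for container width W, target row height h, and allowable deviation d, a row whose photo aspect ratios sum to r lays out at height W/r, so the bounds are

\[
r_{\min} = \frac{W}{h + d}, \qquad r_{\max} = \frac{W}{h - d}, \qquad r_{\text{target}} = \frac{W}{h}.
\]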

Photolist: variable row height
Fig. 1: The justified algorithm: dimensions

We push each photo into a row; once the row is filled up, we move on to the next. It goes a little something like this:

  1. Iterate over each photo in the list to display
  2. Create a new row if there’s not currently an open row
  3. Attempt to add the photo to the current row at its native aspect ratio and at the target row height
  4. If the new row aspect ratio is less than the minimum row aspect ratio, continue adding photos until the aspect ratio is greater than the maximum aspect ratio
  5. Either keep or drop the last added photo, depending on which generates a row aspect ratio closer to the target row aspect ratio; adjust the row height as needed, and seal the row
  6. Repeat until all the photos have been laid out.

Photolist: row filling
Fig. 2: The justified algorithm: row filling
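To make the loop concrete, here is a deliberately simplified Python sketch of the row-filling pass described above. It is our own toy code, not the framework’s actual module: it only uses the maximum-ratio bound, ignores spacing between photos, and returns bare row heights and widths.

def justify(aspect_ratios, container_width, target_height, deviation):
    """Greedy row filling: returns a list of (row_height, photo_widths) pairs."""
    max_ratio = container_width / (target_height - deviation)  # a row is "full" past this
    target_ratio = container_width / target_height

    rows, row = [], []
    for ar in aspect_ratios:
        row.append(ar)
        if sum(row) < max_ratio:
            continue                                  # keep filling the open row
        # Keep or drop the last photo, whichever leaves the row closer to the target ratio.
        if len(row) > 1 and abs(sum(row[:-1]) - target_ratio) < abs(sum(row) - target_ratio):
            carry = row.pop()
            rows.append(seal(row, container_width))
            row = [carry]                             # the dropped photo starts the next row
        else:
            rows.append(seal(row, container_width))
            row = []
    if row:
        rows.append(seal(row, container_width))       # trailing incomplete row (the "orphans")
    return rows

def seal(row, container_width):
    # Adjust the row height so the photos exactly fill the container width.
    height = container_width / sum(row)
    return height, [height * ar for ar in row]

print(justify([1.5, 0.8, 1.2, 1.77, 1.0], container_width=1000, target_height=250, deviation=50))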

It’s Never That Easy…

The justified algorithm described above is the primary responsibility of the layout module. In practice, however, there are a number of other things the layout must handle to get good results for all use cases, and to communicate the results to other parts of the framework.

Diffs

One key feature of the layout module is how it organizes its results. To minimize the amount of processing required to update photos as the layout changes, the layout module returns pre-sorted diffs, each with a specific purpose:

  • new items, used to create new photos and put them in place
  • layout-changed items, used to resize/reposition existing photos
  • visibility-changed items, used to wake/sleep existing photos
  • widows and orphans (leading and trailing items) (read on!)

The container view can process only the parts of the layout response that are necessary, given the current state of the whole framework, to keep processing time down and keep performance up.

Widows and orphans


Annie the Musical, by Eva Rinaldi

Some photolist pages on Flickr use infinite scrolling, and some display results one page at a time. Regardless of how a page shows its photos, it starts to feel messy when there is an incomplete row of photos hanging off the end of the page. If there is more content in the set, the last row should be full. However, since we fetch photos from the API in fixed batch sizes, things don’t always work out so nicely, leaving “leftovers” in the bottom row. Borrowing from typesetting terminology, we call these leftover photos orphans. (We can also paginate backwards; leftovers at the top are technically widows but we’ll just keep using the term orphans for simplicity.)

The layout notes these incomplete rows and hides them from the rest of the framework until the next page of content loads in. (This led to frequent and questionable metaphors about “orphan suppression” and “orphan rehydration.”) When orphans are to be hidden, the layout simply keeps them out of the diff. When the orphans are brought back in as the next page loads, the layout prepends them to the next diff. The container view is none the wiser.

This logic gets even more fun when you consider that it must perform in all of these use cases:

  • fixed page size (book-style) pagination
  • downward-scrolling infinite pagination
  • upward-scrolling infinite pagination (enter into an “infinite” content set somewhere other than the beginning); this requires right-to-left layout!

There’s also the case of the end of an “infinite” content set (scrolling down to the end or up to the beginning); in these cases, we still want the row to appear complete, and still must maintain the native aspect ratio of the photo. Therefore, we allow the row height to grow as much as it needs in this case only.

Bonus Round!

You might have noticed that Flickr is kind of a big site with, like, lots of photos. And we display photos in lots of different ways, with lots of different use cases. The photolist framework bends over backwards to support all of those, including:

  • forward and backward pagination
  • infinite scroll and fixed page-size pagination
  • specific aspect ratios (e.g. squares)
  • fixed number of rows
  • fast relayout (only a few milliseconds for thousands of photos)

Going into detail on each of those features is way beyond the scope of this blog post, but suffice it to say the framework is built to handle just about anything Flickr can throw at it. The one exception is the upcoming Camera Roll (coming soon to those of you who don’t yet have it!), which is Too Extreme for this framework, so we devised something special just for that page.

The whole enchilada


Mmm… enchiladas, by jeffreyww

The layout is at the heart of the photolist framework, but wait — there’s more! The main components of the framework are the layout (dissected above), the container view / controller, and the subviews (usually containing photos).

Photolist: the whole enchilada
Fig. 3: Relationship of view/controller, layout, and subviews, and changing subview states during downward scroll

The container view does a lot of fancy things, like:

  • loading in photos as you scroll down or paginate
  • triggering a relayout when the container size changes (i.e. when you resize your window)
  • matching up server-rendered HTML with clientside JavaScript objects (see isomorphic JavaScript, and an upcoming blog post about the Hermes stack at Flickr).

Its primary job, though, is to act as the conduit between the layout module and the individual subviews.

Every time a layout is processed or changes, it returns a “layout response” to the container view. The layout response contains a list of rectangles and wake/sleep flags (actually, a list of lists; see Diffs above); the container view relays that new information on to each individual subview to determine position and visibility. The container view doesn’t even need to know about the layout details — each subview adjusts itself to its layout data all on its own.
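As a rough illustration (the field names here are invented for this post, not the framework’s actual API), one diff in a layout response might look something like this:

layout_response = {
    'new': [                      # photos entering the layout for the first time
        {'index': 12, 'x': 0, 'y': 1280, 'width': 320, 'height': 213, 'awake': True},
    ],
    'layoutChanged': [            # existing photos that moved or changed size
        {'index': 3, 'x': 340, 'y': 0, 'width': 480, 'height': 213, 'awake': True},
    ],
    'visibilityChanged': [        # existing photos that should wake up or go to sleep
        {'index': 0, 'awake': False},
    ],
}

Each subview only has to look at its own rectangle and wake/sleep flag; the container view never interprets the geometry itself.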

The subviews each have a decent amount of intelligence of their own, performing such tasks as:

  • choosing the most appropriate photo file size to fit the layout rectangle
  • adding/removing itself to/from the DOM as instructed by the layout to maintain good scroll performance
  • providing an annotation and interaction layer for titles, faves, comments, etc.

Coming soon to a webpage near you

The new photolist framework is certainly not a one-size-fits-all solution; it’s tailored for Flickr’s specific use cases. However, we tried to design and build it to be as broadly useful for Flickr as possible; as we continue to move parts of the site onto the new frontend stack and innovate new features, it’s critical to have solid components upon which we can build Flickr’s future. The layout algorithm is probably useful for many applications though, and we hope you gained some insight into how you might implement your own.

The photolist framework is already live in a number of places on the site, including the new Unified Search pages (currently in Beta), the Create / Wall art pages, the Group pool preview, and is coming soon to a number of other pages.

As always, if you’re interested in helping with that “and more” part, we’d love to have you! Stop by our jobs page and drop us a line.

33 Browser Stats You Just Might Believe

We care an awful lot about the kinds of browsers and computers visiting Flickr. As people update to the latest versions of their browsers, the capabilities we can build against improve, which lets us build cool new things. At the same time, if lots of people continue using older browsers then we have to do extra work to gracefully support them.

These days we not only have incredibly capable browsers, but thanks to the transparent and rapid update process of Chrome, Firefox, and soon Internet Explorer (hooray!), we can rely on new features rapidly showing up en masse. This is crazy great, but it doesn’t mean that we can stop paying attention to our usage statistics. In fact, as people spend more time on their phones, there’s as much of a need for a watchful eye as ever.

We’ve never really shared our internal numbers, but we thought it would be interesting to take a look at the browsers Flickr visitors used in 2014. We use these numbers constantly to inform our project planning. Since limitations in older browsers take time to support we have to be judicious in picking which battles to fight. As you’ll see below, these numbers can be quite dynamic with a popular browser dropping to nearly 0% market-share in just a year. Let’s dive in and see some specifics.

Fort Vancouver
Fort Vancouver by Kate Dickerson

Top level OSes and browsers

At the highest level we learn a lot by looking at our OS family data. Probably the most notable thing here is how much of our traffic is coming from mobile devices. Moreover, the rate of growth is eye-popping. And this is just our website – this data doesn’t include our iOS or Android clients at all. A quarter of our traffic is from mobile devices.

OSes in use on Flickr.com
2013 Q4 2014 Q4 Y/Y
Windows 56.55% 50.61% -5.94
Macintosh 21.49% 21.42% -0.07
iOS 11.09% 17.61% 6.52
Android 5.39% 7.82% 2.43
Other 5.48% 2.54% -2.94

Let’s slice things slightly differently and look at browser families. We greatly differ from internet-wide traffic in that IE isn’t the outright majority browser. In fact, it clocks in at only the #4 position. More than half of Flickr visitors use a Webkit/Webkit-heritage browser (Safari and Chrome, respectively). Chrome rapidly climbed into its leadership position over the last few years and it’s stabilized there. Safari is hugely buoyed by iOS’s incredible growth numbers, while IE has been punished by Windows’s Flickr market-share decline.

Browsers in use on Flickr.com
2013 Q4 2014 Q4 Y/Y
Chrome 35.71% 35.42% -0.29
Safari 24.11% 27.50% 3.39
Firefox 17.94% 18.29% 0.35
Internet Explorer 13.98% 10.31% -3.67
Other 8.26% 8.48% 0.22

Fine-grained details

We can go a step further and see many details in the individual versions of OSes and browsers out there. It’s one thing to say “Windows is down 6% over the year” but another to say “the growth rate for the latest version of Windows is 350% year over year.” When we look at the individual versions we can infer quite a bit of detail around update rates and changes in the landscape.

OS version details

A few highlights:

  • Windows 7 is on the decline, XP and Vista fell by roughly 50% each, and Windows 8 and 8.1 are surging ahead.
  • iOS 8.1 and Android 5.0 don’t appear in the list due to their late appearance in Q4. Our current monthly numbers have iOS 8.1 far outpacing every other iOS version.
  • OS X 10.10 has accelerated Mac user upgrades; since its launch 10.9 has shed over a percent per month, and the legacy versions have sharply accelerated their decline.
OS versions in use on Flickr.com
2013 Q4 2014 Q4 Y/Y
Windows NT 3.39% 0% -3.39
Windows XP 10.12% 4.49% -5.63
Windows Vista 3.56% 2.41% -1.15
Windows 7 36.29% 33.14% -3.15
Windows 8 2.01% 2.31% 0.30
Windows 8.1 1.06% 8.22% 7.16
Macintosh OS X 10.5* n/a 0.65% 0.65
Macintosh OS X 10.6* n/a 2.90% 2.90
Macintosh OS X 10.7* n/a 1.91% 1.91
Macintosh OS X 10.8* n/a 1.83% 1.83
Macintosh OS X 10.9* n/a 8.26% 8.26
Macintosh OS X 10.10 0% 5.69% 5.69
iOS 4.3 0.19% 0% -0.19
iOS 5.0 0.12% 0% -0.12
iOS 5.1 0.59% 0% -0.59
iOS 6.0 0.42% 0% -0.42
iOS 6.1 2.02% 0.61% -1.41
iOS 7.0 7.36% 1.54% -5.82
iOS 7.1 0% 5.76% 5.76
iOS 8.0 0% 3.27% 3.27
Android 2.3 0.77% 0% -0.77
Android 4.0 0.82% 0% -0.82
Android 4.1 2.11% 1.22% -0.89
Android 4.2 0.84% 1.16% 0.32
Android 4.3 0.39% 0.56% 0.17
Android 4.4 0% 3.80% 3.80
Linux 4.37% 1.94% -2.43

* We didn’t start breaking out individual versions of OS X until Q1 2014. So unfortunately for this post we don’t have great info breaking down the versions of OS X, but we will in the future. OS X 10.10 did not exist in Q1 2014 so it’s counted as a natural 0% in our Q1 data.

Browser version details

These are the most dynamic numbers of the bunch. If there’s one thing they prove, it’s how incredibly effective the upgrade policies of Chrome and Firefox are. Where Safari and IE have years-old versions still hanging on (I’m looking at you, Safari 5.1 and IE 8.0), virtually every Chrome and Firefox user is using a browser released within the last six weeks. That’s a hugely powerful thing. The IE team has suggested that Windows 10’s Project Spartan will adopt this policy, which is absolutely fantastic news. A few highlights:

  • Despite not being on a continuous upgrade cycle, Safari and IE were able to piggyback on successful OS launches to consolidate their users on their latest releases.
  • IE 8.0 is the only non-latest version of IE still holding on, thanks to its status as the latest version available for the still somewhat popular Windows XP.
Browser versions in use on Flickr.com
2013 Q4 2014 Q4 Y/Y
Chrome 22.0.1229 1.67% 0% -1.67
Chrome 29.0.1547.76 1.39% 0% -1.39
Chrome 30.0.1599.101 8.94% 0% -8.94
Chrome 30.0.1599.69 3.74% 0% -3.74
Chrome 31.0.1650.57 6.08% 0% -6.08
Chrome 31.0.1650.63 6.91% 0% -6.91
Chrome 37.0.2062.124 0% 4.59% 4.59
Chrome 38.0.2125.104 0% 3.05% 3.05
Chrome 38.0.2125.111 0% 6.65% 6.65
Chrome 39.0.2171.71 0% 4.09% 4.09
Chrome 39.0.2171.95 0% 4.51% 4.51
Safari 5.0 1.96% 0% -1.96
Safari 5.1 5.60% 2.50% -3.10
Safari 6.0 6.21% 0.86% -5.35
Safari 7.0 7.29% 7.25% -0.04
Safari 7.1 0% 3.12% 3.12
Safari 8.0 0% 10.10% 10.10
Firefox 22.0 1.62% 0% -1.62
Firefox 24.0 5.50% 0% -5.50
Firefox 25.0 6.46% 0% -6.46
Firefox 26.0 1.90% 0% -1.90
Firefox 32.0 0% 4.92% 4.92
Firefox 33.0 0% 7.10% 7.10
Firefox 34.0 0% 3.52% 3.52
MSIE 8.0 3.69% 1.00% -2.69
MSIE 9.0 3.04% 1.22% -1.82
MSIE 10.0 5.94% 0% -5.94
MSIE 11.0 0% 6.69% 6.69
Generic WebKit 4.0* 3.18% 2.46% -0.72
Mozilla 5.0* 3.18% 4.80% 1.62
Opera 9.80 1.46% 0% -1.46

* These are catch-all versions of Mozilla-based and Webkit-based browsers that aren’t themselves Firefox, Safari, or Chrome.

A word on methodology

These numbers were anonymously collected using Yahoo’s in-house metrics libraries. The numbers here are aggregated over the course of three months each, making these numbers lagging indicators. This is why the latest releases, like Android 5.0 and iOS 8.1, are under-represented – they hadn’t yet enjoyed one full quarter when 2014 came to a close.

Further reading

There are a number of excellent sites out there watching similar browser statistics on a continuing basis. A few of them are:

  • Ars Technica – on a monthly basis they analyze raw data from Net Market Share with insightful commentary.
  • Net Market Share – while Ars does a bang-up job, it’s helpful to sift the data yourself to find the answers to your questions.
  • Peter-Paul Koch – No one shines a sharper light on the state of browsers than PPK, with just one example being his attention to disambiguating the various versions of Chromium out there (part two).

 
 

Flickr September 2014

Like this post? Have a love of online photography? Want to work with us? Flickr is hiring engineers, designers and product managers in our San Francisco office. Find out more at flickr.com/jobs.

Introducing: Flickr PARK or BIRD

park OR bird
Zion National Park Utah by Les Haines (Creative Commons license); Secretary Bird by Bill Gracey (Creative Commons license)

tl;dr: Check it out at parkorbird.flickr.com!

We at Flickr are not ones to back down from a challenge. Especially when that challenge comes in webcomic form. And especially when that webcomic is xkcd. So, when we saw this xkcd comic we thought, “we’ve got to do that”:

xkcd-1425
Creative Commons License

In fact, we already had the technology in place to do these things.  Like the woman in the comic says, determining whether a photo with GPS info embedded into it was taken in a national park is pretty straightforward. And, the Flickr Vision team has been working for the last year or so to be able to recognize more than 1000 things in images using deep convolutional neural nets. Incidentally, one of the things we’re pretty good at recognizing is birds!

We put those things together, and thus was born parkorbird.flickr.com!

Recognizing Stuff in Images with Deep Networks

The thing we’re really excited to show off with PARK or BIRD is our image recognition technology. To recognize 1000+ things, we employ a deep convolutional neural network similar to the one depicted below.

conv-net

This model transforms an input image into a representation in which different objects and scenes are easily distinguishable by a simple binary classification algorithm, like an SVM. It does this by passing the image through a series of layers, where each layer computes a function of the output of the layer below it.

Each successive one of these layers, after training on millions of images, has learned to recognize higher- and higher-level features of images and the ways these features go together to form different objects and scenes. For example, the first layer might recognize the most basic image features, such as short straight lines, corners, and small circular arcs. The next layer might recognize higher level combinations of those features, such as circles or other basic shapes. Further layers might recognize higher-level concepts, like eyes and beaks, and even further ones might recognize heads and wings. For an example of what this looks like, check out Figure 2 in this paper by Matt Zeiler and Rob Fergus.

As the image passes through these layers, they are “activated” in different ways depending on the features they’ve seen in the input image, and at the top of this network—after the image is transformed by the bottom layer, and that transformation of the image is transformed by the next layer, and that transformation of the transformation of the image is transformed by the next layer, and so on— a short floating-point vector summarizing all of the various activations at each layer is output. We pass this floating-point vector into more than 1000 binary classifiers, each of which is trained to give us a yes/no answer to identify a specific object/scene class. And, of course, one of those classes is birds!
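For intuition, that last step boils down to something like the following toy sketch (the dimensions, weights, and class index are all made up; the real system uses classifiers trained on millions of images, not random numbers):

import numpy as np

NUM_CLASSES = 1000
FEATURE_DIM = 4096                            # illustrative length of the network's output vector

# One weight vector and bias per binary classifier (e.g. a linear SVM per class).
rng = np.random.default_rng(0)
weights = rng.standard_normal((NUM_CLASSES, FEATURE_DIM))
biases = rng.standard_normal(NUM_CLASSES)

def classify(feature_vector):
    # Each classifier independently answers its own yes/no question about the image.
    scores = weights @ feature_vector + biases
    return scores > 0                         # boolean vector: one answer per class

features = rng.standard_normal(FEATURE_DIM)   # stand-in for the network's output for one image
answers = classify(features)
print("is it a bird?", bool(answers[42]))     # 42 is a made-up index for the 'bird' class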

The Flickr Vision team is already applying this deep network to Flickr photos to help people more easily find what they’re looking for via Flickr search, and we plan to integrate it into Flickr in other cool ways in the future. We’re also working on other innovative computer vision and image recognition technologies that will make it easier for Flickr members to find and organize their photos.

Acknowledgements

The Flickr Vision and Search team is awesome and PARK or BIRD is built upon technologies that we all pitched in on. Here we all are (at least most of us), in all our beautiful glory. Thanks Vision/Search! Thanks also to Stephen Woods, Bart Thomee, John Ko, Mike Shema, and Sean Perkins, all of whom provided a lot of help getting PARK or BIRD off the ground.

Flickr flamily floto

If this all sounds like a challenge you’re interested in helping out with, you should join us! Flickr is hiring engineers, designers and product managers in our San Francisco office. Find out more at flickr.com/jobs.

The Ins and Outs of the Yahoo Flickr Creative Commons 100 Million Dataset

This past summer we (Yahoo Labs and Flickr) released the YFCC100M dataset, the largest and most ambitious collection of Flickr photos and videos ever, containing 99,206,564 photos and 793,436 videos from 581,099 different photographers. We’re super excited about the dataset, because it is a reflection of how Flickr and photography have evolved over the past 10 years. And it contains photos and videos of almost everything under the sun (and yes, loads of cats).

We’ve received a lot of emails and tweets asking for more details about the dataset, so in this blog post, we’ll gladly tell you. Each of the 100 million photos and videos is associated with a Creative Commons license that indicates how it may be used by others. The table below shows the complete breakdown of licenses in our dataset. Approximately 31.8% is marked for commercial use, while 17.3% has the most liberal license, which only requires attribution to the photographer.

License Photos Videos
17,210,144 137,503
9,408,154 72,116
4,910,766 37,542
12,674,885 102,288
28,776,835 235,319
26,225,780 208,668

The photos and videos themselves are very diverse. We’ve found photos showing street scenes captured as part of photographer Andy Nystrom‘s life-logging activities, photos of real-world events like protests and rallies, as well as photos of natural phenomena.

Example photos: “Five years of Iraq war die-in” by Steve Rhodes, “IMG_9793” by Andy Nystrom, and “851-Aurora Borealis Northern Lights from Lodge near Fairbanks 1 Sep 28, 2011 1-11 AM 1600x1060” by BJ Graf.

To understand more about the visual content of the photos in the dataset, the Flickr Vision team used a deep-learning approach to find the presence of visual concepts, such as people, animals, objects, events, architecture, and scenery across a large sample of the corpus. There’s a diverse collection of visual concepts present in the photos and videos, ranging from indoor to outdoor images, faces to food, nature to automobiles.

Concept Count
outdoor 32,968,167
indoor 12,522,140
face 8,462,783
people 8,462,783
building 4,714,916
animal 3,515,971
nature 3,281,513
landscape 3,080,696
tree 2,885,045
sports 2,817,425
architecture 2,539,511
plant 2,533,575
house 2,258,396
groupshot 2,249,707
vehicle 2,064,329
water 2,040,048
mountain 2,017,749
automobile 1,351,444
car 1,340,751
food 1,218,207
concert 1,174,346
flower 1,164,607
game 1,110,219
text 1,105,763
night 1,105,296

There are 68,971,123 photos and videos in the set that have user-annotated tags. If we look at specific tags used, we see it is very common for people to use the year of capture, the camera brand, place names, scenery, and activities as tags. The top 25 tags (excluding the years of capture) and how often they were used are listed below, as well as the tag frequency distribution for the 100 most-frequently used tags.

User Tag Count
nikon 1,195,576
travel 1,195,467
usa 1,188,344
canon 1,101,769
london 996,166
japan 932,294
france 917,578
nature 872,029
art 854,669
music 826,692
europe 782,932
beach 758,799
united states 743,470
england 739,346
wedding 728,240
city 689,518
italy 688,743
canada 686,254
new york 685,311
vacation 680,142
germany 672,819
party 663,968
park 651,717
people 641,285
water 640,234

User tag distribution in the YFCC100M Dataset

Some photos and videos (3,350,768 to be exact) carry machine tags. Noteworthy machine tags are those having the “siwild” namespace, referring to photos uploaded by scientists of the Smithsonian, and the “taxonomy” namespace, which refers to photos in which flora and fauna have been carefully classified. The most frequently occurring namespace, “uploaded,” refers to the applications used to share the photos on Flickr, which are principally the Flickr and Instagram iOS apps. Other interesting machine tags refer to the different filters that can be applied to a photo, accounting for roughly 750,000 photos. Overall, most machine tags are related to food and drink, events, camera and application metadata, as well as locations.

Machine Tag Count
uploaded 1,917,650
siwild 1,169,957
taxonomy 1,067,857
foursquare 894,265
exif 617,287
flickriosapp 538,829
geo 443,762
sequence 429,948
lastfm 313,379
flickrandroidapp 222,238

In terms of locations, the photos and videos in the dataset have been taken all over the world. In total, 48,366,323 photos and 103,506 videos were geotagged. The most popular cities where photos and videos were shot are concentrated in the United States, principally New York City, San Francisco, Los Angeles, Chicago, and Seattle; in Europe, they were principally London, Berlin, Barcelona, Rome and Amsterdam. There are also photos that have been taken in remote locations like Kiribati, icy places like Svalbard, and exotic places like Comoros. In fact, photos and videos from this dataset have been taken in 249 different territories (countries, islands, etc) around the world, and even in international waters or airspace.

One Million Creative Commons Geo-tagged Photos

Our dataset further reveals that there are many different cameras in use within the Flickr community. The Canon EOS 400D and 350D have a lead over the Nikon D90 (calm down…we’re not starting anything by saying that). Apple’s iPhones form the most popular type of cameraphone.

Make Camera Count
Canon EOS 400D 2,539,571
Canon EOS 350D 2,140,722
Nikon D90 1,998,637
Canon EOS 5D Mark II 1,896,219
Nikon D80 1,719,045
Canon EOS 7D 1,526,158
Canon EOS 450D 1,509,334
Nikon D40 1,358,791
Canon EOS 40D 1,334,891
Canon EOS 550D 1,175,229
Nikon D7000 1,068,591
Nikon D300 1,053,745
Nikon D50 1,032,019
Canon EOS 500D 1,031,044
Nikon D700 942,806
Apple iPhone 4 922,675
Nikon D200 919,688
Canon EOS 20D 843,133
Canon EOS 50D 831,570
Canon EOS 30D 820,838
Canon EOS 60D 772,700
Apple iPhone 4S 761,231
Apple iPhone 743,735
Nikon D70 742,591
Canon EOS 5D 699,381

Our collection of 100 million photos and videos marks a new milestone in the history of datasets. The collection is one of the largest released for academic use, and it’s incredibly varied—not just in terms of the content shown in the photos and videos, but also the locations where they were taken, the photographers who took them, the tags that were applied, the cameras that were used, etc. The best thing about the dataset is that it is completely free to download by anyone, given that all photos and videos have a Creative Commons license. Whether you are a researcher, a developer, a hobbyist or just plain curious about online photography, the dataset is the best way to study and explore a wide sample of Flickr photos and videos.  Happy researching and happy hacking!

Performance improvements for photo serving

We’ve been working to make Flickr faster for our users around the world. Since the primary photo storage locations are in the US, and information on the internet travels at a finite speed, the farther away a Flickr user is located from the US, the slower Flickr’s response time will be. Recently, we looked at opportunities to improve this situation. One of the improvements involves keeping temporary copies of recently viewed photos in locations nearer to users.  The other improvement aims to get a benefit from these caches even when a user views a photo that is not already in the cache.

Regional Photo Caches

For a few years, we’ve deployed regional photo caches located in Switzerland and Singapore. Here’s how this works. When one of our users in Vietnam requests a photo, we copy it temporarily to Singapore. When a second user requests the same photo, from, say, Kuala Lumpur, the photo is already present in Singapore. Flickr can respond much faster using this copy (only a few hundred kilometers away) instead of using the original file back in the US (over 8,000 km away).

The first piece of our solution has been to create additional caches closer to our users. We expanded our regional cache footprint around two months ago. Our Australian users, among others, should now see dramatically faster load times. Australian users will now see the average image load about twice as fast as it did in March.

We’re happy with this improvement and we’re planning to add more regional caches over the next several months to help users in other regions.

Cache Prefetch

When users in locations far from the US view photos that are already in the cache, the speedup can be up to 10x, but only for the second and subsequent viewers. The first viewer still has to wait for the file to travel all the way from the US. This is important because there are so many photos on Flickr that are viewed infrequently. It’s likely that a given photo will not be present in the cache. One example is a user looking at their Auto Upload album. Auto uploaded photos are all private initially. Scrolling through this album, it’s likely that very few of the photos will be in their regional cache, since no other users would have been able to see them yet.

It turns out that we can even help the first viewer of a photo using a trick called cache warming.

To understand how cache warming works, you need to understand a bit about how we serve images. For example, say that I’m a user in Spain trying to access the photostream of a user, Martin Brock, in the US. When my request for Martin Brock’s Photostream at https://www.flickr.com/photos/martinbrock/ hits our backend servers, our code quickly determines the most recent photos Martin has uploaded that are visible to me, which sizes will fit best in my browser, and the URLs of those images. It then sends me the list of those URLs in an HTML response. The user’s web browser reads the HTML, finds the image URLs and starts loading them from the closest regional cache.


Standard image fetch

So you’re probably already guessing how to speed things up.  The trick is to take advantage of the time in between when the server knows which images will be needed and the time when the browser starts loading them from the closest cache. This period of time can be in the range of hundreds of milliseconds. We saw an opportunity during this time to send the needed images over to the viewer’s regional cache in advance of their browser requesting the images. If we can “win the race” to do this, the viewer’s experience will be much faster, since images will load from the local cache instead of loading from the US.

To take advantage of this opportunity, we created a new “cache warming” process called The Warmer. Once we’ve determined which images will be requested (the first few photos in Martin’s photostream) we send  a message from the API servers to The Warmer.

The Warmer listens for messages and, based on the user’s location, it determines from which of the Flickr regional caches the user will likely request the image. It then pushes the image out to this cache.



Optimized image fetch, with cache warming path indicated in red

Getting this to work well required a few optimizations.

Persistent connections

Yahoo encrypts all traffic between our data centers. This is great for security, but the time to set up a secure connection can be considerable. In our first iteration of The Warmer, this setup time was so long that we rarely got the photo to the cache in time to benefit a user. To eliminate this cost, we used an Nginx proxy which maintains persistent connections to our remote data centers. When we need to push an image out, a secure connection is already set up and waiting to be used.

Transport layer

The next optimization we made helped us reduce the cost of sending messages to The Warmer.  Since the data we’re sending always fits in one datagram, and we also don’t care too much if a small percentage of these messages are never received, we don’t need any of the socket and connection features of TCP. So instead of using HTTP, we created a simple JSON format for sending messages using UDP datagrams. Another reason we chose to use UDP is that if The Warmer is not available or is reacting slowly, we don’t want that to cause slowdowns in the API.
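As a sketch of what the sending side might look like (the host, port, and the sent_at_ms field are our own inventions for this example; the jpg and http_endpoint fields match the Warmer pseudocode later in this post):

import json
import socket
import time

WARMER_ADDR = ('warmer.example.internal', 9999)     # hypothetical host and port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def notify_warmer(jpg_path, http_endpoint):
    # Fire-and-forget: a lost datagram just means one missed warm-up, and a slow
    # or absent Warmer can never hold up the API response.
    message = {
        'jpg': jpg_path,
        'http_endpoint': http_endpoint,
        'sent_at_ms': int(time.time() * 1000),      # lets the Warmer enforce a time-to-live
    }
    try:
        sock.sendto(json.dumps(message).encode('utf-8'), WARMER_ADDR)
    except socket.error:
        pass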

Queue management

Naturally, some images are quite popular and it would waste resources to push them to the same cache repeatedly. So, the third optimization we applied was to maintain a list of recently pushed images in The Warmer. This simple “de-duplication” cut the number of requests made by The Warmer by 60%. Similarly, The Warmer drops any incoming requests that are more than fifty milliseconds old. This “time-to-live” provides a safety valve in case The Warmer has fallen behind and can’t catch up.


import json
import os
import socket

from multiprocessing.pool import ThreadPool

# keepalive_proxy (the address of the local Nginx keepalive proxy), WARMER_PORT,
# determine_colo_to_warm, recently_warmed and request_too_old are assumed to be
# defined elsewhere.

def warm_up_url(params):
    requested_jpg = params['jpg']
    colo_to_warm = params['colo_to_warm']
    # Fetch the image through the keepalive proxy, which already holds a
    # persistent secure connection to the target cache.
    curl = "curl -H 'Host: " + colo_to_warm + "' '" + keepalive_proxy + "/" + requested_jpg + "'"
    os.system(curl)

if __name__ == '__main__':

    # create the worker pool
    worker_pool = ThreadPool(processes=100)

    # listen for UDP datagrams from the API servers
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('', WARMER_PORT))

    while True:

        # receive requests
        json_data, addr = sock.recvfrom(2048)
        params = json.loads(json_data)

        requested_jpg = params['jpg']
        colo_to_warm = determine_colo_to_warm(params['http_endpoint'])

        # skip images that were pushed to this cache recently (de-duplication)
        if recently_warmed(colo_to_warm, requested_jpg):
            continue

        # drop requests older than the time-to-live
        if request_too_old(params):
            continue

        # warm up urls
        params['colo_to_warm'] = colo_to_warm
        warm_result = worker_pool.apply_async(warm_up_url, (params,))
Cache Warmer pseudocode
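For completeness, here is one way the de-duplication and time-to-live helpers referenced above might be implemented (the names, values, and the sent_at_ms field are illustrative, not Flickr’s actual code; a production version would also evict old entries from the map):

import time

RECENT_TTL_SECONDS = 300          # how long to remember a pushed image (illustrative)
MAX_AGE_MS = 50                   # drop requests older than fifty milliseconds

_recently_pushed = {}             # (colo, jpg) -> time of the last push

def recently_warmed(colo_to_warm, requested_jpg):
    now = time.time()
    key = (colo_to_warm, requested_jpg)
    last = _recently_pushed.get(key)
    _recently_pushed[key] = now
    return last is not None and now - last < RECENT_TTL_SECONDS

def request_too_old(params):
    # Assumes the API server stamped the message with its send time in milliseconds.
    return time.time() * 1000 - params['sent_at_ms'] > MAX_AGE_MS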

Java

Our initial implementation of the Warmer was in Python, using a ThreadPool. This allowed very rapid prototyping and worked great — up to a point. Profiling the Python code, we found a large portion of time spent in socket calls. Since there is so little code in The Warmer, we tried porting to Java. A nearly line-for-line translation resulted in a greater than 10x increase in capacity.

Results

When we began this process, we weren’t sure whether The Warmer would be able to populate caches before the user requests came in. We were pleasantly surprised when we first enabled it at scale. In the first region where we’ve deployed The Warmer (Western Europe), we observed a reduction in median latency of more than 200 ms, 95% of photo requests sped up by at least 100 ms, and for a small percentage of photos we saw over 400 ms reduction in latency. As we continue to deploy The Warmer in additional regions, we expect to see similar improvements.

Next Steps

In addition to deploying more regional photo caches and continuing to improve prefetching performance, we’re looking at a few more techniques to make photos load faster.

Compression

Overall, Flickr uses a light touch on compression. This results in excellent image quality at the cost of relatively large file sizes, which translates directly into longer load times for users. With a growing number of our users connecting to Flickr with wireless devices, we want to make sure we can give users a good experience regardless of whether they have a high-speed LTE connection or two bars of 3G in the countryside. An important goal will be to make these changes with little or no loss in image quality.

We are also testing alternative image encoding formats (like WebP). Under certain conditions, WebP may offer better image quality than JPEG at the same compression ratio.

Geolocation and routing

It turns out it’s not straightforward to know which photo cache is going to give the best performance for a user. It depends on a lot of factors, many of which change over time — sometimes suddenly. We think the best way to do this is with a system that adapts dynamically to “Internet weather.”

Cache intelligence

Today, if a user needs to see a medium sized version of an image, and that version is not already present in the cache, the user will need to wait to retrieve the image from the US, even if a larger version of the image is already in the cache. In this case, there is an opportunity to create the smaller version at the cache layer and avoid the round-trip to the US.

Overall we’re happy with these improvements and we’re excited about the additional opportunities we have to continue to make the Flickr experience super fast for our users. Thanks for following along.

Flickr flamily floto

Like what you’ve read and want to make the jump with us? We’re hiring engineers, designers and product managers in our San Francisco office. Find out more at flickr.com/jobs.

Exploring Life Without Compass

Compass is a great thing. At Flickr, we’re actually quite smitten with it. But being conscious of your friends’ friends is important (you never know who they’ll invite to your barbecue), and we’re not so sure about this “Ruby” that Compass is always hanging out with. Then there’s Ruby’s friend Bundler who, every year at the Christmas Party, tells the same stupid story about the time the police confused him with a jewelry thief. Enough is enough! We’ve got history, Compass, but we just feel it might be time to try seeing other people.

Solving for Sprites

In order to find a suitable replacement (and for a bit of closure), we had to find out what kept us relying on Compass for so long. We knew the big one coming into this great experiment: sprites. Flickr is a huge site with many different pages, all of which have their own image folders that need to be sprited together. There are a few different options for compiling sprites on your own, but we liked spritesmith for its multiple image rendering engines. This gives us some flexibility in dependencies.

A grunt task is available for spritesmith, but it assumes you are generating only one sprite. Our setup is a bit more complex and we’d like to keep our own sprite mixin intact so we don’t actually have to change a line of code. With spritesmith and our own runner to iterate over our sprite directories, we can easily create the sprites and output the dimensions and urls via a simple Handlebars template to a Sass file.

{{#each sprites}}
    {{#each images}}
        %{{../dir}}-{{name}}-dimensions {
            width: {{coords.width}}px;
            height: {{coords.height}}px;
        }
        %{{../dir}}-{{name}}-background {
            background: image-url('{{../url}}') -{{coords.x}}px -{{coords.y}}px no-repeat;
        }
    {{/each}}
{{/each}}

You could easily put all three of these rules in the same declaration, but we have some added flexibility in mind for our mixin.

It’s important to note that, because we’re using placeholders (the % syntax in Sass), nothing is actually written out unless we use it. This keeps our compiled CSS nice and clean (just like Compass)!

@import 'path/to/generated/sprite/file';

@mixin background-sprite($icon, $set-dimensions: false) {
    @extend %#{$spritePath}-#{$icon}-background;

    @if $set-dimensions == true {
        @extend %#{$spritePath}-#{$icon}-dimensions;
    }
}

Here, our mixin uses the Sass file we generated to provide powerful and flexible sprites. Note: Although retina isn’t shown here, adding support is as simple as extending the Sass mixin with appropriate media queries. We wanted to keep the example simple for this post, but it gives you an idea of just how extensible this setup is!

Now that the big problem is solved, what about the rest of Compass’s functionality?

Completing the Package

How do we account for the remaining items in the Compass toolbox? First, it’s important to find out just how many mixins, functions, and variables are used. An easy way to find out is to compile with Sass and see how much it complains!


sass --update assets/sass:some-temp-dir

Depending on the complexity of your app, you may see quite a lot of these errors.


error assets/css/base.scss (Line 3: Undefined mixin 'font-face'.)

In total, we’re missing 16 mixins provided by Compass (and a host of variables). How do we replace all the great mixin functionality of Compass? With mixins of the same name, node-bourbon is a nice drop-in replacement.

What is the point of all this work again?

The Big Reveal

Now that we’re comfortably off Compass, how exactly are we going to compile our Sass? Well try not to blink, because this is the part that makes it all worthwhile.

Libsass is a blazing-fast C/C++ port of the Sass compiler, exposed to Node through bindings like node-sass.

Just how fast? With Compass, our compile times were consistently around a minute and a half to two minutes. Taking care of spriting ourselves and using libsass for Sass compilation, we’re down to 5 seconds. When you deploy as often as we do at Flickr (in excess of 10 times a day), that adds up and turns into some huge savings!

What’s the Catch?

There isn’t one! Oh, okay. Maybe there are a few little ones. We’re pretty willing to swallow them though. Did you see that compile time?!

There are some differences, particularly with the @extend directive, between Ruby Sass and libsass. We’re anticipating that these small kinks will continue to be ironed out as the port matures. Additionally, custom functions aren’t supported yet, so some extensibility is lost in coming from Ruby (although node-sass does have support for the image-url built-in which is the only one we use, anyway).

With everything taken into account, we’re counting down the days until we make this dream a reality and turn it on for our production builds.

Flickr flamily floto

Like what you’ve read and want to make the jump with us? We’re hiring engineers, designers and product managers in our San Francisco office. Find out more at flickr.com/jobs.

Redis Sentinel at Flickr

HIGH FIVE!

We recently implemented Redis Sentinel at Flickr to provide automated Redis master failover for an important subsystem and we wanted to share our experience with it. Hopefully, we can provide insight into our experience adopting this relatively new technology and some of the nuances we encountered getting it up and running. Although we try to provide a basic explanation of what Sentinel is and how it works, anyone who is new to Redis or Sentinel should start with the excellent Redis and Sentinel documentation.

At Flickr we use an offline task processing system that allows us to execute heavyweight operations asynchronously from our API and web pages. This prevents these operations from making users wait needlessly for pages to render or API methods to return. Our task system handles millions of tasks per day which includes operations like photo uploads, user notifications and metadata edits. In this system, code can push a task onto one of several Redis-backed queues based on priority and operation, then forget about the task. Many of these operations are critical and we need to make sure we process at least 99.9999% of them (less than 1 in 1 million dropped). Additionally, we need to make sure this system is available to insert and process tasks at least 99.995% of the time – no more than about 2 minutes a month downtime.

Until a few months ago, our Redis BCP consisted of master-slave replication with AOF persistence and a manual recovery plan.

Upon master failure, the recovery plan included several manual steps: reconfiguring code to take the Redis master(s) offline and manually promoting a Redis slave (a mildly time consuming activity). Then we would rebuild and backfill unprocessed data from AOF files and error logs — a very time consuming activity. We knew if we lost a master we would have hours and hours of less-than-satisfying work to run the recovery plan, and there was potential for user impact and even a small amount of data loss. We had never experienced a Redis master failure, but we all know that such events are simply a matter of time. Overall, this fell far short of our durability and availability goals.

Configuring Sentinel

We started by installing and testing Sentinel in a development environment and the first thing we noticed was how simple Sentinel is to use and how similar the syntax is to Redis. We read Aphyr’s article and his back-and-forth blog duel with Salvatore and verified Aphyr’s warning about the “split brain” scenario. Eventually we decided the benefits outweighed the risks in our specific use case. During testing we learned about some Sentinel nuances and got a better feel for appropriate configuration values, many of which have little or no community guidance yet.

One such example was choosing a good value for the level-of-agreement setting, which is the number of Sentinels simultaneously reporting a host outage before automatic failover starts. If this value is too high, you’ll miss real failures; if it’s too low, you are more susceptible to false alarms. (Thanks to Aleksey Asiutin (@aasiutin) for the edit!) In the end, we looked at the physical topology of our hosts across racks and switches and chose to run a relatively large number of Sentinel instances to ensure good coverage. Based on tuning in production, we chose a value for level-of-agreement equal to about 80% of the Sentinel instances.

The down-after-milliseconds configuration setting is the time the Sentinels will wait with no response to their ping requests before declaring a host outage. Sentinels ping the hosts they monitor approximately every second, so by choosing a value of 3,100 we expect Sentinels to miss 3 pings before declaring host outage. Interestingly, because of Sentinel’s ping frequency we found that setting this value to less than 1,000 results in an endless stream of host outage notifications from the Sentinels, so don’t do that.  We also added an extra 100 milliseconds (3,100ms rather than 3,000ms) to allow for some variation in Redis response time.

We chose a parallel-syncs value of 1.  This item dictates the number of slaves that are reconfigured simultaneously after a failover event.  If you serve queries from the read-only slaves you’ll want to keep this value low.

For an explanation of the other values we refer you to the self-documented default sentinel.conf file.

An example of the Sentinel configuration we use:

sentinel monitor cluster_name_1 redis_host_1 6390 35
sentinel down-after-milliseconds cluster_name_1 3100
sentinel parallel-syncs cluster_name_1 1

sentinel monitor cluster_name_2 redis_host_2 6391 35
sentinel down-after-milliseconds cluster_name_2 3100
sentinel parallel-syncs cluster_name_2 1

port 26379

pidfile [path]/redis-sentinel.pid
logfile [path]/logs/redis/redis-sentinel.log
daemonize yes

An interesting nuance of Sentinels is that they write state to their configuration file. This presented a challenge for us because it conflicted with our change management procedures. How do we maintain a dependably consistent startup configuration if the Sentinels are modifying the config files at runtime? Our solution was to create two Sentinel config files. One is strictly maintained in Git and not modified by Sentinel. This “permanent” config file is part of our deployment process and is installed whenever we update our Sentinel system configuration (i.e.: “rarely”). We then wrote a startup script that first duplicates the “permanent” config file to a writable “temporary” config file, then starts Sentinel and passes it the “temporary” file via command-line params. Sentinels are allowed to modify the “temporary” files as they please.
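A startup wrapper along these lines does the trick (the paths and filenames here are illustrative, not our actual deployment scripts):

#!/usr/bin/env python
import shutil
import subprocess

PERMANENT_CONF = '/etc/redis/sentinel.conf.permanent'   # managed in Git, never touched by Sentinel
RUNTIME_CONF = '/var/run/redis/sentinel.conf'           # Sentinel may rewrite this copy at will

def start_sentinel():
    # Refresh the writable copy from the source-controlled config...
    shutil.copyfile(PERMANENT_CONF, RUNTIME_CONF)
    # ...then hand the writable copy to Sentinel on the command line.
    subprocess.check_call(['redis-sentinel', RUNTIME_CONF])

if __name__ == '__main__':
    start_sentinel()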

sunset

Interfacing with Sentinel

A common misconception about Sentinel is that it resides in-band between Redis and Redis clients. In fact, Sentinel is out-of-band and is only contacted by your services on startup. Sentinel then publishes notifications when it detects a Redis outage. Your services subscribe to Sentinel, receive the initial Redis host list, and then carry on normal communication directly with the Redis host.

The Sentinel command syntax is very similar to Redis command syntax. Since Flickr has been using Redis for a long time the adaptation of pre-existing code was pretty straightforward for us. Code modifications consisted of adding a few Java classes and modifying our configuration syntax. For Java-to-Redis interaction we use Jedis, and for PHP we use Predis and libredis.

Using Sentinel from Jedis is not documented as well as it could be. Here’s some code that we hope will save you some time:

// Verify that at least one Sentinel instance in the Set is available and responding.
// sentinelHostPorts: String format: [hostname]:[port]
private boolean jedisSentinelPoolAvailable(Set<String> sentinelHostPorts, String clusterName){
   log.info("Trying to find master from available Sentinels...");
   for ( String sentinelHostPort : sentinelHostPorts ) {
      List<String> hostPort = Arrays.asList( sentinelHostPort.split(":") );
      String hostname = hostPort.get(0);
      int port = Integer.parseInt( hostPort.get(1) );
      try {
         Jedis jedis = new Jedis( hostname, port );
         jedis.sentinelGetMasterAddrByName( clusterName );
         jedis.disconnect();
         log.info("Connected to Sentinel host:%s port:%d", hostname, port);
         return true;
      } catch (JedisConnectionException e) {
         log.warn("Cannot connect to Sentinel host:%s port:%d”, hostname, port);
      }
   }
   return false;
}

private Pool<Jedis> getDefaultJedisPool() {
   // Create and return a default Jedis Pool object…
   // ...
}

// ConfigurationMgr configMgr <= your favorite way of managing system configuration (up to you)
public Pool<Jedis> getPool(ConfigurationMgr configMgr) {
   String clusterName = configMgr.getRedisClusterName();
   Set<String> sentinelHostPorts = configMgr.getSentinelHostPorts();

   if(sentinelHostPorts.size()>0) {
      if(jedisSentinelPoolAvailable( sentinelHostPorts, clusterName )) {
         return new JedisSentinelPool(clusterName, sentinelHostPorts);
      } else {
         log.warn("All Sentinels unreachable.  Using default Redis hosts.");
         return getDefaultJedisPool();
      }
   } else {
      log.warn("Sentinel config empty.  Using default Redis hosts.");
      return getDefaultJedisPool();
   }
}
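For completeness, calling code uses the resulting pool like any other Jedis pool. A rough sketch with the Jedis 2.x return-resource idiom (the key and payload are placeholders, and configMgr is whatever configuration object you passed in above):

Pool<Jedis> pool = getPool(configMgr);
Jedis jedis = pool.getResource();
try {
   jedis.lpush("task_queue", "serialized-task-payload");  // placeholder key and payload
   pool.returnResource(jedis);
} catch (JedisConnectionException e) {
   pool.returnBrokenResource(jedis);
   throw e;
}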


Testing Sentinel at Flickr

Before deploying Sentinel to our production system we had several questions and concerns:

  • How will the system react to host outages?
  • How long does a failover event last?
  • How much of a threat is the split-brain scenario?
  • How much data loss can we expect from a failover?

We commandeered several development machines and installed a few Redis and Sentinel instances. Then we wrote some scripts that insert or remove data from Redis to simulate production usage.
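The scripts were nothing fancy. A sketch of the idea in Java, writing sequential keys through a Sentinel-aware pool so that any gaps after a failover are easy to count (the hostname, key names, and write rate are made up for illustration):

// Sketch of a load generator: one write per iteration, logging every failure,
// so that missing keys in the sequence show data lost during a failover.
import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;
import redis.clients.jedis.exceptions.JedisConnectionException;

public class FailoverLoadTest {
   public static void main(String[] args) throws InterruptedException {
      Set<String> sentinels = new HashSet<String>();
      sentinels.add("sentinel_host_1:26379");  // placeholder host
      JedisSentinelPool pool = new JedisSentinelPool("cluster_name_1", sentinels);

      long failures = 0;
      for (long i = 0; ; i++) {
         Jedis jedis = pool.getResource();
         try {
            jedis.set("test:" + i, Long.toString(System.currentTimeMillis()));
            pool.returnResource(jedis);
         } catch (JedisConnectionException e) {
            pool.returnBrokenResource(jedis);
            failures++;
            System.err.println("write " + i + " failed (" + failures + " so far)");
         }
         Thread.sleep(10);  // roughly 100 writes per second
      }
   }
}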


We ran a series of tests on this setup, simulating a variety of Redis host failures with some combination of the commands: kill -9, the Sentinel failover command, and Linux iptables. This resulted in “breaking” the system in various ways.

Figure: Redis master failure

Figure: Network partition producing a ‘split-brain’ scenario

How will the system react to host outages?

For the most part we found Sentinel to behave exactly as expected and described in the Sentinel docs. The Sentinels detect host outages within the configured down-after-milliseconds duration, then send “subjective down” notifications, then send “objective down” notifications if the level-of-agreement threshold is reached. In this environment we were able to quickly and accurately test our response to failover events. We began with small test scripts, but eventually were able to run repeatable integration tests on our production software. Adding Redis to a Maven test phase for automated integration testing is a backlog item that we haven’t implemented yet.

How long does a failover event last?

The Sentinel test environment was configured with a down-after-milliseconds value of 3,100ms (just like production, see above). With this value Sentinels would produce a host outage notification after approximately 3 unsuccessful pings (one ping per second). In addition to the 3,100ms delay, we found there were 1-3 seconds of overhead for processing the failover event and electing a new Redis master, resulting in 4-6 seconds of total downtime. We are pretty confident we’ll see the same behavior in production (verified — see below).

How much of a threat is the “split-brain” scenario?

We carefully read Aphyr and Salvatore’s blog articles debating the threat of a “split brain scenario.” To summarize: this is a situation in which network connectivity is split, with some nodes still functioning on one side and other nodes continuing to function independently on the other side. The concern is the potential for the data set to diverge with different data being written to masters on both sides of the partition. This could easily create data that is either impossible or very difficult to reconcile.

We recreated this situation and verified that a network partition could create disjoint concurrent data sets. Removing the partition resulted in Sentinel arbitrarily (from our perspective) choosing a new master and losing all data written (post-partitioning) to the other master. So the question is: given our production architecture, what is the probability of this happening and is it acceptable given the significant benefit of automatic failover?

We looked at this scenario in detail, considering all the potential failure modes in our deployment. Although we believe our production environment is not immune to split-brain, we are pretty sure that the benefits outweigh the risks.

How much data loss can we expect from a failover event?

After testing we were confident that Redis host outages could produce 4-6 seconds of downtime in this system. Rapid, automated Sentinel failover combined with reasonable backoff and retry techniques in the code logic was expected to further reduce data loss during a failover event. With Sentinel deployed, and considering our long history of highly stable Redis operation, we believed we could achieve 99.995% or better production availability – under half an hour of downtime per year.
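The backoff-and-retry logic mentioned above is nothing exotic. A sketch of the pattern we mean (the attempt count and backoff schedule are illustrative, not our production values):

// Bounded retry-with-backoff around a Redis write, so a 4-6 second failover
// shows up as a short delay rather than a lost write.
private void setWithRetry(Pool<Jedis> pool, String key, String value)
      throws InterruptedException {
   final int maxAttempts = 5;
   long backoffMs = 500;
   for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      Jedis jedis = pool.getResource();
      try {
         jedis.set(key, value);
         pool.returnResource(jedis);
         return;  // success
      } catch (JedisConnectionException e) {
         pool.returnBrokenResource(jedis);
         if (attempt == maxAttempts) {
            throw e;  // give up and let the caller log the failed task
         }
         Thread.sleep(backoffMs);
         backoffMs *= 2;  // 0.5s, 1s, 2s, 4s: comfortably spans a failover
      }
   }
}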

Sentinel in Production

So how has Sentinel performed in production? Mostly it has been silent, which is a good thing. A month after finishing our deployment we had a hardware failure in a network switch that had some of our Redis masters behind it. Instead of a scenario involving tens of minutes of user impact and human-in-the-loop actions to restore service, automatic failover limited the impact to just seconds, with no human intervention. Thanks to the quick master failover and other reliability features in the code, only 270 tasks failed to insert due to the outage — all of which were captured by logging. Based on the volume of tasks in the system, this met our 99.9999% task durability goal. We did, however, decide to re-run a couple of tasks manually, and for certain critical, low-volume tasks we are looking at providing even more reliability.

One more note from production experience. We occasionally see Sentinels reporting false “subjective down” events. Our Sentinel instances share host machines with other services, and we suspect that occasional load spikes on those hosts affect the Sentinels’ ability to send and receive ping requests. Because our level-of-agreement is high, these false alarms do not trigger objective down events and are relatively harmless. If you deploy Sentinel on hosts that run other workloads, consider the potential impact of load patterns on those hosts and take the time to tune your level-of-agreement.
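One simple way to keep an eye on these false alarms is to poll each Sentinel for its view of the masters it monitors and flag any “down” state. A sketch using Jedis (the hostname is a placeholder; the field names follow the SENTINEL masters reply):

// Sketch: ask one Sentinel which masters it currently considers down.
import java.util.List;
import java.util.Map;
import redis.clients.jedis.Jedis;

public class SentinelHealthCheck {
   public static void main(String[] args) {
      Jedis sentinel = new Jedis("sentinel_host_1", 26379);  // placeholder host
      List<Map<String, String>> masters = sentinel.sentinelMasters();
      for (Map<String, String> master : masters) {
         String name = master.get("name");
         String flags = master.get("flags");  // e.g. "master" or "master,s_down"
         if (flags.contains("down")) {
            System.err.println("Sentinel reports " + name + " as: " + flags);
         }
      }
      sentinel.disconnect();
   }
}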

Conclusion

We have been very happy with Sentinel’s ease of use, relatively simple learning curve and brilliant production execution. So far the Redis/Sentinel combination is working great for us.


Like this post? Have a love of online photography? Want to work with us? Flickr is hiring engineers, designers and product managers in our San Francisco office. Find out more at flickr.com/jobs.

References

  1. Redis Sentinel Documentation – http://redis.io/topics/sentinel
  2. Redis Command Reference – http://redis.io/commands
  3. Aphyr: Call me maybe: Redis – http://aphyr.com/posts/283-call-me-maybe-redis
  4. Antirez: Reply to Aphyr Attack on Sentinel – http://antirez.com/news/55
  5. Jedis Project Page – https://github.com/xetorthio/jedis

Computer vision at scale with Hadoop and Storm

Recently, the team at Flickr has been working to improve photo search. Before our work began, Flickr only knew about photo metadata — information about the photo included in camera-generated EXIF data, plus any labels the photo owner added manually like tags, titles, and descriptions. Ironically, Flickr has never before been able to “see” what’s in the photograph itself.

Over time, many of us have started taking more photos, and it has become routine — especially with the launch last year of our free terabyte* — for users to have many un-curated photos with little or no metadata. This has made it difficult in some cases to find photos, either your own or from others.

So for the first time, Flickr has started looking at the photo itself**. Last week, the Flickr team presented this technology at the May meeting of the San Francisco Hadoop User’s Group at our new offices in San Francisco. The presentation focuses on how we scaled computer vision and deep learning algorithms to Flickr’s multi-billion image collection using technologies like Apache Hadoop and Storm. (In a future post here, we’ll describe the learning and vision systems themselves in more detail.)

Slides available here: Flickr: Computer vision at scale with Hadoop and Storm

Thanks very much to Amit Nithian and Krista Wiederhold (organizers of the SFHUG meetup) for giving us a chance to share our work.

If you’d like to work on interesting challenges like this at Flickr in San Francisco, we’d like to talk to you! Please look here for more information: http://www.flickr.com/jobs

* Today is the first anniversary of the terabyte!

** Your photos are processed by computers – no humans look at them. The automatic tagging data is also protected by your privacy settings.

Flickr API Going SSL-Only on June 27th, 2014

If you read my recent post, you know that the Flickr API fully supports SSL. We’ve already updated our web and mobile apps to use HTTPS, and you no longer need to use “secure.flickr.com” to access the Flickr API via SSL. Simply update your code to call:

https://api.flickr.com/

We want communication with Flickr to be secure, all the time. So, we are tightening things up. Effective this week, all new API keys will work via HTTPS only. On June 27th, we will deprecate non-SSL access to the API. If you haven’t already made the change to HTTPS, now is the time!

Blackout Tests

In preparation for the June 27th cut-off date, we will run two “blackout” tests, each for 2 hours, so that you can ensure that API calls in your app no longer use HTTP. If you have changed your code to use HTTPS, your app should function normally during the blackout window. If you have not changed your code to use HTTPS, then during the 2-hr blackout window all API calls from your application will fail. The API will return a 403 status code for non-SSL requests.
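If you want a quick sanity check ahead of the blackout windows, a one-off call like the following confirms that your environment reaches the API over HTTPS (YOUR_API_KEY is a placeholder; flickr.test.echo simply echoes your parameters back):

// Sketch: verify HTTPS connectivity to the Flickr REST endpoint.
// During a blackout window the same request over plain http:// returns 403.
import java.net.HttpURLConnection;
import java.net.URL;

public class FlickrSslCheck {
   public static void main(String[] args) throws Exception {
      URL url = new URL("https://api.flickr.com/services/rest/"
            + "?method=flickr.test.echo&api_key=YOUR_API_KEY");
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      System.out.println("HTTPS status: " + conn.getResponseCode());  // expect 200
      conn.disconnect();
   }
}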

Important Dates and Times

  • Change in new API keys:  6 May 2014 (If you request a new API key after 6 May, it will be issued for HTTPS only)
  • First blackout window: 3 June 2014, 10:00-12:00 Pacific Daylight Time (PDT) / 17:00-19:00 GMT
  • Second blackout window: 17 June 2014, 18:00-20:00 (PDT) / 18 June 2014, 01:00-03:00 GMT
  • Non-SSL calls deprecated: 27 June 2014, 10:00 (PDT) / 17:00 GMT

More About the Flickr API Endpoints

In the API documentation, all of the endpoints have been updated to HTTPS. While OAuth adds security by removing the need to authenticate by username and password, sending all traffic over SSL further protects our users’ data while in transit.

The SSL endpoints for the Flickr API are:

https://api.flickr.com/services/rest/

https://api.flickr.com/services/soap/

https://api.flickr.com/services/xmlrpc/

And for uploads:

https://up.flickr.com/services/upload/

https://up.flickr.com/services/replace/

For applications that use well-established HTTP client libraries, this switch should only require updating the protocol and (maybe) some configuration.

We realize that this change might be more difficult for some. We will follow the Developer Support Group closely, so please let us hear your questions. We will respond to them there, and will collect questions of general interest in the FAQ below.

FAQs About the Transition to SSL-Only for the Flickr API

Question:  I only have a Flickr API key because I use an application or plugin that calls the Flickr API.  Will that application or plugin continue to work after June 27th?  Do I need to do something?
Answer:  An application or plugin that calls the Flickr API will stop working on June 27th if its owner does not make the changes we’ve described above.  There are many, many providers of such services and plugins.  We have notified them about this transition via email, blog post, developer lists, and on Twitter.  As a user of such a service or plugin, there is nothing you need to do for the transition unless the application or plugin owner asks you to upgrade to a new version.  You can also reach out to the application or plugin owner to assure yourself of their plans to handle this transition.

Question: Are all http://www.flickr.com urls going to be HTTPS from now on?
Answer: Yes, all http://www.flickr.com urls returned by the API are now HTTPS, and all requests to HTTP in the browser are redirected to HTTPS.

Question: Should I switch my code to HTTPS right away or should I wait a bit?
Answer: Switch now. The important thing is for your app to be changed to HTTPS before the first blackout on 3 June 2014. In fact, if your app is a mobile app, the earlier the better, so that your users will be more likely to upgrade before the first blackout.

Question:  Do I need a new API key to replace my old one for this transition?
Answer:  No, you don’t need a new API key for this transition.  You keep your existing key, and you change the code where you call the Flickr API, so that you call it with the HTTPS protocol, instead of HTTP. Change it to this:

https://api.flickr.com/

Question:  What do I do with the new API key that is being issued by Flickr as of May 6?
Answer:  We are not automatically issuing a new key to you.  What happened on 6 May was a change to how we handle new API keys.  From now on, if you submit a request for a new key, that new key will only support calls to the Flickr API over HTTPS; it will not support calls to the API over HTTP.  You do not need to request a new API key for this transition.

Question:  I received your email about the API going SSL-only on June 27th.  Do I need to do something?
Answer:  Maybe not.  If you already call the API over HTTPS, then you’re good.  No action needed.  But if your code currently calls the API over HTTP, then, YES, you do need to do something.  In your code you need to change the protocol to HTTPS.  Like this:

https://api.flickr.com/

Question: If I use a protocol-less call to the API or match the protocol of the page that is making the call, do I need to change anything?  
Answer:  Yes, you should change your calls to specifically use HTTPS. During the blackout windows and starting on June 27th, protocol-less calls to the Flickr API from non-SSL pages will fail.

Question:  Will the old https://secure.flickr.com endpoints continue to be supported in addition to the new https://api.flickr.com endpoints, or will only the latter be supported?
Answer:  Yes, https://secure.flickr.com endpoints will still be supported.  If you use that today, your application will continue to work during the blackout windows and after 27 June.
