Powering Flickr’s Magic view by fusing bulk and real-time compute

Try it for yourself!

You can try out Flickr’s Magic View on your own photos here, and you can download a working code sample of the simplified lambda architecture here: https://github.com/yahoo/simplified-lambda

Introduction

In this post we’re going to talk about how we came up with a novel revision of the Lambda Architecture for fusing large-scale bulk compute with streaming compute to power Flickr’s Magic View. We were able to create a responsive, real-time database operating at a scale of tens of billions of records, with tens to hundreds of millions of records updated per day. We turned to Yahoo’s Hadoop stack to find a way to build this at the massive scale we needed.


Figure 1. Magic View in action

Motivation: the Magic View

Flickr’s Magic View takes the hassle out of organizing your photos by applying our computer-vision technology to automatically recognize objects or styles in your photos and present them to you in the Camera Roll’s scrolling view. This all happens in real time: as soon as a photo is uploaded, it is categorized and placed into the Magic View.

Aggregating computer vision tags

When a photo is uploaded, it is processed by a computer vision pipeline to generate a set of computer vision tags, which are text labels of the contents of the image. We already had an existing architecture for stream computation of tags on upload, but to implement the Magic View, we needed to maintain per-user reverse indexes and some aggregations of the tags. We also needed to keep all the data consistent: if a photo was added, removed or updated, these indexes and aggregations would have to be updated to reflect it. Finally, we needed to initialize the system with tags for 12 billion photos and videos and run periodic backfills (every time we improved our computer vision algorithms, and to cover cases where the stream compute missed images).
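As a rough illustration (not Flickr’s actual code), the reverse-index bookkeeping described above looks something like this, with a toy in-memory store standing in for the real database:

```python
from collections import defaultdict

# Hypothetical in-memory sketch of a per-user reverse index:
# user_id -> tag -> set of photo ids.
index = defaultdict(lambda: defaultdict(set))

def add_photo(user_id, photo_id, tags):
    """Index a newly uploaded photo under each of its vision tags."""
    for tag in tags:
        index[user_id][tag].add(photo_id)

def remove_photo(user_id, photo_id, tags):
    """Keep the index consistent when a photo is deleted."""
    for tag in tags:
        index[user_id][tag].discard(photo_id)
        if not index[user_id][tag]:
            del index[user_id][tag]

add_photo("alice", 101, ["cat", "outdoor"])
add_photo("alice", 102, ["cat"])
remove_photo("alice", 101, ["cat", "outdoor"])
```

The hard part at Flickr’s scale is not this bookkeeping itself but doing it consistently for billions of photos while new uploads stream in, which is what the rest of the post is about.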

The Problem

We initially computed a snapshot of the Magic View indexes and aggregations using map-reduce (via Apache Oozie and Apache Pig), and we were happy with the quick turnaround time (about 7 hours). We considered updating Magic View as a daily batch job, but soon realized this would not give our users the responsive, “live” experience we wanted. So, we built a streaming data layer using Apache Storm and were soon able to update the categories in Magic View in real-time.

The next time we needed to run a backfill, we explored using this streaming layer to load the data. Unfortunately, the overhead of the read-modify-write process was simply too much for a load of this size — after kicking off the process we estimated it would take 28 days this way — much longer than the seven hours we had achieved with a bulk load.

Twenty-eight days was a non-starter – we realized we needed a way to update our bulk aggregations independently of the real-time data streaming in. Solving this problem is how we arrived at our revision of the Lambda Architecture. Before digging into the solution, let’s do a quick review of the Lambda Architecture. If you’re already familiar with it, you can skip the next section.

The Lambda Architecture

We’ll start with Nathan Marz’s book ‘Big Data’, which proposes the database concept of the ‘Lambda Architecture.’ In his analysis, he states that a database query can be represented as a function – Query – which operates on all the data:

result = Query(all data)

In the Lambda Architecture, a traditional database is replaced with both a real-time and a bulk database. The query function then becomes a “combiner” function over independent queries to each database:

result = Combiner(Query(real time data) + Query(bulk data))

An example of a typical Lambda Architecture is shown in figure 2. It is powered by an append-only queue for its system of record, which is fed by a real time stream of events. Periodically, all the data in the queue is fed into a bulk computation which pre-processes the data to optimize it for queries, and stores these aggregations in a bulk compute database. The real time event stream drives a stream computer, which processes the incoming events into real time aggregations. A query then goes via a query combiner, which queries both the bulk and real time databases, computes the combination, and stores the result.
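As a toy sketch of the pattern (stand-in query functions, with per-tag photo counts as the aggregation – names and data here are illustrative, not Flickr’s):

```python
def query_bulk(key):
    # stand-in for a query against the bulk (batch-computed) database
    return {"cat": 10}

def query_realtime(key):
    # stand-in for a query against the real-time database, which holds
    # only the events that arrived since the last bulk computation
    return {"cat": 2, "dog": 1}

def combiner(key):
    """result = Combiner(Query(real time data) + Query(bulk data)).
    Here the combination is a simple sum of per-tag counts."""
    combined = dict(query_bulk(key))
    for tag, count in query_realtime(key).items():
        combined[tag] = combined.get(tag, 0) + count
    return combined
```

The combiner is where the complexity of this architecture concentrates, a point the criticism below returns to.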


Figure 2. Typical Lambda Architecture

While relatively new, the Lambda Architecture has enjoyed popularity, and a number of concrete implementations have been built. Some significant examples are the distributed analytics platform Druid, Twitter’s Summingbird, and FiloDB. These implementations conveniently abstract away the databases behind the query combiner.

A significant advantage with this style of architecture is robustness and fault-tolerance via eventual consistency. If a piece of data is skipped in the real time compute there is a guarantee that it will eventually appear in the bulk compute database.

Criticism of the Lambda Architecture has centred around the complicated nature of the combiner. The combiner incurs a developer and systems cost from the need to maintain two different databases. It can be challenging to make sure both systems give the same result. Merging the two queries can become complicated, and finally, more points of failure may be introduced.

The “Ah-ha” Moment

Back to the problem. The data access layer we used for streaming compute uses the atomic read-modify-write pattern to ensure we write consistent data, one record at a time, to Apache HBase (a BigTable-style, non-relational database). Since this pattern was so much slower in the backfill case, we needed to figure out how to get both consistent updates for streaming and fast loads of the full dataset. Since our bulk data was static, we realized that if we relaxed the consistency constraint we could just run a fast, streaming, write-only load of the bulk data, bringing the load time back down to hours instead of days.

But how could we get around the consistency requirements? We didn’t want a bulk load to clobber data being written from the real time compute process. The insight was that we could just write bulk and streaming data to different column families in the same HBase row. So we added the concept of real time columns and bulk columns in a single row. Basically, bulk loads write to one set of columns and real time writes go to a different set of columns. Since HBase columns are sparse and data is updated relatively slowly we don’t pay much in storage or IO.
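A minimal sketch of that row layout, with a plain dict standing in for an HBase row (the family and qualifier names are illustrative, not Flickr’s actual schema):

```python
# One "row" with two column families, "bulk" and "rt"; keys are
# (family, qualifier) pairs, mirroring how HBase addresses cells.
row = {}

def bulk_load(row, values):
    # Write-only bulk load: blind writes into the bulk family with no
    # read-modify-write, so the backfill can stream at full speed.
    for qualifier, value in values.items():
        row[("bulk", qualifier)] = value

def realtime_write(row, qualifier, value):
    # Streaming writes land in a separate family, so a concurrent
    # bulk load can never clobber them.
    row[("rt", qualifier)] = value

bulk_load(row, {"tag:cat": 10})
realtime_write(row, "tag:cat", 12)
```

Because the two write paths never touch each other’s columns, neither needs to coordinate with the other, which is the whole trick.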

We could now simplify the equation back to:

result = Combiner(Query(data))

The two sets of columns are managed separately by the real time and bulk subsystems. At query time, we perform a single fetch using the HBase API to get both the bulk and real time data. A separate combiner process assembles the final result.

Implementation


Figure 3. Magic View Architecture

Figure 3 shows an overview of the system and our enhanced Lambda architecture. For the purposes of this discussion, a convenient abstraction is to consider that each row in the HBase table represents the current state of a given photo. The combiner stage is abstracted into a single Java process, which collects data from HBase, runs transformations on it, and sends the results to a Redis cache used by the serving layer for the site.

Consistency on read in HBase — the combiner

We have two sets of columns for each row in HBase: bulk and real time. The combiner determines the final value for each attribute at read time. Where data exists for real time but not for bulk (or vice versa), there is only one value to choose. Where both exist, we always choose the real time value. This keeps the combiner very simple and fast.
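The merge rule fits in a few lines (column names illustrative; this is the shape of the logic, not the production Java):

```python
def combine(bulk_cols, rt_cols):
    """Resolve the final value for each attribute at read time:
    take the real-time value when present, otherwise fall back
    to the bulk value."""
    result = dict(bulk_cols)
    result.update(rt_cols)  # real time always wins on conflict
    return result
```

For example, `combine({"tag:cat": 10, "tag:dog": 3}, {"tag:cat": 12})` keeps the real-time count for `tag:cat` and the bulk count for `tag:dog`.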

There is a trick though – whenever we do a backfill, we may need to repair the row since the backfill data may be newer than any real time data that is already present. It turns out this slows down the backfill from seven hours to about 14 — still far faster than loading with read-modify-write.

Production throughput

At scale, this architecture has been able to keep up very comfortably with production load. We can simultaneously run backfills to HBase and serve user information at the same time without impacting latency or the user experience.

User experience

An important measure for how the system works is how the viewer perceives it. The slowest part of the system is paging data from HBase into the serving cache; median time for above-the-fold latency – i.e. enough data is available to render the page – is around 10ms.

Future directions

Our experience has been very positive so far with Magic View and we’re looking at how we might enable users to browse their photos in other dimensions (location or color for example). Early tests have shown that building an OLAP or data cube in this architecture is certainly possible but it’s less clear that it will scale well.

Contributors: Peter Welch, Bhautik Joshi, Hugo Haas, Srinivasan Singanallur, Ayan Ray, Pierre Garrigues, Ben Firestone, Sai Madhavan, Tim Miller

Thanks to Nathan Marz for reviewing this post.

Flickr September 2014

Like this post? Have a love of online photography? Want to work with us? Flickr is hiring mobile, back-end and front-end engineers, in our San Francisco office. Find out more at flickr.com/jobs.

Kitten Tuesday, ASCII Edition

Flickr Asciified - Nihilogic

Jason over at Nihilogic has a post up called Getting your ASCII on with Flickr

“You simply enter the search query of your choice and click the “Asciify” button. A request is then sent off to the Flickr server and any photos that match the query are returned. The image data is analyzed and turned into ASCII characters.”

Obviously I made a Kitten and you too can get flickrasciified over here.

Original Photo by pontman.

Cal Henderson Tuesday

We’re taking a break from Kittens this week. Mainly because I’m on Paternity Leave (see Kitten Tuesday for details) and therefore don’t have all week at work to select a suitable kitten, and no-one else on the engineering team is kitten qualified enough.

So instead I give you our very own Cal Henderson (the fellow who has kept Flickr from crashing all these years) presenting at djangocon, on the subject of “Why I Hate Django”.

We don’t use Django here at Flickr (because Cal hates it of course) but we do do a lot of scaling, and not only does Cal give a bit of insight into scaling and how things work here at Flickr, but he also talks about kittens a bit, which is nice.

So if you have an hour to spare (and frankly if you work with the internets you probably do) this is worth a watch…

[Link for RSS readers: http://www.youtube.com/watch?v=i6Fr65PFqfk]

Kitten Tuesday – Office Hygiene Edition

Our very own frontend engineering, scrumjaxing Scott (tagged in our backend system as: "dj, flickr, javascript, super star") sez …

And for my next trick, a keyboard in the dishwasher!

"Correcting many months of breakfast bagel poppyseeds, crumbs and the occasional coffee drip; these things were sustainable events, but a beer spill was the thing that finally put it out of commission."

Setting aside the whole how-do-you-get-beer-spilt-into-your-office-keyboard question*, Scott, if you’re going to clean your keyboard perhaps we should get one of these for the office instead …

Keyboard cleaning

… for a far more thorough solution?

Photos by .schill and Mrs eNil

* Answer, because that’s how we roll at flickrhq.

Who’s On First?

Untitled World View #1215585993

Normally we don’t talk about upcoming features but in recent weeks we’ve been hard-pressed to keep a lid on our renewed excitement for kittens. We’re betting 2009 will be a big year for kittens and it is why Dan and Heather are taking so much time to hash out the details for the new flickr.kittens API framework which we’re confident will re-embiggen-ize the how, the what and the why of photo-sharing!

To pass the time, until then, I’m going to talk about some of the geo-related API methods that have been released in the last few months, but perhaps not properly explained in detail and introduce a new minty-fresh method which hasn’t been discussed at all!

Cake for breakfast

First, the new stuff.

Places for a user

We’ve added a new authenticated method to the flickr.places namespace called flickr.places.placesForUser which returns the top 100 unique places, scoped by place type, where a user has geotagged photos. For example, I’ve geotagged photos in the following countries:

# ?method=flickr.places.placesForUser&place_type=country

<places total="7">
	<place place_id="4KO02SibApitvSBieQ" woeid="23424977"
		latitude="48.890" longitude="-116.982"
		place_url="/United+States" place_type="country"
		photo_count="1264">United States</place>
	<place place_id="EESRy8qbApgaeIkbsA" woeid="23424775"
		latitude="62.358" longitude="-96.582"
		place_url="/Canada" place_type="country"
		photo_count="307">Canada</place>

	<place place_id="3s63vaibApjQipWazQ" woeid="23424950"
		latitude="39.895" longitude="-2.988"
		place_url="/Spain" place_type="country"
		photo_count="67">Spain</place>
	<place place_id="6immEPubAphfvM5R0g" woeid="23424819"
		latitude="46.712" longitude="1.718"
		place_url="/France" place_type="country"
		photo_count="60">France</place>
	<place place_id="DevLebebApj4RVbtaQ" woeid="23424975"
		latitude="54.313" longitude="-2.232"
		place_url="/United+Kingdom" place_type="country"
		photo_count="34">United Kingdom</place>
	<place place_id="mSCQNWWbAphdLH6WDQ" woeid="23424812"
		latitude="64.950" longitude="26.064"
		place_url="/Finland" place_type="country"
		photo_count="24">Finland</place>

	<place place_id="mpa01jWbAphICsyCsA" woeid="23424853"
		latitude="42.502" longitude="12.573"
		place_url="/Italy" place_type="country"
		photo_count="8">Italy</place>
</places>

The response format is (almost) like all the other places methods, which I guess makes it a standard places response, though we haven’t gotten around to standardizing it like we have with photos. Places responses will always contain a “place ID” and a “WOE ID”, a “latitude” and a “longitude”, a “place type” and a “place URL” attribute. They usually contain an “accuracy” attribute, but it doesn’t make any sense in a list of places for a user since the photos, clustered by place type, may have been geotagged at multiple zoom levels. In this example, we’ve also added a “photo count” attribute since that’s an interesting bit of information.

The list of place types by which to scope a query is limited to a subset of the place types in the Flickr location hierarchy, specifically: neighbourhoods, localities (cities or towns), regions (states) and countries. While place_type is a required argument for the method, there are two other optional parameters you can use to filter your results.

Places for a user (and a place)

The first is woe_id and ensures that places of a given type also have a relationship with that WOE ID. For example, these are all the localities for my geotagged photos taken in Canada (WOE ID 23424775):

# ?method=flickr.places.placesForUser&place_type=locality&woe_id=23424775

<places total="5">
	<place place_id="4hLQygSaBJ92" woeid="3534"
		latitude="45.512" longitude="-73.554"
		place_url="/Canada/Quebec/Montreal" place_type="locality"
		photo_count="221">Montreal, Quebec</place>
	<place place_id="63v7zaqQCZxX" woeid="9807"
		latitude="49.260" longitude="-123.113"
		place_url="/Canada/British+Columbia/Vancouver" place_type="locality"
		photo_count="59">Vancouver, British Columbia</place>
	<place place_id="zrCws.mQCZj_" woeid="9848"
		latitude="48.428" longitude="-123.364"
		place_url="/Canada/British+Columbia/Victoria" place_type="locality"
		photo_count="9">Victoria, British Columbia</place>

	<place place_id="WzwcDQCdAJsL" woeid="4177"
		latitude="44.646" longitude="-63.573"
		place_url="/Canada/Nova+Scotia/Halifax" place_type="locality"
		photo_count="3">Halifax, Nova Scotia</place>
	<place place_id="XlQb2xedAJvr" woeid="4176"
		latitude="44.673" longitude="-63.575"
		place_url="/Canada/Nova+Scotia/Dartmouth" place_type="locality"
		photo_count="1">Dartmouth, Nova Scotia</place>
</places>

(Most) places for a user (and a place)

The second optional parameter is threshold, which requires that a place have a minimum number of photos associated with it in order to be included in the result set. Any place that falls below the threshold will be rolled up into its parent location. For example, if we search for localities in Canada with a minimum threshold of 5 photos per location, the towns of Halifax and Dartmouth are rolled up into the province of Nova Scotia:

# ?method=flickr.places.placesForUser&place_type=locality&woe_id=23424775&threshold=5

<places total="4">
	<place place_id="4hLQygSaBJ92" woeid="3534"
		latitude="45.512" longitude="-73.554"
		place_url="/Canada/Quebec/Montreal" place_type="locality"
		photo_count="221">Montreal, Quebec</place>

	<place place_id="63v7zaqQCZxX" woeid="9807"
		latitude="49.260" longitude="-123.113"
		place_url="/Canada/British+Columbia/Vancouver" place_type="locality"
		photo_count="59">Vancouver, British Columbia</place>
	<place place_id="zrCws.mQCZj_" woeid="9848"
		latitude="48.428" longitude="-123.364"
		place_url="/Canada/British+Columbia/Victoria" place_type="locality"
		photo_count="9">Victoria, British Columbia</place>
	<place place_id="QpsBIhybAphCEFAm" woeid="2344921"
		latitude="44.727" longitude="-63.587"
		place_url="/Canada/Nova+Scotia" place_type="region"
		photo_count="4">Nova Scotia, CA</place>
</places>

Note that we only roll up a single level, so if, like Halifax and Dartmouth, a locality gets rolled up into its parent region, that region may still have fewer photos associated with it than the threshold you passed with the method call. If you think this is crazy talk and/or have some other use case we haven’t considered with this model, tell us why and we can revisit the decision.
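The single-level roll-up behaviour can be sketched as follows (illustrative Python, not the API’s implementation; each place is a (name, parent_region, photo_count) tuple):

```python
def roll_up(places, threshold):
    """Localities below the threshold are folded into their parent
    region. Parents are NOT re-checked against the threshold, which
    is why a rolled-up region can itself still fall below it."""
    kept, parents = [], {}
    for name, parent, count in places:
        if count >= threshold:
            kept.append((name, count))
        else:
            parents[parent] = parents.get(parent, 0) + count
    return kept + sorted(parents.items())
```

With the Canada example above, `roll_up([("Montreal", "Quebec", 221), ("Halifax", "Nova Scotia", 3), ("Dartmouth", "Nova Scotia", 1)], 5)` keeps Montreal and emits Nova Scotia with a count of 4 – below the threshold, just as in the XML response.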

Untitled Relationship #1219894402

Finally, as mentioned above, the flickr.places.placesForUser method requires that you include an auth token with minimum read permissions. As always, please take extra care to respect people’s sensitivities when it comes to location data and, above all, don’t be creepy.

Meanwhile, back at the Ranch

About a week ago, I was asked whether it was possible to use the Flickr API to get a feed of geotagged photos (for a particular place/radius) sorted by interestingness and filtered by CC license, to which I quickly replied: WOE IDs, the flickr.places APIs and the “radius” parameter in the flickr.photos.search method should get you what you need! (Also the support for API responses as syndication feeds, if you’re being finicky about the question.)

Which got me thinking that while we’ve told people about WOE IDs and the Places API we haven’t really made a lot of noise about the addition of radial queries and the has_geo flag to the flickr.photos.search method. We’ve mentioned it in passing, here and there, but never really tied it all together. So, let’s start with WOE IDs:

WOE (short for Where On Earth) IDs are the unique identifiers in the giant database of places that we (and FireEagle and the rest of Yahoo!) use to keep track of where stuff is. They are also available to you and the rest of the Internets via the public GeoPlanet API. In Flickr, every geotagged photo has up to six associated WOE IDs representing a neighbourhood, locality, county (if present), region, country and continent.

Geocoding

Using the example above, you could start with an API call to the flickr.places.find method, which is like an extra-magic geocoder: it not only resolves place names to latitude and longitude coordinates but also returns the WOE ID for the place that contains them. Searching for “San Francisco CA” returns WOE ID 2487956, which you can use to call the flickr.photos.search method, asking for all the photos geotagged in the city of San Francisco sorted by interestingness with a Creative Commons Attribution license. Something like this:

# ?method=flickr.places.find&query=San+Francisco+CA

<places query="San Francisco CA" total="1">
	<place place_id="kH8dLOubBZRvX_YZ" woeid="2487956" latitude="37.779"
       	longitude="-122.420" place_url="/United+States/California/San+Francisco"
       	place_type="locality">San Francisco, California, United States</place>
</places>

# ?method=flickr.photos.search&license=4&sort=interestingness-desc 
# 	&woe_id=2487956&extras=tags,geo,license


<photos page="1" pages="289" perpage="100" total="28809">
	<photo id="145874931" owner="37996593020@N01" secret="b695138626" server="52"
       	farm="1" title="bridge" ispublic="1" isfriend="0" isfamily="0"
       	tags="sanfrancisco bridge water night reflections geotagged lights
       	baybridge geolat3779274 geolon12239096 sfchronicle96hours
       	sanfranciscochronicle96hours" latitude="37.79274" longitude="-122.39096"
       	accuracy="16" license="4"
		place_id="kH8dLOubBZRvX_YZ" woeid="2487956"/>

	<!-- and so on -->
</photos>
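If you were scripting this two-step chain yourself, building the REST calls might look something like the following sketch (the api_key value is a placeholder; a real call needs your own key, and the WOE ID would come from parsing the first response rather than being hard-coded):

```python
import urllib.parse

API = "https://api.flickr.com/services/rest/"

def request_url(method, **params):
    """Build a Flickr REST call URL; api_key is a placeholder here."""
    params = dict(params, method=method, api_key="YOUR_API_KEY")
    return API + "?" + urllib.parse.urlencode(sorted(params.items()))

# step 1: resolve a place name to a WOE ID
find_url = request_url("flickr.places.find", query="San Francisco CA")

# step 2: with the WOE ID from the response (2487956 above), search
# for CC-BY photos in that place, sorted by interestingness
search_url = request_url("flickr.photos.search", woe_id="2487956",
                         license="4", sort="interestingness-desc",
                         extras="tags,geo,license")
```
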

Reverse geocoding

You can also look up the WOE ID for any set of latitude and longitude coordinates. Imagine that you are standing in front of the infamous Wall of Rant in San Francisco’s Mission district and you’d like to see photos for the rest of the neighbourhood.


If you know your geo coordinates you can call the flickr.places.findByLatLon method to reverse geocode a point to its nearest WOE ID which can then be used to call the trusty flickr.photos.search method. Like this (without the photos.search part since it’s basically the same as above):

# ?method=flickr.places.findByLatLon&lat=37.752969&lon=-122.420844

<places latitude="37.752969" longitude="-122.420844" accuracy="16" total="1">
	<place place_id="C.JdRBObBZkMAKSJ" woeid="2452334"
	    latitude="37.759" longitude="-122.418"
	    place_url="/United+States/California/San+Francisco/Mission"
	    place_type="neighbourhood"/>
</places>

Nearby

But what if you just want to see photos near a point? Radial queries, recently added to the flickr.photos.search method, allow you to pass lat and lon parameters and ask Flickr to “show me all the photos within an (n) km radius of a point”. You have always been able to do something like this using the bbox parameter, but radial queries differ in two ways:

  1. They save you from having to calculate a bounding box, which is, you know, boring.
  2. Results are sorted by distance from the center point (you can override this by setting the sort parameter). Trying to do the same with a bounding box query would mean fetching all the results first and then sorting them, which is both expensive and boring, and breaks the pagination model.

Radial queries are not meant for pulling all the photos at the country or even state level and as such the maximum allowable radius is 32 kilometers (or 20 miles). The default value is 5 and you can toggle between metric and imperial by assigning the radius_units parameter a value of “km” or “mi” (the default is “km”).

# ?method=flickr.photos.search&lat=37.752969&lon=-122.420844 
# 	&radius=1&extras=geo,tags&min_taken_date=2008-09-01+00%3A00%3A00


<photos page="1" pages="1" perpage="100" total="9">
	<photo id="2820548158" owner="29072902@N00" secret="b2fc694880" server="3288"
		farm="4" title="20080901130811" ispublic="1" isfriend="0" isfamily="0"
		latitude="37.751166" longitude="-122.418833" accuracy="16"
		place_id="C.JdRBObBZkMAKSJ" woeid="2452334"
		tags="moblog shinobu"/>

	<!-- and so on -->
</photos>

See that min_taken_date parameter? Like all geo-related query hooks in the flickr.photos.search method you need to pass some sort of limiting agent: a tag, a user ID, a date range, etc. If you insist on passing a pure geo query we will automagically assign a limiting agent of photos taken within the last 12 hours.

(I can) has geo

Finally (no, really) if you just want to keep things simple and use a tag search, you can also call the flickr.photos.search method with the has_geo argument, which will scope your query to only those photos which have been geotagged. For example, searching for all geotagged photos of kittens taken in Tokyo:

# ?method=flickr.photos.search&tags=tokyo,kitten&tag_mode=all 
# 	&has_geo=1&extras=tags,geo

<photos page="1" pages="1" perpage="100" total="60">
	<photo id="2619847035" owner="27921677@N00" secret="0979aed596" server="3011"
       		farm="4" title="Kittens in Akiba" ispublic="1" isfriend="0" isfamily="0"
       		tags="japan cat tokyo kitten crowd akihabara unexpected"
		latitude="35.698263" longitude="139.771977" accuracy="16"
		place_id="aod14iaYAJ1rDE.R" woeid="1118370"/>

	<!-- and so on -->

</photos>

Maps are purrrr-ty

Which is a nice segue into telling you that on Tuesday we turned on the map-love for the city of Tokyo, in addition to Beijing and Black Rock City (aka Burning Man), using the Creative Commons licensed tiles produced by the good people at OpenStreetMap. We’re pretty excited about this, but rather than just showing another before-and-after screenshot we decided that the best way to showcase the new tiles was with… well, kittens of course!


Sophie’s Choice by tenaciousme

Enjoy!

That’s a lot to digest in one go but we promise there won’t be a pop quiz on Monday. Hopefully there’s something useful for you in the twisty maze of possibilities, now or in an oh yeah, what about… moment in the future, and maybe even the seed for a brand new API application. Kittens!

Kitten Tuesday

Last Tuesday our "sister" site blog.flickr had "other ideas" about what Kitten Tuesday means. With a positive plethora of photos and videos they go totally to town with kittens, and why the heck not?

That’s fine if you’re into an excess of artistic bourgeoisie, I guess. Meanwhile, over here on the Dev blog, we’re all about simplicity, efficiency and optimization: Kitten Tuesday is just that, a kitten on Tuesday, otherwise we may as well have Furry Friday Fiesta.

So without further ado, a reader writes:

"We’re loving Kitten Tuesdays. Any chance you could consider JJ for a future appearance?"

Why of course, here’s JJ …

Photo by Chubby Bat