MongoDB is a real lifesaver when it comes to improving developer productivity in web applications, but that's only a small part of its power. To do the deep-down data mining, we need to learn to use Map/Reduce to massage our data. Please note, some of this functionality can be accomplished using Mongo's aggregation framework; I've intentionally avoided it, as aggregation has limitations in sharded environments, and I expect most of my Mongo apps will need to be sharded.
Since we just finished the 2012 All-Star Game here in Kansas City, a baseball statistics example seems appropriate.
Setting up your environment
You’ll need console access to a MongoDB database. To set up Mongo on your computer, see the Quick Start.
Loading some sample data
Let’s create some realistic baseball stats. I’ll start with the real roster for the Kansas City Royals. However, instead of using their real stats, we’ll generate some random numbers using JavaScript’s Math object. For example, we know that the best players in the league get around 200 hits and the worst players get none, so Math.floor(Math.random()*200) will give us a random integer between 0 and 199. We’ll make sure that the number of hits never exceeds the number of at-bats, and we’ll keep the number of home runs capped at 50 (rather generous for the Royals).
To add a single player, we can run the following JavaScript:
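Something along these lines works in the mongo shell (a sketch: the collection name `players` and the field names `name`, `ab`, `hits`, and `hr` are my choices here, not necessarily what the roster script uses):

```javascript
// Random at-bats in a plausible range for a regular starter.
var ab = 300 + Math.floor(Math.random() * 300);
// Hits: random, but never more than 200 and never more than AB.
var hits = Math.floor(Math.random() * Math.min(ab, 200));
// Home runs are a subset of hits, capped at 50.
var hr = Math.floor(Math.random() * Math.min(hits, 50));

db.players.insert({
    name: "Billy Butler",
    ab: ab,
    hits: hits,
    hr: hr
});
```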
Grab the script for the whole roster here, and run it in your mongo console.
Counting Home Runs
Confirm that you’ve got the data loaded. Your stats for Billy Butler will vary (my Billy Butler kind of sucks), but you should always have 43 players.
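A quick sanity check, again assuming the `players` collection from above:

```javascript
db.players.count();                          // should always be 43
db.players.find({ name: "Billy Butler" });   // your numbers will differ
```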
We now know how many home runs Billy Butler hit this season, but say we want to find the number of home runs the combined Royals roster hit this season.
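Here’s a minimal map/reduce sketch for that, assuming the schema above. Every player emits its HR count under one shared key, and reduce sums the values:

```javascript
var map = function () {
    // Every document contributes its HR count to a single shared key.
    emit("totalHR", this.hr);
};

var reduce = function (key, values) {
    // Sum all the emitted HR counts for the key.
    return Array.sum(values);
};

db.players.mapReduce(map, reduce, { out: { inline: 1 } });
```

With inline output, the result comes straight back to the shell instead of being written to a collection.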
A more complex example
Cool, huh? Let’s take a slightly more complicated case: we’d like to take all players with more than 250 AB and group them by batting average.
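One way to sketch it, again assuming the schema above. The batting average is computed and rounded to three decimal places inside map and used as the emit key, and the query option filters out low-AB players before map ever runs:

```javascript
var map = function () {
    // Compute the average in map so the work parallelizes across shards.
    var avg = (this.hits / this.ab).toFixed(3);
    emit(avg, { players: [this.name] });
};

var reduce = function (key, values) {
    // Merge the player lists emitted for the same batting average.
    var merged = { players: [] };
    values.forEach(function (v) {
        merged.players = merged.players.concat(v.players);
    });
    return merged;
};

db.players.mapReduce(map, reduce, {
    query: { ab: { $gt: 250 } },  // only players with more than 250 AB
    out: { inline: 1 }
});
```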
These examples are pretty simple, but we can still take away a few lessons:
- Do the heavy lifting in the map function. Map tasks are what get executed in parallel across your shards. For example, by pushing the batting average calculation and the categorization into the map function, we ensure a fast runtime across a large dataset.
- Make use of the query arg for the map/reduce command. By filtering out the undesirable data up front, we save mapping operations and reduce the load on the database.
Credits:
Thanks to several bloggers who helped me understand this concept.
Full source code is available on GitHub.