Open Problems:26
Latest revision as of 01:50, 7 March 2013
Suggested by: Paul Beame
The original MapReduce paper [DeanG-04] describes two distributed models. It first specifies only that intermediate key/value pairs with the same key are grouped together and sent as batch jobs to workers. Then, in Section 4.2, it additionally guarantees that the batch jobs received by a single worker are sorted by their key values. Some algorithms rely on this additional sorting guarantee. Are these two models equivalent? For decision problems in the complexity world, strong time-space trade-offs are known for sorting, but no similar lower bounds are known for element distinctness.
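The distinction between the two models can be illustrated with a toy single-machine simulator (a sketch only; the `shuffle` function and the word-count job are illustrative choices, not taken from [DeanG-04]). The only difference between the two models is whether the per-worker batches arrive in key-sorted order:

```python
from collections import defaultdict

def map_phase(mapper, records):
    """Apply the mapper to every input record, collecting (key, value) pairs."""
    pairs = []
    for record in records:
        pairs.extend(mapper(record))
    return pairs

def shuffle(pairs, sort_keys=False):
    """Group intermediate pairs by key.
    sort_keys=False: model 1 -- same-key values are merely batched together.
    sort_keys=True:  model 2 -- batches reach a worker sorted by key (Sec. 4.2).
    """
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    items = list(groups.items())
    return sorted(items) if sort_keys else items

def reduce_phase(reducer, grouped):
    """Apply the reducer to each (key, batch-of-values) group."""
    return [reducer(key, values) for key, values in grouped]

# Illustrative word-count job.
def mapper(line):
    return [(word, 1) for word in line.split()]

def reducer(key, values):
    return (key, sum(values))

lines = ["a b a", "b c"]
out = reduce_phase(reducer, shuffle(map_phase(mapper, lines), sort_keys=True))
# With sort_keys=True the groups arrive in key order: [('a', 2), ('b', 2), ('c', 1)]
print(out)
```

An algorithm that inspects only each key's batch in isolation (like word count) behaves identically under both settings; the open question concerns algorithms that exploit the sorted arrival order, and whether that extra guarantee adds real power.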