| Suggested by | Paul Beame |
|---|---|
| Source | Kanpur 2009 |
| Short link | https://sublinear.info/26 |
The original MapReduce paper [DeanG-04] effectively describes two distributed models. The first only states that intermediate key/value pairs with the same key are grouped together and sent as batch jobs to workers. Section 4.2 then additionally guarantees that the batch jobs received by a single worker are sorted by key, and there are algorithms that rely on this extra feature of MapReduce. Are these two models equivalent? For decision problems in the complexity-theoretic setting, strong time-space trade-offs are known for sorting, but no comparable lower bounds are known for element distinctness.
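To make the distinction concrete, here is a minimal in-memory sketch (purely illustrative, not part of the problem statement and not the actual MapReduce implementation) of the two shuffle guarantees: the first model only groups values by key per worker, while the second additionally delivers each worker's batch jobs sorted by key.

```python
from collections import defaultdict

def shuffle_grouped(mapped_pairs, num_workers):
    """Model 1: pairs with the same key are combined into one batch job
    and assigned to a worker; no ordering among a worker's keys is promised."""
    buckets = [defaultdict(list) for _ in range(num_workers)]
    for key, value in mapped_pairs:
        buckets[hash(key) % num_workers][key].append(value)
    # Each worker sees its (key, values) batches in arbitrary order.
    return [list(bucket.items()) for bucket in buckets]

def shuffle_sorted(mapped_pairs, num_workers):
    """Model 2 (the Section 4.2 guarantee): each worker additionally
    receives its batch jobs sorted by key."""
    grouped = shuffle_grouped(mapped_pairs, num_workers)
    return [sorted(batch, key=lambda kv: kv[0]) for batch in grouped]

if __name__ == "__main__":
    pairs = [("b", 2), ("a", 1), ("c", 3), ("a", 4), ("b", 5)]
    print(shuffle_grouped(pairs, num_workers=2))  # grouped, key order unspecified
    print(shuffle_sorted(pairs, num_workers=2))   # grouped and sorted per worker
```

The question is whether any algorithm written against `shuffle_sorted` can be simulated, with comparable resources, by one that only assumes `shuffle_grouped`.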