Problem 28: Randomness of Partially Random Streams
| Suggested by | Sudipto Guha |
|---|---|
| Source | Kanpur 2009 |
| Short link | https://sublinear.info/28 |
Many streaming algorithms are designed for worst-case inputs, and their analysis typically proceeds in two steps: first the algorithm is analyzed as if it used truly random hash functions, and then those are replaced by hash functions that can be described with a small number of truly random bits. In practice, the input stream is often the result of some random process. Mitzenmacher and Vadhan [MitzenmacherV-08] show that as long as the stream has sufficiently large entropy, hash functions from a restricted family are essentially as good as truly random hash functions. On a related note, Gabizon and Hassidim [GabizonH-10] show that algorithms for random-order streams need essentially no additional entropy beyond what can be extracted from the input itself.
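To make the two-step paradigm concrete, here is a minimal sketch of a distinct-elements estimator in the BJKST/KMV style; the estimator and its constants are illustrative choices, not taken from the problem statement. The ideal-hash analysis treats `h` as truly random, while the implementation draws `h` from a pairwise-independent family that needs only two random field elements.

```python
import random

# Pairwise-independent (2-universal) hash family h(x) = (a*x + b) mod p.
# Two random field elements, i.e., O(log p) truly random bits, stand in
# for a truly random function over the whole universe.
P = (1 << 61) - 1  # Mersenne prime, assumed larger than the universe size

def make_hash():
    a = random.randrange(1, P)
    b = random.randrange(P)
    return lambda x: (a * x + b) % P

def estimate_distinct(stream, t=64):
    """KMV-style estimate of the number of distinct elements: keep the t
    smallest hash values and return t * P / (t-th smallest). Under the
    ideal-hash analysis this is a (1 +/- O(1/sqrt(t)))-approximation, and
    pairwise independence already suffices for a comparable guarantee."""
    h = make_hash()
    # For brevity we hash everything up front; a real streaming version
    # would maintain a bounded heap of the t smallest values seen so far.
    smallest = sorted({h(x) for x in stream})[:t]
    if len(smallest) < t:  # fewer than t distinct hash values observed
        return len(smallest)
    return t * P // smallest[-1]

# Example: a stream with 1000 distinct values, each repeated five times.
print(estimate_distinct([i % 1000 for i in range(5000)]))
```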
In these two cases, the input itself can be seen as a source of randomness for the algorithm. How can one quantify the randomness of the stream in a natural way? For instance, Mitzenmacher and Vadhan impose a lower bound on the Rényi entropy of each element of the stream, conditioned on the previous elements. Are there measures that are likely to be useful in practice and that can actually be verified?
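For reference, here is a sketch of the Mitzenmacher–Vadhan condition in its standard block-source formulation (the notation is ours, not quoted from the problem statement). The order-2 Rényi entropy, also called collision entropy, and the per-element condition are:

```latex
% Order-2 Rényi (collision) entropy of a random variable X:
\[ H_2(X) = -\log_2 \sum_{x} \Pr[X = x]^2 \]

% Block-source condition: every stream element carries at least k bits of
% collision entropy, conditioned on any fixed prefix of the stream.
\[ H_2\left( X_i \mid X_1 = x_1, \dots, X_{i-1} = x_{i-1} \right) \ge k
   \quad \text{for all } i \text{ and all prefixes } x_1, \dots, x_{i-1}. \]
```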
Once a measure of randomness is fixed, how much randomness, according to this measure, is necessary to derandomize or simplify specific streaming algorithms?
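As a toy illustration of what verifying such a measure could look like, one might estimate the collision entropy of a stream empirically from observed frequencies. The function below is a hypothetical plug-in estimator, not a rigorous verification procedure, and it measures the marginal rather than the conditional per-element entropy.

```python
import random
from collections import Counter
from math import log2

def empirical_renyi2(sample):
    """Plug-in estimate of the order-2 Rényi (collision) entropy:
    H_2 = -log2(sum_x p(x)^2), with p(x) taken from the empirical
    frequencies in the sample."""
    counts = Counter(sample)
    n = len(sample)
    collision_prob = sum((c / n) ** 2 for c in counts.values())
    return -log2(collision_prob)

# Example: a near-uniform stream of bytes should show close to 8 bits
# of collision entropy per element.
print(empirical_renyi2([random.randrange(256) for _ in range(100_000)]))
```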