Efficient Measures of “Surprisingness” of Sequences

From Open Problems in Sublinear Algorithms

Suggested by Rina Panigrahy
Source Dortmund 2012
Short link https://sublinear.info/56

Consider a sequence of i.i.d. random bits $S\in\{0,1\}^n$.

Question: Find efficient measures of how surprising/unbelievable $S$ appears to be, i.e., a good heuristic for estimating how probable or improbable a given string is.

For example, if we see $0,0,0,\ldots$, we will not believe it is random (i.e., it is surprising).

One existing measure is the ($k$-th order) Shannon entropy $H_k$ ($H_0$ corresponds to the entropy of the empirical frequencies of 0s and 1s). However, $H_0$ fails to say that a string like $(0,0,\ldots,0,1,1,\ldots,1)$ is surprising: from the point of view of 0/1 densities it looks perfectly random.
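
For concreteness, here is a minimal Python sketch of these empirical entropy measures (it is not part of the original problem statement, and the function name empirical_entropy_k is introduced only for illustration). On $0^{n/2}1^{n/2}$ it reports a maximal $H_0$ but a near-zero $H_1$.

 from collections import Counter
 from math import log2
 
 def empirical_entropy_k(s: str, k: int) -> float:
     """Empirical k-th order entropy of a bit string s, in bits per symbol.
 
     k = 0: entropy of the empirical frequencies of 0s and 1s.
     k >= 1: entropy of the next bit conditioned on the previous k bits,
     estimated from (k+1)-gram counts.
     """
     n = len(s)
     if k == 0:
         p = s.count("1") / n
         return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)
     ctx = Counter(s[i:i + k] for i in range(n - k))      # context (k-gram) counts
     ext = Counter(s[i:i + k + 1] for i in range(n - k))  # (k+1)-gram counts
     h = 0.0
     for w, c in ext.items():
         p_next = c / ctx[w[:k]]            # empirical Pr[next bit | context]
         h -= (c / (n - k)) * log2(p_next)
     return h
 
 s = "0" * 500 + "1" * 500
 print(empirical_entropy_k(s, 0))  # 1.0 bit/symbol: looks maximally random to H_0
 print(empirical_entropy_k(s, 1))  # ~0.01 bits/symbol: H_1 does notice the structure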

The ideal solution would be to use Kolmogorov complexity, but it is hard to compute (in fact, uncomputable).
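
A common computable stand-in for Kolmogorov complexity is the length of the string under an off-the-shelf compressor. The sketch below uses zlib for this purpose; it is a standard heuristic offered only as an illustration, not a method from the problem statement, and the helper compressed_bits is hypothetical.

 import random
 import zlib
 
 def compressed_bits(s: str) -> int:
     """Bit length of a zlib-compressed encoding of the bit string s."""
     packed = int(s, 2).to_bytes((len(s) + 7) // 8, "big")  # pack bits into bytes
     return 8 * len(zlib.compress(packed, 9))
 
 n = 10_000
 print(compressed_bits("0" * n))                                         # a few hundred bits
 print(compressed_bits("0" * (n // 2) + "1" * (n // 2)))                 # also tiny
 print(compressed_bits("".join(random.choice("01") for _ in range(n))))  # roughly n bits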

A particular setting of strings to consider: suppose each bit is generated by an independent biased coin, but the bias of the coin changes (slowly?) over time. Is there a good compression scheme for such strings?
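
One natural baseline for this setting, sketched below, is an adaptive sliding-window code length: estimate the current bias from the previous $w$ bits and charge each bit $-\log_2$ of its estimated probability. The window size $w=256$, the add-1/2 (Krichevsky–Trofimov-style) estimator, and the sinusoidally drifting bias used for testing are illustrative assumptions, not part of the problem statement.

 import random
 from math import log2, pi, sin
 
 def sliding_window_code_length(s: str, w: int = 256) -> float:
     """Total code length in bits when each bit is charged -log2 of a
     smoothed estimate of Pr[bit = 1] computed from the previous
     min(i, w) bits (an add-1/2, Krichevsky-Trofimov-style estimate)."""
     total, ones = 0.0, 0
     for i, b in enumerate(s):
         m = min(i, w)                    # current window length
         p1 = (ones + 0.5) / (m + 1.0)    # estimated Pr[next bit = 1]
         total += -log2(p1 if b == "1" else 1.0 - p1)
         ones += (b == "1")
         if i >= w:                       # slide the window forward
             ones -= (s[i - w] == "1")
     return total
 
 # Bits from a coin whose bias drifts slowly over the sequence.
 n = 20_000
 bits = "".join(
     "1" if random.random() < 0.5 + 0.4 * sin(2 * pi * i / n) else "0"
     for i in range(n)
 )
 print(sliding_window_code_length(bits), "vs the trivial", n, "bits")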