Open Problems:33
| Suggested by | Ely Porat |
|---|---|
| Source | Kanpur 2009 |
| Short link | https://sublinear.info/33 |
Given a set $S\subset [n]$ of size at most $k$, we want to identify $S$ by the following two-stage process:
- We choose a collection of subsets $T_1, \ldots, T_m \subset [n]$. For each $T_i$ we learn whether or not $T_i\cap S=\emptyset$.
- Based on the outcomes of the first $m$ tests, we may choose $j_1, \ldots, j_{O(k)}\in [n]$. For each $j_i$ we learn whether or not $j_i\in S$.
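As a concrete illustration of the two-stage process (not part of the original problem statement), the following Python sketch simulates both stages for an arbitrary first-stage family `tests` and a rule that turns the $m$ disjointness outcomes into the positions queried in stage 2. The function and variable names are ours, and the random test family is only a toy choice.

```python
import random

def surviving_candidates(outcomes, tests, n):
    """A natural stage-2 rule: a position is eliminated if it lies in some
    first-stage test that reported "disjoint from S"; every other position
    is queried individually.  Elements of S are never eliminated."""
    eliminated = set()
    for T, intersects in zip(tests, outcomes):
        if not intersects:
            eliminated |= T
    return [j for j in range(n) if j not in eliminated]

def two_stage_identify(n, S, tests, decode=surviving_candidates):
    """Simulate the two-stage identification of a hidden set S with |S| <= k."""
    # Stage 1: for each T_i, learn whether T_i intersects S.
    outcomes = [not S.isdisjoint(T) for T in tests]
    # Stage 2: query the chosen positions individually.
    queried = decode(outcomes, tests, n)
    recovered = {j for j in queried if j in S}
    return recovered, len(tests), len(queried)

# Toy run with random first-stage tests of density ~1/k.  This sketch does
# not enforce the O(k) budget on stage-2 queries; controlling how many
# positions survive is exactly what a good test family must achieve.
n, k = 1000, 5
S = set(random.sample(range(n), k))
tests = [{j for j in range(n) if random.random() < 1 / k} for _ in range(150)]
recovered, m, q = two_stage_identify(n, S, tests)
print(recovered == S, m, q)
```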
The goal is to minimize $m$, the number of tests performed in the first stage. Without any further restrictions, it has been shown that $m=O(k\log(n/k))$ suffices [BonisGV-05]. However, in various pattern matching applications each $T_i$ is required to be an arithmetic progression, e.g., $T_i=\{2,8,14,20, \ldots \}$. In this case, $m=O(k\log^2 n)$ suffices. Is it possible to decrease this to $m=O(k\log n)$?
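To make the arithmetic-progression constraint concrete, one natural family consists of the residue classes modulo small primes, $T_{p,r}=\{x\in[n] : x\equiv r \pmod p\}$; each such class is an arithmetic progression with common difference $p$. The sketch below builds this family and applies the same elimination rule as above. It is purely illustrative: the number of primes chosen here is not tuned to match the $O(k\log^2 n)$ bound from the problem statement.

```python
def small_primes(count):
    """First `count` primes by trial division (fine at sketch scale)."""
    primes, x = [], 2
    while len(primes) < count:
        if all(x % p for p in primes):
            primes.append(x)
        x += 1
    return primes

def progression_tests(n, primes):
    """T_{p,r} = { x in [n] : x = r (mod p) }: arithmetic progressions
    with common difference p, one test per residue class."""
    return [{x for x in range(n) if x % p == r}
            for p in primes for r in range(p)]

n, k = 10_000, 4
S = {3, 1534, 7311, 9999}
primes = small_primes(25)              # illustrative count, not optimized
tests = progression_tests(n, primes)   # m = sum of the chosen primes

# Stage 1 outcomes, then elimination: x survives only if, for every prime p,
# x collides with some member of S modulo p.  Members of S always survive.
outcomes = [not S.isdisjoint(T) for T in tests]
eliminated = set()
for T, intersects in zip(tests, outcomes):
    if not intersects:
        eliminated |= T
candidates = [x for x in range(n) if x not in eliminated]

# Stage 2 would query each candidate individually; the open question asks
# how small m can be while keeping the number of candidates at O(k).
print(len(tests), len(candidates), S.issubset(candidates))
```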