Faster JL Dimensionality Reduction

From Open Problems in Sublinear Algorithms

Suggested by: Jelani Nelson
Source: Dortmund 2012
Short link: https://sublinear.info/54

The standard Johnson-Lindenstrauss lemma states the following: for any $0<\epsilon<1/2$ and any points $x_1,\ldots,x_n\in \R^d$, there exists $A\in \R^{k\times d}$ with $k=O(1/\epsilon^2\cdot \log n)$ such that for all $i,j$ we have $\|Ax_i-Ax_j\|_2=(1\pm \epsilon)\|x_i-x_j\|_2$.
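As a concrete reference point, the lemma is achieved by a dense matrix with i.i.d. Gaussian entries. Below is a minimal NumPy sketch of that construction; the function name and the constant $8$ in the target dimension are illustrative choices, not part of the problem statement.

<pre>
import numpy as np

def jl_transform(X, eps, seed=0):
    """Project the n rows of X (points in R^d) down to k = O(eps^-2 log n)
    dimensions with a dense Gaussian matrix A; with high probability all
    pairwise distances are preserved up to a (1 +- eps) factor."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = int(np.ceil(8 * np.log(n) / eps**2))      # constant 8 is illustrative
    A = rng.standard_normal((k, d)) / np.sqrt(k)  # entries are N(0, 1/k)
    return X @ A.T                                # row i is A x_i; O(k d) per point

# Distances before and after projection agree up to roughly a (1 +- eps) factor.
X = np.random.default_rng(1).standard_normal((100, 10_000))
Y = jl_transform(X, eps=0.25)
print(np.linalg.norm(X[3] - X[7]), np.linalg.norm(Y[3] - Y[7]))
</pre>

The $O(kd)$ cost per point in this dense construction is exactly what the directions below try to beat.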

The main question is to construct matrices $A$ that admit faster computation of $Ax$. There are several directions for obtaining a more efficient $A$:

  • Fast JL (FFT-based): the runtime to compute $Ax$ is of the form $O(d\log d + \hbox{poly}(k))$, where $d\log d$ is usually the dominant term.
  • Sparse JL: the runtime is of the form $O(\epsilon k\|x\|_0+k)$, where $\|x\|_0$ is the number of non-zero coordinates of $x$; i.e., it works well for sparse vectors (a toy sketch of this construction follows the list).
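To make the sparse bound concrete, here is a minimal Python/NumPy sketch in the spirit of sparse JL constructions: each column of the implicit $k\times d$ matrix has $s$ random $\pm 1/\sqrt{s}$ entries, and taking $s=O(\epsilon k)$ recovers the $O(\epsilon k\|x\|_0+k)$ runtime quoted above. The function name and the per-column seeding scheme are illustrative assumptions.

<pre>
import numpy as np

def sparse_jl(x, k, s, seed=0):
    """Apply an implicit sparse k x d JL matrix to x, touching only the
    non-zero coordinates of x: O(s * nnz(x) + k) time overall."""
    y = np.zeros(k)
    for i in np.flatnonzero(x):
        # Column i's target rows and signs are derived from (seed, i), so
        # the same implicit matrix A is applied to every input vector.
        col = np.random.default_rng([seed, int(i)])
        rows = col.choice(k, size=s, replace=False)   # s distinct rows
        signs = col.choice([-1.0, 1.0], size=s)
        y[rows] += signs * x[i] / np.sqrt(s)          # +-1/sqrt(s) entries
    return y

# For a sparse x the cost scales with s * nnz(x) rather than with d,
# and the norm is preserved in expectation.
x = np.zeros(100_000); x[[5, 42, 77_777]] = [1.0, -2.0, 0.5]
print(np.linalg.norm(sparse_jl(x, k=256, s=16)), np.linalg.norm(x))
</pre>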

Question: Can one obtain a JL matrix $A$ such that one can compute $Ax$ in time $\tilde O(\|x\|_0+k)$?

One possible avenue is to consider a “random” $k$ by $k$ submatrix of the FFT matrix; this may or may not lead to the desired result. A naive baseline along these lines is sketched below.
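For intuition, one reading of this idea is to sample $k$ random rows of the $d\times d$ DFT matrix and evaluate them only on the support of $x$. Computed directly this costs $O(k\|x\|_0)$, not the hoped-for $\tilde O(\|x\|_0+k)$, so closing that gap is exactly where the FFT structure would have to be exploited. The sketch below (Python/NumPy; the function name and the row-sampling interpretation are assumptions of this illustration) shows only the baseline.

<pre>
import numpy as np

def subsampled_dft(x, k, seed=0):
    """Apply k random rows of the d x d DFT matrix to x, summing only
    over the support of x. Done naively, each sampled row costs
    O(nnz(x)), so the total is O(k * nnz(x)) -- the open question is
    whether structure can push this toward ~O(nnz(x) + k)."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    rows = rng.choice(d, size=k, replace=False)  # sampled frequencies
    nz = np.flatnonzero(x)
    # DFT entry (r, c) equals exp(-2*pi*1j*r*c/d); k x nnz(x) evaluations.
    return np.exp(-2j * np.pi * np.outer(rows, nz) / d) @ x[nz]
</pre>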