Problem 54: Faster JL Dimensionality Reduction

From Open Problems in Sublinear Algorithms
Suggested by Jelani Nelson
Source Dortmund 2012
Short link https://sublinear.info/54

The standard Johnson-Lindenstrauss lemma states the following: for any $0<\epsilon<1/2$ and any $x_1,\ldots,x_n\in \R^d$, there exists $A\in \R^{k\times d}$ with $k=O(1/\epsilon^2\cdot \log n)$ such that for all $i,j$ we have $\|Ax_i-Ax_j\|_2=(1\pm \epsilon)\|x_i-x_j\|_2$.
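
For concreteness, here is a minimal numerical sketch of the lemma using a dense i.i.d. Gaussian matrix (one standard instantiation; the lemma only asserts that some $A$ exists, and the constant in the choice of $k$ below is illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, eps = 50, 2000, 0.25
    k = int(np.ceil(8 * np.log(n) / eps**2))      # k = O(eps^{-2} log n); the constant 8 is illustrative

    X = rng.standard_normal((n, d))               # rows are the points x_1, ..., x_n
    A = rng.standard_normal((k, d)) / np.sqrt(k)  # i.i.d. N(0, 1/k) entries
    Y = X @ A.T                                   # rows are A x_i

    # Check the (1 +/- eps) distortion over all pairs i < j.
    worst = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            ratio = np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
            worst = max(worst, abs(ratio - 1.0))
    print(f"worst relative distortion: {worst:.3f} (target: <= {eps})")

Computing $Ax$ with such a dense $A$ takes $O(kd)$ time; this is the baseline that the directions below improve on.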

The main question is to construct matrices $A$ that admit faster computation of $Ax$. Several directions have been pursued to obtain more efficient $A$:

  • Fast JL (FFT-based): the time to compute $Ax$ is of the form $O(d\log d + \mathrm{poly}(k))$, where $d\log d$ is usually the dominant term (see the sketch after this list).
  • Sparse JL: the time is of the form $O(\epsilon k\|x\|_0+k)$, where $\|x\|_0$ is the number of non-zero coordinates of $x$; this is fast when $x$ is sparse (also sketched below).
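
The following minimal sketches illustrate both regimes. They are simplifications, not the exact constructions of Ailon-Chazelle or Kane-Nelson, and the parameter choices at the end are placeholders:

    import numpy as np

    def make_fjlt(d, k, rng):
        # FFT-based map in the spirit of Ailon-Chazelle: random sign flips,
        # a unitary FFT (the O(d log d) step), then subsampling k coordinates.
        signs = rng.choice([-1.0, 1.0], size=d)
        idx = rng.choice(d, size=k, replace=False)
        def apply(x):
            y = np.fft.fft(x * signs) / np.sqrt(d)   # unitary normalization
            return np.sqrt(d / k) * y[idx]           # rescale so the norm is preserved in expectation
        return apply

    def make_sparse_jl(d, k, s, rng):
        # Sparse map in the spirit of Kane-Nelson: each coordinate is hashed
        # to s rows with random signs, fixed once so the map is linear.
        # Applying it costs O(s * ||x||_0), i.e. O(eps * k * ||x||_0) when s = eps * k.
        rows = np.array([rng.choice(k, size=s, replace=False) for _ in range(d)])
        signs = rng.choice([-1.0, 1.0], size=(d, s))
        def apply(x):
            y = np.zeros(k)
            for i in np.flatnonzero(x):              # only touches non-zero coordinates
                y[rows[i]] += signs[i] * x[i] / np.sqrt(s)
            return y
        return apply

    rng = np.random.default_rng(1)
    d, k, s = 4096, 256, 8                           # illustrative parameters
    x = np.zeros(d)
    x[rng.choice(d, size=10, replace=False)] = rng.standard_normal(10)
    f, g = make_fjlt(d, k, rng), make_sparse_jl(d, k, s, rng)
    print(np.linalg.norm(x), np.linalg.norm(f(x)), np.linalg.norm(g(x)))

Note that the FFT-based map pays $O(d\log d)$ even when $x$ has few non-zeros, while the sparse map pays a factor $s$ per non-zero coordinate; the open question below asks for the best of both.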

Question: Can one obtain a JL matrix $A$ such that $Ax$ can be computed in time $\tilde O(\|x\|_0+k)$?

One possible avenue is to consider a “random” $k\times k$ submatrix of the FFT matrix; it is not known whether this leads to the desired result.
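
To see why this is not immediate, note that applying $k$ sampled rows of the $d\times d$ DFT matrix to $x$ involves only the columns indexed by the support of $x$, i.e., a $k\times \|x\|_0$ submatrix (a $k\times k$ submatrix when $\|x\|_0\approx k$); evaluated naively this costs $O(k\cdot\|x\|_0)$ time, not $\tilde O(\|x\|_0+k)$. The sketch below is purely illustrative of the candidate construction; no JL guarantee is claimed for it, and whether such a guarantee holds is exactly the open question.

    import numpy as np

    def subsampled_dft(x, rows):
        # Evaluate the chosen DFT rows on a sparse x. Only the columns in
        # supp(x) matter, so this touches a len(rows) x ||x||_0 submatrix of
        # the unitary DFT matrix F[r, c] = exp(-2*pi*i*r*c/d) / sqrt(d),
        # at cost O(k * ||x||_0) -- not the hoped-for O~(||x||_0 + k).
        d = len(x)
        nz = np.flatnonzero(x)
        F = np.exp(-2j * np.pi * np.outer(rows, nz) / d) / np.sqrt(d)
        return F @ x[nz]

    rng = np.random.default_rng(2)
    d, k = 4096, 64
    rows = rng.choice(d, size=k, replace=False)
    x = np.zeros(d)
    x[rng.choice(d, size=20, replace=False)] = 1.0
    y = np.sqrt(d / k) * subsampled_dft(x, rows)   # rescale as in row-sampling schemes
    print(np.linalg.norm(y), np.linalg.norm(x))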