Open Problems:54
The main question is to construct $A$'s that admit faster computation time of $Ax$. There are several directions to try to obtain more efficient $A$:
* Fast JL (FFT-based). Here, the runtime is of the form $O(d\log d + \hbox{poly}(k))$ to compute $Ax$ ($d\log d$ is usually the most significant term).
* Sparse JL. Here, the runtime is of the form $O(\epsilon k\|x\|_0+k)$, where $\|x\|_0$ is the number of non-zero coordinates of $x$ (i.e., it works well for sparse vectors).
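To illustrate the sparse-JL cost model, here is a minimal numpy sketch in which each column of $A$ has $s$ nonzero entries with values $\pm 1/\sqrt{s}$, so applying $A$ touches only the nonzero coordinates of $x$. The function names and the particular column distribution are illustrative assumptions, not a specific published construction:

```python
import numpy as np

def make_sparse_jl(d, k, s, rng):
    """Illustrative sparse JL map: column j of A has s nonzeros,
    placed at rows[j] with signed values signs[j] = +-1/sqrt(s)."""
    rows = np.array([rng.choice(k, size=s, replace=False) for _ in range(d)])
    signs = rng.choice([-1.0, 1.0], size=(d, s)) / np.sqrt(s)
    return rows, signs

def apply_sparse_jl(rows, signs, x, k):
    """Compute Ax by iterating only over nonzero coordinates of x,
    so the cost scales with s * ||x||_0 rather than with d."""
    y = np.zeros(k)
    for j in np.flatnonzero(x):
        y[rows[j]] += signs[j] * x[j]
    return y
```

Since each column has unit norm by construction, a vector with a single nonzero coordinate has its norm preserved exactly; for general vectors, norm preservation holds only approximately and with high probability.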
'''Question''': Can one obtain a JL matrix $A$, such that one can compute $Ax$ in time $\tilde O(\|x\|_0+k)$?
One possible avenue would be to consider a "random" $k$ by $k$ submatrix of the FFT matrix. This may or may not lead to the desired result.
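A related and simpler variant of this idea is to keep $k$ randomly chosen rows of the full $d\times d$ DFT matrix (rather than a $k\times k$ submatrix); the following numpy sketch, with hypothetical function names, shows the scaling that makes squared norms correct in expectation. It is an illustration of the subsampling idea only, not the proposed construction itself:

```python
import numpy as np

def subsampled_fft(x, k, rng):
    """Compute the full FFT of x (O(d log d) time) and keep k randomly
    chosen coordinates. np.fft.fft is unnormalized (sqrt(d) times the
    unitary DFT), so the combined rescaling (1/sqrt(d)) * sqrt(d/k)
    = 1/sqrt(k) preserves squared norms in expectation."""
    d = len(x)
    rows = rng.choice(d, size=k, replace=False)
    return np.fft.fft(x)[rows] / np.sqrt(k)
```

Note that row subsampling alone is not norm-preserving for all vectors (e.g., a vector aligned with a single Fourier coefficient); this is why fast-JL constructions first apply random sign flips before the FFT.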