Open Problems:62

 
We can define a convex relaxation:  
 
\[ \max \operatorname{Tr}(WA) \quad \text{s.t.} \quad \operatorname{Tr}(W) = 1,\ \ W \ge 0,\ \ W \succeq 0, \]

where $W \ge 0$ denotes entrywise nonnegativity and $W \succeq 0$ positive semidefiniteness.
  
 
Suppose that $A$ is a random matrix; in particular, let the entries $A_{ij}$ be i.i.d. $N(0,1)$. Then empirical results show that the resulting optimal $W$ is a rank-1 matrix, which means that we recover the optimal $x$ exactly.
  
 
Is this true in general? Note that we can prove that the solution is rank 1 if $A = v v^T + (\text{small amount of noise})$.
 
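As a sanity check for the noiseless case (an illustration added here, assuming $v$ is entrywise nonnegative so that the constraint $W \ge 0$ is not binding): for $A = v v^T$ and any feasible $W$,

\[ \operatorname{Tr}(WA) = v^T W v \le \lambda_{\max}(W)\, \|v\|^2 \le \operatorname{Tr}(W)\, \|v\|^2 = \|v\|^2, \]

and equality holds at $W = v v^T / \|v\|^2$, which is feasible and rank 1.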
