Principal Component Analysis with Nonnegativity Constraints

From Open Problems in Sublinear Algorithms
Revision as of 08:41, 29 May 2014

Suggested by: Andrea Montanari
Source: Bertinoro 2014
Short link: https://sublinear.info/62

Given a symmetric matrix $A$, we can think of Principal Component Analysis (PCA) as maximizing $x^\top A x$ subject to $\|x\| = 1$. If we also add the constraint $x \ge 0$, the problem becomes NP-hard. We can define a convex relaxation:

\[ \max \operatorname{Tr}(WA) \quad \text{s.t.} \quad \operatorname{Tr}(W) = 1,\ \ W \ge 0,\ \ W \succeq 0. \]

Here $W \ge 0$ denotes entrywise nonnegativity and $W \succeq 0$ denotes positive semidefiniteness. If the optimal $W$ has rank one, i.e., $W = x x^\top$, then $x$ can be taken nonnegative (all entries $x_i x_j$ of $W$ are nonnegative), and $\operatorname{Tr}(WA) = x^\top A x$, so $x$ is optimal for the original problem.
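The relaxation is a semidefinite program, so it is easy to prototype. Below is a minimal sketch using cvxpy; the function name solve_relaxation and the reliance on solver defaults are illustrative assumptions, not part of the problem statement.

    import numpy as np
    import cvxpy as cp

    def solve_relaxation(A):
        # max Tr(WA)  s.t.  Tr(W) = 1,  W >= 0 entrywise,  W PSD
        n = A.shape[0]
        W = cp.Variable((n, n), PSD=True)           # symmetric positive semidefinite
        constraints = [cp.trace(W) == 1, W >= 0]    # unit trace, entrywise nonnegative
        cp.Problem(cp.Maximize(cp.trace(W @ A)), constraints).solve()
        return W.value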

Suppose that $A$ is a random matrix: in particular, take the entries $A_{ij}$ for $i \le j$ to be i.i.d. $N(0,1)$ (with $A_{ji} = A_{ij}$). Then empirical results show that the resulting $W$ is a rank-1 matrix, which means that we recover the optimal $x$ exactly.
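This observation is easy to test numerically. The sketch below (continuing the snippet above; the dimension $n = 30$ and the seed are arbitrary choices) draws such a random symmetric matrix, solves the relaxation, and inspects the spectrum of the optimizer:

    rng = np.random.default_rng(0)           # fixed seed, arbitrary
    n = 30
    A = np.triu(rng.standard_normal((n, n)))
    A = A + np.triu(A, 1).T                  # symmetric; A_ij i.i.d. N(0,1) for i <= j

    W = solve_relaxation(A)
    eigs = np.linalg.eigvalsh(W)
    # A rank-1 W with Tr(W) = 1 has spectrum (1, 0, ..., 0), so the second
    # eigenvalue should be numerically zero if the empirical claim holds.
    print("top two eigenvalues:", eigs[-1], eigs[-2])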

Is this true in general? Note that we can prove that the solution is rank 1 if $A = v v^\top + (\text{small amount of noise})$.
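The planted case can be checked in the same way. In the sketch below (again continuing the snippets above), the nonnegative unit vector $v$ and the noise level $0.01$ are illustrative assumptions standing in for "a small amount of noise":

    # Planted model: A = v v^T + small symmetric Gaussian noise.
    v = np.abs(rng.standard_normal(n))
    v /= np.linalg.norm(v)                   # nonnegative unit vector (assumption)
    E = np.triu(rng.standard_normal((n, n)))
    E = E + np.triu(E, 1).T                  # symmetric Gaussian noise
    W = solve_relaxation(np.outer(v, v) + 0.01 * E)

    eigs, vecs = np.linalg.eigh(W)
    x = np.abs(vecs[:, -1])                  # top eigenvector, sign fixed to be nonnegative
    print("second eigenvalue of W:", eigs[-2])   # ~0 if the solution is rank 1
    print("overlap <x, v>:", float(x @ v))       # ~1 if v is recovered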