
How Many Random Projections Does One Need to Recover a $k$-sparse Vector?

Thursday 28th February, 4pm - 5pm, CS1.01, Computer Science

Seminar Abstract from Jared Tanner:

The essential information contained in most large data sets is small compared to the size of the data set itself. That is, the data can be well approximated using relatively few terms in a suitable transformation. This paradigm of Compressed Sensing suggests a revolution in how information is collected and processed.


In this talk we consider a stronger notion of compressibility, sparsity, which measures the number of non-zero entries. For data sets which are sparse (possibly after a transformation), the data can often be recovered efficiently from relatively few randomized measurements by using highly non-linear, optimization-based reconstruction techniques.

Specifically, consider an underdetermined system of linear equations $y=Ax$ with known $y$ and a known $n\times N$ matrix $A$ with $n<N$. We seek the sparsest solution, i.e., the $x$ with the fewest nonzeros satisfying $y=Ax$. In general this problem is NP-hard. However, for many matrices $A$ there is a threshold phenomenon: if the sparsest solution is sufficiently sparse, it can be found by linear programming. Quantitative values for strong and weak thresholds will be presented. The strong threshold guarantees the recovery of the sparsest solution $x_0$, whereas the weaker sparsity constraint ensures the recovery of the sparsest solution for most $x_0$. These thresholds follow from connections with high-dimensional geometry, specifically results about the structure of Gaussian point clouds and the neighborliness of polytopes.
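For concreteness, the NP-hard problem above is $\min \|x\|_0$ subject to $y=Ax$, and the linear-programming relaxation is $\min \|x\|_1$ subject to $y=Ax$, known as basis pursuit. The Python sketch below illustrates this recovery under assumed parameters ($n=40$, $N=100$, $k=5$) with a Gaussian random matrix $A$; it is an illustration of the general technique, not code from the talk.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, N, k = 40, 100, 5          # measurements, ambient dimension, sparsity (assumed values)

    # k-sparse ground truth x0 and a Gaussian random measurement matrix A
    x0 = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    x0[support] = rng.standard_normal(k)
    A = rng.standard_normal((n, N)) / np.sqrt(n)
    y = A @ x0

    # Basis pursuit as a linear program: split x = u - v with u, v >= 0
    # and minimise sum(u) + sum(v) subject to A(u - v) = y.
    c = np.ones(2 * N)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
    x_hat = res.x[:N] - res.x[N:]

    print("max recovery error:", np.abs(x_hat - x0).max())

The split $x = u - v$ is the standard device for turning the $\ell_1$ objective into a linear one: at an optimum at most one of $u_i$, $v_i$ is non-zero, so $u_i + v_i = |x_i|$. With $k$ well below the weak threshold for these dimensions, the linear program recovers $x_0$ exactly (up to solver tolerance) for most draws of $A$ and $x_0$.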

Computer Science Department Homepage:

http://www2.warwick.ac.uk/fac/sci/dcs/