Talks & Lectures
20 Feb 2018
McLean 510

Support Recovery From Compressed Measurements: A Non-Convex Approach

CS Seminar Series: Jie Shen, Rutgers University

ABSTRACT

In machine learning and high-dimensional statistics, it is of central importance to understand when a tractable algorithm recovers the support of a sparse signal from its compressed measurements, i.e., performs accurate feature selection.

In this work, we present a novel analysis of the support recovery performance of a family of non-convex algorithms. We show that, under proper conditions, these algorithms recover the support of an arbitrary s-sparse signal within O(s·c·log c) iterations, where c is an appropriate condition number. This gives the best-known results for prominent non-convex algorithms. In addition, we characterize the trade-off between statistical accuracy and computational complexity among the members of the family. Compared to convex programs such as the Lasso, we prove that the non-convex algorithms are computationally more efficient and are able to solve ill-conditioned problems. Our empirical study closely matches the theoretical findings. Finally, we discuss several directions for future work that involve deep learning, stochastic optimization, and active learning in the high-dimensional regime.
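For readers unfamiliar with this family, the sketch below shows one prominent member, iterative hard thresholding (IHT): a gradient step on the least-squares loss followed by projection onto the non-convex set of s-sparse vectors. This is an illustrative example only, not the specific algorithm or analysis presented in the talk; the step size, iteration count, and problem sizes are assumptions chosen for the demo.

```python
# Minimal sketch of iterative hard thresholding (IHT), one prominent
# non-convex algorithm for support recovery from compressed measurements.
# Step size, iteration budget, and dimensions are illustrative assumptions.
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x; zero out the rest."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

def iht(A, y, s, step=1.0, iters=200):
    """Recover an s-sparse signal x from compressed measurements y ≈ A @ x."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # Gradient step on ||y - A x||^2 / 2, then project onto the
        # (non-convex) set of s-sparse vectors.
        x = hard_threshold(x + step * A.T @ (y - A @ x), s)
    return x

# Illustrative usage: n compressed measurements of a d-dimensional s-sparse signal.
rng = np.random.default_rng(0)
n, d, s = 100, 400, 5
A = rng.standard_normal((n, d)) / np.sqrt(n)   # roughly unit-norm columns
x_true = np.zeros(d)
support = rng.choice(d, s, replace=False)
x_true[support] = rng.standard_normal(s)
y = A @ x_true

x_hat = iht(A, y, s)
print(sorted(support) == sorted(np.flatnonzero(x_hat)))  # True if support is recovered
```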

BIOGRAPHY

Jie Shen is currently a fourth-year Ph.D. candidate in the Department of Computer Science at Rutgers University, co-advised by Ping Li and Pranjal Awasthi. He received his Bachelor's degree in Mathematics in 2011 and his Master's degree in Computer Science in 2014, both from Shanghai Jiao Tong University. From 2013 to 2014, he was a visiting scholar at the National University of Singapore. His research interests include machine learning, high-dimensional statistics, and convex/non-convex optimization.
