Matrix decomposition, in which we wish to approximate a given matrix as a sum or product of two or more structured matrices, plays a fundamental role in many machine learning applications. Principal component analysis (PCA) and k-means clustering, both ubiquitous in machine learning, are, for instance, specializations of this problem. Existing algorithms that provably solve these problems are time- and memory-intensive, and consequently do not scale to large data sizes. In this talk, we will present new, efficient algorithms with improved performance guarantees for two such problems: (1) computing the top singular vector of a highly asymmetric rectangular matrix, and (2) robust PCA, in which we wish to decompose a given matrix as the sum of a low-rank matrix and a sparse matrix. A key theme in our work on robust PCA is alternating minimization, an empirically popular heuristic for solving a large class of non-convex optimization problems. Despite its empirical success, prior to our work there was very little understanding of when and why alternating minimization works well. In addition to robust PCA, we also provide the first performance guarantees for alternating minimization applied to matrix completion, phase retrieval, and learning sparsely used dictionaries.
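To make the alternating-minimization idea concrete, here is a minimal illustrative sketch (not the speaker's algorithm, and with a hypothetical function name and threshold choice): each iteration alternates a truncated-SVD projection for the low-rank part with hard thresholding of the residual for the sparse part.

```python
import numpy as np

def robust_pca_alt(M, rank, thresh, n_iters=50):
    """Illustrative alternating-minimization sketch for robust PCA,
    decomposing M into a low-rank part L and a sparse part S.
    `rank`, `thresh`, and `n_iters` are hypothetical tuning choices."""
    S = np.zeros_like(M)
    for _ in range(n_iters):
        # Low-rank step: project M - S onto rank-r matrices via truncated SVD.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse step: keep only residual entries whose magnitude exceeds thresh.
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)
    return L, S

# Example: a rank-2 matrix corrupted by one large sparse spike.
rng = np.random.default_rng(0)
L_true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
S_true = np.zeros((20, 20))
S_true[3, 4] = 8.0
M = L_true + S_true
L, S = robust_pca_alt(M, rank=2, thresh=3.0)
```

By construction, the returned L has rank at most `rank`, and every entry of the remaining residual M - L - S is below the threshold in magnitude; the convergence analysis in the talk concerns when such iterations provably recover the true decomposition.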
Praneeth Netrapalli is currently a postdoctoral researcher at Microsoft Research New England in Cambridge, MA. He obtained his MS and PhD from UT Austin and his B.Tech from IIT Bombay, all in electrical engineering. Before pursuing his PhD, he spent two years at Goldman Sachs, Bangalore, as a quantitative analyst, where he worked on pricing derivatives. His research focuses on designing efficient and provable algorithms for machine learning problems.