Alternating minimization (AltMin) is a generic term for a widely used approach in non-convex inference: often, it is possible to partition the variables into two (or more) sets, so that the problem is convex/tractable in one set if the other is held fixed (and vice versa). This allows for alternating between optimally updating one set of variables, and then the other. AltMin methods typically do not have associated global consistency guarantees, even though they are empirically observed to perform better than methods (e.g. those based on convex optimization) that do have guarantees. In this talk, he will present rigorous performance guarantees for AltMin in three statistical inference settings: low-rank matrix completion, phase retrieval, and learning sparsely-used dictionaries. The overarching theme behind these results consists of two parts: (i) devising new initialization procedures (as opposed to initializing randomly, as is typical), and (ii) establishing exponential local convergence from this initialization.
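To make the alternating structure concrete, below is a minimal, illustrative sketch of AltMin for low-rank matrix completion: with one factor held fixed, updating the other reduces to small least-squares problems, and the loop alternates between the two. The spectral initialization from the observed entries stands in for the non-random initialization procedures the abstract alludes to; this is not the speaker's exact algorithm, just a standard sketch of the idea.

```python
import numpy as np

def altmin_completion(M, mask, rank, n_iters=50):
    """Alternating minimization for low-rank matrix completion (sketch).

    Holding V fixed, each row of U solves a convex least-squares problem
    over the observed entries, and vice versa -- the partition-and-alternate
    structure described in the abstract.
    """
    m, n = M.shape
    # Spectral initialization: top singular vectors of the zero-filled
    # observed matrix (a non-random starting point).
    U, _, _ = np.linalg.svd(mask * M, full_matrices=False)
    U = U[:, :rank]
    V = np.zeros((n, rank))
    for _ in range(n_iters):
        # Update V with U fixed: one small least-squares problem per column.
        for j in range(n):
            rows = mask[:, j].astype(bool)
            if rows.any():
                V[j], *_ = np.linalg.lstsq(U[rows], M[rows, j], rcond=None)
        # Update U with V fixed, symmetrically.
        for i in range(m):
            cols = mask[i].astype(bool)
            if cols.any():
                U[i], *_ = np.linalg.lstsq(V[cols], M[i, cols], rcond=None)
    return U, V
```

On a well-sampled low-rank matrix, the iterates typically converge rapidly from such an initialization, which is the local-convergence phenomenon the guarantees formalize.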
Praneeth Netrapalli is a PhD student in the ECE department at UT Austin, where he is advised by Prof. Sujay Sanghavi. His research focuses on designing and understanding practical algorithms for solving machine learning problems. He obtained his MS from UT Austin and his B.Tech from IIT Bombay, both in Electrical Engineering. His prior experience includes two years as a quantitative analyst at Goldman Sachs, where he worked on pricing and evaluating risk on complex financial products.