There is widespread sentiment that it is not possible to effectively utilize fast gradient methods (e.g., Nesterov's acceleration, conjugate gradient, heavy ball) for stochastic optimization due to their instability and error accumulation, a notion made precise in d'Aspremont (2008) and Devolder, Glineur, and Nesterov (2014). This work considers these issues for the special case of stochastic approximation for the least squares regression problem, and our main result refutes the conventional wisdom by showing that acceleration can be made robust to statistical errors. In particular, this work introduces an accelerated stochastic gradient method that provably achieves the minimax optimal statistical risk faster than stochastic gradient descent. Critical to the analysis is a sharp characterization of accelerated stochastic gradient descent as a stochastic process. We hope this characterization gives insight into the broader question of designing simple and effective accelerated stochastic methods for more general convex and non-convex optimization problems.
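To make the setting concrete, below is a minimal sketch of a generic Nesterov-style accelerated SGD loop on a synthetic least squares problem. The step size, momentum parameter, and variable names are illustrative assumptions, not the specific algorithm or tuning analyzed in the talk.

```python
import numpy as np

# Synthetic least squares instance: b = A x_star + noise (illustrative setup).
rng = np.random.default_rng(0)
n, d = 10_000, 20
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star + 0.1 * rng.normal(size=n)

def stochastic_grad(x, i):
    """Gradient of 0.5 * (a_i^T x - b_i)^2 for a single sampled row a_i."""
    a_i = A[i]
    return (a_i @ x - b[i]) * a_i

x = np.zeros(d)   # current iterate
y = np.zeros(d)   # extrapolation (look-ahead) point
eta = 0.01        # step size (assumed, not from the paper)
beta = 0.9        # momentum parameter (assumed, not from the paper)

for t in range(50_000):
    i = rng.integers(n)
    g = stochastic_grad(y, i)
    x_next = y - eta * g               # gradient step from the extrapolated point
    y = x_next + beta * (x_next - x)   # momentum / extrapolation step
    x = x_next

# Excess empirical risk relative to the planted solution.
print("excess risk:", np.mean((A @ x - b) ** 2) - np.mean((A @ x_star - b) ** 2))
```

The instability concern referenced above is that the momentum step can amplify the noise in each stochastic gradient; the talk's contribution is an accelerated scheme and analysis for which this does not degrade the statistical risk.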
Dr. Praneeth Netrapalli is currently at Microsoft Research in Bengaluru. Prior to that, he was a postdoctoral researcher at Microsoft Research New England in Cambridge, MA. He obtained his M.S. and Ph.D. from the University of Texas at Austin and his B.Tech. from IIT Bombay, all in electrical engineering. Before pursuing his Ph.D., he spent two years at Goldman Sachs in Bengaluru as a quantitative analyst, where he worked on pricing derivatives. His research focuses on designing provably efficient algorithms for machine learning problems.