Distributed Stochastic Gradient Descent (SGD), when run in a synchronous manner, suffers from delays while waiting for the slowest workers (stragglers). Asynchronous methods alleviate stragglers but introduce gradient staleness, which can adversely affect convergence. In this work, we present the first theoretical characterization of the speed-up offered by asynchronous methods, analyzing the trade-off between the error in the trained model and the actual training runtime (wall-clock time). In the second part of the talk, I will discuss a unified convergence analysis of communication-efficient distributed SGD algorithms, including federated, elastic, and decentralized averaging. The novelty of our work is that the runtime analysis accounts for random gradient-computation and communication delays, which helps us design and compare distributed SGD algorithms that achieve the fastest true convergence with respect to wall-clock time.
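To illustrate the class of communication-efficient, local-update SGD algorithms the talk covers, here is a minimal single-process sketch of periodic-averaging (federated-averaging-style) SGD on a toy quadratic objective. The worker count, averaging period `tau`, learning rate, and objective are illustrative assumptions for this sketch, not the setup analyzed in the talk.

```python
import numpy as np

def periodic_averaging_sgd(num_workers=4, tau=5, rounds=20,
                           lr=0.1, dim=3, seed=0):
    """Sketch of local-update SGD: each worker runs tau local SGD
    steps on the toy objective f(w) = 0.5 * ||w - w_star||^2 using
    noisy gradients, then all local models are averaged (the
    communication step). All parameters here are illustrative."""
    rng = np.random.default_rng(seed)
    w_star = np.ones(dim)               # minimizer of the toy objective
    w0 = rng.normal(size=dim)           # shared initial model
    workers = [w0.copy() for _ in range(num_workers)]
    for _ in range(rounds):
        # local computation: tau independent SGD steps per worker
        for k in range(num_workers):
            for _ in range(tau):
                # stochastic gradient = true gradient + Gaussian noise
                grad = (workers[k] - w_star) + 0.1 * rng.normal(size=dim)
                workers[k] -= lr * grad
        # communication: average all local models and redistribute
        avg = np.mean(workers, axis=0)
        workers = [avg.copy() for _ in range(num_workers)]
    return workers[0], w_star

w_final, w_star = periodic_averaging_sgd()
print("final error:", np.linalg.norm(w_final - w_star))
```

Increasing `tau` reduces how often workers communicate (saving wall-clock time per round), at the cost of letting the local models drift apart between averaging steps; trading off these two effects is exactly the kind of question the unified analysis addresses.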
Gauri Joshi has been an assistant professor in the ECE department at Carnegie Mellon University since September 2017. Prior to that, she worked as a Research Staff Member at the IBM T. J. Watson Research Center. Gauri completed her Ph.D. at MIT EECS in June 2016. She received her B.Tech and M.Tech in Electrical Engineering from the Indian Institute of Technology (IIT) Bombay in 2010. Her awards and honors include the IBM Faculty Research Award (2017), the Best Thesis Prize in Computer Science at MIT (2012), the Institute Gold Medal of IIT Bombay (2010), and the Claude Shannon Research Assistantship (2015-16).