I am an Assistant Professor of Computer Science at UCLA. My research is in statistical machine learning, with a focus on developing and analyzing nonconvex optimization algorithms for understanding large-scale, dynamic, complex, and heterogeneous data, and on building the theoretical foundations of deep learning. I lead the Statistical Machine Learning Lab. I received my Ph.D. degree in Computer Science from the University of Illinois at Urbana-Champaign in 2014.

I am very fortunate to have received several awards for my work, including the Simons-Berkeley Research Fellowship in 2019, the Salesforce Deep Learning Research Award and the Adobe Data Science Research Award in 2018, the NSF CAREER Award in 2017, and the Yahoo! Academic Career Enhancement Award in 2015. Here is my latest CV.

News and Announcements

  • [3/2019] I will give a talk on "Towards Understanding Overparameterized Deep Neural Networks: From Optimization To Generalization" at the Machine Learning Theory Workshop at PKU.
  • [3/2019] I will give a talk on "Towards Understanding Overparameterized Deep Neural Networks: From Optimization To Generalization" at the Statistics Seminar at UCLA.
  • [2/2019] I will give a talk on "Two Facets of Stochastic Optimization: Continuous-Time Dynamics and Discrete-Time Algorithms" at the workshop "Interplay between Control, Optimization, and Machine Learning" at ACC'19.
  • [2/2019] I'm serving as an area chair for ICML'19 and NeurIPS'19, and as a senior program committee member for IJCAI'19 and ACML'19.
  • [1/2019] I will participate in the "Foundations of Deep Learning" program at the Simons Institute as a research fellow in Summer 2019.
  • [12/2018] I (with Zhaoran Wang) will give a tutorial on "Nonconvex Optimization for Knowledge Discovery and Data Mining" at SDM'19.
  • [11/2018] I gave a talk on "Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks: Algorithms and Theory" at USC ISI AI Seminar. [slides]
  • [11/2018] I gave a talk on "New Variance Reduction Algorithms for Nonconvex Finite-Sum Optimization" at USC Machine Learning Seminar. [slides]

For Prospective Students

I am actively looking for talented graduate and undergraduate students interested in the theory of deep learning, adversarial machine learning, nonconvex optimization, reinforcement learning, and their applications to join my lab. Please indicate your interest in working with me in your application. Due to time and lab space limits, I do not host visiting students or summer interns.

Recent Research Highlight


Contact

  • Address: EVI 282, 404 Westwood Plaza, Los Angeles, CA 90095

  • Email: qgu at cs dot ucla dot edu