Chi Jin (金驰)
Ph.D. Candidate
495 Soda Hall, Berkeley, CA 94720
Email: chijin (at) berkeley (dot) edu
I am currently a 5th-year Ph.D. student in EECS at UC Berkeley, advised by Michael I. Jordan. I am also a member of AMPLab and Berkeley Artificial Intelligence Research (BAIR). Prior to that, I received my B.S. in Physics from Peking University, where I did my undergraduate thesis with Liwei Wang.
My general research interests are in machine learning, statistics, and optimization. My recent work focuses on learning problems and optimization algorithms, especially in the non-convex setting. I am currently working on general frameworks and theoretical tools for understanding how popular fundamental optimization algorithms behave in the presence of saddle points and local minima. I hope that one day we can turn the tuning of deep networks from a “black art” into a “science”.
Education
2013 - Present | University of California, Berkeley | Ph.D. student in Computer Science
2012 - 2013 | University of Toronto | Visiting student in Statistics
2008 - 2012 | Peking University | Bachelor of Science in Physics
Experience
Summer 2016 | Microsoft Research, Redmond | Research intern with Dong Yu and Chris Basoglu
Summer 2015 | Microsoft Research, New England | Research intern with Sham Kakade
Preprints
Minimizing Nonconvex Population Risk from Rough Empirical Risk [arXiv]
Chi Jin*, Lydia T. Liu*, Rong Ge, Michael I. Jordan
arXiv preprint.
Stability and Convergence Trade-off of Iterative Optimization Algorithms [arXiv]
Yuansi Chen, Chi Jin, Bin Yu
arXiv preprint.
Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent [arXiv]
Chi Jin, Praneeth Netrapalli, Michael I. Jordan
arXiv preprint.
Stochastic Cubic Regularization for Fast Nonconvex Optimization [arXiv]
Nilesh Tripuraneni*, Mitchell Stern*, Chi Jin, Jeffrey Regier, Michael I. Jordan
arXiv preprint.
Publications
Gradient Descent Can Take Exponential Time to Escape Saddle Points [arXiv]
Simon S. Du, Chi Jin, Jason D. Lee, Michael I. Jordan, Barnabas Poczos, Aarti Singh
Neural Information Processing Systems (NIPS) 2017.
How to Escape Saddle Points Efficiently [arXiv] [blog]
Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan
International Conference on Machine Learning (ICML) 2017.
No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis [arXiv]
(α-β order) with Rong Ge, Yi Zheng
International Conference on Machine Learning (ICML) 2017.
Global Convergence of Non-Convex Gradient Descent for Computing Matrix Squareroot [arXiv]
(α-β order) with Prateek Jain, Sham M. Kakade, Praneeth Netrapalli
International Conference on Artificial Intelligence and Statistics (AISTATS) 2017.
Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences [arXiv]
Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J. Wainwright, Michael I. Jordan
Neural Information Processing Systems (NIPS) 2016.
Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent [arXiv]
(α-β order) with Sham M. Kakade, Praneeth Netrapalli
Neural Information Processing Systems (NIPS) 2016.
Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm [arXiv]
(α-β order) with Prateek Jain, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford
Conference on Learning Theory (COLT) 2016.
Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis [arXiv]
(α-β order) with Rong Ge, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford
International Conference on Machine Learning (ICML) 2016.
Faster Eigenvector Computation via Shift-and-Invert Preconditioning [arXiv]
(α-β order) with Dan Garber, Elad Hazan, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, Aaron Sidford
International Conference on Machine Learning (ICML) 2016.
Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition [arXiv]
(α-β order) with Rong Ge, Furong Huang, Yang Yuan
Conference on Learning Theory (COLT) 2015.
Differentially Private Data Releasing for Smooth Queries [paper]
Ziteng Wang, Chi Jin, Kai Fan, Jiaqi Zhang, Junliang Huang, Yiqiao Zhong, Liwei Wang
Journal of Machine Learning Research (JMLR) 2015.
Dimensionality Dependent PAC-Bayes Margin Bound [paper]
Chi Jin, Liwei Wang
Neural Information Processing Systems (NIPS) 2012.