
Chi Jin (金驰)

Ph.D. Candidate
Electrical Engineering and Computer Sciences,
University of California, Berkeley.

495 Soda Hall, Berkeley, CA 94720

Email: chijin (at) berkeley (dot) edu


I am currently a 5th-year Ph.D. student in EECS at UC Berkeley, advised by Michael I. Jordan. I am also a member of AMPLab and Berkeley Artificial Intelligence Research (BAIR). Prior to that, I received my B.S. in Physics from Peking University, where I did my undergraduate thesis with Liwei Wang.

My general research interests are in machine learning, statistics, and optimization. My recent interests focus on learning problems and optimization algorithms, especially in the non-convex setting. I am currently working on general frameworks and theoretical tools for understanding how popular fundamental optimization algorithms behave in the presence of saddle points and local minima. I hope that one day we can turn the tuning of deep nets from a "black art" into a "science".
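As a concrete illustration of this line of work, here is a minimal sketch of perturbed gradient descent in the spirit of the ICML 2017 paper "How to Escape Saddle Points Efficiently" listed below. The toy objective, step size, perturbation radius, and stopping threshold are illustrative assumptions, not the algorithmic constants from the paper.

```python
import numpy as np

def f(x):
    # Toy non-convex objective: strict saddle at the origin, minima near (0, +/-1).
    return x[0] ** 2 + 0.25 * (x[1] ** 2 - 1) ** 2

def grad_f(x):
    return np.array([2 * x[0], x[1] * (x[1] ** 2 - 1)])

def perturbed_gradient_descent(x0, eta=0.05, radius=1e-2, tol=1e-3, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            # Near a first-order stationary point: add a small random perturbation
            # so that gradient descent can escape a strict saddle point.
            x = x + rng.uniform(-radius, radius, size=x.shape)
        else:
            x = x - eta * g
    return x

# Starting exactly at the saddle point, plain gradient descent would stay put;
# the perturbation lets the iterates slide toward one of the local minima.
print(perturbed_gradient_descent([0.0, 0.0]))
```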


Education

2013 - Present  University of California, Berkeley
Ph.D. student in Computer Science

2012 - 2013  University of Toronto
Visiting student in Statistics

2008 - 2012  Peking University
Bachelor of Science in Physics

Experience

Summer 2016 Microsoft Research, Redmond
Research intern with Dong Yu and Chris Basoglu

Summer 2015 Microsoft Research, New England
Research intern with Sham Kakade

Publications

Gradient Descent Can Take Exponential Time to Escape Saddle Points [arXiv]
Simon S. Du, Chi Jin, Jason D. Lee, Michael I. Jordan, Barnabas Poczos, Aarti Singh
Neural Information Processing Systems (NIPS) 2017.

How to Escape Saddle Points Efficiently [arXiv] [blog]
Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan
International Conference on Machine Learning (ICML) 2017.

No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis [arXiv]
(α-β order) with Rong Ge, Yi Zheng
International Conference on Machine Learning (ICML) 2017.

Global Convergence of Non-Convex Gradient Descent for Computing Matrix Squareroot [arXiv]
(α-β order) with Prateek Jain, Sham M. Kakade, Praneeth Netrapalli
International Conference on Artificial Intelligence and Statistics (AISTATS) 2017.

Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences [arXiv]
Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J. Wainwright, Michael I. Jordan
Neural Information Processing Systems (NIPS) 2016.

Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent [arXiv]
(α-β order) with Sham M. Kakade, Praneeth Netrapalli
Neural Information Processing Systems (NIPS) 2016.

Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm [arXiv]
(α-β order) with Prateek Jain, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford
Conference on Learning Theory (COLT) 2016.

Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis [arXiv]
(α-β order) with Rong Ge, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford
International Conference on Machine Learning (ICML) 2016.

Faster Eigenvector Computation via Shift-and-Invert Preconditioning [arXiv]
(α-β order) with Dan Garber, Elad Hazan, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, Aaron Sidford
International Conference on Machine Learning (ICML) 2016.

Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition [arXiv]
(α-β order) with Rong Ge, Furong Huang, Yang Yuan
Conference on Learning Theory (COLT) 2015.

Differentially Private Data Releasing for Smooth Queries [paper]
Ziteng Wang, Chi Jin, Kai Fan, Jiaqi Zhang, Junliang Huang, Yiqiao Zhong, Liwei Wang
Journal of Machine Learning Research (JMLR) 2015.

Dimensionality Dependent PAC-Bayes Margin Bound [paper]
Chi Jin, Liwei Wang
Neural Information Processing Systems (NIPS) 2012.