
Chi Jin

Ph.D. Candidate
Electrical Engineering and Computer Sciences,
University of California, Berkeley.

495 Soda Hall, Berkeley, CA 94720

Email: chijin (at) berkeley (dot) edu


I am currently a 4th-year Ph.D. student in EECS at UC Berkeley, advised by Michael I. Jordan. I am also a member of the AMPLab and Berkeley Artificial Intelligence Research (BAIR). Before coming to Berkeley, I did my undergraduate thesis at Peking University with Liwei Wang.

My general research interests are in machine learning, statistics, and optimization. My recent work focuses on non-convex optimization, where I aim to develop a general framework that simplifies and standardizes the analysis of most non-convex algorithms while still giving tight guarantees.
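As a flavor of this line of work (see "How to Escape Saddle Points Efficiently" below): gradient descent can stall near saddle points, where the gradient vanishes, and adding a small random perturbation whenever the gradient is tiny suffices to escape strict saddles. The snippet below is a minimal illustrative sketch of that idea in NumPy, not the paper's exact algorithm; the step size, threshold, radius, and perturbation interval are hypothetical knobs, not the constants from the analysis.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, eta=0.05, g_thresh=1e-3,
                               radius=0.1, t_noise=20, max_iter=2000,
                               rng=None):
    """Gradient descent that adds a small random perturbation whenever
    the gradient is tiny, to help escape (strict) saddle points.

    Minimal sketch: eta, g_thresh, radius, t_noise are hypothetical
    parameters, not the constants from the paper's analysis.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    last_perturb = -t_noise  # iteration of the most recent perturbation
    for t in range(max_iter):
        if np.linalg.norm(grad(x)) <= g_thresh and t - last_perturb >= t_noise:
            # Gradient is tiny: possibly near a saddle. Add noise drawn
            # uniformly from a ball (uniform direction, U^(1/d) scaling).
            xi = rng.standard_normal(x.shape)
            xi *= radius * rng.random() ** (1.0 / x.size) / np.linalg.norm(xi)
            x = x + xi
            last_perturb = t
        x = x - eta * grad(x)  # standard gradient step
    return x

# Toy example: f(x) = (x1^2 - 1)^2 + x2^2 has a strict saddle at the
# origin. Plain gradient descent started there never moves; the
# perturbed version escapes to near one of the minima at (+/-1, 0).
grad_f = lambda x: np.array([4 * x[0] * (x[0] ** 2 - 1), 2 * x[1]])
print(perturbed_gradient_descent(grad_f, np.zeros(2)))
```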


Education

2013 - Present  University of California, Berkeley
Ph.D. student in Computer Science

2012 - 2013  University of Toronto
Visiting student in Statistics

2008 - 2012  Peking University
Bachelor of Science in Physics

Experience

Summer 2016 Microsoft Research, Redmond
Research intern with Dong Yu and Chris Basoglu

Summer 2015 Microsoft Research, New England
Research intern with Sham Kakade

Preprints

(♦ denotes alphabetical ordering)
How to Escape Saddle Points Efficiently [arXiv]
Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan
arXiv preprint.

No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis [arXiv]
Rong Ge, Chi Jin, Yi Zheng
arXiv preprint.

Publications

(♦ denotes alphabetical ordering)
Global Convergence of Non-Convex Gradient Descent for Computing Matrix Squareroot [arXiv]
Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli
International Conference on Artificial Intelligence and Statistics (AISTATS) 2017.

Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences [arXiv]
Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J. Wainwright, Michael I. Jordan
Neural Information Processing Systems (NIPS) 2016.

Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent [arXiv]
Chi Jin, Sham M. Kakade, Praneeth Netrapalli
Neural Information Processing Systems (NIPS) 2016.

Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm [arXiv]
Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford
Conference on Learning Theory (COLT) 2016.

Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis [arXiv]
Rong Ge, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford
International Conference on Machine Learning (ICML) 2016.

Faster Eigenvector Computation via Shift-and-Invert Preconditioning [arXiv]
Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, Aaron Sidford
International Conference on Machine Learning (ICML) 2016.

Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition [arXiv]
Rong Ge, Furong Huang, Chi Jin, Yang Yuan
Conference on Learning Theory (COLT) 2015.

Differentially Private Data Releasing for Smooth Queries [paper]
Ziteng Wang, Chi Jin, Kai Fan, Jiaqi Zhang, Junliang Huang, Yiqiao Zhong, Liwei Wang
Journal of Machine Learning Research (JMLR) 2015.

Dimensionality Dependent PAC-Bayes Margin Bound [paper]
Chi Jin, Liwei Wang
Neural Information Processing Systems (NIPS) 2012.