"People who wish to analyze nature without using mathematics must settle for a reduced understanding." Richard Feynman

About

Amir Gholami is a research scientist at UC Berkeley, jointly affiliated with RISELab and BAIR. He received his PhD from UT Austin, where he worked on large-scale 3D image segmentation, research that received UT Austin’s Best Doctoral Dissertation Award in 2018. He is a Melosh Medal finalist and the recipient of the Best Student Paper Award at SC'17, a Gold Medal in the ACM Student Research Competition, a Best Student Paper finalist nomination at SC'14, and an Amazon Machine Learning Research Award in 2020. He was also part of the Nvidia team that first made low-precision (FP16) neural network training possible, enabling a more than 10x increase in compute power through Tensor Cores; that technology is now widely adopted in GPUs. Amir's current research focuses on efficient AI at the edge and scalable training of neural network models (resume).

Contact email: amirgh _at_ berkeley . edu


Open Positions:

There is an internship opportunity for research in the area of Efficient Machine Learning (the position requires enrollment at UC Berkeley). If you are interested, please email me your CV with the subject line "Efficient ML Internship Application".

Recent News

Students

Over the years, I have been fortunate to work with and mentor the following talented students.

Current Students:

  • Sean Lin: Undergraduate student at UC Berkeley
  • John So: Undergraduate student at UC Berkeley
  • Arvind Rajaraman: Undergraduate student at UC Berkeley
  • Shixing Yu: Visiting undergraduate from PKU

Alumni (Gone but not forgotten):

  • Daiyaan Arfeen: Undergraduate (Now PhD student at CMU)
  • Norman Mu: Undergraduate (Now PhD student at Berkeley)
  • Zach Zheng: Master's student (Now at Apple)
  • Eric Tan: Master's student (Now at Google)
  • Naijing Zhang: Master's student (Now at YouTube, Google)
  • Yifan Bai: Master's student (Now at Amazon)
  • Tianmu Lei: Master's student (Now at Google)
  • Xinran Rui: Master's student (Now at Tenstorrent)
  • Sarvagya Singh: Master's student (Now at Forward Health)
  • Hanbing Zhan: Master's student (Now at ByteDance)
  • Xing Jin: Master's student (Now at Volume Hedge Fund)
  • Vyom Kavishwar: Master's student (Now at Hive)
  • Sheng Shen: Master's student (Now PhD student at Berkeley)
  • Jiayu Ye: Master's student (Now at Google)
  • Gabe Montague: Master's student (Co-founder of "Park and Pedal")
  • Linjian Ma: Master's student (Now PhD student at UIUC)
  • Varun Shenoy: High school student (Now undergrad at Stanford)


Publications

Papers



Workshops

Selected Talks

  • Nvidia GTC Conference, Apr. 2021,
    Systematic Neural Network Quantization.

  • Opening Keynote, Intel System Architecture Summit (ISAS), Feb. 2021,
    Emerging AI Applications: Moving Beyond ResNet50 on ImageNet.

  • Google Research, Dec. 2020,
    Systematic Quantization and Pruning for Efficient Neural Network Inference.


  • Keynote, NSF Cyberinfrastructure Workshop, Feb. 2020,
    An Integrated Approach for Efficient Neural Network Design, Training, and Inference.


  • UC Berkeley, RiseLab Retreat, Jan. 2020,
    HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks.


  • UC Berkeley, BLISS Seminar, Oct. 2019,
    Systematic Quantization of Neural Networks Through Second-Order Information.


  • Facebook, AI Systems Faculty Summit, Sep. 2019,
    Efficient Neural Networks through Systematic Quantization.


  • BSTARS'19, Berkeley Statistics Department, Mar. 2019,
    Neural Networks Through the Lens of the Hessian.


  • Berkeley Simons Institute, 5th Annual Industry Day, Feb. 2019,
    ANODE: Unconditionally Accurate Memory-Efficient Gradients for Neural ODEs.


  • Simons Randomized Numerical Linear Algebra and Applications Workshop, Sep. 2018,
    Large Scale Stochastic Training of Neural Networks.


  • Simons Data Science Finale Workshop, Dec. 2018,
    Towards Robust Second-order Training of Neural Networks.


  • Simons Weekly Optimization Reading Group, Oct. 2018,
    Second-order optimization for convex and non-convex problems.


  • NERSC Data Seminar, Dec. 2018,
    Beyond SGD: Robust Optimization and Second-Order Information for Large-Scale Training of Neural Networks.


  • Stanford, CME 510: Linear Algebra and Optimization Seminar, Nov. 2018,
    Large-scale training of Neural Networks.


  • UCSF Radiology Department, Oct. 2018,
    A Domain Adaptation Framework for Neural Network Based Medical Image Segmentation.


  • Intel AI Meeting, Oct. 2018,
    Autonomous Driving Challenges in Computer Vision Research.


  • Facebook AI Research, Sep. 2018,
    Challenges for Distributed Training of Neural Networks.


  • Microsoft Research, Aug. 2018,
    Large Scale Training of Neural Networks.


  • Berkeley Scientific Computing and Matrix Computations Seminar, Sep. 2017,
    A Framework for Scalable Biophysics-based Image Analysis.


  • Stanford, ICME Star Talk Series, 2017,
    Fast algorithms for inverse problems with parabolic PDE constraints with application to biophysics-based image analysis.


  • SIAM Minisymposium on Imaging Sciences, Albuquerque, NM, USA, 2016,
    On preconditioning the Newton method for PDE-constrained optimization problems.


  • 13th U.S. National Congress on Computational Mechanics, San Diego, CA, USA, 2015,
    Challenges for exascale scalability of elliptic solvers using a model Poisson solver and comparing state-of-the-art methods.


  • SIAM CSE Minisymposium, Salt Lake City, Utah, USA, 2015,
    Parameter estimation for malignant brain tumors.


  • 12th U.S. National Congress on Computational Mechanics, Raleigh, NC, USA, 2013,
    A numerical algorithm for biophysically-constrained parameter estimation for tumor modeling and data assimilation with medical images.


  • SIAM Annual Meeting, San Diego, CA, USA, 2013,
    Image-driven inverse problem for estimating initial distribution of brain tumor modeled by advection-diffusion-reaction equation.




Patents

  • Dynamic directional rounding,
    A. Fit-Florea, A. Gholami, B. Ginsburg, and P. Davoodi.
    Approved by Nvidia Patent Office (US patent pending), 2018.


  • Tensor processing using low precision format,
    B. Ginsburg, S. Nikolaev, A. Kiswani, H. Wu, A. Gholami, S. Kierat, M. Houston, and A. Fit-Florea.
    United States patent application US 15/624,577, Dec. 28, 2017.


  • High performance inplace transpose operations,
    A. Gholami and B. Natarajan,
    United States patent US 10,067,911, 2018.


Copyright © Amir Gholami 2014-2020