Dear prospective students:
Thank you for your interest in joining my group! I am unable to respond to emails about PhD admissions; please apply directly through the department.
PhD applicants: I expect to admit one or two graduate students each year. Please apply through the EECS department under the CS division; if you want to work with me, the best category to apply to is CV-AI.
Graduate or undergraduate students at UC Berkeley: Please send me an email with your interests, resume, and transcript. For undergraduates, the requirements are a solid engineering internship and CS189 or CS182/282. I prefer students who have taken some of these classes: CS184/284, CS194-26/294-26, CS280, and advanced math courses. Experience with Unity/Blender/Unreal is a plus, as is art or photography experience. Here is some more information to get you started:
If you're interested in NeRF research, use nerf.studio to make two cool captures. Be creative! Then address at least two GitHub issues, and send me an email with your results and the PRs.
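A rough sketch of the typical nerfstudio capture workflow follows; the file and directory names are placeholders, and flags may change between releases, so check the current nerfstudio documentation:

```shell
# Install nerfstudio (a CUDA-capable GPU is assumed)
pip install nerfstudio

# Process a casually captured video into a posed-image dataset
# (runs COLMAP under the hood; "capture.mp4" is your own video)
ns-process-data video --data capture.mp4 --output-dir data/my-capture

# Train a nerfacto model on the processed capture
ns-train nerfacto --data data/my-capture

# Inspect the trained scene in the interactive web viewer
# (substitute the config.yml path printed at the end of training)
ns-viewer --load-config outputs/my-capture/nerfacto/<timestamp>/config.yml
```

Renders exported from the viewer (or via `ns-render`) are a good way to show off the captures in your email.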
If you're interested in research on perceiving 3D people, such as SLAHMR, run SLAHMR on two of your own videos and send me an email with the results.
Interns: We are not taking visitors or students who are not at UC Berkeley at this time.
My research lies at the intersection of computer vision, computer graphics, and machine learning. We live in a 3D world that is dynamic and full of life, with people and animals interacting with their environment. How can we build a system that can capture, perceive, and understand this 4D world from everyday photographs and videos? How can we learn priors on the 4D world from image and video observations?
The goal of my lab is to answer these questions.
Humans in 4D: Reconstructing and Tracking Humans with Transformers
Shubham Goel, Georgios Pavlakos, Jathushan Rajasegaran, Angjoo Kanazawa*, Jitendra Malik* ICCV 2023
Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions
Ayaan Haque, Matthew Tancik, Alexei A. Efros, Aleksander Holynski, Angjoo Kanazawa ICCV 2023
LERF: Language Embedded Radiance Fields
Justin Kerr*, Chung Min Kim*, Ken Goldberg, Angjoo Kanazawa, Matthew Tancik ICCV 2023
👻Nerfbusters🧹: Removing Ghostly Artifacts from Casually Captured NeRFs
Frederik Warburg*, Ethan Weber*, Matt Tancik, Aleksander Holynski, Angjoo Kanazawa ICCV 2023
NerfAcc: Efficient Sampling Accelerates NeRFs
Ruilong Li, Hang Gao, Matthew Tancik, Angjoo Kanazawa
Nerfstudio: A Modular Framework for Neural Radiance Field Development
Matthew Tancik*, Ethan Weber*, Evonne Ng*, Ruilong Li, Brent Yi, Justin Kerr, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa SIGGRAPH 2023 (conference track)
Decoupling Human and Camera Motion from Videos in the Wild
Vickie Ye, Georgios Pavlakos, Jitendra Malik, Angjoo Kanazawa CVPR 2023
K-Planes: Explicit Radiance Fields in Space, Time, and Appearance
Sara Fridovich-Keil*, Giacomo Meanti*, Frederik Rahbæk Warburg, Benjamin Recht, Angjoo Kanazawa CVPR 2023
On the Benefits of 3D Pose and Tracking for Human Action Recognition
Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, Christoph Feichtenhofer, Jitendra Malik CVPR 2023
Studying Bias in GANs through the Lens of Race
Vongani H. Maluleke*, Neerja Thakkar*, Tim Brooks, Ethan Weber, Trevor Darrell, Alexei A. Efros, Angjoo Kanazawa, Devin Guillory ECCV 2022
[Project Page] [Code] [Paper] [bibtex]
Plenoxels: Radiance Fields without Neural Networks
Alex Yu*, Sara Fridovich-Keil*, Matthew Tancik, Qinhong Chen, Ben Recht, Angjoo Kanazawa CVPR 2022 (Oral)
[Project Page] [Paper] [Code]
Human Mesh Recovery from Multiple Shots
Georgios Pavlakos, Jitendra Malik, Angjoo Kanazawa CVPR 2022
[Project Page] [Paper]
Differentiable Gradient Sampling for Learning Implicit 3D Scene Reconstructions from a Single Image
Shizhan Zhu, Sayna Ebrahimi, Angjoo Kanazawa, Trevor Darrell ICLR 2022
[Project Page] [Paper] [Code] [bibtex]
Tracking People with 3D Representations
Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, Jitendra Malik NeurIPS 2021
[Project Page] [Paper] [Code] [bibtex]
Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image
Andrew Liu*, Richard Tucker*, Varun Jampani, Ameesh Makadia, Noah Snavely, Angjoo Kanazawa ICCV 2021 (Oral)
[Project Page with Demo] [Paper] [Code] [bibtex]
An Analysis of SVD for Deep Rotation Estimation
Jake Levinson, Carlos Esteves, Kefan Chen, Noah Snavely, Angjoo Kanazawa, Afshin Rostamizadeh, Ameesh Makadia NeurIPS 2020
[Github] [paper] [bibtex]
Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild
Jason Y. Zhang*, Sam Pepose*, Hanbyul Joo, Deva Ramanan, Jitendra Malik, Angjoo Kanazawa ECCV 2020
[Github] [arXiv preprint] [bibtex]
Three-D Safari: Learning to Estimate Zebra Pose, Shape, and Texture from Images "In the Wild"
Silvia Zuffi, Angjoo Kanazawa, Tanya Berger-Wolf, Michael J. Black ICCV 2019
[Github] [arXiv preprint] [bibtex]
PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization
Shunsuke Saito*, Zeng Huang*, Ryota Natsume*, Shigeo Morishima, Angjoo Kanazawa, Hao Li
(* equal contribution) ICCV 2019
[project page] [arXiv preprint] [video] [bibtex]
SfSNet: Learning Shape, Reflectance and Illuminance of Faces ‘in the wild’
Soumyadip Sengupta, Angjoo Kanazawa, Carlos D. Castillo, David W. Jacobs CVPR 2018 (Spotlight)
[project page with code] [pdf] [arXiv preprint] [bibtex]
Lions and Tigers and Bears: Capturing Non-Rigid, 3D, Articulated Shape from Images
Silvia Zuffi, Angjoo Kanazawa, Michael J. Black CVPR 2018 (Spotlight)
[project page with 3D models] [pdf] [bibtex]
Towards Accurate Marker-less Human Shape and Pose Estimation over Time
Yinghao Huang, Federica Bogo, Christoph Lassner, Angjoo Kanazawa, Peter V. Gehler, Javier Romero, Ijaz Akhter, Michael J. Black International Conference on 3D Vision (3DV), 2017.
3D Menagerie: Modeling the 3D shape and pose of animals
Silvia Zuffi, Angjoo Kanazawa, David W. Jacobs, Michael J. Black Computer Vision and Pattern Recognition (CVPR) 2017. (Spotlight)
[project page with model and demo] [pdf] [arXiv] [bibtex]
Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image
Federica Bogo*, Angjoo Kanazawa*, Christoph Lassner, Peter Gehler, Javier Romero and Michael J. Black (* equal contribution) European Conference on Computer Vision (ECCV) 2016. (Spotlight)
[project page with code] [Spotlight video] [bibtex]
Locally Scale-invariant Convolutional Neural Network
Angjoo Kanazawa, Abhishek Sharma, David W. Jacobs Deep Learning and Representation Learning Workshop: NIPS 2014.
Affordance of Object Parts from Geometric Features
Austin Myers, Angjoo Kanazawa, Cornelia Fermuller, Yiannis Aloimonos RGB-D: Advanced Reasoning with Depth Cameras Workshop, RSS 2014
[pdf] [Part Affordance Dataset] [bibtex]
Dog Breed Classification Using Part Localization
Jiongxin Liu, Angjoo Kanazawa, Peter Belhumeur, David W. Jacobs European Conference on Computer Vision (ECCV), Oct. 2012.
Try our iPhone app: Dogsnap!
Columbia Dogs with Parts dataset used in the paper: zip file (2.43 GB)
133 breeds recognized by the American Kennel Club
8,351 images of dogs from Google image search, Image-net, and
8 part locations annotated for each image
Single-View 3D Reconstruction of Animals
Doctoral Thesis, University of Maryland, August 2017