Dear prospective students:
Thank you for your interest in joining my group! Unfortunately, I am not able to respond to individual emails about PhD admissions; please apply directly through the department. Thank you!
PhD applicants: I will be looking to hire one or two graduate students every year. Please apply through the EECS department under the CS division; if you want to work with me, the best category to apply to is CV-AI.
Graduate or undergraduate students at UCB: Please send me an email with your interests, resume, and transcript. For undergraduates, the requirement is a solid engineering internship and CS189 or CS182/282. I prefer students who have taken some of these classes: CS184/284, CS194-26/294-26, CS280, and advanced math courses. Experience with Unity/Blender/Unreal is a plus; art/photography experience is also a plus. Here is some more information to get you started:
If you're interested in NeRF research, use nerf.studio and make two cool captures! Be creative. Then address at least two GitHub issues. You can also look into the gsplat library and resolve issues there as well. Then send me an email with your results and the PRs (a rough getting-started sketch follows after these notes).
If you're interested in research on perceiving 3D people, such as SLAHMR, run SLAHMR on two of your own videos. Then send me an email with the results and your thoughts.
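For context, here is a minimal sketch of the kind of tinkering involved. This is not an official nerfstudio recipe, just my rough notes under some assumptions: it assumes you have already processed and trained a capture with ns-train nerfacto, the output path is a hypothetical placeholder, and the API calls reflect my understanding of recent nerfstudio releases.

```python
# Minimal sketch, assuming a capture already trained with `ns-train nerfacto --data <capture>`.
# The output path is a hypothetical placeholder; point it at your own run's config.yml.
from pathlib import Path

from nerfstudio.utils.eval_utils import eval_setup

config_path = Path("outputs/my-capture/nerfacto/2024-01-01_000000/config.yml")  # placeholder

# Restore the trained pipeline (model + datamanager) from the saved config and checkpoint.
config, pipeline, checkpoint_path, step = eval_setup(config_path, test_mode="test")

# Average PSNR/SSIM/LPIPS over the capture's held-out eval images.
metrics = pipeline.get_average_eval_image_metrics()
print(f"checkpoint step {step}: {metrics}")
```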
Interns: We are not taking visitors or students who are not at UC Berkeley at this time.
Sept 30 (Monday), 4:50pm, Foundation Models for 3D Humans: I'll talk about hard problems that need to be solved for 3D human foundation models, and about our latest work, which will be public soon.
June 2024: Received the PAMI Young Researcher Award, thank you!
Oct 2023: Talk on Creative Horizons with 3D Capture at the Stanford HAI Fall Conference.
My research lies at the intersection of computer vision, computer graphics, and machine learning. We live in a 3D world that is dynamic, full of life with people and animals interacting with each other and the environment. How can we build a system that can capture, perceive, and understand this complex 4D world like humans can from everyday photographs and video? More generally, how can we develop a computational system that can continually learn a model of the world from visual observations? The goal of my lab is to answer these questions.
Generative Proxemics: A Prior for 3D Social Interaction from Images
Lea Müller, Vickie Ye, Georgios Pavlakos, Michael Black, Angjoo Kanazawa
CVPR 2024
[project page] [arXiv preprint] [code] [bibtex]
LERF-TOGO: Language Embedded Radiance Fields for Zero-Shot Task-Oriented Grasping
Adam Rashid*, Satvik Sharma*, Chung Min Kim, Justin Kerr, Lawrence Chen, Angjoo Kanazawa, Ken Goldberg CoRL 2023 (Best Paper Finalist)
[Project Page]
[Code]
[Paper] [bibtex]
Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives
Tom Monnier, Jake Austin, Angjoo Kanazawa, Alexei A. Efros, Mathieu Aubry NeurIPS 2023
[Project Page]
[Code]
[Paper] [bibtex]
Humans in 4D: Reconstructing and Tracking Humans with Transformers
Shubham Goel, Georgios Pavlakos, Jathushan Rajasegaran, Angjoo Kanazawa*, Jitendra Malik* ICCV 2023
[Project Page]
[Code]
[🤗Demo]
[Paper] [bibtex]
Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions
Ayaan Haque, Matthew Tancik, Alexei A. Efros, Aleksander Holynski, Angjoo Kanazawa ICCV 2023
[Project Page]
[Code]
[Paper] [bibtex]
LERF: Language Embedded Radiance Fields
Justin Kerr*, Chung Min Kim*, Ken Goldberg, Angjoo Kanazawa, Matthew Tancik ICCV 2023
[Project Page]
[Code]
[Paper] [bibtex]
👻Nerfbusters🧹: Removing Ghostly Artifacts from Casually Captured NeRFs
Frederik Warburg*, Ethan Weber*, Matt Tancik, Aleksander Holynski, Angjoo Kanazawa ICCV 2023
[Project Page]
[Code]
[Paper] [bibtex]
NerfAcc: Efficient Sampling Accelerates NeRFs
Ruilong Li, Hang Gao, Matthew Tancik, Angjoo Kanazawa
ICCV 2023
[Documentation]
[Github]
[Paper] [bibtex]
Nerfstudio: A Modular Framework for Neural Radiance Field Development
Matthew Tancik*, Ethan Weber*, Evonne Ng*, Ruilong Li, Brent Yi, Justin Kerr, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa SIGGRAPH 2023 conference track
[Documentation]
[Code]
[Paper]
[bibtex]
Decoupling Human and Camera Motion from Videos in the Wild
Vickie Ye, Georgios Pavlakos, Jitendra Malik, Angjoo Kanazawa CVPR 2023
[Project Page]
[Code]
[Paper]
[bibtex]
K-Planes: Explicit Radiance Fields in Space, Time, and Appearance
Sara Fridovich-Keil*, Giacomo Meanti*, Frederik Rahbæk Warburg, Benjamin Recht, Angjoo Kanazawa CVPR 2023
[Project Page]
[Code]
[Paper] [bibtex]
On the Benefits of 3D Pose and Tracking for Human Action Recognition
Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, Christoph Feichtenhofer, Jitendra Malik CVPR 2023
[Project Page]
The One Where They Reconstructed 3D Humans and Environments in TV Shows
Georgios Pavlakos*, Ethan Weber*, Matthew Tancik, Angjoo Kanazawa ECCV 2022
[Project Page] [Code and Data] [Paper] [bibtex]
Studying Bias in GANs through the Lens of Race
Vongani H. Maluleke*, Neerja Thakkar*, Tim Brooks, Ethan Weber, Trevor Darrell, Alexei A. Efros, Angjoo Kanazawa, Devin Guillory ECCV 2022
[Project Page] [Code] [Paper] [bibtex]
Plenoxels: Radiance Fields without Neural Networks
Alex Yu*, Sara Fridovich-Keil*, Matthew Tancik, Qinhong Chen, Ben Recht, Angjoo Kanazawa CVPR 2022 (Oral)
[Project Page] [Paper] [Code]
[bibtex]
Human Mesh Recovery from Multiple Shots
Georgios Pavlakos, Jitendra Malik, Angjoo Kanazawa CVPR 2022
[Project Page] [Paper]
[bibtex]
Differentiable Gradient Sampling for Learning Implicit 3D Scene Reconstructions from a Single Image
Shizhan Zhu, Sayna Ebrahimi, Angjoo Kanazawa, Trevor Darrell ICLR 2022
[Project Page] [Paper] [Code] [bibtex]
Tracking People with 3D Representations
Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, Jitendra Malik NeurIPS 2021
[Project Page] [Paper] [Code] [bibtex]
Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image
Andrew Liu*, Richard Tucker*, Varun Jampani, Ameesh Makadia, Noah Snavely, Angjoo Kanazawa ICCV 2021 (Oral)
[Project Page with Demo] [Paper] [Code] [bibtex]
PlenOctrees for Real-time Rendering of Neural Radiance Fields
Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, Angjoo Kanazawa ICCV 2021 (Oral)
[Project Page with Demo] [Paper] [Code] [bibtex]
Reconstructing Hand-Object Interactions in the Wild
Zhe Cao*, Ilija Radosavovic*, Angjoo Kanazawa, Jitendra Malik ICCV 2021
[Project Page] [Paper] [MOW Dataset] [bibtex]
AI Choreographer: Music Conditioned 3D Dance Generation with AIST++
Ruilong Li*, Shan Yang*, David A. Ross, Angjoo Kanazawa ICCV 2021
[Project Page] [Paper] [Dataset] [Code] [bibtex]
AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control
Xue Bin Peng*, Ze Ma*, Pieter Abbeel, Sergey Levine, Angjoo Kanazawa SIGGRAPH 2021
[Project Page] [Paper] [Code] [bibtex]
KeypointDeformer: Unsupervised 3D Keypoint Discovery for Shape Control
Tomas Jakab, Richard Tucker, Ameesh Makadia, Jiajun Wu, Noah Snavely, Angjoo Kanazawa CVPR 2021 (Oral)
[Project website] [Paper] [Code] [bibtex]
pixelNeRF: Neural Radiance Fields from One or Few Images
Alex Yu, Vickie Ye, Matthew Tancik, Angjoo Kanazawa CVPR 2021
[Project Page/Code] [paper] [bibtex]
An Analysis of SVD for Deep Rotation Estimation
Jake Levinson, Carlos Esteves, Kefan Chen, Noah Snavely, Angjoo Kanazawa, Afshin Rostamizadeh, Ameesh Makadia NeurIPS 2020
[Github] [paper] [bibtex]
Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild
Jason Y. Zhang*, Sam Pepose*, Hanbyul Joo, Deva Ramanan, Jitendra Malik, Angjoo Kanazawa ECCV 2020
[project page]
[Github] [arXiv preprint] [bibtex]
Three-D Safari: Learning to Estimate Zebra Pose, Shape, and Texture from Images "In the Wild"
Silvia Zuffi, Angjoo Kanazawa, Tanya Berger-Wolf, Michael J. Black ICCV 2019
[Github] [arXiv preprint] [bibtex]
PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization
Shunsuke Saito*, Zeng Huang*, Ryota Natsume*, Shigeo Morishima, Angjoo Kanazawa, Hao Li
(* equal contribution) ICCV 2019
[project page] [arXiv preprint] [video] [bibtex]
Predicting 3D Human Dynamics from Video
Jason Y. Zhang, Panna Felsen, Angjoo Kanazawa, Jitendra Malik ICCV 2019
[project page] [arXiv preprint] [video] [bibtex]
Learning 3D Human Dynamics from Video
Angjoo Kanazawa*, Jason Y. Zhang*, Panna Felsen*, Jitendra Malik CVPR 2019
[project page] [arXiv preprint] [video] [bibtex]
Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow
Xue Bin Peng, Angjoo Kanazawa, Sam Toyer, Pieter Abbeel, Sergey Levine ICLR 2019
[project page] [code] [arXiv preprint] [video] [bibtex]
SFV: Reinforcement Learning of Physical Skills from Videos
Xue Bin Peng, Angjoo Kanazawa, Jitendra Malik, Pieter Abbeel, Sergey Levine ACM Transactions on Graphics (Proc. SIGGRAPH Asia 2018)
[project page] [pdf] [BAIR Blog] [arXiv preprint] [video] [bibtex]
SfSNet: Learning Shape, Reflectance and Illuminance of Faces 'in the Wild'
Soumyadip Sengupta, Angjoo Kanazawa, Carlos D. Castillo, David W. Jacobs CVPR 2018 (Spotlight)
[project page with code] [pdf] [arXiv preprint] [bibtex]
Lions and Tigers and Bears: Capturing Non-Rigid, 3D, Articulated Shape from Images
Silvia Zuffi, Angjoo Kanazawa, Michael J. Black CVPR 2018 (Spotlight)
[project page with 3D models] [pdf] [bibtex]
Towards Accurate Marker-less Human Shape and Pose Estimation over Time
Yinghao Huang, Federica Bogo, Christoph Lassner, Angjoo Kanazawa, Peter V. Gehler, Javier Romero, Ijaz Akhter, Michael J. Black International Conference on 3D Vision (3DV), 2017.
[pdf] [bibtex]
3D Menagerie: Modeling the 3D shape and pose of animals
Silvia Zuffi, Angjoo Kanazawa, David W. Jacobs, Michael J. Black Computer Vision and Pattern Recognition (CVPR) 2017. (Spotlight)
[project page with model and demo] [pdf] [arXiv] [bibtex]
Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image
Federica Bogo*, Angjoo Kanazawa*, Christoph Lassner, Peter Gehler,
Javier Romero and Michael J. Black (* equal contribution) European Conference on Computer Vision (ECCV) 2016. (Spotlight)
[pdf] [project page with code] [Spotlight video] [bibtex]
WarpNet: Weakly Supervised Matching for Single-View Reconstruction
Angjoo Kanazawa, David W. Jacobs, Manmohan Chandraker Computer Vision and Pattern Recognition (CVPR) 2016.
[pdf]
[supp]
[test set ids & our curves] [bibtex]
Learning 3D Deformation of Animals from 2D Images
Angjoo Kanazawa, Shahar Kovalsky, Ronen Basri, David W. Jacobs Eurographics 2016 (Best Paper Award)
[pdf] [code on github] [fastforward] [See the results here] [bibtex]
Locally Scale-invariant Convolutional Neural Network
Angjoo Kanazawa, Abhishek Sharma, David W. Jacobs Deep Learning and Representation Learning Workshop: NIPS 2014.
[pdf]
[code on github] [bibtex]
Affordance of Object Parts from Geometric Features
Austin Myers, Angjoo Kanazawa, Cornelia Fermuller, Yiannis Aloimonos RGB-D: Advanced Reasoning with Depth Cameras: RSS 2014
[pdf] [Part Affordance Dataset] [bibtex]
Dog Breed Classification Using Part Localization
Jiongxin Liu, Angjoo Kanazawa, Peter Belhumeur, David W. Jacobs European Conference on Computer Vision (ECCV), Oct. 2012.
[pdf]
[slides] [bibtex]
Try our iPhone app: Dogsnap!
Columbia Dogs with Parts dataset used in the paper: zip file (2.43GB)
133 breeds recognized by the American Kennel Club
8,351 images of dogs from Google image search, Image-net, and Flickr
8 part locations annotated for each image
Thesis
Single-View 3D Reconstruction of Animals
Angjoo Kanazawa
Doctoral Thesis, University of Maryland, August 2017
[pdf]
[slides]