Date: July 26, 2017
Time: 9am to 5:15pm
Venue: Hawaii Convention Center
Christian Häne, UC Berkeley, email@example.com
Jakob Engel, Oculus Research, Redmond, firstname.lastname@example.org
Srikumar Ramalingam, University of Utah, email@example.com
Sudipta Sinha, Microsoft Research, firstname.lastname@example.org
3D scene reconstruction from images is a fundamental problem in computer vision that has witnessed rapid progress over the last two decades. Reconstruction techniques typically involve solving parameter estimation problems that are inherently ill-posed, and therefore require appropriate regularization to handle noise and ambiguities in the input data. While traditional methods relied mostly on low-level geometric priors, mid- and high-level scene information in the form of structured and semantic scene priors is now increasingly being used.

In this tutorial, we will discuss techniques for single-image reconstruction, dense stereo correspondence in images and video, multi-view stereo, volumetric 3D reconstruction, mesh-based reconstruction, and depth-map fusion. This includes semantic depth-map fusion techniques, which use semantic information to guide the regularization and ultimately output a semantically segmented 3D model. We will cover batch 3D reconstruction methods, and will also outline and demonstrate recent capabilities of Visual SLAM (Simultaneous Localization and Mapping) systems that enable real-time 3D reconstruction. Additionally, we will discuss recent successful applications of deep learning to geometric tasks such as dense stereo matching, 3D shape prediction, and camera pose estimation.
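To give a flavor of the depth-map fusion topic above: many volumetric fusion pipelines follow the classic truncated signed distance function (TSDF) averaging scheme of Curless and Levoy. The sketch below is illustrative only, not code from the tutorial; all function and parameter names are our own, and it assumes a pinhole camera with known intrinsics and a world-to-camera pose.

```python
import numpy as np

def integrate_depth(tsdf, weights, depth, K, world_to_cam, origin, voxel_size, trunc=0.1):
    """Fuse one depth map into a TSDF volume (Curless-Levoy style running average).

    tsdf, weights : (nx, ny, nz) float arrays, updated in place
    depth         : (h, w) metric depth image
    K             : 3x3 pinhole camera intrinsics
    world_to_cam  : 4x4 rigid transform from world to camera frame
    origin        : world coordinates of voxel (0, 0, 0)
    voxel_size    : edge length of one voxel, in meters
    trunc         : truncation distance for signed distance values, in meters
    """
    nx, ny, nz = tsdf.shape
    h, w = depth.shape

    # World coordinates of all voxel centers, flattened to shape (N, 3).
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")
    pts = origin + voxel_size * np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)

    # Transform voxel centers into the camera frame and project them.
    pts_cam = pts @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = pts_cam[:, 2]
    zs = np.maximum(z, 1e-9)  # guard against divide-by-zero; such voxels are masked below
    ur = np.round(K[0, 0] * pts_cam[:, 0] / zs + K[0, 2])
    vr = np.round(K[1, 1] * pts_cam[:, 1] / zs + K[1, 2])

    # Keep voxels in front of the camera that project inside the image.
    valid = (z > 0) & (ur >= 0) & (ur < w) & (vr >= 0) & (vr < h)
    u = ur.clip(0, w - 1).astype(int)
    v = vr.clip(0, h - 1).astype(int)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]

    # Truncated signed distance along the viewing ray; discard voxels far behind the surface.
    sdf = d - z
    valid &= (d > 0) & (sdf > -trunc)
    tsdf_obs = np.clip(sdf / trunc, -1.0, 1.0)

    # Per-voxel running weighted average (each observation gets weight 1).
    t, wts = tsdf.reshape(-1), weights.reshape(-1)  # views into the volumes
    t[valid] = (wts[valid] * t[valid] + tsdf_obs[valid]) / (wts[valid] + 1.0)
    wts[valid] += 1.0
```

Calling this once per registered depth map accumulates a denoised implicit surface, from which a mesh can then be extracted (e.g. with marching cubes); semantic fusion variants additionally carry per-voxel label distributions alongside the TSDF.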