Supplementary Material for Occlusion-Aware Depth Estimation Using Light-Field Cameras

1. Algorithm Stages

In this section, we show the output of each stage of our algorithm. We first run edge detection on the central view of the input light field, then run our depth estimation algorithm on the light-field image to obtain an initial depth map and an occlusion response prediction, and finally use the occlusion response to regularize the initial depth into a final depth map. Running the occlusion predictor on this final depth yields a refined occlusion map. Below are examples on real scenes captured with a Lytro Illum camera.
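The data flow of these stages can be illustrated with a toy sketch. This is not the paper's actual method: the occlusion response here is approximated as a simple depth-gradient threshold, and the regularization as edge-stopped neighborhood averaging, both of which are hypothetical stand-ins chosen only to show how the occlusion map gates the depth refinement.

```python
import numpy as np

def occlusion_response(depth, thresh=0.2):
    # Toy occlusion predictor: large depth gradients mark likely
    # occlusion boundaries (a stand-in for the paper's predictor).
    gy, gx = np.gradient(depth)
    return np.hypot(gx, gy) > thresh

def regularize(depth, occ, iters=50):
    # Occlusion-aware smoothing: average each pixel with its
    # 4-neighbours, but freeze pixels flagged as occlusion
    # boundaries so depth edges stay sharp.
    d = depth.copy()
    for _ in range(iters):
        p = np.pad(d, 1, mode="edge")
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] +
               p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        d = np.where(occ, d, avg)
    return d

# Pipeline: initial depth -> occlusion -> regularized depth -> refined occlusion.
# Synthetic "initial depth": a noisy step edge (foreground vs. background).
rng = np.random.default_rng(0)
depth0 = np.where(np.arange(64)[None, :] < 32, 1.0, 2.0)
depth0 = depth0 + 0.05 * rng.standard_normal((64, 64))

occ0 = occlusion_response(depth0)      # initial occlusion
depth1 = regularize(depth0, occ0)      # final depth
occ1 = occlusion_response(depth1)      # refined occlusion
```

In this sketch the noise in flat regions is smoothed away while the depth discontinuity survives, so the refined occlusion map still localizes the boundary.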

Figure panels (left to right): Light-field Input (Central View), Edge Detection, Initial Depth, Initial Occlusion, Final Depth, Final Occlusion.

2. Depth Map Comparisons

In this section, we compare our final depth maps with current state-of-the-art methods [1-4] on various real-world scenes captured with a Lytro Illum camera.

Figure panels (left to right): Light-field Input (Central View), Our Result, Wanner et al. [1], Tao et al. [2], Yu et al. [3], Chen et al. [4].

Failure cases: like all competing algorithms, our method fails when the background region is very textureless or very small, since it cannot distinguish the background from the foreground.

[1] S. Wanner and B. Goldluecke. Globally consistent depth labeling of 4D light fields. In CVPR, 2012.
[2] M. Tao, S. Hadap, J. Malik, and R. Ramamoorthi. Depth from combining defocus and correspondence using light-field camera. In ICCV, 2013.
[3] Z. Yu, X. Guo, H. Ling, A. Lumsdaine, and J. Yu. Line assisted light field triangulation and stereo matching. In ICCV, 2013.
[4] C. Chen, H. Lin, Z. Yu, S. B. Kang, and J. Yu. Light field stereo matching using bilateral statistics of surface cameras. In CVPR, 2014.