Xuaner Zhang1, Ren Ng1, Qifeng Chen2
1UC Berkeley, 2HKUST
CVPR, 2018

Abstract

We present an approach to separating reflection from a single image. The approach uses a fully convolutional network trained end-to-end with losses that exploit low-level and high-level image information. Our loss function includes two perceptual losses: a feature loss from a visual perception network, and an adversarial loss that encodes characteristics of images in the transmission layers. We also propose a novel exclusion loss that enforces pixel-level layer separation. We create a dataset of real-world images with reflection and corresponding ground-truth transmission layers for quantitative evaluation and model training. We validate our method through comprehensive quantitative experiments and show that our approach outperforms state-of-the-art reflection removal methods in PSNR, SSIM, and a perceptual user study. We also extend our method to two other image enhancement tasks to demonstrate the generality of our approach.
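As a rough illustration of the exclusion-loss idea described above (edges should appear in the transmission layer or the reflection layer, but not both), here is a minimal NumPy sketch. The pyramid depth, the tanh squashing without per-level normalization factors, and the forward-difference gradient are simplifying assumptions for illustration, not the paper's exact formulation; see the released code for the actual implementation.

```python
import numpy as np

def _grad(img):
    """Horizontal and vertical forward differences of a 2-D array."""
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    return gx, gy

def exclusion_loss(transmission, reflection, levels=3):
    """Hypothetical single-channel sketch of a gradient exclusion loss.

    At each pyramid level, gradient magnitudes of the two predicted
    layers are squashed with tanh and multiplied elementwise, so the
    loss is small only when an edge is present in at most one layer.
    """
    loss = 0.0
    T = transmission.astype(np.float64)
    R = reflection.astype(np.float64)
    for _ in range(levels):
        tx, ty = _grad(T)
        rx, ry = _grad(R)
        # Penalize co-occurring edges in the two layers.
        loss += np.mean(np.tanh(np.abs(tx)) * np.tanh(np.abs(rx)))
        loss += np.mean(np.tanh(np.abs(ty)) * np.tanh(np.abs(ry)))
        # Downsample by 2 for the next pyramid level.
        T, R = T[::2, ::2], R[::2, ::2]
    return loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T = rng.random((64, 64))
    R = rng.random((64, 64))
    print("exclusion loss:", exclusion_loss(T, R))
```

In training, a term of this form would be added to the feature loss and adversarial loss mentioned in the abstract, with weights chosen on a validation set.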

Material

Paper    Supplementary Material    Code   

Dataset

Both the synthetic and real datasets are now available here

BibTeX

If you use our code or data, please cite:

@inproceedings{zhang2018single,
  title={Single image reflection separation with perceptual losses},
  author={Zhang, Xuaner and Ng, Ren and Chen, Qifeng},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={4786--4794},
  year={2018}
}