Unsupervised Visual Attention and Invariance for Reinforcement Learning

Xudong Wang*   Long Lian*   Stella X. Yu
UC Berkeley / ICSI   
[Preprint]   [PDF]   [Code]   [Citation]
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)


Abstract

Vision-based reinforcement learning (RL) is successful, but how to generalize it to unknown test environments remains challenging. Existing methods focus on training an RL policy that is universal to changing visual domains, whereas we focus on extracting visual foreground that is universal, feeding clean invariant vision to the RL policy learner. Our method is completely unsupervised, without manual annotations or access to environment internals.
Given videos of actions in a training environment, we learn how to extract foregrounds with unsupervised keypoint detection, followed by unsupervised visual attention to automatically generate a foreground mask per video frame. We can then introduce artificial distractors and train a model to reconstruct the clean foreground mask from noisy observations. Only this learned model is needed at test time to provide distraction-free visual input to the RL policy learner.
Our Visual Attention and Invariance (VAI) method significantly outperforms the state-of-the-art on visual domain generalization, gaining 15–49% (61–229%) more cumulative rewards per episode on DeepMind Control (our DrawerWorld Manipulation) benchmarks. Our results demonstrate that it is not only possible to learn domain-invariant vision without any supervision, but freeing RL from visual distractions also makes the policy more focused and thus far better.
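
As a concrete illustration of the recipe above, here is a minimal PyTorch-style sketch of the third stage: corrupt a frame with artificial distractors and train an encoder-decoder to recover the clean foreground given by the pseudo ground-truth mask from the first two (unsupervised) stages. The names distract and encdec and the plain MSE loss are our assumptions for illustration, not the released implementation.

import torch.nn.functional as F

def reconstruction_step(frame, fg_mask, distract, encdec, opt):
    # frame:   (B, 3, H, W) clean observation from the training environment
    # fg_mask: (B, 1, H, W) pseudo ground-truth mask from unsupervised
    #          keypoint detection and unsupervised visual attention
    target = frame * fg_mask           # clean foreground to be recovered
    noisy = distract(frame)            # overlay artificial distractors
    pred = encdec(noisy)               # predict foreground from noisy input
    loss = F.mse_loss(pred, target)    # pixel-wise reconstruction loss (assumed)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()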


Takeaway: Adapt the vision, not RL!

It is not only possible to learn domain-invariant vision without any supervision, but freeing RL from visual distractions also makes the policy more focused and thus far better.


Model

VAI is a fully unsupervised method that makes vision-based RL more generalizable to unknown test environments. At inference time, only the third module, which reconstructs the clean foreground from noisy observations, is used.
[Code]
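
At test time, the frozen reconstruction model simply sits in front of an unchanged policy as a visual adapter. Below is a minimal sketch of that interface using the classic gym ObservationWrapper pattern; ForegroundWrapper, adapter, and the tensor shapes are our assumptions for illustration, not the released API.

import gym
import torch

class ForegroundWrapper(gym.ObservationWrapper):
    """Route every raw observation through the frozen reconstruction
    model so the policy only sees distraction-free foregrounds."""

    def __init__(self, env, adapter):
        super().__init__(env)
        self.adapter = adapter.eval()  # frozen third-stage model (assumed)

    def observation(self, obs):
        with torch.no_grad():
            x = torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)  # (1, C, H, W)
            return self.adapter(x).squeeze(0).numpy()

A policy learner then interacts with ForegroundWrapper(env, adapter) exactly as it would with the raw environment.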


Results

Our method significantly advances the state-of-the-art in vision-based RL on all evaluated benchmarks. The current SOTA method, which learns to adapt at test time, almost always fails on challenging textured backgrounds, since CNNs' sensitivity to texture poses a major challenge for visual adaptation. In contrast, VAI, with foreground extraction and strong generic augmentation, is robust to never-seen textures without sacrificing training performance. For more results, please see our paper.
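
The gains above are measured in cumulative reward per episode; here is a short sketch of that metric under the classic gym step API, with policy.act as a hypothetical interface:

def episode_return(env, policy):
    # Cumulative reward over one episode, the metric reported above.
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done, _ = env.step(policy.act(obs))
        total += reward
    return total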


Talk



Materials

[Paper]   [Poster]

Citation

If you find our work inspiring or use our codebase in your research, please cite our paper:

@InProceedings{Wang_2021_CVPR,
    author={Wang, Xudong and Lian, Long and Yu, Stella X.},
    title={Unsupervised Visual Attention and Invariance for Reinforcement Learning},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month={June},
    year={2021},
    pages={6677-6687}
}


Acknowledgments

This work was supported, in part, by Berkeley Deep Drive.