VR-Goggles for Robots: Real-to-sim Domain Adaptation for Visual Control
Jingwei Zhang*1
Lei Tai*2
Peng Yun2
Yufeng Xiong1
Ming Liu2
Joschka Boedecker1
Wolfram Burgard1
* indicates equal contributions.
1Albert Ludwig University of Freiburg.
2The Hong Kong University of Science and Technology.
[Paper]
[Supplement]
[Code]

IEEE Robotics and Automation Letters, 2019




Abstract

In this paper, we deal with the reality gap from a novel perspective, targeting the transfer of Deep Reinforcement Learning (DRL) policies learned in simulated environments to the real-world domain for visual control tasks. Instead of following the common practice of increasing the visual fidelity of the synthetic images output by simulators during the training phase, we tackle the problem by translating real-world image streams back to the synthetic domain during the deployment phase, to make the robot feel at home. We propose this as a lightweight, flexible, and efficient solution for visual control, since 1) no extra transfer step is required during the expensive training of DRL agents in simulation; 2) the trained DRL agents are not constrained to being deployable in only one specific real-world environment; 3) the policy training and the transfer operations are decoupled and can be conducted in parallel. In addition, we propose a simple yet effective shift loss that is agnostic to the downstream task, to constrain the consistency between consecutive frames, which is important for consistent policy outputs. We validate the shift loss on artistic style transfer for videos and on domain adaptation, and validate our visual control approach in both indoor and outdoor robotics experiments.
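The shift loss amounts to asking the real-to-sim translator to commute with image shifts: translating a shifted frame should give (the shifted version of) the same result as translating the original frame, which is what the shifted-input comparisons below visualize. The following is a minimal sketch of this idea in PyTorch; the generator G, the cropping-based shift, and the max_shift value are illustrative assumptions rather than the released implementation, and G is assumed to be fully convolutional so that its output has the same spatial size as its input.

```python
import torch
import torch.nn.functional as F


def crop_shift(x: torch.Tensor, di: int, dj: int) -> torch.Tensor:
    # Emulate a shift by (di, dj) pixels by cropping to the region that
    # survives the shift; both tensors compared in the loss are cropped the
    # same way, so their spatial sizes match.
    _, _, h, w = x.shape
    return x[:, :, max(di, 0):h + min(di, 0), max(dj, 0):w + min(dj, 0)]


def shift_loss(G: torch.nn.Module, x: torch.Tensor, max_shift: int = 8) -> torch.Tensor:
    # Draw a random shift and penalise the difference between
    # "translate the shifted frame" and "shift the translated frame".
    di = int(torch.randint(-max_shift, max_shift + 1, (1,)).item())
    dj = int(torch.randint(-max_shift, max_shift + 1, (1,)).item())
    shifted_then_translated = G(crop_shift(x, di, dj))
    translated_then_shifted = crop_shift(G(x), di, dj)
    return F.mse_loss(shifted_then_translated, translated_then_shifted)
```

At deployment, the translator would simply be applied to each incoming camera frame before it reaches the simulation-trained policy, so the policy itself never needs to be adapted to the real-world domain.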


Demo Video



Comparison of outputs for a shifted single input image: Input / CycleGAN / VR-Goggles

Comparison of outputs for sequential inputs: Input / CycleGAN / VR-Goggles


Paper

[Paper 5.7MB]
[Supplement file 2.0MB]