Curiosity-driven Exploration for Mapless Navigation with Deep Reinforcement Learning
Oleksii Zhelo¹, Jingwei Zhang¹, Lei Tai², Ming Liu², Wolfram Burgard¹
¹Albert Ludwig University of Freiburg
²The Hong Kong University of Science and Technology
[Download Paper]
[GitHub Code]

This paper investigates exploration strategies for Deep Reinforcement Learning (DRL) methods that learn navigation policies for mobile robots. In particular, we augment the normal extrinsic reward for training DRL algorithms with an intrinsic reward signal measured by curiosity. We test our approach in a mapless navigation setting, where the autonomous agent must navigate, without an occupancy map of the environment, to targets whose relative locations can be easily acquired via low-cost solutions (e.g., visible-light localization, Wi-Fi signal localization). We validate that intrinsic motivation is crucial for improving DRL performance in tasks with challenging exploration requirements. Our experimental results show that the proposed method learns navigation policies more effectively and generalizes better to previously unseen environments.
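Curiosity signals of this kind are typically computed as the prediction error of a learned forward dynamics model: transitions the model predicts poorly (i.e., novel ones) yield a large bonus, which decays as the model improves. The sketch below illustrates only that general idea, not the paper's actual architecture; the linear forward model, the class name, and all hyperparameters (`lr`, `eta`) are illustrative assumptions.

```python
import numpy as np

# Illustrative curiosity-style intrinsic reward (assumption: a simple linear
# forward model over raw state features stands in for learned ICM networks).
class ForwardModelCuriosity:
    def __init__(self, state_dim, action_dim, lr=0.05, eta=0.5, seed=0):
        rng = np.random.default_rng(seed)
        # Predicts the next state from (state, one-hot action).
        self.W = rng.normal(scale=0.1, size=(state_dim + action_dim, state_dim))
        self.lr, self.eta, self.action_dim = lr, eta, action_dim

    def intrinsic_reward(self, s, a, s_next):
        x = np.concatenate([s, np.eye(self.action_dim)[a]])
        pred = x @ self.W
        err = s_next - pred
        # Move the forward model toward the observed transition.
        self.W += self.lr * np.outer(x, err)
        # Curiosity bonus: scaled squared prediction error.
        return self.eta * float(err @ err)

curiosity = ForwardModelCuriosity(state_dim=2, action_dim=2)
s = np.zeros(2)
# A deterministic transition visited repeatedly becomes predictable,
# so the intrinsic bonus shrinks over time.
bonuses = [curiosity.intrinsic_reward(s, 0, s + 1.0) for _ in range(50)]
```

In training, this bonus would be added to the extrinsic reward at each step, encouraging the agent to seek out transitions its world model cannot yet predict.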

Demo Video


[Paper 1.1MB]  [arXiv]