Jianhao Jiao, Haoyang Ye, Yilong Zhu, Ming Liu from RAM-LAB
The Hong Kong University of Science and Technology
[Preprint] [Supplementary Materials]
Combining multiple LiDARs enables a robot to maximize its perceptual awareness of the environment and obtain sufficient measurements, which is promising for simultaneous localization and mapping (SLAM). This paper proposes a system that achieves robust and simultaneous extrinsic calibration, odometry, and mapping for multiple LiDARs. Our approach starts with measurement preprocessing to extract edge and planar features from raw measurements. After a motion and extrinsic initialization procedure, a sliding window-based multi-LiDAR odometry runs onboard to estimate poses, with online calibration refinement and convergence identification. We further develop a mapping algorithm that constructs a global map and optimizes poses with sufficient features, together with a method to model and reduce data uncertainty. We validate our approach's performance with extensive experiments on ten sequences (4.60 km total length) for both calibration and SLAM, and compare against the state-of-the-art. We demonstrate that the proposed work is a complete, robust, and extensible system for various multi-LiDAR setups.
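The preprocessing step mentioned above classifies raw scan points into edge and planar features. A common way to do this (used by LOAM-style pipelines) is a local smoothness metric: sum the differences between a point and its neighbors along the scan line, and treat high-curvature points as edges and low-curvature points as planar. The sketch below illustrates this idea in Python; the function name, neighborhood size, and thresholds are illustrative assumptions, not the exact values used by M-LOAM.

```python
import numpy as np

def classify_scan_points(scan, k=5, edge_thresh=0.1, planar_thresh=0.05):
    """Label points of one LiDAR scan line as 'edge' or 'planar' using a
    LOAM-style smoothness metric. `scan` is a list of 3D numpy points.
    k, edge_thresh, and planar_thresh are illustrative, not M-LOAM's values."""
    n = len(scan)
    labels = [None] * n  # points near the ends of the line stay unlabeled
    for i in range(k, n - k):
        p = scan[i]
        # Sum of offsets from the point to its 2k neighbors; for a locally
        # straight/flat neighborhood these offsets largely cancel.
        diff = sum(scan[j] - p for j in range(i - k, i + k + 1))
        c = np.linalg.norm(diff) / (2 * k * np.linalg.norm(p))
        if c > edge_thresh:
            labels[i] = 'edge'     # sharp: large local curvature
        elif c < planar_thresh:
            labels[i] = 'planar'   # flat: small local curvature
    return labels
```

For example, a point in the middle of a straight wall segment yields a smoothness value near zero (planar), while a point at a corner, whose left and right neighbors lie along different directions, yields a large value (edge).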
Source code is available on GitHub.
If you find M-LOAM (Multi-LiDAR Odometry and Mapping) helpful for your academic research, please kindly cite our related paper [bib].
All sequences used in our paper can be downloaded here.
Our paper on multi-LiDAR object detection can be downloaded here:
- [Preprint]
Our paper on M-LOAM with the greedy-based feature selection enhancement can be downloaded here: