Downloads
Extrinsic Calibration of 3D Range Finder and Camera
For the review of the IROS 2017 submission.
This is an extrinsic calibration demo for a 3D LiDAR and a camera that does not need an auxiliary object or human intervention. Since the simulation data (point clouds and images) are quite large, we do not provide them for download, but they are easy to generate yourself with the V-REP scene and the ROS package. More detailed experimental results are given in report_extrinsic.pdf.
| Author | LIAO Qinghai |
| --- | --- |
| File Type | compressed file (.zip) |
| Download | |
Reference:
- Qinghai Liao, Ming Liu, Lei Tai, Haoyang Ye, Extrinsic Calibration of 3D Range Finder and Camera without Auxiliary Object or Human Intervention
Visible Light Communication-based Localization
This dataset was taken in an environment with Visible Light Communication (VLC) light beacons, for the purpose of low-cost localization using VLC.
| Author | Kejie Qiu, Fangyi Zhang |
| --- | --- |
| File Type | rosbag |
| Topics | /energy: raw light intensity signal |
| | /map: reference map (for visualization) |
| | /tf: transforms as ground truth |
| Download | |
There are 70+ separate rosbags in the zip. The total length is over one hour.
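The bags can be inspected with the standard rosbag Python API. The sketch below only lists the topics of one bag and iterates over the /energy and /tf messages; the bag filename is a placeholder, and the message fields depend on the types actually recorded.

```python
# Minimal sketch: inspect one of the VLC rosbags with the rosbag Python API (ROS1).
# The filename is a placeholder; point it at one of the bags in the zip.
import rosbag

with rosbag.Bag('vlc_sequence_01.bag') as bag:
    # List the topics and message types actually contained in this bag.
    info = bag.get_type_and_topic_info()
    for topic, topic_info in info.topics.items():
        print(topic, topic_info.msg_type, topic_info.message_count)

    # Iterate over the raw light-intensity signal and the ground-truth transforms.
    for topic, msg, t in bag.read_messages(topics=['/energy', '/tf']):
        print(t.to_sec(), topic)  # msg fields depend on the recorded message type
```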
Reference:
- Ming Liu, Kejie Qiu, Shaohua Li, Fengyu Che, Liang Wu, C. Patrick Yue, Towards Indoor Localization using Visible Light Communication for Consumer Electronic Devices, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014. pdf bibtex
- Fangyi Zhang, Kejie Qiu, Ming Liu, Asynchronous Blind Signal Decomposition Using Tiny-Length Code for Visible Light Communication-Based Indoor Localization, IEEE International Conference on Robotics and Automation (ICRA), 2015. Link pdf bibtex
- Kejie Qiu, Fangyi Zhang, Ming Liu, Visible Light Communication-based Indoor Localization and Metric-free Path Planning, IEEE International Conference on Automation Science and Engineering (CASE), Gothenburg, Sweden, 2015.
Using Deep Learning for Exploration
Original Rosbags
This dataset was taken with a Microsoft Kinect on a TurtleBot. The rosbags include synchronized RGB images, depth images, and control commands.
| Author | Shaohua Li, Lei Tai |
| --- | --- |
| File Type | rosbag |
| Topics | /camera/depth/image_raw: depth images |
| | /camera/rgb/image_color: colour images |
| | /joy: joystick control commands |
| Download | |
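For offline use, the depth images and control commands can be paired with the rosbag Python API and cv_bridge. The sketch below is only an illustration: the bag filename, output directory, and the pairing policy (each depth image is associated with the most recent /joy message) are our own choices, not part of the dataset.

```python
# Sketch: pair each depth image with the most recent joystick command (ROS1).
# The bag filename and output directory are placeholders.
import os
import rosbag
import cv2
from cv_bridge import CvBridge

bag_path = 'kinect_turtlebot.bag'   # placeholder name
out_dir = 'extracted'
os.makedirs(out_dir, exist_ok=True)

bridge = CvBridge()
last_joy = None
pairs = []

with rosbag.Bag(bag_path) as bag:
    topics = ['/camera/depth/image_raw', '/joy']
    for topic, msg, t in bag.read_messages(topics=topics):
        if topic == '/joy':
            last_joy = msg.axes          # sensor_msgs/Joy axes
        elif last_joy is not None:
            # Kinect depth is typically 16-bit; 'passthrough' keeps the raw encoding.
            depth = bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
            fname = os.path.join(out_dir, 'depth_%09d.png' % msg.header.seq)
            cv2.imwrite(fname, depth)
            pairs.append((fname, last_joy))

# 'pairs' now holds (image file, joystick axes) tuples for training.
```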
Extracted Images with Labels
This dataset contains the images extracted from the rosbags listed above. The text file lists the control-command label for every image.
| Author | Shaohua Li, Lei Tai |
| --- | --- |
| File Type | depth images and labels text file |
| Download | |
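The exact layout of the label file is not specified here. The sketch below assumes one "image_filename label" pair per line, which is only a guess for illustration; check the file in the download and adapt the parsing.

```python
# Sketch: load the extracted depth images and their control-command labels.
# ASSUMPTION: each line of the label file looks like "<image_filename> <label>";
# check the actual file in the download and adapt the parsing accordingly.
import cv2

def load_dataset(label_file, image_dir='.'):
    samples = []
    with open(label_file) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2:
                continue
            name, label = parts[0], parts[1]
            img = cv2.imread('%s/%s' % (image_dir, name), cv2.IMREAD_UNCHANGED)
            if img is not None:
                samples.append((img, label))
    return samples
```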
Reference:
- Lei Tai, Shaohua Li, and Ming Liu, A Deep-network Solution Towards Model-less Obstacle Avoidance, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 2016. pdf bibtex
GPU Implementation of Tensor Voting for 3D data points (point-cloud)
The package provides an implementation for ROS. It subscribes to a raw point cloud (sensor_msgs/PointCloud2) and outputs a PointCloud2 with the "stick", "plate", and "ball" tensor saliencies as features. It depends on the "ethzasl_mapping" packages, which are available at https://github.com/ethz-asl
Demo code for basic usage and sample datasets are also provided. Download (14 MB)
An example that is independent of ROS (including a CPU implementation): Download (2 MB)
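As a usage sketch, the node's output can be consumed from any ROS node. The output topic name and the saliency field names in the snippet below are placeholders; use the names actually published by the demo launch files.

```python
# Sketch: subscribe to the tensor-voting output and read per-point saliencies (ROS1).
# ASSUMPTIONS: the output topic name and the field names 'stick', 'plate', 'ball'
# are placeholders; substitute the names published by the demo code.
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def callback(cloud):
    for p in point_cloud2.read_points(
            cloud, field_names=('x', 'y', 'z', 'stick', 'plate', 'ball'),
            skip_nans=True):
        x, y, z, stick, plate, ball = p
        # e.g. keep points whose dominant saliency is "plate" (surface-like)

rospy.init_node('tensor_voting_listener')
rospy.Subscriber('/tensor_voting/output', PointCloud2, callback)  # placeholder topic
rospy.spin()
```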
If you have problems, please contact me directly: eelium@ust.hk. The updated code is expected to be online no earlier than May 2013. Please cite the following paper if you are interested:
Ming Liu, Francois Pomerleau, Francis Colas and Roland Siegwart, Normal Estimation for Pointcloud using GPU based Sparse Tensor Voting, IEEE International Conference on Robotics and Biomimetics (ROBIO), 2012. pdf, bibtex
Dataset of Omnidirectional Camera with Vicon Ground Truth
The dataset was taken with an omnidirectional camera. A Vicon system provides ground truth for position in the 2D motion plane and heading with sub-millimetre precision. We provide the data in two forms: ROS bags, and sequences of images with a pose log.
The ROS bags include the following topics (a short reading sketch follows the list):
- /camera/camera_info sensor_msgs/CameraInfo
- /camera/image_raw sensor_msgs/Image
- /unwrapper/unwrapped sensor_msgs/Image
- /vicon/MING_OMNI/MING_OMNI geometry_msgs/TransformStamped
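A minimal sketch for reading the Vicon ground truth from one of the bags is given below; the bag filename is a placeholder, and taking the yaw about the z-axis as the heading is an assumption.

```python
# Sketch: read the Vicon ground-truth poses from one of the bags (ROS1).
# The bag filename is a placeholder; using yaw about z as the "heading" is an assumption.
import rosbag
from tf.transformations import euler_from_quaternion

poses = []
with rosbag.Bag('omni_vicon_sequence.bag') as bag:
    for _, msg, t in bag.read_messages(topics=['/vicon/MING_OMNI/MING_OMNI']):
        tr = msg.transform.translation
        q = msg.transform.rotation
        yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])[2]
        poses.append((t.to_sec(), tr.x, tr.y, yaw))  # time, 2D position, heading
```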
The "seq + pose log" form includes unwrapped images, raw panoramic images, and poses.
Still Camera with Four People (146 MB)
Please use the following references for the dataset:
- Ming Liu, Cedric Pradalier, Francois Pomerleau, Roland Siegwart, Scale-only Visual Homing from an Omnidirectional Camera, IEEE International Conference on Robotics and Automation (ICRA), 2012, PDF , bibtex
- Ming Liu, Cedric Pradalier, Roland Siegwart, Visual Homing from Scale with an Uncalibrated Omnidirectional Camera, IEEE Transactions on Robotics, 2014.
- Ming Liu, Cedric Pradalier, Francois Pomerleau, Roland Siegwart, The Role of Homing in Visual Topological Navigation, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012, PDF , bibtex
LibCNN
Author: Shaohua Li
A library of convolutional neural networks for real-time robotic applications. GitHub Link
Omnidirectional Camera Calibration Toolbox
Author: Jonas Eichenberger
This is a MATLAB calibration toolbox for omnidirectional cameras.
The most common omnidirectional camera models are supported for calibration, and the toolbox has good corner-extraction capabilities even for strongly distorted omnidirectional images.
The toolbox is based on LIBOMNICAL.
This toolbox fixes some bugs, adds additional features, and was mainly used and tested with the Mei model. The other model implementations therefore probably still need some bug fixes, and everybody is welcome to contribute!
The idea is to have a general omnidirectional camera calibration toolbox for all the common models and a platform for future model implementations.
For more information, see our GitHub.
Other downloads
- A patch which adds odometry calibration variables to the p2os_driver ROS package, using the "revcount" and "ticksmm" parameters. For further explanation of these parameters, please refer to the software manual. Patch Download Example launcher
- A GUI for ROS developers. It allows you to observe the ROSCORE status, load local launch files and run them remotely, shut down a remote robot via SSH with a single click, etc. Download Screenshot
- A shell script which helps to compare the current .tex file with a revision on the SVN server. It generates a new .tex file with tags from the {changebar} package. Download
Usage:
./auto-bar.sh 500 target.tex
This compares the current target.tex with revision 500 and generates a new file, target.diff.tex. (Modified from the work by Matthew Johnson.)