
    Implementation and Reconfiguration of Robot Operating System on Human Follower Transporter Robot

    The Robot Operating System (ROS) is an important platform for developing robot applications. One application area is the development of a Human Follower Transporter Robot (HFTR), which can be considered a custom mobile robot using differential drive steering and equipped with a Kinect sensor. This study discusses the development of the robot's navigation system by implementing Simultaneous Localization and Mapping (SLAM).
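
    A minimal, hypothetical rospy sketch (not the HFTR code) of how such a differential-drive follower might be commanded in ROS: it turns a Kinect-derived laser scan (e.g. produced by depthimage_to_laserscan) into geometry_msgs/Twist velocity commands on /cmd_vel. Topic names, gains, and the "follow the nearest object" rule are assumptions for illustration only.

```python
#!/usr/bin/env python
# Hypothetical follower sketch: steer a differential-drive robot toward the
# closest object in the scan while keeping a fixed standoff distance.
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan

def scan_callback(scan, pub):
    # Drop NaN/invalid readings before searching for the nearest beam.
    ranges = [r for r in scan.ranges if r == r and r > scan.range_min]
    if not ranges:
        return
    nearest = min(ranges)
    idx = list(scan.ranges).index(nearest)
    bearing = scan.angle_min + idx * scan.angle_increment  # angle to target [rad]

    cmd = Twist()
    cmd.angular.z = 1.0 * bearing                # turn toward the target (assumed gain)
    if nearest > 0.8:                            # keep a 0.8 m standoff (assumed)
        cmd.linear.x = min(0.3, 0.5 * (nearest - 0.8))
    pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("follower_sketch")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/scan", LaserScan, scan_callback, callback_args=pub)
    rospy.spin()
```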

    Benchmarking and Comparing Popular Visual SLAM Algorithms

    This paper contains the performance analysis and benchmarking of two popular visual SLAM algorithms: RGBD-SLAM and RTABMap. The dataset used for the analysis is the TUM RGBD Dataset from the Computer Vision Group at TUM. The selected dataset has a large set of image sequences from a Microsoft Kinect RGB-D sensor with highly accurate and time-synchronized ground-truth poses from a motion capture system. The selected test sequences depict a variety of problems and camera motions faced by Simultaneous Localization and Mapping (SLAM) algorithms, in order to test the robustness of the algorithms in different situations. The evaluation metrics used for the comparison are Absolute Trajectory Error (ATE) and Relative Pose Error (RPE). The analysis involves comparing the Root Mean Square Error (RMSE) of the two metrics and the processing time for each algorithm. This paper serves as an important aid in the selection of a SLAM algorithm for different scenes and camera motions. The analysis helps to reveal the limitations of both SLAM methods. This paper also points out some underlying flaws in the evaluation metrics used. Comment: 7 pages, 4 figures
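
    For reference, the ATE reported by the TUM benchmark reduces to the root mean square of the translational differences between time-associated, rigidly aligned estimated and ground-truth poses. Below is a minimal NumPy sketch of that final RMSE step, assuming the two trajectories have already been associated by timestamp and aligned (e.g. with Horn's method); the function name is illustrative.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Absolute Trajectory Error (RMSE) over N x 3 arrays of camera positions."""
    errors = np.linalg.norm(np.asarray(est_xyz) - np.asarray(gt_xyz), axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))
```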

    RGB-D datasets using Microsoft Kinect or similar sensors: a survey

    RGB-D data has turned out to be a very useful representation of an indoor scene for solving fundamental computer vision problems. It combines the advantages of the color image, which provides appearance information about an object, and the depth image, which is immune to variations in color, illumination, rotation angle and scale. With the invention of the low-cost Microsoft Kinect sensor, which was initially used for gaming and later became a popular device for computer vision, high-quality RGB-D data can be acquired easily. In recent years, more and more RGB-D image/video datasets dedicated to various applications have become available, which are of great importance for benchmarking the state of the art. In this paper, we systematically survey popular RGB-D datasets for different applications including object recognition, scene classification, hand gesture recognition, 3D simultaneous localization and mapping, and pose estimation. We provide insights into the characteristics of each important dataset, and compare the popularity and difficulty of those datasets. Overall, the main goal of this survey is to give a comprehensive description of the available RGB-D datasets and thus to guide researchers in the selection of suitable datasets for evaluating their algorithms.
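
    As a concrete illustration of the representation, one RGB-D frame is simply a registered pair of a color image and a depth image. A small OpenCV sketch follows, with hypothetical file names and assuming TUM-style 16-bit depth PNGs scaled by a factor of 5000 (pixel value / 5000 = depth in metres).

```python
import cv2
import numpy as np

rgb = cv2.imread("rgb/frame0001.png", cv2.IMREAD_COLOR)              # appearance (BGR)
depth_raw = cv2.imread("depth/frame0001.png", cv2.IMREAD_UNCHANGED)  # 16-bit depth image
depth_m = depth_raw.astype(np.float32) / 5000.0                      # depth in metres (assumed scale)
print(rgb.shape, depth_m.shape, float(np.nanmax(depth_m)))
```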

    Accurate 3D maps from depth images and motion sensors via nonlinear Kalman filtering

    This paper investigates the use of depth images as localisation sensors for 3D map building. The localisation information is derived from the 3D data thanks to the ICP (Iterative Closest Point) algorithm. The covariance of the ICP, and thus of the localisation error, is analysed and described by a Fisher Information Matrix. It is advocated that this error can be greatly reduced if the data are fused with measurements from other motion sensors, or even with prior knowledge of the motion. The data fusion is performed by a recently introduced extended Kalman filter, the so-called Invariant EKF, and is based directly on the estimated covariance of the ICP. The resulting filter is very natural and is proved to possess strong properties. Experiments with a Kinect sensor and a three-axis gyroscope show a clear improvement in the accuracy of the localisation, and thus in the accuracy of the built 3D map. Comment: Submitted to IROS 2012. 8 pages
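
    A minimal sketch of the ICP registration step that supplies the localisation measurement, using Open3D's point-to-point ICP as a stand-in: the paper's own ICP variant, its Fisher-information covariance analysis, and the Invariant EKF are not reproduced here. File names, the 5 cm correspondence threshold, and the identity initial guess are assumptions.

```python
import numpy as np
import open3d as o3d

# Two consecutive Kinect depth frames converted to point clouds (hypothetical files).
source = o3d.io.read_point_cloud("frame_k.pcd")
target = o3d.io.read_point_cloud("frame_k_plus_1.pcd")

init = np.eye(4)  # initial guess, e.g. a gyroscope-predicted relative pose
result = o3d.pipelines.registration.registration_icp(
    source, target,
    0.05,   # max correspondence distance (5 cm, assumed)
    init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

T = result.transformation  # 4x4 relative pose; this is the measurement an EKF would fuse
print(T)
```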

    Simultaneous Parameter Calibration, Localization, and Mapping

    The calibration parameters of a mobile robot play a substantial role in navigation tasks. Often these parameters are subject to variations that depend either on changes in the environment or on the load of the robot. In this paper, we propose an approach to simultaneously estimate a map of the environment, the position of the on-board sensors of the robot, and its kinematic parameters. Our method requires no prior knowledge about the environment and relies only on a rough initial guess of the parameters of the platform. The proposed approach estimates the parameters online and is able to adapt to non-stationary changes of the configuration. We tested our approach in simulated environments and on a wide range of real-world data using different types of robotic platforms. (C) 2012 Taylor & Francis and The Robotics Society of Japan
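
    To make concrete why the kinematic parameters matter, here is a generic differential-drive odometry update (not the paper's formulation): the pose increment depends directly on the wheel radii and the wheel separation, so any bias in those parameters biases every integration step. Variable names and the ticks-per-revolution encoder model are illustrative assumptions.

```python
import math

def integrate_odometry(x, y, theta, ticks_l, ticks_r,
                       ticks_per_rev, r_l, r_r, baseline):
    """One odometry update from incremental encoder ticks of a differential drive."""
    d_l = 2.0 * math.pi * r_l * ticks_l / ticks_per_rev  # left wheel arc length
    d_r = 2.0 * math.pi * r_r * ticks_r / ticks_per_rev  # right wheel arc length
    d_c = 0.5 * (d_l + d_r)                              # translation of the robot centre
    d_theta = (d_r - d_l) / baseline                     # heading change
    # Midpoint integration of the pose.
    x += d_c * math.cos(theta + 0.5 * d_theta)
    y += d_c * math.sin(theta + 0.5 * d_theta)
    return x, y, theta + d_theta
```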