    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, Microsoft released the new Kinect One, the next generation of real-time range sensing devices, based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed, in-depth comparison of both devices. To conduct the comparison, we propose a framework of seven experimental setups, which serves as a generic basis for evaluating range cameras such as the Kinect. The experiments are designed to capture the individual effects of the Kinect devices in as isolated a manner as possible, and in a way that allows them to be adapted to any other range sensing device. The overall goal of this paper is to provide solid insight into the pros and cons of either device, so that scientists interested in using Kinect range sensing cameras in a specific application scenario can directly assess the expected benefits and potential problems of either device. Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).

    Robotic Maintenance and ROS - Appearance Based SLAM and Navigation With a Mobile Robot Prototype

    Robotic maintenance has been the topic of several master's theses and specialization projects at the Department of Engineering Cybernetics (ITK) at NTNU over many years. This thesis continues on the same topic, with special focus on camera-based mapping and navigation in conjunction with automated maintenance, and on automated maintenance in general. The objective of this thesis is to implement one or more functionalities based on camera sensors in a mobile autonomous robot. This is accomplished by acquiring knowledge of existing solutions and future requirements within automated maintenance. A mobile robot prototype has been configured to run ROS (Robot Operating System), a middleware framework well suited to the development of robotic systems. The system uses RTAB-Map (Real-Time Appearance-Based Mapping) to survey the surroundings and the navigation stack built into ROS to navigate autonomously to simple targets in the map. The method uses a Kinect for Xbox 360 as the main sensor and a 2D laser scanner for surveying and odometry. Functional concepts have also been developed for two support functions: an Android application for remote control over Bluetooth, and a remote operator control station (OCS) developed in Qt. The OCS is a skeletal implementation that can control the robot remotely via WiFi and display video from the robot's camera. Test results, obtained from both live and simulated trials, indicate that the robot is able to build 3D and 2D maps of the surroundings. The method has weaknesses related to the ability to find visual features: laser-based odometry can be fooled when the environment is changing and when there are few unique features. Further testing has demonstrated that the robot can navigate autonomously, but there is still room for improvement; better results could be achieved with a new mobile platform and further tuning of the system.
    In conclusion, ROS works well as a development tool for robots, and the current system is suitable for further development. RTAB-Map's suitability for use on an industrial installation is still uncertain and requires further testing.
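
    A setup like the one the abstract describes (Kinect driver, RTAB-Map SLAM, and the ROS navigation stack) is typically wired together in a launch file. The sketch below is an illustrative assumption, not the thesis's actual configuration: the package names (`openni_launch`, `rtabmap_ros`, `move_base`) and topic remappings follow common ROS conventions for a Kinect for Xbox 360.

    ```xml
    <launch>
      <!-- Kinect for Xbox 360 driver; publishes RGB-D topics under /camera -->
      <include file="$(find openni_launch)/launch/openni.launch" />

      <!-- RTAB-Map appearance-based SLAM, subscribing to the Kinect's RGB-D streams -->
      <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen">
        <param name="subscribe_depth" value="true" />
        <remap from="rgb/image"       to="/camera/rgb/image_rect_color" />
        <remap from="depth/image"     to="/camera/depth_registered/image_raw" />
        <remap from="rgb/camera_info" to="/camera/rgb/camera_info" />
      </node>

      <!-- ROS navigation stack: move_base plans and executes paths to goals in the map -->
      <node pkg="move_base" type="move_base" name="move_base" output="screen" />
    </launch>
    ```

    In such a configuration, navigation goals can then be sent to `move_base` from RViz or programmatically via its action interface, which matches the abstract's description of autonomous navigation to simple targets in the map.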