
    Calibration of Kinect-type RGB-D sensors for robotic applications

    The paper presents a calibration model suitable for software-based calibration of Kinect-type RGB-D sensors. Additionally, it describes a two-step calibration procedure that requires only a simple checkerboard pattern. Finally, the paper presents a calibration case study showing that calibration may improve sensor accuracy by a factor of 3 to 5, depending on the anticipated use of the sensor. The results obtained in this study using calibration models of different levels of complexity reveal that depth-measurement correction is an important component of calibration, as it may reduce the errors in sensor readings by 50%.
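
    A minimal sketch of the two-step idea described above, using OpenCV's checkerboard calibration followed by a simple polynomial depth correction; checkerboard_images, raw_depths and true_depths are hypothetical inputs, and the polynomial correction form is an assumption rather than the paper's exact model.

        import cv2
        import numpy as np

        # Step 1: intrinsic calibration of the color camera from checkerboard views.
        pattern = (9, 6)               # inner-corner grid (assumed)
        square = 0.025                 # square size in metres (assumed)
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

        obj_pts, img_pts, size = [], [], None
        for gray in checkerboard_images:          # grayscale pattern views (assumed)
            ok, corners = cv2.findChessboardCorners(gray, pattern)
            if ok:
                obj_pts.append(objp)
                img_pts.append(corners)
                size = gray.shape[::-1]
        rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)

        # Step 2: depth-measurement correction, fitted against
        # checkerboard-derived reference distances (assumed form).
        coeffs = np.polyfit(raw_depths, true_depths, deg=2)
        corrected = np.polyval(coeffs, raw_depths)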

    On the Calibration of Active Binocular and RGBD Vision Systems for Dual-Arm Robots

    This paper describes a camera and hand-eye calibration methodology for integrating an active binocular robot head within a dual-arm robot. For this purpose, we derive the forward kinematic model of our active robot head and describe our methodology for calibrating and integrating it. This rigid calibration provides a closed-form hand-to-eye solution. We then present an approach for dynamically updating the cameras' external parameters for optimal 3D reconstruction, which is the foundation for robotic tasks such as grasping and manipulating rigid and deformable objects. Experimental results show that our robot head achieves sub-millimetre accuracy, with errors below 0.3 millimetres, while recovering the 3D structure of a scene. In addition, we report a comparative study between current RGBD cameras and our active stereo head within two dual-arm robotic testbeds that demonstrates the accuracy and portability of our proposed methodology.
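
    As a hedged sketch of the hand-eye step (the paper derives its own closed-form solution; this substitutes OpenCV's standard AX = XB solver instead), assuming the rotation/translation lists have already been gathered from the head's forward kinematics and checkerboard pose estimates:

        import cv2

        # One (R, t) pair per calibration pose: gripper->base from forward
        # kinematics, target->camera from checkerboard pose estimation
        # (hypothetical, pre-collected lists of 3x3 R and 3x1 t).
        R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
            R_gripper2base, t_gripper2base,
            R_target2cam, t_target2cam,
            method=cv2.CALIB_HAND_EYE_TSAI)   # classic AX = XB solver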

    Flexible system of multiple RGB-D sensors for measuring and classifying fruits in the agri-food industry

    The productivity of the agri-food sector faces continuous and growing challenges that make the use of innovative technologies to maintain and even improve competitiveness a priority. In this context, this paper presents the foundations and validation of a flexible and portable system capable of obtaining 3D measurements and classifying objects based on color and depth images taken by multiple Kinect v1 sensors. The developed system is applied to the selection and classification of fruits, a common activity in the agri-food industry. By integrating the depth information obtained from multiple sensors, the system acquires complete and accurate information about the environment; it is capable of self-location and self-calibration of the sensors, after which it detects, classifies, and measures fruits in real time. Unlike other systems that use a specific set-up or need a prior calibration, it does not require a predetermined positioning of the sensors, so it can be adapted to different scenarios. The characterization process covers classification of fruits, estimation of their volume, and a count of items per kind of fruit. One requirement of the system is that each sensor must partially share its field of view with at least one other sensor. The sensors localize themselves by estimating the rotation and translation matrices that transform the coordinate system of one sensor into that of another; to achieve this, the Iterative Closest Point (ICP) algorithm is used and subsequently validated with a 6-degree-of-freedom KUKA robotic arm. A method based on the Kalman filter is also implemented to estimate the movement of objects. A relevant contribution of this work is the detailed analysis and propagation of the errors that affect both the proposed methods and the hardware. To determine the performance of the proposed system, the passage of different types of fruit on a conveyor belt was emulated by a mobile robot carrying a surface on which the fruits were placed. Both the perimeter and the volume were measured, and the fruits were classified according to type. The system was able to distinguish and classify 95% of the fruits and to estimate their volume with 85% accuracy in the worst cases (fruits whose shape is not symmetrical) and 94% accuracy in the best cases (fruits whose shape is more symmetrical), showing that the proposed approach can become a useful tool in the agri-food industry.

    This project has been supported by the National Commission for Science and Technology Research of Chile (Conicyt) under FONDECYT grant 1140575 and the Advanced Center of Electrical and Electronic Engineering - AC3E (CONICYT/FB0008).
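
    A minimal sketch of the pairwise self-localization step using Open3D's ICP (an assumed implementation, not the authors' code); cloud_a and cloud_b are hypothetical, partially overlapping point clouds from two sensors:

        import numpy as np
        import open3d as o3d

        threshold = 0.05                   # max correspondence distance in metres (assumed)
        reg = o3d.pipelines.registration.registration_icp(
            cloud_a, cloud_b, threshold, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        T_a_to_b = reg.transformation      # 4x4 rotation + translation between sensor frames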

    Robust Intrinsic and Extrinsic Calibration of RGB-D Cameras

    Color-depth cameras (RGB-D cameras) have become primary sensors in many robotic systems, from service robotics to industrial applications. Typical consumer-grade RGB-D cameras ship with a coarse intrinsic and extrinsic calibration that generally does not meet the accuracy requirements of many robotics applications (e.g., highly accurate 3D environment reconstruction and mapping, or high-precision object recognition and localization). In this paper, we propose a human-friendly, reliable, and accurate calibration framework that makes it easy to estimate both the intrinsic and extrinsic parameters of a general color-depth sensor pair. Our approach is based on a novel two-component error model that unifies the error sources of RGB-D pairs based on different technologies, such as structured-light 3D cameras and time-of-flight cameras. Our method provides several important advantages over other state-of-the-art systems: it is general (i.e., well suited to different types of sensors), based on an easy and stable calibration protocol, provides greater calibration accuracy, and has been implemented within the ROS robotics framework. We report detailed experimental validations and performance comparisons to support our claims.
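
    The abstract does not detail the two-component error model, so the following is only a loose illustration of the general idea of separating a spatially varying depth error from a global, distance-dependent one; both forms are assumptions, not the authors' formulation:

        import numpy as np

        def correct_depth(raw, pixel_map, poly):
            """raw: HxW depth image in metres; pixel_map: HxW per-pixel
            correction factors estimated at calibration time; poly:
            coefficients of a global distance-dependent polynomial.
            Both components are hypothetical stand-ins."""
            local = raw * pixel_map          # spatially varying component
            return np.polyval(poly, local)   # global depth-dependent component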

    Rapid 3D Modeling and Parts Recognition on Automotive Vehicles Using a Network of RGB-D Sensors for Robot Guidance

    This paper presents an approach for the automatic detection and fast 3D profiling of lateral body panels of vehicles. The work introduces a method for integrating raw streams from depth sensors into the task of 3D profiling and reconstruction, as well as a methodology for the extrinsic calibration of a network of Kinect sensors. This sensing framework is intended to rapidly provide a robot with enough spatial information to interact with automobile panels using various tools. When a vehicle is positioned inside the defined scanning area, a collection of reference parts on the bodywork is automatically recognized from a mosaic of color images collected by a network of Kinect sensors distributed around the vehicle, and a global frame of reference is set up. Sections of the depth information on one side of the vehicle are then collected, aligned, and merged into a global RGB-D model. Finally, a 3D triangular mesh modelling the body panels of the vehicle is automatically built. The approach has applications in the intelligent transportation industry, automated vehicle inspection, quality control, automatic car wash systems, automotive production lines, and scan alignment and interpretation.
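
    A compact sketch of the merge-and-mesh step, assuming each sensor's extrinsics are already known from the calibration described above (clouds and extrinsics are hypothetical inputs; Open3D stands in for the paper's pipeline):

        import open3d as o3d

        merged = o3d.geometry.PointCloud()
        for cloud, T in zip(clouds, extrinsics):     # per-sensor clouds, 4x4 poses
            merged += cloud.transform(T)             # map into the global frame
        merged = merged.voxel_down_sample(voxel_size=0.005)  # fuse duplicates
        merged.estimate_normals()
        # Triangular mesh of the merged model (Poisson reconstruction).
        mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged)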

    Real-time marker-less multi-person 3D pose estimation in RGB-Depth camera networks

    This paper proposes a novel system to estimate and track the 3D poses of multiple persons in calibrated RGB-Depth camera networks. The multi-view 3D pose of each person is computed by a central node which receives the single-view outcomes from each camera in the network. Each single-view outcome is computed by using a CNN for 2D pose estimation and extending the resulting skeletons to 3D by means of the sensor depth. The proposed system is marker-less and multi-person, is independent of the background, and makes no assumptions about people's appearance or initial pose. The system provides real-time outcomes, making it well suited to applications requiring user interaction. Experimental results show the effectiveness of this work with respect to a baseline multi-view approach in different scenarios. To foster research and applications based on this work, we released the source code in OpenPTrack, an open source project for RGB-D people tracking. (Comment: submitted to the 2018 IEEE International Conference on Robotics and Automation.)
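
    The single-view lifting step amounts to pinhole back-projection of each 2D joint through the registered depth map; a minimal sketch (function and parameter names assumed):

        def lift_to_3d(u, v, depth, fx, fy, cx, cy):
            """Back-project joint pixel (u, v) using the registered depth
            map and pinhole intrinsics; returns camera-frame coordinates."""
            z = depth[v, u]               # metric depth at the joint pixel
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            return (x, y, z)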

    Intelligent collision avoidance system for industrial manipulators

    Dual-degree master's thesis with UTFPR - Universidade Tecnológica Federal do Paraná.
    The new paradigm of Industry 4.0 demands collaboration between robots and humans: they should be able to help and collaborate with each other without any additional safety barrier, unlike conventional manipulators. For this, the robot must be able to perceive the environment and plan (or re-plan) its movement on the fly, avoiding obstacles and people. This work proposes a system that acquires the space of the environment with a Kinect sensor, verifies the free space in the resulting point cloud, and executes manipulator trajectories through that free space. The system performs path planning for a UR5 manipulator in pick-and-place tasks, avoiding the objects around it, based on the Kinect point cloud; the results obtained in simulation made it possible to apply the system in real situations. The basic structure of the system is ROS, which supports robotic applications with a powerful set of libraries and tools. MoveIt! and RViz are examples of these tools, and with them it was possible to run simulations and obtain collision-free planning results. The results are reported through log files indicating whether the robot motion plan succeeded and how many manipulator poses were needed to create the final movement. This last step validates the proposed system using the RRT and PRM algorithms, which were chosen because they are among the most widely used in robot path planning.
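
    A minimal moveit_commander sketch of the planning loop described above, assuming a standard UR5 MoveIt configuration (the "manipulator" group name and the example pose are assumptions; collision avoidance against the Kinect point cloud comes from MoveIt's octomap, configured separately):

        import sys
        import rospy
        import moveit_commander

        moveit_commander.roscpp_initialize(sys.argv)
        rospy.init_node("ur5_planning_demo")               # hypothetical node name
        group = moveit_commander.MoveGroupCommander("manipulator")
        group.set_planner_id("RRTConnectkConfigDefault")   # OMPL RRT; or "PRMkConfigDefault"
        group.set_pose_target([0.4, 0.1, 0.3, 0.0, 3.14, 0.0])  # x y z r p y (example goal)
        success = group.go(wait=True)                      # plan around obstacles and execute
        group.stop()
        group.clear_pose_targets()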

    T-LESS: An RGB-D Dataset for 6D Pose Estimation of Texture-less Objects

    We introduce T-LESS, a new public dataset for estimating the 6D pose, i.e., translation and rotation, of texture-less rigid objects. The dataset features thirty industry-relevant objects with no significant texture and no discriminative color or reflectance properties. The objects exhibit symmetries and mutual similarities in shape and/or size; compared to other datasets, a unique property is that some of the objects are parts of others. The dataset includes training and test images captured with three synchronized sensors: a structured-light RGB-D sensor, a time-of-flight RGB-D sensor, and a high-resolution RGB camera. There are approximately 39K training and 10K test images from each sensor. Additionally, two types of 3D models are provided for each object: a manually created CAD model and a semi-automatically reconstructed one. Training images depict individual objects against a black background. Test images originate from twenty test scenes of varying complexity, increasing from simple scenes with several isolated objects to very challenging ones with multiple instances of several objects and a high amount of clutter and occlusion. The images were captured from a systematically sampled view sphere around the object/scene and are annotated with accurate ground-truth 6D poses of all modeled objects. Initial evaluation results indicate that the state of the art in 6D object pose estimation has ample room for improvement, especially in difficult cases with significant occlusion. The T-LESS dataset is available online at cmp.felk.cvut.cz/t-less. (Comment: WACV 2017.)
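
    For poses without symmetry ambiguity, two basic error measures over the ground-truth annotations look as follows (a sketch only; T-LESS's symmetric objects generally call for ambiguity-aware metrics such as VSD):

        import numpy as np

        def pose_errors(R_est, t_est, R_gt, t_gt):
            """Translation error (metres) and geodesic rotation error (degrees)."""
            t_err = np.linalg.norm(t_est - t_gt)
            cos = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
            r_err = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
            return t_err, r_err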