
    Multi-thread impact on the performance of Monte Carlo based algorithms for self-localization of robots using RGBD sensors

    Get PDF
    Abstract—Using information from RGBD sensors requires a huge amount of processing. Using these sensors improves the robustness of algorithms for object perception, self-localization and, in general, all the capabilities a robot needs to improve its autonomy. In most cases, these algorithms are not computationally feasible with single-thread implementations. This paper describes two multi-thread strategies proposed for self-localizing a mobile robot in a known environment using information from an RGBD sensor. The experiments show the benefits obtained with different numbers of threads, comparing two approaches: a pool of threads and a thread creation/destruction scheme. The work has been carried out on a Kobuki mobile robot in the environment of the RoCKIn competition, similar to RoboCup@Home.
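
    The abstract contrasts two scheduling strategies. As a rough illustration of the difference (not the paper's code, which presumably targets a compiled language on the robot), the Python sketch below weighs chunks of particles either with a reusable thread pool or by spawning and joining fresh threads on every update cycle; the sensor model and particle layout are placeholder assumptions.

```python
# Minimal sketch, assuming a placeholder sensor model: the particle set is
# split into chunks and weighed either by a reusable pool of threads or by
# threads created and destroyed on every update cycle.
import threading
from concurrent.futures import ThreadPoolExecutor

def weigh_particles(particles, chunk):
    # Stand-in for the RGBD observation likelihood of each particle.
    for i in chunk:
        particles[i] = 1.0

def run_with_pool(particles, chunks, cycles, pool):
    # Strategy A: worker threads are created once and reused every cycle.
    for _ in range(cycles):
        list(pool.map(lambda c: weigh_particles(particles, c), chunks))

def run_create_destroy(particles, chunks, cycles):
    # Strategy B: threads are spawned and joined on every cycle, paying
    # creation/teardown overhead each time.
    for _ in range(cycles):
        threads = [threading.Thread(target=weigh_particles,
                                    args=(particles, c)) for c in chunks]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

if __name__ == "__main__":
    n = 100_000
    particles = [0.0] * n
    chunks = [range(i, min(i + n // 4, n)) for i in range(0, n, n // 4)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        run_with_pool(particles, chunks, cycles=50, pool=pool)
    run_create_destroy(particles, chunks, cycles=50)
```

    The pool variant avoids paying thread creation and teardown cost on every sensor update, which is the trade-off the paper's experiments measure. (In CPython the GIL limits true parallelism for CPU-bound loops; the scheduling pattern, not the speedup, is what this sketch shows.)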

    Autonomous Robotic Systems in a Variable World: A Task-Centric approach based on Explainable Models

    Get PDF

    RGB-D datasets using Microsoft Kinect or similar sensors: a survey

    Get PDF
    RGB-D data has turned out to be a very useful representation of an indoor scene for solving fundamental computer vision problems. It combines the advantages of the color image, which provides appearance information of an object, and the depth image, which is immune to variations in color, illumination, rotation angle and scale. With the invention of the low-cost Microsoft Kinect sensor, which was initially used for gaming and later became a popular device for computer vision, high-quality RGB-D data can be acquired easily. In recent years, more and more RGB-D image/video datasets dedicated to various applications have become available, which are of great importance for benchmarking the state of the art. In this paper, we systematically survey popular RGB-D datasets for different applications including object recognition, scene classification, hand gesture recognition, 3D simultaneous localization and mapping, and pose estimation. We provide insights into the characteristics of each important dataset, and compare the popularity and difficulty of those datasets. Overall, the main goal of this survey is to give a comprehensive description of the available RGB-D datasets and thus to guide researchers in the selection of suitable datasets for evaluating their algorithms.

    Biometric recognition through gait analysis

    Get PDF
    The use of people-recognition techniques has become critical in some areas. For instance, in robotics, social or assistive robots carry out collaborative tasks, and a robot must know who to work with to deal with such tasks. Biometric patterns may replace identification cards or codes in access control to critical infrastructures. RGBD (Red Green Blue Depth) cameras are widely used for people recognition, but this sensor has some constraints: it demands high computational capabilities, requires the users to face the sensor, and does not preserve users' privacy. Furthermore, during the COVID-19 pandemic, masks hide a significant portion of the face. In this work, we present BRITTANY, a biometric recognition tool based on gait analysis using Laser Imaging Detection and Ranging (LIDAR) data and a Convolutional Neural Network (CNN). A Proof of Concept (PoC) has been carried out in an indoor environment with five users to evaluate BRITTANY. A new CNN architecture is presented that classifies aggregated occupancy maps representing people's gait. This architecture has been compared with LeNet-5 and AlexNet on the same datasets. The final system reports an accuracy of 88%. Funding: the research described in this article has been funded by the Instituto Nacional de Ciberseguridad de España (INCIBE), under the grant "ADENDA 4: Detección de nuevas amenazas y patrones desconocidos (Red Regional de Ciencia y Tecnología)", an addendum to the framework agreement INCIBE-Universidad de León, 2019-2021. Miguel Ángel González-Santamarta would like to thank Universidad de León for its funding support for his doctoral studies.
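
    The exact BRITTANY architecture is not reproduced in this abstract; as a hedged sketch of the general idea, the PyTorch model below classifies single-channel aggregated occupancy maps among five enrolled users. The input resolution (64x64) and layer widths are assumptions, not the paper's values.

```python
# Illustrative sketch only: a generic small CNN for classifying aggregated
# 2D occupancy maps among five users. Input size and widths are assumed.
import torch
import torch.nn as nn

class OccupancyGaitCNN(nn.Module):
    def __init__(self, num_users: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_users),            # one logit per enrolled user
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage: a batch of 8 single-channel occupancy maps -> logits of shape (8, 5).
model = OccupancyGaitCNN()
logits = model(torch.randn(8, 1, 64, 64))
```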

    Hamiltonian Dynamics Learning from Point Cloud Observations for Nonholonomic Mobile Robot Control

    Full text link
    Reliable autonomous navigation requires adapting the control policy of a mobile robot in response to dynamics changes under different operational conditions. Hand-designed dynamics models may struggle to capture model variations due to a limited set of parameters. Data-driven dynamics-learning approaches offer higher model capacity and better generalization but require large amounts of state-labeled data. This paper develops an approach for learning robot dynamics directly from point-cloud observations, removing the need for state estimation and its associated errors, while embedding Hamiltonian structure in the dynamics model to improve data efficiency. We design an observation-space loss that relates motion prediction from the dynamics model to motion prediction from point-cloud registration in order to train a Hamiltonian neural ordinary differential equation. The learned Hamiltonian model enables the design of an energy-shaping model-based tracking controller for rigid-body robots. We demonstrate dynamics learning and tracking control on a real nonholonomic wheeled robot. Comment: 8 pages, 6 figures.
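
    To make the Hamiltonian-structure idea concrete, here is a minimal conceptual sketch (not the authors' implementation): a small MLP represents a scalar Hamiltonian H(q, p), and the dynamics follow Hamilton's equations dq/dt = ∂H/∂p, dp/dt = -∂H/∂q via automatic differentiation. The network size, forward-Euler integrator, and state dimension are assumptions, and the paper's point-cloud registration loss is omitted here.

```python
# Conceptual sketch: a learned scalar H(q, p) generates dynamics through
# Hamilton's equations, with gradients taken by autograd.
import torch
import torch.nn as nn

class HamiltonianODE(nn.Module):
    def __init__(self, dim: int = 2):
        super().__init__()
        # Scalar Hamiltonian approximated by a small MLP over (q, p).
        self.H = nn.Sequential(nn.Linear(2 * dim, 64), nn.Tanh(),
                               nn.Linear(64, 1))

    def vector_field(self, q: torch.Tensor, p: torch.Tensor):
        q = q.requires_grad_(True)
        p = p.requires_grad_(True)
        H = self.H(torch.cat([q, p], dim=-1)).sum()
        dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
        return dHdp, -dHdq                  # dq/dt = dH/dp, dp/dt = -dH/dq

    def step(self, q, p, dt: float = 0.01):
        # Forward-Euler step; a symplectic integrator would preserve
        # the energy structure better in practice.
        dq, dp = self.vector_field(q, p)
        return q + dt * dq, p + dt * dp

model = HamiltonianODE()
q, p = torch.zeros(1, 2), torch.ones(1, 2)
q_next, p_next = model.step(q, p)
```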

    Design of Logistic Transporter Robot System

    Get PDF
    Robotics technology is currently developing rapidly, especially in logistics distribution. The distribution of logistics goods by robots continues to move towards higher artificial intelligence, supporting warehouse delivery management and the implementation of intelligence for challenging tasks. Autonomous robots form a community of intelligent robotic systems that can be seen as a prototype of the intelligent management and service systems of the future, revealing some important traits of the next generation of smart robot communities. In the smart logistics industry, designing an efficient communication and management platform for logistics robots is one of the fundamental problems. This study aims to implement a smart robot that assists distribution and logistics activities by following humans, detected as green objects, to bring goods to the intended area.
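
    As a hedged sketch of the color-following behavior described above (not the authors' code), the snippet below thresholds a camera frame in HSV space to find the green target and converts the centroid's horizontal offset into a steering command; the HSV bounds and gain are assumed values.

```python
# Assumed HSV bounds and gain; the robot's actual pipeline may differ.
import cv2
import numpy as np

LOWER_GREEN = np.array([40, 70, 70])      # assumed HSV lower bound
UPPER_GREEN = np.array([80, 255, 255])    # assumed HSV upper bound

def steering_from_frame(frame_bgr: np.ndarray, gain: float = 0.005):
    """Return a turn command in [-1, 1], or None if no green target."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)
    m = cv2.moments(mask)
    if m["m00"] < 1e-3:                   # no green pixels detected
        return None
    cx = m["m10"] / m["m00"]              # centroid x of the green blob
    error = cx - frame_bgr.shape[1] / 2   # offset from image center
    return float(np.clip(gain * error, -1.0, 1.0))
```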

    HeteroFusion: Dense scene reconstruction integrating multi-sensors

    Get PDF
    We present a real-time approach that integrates multiple sensors for dense reconstruction of 3D indoor scenes. Existing algorithms are mainly based on a single RGBD camera and require continuous scanning of areas with sufficient geometric detail; failing to do so can lead to tracking loss due to the lack of frame-registration hints. Inspired by the fact that utilizing multiple sensors can combine their strengths into a more robust and accurate implementation, we incorporate multiple types of sensors prevalently equipped on modern robots, including a 2D range sensor, an IMU, and wheel encoders, to reinforce the tracking process and obtain better mesh construction. Specifically, we develop a feasible 2D TSDF volume representation for integrating and ray-casting laser frames, leading to a unified cost function in the pose-estimation stage. In addition, to validate the estimated poses in the loop-closure optimization process, we train a classifier on features extracted from the heterogeneous sensors and the registration progress. To evaluate our method on challenging robotic scanning scenarios, we assembled a scanning platform for acquiring real-world scans. We further simulated synthetic scans based on high-fidelity synthetic scenes for quantitative evaluation. Extensive experimental results demonstrate that our system robustly acquires dense reconstructions and outperforms state-of-the-art systems.
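
    To illustrate the 2D TSDF idea mentioned above, here is a minimal numpy sketch, with the grid resolution, truncation distance, and ray-marching scheme all assumed rather than taken from the paper: each laser beam updates the cells along its ray with a truncated signed distance to the measured hit point and an integration weight.

```python
# Minimal 2D TSDF sketch (assumptions throughout): each beam carves free
# space up to its hit point, storing a truncated signed distance per cell.
import numpy as np

def integrate_scan(tsdf, weight, origin, angles, ranges,
                   cell=0.05, trunc=0.15):
    """Fuse one 2D laser scan into (tsdf, weight) grids, in place.

    origin: (x, y) sensor position in world meters.
    angles/ranges: beam directions (rad) and measured distances (m).
    """
    step = cell / 2.0                             # ray-marching step
    for theta, r in zip(angles, ranges):
        direction = np.array([np.cos(theta), np.sin(theta)])
        for d in np.arange(0.0, r + trunc, step):
            p = np.asarray(origin) + d * direction
            i, j = int(p[0] / cell), int(p[1] / cell)
            if not (0 <= i < tsdf.shape[0] and 0 <= j < tsdf.shape[1]):
                break
            sdf = np.clip(r - d, -trunc, trunc)   # signed dist. to surface
            w = weight[i, j]
            tsdf[i, j] = (tsdf[i, j] * w + sdf) / (w + 1.0)
            weight[i, j] = w + 1.0

# Usage: a 10 m x 10 m map at 5 cm resolution, one synthetic scan.
grid_n = 200
tsdf = np.zeros((grid_n, grid_n))
weight = np.zeros((grid_n, grid_n))
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
integrate_scan(tsdf, weight, origin=(5.0, 5.0),
               angles=angles, ranges=np.full(181, 3.0))
```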