
    A Vision-based Quadrotor Swarm for the participation in the 2013 International Micro Air Vehicle Competition

    This paper presents a completely autonomous solution for participating in the 2013 International Micro Air Vehicle Indoor Flight Competition (IMAV2013). Our proposal is a modular multi-robot swarm architecture, based on the Robot Operating System (ROS) software framework, in which the only information shared among swarm agents is each robot's position. Each swarm agent consists of an AR Drone 2.0 quadrotor connected to a laptop that runs the software architecture. To present a completely vision-based solution, the localization problem is simplified by the use of ArUco visual markers. These markers are used to sense and map obstacles and to improve the pose estimation, based on IMU and optical flow data, by means of an Extended Kalman Filter localization and mapping method. The presented solution and the performance of the CVG UPM team were awarded First Prize in the Indoors Autonomy Challenge of the IMAV2013 competition.
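A minimal sketch of the marker-detection step such a pipeline rests on, using OpenCV's `cv2.aruco` module (API as of OpenCV 4.7+). The marker size, intrinsics, and dictionary below are illustrative assumptions, and the paper's EKF fusion is not reproduced:

```python
# Sketch: ArUco marker detection and per-marker pose estimation with OpenCV.
# Marker edge length, camera intrinsics, and dictionary are assumed values;
# the EKF localization/mapping described in the paper is not shown here.
import cv2
import numpy as np

MARKER_SIZE_M = 0.15  # assumed marker edge length in meters
K = np.array([[500.0, 0.0, 320.0],   # assumed pinhole intrinsics;
              [0.0, 500.0, 240.0],   # calibrate the real camera instead
              [0.0, 0.0, 1.0]])
DIST = np.zeros(5)                    # assume negligible lens distortion

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def marker_poses(frame):
    """Return {marker_id: (rvec, tvec)} for every marker detected in frame."""
    corners, ids, _ = detector.detectMarkers(frame)
    poses = {}
    if ids is None:
        return poses
    # 3D corners of a square marker centered at the origin, lying in z = 0.
    half = MARKER_SIZE_M / 2.0
    obj = np.array([[-half, half, 0], [half, half, 0],
                    [half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    for marker_id, c in zip(ids.flatten(), corners):
        ok, rvec, tvec = cv2.solvePnP(obj, c.reshape(-1, 2), K, DIST)
        if ok:
            poses[int(marker_id)] = (rvec, tvec)
    return poses
```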

    ViWiD: Leveraging WiFi for Robust and Resource-Efficient SLAM

    Recent interest in autonomous navigation and exploration robots for indoor applications has spurred research into indoor Simultaneous Localization and Mapping (SLAM) systems. While most of these SLAM systems use visual and LiDAR sensors in tandem with an odometry sensor, these odometry sensors drift over time. To combat this drift, visual SLAM systems deploy compute- and memory-intensive search algorithms to detect "loop closures", which make the trajectory estimate globally consistent. To circumvent these resource-intensive algorithms, we present ViWiD, which integrates WiFi and visual sensors in a dual-layered system. This dual-layered approach separates the tasks of local and global trajectory estimation, making ViWiD resource-efficient while achieving performance on par with or better than state-of-the-art visual SLAM. We demonstrate ViWiD's performance on four datasets covering over 1500 m of traversed path, and show 4.3x and 4x reductions in compute and memory consumption, respectively, compared to state-of-the-art visual and LiDAR SLAM systems, with on-par SLAM performance.
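To make the dual-layered idea concrete, here is a toy sketch of separating a high-rate drifting local layer from a low-rate global layer that applies sparse absolute fixes (e.g. WiFi-derived positions). This is a first-order illustration under our own assumptions, not ViWiD's actual algorithm:

```python
# Toy dual-layered estimator: a local layer dead-reckons drifting odometry,
# while a global layer nudges a correction offset toward sparse absolute
# fixes. Real systems distribute the correction over the whole trajectory
# (e.g. pose-graph optimization); this single-offset version is a sketch.
import numpy as np

class DualLayerEstimator:
    def __init__(self, blend=0.2):
        self.pose = np.zeros(2)    # estimated (x, y) from odometry alone
        self.offset = np.zeros(2)  # accumulated global correction
        self.blend = blend         # how strongly a global fix pulls the estimate

    def local_update(self, delta_xy):
        """High-rate layer: integrate (drifting) odometry increments."""
        self.pose += np.asarray(delta_xy, dtype=float)

    def global_update(self, fix_xy):
        """Low-rate layer: blend toward an absolute position fix."""
        residual = np.asarray(fix_xy, dtype=float) - (self.pose + self.offset)
        self.offset += self.blend * residual

    def estimate(self):
        return self.pose + self.offset
```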

    An Indoor and Outdoor Navigation System for Visually Impaired People

    In this paper, we present a system that allows visually impaired people to navigate autonomously in unknown indoor and outdoor environments. The system, explicitly designed for low-vision users, can easily be generalized to other users. We assume that special landmarks are placed to help users localize along pre-defined paths. Our novel approach exploits both the inertial sensors and the camera integrated into the smartphone. The navigation system also provides the users with direction estimates from the tracking system. The success of our approach is demonstrated through experimental tests performed both in controlled indoor environments and in real outdoor installations. A comparison with deep learning methods is also presented.
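A common way to fuse these two sensing modalities is a complementary filter: the gyroscope gives a smooth but drifting heading, while a detected landmark gives an occasional drift-free absolute heading. The gain below is an illustrative assumption, not the paper's tuning:

```python
# Minimal complementary-filter sketch for heading: integrate the gyro at
# high rate, and pull the estimate toward an absolute heading whenever a
# visual landmark fixes it. alpha = trust in the gyro between sightings.
import math

class HeadingFilter:
    def __init__(self, alpha=0.98):
        self.theta = 0.0    # heading estimate, radians
        self.alpha = alpha

    def on_gyro(self, yaw_rate, dt):
        self.theta += yaw_rate * dt

    def on_landmark(self, theta_abs):
        # Wrap the residual to (-pi, pi] so blending never jumps by 2*pi.
        err = math.atan2(math.sin(theta_abs - self.theta),
                         math.cos(theta_abs - self.theta))
        self.theta += (1.0 - self.alpha) * err
```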

    Computer Vision Based Indoor Navigation: A Visual Markers Evaluation

    The massive diffusion of smartphones and the exponential rise of location-based services (LBS) have made localization and navigation inside buildings one of the most important technological challenges of recent years. Indoor positioning systems have a huge market in the retail sector and in contextual advertising; moreover, they can be fundamental to improving citizens' quality of life. Various approaches have been proposed in the scientific literature. Recently, thanks to the high performance of smartphone cameras, marker-less and marker-based computer vision approaches have been investigated. In a previous paper, we proposed a technique for indoor navigation using both Bluetooth Low Energy (BLE) and a 2D visual marker system deployed on the floor. In this paper, we present a qualitative performance evaluation of three 2D visual markers suitable for real-time applications.
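An evaluation of this kind typically measures detection rate and per-frame latency on a recorded sequence. The sketch below is a generic benchmarking harness; the `detect` callable is a placeholder for whichever marker system is being compared, not the paper's actual protocol:

```python
# Sketch of a benchmark loop for 2D marker detectors: measure per-frame
# latency and overall detection rate across an image sequence. The detect()
# callable stands in for any marker detector under evaluation.
import time

def benchmark(detect, frames):
    """detect(frame) -> list of detections; frames -> iterable of images."""
    hits, latencies = 0, []
    for frame in frames:
        t0 = time.perf_counter()
        detections = detect(frame)
        latencies.append(time.perf_counter() - t0)
        hits += bool(detections)
    n = len(latencies)
    return {"detection_rate": hits / n,
            "mean_latency_ms": 1000.0 * sum(latencies) / n}
```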

    PIXHAWK: A micro aerial vehicle design for autonomous flight using onboard computer vision

    We describe a novel quadrotor Micro Air Vehicle (MAV) system designed to use computer vision algorithms within the flight control loop. The main contribution is a MAV system able to run both vision-based flight control and stereo-vision-based obstacle detection in parallel on an embedded computer onboard the MAV. The system design features the integration of a powerful onboard computer and the synchronization of IMU and vision measurements through hardware timestamping, which allows tight integration of IMU measurements into the computer vision pipeline. We evaluate the accuracy of marker-based visual pose estimation for flight control and demonstrate marker-based autonomous flight, including obstacle detection using stereo vision. We also show the benefits of our IMU-vision synchronization for egomotion estimation in additional experiments, where we use the synchronized measurements for pose estimation using the 2pt+gravity formulation of the PnP problem.
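The degree-of-freedom argument behind the 2pt+gravity formulation can be stated compactly. The notation below is schematic (the paper's exact parametrization may differ): once the IMU's gravity estimate fixes roll and pitch via a known rotation $R_g$, only the yaw angle $\psi$ and the translation $\mathbf{t}$ remain unknown.

```latex
% Schematic projection model with roll/pitch fixed by the IMU (R_g known):
\lambda_i \, \mathbf{x}_i = K \left( R_z(\psi)\, R_g\, \mathbf{X}_i + \mathbf{t} \right),
\qquad i = 1, 2
% Unknowns: psi (1) + t (3) = 4 DoF. Each 2D-3D correspondence contributes
% two equations, so two point correspondences suffice -- hence "2pt+gravity".
```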

    DepthLiDAR: active segmentation of environment depth map into mobile sensors

    This paper presents a novel approach for creating virtual LiDAR scanners through the active segmentation of point clouds. The method employs top-view point cloud segmentation to build virtual LiDAR sensors that can support the intelligent behavior of autonomous agents. Segmentation is correlated with visual tracking of the agent for localization within the environment and the point cloud. Virtual LiDAR sensors with different characteristics and positions can then be generated. This method, referred to as the DepthLiDAR approach, is rigorously evaluated to quantify its performance and determine its advantages and limitations. An extensive set of experiments is conducted using real and virtual LiDAR sensors to compare both approaches. The objective is to propose a novel method to incorporate spatial perception in warehouses, aiming toward Industry 4.0; the method is therefore tested in a small-scale warehouse with realistic features. The analysis of the experiments shows a measurement improvement of 52.24% compared to the conventional LiDAR. This work was supported in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior-Brasil (CAPES), Finance Code 001, and in part by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).
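The core transformation behind any virtual LiDAR is turning a 3D point cloud into a 2D laser scan. A minimal sketch, under assumed slice bounds and beam count (the paper's segmentation and tracking steps are not shown):

```python
# Sketch of a virtual 2D LiDAR: keep a horizontal slice of the point cloud
# around the virtual sensor height, bin points by azimuth, and report the
# closest range per bin, mimicking a planar laser scan. Parameters are
# illustrative assumptions, not the DepthLiDAR configuration.
import numpy as np

def virtual_scan(points, z_min=0.1, z_max=0.3, n_beams=360, max_range=20.0):
    """points: (N, 3) array in the virtual sensor's frame -> (n_beams,) ranges."""
    pts = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    ranges = np.full(n_beams, max_range)
    if len(pts) == 0:
        return ranges
    az = np.arctan2(pts[:, 1], pts[:, 0])    # beam angle of each point
    r = np.hypot(pts[:, 0], pts[:, 1])       # planar range of each point
    beams = ((az + np.pi) / (2 * np.pi) * n_beams).astype(int) % n_beams
    np.minimum.at(ranges, beams, r)          # keep the nearest hit per beam
    return ranges
```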

    Drone deep reinforcement learning: A review

    Unmanned Aerial Vehicles (UAVs) are increasingly being used in many challenging and diversified applications, spanning both the civilian and military fields: infrastructure inspection, traffic patrolling, remote sensing, mapping, surveillance, rescuing humans and animals, environment monitoring, and Intelligence, Surveillance, Target Acquisition, and Reconnaissance (ISTAR) operations, to name a few. However, the use of UAVs in these applications requires a substantial level of autonomy; in other words, UAVs should be able to accomplish planned missions in unexpected situations without human intervention. To ensure this level of autonomy, many artificial intelligence algorithms have been designed, targeting the guidance, navigation, and control (GNC) of UAVs. In this paper, we describe the state of the art of one subset of these algorithms: deep reinforcement learning (DRL) techniques. We provide a detailed description of them and deduce the current limitations in this area. We note that most of these DRL methods were designed to ensure stable and smooth UAV navigation by training in computer-simulated environments, and we conclude that further research efforts are needed to address the challenges that restrain their deployment in real-life scenarios.
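A toy illustration of the simulate-then-train paradigm the review surveys: tabular Q-learning steering an agent to a goal cell on a small grid. Real UAV work uses deep networks and physics simulators; this self-contained sketch only shows the reinforcement-learning loop itself, with illustrative hyperparameters:

```python
# Tabular Q-learning on a 5x5 grid as a stand-in for waypoint navigation.
# All constants are illustrative; the point is the update rule, not the task.
import numpy as np

N, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
Q = np.zeros((N, N, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1             # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(2000):
    s = (0, 0)
    while s != GOAL:
        # Epsilon-greedy action selection.
        a = rng.integers(len(ACTIONS)) if rng.random() < eps \
            else int(np.argmax(Q[s]))
        dr, dc = ACTIONS[a]
        s2 = (min(max(s[0] + dr, 0), N - 1), min(max(s[1] + dc, 0), N - 1))
        reward = 1.0 if s2 == GOAL else -0.01  # small step cost favors short paths
        # Q-learning temporal-difference update.
        Q[s][a] += alpha * (reward + gamma * np.max(Q[s2]) - Q[s][a])
        s = s2
```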

    Cooperative strategies for the detection and localization of odorants with robots and artificial noses

    This research work addresses the design of a robotic platform oriented towards the implementation of bio-inspired cooperative search strategies. In particular, the design of both the electronics and the hardware has been focused on the real-world validation of algorithms capable of tackling search problems under uncertainty, such as the search for odor sources with spatio-temporal variability. These kinds of problems can be solved more efficiently by swarms formed by a considerable number of robots, and thus the proposed platform makes use of low-cost components. This has been made possible by combining standardized elements (such as the Arduino controller board and other integrated sensors) with custom parts that can be manufactured with a 3D printer, following the open-source hardware philosophy. The design requirements include energy efficiency (to maximize the robots' operating time), positioning capability within the search environment, and multi-sensor integration, with the incorporation of an artificial nose; luminosity, distance, humidity, and temperature sensors; and an electronic compass. The use of an efficient wireless communication strategy based on ZigBee is also addressed. The developed system, named GNBot, has been validated with respect to energy efficiency and its combined capabilities for autonomous spatial positioning and detection of ethanol-based odor sources. The presented platform, formed by the GNBot, its GNBoard electronics, and an abstraction layer built in Python, will thus simplify the implementation and evaluation of various strategies for the detection, search, and monitoring of odorants with conveniently standardized robot swarms equipped with artificial noses and other multimodal sensors.
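Position sharing over a ZigBee link typically looks like line-based messaging on a serial-attached radio (e.g. an XBee module). The port name and JSON message format below are assumptions for illustration, not the GNBot protocol:

```python
# Sketch of swarm position sharing over a ZigBee radio exposed as a serial
# device. Port name, baud rate, and message format are assumed values.
import json
import serial  # pySerial

PORT = "/dev/ttyUSB0"  # assumed serial port of the ZigBee module

def broadcast_position(link, robot_id, x, y):
    """Send this robot's pose as one JSON line; the radio broadcasts it."""
    msg = json.dumps({"id": robot_id, "x": x, "y": y}) + "\n"
    link.write(msg.encode("ascii"))

def receive_positions(link):
    """Yield (id, x, y) for every complete line received from peers."""
    while link.in_waiting:
        line = link.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        try:
            m = json.loads(line)
            yield m["id"], m["x"], m["y"]
        except (json.JSONDecodeError, KeyError):
            continue  # drop malformed frames

link = serial.Serial(PORT, baudrate=9600, timeout=0.1)
```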