100 research outputs found

    Towards Multi-robot Exploration: A Decentralized Strategy for UAV Forest Exploration

    Efficient exploration strategies are vital in tasks such as search-and-rescue missions and disaster surveying. Unmanned Aerial Vehicles (UAVs) have become particularly popular in such applications, promising to cover large areas at high speeds. Moreover, with the increasing maturity of onboard UAV perception, research focus has been shifting toward higher-level reasoning for single- and multi-robot missions. However, autonomous navigation and exploration of previously unknown large spaces still constitutes an open challenge, especially when the environment is cluttered and exhibits large and frequent occlusions due to high obstacle density, as is the case in forests. Moreover, the problem of long-distance wireless communication in such scenes can become a limiting factor, especially when automating the navigation of a UAV swarm. In this spirit, this work proposes an exploration strategy that enables UAVs, both individually and in small swarms, to quickly explore complex scenes in a decentralized fashion. By providing each UAV with the decision-making capability to switch between different execution modes, the proposed strategy strikes a balance between cautious exploration of completely unknown regions and more aggressive exploration of smaller areas of unknown space. This results in full coverage of forest areas of variable density, consistently faster than the state of the art. Demonstrating successful deployment with a single UAV as well as a swarm of up to three UAVs, this work sets out the basic principles for multi-robot exploration of cluttered scenes, with up to 65% speed-up in the single-UAV case and a 40% increase in explored area for the same mission time in multi-UAV setups.
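    The abstract describes the mode-switching idea only at a high level. Below is a minimal sketch of what such a per-UAV decision rule could look like; the mode names, map statistics and thresholds are assumptions chosen for illustration, not taken from the paper.

```python
# Illustrative sketch only; mode names, inputs and thresholds are assumptions,
# not taken from the paper.
from enum import Enum, auto


class Mode(Enum):
    CAUTIOUS = auto()    # careful advance into large, fully unknown regions
    AGGRESSIVE = auto()  # fast sweep of small residual pockets of unknown space


def choose_mode(unknown_ratio: float, pocket_volume_m3: float,
                unknown_thresh: float = 0.6,
                small_pocket_m3: float = 50.0) -> Mode:
    """Pick an execution mode from statistics of the UAV's own local map,
    so each vehicle decides for itself (decentralized operation)."""
    if unknown_ratio > unknown_thresh:
        return Mode.CAUTIOUS      # mostly unknown ahead: explore cautiously
    if pocket_volume_m3 < small_pocket_m3:
        return Mode.AGGRESSIVE    # only a small pocket left: clear it quickly
    return Mode.CAUTIOUS
```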

    KEYFRAME-BASED VISUAL-INERTIAL SLAM USING NONLINEAR OPTIMIZATION

    Abstract—The fusion of visual and inertial cues has become popular in robotics due to the complementary nature of the two sensing modalities. While most fusion strategies to date rely on filtering schemes, the visual robotics community has recently turned to non-linear optimization approaches for tasks such as visual Simultaneous Localization And Mapping (SLAM), following the discovery that this comes with significant advantages in quality of performance and computational complexity. Following this trend, we present a novel approach to tightly integrate visual measurements with readings from an Inertial Measurement Unit (IMU) in SLAM. An IMU error term is integrated with the landmark reprojection error in a fully probabilistic manner, resulting in a joint non-linear cost function to be optimized. Employing the powerful concept of ‘keyframes’, we partially marginalize old states to maintain a bounded-size optimization window, ensuring real-time operation. Comparing against both vision-only and loosely-coupled visual-inertial algorithms, our experiments confirm the benefits of tight fusion in terms of accuracy and robustness.
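    The abstract describes a joint cost combining reprojection and IMU error terms but does not reproduce it. A generic sketch of such a tightly-coupled visual-inertial cost, with illustrative notation rather than the paper's own, is:

```latex
% Generic sketch of a tightly-coupled visual-inertial cost; the index sets and
% weight matrices below are illustrative notation, not copied from the paper.
J(\mathbf{x}) =
    \sum_{k}\sum_{j \in \mathcal{J}(k)}
        \mathbf{e}_{r}^{k,j\,\top}\,\mathbf{W}_{r}^{k,j}\,\mathbf{e}_{r}^{k,j}
  + \sum_{k}
        \mathbf{e}_{s}^{k\,\top}\,\mathbf{W}_{s}^{k}\,\mathbf{e}_{s}^{k}
```

    Here the first sum collects landmark reprojection errors over keyframes k and visible landmarks j, the second sum collects IMU error terms between consecutive keyframes, and the weight matrices W are the corresponding inverse covariances, which is what makes the formulation probabilistic.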

    Multi-robot active perception for fast and efficient scene coverage.

    The efficiency of path-planning in robot navigation is crucial in tasks such as search-and-rescue and disaster surveying, and this is emphasised even more when considering multi-rotor aerial robots due to their limited battery and flight time. In this spirit, this Master's Thesis proposes an efficient, hierarchical planner to achieve comprehensive visual coverage of large-scale outdoor scenarios for small drones. Following an initial reconnaissance flight, a coarse map of the scene is built in real time. Then, regions of the map that were not appropriately observed are identified and grouped by a novel perception-aware clustering process that enables the generation of continuous trajectories (sweeps) to cover them efficiently. Thanks to this partitioning of the map into a set of tasks, we are able to generalize the planning to an arbitrary number of drones and perform a well-balanced workload distribution among them. We compare our approach to an alternative state-of-the-art method for exploration and show the advantages of our pipeline in terms of efficiency for obtaining coverage in large environments.
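    A rough sketch of the "partition into tasks, then balance across drones" step described above follows; the greedy longest-processing-time rule and the per-cluster sweep cost are assumptions used for illustration, not the thesis's actual allocation method.

```python
# Illustrative sketch; the greedy balancing rule and the notion of a per-cluster
# sweep cost are assumptions, not the thesis's actual allocation method.
from typing import List


def assign_clusters(cluster_costs: List[float], num_drones: int) -> List[List[int]]:
    """Distribute clusters of poorly observed regions across drones so that the
    accumulated sweep cost (e.g., estimated flight time) stays balanced."""
    order = sorted(range(len(cluster_costs)), key=lambda i: -cluster_costs[i])
    loads = [0.0] * num_drones
    plan: List[List[int]] = [[] for _ in range(num_drones)]
    for idx in order:                    # largest clusters first (LPT heuristic)
        d = min(range(num_drones), key=loads.__getitem__)
        plan[d].append(idx)
        loads[d] += cluster_costs[idx]
    return plan
```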


    BRISK: Binary Robust Invariant Scalable Keypoints


    People Detection and Tracking from Aerial Thermal Views


    Path Planning for Motion Dependent State Estimation on Micro Aerial Vehicles

    Abstract — With navigation algorithms reaching a certain maturity in the field of mobile robots, the community now focuses on more advanced tasks, like path planning, towards increased autonomy. While the goal is to efficiently compute a path to a target destination, the uncertainty in the robot’s perception cannot be ignored if a realistic path is to be computed. With most state-of-the-art navigation systems providing the uncertainty in motion estimation, here we propose to exploit this information. This leads to a system that can plan safe avoidance of obstacles and, more importantly, can actively aid navigation by choosing a path that minimizes the uncertainty in the monitored states. Our proposed approach is applicable to systems requiring certain excitations in order to render all their states observable, such as a MAV with visual-inertial based localization. In this work, we propose an approach which takes this necessary motion into account during path planning: by employing Rapidly-exploring Random Belief Trees (RRBT), the proposed approach chooses a path to a goal which allows for the best estimation of the robot’s states, while inherently avoiding motion in unobservable modes. We discuss our findings within the scenario of vision-based aerial navigation as one of the most challenging navigation problems, requiring sufficient excitation to reach full observability.
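    As a hedged illustration of the selection criterion described above (prefer the path along which the estimator's uncertainty stays smallest), the sketch below scores candidate paths by their predicted final covariance trace; it is not an implementation of RRBT itself, and the function names and the covariance-propagation model are assumptions.

```python
# Illustrative sketch of the path-selection criterion only; `propagate_cov` is an
# assumed user-supplied uncertainty model, and this is not an RRBT implementation.
from typing import Callable, List, Sequence

import numpy as np


def best_path(paths: Sequence[List[np.ndarray]],
              propagate_cov: Callable[[np.ndarray, np.ndarray], np.ndarray],
              P0: np.ndarray) -> List[np.ndarray]:
    """Return the candidate path whose predicted final state covariance has the
    smallest trace. Paths that keep the vehicle in well-excited, observable
    motion yield smaller traces and are therefore preferred."""
    def final_trace(path: List[np.ndarray]) -> float:
        P = P0
        for pose in path:
            P = propagate_cov(P, pose)   # predict covariance along this segment
        return float(np.trace(P))
    return min(paths, key=final_trace)
```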

    A ROBUST AND MODULAR MULTI-SENSOR FUSION APPROACH APPLIED TO MAV NAVIGATION

    Abstract — It has long been known that fusing information from multiple sensors for robot navigation results in increased robustness and accuracy. However, accurate calibration of the sensor ensemble prior to deployment in the field, as well as coping with sensor outages, different measurement rates and delays, render multi-sensor fusion a challenge. As a result, most often, systems do not exploit all the sensor information available in exchange for simplicity. For example, on a mission requiring transition of the robot from indoors to outdoors, it is the norm to ignore the Global Positioning System (GPS) signals which become freely available once outdoors and instead rely only on sensor feeds (e.g., vision and laser) continuously available throughout the mission. Naturally, this comes at the expense of robustness and accuracy in real deployment. This paper presents a generic framework, dubbed Multi-Sensor-Fusion Extended Kalman Filter (MSF-EKF), able to process delayed, relative and absolute measurements from a theoretically unlimited number of different sensors and sensor types, while allowing self-calibration of the sensor suite. The modularity of MSF-EKF allows seamless handling of additional or lost sensor signals online during operation, while employing a state buffering scheme augmented with Iterated EKF (IEKF) updates to allow for efficient re-linearization of the propagation and to obtain near-optimal linearization points for both absolute and relative state updates. We demonstrate our approach in outdoor navigation experiments using a Micro Aerial Vehicle (MAV) equipped with a GPS receiver as well as visual, inertial, and pressure sensors.
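    The abstract mentions a state buffering scheme for delayed measurements without detailing it. Below is a minimal sketch of how such a buffer is commonly handled (update the buffered state at the measurement's timestamp, then re-propagate newer states); the data structures and function names are assumptions, not MSF-EKF's actual API.

```python
# Illustrative sketch; the data structures and function names are assumptions,
# not MSF-EKF's actual API.
from bisect import bisect_right
from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class BufferedState:
    stamp: float   # time of this state
    x: Any         # state estimate at `stamp`
    u: Any         # input (e.g., IMU reading) applied after `stamp`


def apply_delayed_measurement(buf: List[BufferedState], stamp: float, z: Any,
                              update: Callable[[Any, Any], Any],
                              propagate: Callable[[Any, Any], Any]) -> None:
    """Apply a measurement that refers to a past time: update the buffered state
    at (or just before) `stamp`, then re-propagate every newer state forward."""
    idx = bisect_right([s.stamp for s in buf], stamp) - 1
    idx = max(idx, 0)                       # clamp if the stamp predates the buffer
    buf[idx].x = update(buf[idx].x, z)      # EKF/IEKF measurement update in the past
    for k in range(idx + 1, len(buf)):      # replay propagation up to the present
        buf[k].x = propagate(buf[k - 1].x, buf[k - 1].u)
```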