36 research outputs found

    Perception-aware receding horizon trajectory planning for multicopters with visual-inertial odometry

    Visual-inertial odometry (VIO) is widely used for the state estimation of multicopters, but it may perform poorly in environments with few visual features or during overly aggressive flight. In this work, we propose a perception-aware collision-avoidance trajectory planner for multicopters that can be used with any feature-based VIO algorithm. Our approach flies the vehicle to a goal position at high speed, avoiding obstacles in an unknown stationary environment while maintaining good VIO state-estimation accuracy. The proposed planner samples a group of minimum-jerk trajectories and finds the collision-free trajectories among them, which are then evaluated on their speed toward the goal and their perception quality. Both the motion blur of features and their locations are considered in the perception quality. Our novel treatment of feature motion blur enables automatic adaptation of the trajectory's aggressiveness to environments with different light levels. The best trajectory from the evaluation is tracked by the vehicle and is updated in a receding-horizon manner as new images are received from the camera. Only generic assumptions about the VIO are made, so the planner can be used with various existing systems. The proposed method runs in real time on a small on-board embedded computer. We validated the effectiveness of the proposed approach through experiments in both indoor and outdoor environments. Compared to a perception-agnostic planner, the proposed planner kept more features in the camera's view and made the flight less aggressive, which improved VIO accuracy. It also avoided the VIO failures that occurred with the perception-agnostic planner. The planner's ability to fly through dense obstacles was also validated. The experiment video can be found at https://youtu.be/qO3LZIrpwtQ. Comment: 12 pages
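As a rough illustration of the sampling-and-scoring idea (not the authors' implementation), the sketch below samples a few one-dimensional minimum-jerk profiles toward hypothetical end points and keeps the one that best trades progress toward the goal against a perception penalty; the penalty values stand in for the paper's motion-blur and feature-location terms:

```python
import numpy as np

def min_jerk(p0, pf, T, n=20):
    """Sample a 1-D minimum-jerk position profile from p0 to pf over T seconds."""
    t = np.linspace(0.0, T, n) / T
    s = 10 * t**3 - 15 * t**4 + 6 * t**5   # classic minimum-jerk time scaling
    return p0 + (pf - p0) * s

def score(traj, goal, perception_penalty):
    """Higher is better: progress toward the goal minus a perception penalty."""
    progress = -abs(goal - traj[-1])
    return progress - perception_penalty

# Sample candidate end points and keep the best-scoring trajectory.
goal = 5.0
candidates = [min_jerk(0.0, pf, T=2.0) for pf in (2.0, 4.0, 5.0)]
penalties = [0.1, 0.3, 0.8]  # hypothetical motion-blur penalties per candidate
best = max(zip(candidates, penalties),
           key=lambda cp: score(cp[0], goal, cp[1]))
```

In the real planner the candidates are 3-D, checked for collisions first, and re-sampled in a receding-horizon loop each time a new image arrives.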

    Concept and Feasibility Evaluation of Distributed Sensor-Based Measurement Systems Using Formation Flying Multicopters

    Unmanned aerial vehicles (UAVs) are used in a growing range of research applications in atmospheric measurement. However, most current solutions for these applications are based on a single UAV with limited payload capacity. To address the limitations of the single-UAV approach, this paper proposes a new measurement concept using tandem-flying multicopters as a distributed sensor platform. Key challenges of the proposed concept are identified, including relative position estimation and control in wind-perturbed outdoor environments and precise alignment of the payloads. In the proposed concept, sliding-mode control is chosen as the relative position controller, and a gimbal stabilization system is introduced to achieve fine payload alignment. The position-estimation sensors (including a global navigation satellite system receiver and real-time kinematic positioning) and the flight controller are characterized using different UAVs (a DJI Matrice M600 Pro hexacopter and a Tarot X4 frame-based quadcopter) under different wind levels. Based on the experimental data, the performance of the sliding-mode controller and of the gimbal stabilization system is evaluated in a hardware-in-the-loop simulation environment (called ELISSA). Preliminary achievable control accuracies for the relative position and attitude of the subsystems in the proposed concept are estimated from the experimental results.
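A one-dimensional sliding-mode position controller can be sketched as follows; the gains (`lam`, `k`), the boundary-layer width `eps`, and the constant wind disturbance are illustrative assumptions, not values from the paper:

```python
import numpy as np

def sliding_mode_control(e, e_dot, lam=1.0, k=2.0, eps=0.05):
    """Sliding-mode law: s = e_dot + lam*e, u = -k*sat(s/eps).
    The saturated boundary layer replaces sign(s) to reduce chattering."""
    s = e_dot + lam * e
    sat = np.clip(s / eps, -1.0, 1.0)
    return -k * sat

# Simulate a double integrator regulating the relative position error to zero
# under a constant wind-like disturbance (hypothetical values).
dt, wind = 0.01, 0.5
x, v = 1.0, 0.0          # initial relative position error and velocity
for _ in range(2000):
    u = sliding_mode_control(x, v)
    v += (u + wind) * dt
    x += v * dt
```

Because the control authority `k` exceeds the disturbance bound, the state is driven to the sliding surface and the residual error stays within the boundary layer, which is the property that makes this controller attractive for wind-perturbed relative positioning.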

    Helipad detection for accurate UAV pose estimation by means of a visual sensor

    In this article, we tackle the problem of developing a visual framework to allow the autonomous landing of an unmanned aerial vehicle onto a platform using a single camera. Specifically, we propose a vision-based helipad detection algorithm to estimate the attitude of the drone, on which the camera is fastened, with respect to the target. Since the algorithm should be simple and fast, we implemented a curvature-based method to detect the heliport marks, that is, the corners of the character 'H'. Knowing the size of the H mark and the actual locations of its corners, we can compute the homography matrix containing the relative pose information. The effectiveness of our methodology has been proven through controlled indoor and outdoor experiments. The outcomes show that the method provides high accuracy in estimating the distance and orientation of the camera with respect to the visual target; specifically, errors below 1% and 4%, respectively, have been achieved for the two measurements.
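Once the H-mark corners are matched to their known metric positions, the homography can be recovered with the standard direct linear transform (DLT). A minimal sketch, in which the mark size and the detected pixel coordinates are hypothetical:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (>= 4 points) via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the last right singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Known metric corner positions of the mark (hypothetical 0.6 m square H mark)
# and their detected pixel locations in the image.
mark = [(0.0, 0.0), (0.6, 0.0), (0.6, 0.6), (0.0, 0.6)]
pixels = [(100.0, 120.0), (340.0, 118.0), (338.0, 360.0), (102.0, 362.0)]
H = homography_dlt(mark, pixels)
```

Given the camera intrinsics, `H` can then be decomposed into the relative rotation and translation, which is what provides the distance and orientation estimates discussed above.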

    Improving the Robustness of Monocular Vision-Aided Navigation for Multirotors through Integrated Estimation and Guidance

    Multirotors could be used to autonomously perform tasks in search-and-rescue, reconnaissance, or infrastructure-monitoring applications. In these environments, the vehicle may have limited or degraded GPS access. Researchers have investigated methods for simultaneous localization and mapping (SLAM) using on-board vision sensors, allowing vehicles to navigate in GPS-denied environments. In particular, SLAM solutions based on a monocular camera offer low-cost, low-weight, and accurate navigation indoors and outdoors without explicit range limitations. However, a monocular camera is a bearing-only sensor. Additional sensors are required to achieve metric pose estimation, and the structure of a scene can only be recovered through camera motion. Because of these challenges, the performance of monocular-based navigation solutions is typically very sensitive to the environment and the vehicle’s trajectory. This work proposes an integrated estimation and guidance approach for improving the robustness of monocular SLAM to environmental uncertainty. It is specifically intended for a multirotor carrying a monocular camera, downward-facing rangefinder, and inertial measurement unit (IMU). A guidance maneuver is proposed that takes advantage of the metric rangefinder measurements. When the environmental uncertainty is high, the vehicle simply moves up and down, initializing features with a confident and accurate baseline. In order to demonstrate this technique, a vision-aided navigation solution is implemented which includes a unique approach to feature covariance initialization that is based on consider least squares. Features are only initialized if there is enough information to accurately triangulate their position, providing an indirect metric of environmental uncertainty that could be used to signal the guidance maneuver. The navigation filter is validated using hardware and simulated data. 
Finally, simulations show that the proposed initialization maneuver is a simple, practical, and effective way to improve the robustness of monocular vision-aided navigation and could increase the amount of autonomy that GPS-denied multirotors are capable of achieving.
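The up-and-down maneuver works because the rangefinder-measured climb gives each feature a metric vertical parallax. A minimal two-ray sketch of that idea (not the consider-least-squares initialization used in the work; all numbers are illustrative):

```python
import math

def depth_from_vertical_baseline(elev1, elev2, baseline):
    """Horizontal depth to a feature observed before and after the vehicle
    climbs by `baseline` metres. elev1/elev2 are the feature's elevation
    angles (radians) at the lower and upper camera positions."""
    parallax = math.tan(elev1) - math.tan(elev2)
    if abs(parallax) < 1e-9:
        raise ValueError("baseline too small to triangulate this feature")
    return baseline / parallax

# Hypothetical feature 4 m ahead and 1 m above the lower camera position;
# the vehicle climbs 0.5 m between the two observations.
d_true, h, b = 4.0, 1.0, 0.5
e1 = math.atan2(h, d_true)
e2 = math.atan2(h - b, d_true)
d_est = depth_from_vertical_baseline(e1, e2, b)
```

The parallax check mirrors the filter's gating logic: a feature is only initialized when the baseline produces enough angular change to triangulate its position confidently.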

    3D Active Metric-Semantic SLAM

    In this letter, we address the problem of exploration and metric-semantic mapping of multi-floor, GPS-denied indoor environments using Size, Weight, and Power (SWaP) constrained aerial robots. Most previous work on exploration assumes that robot localization is solved. However, neglecting the state uncertainty of the agent can ultimately lead to cascading errors both in the resulting map and in the state of the agent itself. Furthermore, actions that reduce localization errors may be at direct odds with the exploration task. We propose a framework that balances the efficiency of exploration against actions that reduce the state uncertainty of the agent. In particular, our algorithmic approach for active metric-semantic SLAM is built upon sparse information abstracted from raw problem data, making it suitable for SWaP-constrained robots. Furthermore, we integrate this framework within a fully autonomous aerial robotic system that achieves autonomous exploration in cluttered 3D environments. Extensive real-world experiments show that, by including Semantic Loop Closure (SLC), we can reduce robot pose-estimation errors by over 90% in translation and approximately 75% in yaw, and the uncertainties in pose estimates and semantic maps by over 70% and 65%, respectively. Although discussed in the context of indoor multi-floor exploration, our system can be used for various other applications, such as infrastructure inspection and precision agriculture, where reliable GPS data may not be available. Comment: Submitted to RA-L for review
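The trade-off between exploration efficiency and uncertainty-reducing actions can be sketched as a utility over candidate actions; the weighting scheme, action names, and numbers below are invented for illustration and are not the paper's objective function:

```python
def action_utility(explore_gain, uncertainty_reduction, alpha=0.5):
    """Blend exploration payoff with expected localization improvement.
    alpha (hypothetical weight) trades one objective against the other."""
    return (1 - alpha) * explore_gain + alpha * uncertainty_reduction

# Candidate actions: (expected newly observed volume, expected reduction in
# pose uncertainty, e.g. from a semantic loop-closure opportunity).
candidates = {
    "frontier_A":   (8.0, 0.1),  # much new space, little loop-closure value
    "revisit_hall": (0.5, 6.0),  # strong semantic loop-closure opportunity
    "frontier_B":   (4.0, 2.0),
}
best = max(candidates, key=lambda name: action_utility(*candidates[name]))
```

Raising `alpha` makes the planner prefer the loop-closure revisit over raw frontier gain, which is the qualitative behavior the framework needs when pose uncertainty grows.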

    Low computational SLAM for an autonomous indoor aerial inspection vehicle

    The past decade has seen an increase in the capability of small-scale Unmanned Aerial Vehicle (UAV) systems, made possible by technological advancements in battery, computing, and sensor miniaturisation. This has opened a new and rapidly growing branch of robotics research and has sparked the imagination of industry, leading to new UAV-based services ranging from the inspection of power lines to remote police surveillance. Miniaturisation has also made UAVs small enough to be practically flown indoors, for example to inspect elevated areas in hazardous or damaged structures where conventional ground-based robots are unsuitable. Sellafield Ltd, a nuclear reprocessing facility in the U.K., has many buildings that require frequent safety inspections. UAV inspections eliminate the current risk to personnel of radiation exposure and other hazards in tall structures where scaffolding or hoists would otherwise be required. This project focused on the development of a UAV for the novel application of semi-autonomously navigating and inspecting these structures without the need for personnel to enter the building. Development exposed a significant gap in knowledge concerning indoor localisation, specifically Simultaneous Localisation and Mapping (SLAM) on board UAVs. To lower the on-board processing requirements of SLAM, other UAV research groups have employed techniques such as off-board processing, reduced dimensionality, or prior knowledge of the structure; these techniques are unsuitable for this application given the unknown nature of the structures and the risk of radio shadows. In this thesis a novel localisation algorithm is proposed that enables real-time, three-dimensional SLAM running solely on board a computationally constrained UAV in heavily cluttered and unknown environments. The algorithm, based on the Iterative Closest Point (ICP) method and utilising approximate nearest-neighbour searches and point-cloud decimation to reduce the processing requirements, has been successfully tested in environments similar to those specified by Sellafield Ltd.
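A minimal sketch of the decimation plus one ICP alignment step (translation only, on a synthetic grid cloud, with a brute-force nearest-neighbour search standing in for the approximate searches the thesis uses):

```python
import numpy as np

def decimate(cloud, keep_every=4):
    """Point-cloud decimation: keep every k-th point to cut NN search cost."""
    return cloud[::keep_every]

def icp_translation_step(src, dst):
    """One ICP iteration, translation only. Brute-force nearest neighbours
    for clarity; an approximate NN structure would replace this on a UAV."""
    # For each source point, find its nearest destination point.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    matches = dst[d2.argmin(axis=1)]
    # The optimal translation aligns the centroids of the matched pairs.
    return matches.mean(axis=0) - src.mean(axis=0)

# Synthetic map: a 6x6x6 grid of points, and a new scan of the same scene
# displaced by a small unknown offset (illustrative values).
g = np.linspace(-1.0, 1.0, 6)
scan_map = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T   # (216, 3)
offset = np.array([0.05, -0.03, 0.02])
new_scan = scan_map + offset
t = icp_translation_step(decimate(new_scan), scan_map)        # ~ -offset
```

Decimating the source cloud before matching is where the on-board saving comes from: the NN search cost scales with the number of source points, so a 4:1 decimation cuts it roughly fourfold at little accuracy cost.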