
    Active Collaborative Localization in Heterogeneous Robot Teams

    Accurate and robust state estimation is critical for autonomous navigation of robot teams. This task is especially challenging for large groups of size, weight, and power (SWAP) constrained aerial robots operating in perceptually degraded, GPS-denied environments. We can, however, actively increase the amount of perceptual information available to such robots by augmenting them with a small number of more expensive, but less resource-constrained, agents. Specifically, the latter can serve as sources of perceptual information themselves. In this paper, we study the problem of optimally positioning (and potentially navigating) a small number of more capable agents to enhance the perceptual environment for their lightweight, inexpensive teammates that only need to rely on cameras and IMUs. We propose a numerically robust, computationally efficient approach to solve this problem via nonlinear optimization. Our method outperforms the standard approach based on the greedy algorithm, while matching the accuracy of a heuristic evolutionary scheme for global optimization at a fraction of its running time. Ultimately, we validate our solution in both photorealistic simulations and real-world experiments. In these experiments, we use lidar-based autonomous ground vehicles as the more capable agents, and vision-based aerial robots as their SWAP-constrained teammates. Our method is able to reduce drift in visual-inertial odometry by as much as 90%, and it outperforms random positioning of lidar-equipped agents by a significant margin. Furthermore, our method can be generalized to different types of robot teams with heterogeneous perception capabilities. It has a wide range of applications, such as surveying and mapping challenging dynamic environments, and enabling resilience to large-scale perturbations that can be caused by earthquakes or storms. Comment: To appear in Robotics: Science and Systems (RSS) 202
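
    The abstract describes choosing positions for a few capable agents so as to maximize the localization information available to the rest of the team, solved via nonlinear optimization. As a minimal sketch of that idea only (not the paper's actual formulation), the snippet below places a single anchor by maximizing a log-det-style information objective; the teammate positions and the distance-based information model are illustrative assumptions.

```python
# Hedged sketch: place one lidar-equipped "anchor" agent so that the summed
# localization information it provides to camera/IMU robots is maximized.
# The information model (decaying with squared distance) is a toy assumption,
# not the paper's perceptual model.
import numpy as np
from scipy.optimize import minimize

robot_xy = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 5.0]])  # assumed teammate positions
prior_info = 0.1                                           # assumed prior information per robot

def neg_team_information(anchor_xy):
    d2 = np.sum((robot_xy - anchor_xy) ** 2, axis=1)       # squared distances to anchor
    info = prior_info + 1.0 / (1.0 + d2)                   # toy information gain per robot
    return -np.sum(np.log(info))                           # D-optimality-style objective

res = minimize(neg_team_information, x0=np.array([1.0, 1.0]), method="L-BFGS-B")
print("anchor position:", res.x)
```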

    Stochastic Real-time Optimal Control: A Pseudospectral Approach for Bearing-Only Trajectory Optimization

    A method is presented to couple and solve the optimal control and optimal estimation problems simultaneously, allowing systems with bearing-only sensors to maneuver to obtain observability for relative navigation without unnecessarily detracting from a primary mission. A fundamentally new approach to trajectory optimization and the dual control problem is developed, constraining polynomial approximations of the Fisher Information Matrix to provide an information gradient and to allow prescription of the level of future estimation certainty required for mission accomplishment. Disturbances, modeling deficiencies, and corrupted measurements are addressed by recursively updating the target estimate with an Unscented Kalman Filter and the optimal path with Radau pseudospectral collocation and sequential quadratic programming. The basic real-time optimal control (RTOC) structure is investigated, specifically addressing limitations of current techniques in this area that lose error integration. The resulting guidance method can be applied to any bearing-only system, such as submarines using passive sonar, anti-radiation missiles, or small UAVs seeking to land on power lines for energy harvesting. Methods and tools required for implementation are developed, including variable calculation timing and tip-tail blending for potential discontinuities. Validation is accomplished with simulation and flight test, autonomously landing a quadrotor helicopter on a wire.
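
    The core quantity in this approach is the Fisher Information Matrix accumulated from bearing-only measurements along a candidate trajectory; the paper constrains polynomial approximations of it inside a pseudospectral optimal-control solver. The sketch below shows only the FIM accumulation step for a stationary 2-D target, with an assumed bearing noise level and an illustrative observer path.

```python
# Hedged sketch: Fisher Information Matrix of a stationary 2-D target from
# bearing-only measurements taken along an observer path (illustrative values).
import numpy as np

def bearing_fim(path_xy, target_xy, sigma_bearing=np.deg2rad(1.0)):
    fim = np.zeros((2, 2))
    for p in path_xy:
        dx, dy = target_xy - p
        r2 = dx * dx + dy * dy
        H = np.array([[-dy / r2, dx / r2]])   # Jacobian of bearing w.r.t. target position
        fim += H.T @ H / sigma_bearing**2
    return fim

# Example: a weaving observer path watching a target at (60, 10)
path = np.stack([np.linspace(0.0, 50.0, 25), 5.0 * np.sin(np.linspace(0.0, np.pi, 25))], axis=1)
print(np.linalg.det(bearing_fim(path, target_xy=np.array([60.0, 10.0]))))
```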

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity of ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications; spatial ecology; pest detection; reef; forestry; volcanology; precision agriculture; wildlife species tracking; search and rescue; target tracking; atmosphere monitoring; chemical, biological, and natural disaster phenomena; fire prevention; flood prevention; volcanic monitoring; pollution monitoring; microclimates; and land use; Wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.

    Aerial Vehicles

    This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space.

    Aeronautical engineering: A continuing bibliography with indexes (supplement 317)

    This bibliography lists 224 reports, articles, and other documents introduced into the NASA scientific and technical information system in May 1995. Subject coverage includes: design, construction and testing of aircraft and aircraft engines; aircraft components, equipment, and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics

    Design of an Autonomous Quadrotor UAV for Urban Search and Rescue

    This project entailed the design and testing of an indoor quadrotor UAV capable of autonomous take-off, landing, and path finding. The propulsion system produces 1500 g of thrust at 46% throttle using 7 propellers, minimizing craft size while allowing for sufficient payload to carry a LIDAR, a CMOS camera, and rangefinders. These sensors are interfaced to an Overo processor, which sends high-level commands to a low-level flight controller, the HoverflyPro. Flight tests were conducted that demonstrated flight control and sensor operation.

    Development and Validation of an IMU/GPS/Galileo Integration Navigation System for UAV

    Several distinct Unmanned Aircraft Vehicle (UAV) applications are emerging, demanding steps to be taken in order to allow those platforms to operate in un-segregated airspace. The key risk component hindering the widespread integration of UAVs in un-segregated airspace is the autonomous component: the need for a level of autonomy in the UAV high enough to guarantee a safe and secure integration. At this point, accurate UAV state estimation plays a fundamental role for autonomous UAVs, being one of the main responsibilities of the onboard autopilot. Given the 21st-century global economic paradigm, academic projects that pair inexpensive UAV platforms with expensive commercial autopilots are becoming economically unattractive. Consequently, there is a pressing need to overcome this problem through, on the one hand, the development of navigation systems that exploit the wide availability of low-cost, low-power, small-size navigation sensors on the market and, on the other hand, the use of Global Navigation Satellite System Software Receivers (GNSS SR). Since the performance required to allow UAVs to fly in un-segregated airspace is not yet defined for several applications, for most academic UAV applications the navigation system accuracy should be at least the same as that provided by available commercial autopilots. This research investigates the performance of an integrated navigation system composed of a low-performance inertial measurement unit (IMU) and a GNSS SR. A strapdown mechanization algorithm, which transforms raw inertial data into a navigation solution, was developed, implemented, and evaluated. To fuse the data provided by the strapdown algorithm with that provided by the GNSS SR, an Extended Kalman Filter (EKF) was implemented in a loosely coupled, closed-loop architecture and then evaluated. Moreover, in order to improve the quality of the IMU raw data, Allan variance and denoising techniques were considered both for studying the IMU error model and for improving the inertial sensors' raw measurements. To carry out the study, a starting question was posed and eight secondary questions were derived from it; these led to five hypotheses, which were successfully tested throughout the thesis. This research provides a deliverable to the Project of Research and Technologies on Unmanned Air Vehicles (PITVANT) Group, consisting of a well-documented UAV navigation algorithm, an implemented and evaluated navigation algorithm in the MATLAB environment, and Allan variance and denoising algorithms to improve inertial raw data, enabling its full implementation in the existing Portuguese Air Force Academy (PAFA) UAV. The deliverable provided by this thesis answers the main research question by setting out a step-by-step procedure for how the Strapdown IMU (SIMU)/GNSS SR system should be developed and implemented in order to replace the commercial autopilot. The developed integrated SIMU/GNSS SR solution was evaluated in post-processing mode in a van-test scenario, using real signal data, at the Galileo Test and Development Environment (GATE) test area in Berchtesgaden, Germany, and proved to be of better quality than the solution provided by the commercial autopilot. Although centimetre-level accuracy was not obtained for position and velocity, the results confirm that the integration strategy outperforms the Piccolo system, which was the ultimate goal of this research work.
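
    Allan variance is the tool the thesis mentions for characterizing IMU error terms before denoising. As a minimal sketch of that analysis step only, the snippet below computes the overlapping Allan variance of a synthetic gyro log; the sample rate, duration, and noise level are assumptions for illustration.

```python
# Hedged sketch: overlapping Allan variance of a gyro rate log (synthetic data;
# sample rate and noise level are assumed, not taken from the thesis).
import numpy as np

def allan_variance(rate, fs, taus):
    theta = np.cumsum(rate) / fs                      # integrate rate to angle
    avar = []
    for tau in taus:
        m = int(tau * fs)                             # cluster size in samples
        if m < 1 or 2 * m >= len(theta):
            avar.append(np.nan)
            continue
        d = theta[2 * m:] - 2.0 * theta[m:-m] + theta[:-2 * m]
        avar.append(np.sum(d ** 2) / (2.0 * tau ** 2 * (len(theta) - 2 * m)))
    return np.array(avar)

fs = 100.0                                            # assumed 100 Hz IMU
gyro = 0.01 * np.random.randn(int(600 * fs))          # 10 minutes of synthetic white noise
taus = np.logspace(-1, 2, 20)                         # averaging times from 0.1 s to 100 s
print(allan_variance(gyro, fs, taus))
```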

    Inertial navigation aided by simultaneous localization and mapping

    Get PDF
    Unmanned aerial vehicle technologies are becoming smaller and cheaper to use, and the payload limitations of unmanned aerial vehicles are being overcome. Integrated navigation system design requires the selection of a set of sensors and of computational resources that provide reliable and accurate navigation parameters (position, velocity, and attitude) with high update rates and bandwidth in a small and cost-effective package. Many of today's operational unmanned aerial vehicle navigation systems rely on inertial sensors as the primary measurement source. Inertial Navigation alone, however, suffers from slow divergence with time. This divergence is often compensated for by employing some additional source of navigation information external to Inertial Navigation. From the 1990s to the present day, the Global Positioning System has been the dominant navigation aid for Inertial Navigation. In a number of scenarios, however, Global Positioning System measurements may be completely unavailable, or they may simply not be precise (or reliable) enough to adequately update the Inertial Navigation; hence alternative methods have received great attention. Aiding Inertial Navigation with vision sensors has been the favoured solution over the past several years. Inertial and vision sensors, with their complementary characteristics, have the potential to meet the requirements for reliable and accurate navigation parameters. In this thesis we address Inertial Navigation position divergence. The information for updating the position comes from a combination of vision and motion. When using such a combination, many of the difficulties of vision sensors (relative depth, geometry and size of objects, image blur, etc.) can be circumvented. Motion provides the vision sensors with many cues that help them acquire information about the environment, for instance to create a precise map of the environment and to localize within it. We propose changes to the Simultaneous Localization and Mapping augmented state vector in order to take repeated measurements of a map point. We show that these repeated measurements, combined with certain manoeuvres (motion) around or past the map point, are crucial for constraining the Inertial Navigation position divergence (bounded estimation error) while manoeuvring in the vicinity of the map point. This eliminates some of the uncertainty of the map point estimates, i.e. it reduces the covariance of the map point estimates. This concept leads to a different parameterization (feature initialisation) of the map points in Simultaneous Localization and Mapping, and we refer to it as the concept of aiding Inertial Navigation by Simultaneous Localization and Mapping. We show that building such an integrated navigation system requires coordination with the guidance and control measurements and with the vehicle task itself, in order to perform the required vehicle manoeuvres (motion) and achieve better navigation accuracy. This fact brings new challenges to the practical design of these modern, jam-proof, Global Positioning System-free autonomous navigation systems. Further to the concept of aiding Inertial Navigation by Simultaneous Localization and Mapping, we investigated how a bearing-only sensor such as a single camera can be used to aid Inertial Navigation. The results of the concept of Inertial Navigation aided by Simultaneous Localization and Mapping were used, and a new parameterization of the map point in Bearing-Only Simultaneous Localization and Mapping is proposed. Because of the number of significant problems that appear when implementing the Extended Kalman Filter in Inertial Navigation aided by Bearing-Only Simultaneous Localization and Mapping, other algorithms such as the Iterated Extended Kalman Filter, the Unscented Kalman Filter, and Particle Filters were implemented. From the results obtained, the conclusion can be drawn that nonlinear filters should be the estimators of choice for this application.
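
    The thesis proposes its own map-point parameterization for bearing-only SLAM; that exact parameterization is not given in the abstract. For orientation only, the sketch below shows the widely used inverse-depth parameterization, which illustrates the general idea of initializing a map point from a single bearing measurement; the initial inverse depth and the example bearing are assumptions.

```python
# Hedged sketch: standard inverse-depth feature initialization for bearing-only
# SLAM (a common parameterization, not necessarily the one proposed in the thesis).
import numpy as np

def init_inverse_depth(cam_pos, bearing_world, rho0=0.1):
    """Encode a map point as (anchor position, azimuth, elevation, inverse depth)."""
    bx, by, bz = bearing_world / np.linalg.norm(bearing_world)
    azimuth = np.arctan2(by, bx)
    elevation = np.arctan2(bz, np.hypot(bx, by))
    return np.hstack([cam_pos, azimuth, elevation, rho0])

def to_cartesian(y):
    """Recover the 3-D point from the 6-D inverse-depth state."""
    anchor, az, el, rho = y[:3], y[3], y[4], y[5]
    ray = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
    return anchor + ray / rho

y = init_inverse_depth(np.zeros(3), np.array([1.0, 0.2, 0.1]))  # assumed camera bearing
print(to_cartesian(y))
```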

    Human-in-the-Loop Methods for Data-Driven and Reinforcement Learning Systems

    Recent successes have combined reinforcement learning algorithms and deep neural networks, yet reinforcement learning is still not widely applied to robotics and real-world scenarios. This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and are subject to catastrophic failures during training. Conversely, in real-world scenarios and after just a few data samples, humans are able to either provide demonstrations of the task, intervene to prevent catastrophic actions, or simply evaluate whether the policy is performing correctly. This research investigates how to integrate these human interaction modalities into the reinforcement learning loop, increasing sample efficiency and enabling real-time reinforcement learning in robotics and real-world scenarios. This novel theoretical foundation is called Cycle-of-Learning, a reference to how different human interaction modalities, namely task demonstration, intervention, and evaluation, are cycled through and combined with reinforcement learning algorithms. Results presented in this work show that a reward signal learned from human interaction accelerates the rate of learning of reinforcement learning algorithms, and that learning from a combination of human demonstrations and interventions is faster and more sample efficient than traditional supervised learning algorithms. Finally, the Cycle-of-Learning provides an effective transition from policies learned using human demonstrations and interventions to reinforcement learning. The theoretical foundation developed by this research opens new research paths to human-agent teaming scenarios where autonomous agents are able to learn from human teammates and adapt to mission performance metrics in real time and in real-world scenarios. Comment: PhD thesis, Aerospace Engineering, Texas A&M (2020). For more information, see https://vggoecks.com
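
    The abstract describes cycling task demonstrations, interventions, and evaluations into the reinforcement learning loop. As a minimal, hypothetical sketch of that interaction pattern only (the `env`, `agent`, and `human` objects are placeholders, not APIs from the thesis), the snippet below shows a rollout in which a human intervention overrides the agent's action and the resulting data is stored for imitation and reward learning.

```python
# Hedged sketch: human-in-the-loop rollout in the spirit of the Cycle-of-Learning.
# `env`, `agent`, and `human` are hypothetical placeholders with the usual
# reset/step/act interfaces; nothing here is the thesis's actual implementation.
from dataclasses import dataclass, field
from typing import Any, List, Tuple

@dataclass
class InteractionBuffer:
    demos: List[Tuple[Any, Any]] = field(default_factory=list)               # (state, human action)
    agent_steps: List[Tuple[Any, Any, float]] = field(default_factory=list)  # (state, action, reward)

def rollout(env, agent, human, buffer: InteractionBuffer, steps: int = 1000):
    state = env.reset()
    for _ in range(steps):
        action = agent.act(state)
        if human.wants_to_intervene(state, action):    # human override (intervention)
            action = human.act(state)
            buffer.demos.append((state, action))       # keep for imitation / reward learning
        next_state, reward, done, info = env.step(action)
        buffer.agent_steps.append((state, action, reward))  # keep for RL updates
        state = env.reset() if done else next_state
```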