Qualitative Failure Analysis for a Small Quadrotor Unmanned Aircraft System
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/106490/1/AIAA2013-4761.pd
Proprioceptive Invariant Robot State Estimation
This paper reports on DRIFT, a real-time invariant proprioceptive robot
state estimation framework. A didactic introduction to invariant Kalman
filtering is provided to make this cutting-edge symmetry-preserving approach
accessible to a broader range of robotics applications. The work then
develops a proprioceptive dead-reckoning framework that consumes only data
from an onboard inertial measurement unit and the robot's kinematics, with
two optional modules, a contact estimator and a gyro filter for low-cost
robots, enabling a variety of robotic platforms to track the robot's state
over long trajectories in the absence of perceptual data. Extensive
real-world experiments using a legged robot, an indoor wheeled robot, a field
robot, and a full-size vehicle, as well as simulation results with a marine
robot, are provided to understand the limits of DRIFT.
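The dead-reckoning idea underlying such a framework can be illustrated with a deliberately simplified planar sketch: integrating a gyro yaw rate and a kinematic body velocity forward in time. This is not DRIFT's invariant filter (which operates on matrix Lie groups with full covariance propagation); the function and its planar state are illustrative assumptions only.

```python
import numpy as np

def propagate(pose, gyro_z, body_vel, dt):
    """Dead-reckon a planar pose (x, y, yaw) one step forward.

    gyro_z   : yaw rate from the IMU gyroscope (rad/s)
    body_vel : forward velocity from robot kinematics (m/s)
    dt       : integration step (s)
    """
    x, y, yaw = pose
    yaw_new = yaw + gyro_z * dt
    # Integrate velocity in the world frame using the mid-point heading.
    yaw_mid = yaw + 0.5 * gyro_z * dt
    x_new = x + body_vel * np.cos(yaw_mid) * dt
    y_new = y + body_vel * np.sin(yaw_mid) * dt
    return np.array([x_new, y_new, yaw_new])

# Drive straight for 10 s at 1 m/s with zero yaw rate.
pose = np.zeros(3)
for _ in range(100):
    pose = propagate(pose, 0.0, 1.0, 0.1)
```

In the absence of perceptual corrections, error in this integration accumulates without bound, which is exactly why the abstract emphasises characterising the limits of dead reckoning over long trajectories.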
Localization Algorithms for GNSS-denied and Challenging Environments
In this dissertation, the problem of localization in GNSS-denied and challenging environments is addressed. The challenging environments considered are of two types: environments containing only low-resolution features, and environments containing moving objects. To achieve accurate pose estimates, localization errors are typically bounded by matching sensor observations against the surrounding environment. These challenging environments, unfortunately, cause problems for matching-based methods such as fingerprint matching and ICP. For instance, in environments with low-resolution features, on-board sensor measurements may match multiple positions on a map, creating ambiguity; in environments containing moving objects, the accuracy of the estimated pose is degraded by the moving objects during matching. In this dissertation, two sensor-fusion-based strategies are proposed to solve the localization problem for these two types of challenging environments, respectively.
For environments with only low-resolution features, such as flight over sea or desert, a multi-agent localization algorithm using pairwise communication with ranging and magnetic anomaly measurements is proposed. A scalable framework is then presented that extends the multi-agent localization algorithm to large groups of agents (e.g., 128 agents) by applying the Covariance Intersection (CI) algorithm. Simulation results show that the proposed algorithm handles large group sizes and achieves 10-meter-level localization performance over a 180 km travel distance, even under restrictive communication constraints.
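The CI rule mentioned above has a compact closed form: two estimates with unknown cross-correlation are fused through a convex combination of their information (inverse-covariance) matrices, which guarantees a consistent, if conservative, result. A minimal sketch follows; in practice the weight w would be optimised, e.g. to minimise the trace of the fused covariance, rather than fixed as here.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w):
    """Fuse two estimates with unknown cross-correlation (CI rule).

    w in [0, 1] weights the first estimate; the fused information
    matrix is the convex combination w*P1^-1 + (1-w)*P2^-1.
    """
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * P1i + (1 - w) * P2i)
    x = P @ (w * P1i @ x1 + (1 - w) * P2i @ x2)
    return x, P

# Fuse a loose estimate with a tighter one, trusting the tighter more.
x1, P1 = np.array([0.0, 0.0]), np.eye(2) * 4.0
x2, P2 = np.array([1.0, 1.0]), np.eye(2) * 1.0
x, P = covariance_intersection(x1, P1, x2, P2, w=0.2)
```

Because CI never assumes independence, the fused covariance is never optimistically small, which is what makes it safe for large decentralised groups where cross-correlations between agents are unknown.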
For environments containing moving objects, lidar-inertial solutions are proposed and tested. Inspired by the CI algorithm above, a potential solution that estimates and tracks the motions of multiple features is analyzed. To improve on this solution's performance and effectiveness, a lidar-inertial SLAM algorithm is then proposed. In this method, an efficient tightly coupled iterated Kalman filter with a built-in dynamic-object filter serves as the front end of the SLAM algorithm, and a factor-graph strategy using scan context for loop-closure detection serves as the back end. The performance of the proposed lidar-inertial SLAM algorithm is evaluated on several datasets collected in environments containing moving objects and compared with state-of-the-art lidar-inertial SLAM algorithms.
NeBula: TEAM CoSTAR’s robotic autonomy solution that won phase II of DARPA subterranean challenge
This paper presents and discusses algorithms, hardware, and software architecture developed by TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR’s demonstrations in Martian-analog surface and subsurface (lava tube) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (the space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments. We discuss the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge competition.
Peer Reviewed
Agha, A., Otsu, K., Morrell, B., Fan, D. D., Thakker, R., Santamaria-Navarro, A., Kim, S.-K., Bouman, A., Lei, X., Edlund, J., Ginting, M. F., Ebadi, K., Anderson, M., Pailevanian, T., Terry, E., Wolf, M., Tagliabue, A., Vaquero, T. S., Palieri, M., Tepsuporn, S., Chang, Y., Kalantari, A., Chavez, F., Lopez, B., Funabiki, N., Miles, G., Touma, T., Buscicchio, A., Tordesillas, J., Alatur, N., Nash, J., Walsh, W., Jung, S., Lee, H., Kanellakis, C., Mayo, J., Harper, S., Kaufmann, M., Dixit, A., Correa, G. J., Lee, C., Gao, J., Merewether, G., Maldonado-Contreras, J., Salhotra, G., Da Silva, M. S., Ramtoula, B., Fakoorian, S., Hatteland, A., Kim, T., Bartlett, T., Stephens, A., Kim, L., Bergh, C., Heiden, E., Lew, T., Cauligi, A., Heywood, T., Kramer, A., Leopold, H. A., Melikyan, H., Choi, H. C., Daftry, S., Toupet, O., Wee, I., Thakur, A., Feras, M., Beltrame, G., Nikolakopoulos, G., Shim, D., Carlone, L., & Burdick, J.
Postprint (published version)
On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation
The ubiquitous nature of GPS has fostered the widespread integration of navigation into a variety of applications, both civilian and military. One alternative that ensures continued flight operations in GPS-denied environments is vision-aided navigation, an approach that combines visual cues from a camera with an inertial measurement unit (IMU) to estimate the navigation states of a moving body. The majority of vision-based navigation research has been conducted in the electro-optical (EO) spectrum, which has limited operation in certain environments. The aim of this work is to explore how such approaches extend to infrared imaging sensors. In particular, it examines the ability of medium-wave infrared (MWIR) imagery, which is capable of operating at night and with improved visibility through smoke, to expand the breadth of operations that can be supported by vision-aided navigation. The experiments presented here are based on the Minor Area Motion Imagery (MAMI) dataset, which recorded GPS data, inertial measurements, EO imagery, and MWIR imagery captured during flights over Wright-Patterson Air Force Base. The approach applied here combines inertial measurements with EO position estimates from the structure-from-motion (SfM) algorithm. Although precision timing was not available for the MWIR imagery, the EO-based results demonstrate that trajectory estimates from SfM offer a significant increase in navigation accuracy when combined with inertial data, relative to using an IMU alone. Results also demonstrate that MWIR-based position solutions provide a trajectory reconstruction similar to EO-based solutions for the same scenes. While the MWIR imagery and the IMU could not be combined directly, comparison against the combined solution using EO data supports the conclusion that MWIR imagery (with its unique phenomenology) is capable of expanding the operating envelope of vision-aided navigation.
Autonomous Localization Of A Uav In A 3d Cad Model
This thesis presents a novel method for indoor localization and autonomous navigation of Unmanned Aerial Vehicles (UAVs) within a building, given a prebuilt Computer-Aided Design (CAD) model of the building. The proposed system is novel in that it combines machine learning and traditional computer vision techniques to provide a robust method for localizing and navigating a drone autonomously in indoor, GPS-denied environments, leveraging preexisting knowledge of the environment. The goal of this work is to devise a method that enables a UAV to deduce its current pose within a CAD model quickly and accurately while maintaining efficient use of resources. A 3-dimensional CAD model of the building to be navigated is provided as input to the system, along with the required goal position. Initially, the UAV has no knowledge of its location within the building. The system, comprising a stereo camera and an Inertial Measurement Unit (IMU) as its sensors, generates a globally consistent map of its surroundings using a Simultaneous Localization and Mapping (SLAM) algorithm. In addition to the map, it also stores spatially correlated 3D features. These 3D features are used to generate correspondences between the SLAM map and the 3D CAD model, and the correspondences in turn yield a transformation between the two, effectively localizing the UAV in the 3D CAD model. Our method successfully localized the UAV in the test building in an average of 15 seconds across the scenarios tested, contingent upon the abundance of target features in the observed data. In the absence of a motion capture system, the results were verified by placing tags at strategically chosen known locations on the ground of the building and measuring the error between the projection of the current UAV location onto the ground and the tag.
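The step that aligns the SLAM map with the CAD model from 3D feature correspondences is, at its core, a rigid-transform estimation problem. The abstract does not name the solver used, so the standard SVD-based (Kabsch) solution below is an illustrative stand-in rather than the thesis's actual implementation.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ R @ src + t,
    estimated from corresponding 3D points (Kabsch/SVD method)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)     # cross-covariance of pairs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Recover a known 90-degree yaw plus offset from 4 matched points.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
t_true = np.array([2.0, 3.0, 1.0])
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
```

With noisy or partially wrong correspondences, such a solver is typically wrapped in an outlier-rejection loop (e.g. RANSAC), which matches the abstract's emphasis on having enough reliable target features.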
Reinforcement learning-based autonomous robot navigation and tracking
Autonomous navigation requires determining a collision-free path for a mobile
robot using only partial observations of the environment. This capability is
essential for a wide range of applications, such as search and rescue
operations, surveillance, environmental monitoring, and domestic service
robots. In many scenarios, an accurate global map is not available beforehand,
posing significant challenges for a robot planning its path. This type of
navigation is often referred to as Mapless Navigation, and such work is not
limited to Unmanned Ground Vehicles (UGVs) but extends to other vehicles, such
as Unmanned Aerial Vehicles (UAVs). This research aims to develop
Reinforcement Learning (RL)-based methods for autonomous navigation of mobile
robots, as well as effective tracking strategies for a UAV to follow a moving
target.
Mapless navigation usually assumes accurate localisation, which is
unrealistic. In the real world, localisation methods, such as simultaneous
localisation and mapping (SLAM), are needed. However, localisation
performance can deteriorate depending on the environment and observation
quality. Therefore, to avoid deteriorated localisation, this work introduces
an RL-based navigation algorithm that enables mobile robots to navigate
unknown environments while incorporating localisation performance into policy
training. Specifically, a localisation-related penalty is introduced in the
reward space, ensuring localisation safety is taken into consideration during
navigation. Different metrics are formulated to identify when localisation
performance starts to deteriorate, so that the robot can be penalised
accordingly. As such, the navigation policy not only optimises its paths
towards the goal in terms of travel distance and collision avoidance but also
avoids venturing into areas that pose challenges for localisation algorithms.
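The reward shaping described above can be sketched as one extra term alongside the usual progress and collision terms. The metric, threshold, and weights below are illustrative placeholders, not the thesis's actual values.

```python
def navigation_reward(goal_dist, prev_goal_dist, collided,
                      loc_uncertainty, loc_threshold=0.5,
                      loc_penalty=-1.0, collision_penalty=-10.0):
    """Shaped reward: progress toward the goal, a collision penalty, and
    an extra penalty whenever a localisation-quality metric degrades past
    a threshold (all names and weights here are hypothetical)."""
    r = prev_goal_dist - goal_dist          # progress term
    if collided:
        r += collision_penalty              # collision-avoidance term
    if loc_uncertainty > loc_threshold:
        r += loc_penalty                    # localisation-safety term
    return r
```

The effect is that two paths of equal length are no longer equally rewarded: the one passing through regions where the localisation metric degrades accumulates penalties, so the learned policy detours around them.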
The localisation-safe algorithm is further extended to UAV navigation, which
uses image-based observations. Instead of deploying an end-to-end control pipeline,
this work establishes a hierarchical control framework that leverages both the capabilities of neural networks for perception and the stability and safety guarantees of
conventional controllers. The high-level controller in this hierarchical framework is a
neural network policy with semantic image inputs, trained using RL algorithms with
localisation-related rewards. The efficacy of the trained policy is
demonstrated in real-world experiments for localisation-safe navigation, and,
notably, it is effective without retraining, thanks to the hierarchical
control scheme and semantic inputs.
Last, a tracking policy is introduced to enable a UAV to track a moving
target. This study designs a reward space enabling a vision-based UAV, which
utilises
depth images for perception, to follow a target within a safe and visible range. The
objective is to maintain the mobile target at the centre of the drone camera’s image
without being occluded by other objects and to avoid collisions with obstacles. It
is observed that training such a policy from scratch may lead to local minima. To
address this, a state-based teacher policy is trained to perform the tracking task,
with environmental perception relying on direct access to state information, including position coordinates of obstacles, instead of depth images. An RL algorithm is
then constructed to train the vision-based policy, incorporating behavioural guidance from the state-based teacher policy. This approach yields promising tracking
performance.
Planning, Estimation and Control for Mobile Robot Localization with Application to Long-Term Autonomy
Two kinds of challenges may arise in the problem of mobile robot localization: (i) a robot may have an a priori map of its environment, in which case the localization problem reduces to estimating the robot pose relative to a global frame, or (ii) no a priori map information is given, in which case the robot must estimate a model of its environment and localize within it. In the case of a known map, planning while simultaneously localizing is a crucial ability for operating under uncertainty. We first address this problem by designing a method to dynamically replan as the localization uncertainty or environment map is updated. Extensive simulations are conducted to compare the proposed method with the performance of FIRM (Feedback-based Information RoadMap). However, a shortcoming of this method is its reliance on a Gaussian assumption for the Probability Density Function (pdf) of the robot state. This assumption may be violated during autonomous operation when a robot visits parts of the environment which appear similar to others. Such situations lead to ambiguity in data association between what is seen and the robot’s map, resulting in a non-Gaussian pdf of the robot state. We address this challenge by developing a motion planning method to resolve situations where ambiguous data associations result in a multimodal hypothesis on the robot state. A receding-horizon approach is developed to plan actions that sequentially disambiguate a multimodal belief and achieve tight localization on the correct pose in finite time. In our method, disambiguation is achieved through active data association: target states are picked in the map which allow distinctive information to be observed for each belief mode, and local feedback controllers are created to visit the targets. Experiments are conducted for a kidnapped physical ground robot operating in an artificial maze-like environment.
The hardest challenge arises when no a priori information is present. In long-term tasks where a robot must drive for long durations before closing loops, our goal is to minimize the localization error growth rate such that (i) accurate data associations can be made for loop closure, or (ii) in cases where loop closure is not possible, the localization error stays within some desired bounds. We analyze this problem and show that accurate heading estimation is key to limiting localization error drift. We make three contributions in this domain. First, we present a method for accurate long-term localization using absolute orientation measurements, and analyze the underlying structure of the SLAM problem and how it is affected by unbiased heading measurements. We show that consistent estimates over a 100 km trajectory are possible and that the error growth rate can be controlled with active data acquisition. Then we study the more general problem where orientation measurements may not be present and develop a SLAM technique that separates orientation and position estimation. We show that our method’s accuracy degrades gracefully compared to the standard non-linear-optimization-based SLAM approach and avoids the catastrophic failures which may occur due to a bad initial guess in non-linear optimization. Finally, we take our understanding of orientation sensing into the physical world and demonstrate a 2D SLAM technique that leverages absolute orientation sensing based on naturally occurring structural cues. We demonstrate our method using both high-fidelity simulations and a real-world experiment in a 66,000-square-foot warehouse. Empirical studies show that maps generated by our approach never suffer catastrophic failure, whereas existing scan-matching-based SLAM methods fail ≈50% of the time.
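The claim that heading accuracy is key to limiting drift can be made concrete with a back-of-the-envelope calculation: a constant heading bias θ produces a cross-track position error of roughly d·sin θ after travelling distance d, i.e. error growing linearly with distance even when the odometry itself is perfect. The numbers below are purely illustrative.

```python
import math

def cross_track_error(distance, heading_bias_rad):
    """Cross-track position error after travelling `distance` with a
    constant heading bias: error = distance * sin(bias), so it grows
    linearly with distance travelled."""
    return distance * math.sin(heading_bias_rad)

# Even a 1-degree heading bias over 1 km of travel gives ~17 m of drift.
err = cross_track_error(1000.0, math.radians(1.0))
```

This linear growth is why unbiased absolute orientation measurements are so valuable over trajectories of 100 km: removing the heading bias removes the dominant, distance-proportional error term.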
Low computational SLAM for an autonomous indoor aerial inspection vehicle
The past decade has seen an increase in the capability of small-scale
Unmanned Aerial Vehicle (UAV) systems, made possible through technological
advancements in battery, computing, and sensor miniaturisation. This has
opened a new and rapidly growing branch of robotics research and has sparked
the imagination of industry, leading to new UAV-based services, from the
inspection of power lines to remote police surveillance.
Miniaturisation of UAVs has also made them small enough to be practically
flown indoors, for example for the inspection of elevated areas in hazardous
or damaged structures where the use of conventional ground-based robots is
unsuitable. Sellafield Ltd, a nuclear reprocessing facility in the U.K., has
many buildings that require frequent safety inspections. UAV inspections
eliminate the current risk to personnel of radiation exposure and other
hazards in tall structures where scaffolding or hoists are required.
This project focused on the development of a UAV for the novel application of
semi-autonomously navigating and inspecting these structures without the need
for personnel to enter the building. Development exposed a significant gap in
knowledge concerning indoor localisation, specifically Simultaneous
Localisation and Mapping (SLAM) for use on-board UAVs. To lower the on-board
processing requirements of SLAM, other UAV research groups have employed
techniques such as off-board processing, reduced dimensionality, or prior
knowledge of the structure, techniques unsuitable for this application given
the unknown nature of the structures and the risk of radio shadows.
In this thesis a novel localisation algorithm is proposed, which enables
real-time, three-dimensional SLAM running solely on-board a computationally
constrained UAV in heavily cluttered and unknown environments. The algorithm,
based on the Iterative Closest Point (ICP) method and utilising approximate
nearest-neighbour searches and point-cloud decimation to reduce the
processing requirements, has successfully been tested in environments similar
to that specified by Sellafield Ltd
- …
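The combination of ICP with point-cloud decimation can be sketched in a compact 2D form. Here a brute-force matcher stands in for the approximate nearest-neighbour search (e.g. a k-d tree with a relaxed search bound) that an on-board implementation would use; all names and parameters are illustrative rather than the thesis's actual implementation.

```python
import numpy as np

def nearest_indices(query, ref):
    """Brute-force nearest neighbours; a stand-in for the approximate
    nearest-neighbour search a computationally constrained UAV would use."""
    d2 = ((query[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def icp_2d(src, dst, iters=10, decimate=2):
    """Minimal 2D ICP with point-cloud decimation to cut matching cost."""
    pts = src[::decimate]                       # decimate the source cloud
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = pts @ R.T + t
        matched = dst[nearest_indices(moved, dst)]
        # Closed-form rigid alignment of the matched pairs (SVD/Kabsch).
        cm, cd = moved.mean(0), matched.mean(0)
        H = (moved - cm).T @ (matched - cd)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R_step = Vt.T @ np.diag([1.0, d]) @ U.T
        t_step = cd - R_step @ cm
        R, t = R_step @ R, R_step @ t + t_step  # compose the increment
    return R, t

# A dense grid shifted by a small known offset is recovered by ICP.
gx, gy = np.meshgrid(np.arange(10.0), np.arange(10.0))
dst = np.stack([gx.ravel(), gy.ravel()], axis=1)
src = dst - np.array([0.2, 0.1])
R, t = icp_2d(src, dst)
```

Decimation trades registration accuracy for per-iteration cost, since matching dominates the runtime; the same trade-off motivates approximate rather than exact neighbour searches on constrained hardware.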