218 research outputs found
Cooperative monocular-based SLAM for multi-UAV systems in GPS-denied environments
This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially notable when compared with other related visual SLAM configurations. In order to improve the observability properties, measurements of the relative distance between the UAVs are included in the system; these relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide good position and orientation estimates of the aerial vehicles flying in formation.
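The effect described above can be illustrated with a deliberately small linear toy model (an illustrative assumption, not the paper's actual multi-UAV dynamics): with a position measurement of only one vehicle, the other vehicle's state is unobservable, while adding an inter-vehicle relative measurement raises the rank of the observability matrix.

```python
import numpy as np

def observability_rank(A, C):
    """Rank of the linear observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.linalg.matrix_rank(np.vstack(blocks))

# Toy model: two vehicles on a line, positions x1 and x2 driven by known
# velocity inputs, so the unforced dynamics matrix is zero.
A = np.zeros((2, 2))

# Only vehicle 1's position is measured: vehicle 2 stays unobservable.
C_single = np.array([[1.0, 0.0]])

# Adding an inter-vehicle relative measurement (linearized: x1 - x2)
# makes the full state observable.
C_with_relative = np.array([[1.0, 0.0],
                            [1.0, -1.0]])

print(observability_rank(A, C_single))         # 1
print(observability_rank(A, C_with_relative))  # 2
```

The same rank test, applied to the nonlinear system via Lie derivatives, is the core of the observability analysis the abstract refers to.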
Analysis and synthesis of collaborative opportunistic navigation systems
Navigation is an invisible utility that is often taken for granted, with considerable societal and economic impacts. Not only is navigation essential to our modern life, but the more it advances, the more possibilities are created. Navigation is at the heart of three emerging fields: autonomous vehicles, location-based services, and intelligent transportation systems. Global navigation satellite systems (GNSS) are insufficient for reliable anytime, anywhere navigation, particularly indoors, in deep urban canyons, and in environments under malicious attacks (e.g., jamming and spoofing). The conventional approach to overcome the limitations of GNSS-based navigation is to couple GNSS receivers with dead reckoning sensors. A new paradigm, termed opportunistic navigation (OpNav), is emerging. OpNav is analogous to how living creatures naturally navigate: by learning their environment. OpNav aims to exploit the plenitude of ambient radio frequency signals of opportunity (SOPs) in the environment. OpNav radio receivers, which may be handheld or vehicle-mounted, continuously search for opportune signals from which to draw position and timing information, employing on-the-fly signal characterization as necessary. In collaborative opportunistic navigation (COpNav), multiple receivers share information to construct and continuously refine a global signal landscape. For the sake of motivation, consider the following problem. A number of receivers with no a priori knowledge about their own states are dropped in an environment comprising multiple unknown terrestrial SOPs. The receivers draw pseudorange observations from the SOPs. The receivers' objective is to build a high-fidelity signal landscape map of the environment within which they localize themselves in space and time. We then ask: (i) Under what conditions is the environment fully observable? (ii) In cases where the environment is not fully observable, what are the observable states? 
(iii) How would receiver-controlled maneuvers affect observability? (iv) What is the degree of observability of the various states in the environment? (v) What motion planning strategy should the receivers employ for optimal information gathering? (vi) How effective are receding horizon strategies over greedy strategies for receiver trajectory optimization, and what are their limitations? (vii) What level of collaboration between the receivers achieves a minimal price of anarchy? This dissertation addresses these fundamental questions and validates the theoretical conclusions numerically and experimentally.
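The pseudorange setting above can be sketched numerically. In the snippet below (all tower positions, the receiver state, and the solver choice are illustrative assumptions, not the dissertation's setup), a receiver jointly estimates its 2-D position and clock bias from noiseless pseudoranges to SOP towers at known positions, via Gauss-Newton least squares.

```python
import numpy as np

# Hypothetical terrestrial SOP towers at known 2-D positions (meters).
sops = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
p_true = np.array([30.0, 40.0])   # unknown receiver position
bias_true = 5.0                   # receiver clock bias, expressed in meters

# Pseudorange model: rho_i = ||p - p_i|| + bias (noiseless here).
rho = np.linalg.norm(sops - p_true, axis=1) + bias_true

def gauss_newton(rho, sops, x0, iters=10):
    """Least-squares fit of [x, y, bias] to pseudorange measurements."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(sops - x[:2], axis=1)
        residual = rho - (d + x[2])
        # Jacobian of predicted pseudoranges w.r.t. [x, y, bias].
        J = np.hstack([-(sops - x[:2]) / d[:, None],
                       np.ones((len(sops), 1))])
        x += np.linalg.lstsq(J, residual, rcond=None)[0]
    return x

est = gauss_newton(rho, sops, x0=[10.0, 10.0, 0.0])
print(est)  # converges to approximately [30, 40, 5]
```

The observability questions in the abstract ask, in effect, when this joint estimation problem (extended to unknown SOP states and multiple collaborating receivers) has a well-conditioned solution.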
Range-only Collaborative Localization for Ground Vehicles
High-accuracy absolute localization for a team of vehicles is essential when
accomplishing various kinds of tasks. As a promising approach, collaborative
localization fuses the individual motion measurements and the inter-vehicle
measurements to collaboratively estimate the states. In this paper, we focus on
the range-only collaborative localization, which specifies the inter-vehicle
measurements as inter-vehicle ranging measurements. We first investigate the observability properties of the system and show that, to achieve bounded localization errors, two vehicles are required to remain static, acting as external infrastructure. Guided by the observability analysis, we then propose a range-only collaborative localization system that categorizes the ground vehicles into two static vehicles and a set of dynamic vehicles. The vehicles are connected through a UWB network capable of both inter-vehicle ranging and communication. Simulation results validate the observability analysis and demonstrate that collaborative localization achieves higher accuracy when utilizing the inter-vehicle measurements. Extensive experiments are performed for teams of 3 and 5 vehicles. The real-world results illustrate that the proposed system enables accurate and real-time estimation of all vehicles' absolute poses.
Comment: Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2019)
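A single inter-vehicle ranging correction can be sketched as a joint EKF update over the stacked state of two vehicles (a minimal illustration with assumed numbers and a simplified position-only state, not the paper's full system, which additionally designates two static anchor vehicles):

```python
import numpy as np

# Stacked state [x1, y1, x2, y2] of two vehicles and its joint covariance.
x = np.array([0.0, 0.0, 10.0, 0.5])  # current joint estimate
P = np.eye(4) * 4.0                  # joint covariance (m^2)
R = 0.01                             # assumed UWB ranging noise variance (m^2)

def range_update(x, P, z, R):
    """EKF update with an inter-vehicle range measurement h = ||p1 - p2||."""
    d = x[:2] - x[2:]
    r = np.linalg.norm(d)
    H = np.hstack([d / r, -d / r]).reshape(1, 4)  # measurement Jacobian
    S = H @ P @ H.T + R                           # innovation covariance
    K = P @ H.T / S                               # Kalman gain
    x_new = x + (K * (z - r)).ravel()
    P_new = (np.eye(4) - K @ H) @ P
    return x_new, P_new

z = 10.0  # measured UWB range (m)
x_new, P_new = range_update(x, P, z, R)
print(np.trace(P), np.trace(P_new))  # total uncertainty drops after the update
```

Note that a single range constrains only the relative geometry; this is why the observability analysis concludes that static anchor vehicles are needed for bounded absolute errors.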
Cooperative localization for mobile agents: a recursive decentralized algorithm based on Kalman filter decoupling
We consider a cooperative localization technique for mobile agents with communication and computation capabilities. We start by providing an overview of different decentralization strategies in the literature, with special focus on how these algorithms maintain an account of the intrinsic correlations between the state estimates of team members. Then, we present a novel decentralized cooperative localization algorithm that is a decentralized implementation of a centralized Extended Kalman Filter for cooperative localization. In this algorithm, instead of propagating cross-covariance terms, each agent propagates new intermediate local variables that can be used in an update stage to recreate the required propagated cross-covariance terms. Whenever there is a relative measurement in the network, the algorithm declares the agent making this measurement the interim master. By acquiring information from the interim landmark (the agent from which the relative measurement is taken), the interim master can calculate and broadcast a set of intermediate variables which each robot can then use to update its estimate to match that of a centralized Extended Kalman Filter for cooperative localization. Once an update is done, no further communication is needed until the next relative measurement.
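The decoupling idea rests on a standard identity from this line of work (the numbers below are synthetic, for illustration only): between relative measurements, the cross-covariance evolves as P_12(k+1) = Phi_1(k) P_12(k) Phi_2(k)^T, so each agent can accumulate its own product of state-transition matrices locally, and the cross term is reconstructed only when an update occurs.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 2
P12 = rng.standard_normal((n, n))  # cross-covariance at the last update
Pi1 = np.eye(n)                    # agent 1's locally accumulated product
Pi2 = np.eye(n)                    # agent 2's locally accumulated product
P12_direct = P12.copy()

for _ in range(5):  # five propagation steps with no communication
    # Synthetic per-agent state-transition matrices.
    Phi1 = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    Phi2 = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    P12_direct = Phi1 @ P12_direct @ Phi2.T  # what a centralized EKF tracks
    Pi1 = Phi1 @ Pi1                         # agent 1 updates locally
    Pi2 = Phi2 @ Pi2                         # agent 2 updates locally

# Rebuilt only at update time, from each agent's local variables.
P12_reconstructed = Pi1 @ P12 @ Pi2.T
print(np.allclose(P12_direct, P12_reconstructed))  # True
```

This is what allows the interim master to broadcast a small set of intermediate variables instead of every pairwise cross-covariance.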
Visual-based SLAM configurations for cooperative multi-UAV systems with a lead agent: an observability-based approach
In this work, the problem of cooperative visual-based SLAM is addressed for the class of multi-UAV systems that integrate a lead agent. In these kinds of systems, a team of aerial robots flying in formation must follow a dynamic lead agent, which can be another aerial robot, a vehicle, or even a human. A fundamental problem that must be addressed for these kinds of systems has to do with the estimation of the states of the aerial robots as well as the state of the lead agent.
In this work, the use of a cooperative visual-based SLAM approach is studied in order to solve the above problem. In this case, three different system configurations are proposed and investigated by means of an extensive nonlinear observability analysis. In addition, a high-level control scheme is proposed that allows the formation of the UAVs to be controlled with respect to the lead agent. Several theoretical results are obtained, together with an extensive set of computer simulations which are presented in order to numerically validate the proposal and to show that it can perform well under different circumstances (e.g., GPS-challenging environments). That is, the proposed method is able to operate robustly under many conditions, providing good position estimates of the aerial vehicles and the lead agent.
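The formation-keeping objective can be sketched with a generic proportional leader-follower law (an illustrative stand-in under simple single-integrator kinematics, not the paper's high-level control scheme): each UAV steers toward the lead agent's position plus a fixed formation offset.

```python
import numpy as np

def formation_velocity(p_uav, p_lead, offset, k=1.0):
    """Proportional velocity command toward the UAV's slot in the formation."""
    return k * (p_lead + offset - p_uav)

p_lead = np.array([5.0, 5.0])     # lead agent position (assumed static here)
p_uav = np.array([0.0, 0.0])      # follower UAV position
offset = np.array([-1.0, 0.0])    # desired slot: 1 m behind the leader

dt = 0.1
for _ in range(100):  # simple Euler integration of single-integrator kinematics
    p_uav = p_uav + dt * formation_velocity(p_uav, p_lead, offset)

print(p_uav)  # converges toward [4, 5]
```

In the paper's setting the leader state is not known but estimated by the cooperative SLAM filter, so the quality of this control loop depends directly on the observability properties analyzed above.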
Cooperative Navigation for Low-bandwidth Mobile Acoustic Networks.
This thesis reports on the design and validation of estimation and planning algorithms for underwater vehicle cooperative localization. While attitude and depth are easily instrumented with bounded error, autonomous underwater vehicles (AUVs) have no internal sensor that directly observes XY position. The global positioning system (GPS) and other radio-based navigation techniques are not available because of the strong attenuation of electromagnetic signals in seawater. The navigation algorithms presented herein fuse local body-frame rate and attitude measurements with range observations between vehicles within a decentralized architecture.
The acoustic communication channel is both unreliable and low bandwidth, precluding many state-of-the-art terrestrial cooperative navigation algorithms. We exploit the underlying structure of a post-process centralized estimator in order to derive two real-time decentralized estimation frameworks. First, the origin state method enables a client vehicle to exactly reproduce the corresponding centralized estimate within a server-to-client vehicle network. Second, a graph-based navigation framework produces an approximate reconstruction of the centralized estimate onboard each vehicle. Finally, we present a method to plan a locally optimal server path to localize a client vehicle along a desired nominal trajectory. The planning algorithm introduces a probabilistic channel model into prior Gaussian belief space planning frameworks.
In summary, cooperative localization reduces XY position error growth within underwater vehicle networks. Moreover, these methods remove the reliance on static beacon networks, which do not scale to large vehicle networks and limit the range of operations. Each proposed localization algorithm was validated in full-scale AUV field trials. The planning framework was evaluated through numerical simulation.
PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113428/1/jmwalls_1.pd
A Survey on Aerial Swarm Robotics
The use of aerial swarms to solve real-world problems has been increasing steadily, accompanied by falling prices and improving performance of communication, sensing, and processing hardware. The commoditization of hardware has reduced unit costs, thereby lowering the barriers to entry to the field of aerial swarm robotics. A key enabling technology for swarms is the family of algorithms that allow the individual members of the swarm to communicate and allocate tasks amongst themselves, plan their trajectories, and coordinate their flight in such a way that the overall objectives of the swarm are achieved efficiently. These algorithms, often organized in a hierarchical fashion, endow the swarm with autonomy at every level, and the role of a human operator can be reduced, in principle, to interactions at a higher level without direct intervention. This technology depends on the clever and innovative application of theoretical tools from control and estimation. This paper reviews the state of the art of these theoretical tools, specifically focusing on how they have been developed for, and applied to, aerial swarms. Aerial swarms differ from swarms of ground-based vehicles in two respects: they operate in a three-dimensional space, and the dynamics of individual vehicles add an extra layer of complexity. We review dynamic modeling and conditions for stability and controllability that are essential in order to achieve cooperative flight and distributed sensing. The main sections of this paper focus on major results covering trajectory generation, task allocation, adversarial control, distributed sensing, monitoring, and mapping. Wherever possible, we indicate how the physics and subsystem technologies of aerial robots are brought to bear on these individual areas.
Homography-Based State Estimation for Autonomous Exploration in Unknown Environments
This thesis presents the development of vision-based state estimation algorithms to enable a quadcopter UAV to navigate and explore a previously unknown GPS-denied environment. These state estimation algorithms are based on tracked Speeded-Up Robust Features (SURF) points and the homography relationship that relates the camera motion to the locations of tracked planar feature points in the image plane. An extended Kalman filter implementation is developed to perform sensor fusion using measurements from an onboard inertial measurement unit (accelerometers and rate gyros) with vision-based measurements derived from the homography relationship. Therefore, the measurement update in the filter requires the processing of images from a monocular camera to detect and track planar feature points, followed by the computation of homography parameters. The state estimation algorithms are designed to be independent of GPS, since GPS can be unreliable or unavailable in many operational environments of interest, such as urban environments. The state estimation algorithms are implemented using simulated data from a quadcopter UAV and then tested using post-processed video and IMU data from flights of an autonomous quadcopter. The homography-based state estimation algorithm was effective, but accumulates drift errors over time due to the relative nature of the homography-based position measurement.
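The homography relationship the thesis builds on can be verified numerically (the rotation, translation, and plane parameters below are arbitrary assumed values, and the snippet works in calibrated, normalized image coordinates): for a plane with unit normal n at distance d from the first camera, the two views are related by H = R + t nᵀ / d, and points on the plane map as x₂ ∼ H x₁.

```python
import numpy as np

def rot_z(a):
    """Rotation about the optical axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R = rot_z(0.1)                   # assumed camera rotation between views
t = np.array([0.2, 0.0, 0.05])   # assumed camera translation (meters)
n = np.array([0.0, 0.0, 1.0])    # plane normal in the first camera frame
d = 2.0                          # distance from first camera to the plane

H = R + np.outer(t, n) / d       # planar homography between the two views

# A 3-D point lying on the plane (Z = d), expressed in the first camera frame.
X1 = np.array([0.3, -0.4, d])
x1 = X1 / X1[2]                  # normalized image coordinates, view 1

X2 = R @ X1 + t                  # the same point in the second camera frame
x2_true = X2 / X2[2]             # normalized image coordinates, view 2

x2_pred = H @ x1
x2_pred = x2_pred / x2_pred[2]   # homography prediction, up to scale

print(np.allclose(x2_pred, x2_true))  # True
```

Because each homography only relates consecutive views, position estimates obtained by chaining them drift over time, which matches the behavior reported in the abstract.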