Vision-Aided Inertial Navigation
This document discloses, among other things, a system and method for implementing an algorithm to determine pose, velocity, acceleration or other navigation information using feature tracking data. The algorithm's computational complexity is linear in the number of tracked features.
Real-Time Indoor Localization using Visual and Inertial Odometry
This project encompassed the design of a mobile, real-time localization device for use in an indoor environment. A system was designed and constructed using visual and inertial odometry methods to meet the project requirements. Stereoscopic image features were detected with a C++ Sobel filter implementation and matched. An inertial measurement unit (IMU) provided raw acceleration and rotation coordinates, which were transformed into a global frame of reference. A Kalman filter produced motion approximations from the input data and transmitted the Kalman position state coordinates via a radio transceiver to a remote base station. This station used a graphical user interface to map the incoming coordinates.
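As an illustration of the kind of sensor fusion described above, here is a minimal one-dimensional Kalman filter in which IMU acceleration drives the prediction step and a visual-odometry position fix drives the correction. The state layout, noise parameters and function names are assumptions made for the sketch, not the project's actual design.

```python
import numpy as np

def kalman_step(x, P, accel, z_pos, dt, q=0.05, r=0.1):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.

    x: state [position, velocity]; P: 2x2 covariance
    accel: IMU acceleration (control input, assumed already in the global frame)
    z_pos: position measurement from visual odometry
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])         # acceleration input mapping
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise (illustrative value)
    R = np.array([[r]])                     # measurement noise (illustrative value)

    # Predict with the IMU acceleration.
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

    # Correct with the visual position fix.
    y = z_pos - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a target moving at 1 m/s using noise-free position fixes.
x, P = np.zeros(2), np.eye(2)
for k in range(50):
    x, P = kalman_step(x, P, accel=0.0, z_pos=np.array([k * 0.1]), dt=0.1)
```

After 50 cycles the state estimate has locked onto the ramp, with the velocity component inferred purely from the sequence of position fixes.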
Computationally-efficient visual inertial odometry for autonomous vehicle
This thesis presents the design, implementation, and validation of a novel nonlinear-filtering-based Visual Inertial Odometry (VIO) framework for robotic navigation in GPS-denied environments. The system tracks the vehicle's ego-motion at each time instant while capturing the benefits of both the camera information and the Inertial Measurement Unit (IMU). VIO demands considerable computational resources and processing time, which makes hardware implementation quite challenging for micro- and nano-robotic systems. In many cases, the VIO process selects a small subset of tracked features to reduce the computational cost. VIO estimation also suffers from the inevitable accumulation of error. This limitation makes the estimate gradually diverge and even fail to track the vehicle trajectory over long-term operation. Deploying optimization over the entire trajectory helps to minimize the accumulated errors, but increases the computational cost significantly. A VIO hardware implementation can use a more powerful processor or specialized hardware computing platforms, such as Field Programmable Gate Arrays, Graphics Processing Units and Application-Specific Integrated Circuits, to accelerate execution. However, the computation still needs to perform identical computational steps with similar complexity, and processing data at a higher frequency increases energy consumption significantly. The development of advanced hardware systems is also expensive and time-consuming. Consequently, developing an efficient algorithm is beneficial with or without hardware acceleration. The research described in this thesis proposes multiple solutions to accelerate the visual inertial odometry computation while maintaining estimation accuracy comparable to state-of-the-art algorithms over long-term operation.
This research has resulted in three significant contributions. First, this research involved the design and validation of a novel nonlinear-filtering sensor-fusion algorithm using trifocal tensor geometry and a cubature Kalman filter; the combination handles the system nonlinearity effectively while significantly reducing the computational cost and system complexity. Second, this research develops two solutions to address the error-accumulation issue. For standalone self-localization projects, the first solution applies a local optimization procedure to the measurement update, performing multiple corrections on a single measurement to optimize the latest filter state and covariance. For larger navigation projects, the second solution integrates VIO with additional pseudo-ranging measurements between the vehicle and multiple beacons in order to bound the accumulated errors. Third, this research develops a novel parallel-processing VIO algorithm that speeds up execution on a multi-core CPU by distributing the filtering computation across cores, so that each feature measurement update is processed and optimized independently.
The performance of the proposed visual inertial odometry framework is evaluated on publicly available self-localization datasets and compared with other open-source algorithms. The results illustrate that the proposed VIO framework improves computational efficiency without requiring specialized hardware computing platforms or advanced software libraries.
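As a rough sketch of the cubature Kalman filter machinery used in the first contribution, the code below generates the standard 2n cubature points from a Cholesky square root of the covariance and propagates mean and covariance through a transition function (a linear one here, so the result can be checked against the exact Kalman prediction). The function names and toy model are illustrative assumptions; process noise and the measurement update are omitted.

```python
import numpy as np

def cubature_points(x, P):
    """Generate the 2n cubature points x +/- sqrt(n) * S e_i,
    where S is a Cholesky square root of the covariance P."""
    n = x.size
    S = np.linalg.cholesky(P)
    offsets = np.sqrt(n) * S                 # columns are scaled sqrt-cov directions
    return np.concatenate([x + offsets.T, x - offsets.T], axis=0)  # shape (2n, n)

def ckf_predict(x, P, f):
    """Propagate mean and covariance through a nonlinear f via cubature points.
    (A full CKF would add process noise Q to the predicted covariance.)"""
    pts = cubature_points(x, P)
    prop = np.array([f(p) for p in pts])
    x_pred = prop.mean(axis=0)
    diff = prop - x_pred
    P_pred = diff.T @ diff / pts.shape[0]    # equally weighted sample covariance
    return x_pred, P_pred

# Sanity check: a linear f reproduces the exact Kalman prediction.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
x0, P0 = np.array([1.0, 2.0]), np.eye(2)
x1, P1 = ckf_predict(x0, P0, lambda p: A @ p)
```

Because the cubature rule matches mean and covariance exactly, the linear test case recovers A x0 and A P0 A^T to machine precision; the payoff of the rule is that the same point set gives a third-degree-accurate approximation for nonlinear f without computing Jacobians.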
Adaptive Localization and Mapping for Planetary Rovers
Future rovers will be equipped with substantial onboard autonomy as space agencies and industry proceed with mission studies and technology development in preparation for the next planetary exploration missions. Simultaneous Localization and Mapping (SLAM) is a fundamental part of autonomous capabilities and has close connections to robot perception, planning and control. SLAM positively affects rover operations and mission success. The SLAM community has made great progress in the last decade by enabling real-world solutions in terrestrial applications and is nowadays addressing important challenges in robust performance, scalability, high-level understanding, resource awareness and domain adaptation. In this thesis, an adaptive SLAM system is proposed in order to improve rover navigation performance while adapting to the navigation demands. This research presents a novel localization and mapping solution following a bottom-up approach. It starts with an Attitude and Heading Reference System (AHRS), continues with a 3D odometry dead-reckoning solution and builds up to a full graph optimization scheme which uses visual odometry and takes into account rover traction performance, bringing scalability to modern SLAM solutions. A design procedure is presented in order to incorporate inertial sensors into the AHRS. The procedure follows three steps: error characterization, model derivation and filter design. A complete kinematics model of the rover locomotion subsystem is developed in order to improve the wheel odometry solution. Consequently, the parametric model predicts delta poses by solving a system of equations with weighted least squares. In addition, an odometry error model is learned using Gaussian processes (GPs) in order to predict non-systematic errors induced by poor traction of the rover with the terrain. The odometry error model complements the parametric solution by adding an estimation of the error.
The information gained serves to adapt the localization and mapping solution to the current navigation demands (domain adaptation). The adaptivity strategy is designed to adjust the visual odometry computational load (active perception) and to influence the optimization back-end by including highly informative keyframes in the graph (adaptive information gain). Following this strategy, the solution is adapted to the navigation demands, providing an adaptive SLAM system driven by the navigation performance and the conditions of the interaction with the terrain. The proposed methodology is experimentally verified on a representative planetary rover under realistic field-test scenarios. This thesis introduces a modern SLAM system which adapts the estimated pose and map to the predicted error. The system maintains accuracy with fewer nodes, taking the best of both wheel and visual methods in a consistent graph-based smoothing approach.
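The parametric locomotion model above predicts delta poses by solving a system of equations with weighted least squares. A minimal sketch of such a solve, using an invented four-wheel velocity model in which one poorly tracting wheel is down-weighted (the matrix, offsets and weights are illustrative assumptions, not the thesis's model):

```python
import numpy as np

def wls_solve(A, b, w):
    """Weighted least squares: minimize sum_i w_i * (A x - b)_i^2
    via the normal equations (A^T W A) x = A^T W b."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Toy model: four wheel speeds as a function of body velocity v and yaw rate w,
# with wheels offset +/- 0.3 m from the centerline (invented numbers).
A = np.array([[1.0,  0.3],
              [1.0,  0.3],
              [1.0, -0.3],
              [1.0, -0.3]])
x_true = np.array([1.0, 0.2])          # true [v, yaw rate]
b = A @ x_true                         # noise-free wheel measurements
w = np.array([1.0, 1.0, 1.0, 0.2])     # down-weight a poorly tracting wheel
x_hat = wls_solve(A, b, w)
```

With consistent measurements the weights do not change the solution; their value is in real, noisy conditions, where a learned error model (as with the GPs above) can supply per-wheel weights that discount slipping wheels.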
Enabling technologies for precise aerial manufacturing with unmanned aerial vehicles
The construction industry is currently experiencing a revolution with automation techniques such as additive manufacturing and robot-enabled construction. Additive Manufacturing (AM) is a key technology that can offer productivity improvements in the construction industry by means of off-site prefabrication and on-site construction with automated systems. The key benefit is that building elements can be fabricated with less material and higher design freedom compared to traditional manual methods.
Off-site prefabrication with AM has been investigated for some time already, but it has limitations in terms of the logistics of component transportation and its lack of design flexibility on-site. On-site construction with automated systems, such as static gantry systems and mobile ground robots performing AM tasks, can offer additional benefits over off-site prefabrication, but it needs further research before it becomes practical and economical. Ground-based automated construction systems also have the limitation that they cannot extend the construction envelope beyond their physical size. Using aerial robots to liberate the process from the constrained construction envelope has been suggested as a solution, albeit with technological challenges including precision of operation, uncertainty in environmental interaction and energy efficiency.
This thesis investigates methods of precise manufacturing with aerial robots. In particular, this work focuses on stabilisation mechanisms and origami-based structural elements that allow aerial robots to operate in challenging environments. An integrated aerial self-aligning delta manipulator has been utilised to increase the positioning accuracy of the aerial robots, and a Material Extrusion (ME) process has been developed for Aerial Additive Manufacturing (AAM). A 28-layer tower has been additively manufactured by aerial robots to demonstrate the feasibility of AAM. Rotorigami and a bioinspired landing mechanism demonstrate their abilities to overcome uncertainty in environmental interaction, with impact-protection capabilities and improved robustness for UAVs. Design principles using tensile anchoring methods have been explored, enabling low-power operation and exploring the possibility of low-power aerial stabilisation. The results demonstrate that precise aerial manufacturing needs to consider not just the robotic aspects, such as flight control algorithms and mechatronics, but also material behaviour and environmental interaction as factors for its success.
SRIBO: An Efficient and Resilient Single-Range and Inertia Based Odometry for Flying Robots
Positioning with one inertial measurement unit and one ranging sensor is commonly thought to be feasible only when trajectories follow certain patterns that ensure observability. For this reason, pursuing observable patterns requires either exciting the trajectory or searching for key nodes over a long interval, which is commonly highly nonlinear and may also lack resilience. Therefore, such a positioning approach is still not widely accepted in real-world applications. To address this issue, this work first investigates the dissipative nature of flying robots, considering aerial drag effects, and reformulates the corresponding positioning problem, which guarantees observability almost surely. On this basis, a dimension-reduced wriggling estimator is proposed. This estimator slides the estimation horizon in a stepping manner, and output matrices can be approximately evaluated based on the historical estimation sequence. The computational complexity is then further reduced via a dimension-reduction approach using polynomial fittings. In this way, the states of robots can be estimated via linear programming over a sufficiently long interval, and the degree of observability is thereby further enhanced, because adequate measurement redundancy is available for each estimation. Subsequently, the estimator's convergence and numerical stability are proven theoretically. Finally, both indoor and outdoor experiments verify that the proposed estimator can achieve decimeter-level precision at hundreds of hertz, and that it is resilient to sensor failures. This study can hopefully provide a new practical approach for self-localization, as well as relative positioning of cooperative agents, with low-cost and lightweight sensors.
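The dimension-reduction idea of representing a long estimation horizon with polynomial fittings can be illustrated with a toy example: a few polynomial coefficients stand in for hundreds of raw position samples, shrinking the number of unknowns the downstream solver must handle. The trajectory, degree and noise level below are invented for the illustration.

```python
import numpy as np

# A noisy position history sampled over a 2-second horizon.
t = np.linspace(0.0, 2.0, 200)
rng = np.random.default_rng(0)
pos = 0.5 * t**2 + 0.1 * t + rng.normal(0.0, 0.01, t.size)

# Represent the whole horizon by a few polynomial coefficients:
# 200 samples are compressed into 3 unknowns.
coeffs = np.polyfit(t, pos, deg=2)     # highest-degree coefficient first
recon = np.polyval(coeffs, t)
rms = np.sqrt(np.mean((recon - pos) ** 2))
```

The fit recovers the quadratic's leading coefficient closely and reconstructs the horizon to roughly the noise floor, so an optimizer operating on the coefficients loses little information relative to one operating on the raw samples.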
Vision-Aided Navigation for GPS-Denied Environments Using Landmark Feature Identification
In recent years, unmanned autonomous vehicles have been used in diverse applications because of their multifaceted capabilities. In most cases, the navigation systems for these vehicles are dependent on Global Positioning System (GPS) technology. Many applications of interest, however, entail operations in environments in which GPS is intermittent or completely denied. These applications include operations in complex urban or indoor environments as well as missions in adversarial environments where GPS might be denied using jamming technology.
This thesis investigates the development of vision-aided navigation algorithms that utilize processed images from a monocular camera as an alternative to GPS. The vision-aided navigation approach explored in this thesis entails defining a set of inertial landmarks, the locations of which are known within the environment, and employing image processing algorithms to detect these landmarks in image frames collected from an onboard monocular camera. These vision-based landmark measurements effectively serve as surrogate GPS measurements that can be incorporated into a navigation filter. Several image processing algorithms were considered for landmark detection, and this thesis focuses in particular on two approaches: the continuous adaptive mean shift (CAMSHIFT) algorithm and the adaptable compressive (ADCOM) tracking algorithm. These algorithms are discussed in detail and applied for the detection and tracking of landmarks in monocular camera images. Navigation filters are then designed that employ sensor fusion of accelerometer and rate gyro data from an inertial measurement unit (IMU) with vision-based measurements of the centroids of one or more landmarks in the scene. These filters are tested in simulated navigation scenarios subject to varying levels of sensor and measurement noise and varying numbers of landmarks. Finally, conclusions and recommendations are provided regarding the implementation of this vision-aided navigation approach for autonomous vehicle navigation systems.
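The surrogate-GPS idea above hinges on predicting where a known landmark should appear in the image, so that the measured centroid can be compared against the prediction inside the filter. A minimal pinhole-projection sketch of that measurement model follows; the intrinsics and poses are invented, and this is not the thesis's exact formulation.

```python
import numpy as np

def project_landmark(p_world, cam_pos, R_wc, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Predict the pixel location of a known landmark with a pinhole camera model.

    p_world: landmark position in the world frame (a known map point)
    cam_pos, R_wc: camera position and world-to-camera rotation
    fx, fy, cx, cy: illustrative intrinsics (focal lengths, principal point)
    """
    p_cam = R_wc @ (p_world - cam_pos)    # landmark expressed in the camera frame
    u = fx * p_cam[0] / p_cam[2] + cx     # perspective division onto the image plane
    v = fy * p_cam[1] / p_cam[2] + cy
    return np.array([u, v])

# A landmark 5 m straight ahead of an unrotated camera at the origin
# projects to the principal point.
z_pred = project_landmark(np.array([0.0, 0.0, 5.0]), np.zeros(3), np.eye(3))
```

In a filter, the innovation is the difference between the detected centroid (e.g. from CAMSHIFT) and `z_pred`; with enough landmarks in view, these innovations constrain position much as GPS pseudoranges would.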
Neuromorphic Visual Odometry with Resonator Networks
Autonomous agents require self-localization to navigate in unknown environments. They can use Visual Odometry (VO) to estimate self-motion and localize themselves using visual sensors. This motion-estimation strategy is not compromised by drift, as inertial sensors are, or by slippage, as wheel encoders are. However, VO with conventional cameras is computationally demanding, limiting its application in systems with strict low-latency, low-memory, and low-energy requirements. Using event-based cameras and neuromorphic computing hardware offers a promising low-power solution to the VO problem. However, conventional VO algorithms are not readily convertible to neuromorphic hardware. In this work, we present a VO algorithm built entirely of neuronal building blocks suitable for neuromorphic implementation. The building blocks are groups of neurons representing vectors in the computational framework of Vector Symbolic Architecture (VSA), which was proposed as an abstraction layer to program neuromorphic hardware. The VO network we propose generates and stores a working memory of the presented visual environment. It updates this working memory while at the same time estimating the changing location and orientation of the camera. We demonstrate how VSA can be leveraged as a computing paradigm for neuromorphic robotics. Moreover, our results represent an important step towards using neuromorphic computing hardware for fast and power-efficient VO and the related task of simultaneous localization and mapping (SLAM). We validate this approach experimentally in a simple robotic task and with an event-based dataset, demonstrating state-of-the-art performance in these settings.
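As background on the VSA primitives mentioned above, the sketch below shows binding by circular convolution and approximate unbinding by circular correlation (the Holographic Reduced Representations variant of VSA), implemented with FFTs. This is a generic illustration of the framework, not the paper's resonator network.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1024  # high dimensionality makes random vectors quasi-orthogonal

def rand_vec():
    """Random unit-norm vector, a typical VSA atomic symbol."""
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Bind two VSA vectors by circular convolution (computed via FFT)."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=d)

def unbind(c, a):
    """Approximately recover b from c = bind(a, b) by circular correlation."""
    return np.fft.irfft(np.conj(np.fft.rfft(a)) * np.fft.rfft(c), n=d)

key, value = rand_vec(), rand_vec()
trace = bind(key, value)              # the pair stored as one vector
recovered = unbind(trace, key)        # noisy copy of `value`
sim = recovered @ value / (np.linalg.norm(recovered) * np.linalg.norm(value))
```

Unbinding yields a noisy but clearly recognizable copy of the stored vector (cosine similarity well above chance), which is the property that lets a VSA working memory superpose many bound key-value pairs and query them later.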