Kinematics Based Visual Localization for Skid-Steering Robots: Algorithm and Theory
To build commercial robots, skid-steering mechanical design has become
increasingly popular due to its manufacturing simplicity and unique mechanism.
However, this design also poses significant challenges for software and
algorithm design, especially for pose estimation (i.e., determining the
robot's rotation and
especially for pose estimation (i.e., determining the robot's rotation and
position), which is the prerequisite of autonomous navigation. While the
general localization algorithms have been extensively studied in research
communities, there are still fundamental problems that need to be resolved for
localizing skid-steering robots that change their orientation with a skid. To
tackle this problem, we propose a probabilistic sliding-window estimator
dedicated to skid-steering robots, using measurements from a monocular camera,
the wheel encoders, and optionally an inertial measurement unit (IMU).
Specifically, we explicitly model the kinematics of skid-steering robots by
both track instantaneous centers of rotation (ICRs) and correction factors,
which are capable of compensating for the complexity of track-to-terrain
interaction, the imperfectness of mechanical design, terrain conditions and
smoothness, and so on. To prevent performance reduction in robots' lifelong
missions, the time- and location-varying kinematic parameters are estimated
online along with pose estimation states in a tightly-coupled manner. More
importantly, we conduct in-depth observability analysis for different sensors
and design configurations in this paper, which provides us with theoretical
tools in making the correct choice when building real commercial robots. In our
experiments, we validate the proposed method by both simulation tests and
real-world experiments, which demonstrate that our method outperforms competing
methods by wide margins.
Comment: 18 pages in total
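The ICR-based kinematic model at the heart of this abstract can be sketched as follows. This is a minimal illustration of a common ICR parameterisation for skid-steering platforms (left/right track ICR offsets y_l, y_r and a longitudinal offset x_v); the function name and sign convention are assumptions, and the paper's estimator additionally learns correction factors online, which are omitted here.

```python
def skid_steer_velocity(v_l, v_r, y_l, y_r, x_v):
    """Map left/right track speeds to planar body-frame velocity
    using instantaneous-centers-of-rotation (ICR) parameters.
    y_l, y_r: lateral ICR offsets of the two tracks; x_v:
    longitudinal ICR offset of the vehicle. Sign convention assumed."""
    denom = y_l - y_r
    omega = (v_r - v_l) / denom          # yaw rate
    v_x = (v_r * y_l - v_l * y_r) / denom  # forward body velocity
    v_y = x_v * omega                    # lateral slip velocity
    return v_x, v_y, omega
```

For an ideal symmetric platform (y_l = -y_r, x_v = 0) this reduces to standard differential-drive kinematics; the ICR offsets capture the track-to-terrain slip that makes skid-steering odometry hard.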
Monocular Visual Odometry for Fixed-Wing Small Unmanned Aircraft Systems
Small unmanned aircraft systems (SUAS) have exploded in popularity in recent years and seen increasing use in both commercial and military sectors. A key interest area for the military is to develop autonomous capabilities for these systems, of which navigation is a fundamental problem. Current navigation solutions rely heavily on the Global Positioning System (GPS). This dependency presents a significant limitation for military applications, since many operations are conducted in environments where GPS signals are degraded or actively denied. Therefore, alternative navigation solutions without GPS must be developed, and visual methods are among the most promising approaches. A current limitation of visual navigation is that much of the research has focused on developing and applying these algorithms on ground-based vehicles, small hand-held devices, or multi-rotor SUAS. However, the Air Force has a need for fixed-wing SUAS to conduct extended operations. This research evaluates current state-of-the-art, open-source monocular visual odometry (VO) algorithms applied on fixed-wing SUAS flying at high altitudes under fast translation and rotation speeds. The algorithms tested are Semi-Direct VO (SVO), Direct Sparse Odometry (DSO), and ORB-SLAM2 (with loop closures disabled). Each algorithm is evaluated on a fixed-wing SUAS in simulation and real-world flight tests over Camp Atterbury, Indiana. Through these tests, ORB-SLAM2 is found to be the most robust and flexible algorithm under a variety of test conditions. However, all algorithms experience great difficulty maintaining localization in the collected real-world datasets, showing the limitations of using visual methods as the sole solution. Further study and development are required to fuse VO products with additional measurements to form a complete autonomous navigation solution.
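Evaluations like this one typically score each VO algorithm against ground truth with absolute trajectory error (ATE). A minimal sketch, assuming (N, 3) position arrays; the full metric also aligns rotation and scale via the Umeyama method, which is omitted here for brevity.

```python
import numpy as np

def absolute_trajectory_error(est, gt):
    """RMSE of position error after aligning the estimated and
    ground-truth trajectories by their centroids. est, gt: (N, 3)
    arrays of corresponding positions. Simplified: the standard ATE
    also solves for the best rotation (and scale, for monocular VO)."""
    est = est - est.mean(axis=0)
    gt = gt - gt.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum((est - gt) ** 2, axis=1))))
```

Because monocular VO recovers trajectories only up to scale, the scale-aligned variant is what makes results from SVO, DSO, and ORB-SLAM2 comparable.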
Proprioceptive Invariant Robot State Estimation
This paper reports on developing a real-time invariant proprioceptive robot
state estimation framework called DRIFT. A didactic introduction to invariant
Kalman filtering is provided to make this cutting-edge symmetry-preserving
approach accessible to a broader range of robotics applications. Furthermore,
this work dives into the development of a proprioceptive state estimation
framework for dead reckoning that only consumes data from an onboard inertial
measurement unit and the robot's kinematics, with two optional modules, a
contact estimator and a gyro filter for low-cost robots. This enables a
variety of robotic platforms to track their state over long trajectories in
the absence of perceptual data. Extensive real-world experiments using a
legged robot, an indoor wheeled robot, a field robot, and a full-size vehicle,
as well as simulation results with a marine robot, are provided to understand
the limits of DRIFT.
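The proprioceptive dead reckoning described above can be illustrated with a deliberately simplified planar propagation step: yaw rate from the gyro, forward speed from leg or wheel kinematics. The function name is hypothetical, and DRIFT itself performs invariant Kalman filtering on matrix Lie groups rather than this Euler integration.

```python
import math

def dead_reckon(pose, v_body, omega, dt):
    """One planar dead-reckoning step. pose = (x, y, theta);
    v_body: forward speed from kinematics; omega: yaw rate from a
    gyro. A toy Euler sketch of proprioceptive-only propagation."""
    x, y, th = pose
    x += v_body * math.cos(th) * dt
    y += v_body * math.sin(th) * dt
    th += omega * dt
    return (x, y, th)
```

Integrating only such proprioceptive increments drifts without bound, which is why the paper's symmetry-preserving filter design and the optional contact/gyro modules matter for long trajectories.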
BotanicGarden: A High-Quality Dataset for Robot Navigation in Unstructured Natural Environments
The rapid developments of mobile robotics and autonomous navigation over the
years are largely empowered by public datasets for testing and upgrading, such
as sensor odometry and SLAM tasks. Impressive demos and benchmark scores have
arisen, which may suggest the maturity of existing navigation techniques.
However, these results are primarily based on testing in moderately
structured scenarios. When transitioning to challenging unstructured
environments, especially GNSS-denied, texture-monotonous, and densely
vegetated natural fields, their performance can hardly be sustained at a high
level and requires
further validation and improvement. To bridge this gap, we build a novel robot
navigation dataset in a luxuriant botanic garden of more than 48,000 m².
Comprehensive sensors are used, including Gray and RGB stereo cameras, spinning
and MEMS 3D LiDARs, and low-cost and industrial-grade IMUs, all of which are
well calibrated and hardware-synchronized. An all-terrain wheeled robot is
employed for data collection, traversing through thick woods, riversides,
narrow trails, bridges, and grasslands, which are scarce in previous resources.
This yields 33 short and long sequences, forming 17.1km trajectories in total.
Notably, both highly-accurate ego-motions and 3D map ground truth are
provided, along with fine-annotated vision semantics. We firmly believe that
our dataset can advance robot navigation and sensor fusion research to a higher
level.
Comment: This article has been accepted for publication in IEEE Robotics and
Automation Letters
Cost-effective robot for steep slope crops monitoring
This project aims to develop a low-cost, simple, and robust robot able to autonomously monitor crops using simple sensors. It will be required to develop robotic sub-systems and integrate them with pre-selected mechanical components, electrical interfaces, and robot systems (localization, navigation, and perception) using ROS, for wine-making regions and maize fields.
Robotic navigation and inspection of bridge bearings
This thesis focuses on the development of a robotic platform for bridge bearing inspection. The existing literature on this topic highlights an aspiration for increased automation of bridge inspection, due to an increasing amount of ageing infrastructure and costly inspection.
Furthermore, bridge bearings are highlighted as being one of the most costly components of the bridge to maintain.
However, although autonomous robotic inspection is often stated as an aspiration, the existing literature for robotic bridge inspection often neglects to include the requirement of autonomous navigation. To achieve autonomous inspection, some methods for mapping and
localising in the bridge structure are required. This thesis compares existing methods for simultaneous localisation and mapping (SLAM) with localisation-only methods. In addition, a method for using pre-existing data to create maps for localisation is proposed.
A robotic platform was developed and these methods for localisation and mapping were compared, first in a laboratory environment and then in a real bridge environment. The errors in the bridge environment were greater than in the laboratory environment, but remained within a defined error bound. A combined approach is suggested as an appropriate method for combining the lower errors of a SLAM approach with the advantages of a localisation approach for defining existing goals. Longer-term testing in a real bridge environment is still required.
The use of existing inspection data is then extended to the creation of a simulation environment, with the goal of creating a methodology for testing different configurations of
bridges or robots in a more realistic environment than laboratory testing, or other existing simulation environments.
Finally, the inspection of the structure surrounding the bridge bearing is considered, with a particular focus on the detection and segmentation of cracks in concrete. A deep learning approach is used to segment cracks from an existing dataset and compared to
an existing machine learning approach, with the deep-learning approach achieving a higher performance using a pixel-based evaluation. Other evaluation methods were also compared
that take the structure of the crack, and other related datasets, into account.
The generalisation of the approach for crack segmentation is evaluated by
comparing the results of models trained on different datasets. Finally,
recommendations for improving the datasets to allow better comparisons in
future work are given.
Robust convex optimisation techniques for autonomous vehicle vision-based navigation
This thesis investigates new convex optimisation techniques for motion and pose estimation. Numerous computer vision problems can be formulated as optimisation problems. These optimisation problems are generally solved via linear techniques using the singular value decomposition or iterative methods under an L2 norm minimisation. Linear techniques have the advantage of offering a closed-form solution that is simple to implement. The quantity being minimised is, however, not geometrically or statistically meaningful. Conversely, L2 algorithms rely on iterative estimation, where a cost function is minimised using algorithms such as Levenberg-Marquardt, Gauss-Newton, gradient descent or conjugate gradient. The cost functions involved are geometrically interpretable and can statistically be optimal under an assumption of Gaussian noise. However, in addition to their sensitivity to initial conditions, these algorithms are often slow and bear a high probability of getting trapped in a local minimum or producing infeasible solutions, even for small noise levels.
In light of the above, in this thesis we focus on developing new techniques for finding solutions via a convex optimisation framework that are globally optimal. Presently convex optimisation techniques in motion estimation have revealed enormous advantages. Indeed, convex optimisation ensures getting a global minimum, and the cost function is geometrically meaningful.
Moreover, robust optimisation is a recent approach for optimisation under uncertain data. In recent years the need to cope with uncertain data has become especially acute, particularly where real-world applications are concerned. In such circumstances, robust optimisation aims to recover an optimal solution whose feasibility is guaranteed for any realisation of the uncertain data. Although many researchers avoid uncertainty due to the added complexity of constructing a robust optimisation model and a lack of knowledge as to the nature of these uncertainties, and especially their propagation, in this thesis robust convex optimisation, with the uncertainties estimated at every step, is investigated for the motion estimation problem.
First, a solution using convex optimisation coupled to the recursive least squares (RLS) algorithm and the robust H∞ filter is developed for motion estimation. In another solution, uncertainties and their propagation are incorporated in a robust L∞ convex optimisation framework for monocular visual motion estimation. In this solution, robust least squares is combined with a second-order cone program (SOCP). A technique to improve the accuracy and robustness of the fundamental matrix is also investigated in this thesis. This technique uses the covariance intersection approach to fuse feature location uncertainties, which leads to more consistent motion estimates.
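The covariance intersection fusion mentioned above has a compact closed form; a minimal sketch, assuming two Gaussian estimates with unknown cross-correlation and a caller-supplied weight w (in practice w is chosen to minimise, e.g., the trace of the fused covariance).

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w):
    """Fuse estimates (x1, P1) and (x2, P2) whose cross-correlation
    is unknown, via covariance intersection:
        P^-1 = w * P1^-1 + (1 - w) * P2^-1
        x    = P (w * P1^-1 x1 + (1 - w) * P2^-1 x2),  0 <= w <= 1.
    The result is guaranteed consistent for any true correlation."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * P1i + (1 - w) * P2i)
    x = P @ (w * P1i @ x1 + (1 - w) * P2i @ x2)
    return x, P
```

Unlike a Kalman update, this never claims more certainty than either input justifies, which is what makes it suitable for fusing feature-location uncertainties with unknown correlations.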
Loop-closure detection is crucial in improving the robustness of navigation algorithms. In practice, after long navigation in an unknown environment, detecting that a vehicle is in a location it has previously visited gives the opportunity to increase the accuracy and consistency of the estimate. In this context, we have developed an efficient appearance-based method for visual loop-closure detection based on the combination of a Gaussian mixture model with the KD-tree data structure.
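The appearance-based check described above can be sketched as a nearest-neighbour search over stored image descriptors. This toy version uses brute force and hypothetical names; the thesis pairs a Gaussian mixture appearance model with a KD-tree to make the search efficient.

```python
import numpy as np

def detect_loop_closure(descriptors, query, threshold):
    """Return (index, distance) of the stored image descriptor
    closest to the query if it is within threshold, else None.
    Brute-force illustration of appearance-based loop closure."""
    if len(descriptors) == 0:
        return None
    d = np.linalg.norm(np.asarray(descriptors) - query, axis=1)
    i = int(np.argmin(d))
    return (i, float(d[i])) if d[i] < threshold else None
```

A detected match adds a constraint between the current pose and the revisited pose, which the subsequent pose-graph optimisation uses to correct accumulated drift.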
Deploying this technique for loop-closure detection, a robust L∞ convex pose-graph optimisation solution for unmanned aerial vehicle (UAV) monocular motion estimation is introduced as well. In the literature, most proposed solutions formulate the pose-graph optimisation as a least-squares problem by minimising a cost function using iterative methods. In this work, robust convex optimisation under the L∞ norm is adopted, which efficiently corrects the UAV's pose after loop-closure detection.
To round out the work in this thesis, a system for cooperative monocular visual motion estimation with multiple aerial vehicles is proposed. The cooperative motion estimation employs state-of-the-art approaches for optimisation, individual motion estimation and registration. Three-view geometry algorithms in a convex optimisation framework are deployed on board the monocular vision system for each vehicle. In addition, vehicle-to-vehicle relative pose estimation is performed with a novel robust registration solution in a global optimisation framework. In parallel, and as a complementary solution for the relative pose, a robust non-linear H∞ solution is designed as well to fuse measurements from the UAVs' on-board inertial sensors with the visual estimates.
The suggested contributions have been exhaustively evaluated over a number of real-image data experiments in the laboratory using monocular vision systems and range imaging devices. In this thesis, we propose several solutions towards the goal of robust visual motion estimation using convex optimisation. We show that the convex optimisation framework may be extended to include uncertainty information, to achieve robust and optimal solutions. We observed that convex optimisation is a practical and very appealing alternative to linear techniques and iterative methods.
Autonomous Robots in Dynamic Indoor Environments: Localization and Person-Following
Autonomous social robots have many tasks that they need to address, such as localization, mapping, navigation, person following, place recognition, etc. In this thesis we focus on two key components required for the navigation of autonomous robots, namely person-following behaviour and localization in dynamic human environments. We propose three novel approaches to address these components: two for person following and one for indoor localization. A convolutional neural network based approach and an AdaBoost-based approach are developed for person following. We demonstrate the results by showing the tracking accuracy over time for this behaviour. For the localization task, we propose a novel approach which can act as a wrapper for traditional visual odometry based approaches to improve localization accuracy in dynamic human environments. We evaluate this approach by showing how the performance varies with an increasing number of dynamic agents present in the scene. This thesis provides qualitative and quantitative evaluations for each of the proposed approaches and shows that they perform better than current approaches.