Online Multi Camera-IMU Calibration
Visual-inertial navigation systems are powerful in their ability to
accurately estimate localization of mobile systems within complex environments
that preclude the use of global navigation satellite systems. However, these
navigation systems are reliant on accurate and up-to-date temporospatial
calibrations of the sensors being used. As such, online estimators for these
parameters are useful in resilient systems. This paper presents an extension to
existing Kalman filter-based frameworks for estimating and calibrating the
extrinsic parameters of multi-camera-IMU systems. In addition to extending the
filter framework to include multiple camera sensors, the measurement model was
reformulated to make use of the measurement data typically made available by
fiducial detection software. A secondary filter layer was used to estimate
time-translation parameters without closed-loop feedback of sensor data.
Experimental calibration results, including the use of cameras with
non-overlapping fields of view, were used to validate the stability and
accuracy of the filter formulation when compared to offline methods. Finally,
the generalized filter code has been open-sourced and is available online.
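The core of such a framework is the standard EKF measurement update, driven here by fiducial detections. The snippet below is an illustrative sketch of that generic update step, not the released filter code; the function name and interface are assumptions.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update.

    x : state estimate (e.g. IMU pose with per-camera extrinsics appended)
    P : state covariance
    z : measurement (e.g. fiducial corner pixel coordinates)
    h : predicted measurement h(x)
    H : measurement Jacobian dh/dx evaluated at x
    R : measurement noise covariance
    """
    y = z - h                       # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

Because the extrinsics are part of the state vector, each fiducial observation refines both the pose and the calibration parameters through this same update.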
Vehicular Teamwork: Collaborative Localization of Autonomous Vehicles
This paper develops a distributed collaborative localization (DCL) algorithm
based on an extended Kalman filter. The algorithm incorporates Ultra-Wideband
(UWB) measurements for vehicle-to-vehicle ranging, and shows improvements in
localization accuracy where GPS typically falls short. The algorithm was first
tested in a newly created open-source simulation environment that emulates
various numbers of vehicles and sensors while simultaneously testing multiple
localization algorithms. Predicted error distributions for the various
algorithms can be produced quickly using Monte Carlo methods and optimization
techniques within MATLAB. The simulation results were validated experimentally in an
outdoor, urban environment. Improvements in localization accuracy over a
typical extended Kalman filter ranged from 2.9% to 9.3% over 180-meter test
runs. When GPS was denied, these improvements increased to up to 83.3% over a
standard Kalman filter. In both simulation and experiment, the DCL
algorithm was shown to be a good approximation of a full-state filter, while
reducing the required communication between vehicles. These results are promising
in showing the efficacy of adding UWB ranging sensors to cars for collaborative
and landmark localization, especially in GPS-denied environments. In future
work, additional moving vehicles with additional tags will be tested in other
challenging GPS-denied environments.
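The UWB ranging update at the heart of such a filter can be sketched as follows. This is a simplified, single-vehicle illustration with a hypothetical interface, not the paper's DCL implementation: the ranging partner's position is treated as perfectly known, whereas the full algorithm also accounts for its uncertainty and the inter-vehicle correlations.

```python
import numpy as np

def uwb_range_update(x_i, P_i, x_j, z_range, sigma_uwb):
    """EKF update of vehicle i's 2-D position from a UWB range to vehicle j.

    x_i : [x, y] position estimate of the ego vehicle
    P_i : 2x2 covariance of x_i
    x_j : [x, y] position of the ranging partner (assumed known here)
    z_range : measured UWB distance (metres)
    sigma_uwb : range-measurement standard deviation (metres)
    """
    d = x_i - x_j
    r = np.linalg.norm(d)
    H = (d / r).reshape(1, 2)   # Jacobian of the range w.r.t. x_i
    y = z_range - r             # innovation
    S = H @ P_i @ H.T + sigma_uwb ** 2
    K = P_i @ H.T / S           # Kalman gain (2x1)
    x_new = x_i + (K * y).ravel()
    P_new = (np.eye(2) - K @ H) @ P_i
    return x_new, P_new
```

The range measurement only constrains position along the line between the two vehicles, which is why fusing it with GPS and odometry in the full filter is what yields the reported accuracy gains.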
AutoCone: An OmniDirectional Robot for Lane-Level Cone Placement
This paper summarizes the progress in developing a rugged, low-cost,
automated ground cone robot network capable of traffic delineation at
lane-level precision. A holonomic omnidirectional base with a traffic
delineator was developed to allow flexibility in initialization. RTK GPS was
utilized to reduce minimum position error to 2 centimeters. Due to recent
developments, the cost of the platform is now less than $1,600. To minimize the
effects of GPS-denied environments, wheel encoders and an extended Kalman
filter were implemented to maintain lane-level accuracy during operation,
achieving a maximum error of 1.97 meters over a 50-meter run with little to no
GPS signal.
Future work includes increasing the operational speed of the platforms,
incorporating lanelet information for path planning, and cross-platform
estimation.
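Between GPS fixes, wheel-encoder odometry for a holonomic base amounts to integrating body-frame velocities into the world frame. Below is a minimal first-order sketch of that propagation step; the interface is hypothetical and not the AutoCone codebase.

```python
import math

def integrate_odometry(pose, v_x, v_y, omega, dt):
    """Propagate a holonomic base's pose from wheel-encoder velocities.

    pose : (x, y, theta) in the world frame
    v_x, v_y : body-frame velocities derived from the wheel encoders
    omega : yaw rate (rad/s)
    dt : timestep (s)
    """
    x, y, theta = pose
    # Rotate body-frame velocity into the world frame, then integrate.
    x += (v_x * math.cos(theta) - v_y * math.sin(theta)) * dt
    y += (v_x * math.sin(theta) + v_y * math.cos(theta)) * dt
    theta += omega * dt
    return (x, y, theta)
```

In the filter, this propagation serves as the prediction step whose drift is corrected whenever an RTK GPS fix is available.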
Radar-Only Off-Road Local Navigation
Off-road robots have traditionally utilized lidar for local navigation due
to its accuracy and high resolution. However, the limitations of lidar, such as
reduced performance in harsh environmental conditions and limited range, have
prompted the exploration of alternative sensing technologies. This paper
investigates the potential of radar for off-road local navigation, as it offers
the advantages of a longer range and the ability to penetrate dust and light
vegetation. We adapt existing lidar-based methods for radar and evaluate the
performance in comparison to lidar under various off-road conditions. We show
that radar can provide a significant range advantage over lidar while
maintaining accuracy for both ground plane estimation and obstacle detection.
Finally, we demonstrate successful autonomous navigation at a speed of 2.5
m/s over a path length of 350 m using only radar for ground plane estimation
and obstacle detection.
Comment: 7 pages, 17 figures, ITSC 202
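Ground plane estimation in lidar-style pipelines is commonly done with RANSAC plane fitting, which transfers directly to radar point clouds. The sketch below is a generic illustration of that step; the interface and parameter values are assumptions, not the paper's implementation.

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, inlier_thresh=0.1, rng=None):
    """Fit a ground plane to a 3-D point cloud with RANSAC.

    points : (N, 3) array of returns in the vehicle frame
    Returns (normal, d) defining the plane normal . p + d = 0.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers, best_plane = 0, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        v1, v2 = sample[1] - sample[0], sample[2] - sample[0]
        normal = np.cross(v1, v2)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = int((dist < inlier_thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane
```

Points far above the fitted plane can then be flagged as obstacle candidates, which is the second half of the navigation pipeline described above.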
Autonomous deployment and repair of a sensor network using an unmanned aerial vehicle
We describe a sensor network deployment method using autonomous flying robots. Such networks are suitable for tasks such as large-scale environmental monitoring or for command and control in emergency situations. We describe in detail the algorithms used for deployment and for measuring network connectivity, and provide experimental data collected from field trials. A particular focus is on determining gaps in the connectivity of the deployed network and generating a plan for a second, repair pass to complete the connectivity. This project is the result of a collaboration between three robotics labs (CSIRO, USC, and Dartmouth).
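The connectivity-gap check can be illustrated by clustering the deployed nodes under a disk radio model: more than one cluster means the repair pass must drop additional nodes to bridge the gaps. This is a simplified sketch under an idealized fixed-radius link assumption, not the project's actual algorithm.

```python
from itertools import combinations

def connectivity_gaps(nodes, radio_range):
    """Group sensor nodes into connected radio clusters (union-find).

    nodes : list of (x, y) positions reported after the deployment pass
    radio_range : assumed maximum link distance
    Returns a list of clusters (lists of node indices).
    """
    parent = list(range(len(nodes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Link every pair of nodes within radio range.
    for i, j in combinations(range(len(nodes)), 2):
        (xi, yi), (xj, yj) = nodes[i], nodes[j]
        if (xi - xj) ** 2 + (yi - yj) ** 2 <= radio_range ** 2:
            parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(nodes)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

A repair plan then amounts to choosing drop points that merge the clusters into one.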
RELLIS-3D Dataset: Data, Benchmarks and Analysis
Semantic scene understanding is crucial for robust and safe autonomous
navigation, particularly so in off-road environments. Recent deep learning
advances in 3D semantic segmentation rely heavily on large sets of training
data; however, existing autonomy datasets either represent urban environments
or lack multimodal off-road data. We fill this gap with RELLIS-3D, a multimodal
dataset collected in an off-road environment, which contains annotations for
13,556 LiDAR scans and 6,235 images. The data was collected on the RELLIS
Campus of Texas A&M University, and presents challenges to existing algorithms
related to class imbalance and environmental topography. Additionally, we
evaluate current state-of-the-art deep learning semantic segmentation
models on this dataset. Experimental results show that RELLIS-3D presents
challenges for algorithms designed for segmentation in urban environments. This
novel dataset provides the resources needed by researchers to continue to
develop more advanced algorithms and investigate new research directions to
enhance autonomous navigation in off-road environments. RELLIS-3D will be
published at https://github.com/unmannedlab/RELLIS-3D.
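One common way to handle the class imbalance such a dataset exposes is to weight the segmentation loss by inverse class frequency. The snippet below is an illustrative log-inverse-frequency scheme, an assumption for demonstration rather than the benchmark's actual training recipe.

```python
import numpy as np

def inverse_frequency_weights(label_counts, eps=1.02):
    """Per-class loss weights for an imbalanced segmentation dataset.

    label_counts : per-class pixel (or point) counts over the training set
    Uses w_c = 1 / log(eps + f_c), where f_c is the class frequency;
    eps bounds the weight assigned to very rare classes.
    """
    counts = np.asarray(label_counts, dtype=float)
    freqs = counts / counts.sum()
    return 1.0 / np.log(eps + freqs)
```

Rare classes (e.g. sparse obstacle categories) receive larger weights, so the loss does not let the dominant terrain classes swamp them.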