LiDAR and Camera Detection Fusion in a Real Time Industrial Multi-Sensor Collision Avoidance System
Collision avoidance is a critical task in many applications, such as ADAS
(advanced driver-assistance systems), industrial automation and robotics. In an
industrial automation setting, certain areas should be off-limits to an
automated vehicle to protect people and high-value assets. These areas
can be quarantined by mapping (e.g., GPS) or via beacons that delineate a
no-entry area. We propose a delineation method where the industrial vehicle
utilizes a LiDAR (Light Detection and Ranging) and a single color camera to
detect passive beacons and model-predictive control to stop the vehicle from
entering a restricted space. The beacons are standard orange traffic cones with
a highly reflective vertical pole attached. The LiDAR can readily detect these
beacons, but suffers from false positives due to other reflective surfaces such
as worker safety vests. Herein, we put forth a method for reducing false
positive detection from the LiDAR by projecting the beacons in the camera
imagery via a deep learning method and validating the detection using a neural
network-learned projection from the camera to the LiDAR space. Experimental
data collected at Mississippi State University's Center for Advanced Vehicular
Systems (CAVS) shows the effectiveness of the proposed system in keeping the
true detection while mitigating false positives.Comment: 34 page
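The validation step described above can be sketched in a few lines: camera detections are mapped into the LiDAR plane, and a LiDAR return is kept only if a projected camera detection lies nearby. The function names, the stand-in affine projection (in place of the paper's learned neural-network projection), and the 1 m gating distance are all illustrative assumptions, not details from the paper.

```python
import math

def project_to_lidar(box_center):
    # Stand-in for the paper's neural-network-learned camera-to-LiDAR
    # projection: a hypothetical affine map from pixel coordinates
    # (u, v) to ground-plane coordinates (x, y) in metres.
    u, v = box_center
    return (0.05 * u - 10.0, 0.05 * v - 10.0)

def validate_detections(lidar_dets, camera_boxes, gate=1.0):
    """Keep only LiDAR detections (x, y) that lie within `gate` metres
    of some camera detection projected into the LiDAR plane."""
    projected = [project_to_lidar(b) for b in camera_boxes]
    validated = []
    for (x, y) in lidar_dets:
        if any(math.hypot(x - px, y - py) <= gate for (px, py) in projected):
            validated.append((x, y))
    return validated
```

Under this scheme, a reflective return with no corresponding camera detection of a cone (e.g., a worker's safety vest) is rejected, while a LiDAR return near a projected cone detection is retained.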
Fusion of an Ensemble of Augmented Image Detectors for Robust Object Detection
A significant challenge in object detection is accurately identifying an
object's position in image space; one algorithm with one set of parameters is
usually not sufficient, and the fusion of multiple algorithms and/or parameter
settings can lead to more robust results. Herein, a new computational
intelligence fusion approach based on the dynamic analysis of agreement among
object detection outputs is proposed. Furthermore, we propose applying image
augmentation online at inference time rather than only during training.
Experiments comparing the results
both with and without fusion are presented. We demonstrate that the augmented
and fused combination results are the best, with respect to higher accuracy
rates and reduction of outlier influences. The approach is demonstrated in the
context of cone, pedestrian and box detection for Advanced Driver Assistance
Systems (ADAS) applications.
Comment: 21 pages, 12 figures, journal paper, MDPI Sensors, 201
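The agreement idea described above can be sketched with a simple consensus rule: boxes from several detectors are clustered by overlap, and a fused box is kept only where enough detectors agree. This is a minimal sketch, not the paper's computational intelligence method; the IoU threshold, the `min_agree` count, and coordinate averaging as the fusion rule are all illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def fuse_by_agreement(detector_outputs, iou_thr=0.5, min_agree=2):
    """Fuse boxes from several detectors: emit an averaged consensus box
    only where at least `min_agree` distinct detectors overlap."""
    flat = [(i, box) for i, boxes in enumerate(detector_outputs) for box in boxes]
    fused, used = [], set()
    for idx, (i, box) in enumerate(flat):
        if idx in used:
            continue
        # Gather overlapping boxes produced by *other* detectors.
        cluster = [(idx, i, box)]
        for jdx, (j, other) in enumerate(flat):
            if jdx != idx and jdx not in used and j != i and iou(box, other) >= iou_thr:
                cluster.append((jdx, j, other))
        if len({c[1] for c in cluster}) >= min_agree:
            used.update(c[0] for c in cluster)
            boxes = [c[2] for c in cluster]
            fused.append(tuple(sum(v) / len(boxes) for v in zip(*boxes)))
    return fused
```

A box reported by only one detector (an outlier) is dropped, which mirrors the abstract's claim that fusion reduces outlier influence.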