Vision-Based Road Detection in Automotive Systems: A Real-Time Expectation-Driven Approach
The main aim of this work is the development of a vision-based road detection
system fast enough to cope with the difficult real-time constraints imposed by
moving vehicle applications. The hardware platform, a special-purpose massively
parallel system, has been chosen to minimize system production and operational
costs. This paper presents a novel approach to expectation-driven low-level
image segmentation, which can be mapped naturally onto mesh-connected massively
parallel SIMD architectures capable of handling hierarchical data structures.
The input image is assumed to contain a distorted version of a given template;
a multiresolution stretching process is used to reshape the original template
in accordance with the acquired image content, minimizing a potential function.
The distorted template is the process output.

Comment: See http://www.jair.org/ for any accompanying file
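As an illustration only (not the paper's implementation, which targets a mesh-connected SIMD machine), the expectation-driven stretching step can be sketched as a simple coordinate-descent minimization of a potential that combines image evidence with a smoothness term; the function names and the exact potential below are assumptions:

```python
# Hypothetical sketch of template stretching by potential minimization.
# template: predicted edge column for each image row (the prior shape).
# edges: detected edge columns per row (the acquired image content).
# The potential per row is (distance to nearest edge)^2 plus a smoothness
# penalty coupling each row to its vertical neighbours.
def stretch_template(template, edges, smooth=0.5, passes=10, window=3):
    cols = list(template)
    for _ in range(passes):
        for i in range(len(cols)):
            best_c, best_e = cols[i], None
            # try small shifts of this row's template point
            for d in range(-window, window + 1):
                c = cols[i] + d
                # data term: squared distance to the nearest observed edge
                data = min((abs(c - e) for e in edges[i]), default=0) ** 2
                # smoothness term: stay close to neighbouring rows
                s = 0.0
                if i > 0:
                    s += (c - cols[i - 1]) ** 2
                if i < len(cols) - 1:
                    s += (c - cols[i + 1]) ** 2
                e_total = data + smooth * s
                if best_e is None or e_total < best_e:
                    best_c, best_e = c, e_total
            cols[i] = best_c
    return cols
```

Because each row's update reads only its two neighbours, this kind of potential minimization maps naturally onto the mesh-connected parallel architecture the abstract describes, with one processing element per row (or pixel) updating in lockstep.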
Modular Autonomous Biosampler (MAB) - A prototype system for distinct biological size-class sampling and preservation
Presently, there is a community-wide deficiency in our ability to collect and preserve multiple size-class biological samples across a broad spectrum of oceanographic platforms (e.g. AUVs, ROVs, and Ocean Observing System nodes). This is particularly surprising in comparison to the level of instrumentation that now exists for acquiring physical and geophysical data (e.g. side-scan sonar, current profiles, etc.) from these same platforms. We present our effort to develop a low-cost, high-capacity, modular, autonomous biological sampling device (MAB). The unit is designed for filtering and preserving 3 distinct biological size-classes (including bacteria), and is deployable in any aquatic setting from a variety of platform modalities (AUV, ROV, or mooring).
Below Horizon Aircraft Detection Using Deep Learning for Vision-Based Sense and Avoid
Commercial operation of unmanned aerial vehicles (UAVs) would benefit from an
onboard ability to sense and avoid (SAA) potential mid-air collision threats.
In this paper we present a new approach for detection of aircraft below the
horizon. We address some of the challenges faced by existing vision-based SAA
methods such as detecting stationary aircraft (that have no relative motion to
the background), rejecting moving ground vehicles, and simultaneous detection
of multiple aircraft. We propose a multi-stage, vision-based aircraft detection
system which utilises deep learning to produce candidate aircraft that we track
over time. We evaluate the performance of our proposed system on real flight
data where we demonstrate detection ranges comparable to the state of the art
with the additional capability of detecting stationary aircraft, rejecting
moving ground vehicles, and tracking multiple aircraft.
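As a hypothetical sketch of the "track over time" stage (the association rule and thresholds below are assumptions, not the paper's method), per-frame candidates from the deep-learning detector can be associated to tracks by nearest neighbour, with a track confirmed as an aircraft only after several consecutive hits, which suppresses one-off false candidates:

```python
# Hypothetical track-confirmation logic for multi-stage detection.
# tracks: list of dicts {'pos': (x, y), 'hits': int, 'confirmed': bool}.
# detections: list of (x, y) candidate centroids for the current frame.
def update_tracks(tracks, detections, gate=20.0, confirm_hits=3):
    unmatched = list(detections)
    for t in tracks:
        if not unmatched:
            t['hits'] = 0  # no candidate this frame: reset the hit streak
            continue
        # nearest unmatched detection to this track (squared distance)
        d = min(unmatched,
                key=lambda p: (p[0] - t['pos'][0]) ** 2 + (p[1] - t['pos'][1]) ** 2)
        if (d[0] - t['pos'][0]) ** 2 + (d[1] - t['pos'][1]) ** 2 <= gate ** 2:
            unmatched.remove(d)
            t['pos'] = d
            t['hits'] += 1
            # confirm after enough consecutive associations
            t['confirmed'] = t['confirmed'] or t['hits'] >= confirm_hits
        else:
            t['hits'] = 0  # nearest candidate outside the gate: streak broken
    # any remaining detections start new, unconfirmed tracks
    for d in unmatched:
        tracks.append({'pos': d, 'hits': 1, 'confirmed': False})
    return tracks
```

Note that because confirmation depends only on temporal persistence, not on apparent motion, a scheme like this can in principle retain stationary aircraft (no relative motion to the background) that optical-flow-based SAA methods would miss.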
Aerial Vehicle Tracking by Adaptive Fusion of Hyperspectral Likelihood Maps
Hyperspectral cameras can provide unique spectral signatures for consistently
distinguishing materials that can be used to solve surveillance tasks. In this
paper, we propose a novel real-time hyperspectral likelihood maps-aided
tracking method (HLT) inspired by an adaptive hyperspectral sensor. A moving
object tracking system generally consists of registration, object detection,
and tracking modules. We focus on the target detection stage and remove the need to build offline classifiers or tune a large number of hyperparameters; instead, we learn a generative target model online over hyperspectral channels ranging from visible to infrared wavelengths. The key idea is that our adaptive fusion method combines likelihood maps from multiple bands of hyperspectral imagery into a single, more distinctive representation, increasing the margin between the mean values of foreground and background pixels in the fused map. Experimental results show that HLT not only outperforms established fusion methods but is on par with current state-of-the-art hyperspectral target tracking frameworks.

Comment: Accepted at the International Conference on Computer Vision and Pattern Recognition Workshops, 201