Aerial Vehicle Tracking by Adaptive Fusion of Hyperspectral Likelihood Maps
Hyperspectral cameras provide unique spectral signatures that can consistently
distinguish materials, a capability useful for surveillance tasks. In this
paper, we propose a novel real-time hyperspectral likelihood maps-aided
tracking method (HLT) inspired by an adaptive hyperspectral sensor. A moving
object tracking system generally consists of registration, object detection,
and tracking modules. We focus on the target detection part and remove the
need to build offline classifiers or tune a large number of hyperparameters;
instead, we learn a generative target model online over hyperspectral channels
ranging from visible to infrared wavelengths. The key idea is that our
adaptive fusion method combines likelihood maps from multiple bands of
hyperspectral imagery into a single, more distinctive representation,
increasing the margin between the mean values of foreground and background
pixels in the fused map.
Experimental results show that the HLT not only outperforms all established
fusion methods but is also on par with current state-of-the-art hyperspectral
target tracking frameworks.
Comment: Accepted at the International Conference on Computer Vision and
Pattern Recognition Workshops, 201
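The band-wise fusion described above can be sketched as follows. This is a hypothetical illustration, not the paper's exact rule: each band's likelihood map is weighted by how well it separates the current foreground estimate from the background (the very margin the fused map is meant to enlarge); function and variable names are invented.

```python
import numpy as np

def fuse_likelihood_maps(maps, fg_mask):
    """Fuse per-band likelihood maps into one distinctive map.

    Hypothetical weighting: each band is scored by the gap between the
    mean likelihood inside and outside the foreground mask, so bands
    that separate target from background dominate the fused result.
    """
    maps = np.asarray(maps, dtype=float)          # shape (bands, H, W)
    fg = fg_mask.astype(bool)
    weights = np.array([m[fg].mean() - m[~fg].mean() for m in maps])
    weights = np.clip(weights, 0.0, None)         # drop bands that invert fg/bg
    if weights.sum() == 0:
        weights = np.ones(len(maps))              # fall back to uniform fusion
    weights /= weights.sum()
    fused = np.tensordot(weights, maps, axes=1)   # weighted sum, shape (H, W)
    return fused, weights
```

With this weighting, an uninformative band receives near-zero weight, so the fused map's foreground/background margin is driven by the most discriminative bands.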
Sea-Surface Object Detection Based on Electro-Optical Sensors: A Review
Sea-surface object detection is critical for the navigation safety of autonomous ships.
Electro-optical (EO) sensors, such as video cameras, complement on-board radar in
detecting small sea-surface obstacles. Traditionally, researchers have used horizon
detection, background subtraction, and foreground segmentation techniques to detect
sea-surface objects. Recently, deep learning-based object detection technologies have
gradually been applied to sea-surface object detection. This article presents a
comprehensive overview of sea-surface object-detection approaches, comparing the
advantages and drawbacks of each technique across four essential aspects: EO sensors
and image types, traditional object-detection methods, deep learning methods, and
maritime dataset collection. In particular, deep learning-based sea-surface object
detection methods are thoroughly analyzed, and highly influential public datasets are
introduced as benchmarks to verify the effectiveness of these approaches.
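As an illustration of the traditional background-subtraction technique the review covers, here is a minimal running-average detector. This is a generic sketch, not any specific method surveyed; the class and parameter names are invented, and real maritime pipelines typically combine such subtraction with horizon detection.

```python
import numpy as np

class RunningAverageBackground:
    """Minimal background-subtraction detector (illustrative sketch)."""

    def __init__(self, alpha=0.05, thresh=30.0):
        self.alpha = alpha      # background update rate
        self.thresh = thresh    # foreground intensity threshold
        self.bg = None          # running background estimate

    def apply(self, frame):
        frame = frame.astype(float)
        if self.bg is None:
            self.bg = frame.copy()            # first frame seeds the background
        fg = np.abs(frame - self.bg) > self.thresh
        # update the background only where no foreground was detected,
        # so obstacles are not absorbed into the background model
        self.bg = np.where(fg, self.bg,
                           (1 - self.alpha) * self.bg + self.alpha * frame)
        return fg
```

On a static sea background, the mask stays empty; a sufficiently bright obstacle entering the scene exceeds the threshold and is flagged as foreground.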
Objects predict fixations better than early saliency
Humans move their eyes while looking at scenes and pictures. Eye movements correlate with shifts in attention and are thought to be a consequence of optimal resource allocation for high-level tasks such as visual recognition. Models of attention, such as “saliency maps,” are often built on the assumption that “early” features (color, contrast, orientation, motion, and so forth) drive attention directly. We explore an alternative hypothesis: Observers attend to “interesting” objects. To test this hypothesis, we measure the eye position of human observers while they inspect photographs of common natural
scenes. Our observers perform different tasks: artistic evaluation, analysis of content, and search. Immediately after each presentation, our observers are asked to name objects they saw. Weighted with recall frequency, these objects predict fixations in individual images better than early saliency, irrespective of task. Also, saliency combined with object positions predicts which objects are frequently named. This suggests that early saliency has only an indirect effect on attention, acting
through recognized objects. Consequently, rather than treating attention as a mere preprocessing step for object recognition, models of both processes need to be integrated.
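The comparison described above can be sketched roughly as follows: build a prediction map from object masks weighted by naming frequency, then score it against recorded fixations with a ROC-style AUC, as one would score a saliency map. This is a simplified illustration; the function names and exact weighting are assumptions, not the study's protocol.

```python
import numpy as np

def object_map(masks, recall_freq, shape):
    """Sum binary object masks, weighted by how often each object was named."""
    m = np.zeros(shape)
    for mask, f in zip(masks, recall_freq):
        m += f * mask
    return m

def fixation_auc(pred_map, fix_points):
    """ROC AUC: probability that a fixated pixel outscores a non-fixated one
    (ties count half), computed exhaustively over all pixel pairs."""
    fix = np.zeros(pred_map.shape, bool)
    for y, x in fix_points:
        fix[y, x] = True
    pred = pred_map.ravel()
    fix = fix.ravel()
    pos, neg = pred[fix], pred[~fix]
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq
```

A map that concentrates mass on the fixated objects scores near 1.0, while a flat map scores 0.5 (chance), mirroring the kind of comparison in which object maps were found to beat early saliency.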
Dynamic Background Reconstruction via MAE for Infrared Small Target Detection
Infrared small target detection (ISTD) under complex backgrounds is a
difficult problem because targets are hard to distinguish from their
backgrounds. Background reconstruction is one approach to this problem. This
paper proposes an ISTD method based on background reconstruction called
Dynamic Background Reconstruction (DBR). DBR consists of three modules: a
dynamic shift window module (DSW), a background reconstruction module (BR),
and a detection head (DH). BR exploits the strength of Vision Transformers in
reconstructing missing patches and adopts a grid masking strategy with a
masking ratio of 50% to reconstruct clean, target-free backgrounds. To avoid
splitting a target across two neighboring patches, which would cause
reconstruction failure, DSW is applied before input embedding: it calculates
offsets by which the infrared images are dynamically shifted. To reduce False
Positive (FP) cases caused by mistaking reconstruction errors for targets, DH
uses a densely connected Transformer structure to further improve detection
performance. Experimental results show that DBR achieves the best F1-score on
the two ISTD datasets, MFIRST (64.10%) and SIRST (75.01%).
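The two core ideas above, a 50% grid mask over patches and flagging large reconstruction errors as targets, can be sketched as follows. The checkerboard pattern is one plausible reading of "grid masking," and the statistical threshold is a crude stand-in for DBR's learned detection head; names are hypothetical.

```python
import numpy as np

def grid_mask(h_patches, w_patches):
    """Checkerboard (grid) mask over the patch grid: 50% of patches masked,
    alternating so every masked patch has visible neighbors to infer from."""
    ii, jj = np.meshgrid(np.arange(h_patches), np.arange(w_patches),
                         indexing="ij")
    return (ii + jj) % 2 == 0      # True = patch is masked out

def detect_by_reconstruction_error(image, reconstructed, k=3.0):
    """Flag pixels whose reconstruction error exceeds mean + k*std.
    A simple stand-in for DBR's detection head, which instead learns a
    densely connected Transformer to suppress such false positives."""
    err = np.abs(image.astype(float) - reconstructed.astype(float))
    return err > err.mean() + k * err.std()
```

Because the MAE-style reconstructor is trained to inpaint background, a small target that was masked out is reconstructed as background, leaving a localized error peak that the thresholding picks up.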