Are all the frames equally important?
In this work, we address the problem of measuring and predicting temporal
video saliency - a metric that defines the importance of a video frame for
human attention. Unlike conventional spatial saliency, which locates the
salient regions within a frame (as is done for still images), temporal
saliency considers the importance of a frame as a whole and may not exist
apart from its context. We propose an interactive cursor-based interface for
collecting experimental data about temporal saliency, collect the first human
responses with it, and analyse them. We show that, qualitatively, the produced
scores explicitly reflect semantic changes in a frame, while quantitatively
they are highly correlated across all observers. Moreover, the proposed tool
can simultaneously collect fixations similar to those produced by an
eye-tracker, in a more affordable way. This approach may be used to create the
first temporal saliency datasets, enabling the training of computational
predictive algorithms. The proposed interface does not rely on any special
equipment, so it can be run remotely to cover a wide audience.
Comment: CHI'20 Late Breaking Work
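Since the abstract reports high inter-observer correlation of the per-frame
scores, a minimal sketch of how such agreement might be quantified is given
below; the array shapes and the synthetic data are assumptions for
illustration, not the authors' protocol.

```python
import numpy as np
from itertools import combinations

def mean_pairwise_correlation(scores: np.ndarray) -> float:
    """scores: (n_observers, n_frames) per-frame temporal saliency ratings."""
    pairs = combinations(range(scores.shape[0]), 2)
    corrs = [np.corrcoef(scores[i], scores[j])[0, 1] for i, j in pairs]
    return float(np.mean(corrs))

# Synthetic example: three observers rating 100 frames of one video,
# each observing a shared "true" saliency signal plus individual noise.
rng = np.random.default_rng(0)
true_saliency = rng.random(100)
observers = true_saliency + 0.1 * rng.standard_normal((3, 100))
print(mean_pairwise_correlation(observers))  # close to 1 -> high agreement
```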
Object and feature based modelling of attention in meeting and surveillance videos
MPhil
The aim of the thesis is to create and validate models of visual attention. To
this end, a novel unsupervised object detection and tracking framework has been
developed by the author. It is demonstrated on people, faces and moving
objects, and its output is integrated into the modelling of visual attention.
The proposed approach integrates several types of modules for initialisation,
target estimation and validation. Tracking is first used to introduce
high-level features, by extending a popular model based on low-level
features [1]. Two automatic models of visual attention are then implemented:
one based on winner-take-all and inhibition of return as the selection
mechanisms over a saliency model combining high- and low-level features;
another based only on high-level object tracking results and statistical
properties of the collected eye-traces, with the possibility of activating
inhibition of return as an additional mechanism. The parameters of the
tracking framework are thoroughly investigated and its success demonstrated.
Eye-tracking experiments show that high-level features are much better at
explaining the allocation of attention by the subjects in the study. Low-level
features alone do correlate significantly with the real allocation of
attention; however, combining them with high-level features lowers the
correlation score compared to using high-level features alone. Further,
findings in the collected eye-traces are studied with a qualitative method,
mainly to identify directions for future research in the area. Similarities
and dissimilarities between the automatic models of attention and the
collected eye-traces are discussed.
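The winner-take-all and inhibition-of-return selection referred to above is
the classic Itti-Koch mechanism; a minimal sketch of it on a 2-D saliency map
is given below, with the map size and suppression radius chosen purely for
illustration, not taken from the thesis.

```python
import numpy as np

def attend(saliency: np.ndarray, n_fixations: int = 5, radius: int = 15):
    """Select fixation points by winner-take-all with inhibition of return."""
    s = saliency.astype(float).copy()
    yy, xx = np.ogrid[:s.shape[0], :s.shape[1]]
    fixations = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(s), s.shape)  # winner-take-all
        fixations.append((y, x))
        # Inhibition of return: suppress a disc around the attended point
        # so the next winner is selected elsewhere.
        s[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = 0.0
    return fixations

# Example: five fixations on a random saliency map.
print(attend(np.random.default_rng(1).random((120, 160))))
```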
On the Distribution of Salient Objects in Web Images and its Influence on Salient Object Detection
It has become apparent that a Gaussian center bias can serve as an important
prior for visual saliency detection, which has been demonstrated for predicting
human eye fixations and salient object detection. Tseng et al. have shown that
the photographer's tendency to place interesting objects in the center is a
likely cause for the center bias of eye fixations. We investigate the influence
of the photographer's center bias on salient object detection, extending our
previous work. We show that the centroid locations of salient objects in
photographs of Achanta and Liu's data set in fact correlate strongly with a
Gaussian model. This is an important insight, because it provides an empirical
motivation and justification for the integration of such a center bias in
salient object detection algorithms and helps to understand why Gaussian models
are so effective. To assess the influence of the center bias on salient object
detection, we integrate an explicit Gaussian center bias model into two
state-of-the-art salient object detection algorithms. This way, first, we
quantify the influence of the Gaussian center bias on pixel- and segment-based
salient object detection. Second, we improve the performance in terms of F1
score, Fβ score, area under the recall-precision curve, area under the receiver
operating characteristic curve, and hit-rate on the well-known data set by
Achanta and Liu. Third, by debiasing Cheng et al.'s region contrast model, we
demonstrate by example that implicit center biases are partially responsible
for the outstanding performance of state-of-the-art algorithms. Last but not
least, as a result of debiasing Cheng et al.'s algorithm, we introduce a
non-biased salient object detection method, which is of interest for
applications in which the image data is not likely to have a photographer's
center bias (e.g., image data of surveillance cameras or autonomous robots).
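As a rough illustration of the bias integration described above, the sketch
below multiplies a saliency map by a Gaussian prior centred on the image;
dividing by the same prior gives a simple debiased variant. The multiplicative
combination and the sigma value are assumptions for illustration, not the
paper's exact formulation.

```python
import numpy as np

def gaussian_center_bias(shape, sigma_frac: float = 0.3) -> np.ndarray:
    """2-D Gaussian prior centred on the image centre."""
    h, w = shape
    yy, xx = np.mgrid[:h, :w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sy, sx = sigma_frac * h, sigma_frac * w
    return np.exp(-((yy - cy) ** 2 / (2 * sy ** 2)
                    + (xx - cx) ** 2 / (2 * sx ** 2)))

def biased(saliency: np.ndarray) -> np.ndarray:
    return saliency * gaussian_center_bias(saliency.shape)   # add the prior

def debiased(saliency: np.ndarray) -> np.ndarray:
    return saliency / gaussian_center_bias(saliency.shape)   # remove the prior
```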
Audio-Visual Glance Network for Efficient Video Recognition
Deep learning has made significant strides in video understanding tasks, but
the computation required to classify lengthy and massive videos using
clip-level video classifiers remains impractical and prohibitively expensive.
To address this issue, we propose Audio-Visual Glance Network (AVGN), which
leverages the commonly available audio and visual modalities to efficiently
process the spatio-temporally important parts of a video. AVGN first divides
the video into snippets of image-audio clip pairs and employs lightweight
unimodal encoders to extract global visual features and audio features. To
identify the important temporal segments, we use an Audio-Visual Temporal
Saliency Transformer (AV-TeST) that estimates the saliency scores of each
frame. To further increase efficiency in the spatial dimension, AVGN processes
only the important patches instead of the whole images. We use an
Audio-Enhanced Spatial Patch Attention (AESPA) module to produce a set of
enhanced coarse visual features, which are fed to a policy network that
produces the coordinates of the important patches. This approach enables us to
focus only on the most important spatio-temporal parts of the video, leading
to more efficient video recognition. Moreover, we incorporate various training
techniques and multi-modal feature fusion to enhance the robustness and
effectiveness of our AVGN. By combining these strategies, our AVGN sets a new
state of the art on multiple video recognition benchmarks while achieving
faster processing speed.
Comment: ICCV 2023
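The full AV-TeST and AESPA modules are beyond a short snippet, but the
underlying glance idea - score snippets cheaply, then spend compute only on
the top-k - can be sketched as follows; the tensor sizes and the linear
scoring head are placeholders, not the actual AVGN components.

```python
import torch

def glance(visual_feats: torch.Tensor, audio_feats: torch.Tensor,
           saliency_head: torch.nn.Module, k: int = 8) -> torch.Tensor:
    """Score (n_snippets, d) unimodal features and return the indices of
    the k most salient snippets, the only ones forwarded to the expensive
    downstream classifier."""
    fused = torch.cat([visual_feats, audio_feats], dim=-1)
    scores = saliency_head(fused).squeeze(-1)   # (n_snippets,)
    return torch.topk(scores, k=min(k, scores.numel())).indices

# Example with placeholder features and a linear scoring head.
head = torch.nn.Linear(512 + 128, 1)
idx = glance(torch.randn(32, 512), torch.randn(32, 128), head, k=8)
```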