Learning Temporal Alignment Uncertainty for Efficient Event Detection
In this paper we tackle the problem of efficient video event detection. We
argue that linear detection functions should be preferred in this regard due to
their scalability and efficiency during estimation and evaluation. A popular
approach is to represent a sequence with a bag-of-words (BOW) representation,
owing to (i) its fixed dimensionality irrespective of sequence length, and
(ii) its ability to compactly model the statistics of the sequence.
sequence. A drawback to the BOW representation, however, is the intrinsic
destruction of the temporal ordering information. In this paper we propose a
new representation that leverages the uncertainty in relative temporal
alignments between pairs of sequences while not destroying temporal ordering.
Our representation, like BOW, is of a fixed dimensionality making it easily
integrated with a linear detection function. Extensive experiments on CK+,
6DMG, and UvA-NEMO databases show significant performance improvements across
both isolated and continuous event detection tasks.

Comment: Appeared in DICTA 2015, 8 pages
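The fixed-dimensionality property of BOW described in the abstract can be illustrated with a short sketch (illustrative only, not the paper's code; the codebook and descriptors are hypothetical random data):

```python
import numpy as np

def bow_histogram(frame_descriptors, codebook):
    """Quantize each frame descriptor to its nearest codeword and
    accumulate a normalized histogram (length = codebook size)."""
    # Distance from every frame descriptor to every codeword
    d = np.linalg.norm(frame_descriptors[:, None, :] - codebook[None, :, :], axis=2)
    assignments = d.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Two sequences of different lengths map to same-dimensional vectors
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))       # 8 codewords, 4-D descriptors
short_seq = rng.normal(size=(10, 4))     # 10 frames
long_seq = rng.normal(size=(100, 4))     # 100 frames
h1 = bow_histogram(short_seq, codebook)
h2 = bow_histogram(long_seq, codebook)
# A linear detector then scores any sequence as w @ h + b
```

Note how the histogram's length depends only on the codebook, which is what makes BOW directly compatible with a linear detection function; the temporal ordering of the frames, however, is discarded in the counting step.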
Face detection and clustering for video indexing applications
This paper describes a method for automatically detecting human faces in generic video sequences. We employ an iterative algorithm in order to give a confidence measure for the presence or absence of faces within video shots. Skin colour filtering is carried out on a selected number of frames per video shot, followed by the application of shape and size heuristics. Finally, the remaining candidate regions are normalized and projected into an eigenspace, the reconstruction error being the measure of confidence for the presence or absence of a face. Following this, the confidence score for the entire video shot is calculated. In order to cluster extracted faces into a set of face classes, we employ an incremental procedure using a PCA-based dissimilarity measure in conjunction with spatio-temporal correlation. Experiments were carried out on a representative broadcast news test corpus.
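The eigenspace reconstruction-error confidence described above can be sketched as follows (a minimal illustration under stated assumptions, not the paper's implementation; the "faces" here are synthetic vectors and the patch dimensions are hypothetical):

```python
import numpy as np

def reconstruction_error(patch_vec, mean_face, eigvecs):
    """Project a normalized candidate region into the face eigenspace and
    return the reconstruction error; a small error suggests a face-like region."""
    centered = patch_vec - mean_face
    coeffs = eigvecs.T @ centered          # projection onto eigenfaces
    recon = eigvecs @ coeffs               # back-projection into image space
    return np.linalg.norm(centered - recon)

# Toy eigenspace built from synthetic "face" vectors via SVD-based PCA
rng = np.random.default_rng(1)
faces = rng.normal(size=(50, 16))          # 50 training faces, 16-D patches
mean_face = faces.mean(axis=0)
u, s, vt = np.linalg.svd(faces - mean_face, full_matrices=False)
eigvecs = vt[:5].T                         # top-5 eigenfaces (16 x 5)

err_face = reconstruction_error(faces[0], mean_face, eigvecs)
err_rand = reconstruction_error(rng.normal(size=16) * 10, mean_face, eigvecs)
```

A vector that lies in the span of the eigenfaces reconstructs exactly, so candidate regions resembling the training faces yield low error; this is the per-region confidence the method aggregates into a shot-level score.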
Micro-expression Recognition using Spatiotemporal Texture Map and Motion Magnification
Micro-expressions are short-lived, rapid facial expressions that are exhibited by individuals when they are in high-stakes situations. Studying these micro-expressions is important as they cannot be modified by an individual and hence offer a peek into what the individual is actually feeling and thinking, as opposed to what he/she is trying to portray. The spotting and recognition of micro-expressions has applications in the fields of criminal investigation, psychotherapy, education, etc. However, due to micro-expressions' short-lived and rapid nature, spotting, recognizing and classifying them is a major challenge. In this paper, we design a hybrid approach for spotting and recognizing micro-expressions by utilizing motion magnification using Eulerian Video Magnification and a Spatiotemporal Texture Map (STTM). The approach was validated on the spontaneous micro-expression dataset CASME II in comparison with the baseline. It achieved an accuracy of 80%, an increase of 5% over the existing baseline, using 10-fold cross-validation with Support Vector Machines (SVM) with a linear kernel.
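The final classification stage described in the abstract (linear-kernel SVM evaluated with 10-fold cross-validation) can be sketched as below. This is only the evaluation protocol, not the paper's pipeline: the STTM features extracted from motion-magnified video are replaced here with synthetic data, so the numbers are meaningless:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-in for STTM feature vectors; in the paper these come from
# motion-magnified video, here they are synthetic (hypothetical data).
X, y = make_classification(n_samples=100, n_features=20, random_state=0)

clf = SVC(kernel="linear")                    # linear-kernel SVM
scores = cross_val_score(clf, X, y, cv=10)    # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.2f}")
```

Each of the ten folds holds out 10% of the samples for testing, and the reported accuracy is the mean over folds, matching the protocol stated in the abstract.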
LOMo: Latent Ordinal Model for Facial Analysis in Videos
We study the problem of facial analysis in videos. We propose a novel weakly
supervised learning method that models the video event (expression, pain etc.)
as a sequence of automatically mined, discriminative sub-events (e.g. onset and
offset phases for a smile; brow lowering and cheek raising for pain). The
proposed model is inspired by recent work on Multiple Instance Learning and
latent SVM/HCRF: it extends such frameworks to approximately model the ordinal,
or temporal, aspect of the videos. We obtain consistent improvements over relevant
competitive baselines on four challenging and publicly available video based
facial analysis datasets for prediction of expression, clinical pain and intent
in dyadic conversations. In combination with complementary features, we report
state-of-the-art results on these datasets.

Comment: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
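The idea of scoring a video by temporally ordered sub-events can be sketched in a simplified form (a brute-force toy under stated assumptions, not the LOMo model itself; the templates and features are hypothetical):

```python
import numpy as np
from itertools import combinations

def ordinal_score(frames, templates):
    """Best total response over K template-to-frame assignments whose frame
    indices strictly increase in time (a simplified latent-ordinal scoring,
    brute force for clarity)."""
    resp = frames @ templates.T                  # (T, K) per-frame responses
    best = -np.inf
    T, K = resp.shape
    for idx in combinations(range(T), K):        # temporally ordered indices
        best = max(best, sum(resp[t, k] for k, t in enumerate(idx)))
    return best

# Hypothetical 2-D frame features; template 0 = "onset", template 1 = "offset"
templates = np.eye(2)
ordered = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # onset before offset
reversed_ = ordered[::-1].copy()                          # offset before onset
```

Because the assignment indices must increase in time, a sequence whose sub-events occur in the expected order scores higher than the same frames in reverse, which is the ordinal constraint a plain BOW or MIL max-pooling model cannot express.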