Lost in Time: Temporal Analytics for Long-Term Video Surveillance
Video surveillance is a well-researched area, with substantial work
done on object detection, tracking, and behavior analysis. With
the abundance of video data captured over a long period of time, we can
understand patterns in human behavior and scene dynamics through data-driven
temporal analytics. In this work, we propose two schemes to perform descriptive
and predictive analytics on long-term video surveillance data. We generate
heatmap and footmap visualizations to describe spatially pooled trajectory
patterns with respect to time and location. We also present two approaches for
anomaly prediction at the day-level granularity: a trajectory-based statistical
approach, and a time-series based approach. Experimentation with one year data
from a single camera demonstrates the ability to uncover interesting insights
about the scene and to predict anomalies reasonably well.
Comment: To appear in Springer LNE
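As a rough illustration of the spatially pooled trajectory visualizations this abstract describes, the sketch below bins trajectory points into a 2D occupancy histogram; the trajectory format, frame size, and grid resolution are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: pool object trajectories into a spatial heatmap.
# Assumes each trajectory is a list of (x, y) image coordinates; the
# frame size and bin count are illustrative, not from the paper.
import numpy as np
import matplotlib.pyplot as plt

def trajectory_heatmap(trajectories, frame_w=1280, frame_h=720, bins=64):
    """Accumulate all trajectory points into a 2D occupancy histogram."""
    xs = np.concatenate([np.asarray(t)[:, 0] for t in trajectories])
    ys = np.concatenate([np.asarray(t)[:, 1] for t in trajectories])
    heat, _, _ = np.histogram2d(
        ys, xs, bins=bins, range=[[0, frame_h], [0, frame_w]]
    )
    return heat

# Example: two synthetic trajectories crossing the scene.
tracks = [[(100 + 5 * i, 300) for i in range(100)],
          [(640, 50 + 6 * i) for i in range(100)]]
heat = trajectory_heatmap(tracks)
plt.imshow(np.log1p(heat), cmap="hot")  # log scale tames dense hotspots
plt.title("Spatially pooled trajectory heatmap")
plt.savefig("heatmap.png")
```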
Sparsity in Dynamics of Spontaneous Subtle Emotions: Analysis \& Application
Spontaneous subtle emotions are expressed through micro-expressions:
tiny, sudden, and short-lived movements of facial muscles that pose a
great challenge for visual recognition. The abrupt but significant
dynamics relevant to the recognition task are temporally sparse, while
the remaining, irrelevant dynamics are temporally redundant. In this
work, we analyze and enforce sparsity constraints to learn significant
temporal and spectral structures while eliminating the irrelevant
facial dynamics of micro-expressions, which would ease the challenge
in the visual recognition of spontaneous subtle emotions. The hypothesis is
confirmed through experimental results of automatic spontaneous subtle emotion
recognition with several sparsity levels on CASME II and SMIC, the only two
publicly available spontaneous subtle emotion databases. The overall
performances of the automatic subtle emotion recognition are boosted when only
significant dynamics are preserved from the original sequences.
Comment: IEEE Transactions on Affective Computing (2016)
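To make the sparsity idea concrete, here is a minimal sketch that keeps only the strongest temporal-spectral components of a face video and zeroes the rest; the DCT basis and thresholding rule are simple stand-ins for the paper's learned sparse structures, not its actual formulation.

```python
# Hedged sketch of temporal sparsification: keep only the strongest
# spectral components of each pixel's intensity signal over time.
# The DCT basis and the hard threshold are illustrative stand-ins.
import numpy as np
from scipy.fftpack import dct, idct

def sparsify_sequence(frames, keep_ratio=0.1):
    """frames: (T, H, W) grayscale video; returns the sequence with only
    the top `keep_ratio` fraction of temporal DCT coefficients retained."""
    T, H, W = frames.shape
    signals = frames.reshape(T, -1)              # one signal per pixel
    coeffs = dct(signals, axis=0, norm="ortho")  # temporal spectrum
    k = max(1, int(keep_ratio * coeffs.size))
    thresh = np.partition(np.abs(coeffs).ravel(), -k)[-k]
    coeffs[np.abs(coeffs) < thresh] = 0.0        # drop redundant dynamics
    return idct(coeffs, axis=0, norm="ortho").reshape(T, H, W)

video = np.random.rand(60, 64, 64).astype(np.float32)  # stand-in clip
sparse_video = sparsify_sequence(video, keep_ratio=0.05)
```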
Enriched Long-term Recurrent Convolutional Network for Facial Micro-Expression Recognition
Facial micro-expression (ME) recognition poses a huge challenge to
researchers due to its subtle motion and the limited databases
available. Recently, handcrafted techniques have achieved superior
performance in micro-expression recognition, but at the cost of domain
specificity and cumbersome parametric tuning. In this paper, we propose an Enriched Long-term Recurrent
Convolutional Network (ELRCN) that first encodes each micro-expression frame
into a feature vector through CNN module(s), then predicts the micro-expression
by passing the feature vector through a Long Short-term Memory (LSTM) module.
The framework contains two different network variants: (1) Channel-wise
stacking of input data for spatial enrichment, (2) Feature-wise stacking of
features for temporal enrichment. We demonstrate that the proposed approach is
able to achieve reasonably good performance, without data augmentation. In
addition, we present ablation studies conducted on the framework and
visualizations of what the CNN "sees" when predicting the micro-expression classes.
Comment: Published in Micro-Expression Grand Challenge 2018, Workshop of 13th IEEE Facial & Gesture 2018
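A minimal PyTorch sketch of the CNN-then-LSTM pipeline the abstract describes: each frame is encoded into a feature vector, and an LSTM consumes the sequence to predict the class. The tiny convolutional encoder, layer sizes, and class count are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a CNN-encoder + LSTM micro-expression recognizer.
# Layer sizes and the number of classes are illustrative assumptions.
import torch
import torch.nn as nn

class CNNLSTMRecognizer(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, n_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(           # per-frame feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clip):                    # clip: (B, T, 3, H, W)
        B, T = clip.shape[:2]
        feats = self.encoder(clip.flatten(0, 1)).view(B, T, -1)
        _, (h_n, _) = self.lstm(feats)          # last hidden state
        return self.head(h_n[-1])               # class logits

logits = CNNLSTMRecognizer()(torch.randn(2, 10, 3, 64, 64))
```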
Less is More: Micro-expression Recognition from Video using Apex Frame
Despite recent interest and advances in facial micro-expression research,
there is still plenty of room for improvement in micro-expression
recognition. Conventional feature extraction approaches for micro-expression
videos consider either the whole video sequence or a part of it for
representation. However, with the high-speed video capture of micro-expressions
(100-200 fps), are all frames necessary to provide a sufficiently meaningful
representation? Is the luxury of data a bane to accurate recognition? A novel
proposition is presented in this paper, whereby we utilize only two images per
video: the apex frame and the onset frame. The apex frame of a video contains
the highest intensity of expression change among all frames, while the onset
is the perfect choice of reference frame, with its neutral expression. A new
feature extractor, Bi-Weighted Oriented Optical Flow (Bi-WOOF) is proposed to
encode essential expressiveness of the apex frame. We evaluated the proposed
method on five micro-expression databases: CAS(ME)², CASME II, SMIC-HS,
SMIC-NIR and SMIC-VIS. Our experiments lend credence to our hypothesis, with
our proposed technique achieving a state-of-the-art F1-score recognition
performance of 61% and 62% in the high frame rate CASME II and SMIC-HS
databases, respectively.
Comment: 14 pages, double-column; author affiliations updated; acknowledgment of grant support added
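A hedged sketch of the onset-to-apex representation: dense optical flow computed between the two frames, summarized as a magnitude-weighted orientation histogram. This is a simplified stand-in for the proposed Bi-WOOF descriptor; the Farneback parameters and bin count are assumptions.

```python
# Simplified onset-vs-apex flow descriptor (stand-in for Bi-WOOF):
# dense flow between the two frames, pooled into an orientation
# histogram weighted by flow magnitude. Parameters are illustrative.
import cv2
import numpy as np

def onset_apex_descriptor(onset_gray, apex_gray, n_bins=8):
    flow = cv2.calcOpticalFlowFarneback(
        onset_gray, apex_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Orientation histogram, each pixel weighted by its flow magnitude.
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi),
                           weights=mag)
    return hist / (hist.sum() + 1e-8)

onset = np.random.randint(0, 255, (128, 128), np.uint8)  # stand-in frames
apex = np.random.randint(0, 255, (128, 128), np.uint8)
desc = onset_apex_descriptor(onset, apex)
```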
Shallow Triple Stream Three-dimensional CNN (STSTNet) for Micro-expression Recognition
In recent years, the state of the art in facial micro-expression recognition
has been significantly advanced by deep neural networks. The robustness of
deep learning has yielded promising performance beyond that of traditional
handcrafted approaches. Most works in the literature emphasize increasing the
depth of networks and employing highly complex objective functions to learn
more features. In this paper, we design a Shallow Triple Stream
Three-dimensional CNN (STSTNet) that is computationally light whilst capable of
extracting discriminative high level features and details of micro-expressions.
The network learns from three optical flow features (i.e., optical strain,
horizontal and vertical optical flow fields) computed based on the onset and
apex frames of each video. Our experimental results demonstrate the
effectiveness of the proposed STSTNet, which obtained an unweighted average
recall rate of 0.7605 and unweighted F1-score of 0.7353 on the composite
database consisting of 442 samples from the SMIC, CASME II and SAMM databases.
Comment: 5 pages, 1 figure; accepted and published in IEEE FG 2019
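The sketch below illustrates the three-stream input the abstract describes: horizontal flow, vertical flow, and optical strain computed between the onset and apex frames, stacked into a three-channel image for a compact network. The strain magnitude follows the usual infinitesimal-strain definition; the flow settings and input size are assumptions.

```python
# Sketch of the three-stream input: horizontal flow, vertical flow, and
# optical strain from onset-to-apex motion, stacked as 3 channels.
# Flow parameters and the 28x28 input size are illustrative assumptions.
import cv2
import numpy as np

def three_stream_input(onset_gray, apex_gray, size=28):
    flow = cv2.calcOpticalFlowFarneback(
        onset_gray, apex_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]
    u_y, u_x = np.gradient(u)                  # spatial flow derivatives
    v_y, v_x = np.gradient(v)
    # Infinitesimal strain magnitude from the symmetric gradient tensor.
    strain = np.sqrt(u_x**2 + v_y**2 + 0.5 * (u_y + v_x)**2)
    channels = [cv2.resize(c, (size, size)) for c in (u, v, strain)]
    return np.stack(channels, axis=-1)         # (size, size, 3)

onset = np.random.randint(0, 255, (128, 128), np.uint8)  # stand-in frames
apex = np.random.randint(0, 255, (128, 128), np.uint8)
x = three_stream_input(onset, apex)
```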
Analysis of Material Sample Heated by Impinging Hot Hydrogen Jet in a Non-Nuclear Tester
A computational conjugate heat transfer methodology was developed and anchored with data obtained from a hot-hydrogen-jet-heated, non-nuclear materials tester, as a first step towards developing an efficient and accurate multiphysics, thermo-fluid computational methodology to predict environments for a hypothetical solid-core nuclear thermal engine thrust chamber. The computational methodology is based on a multidimensional, finite-volume, turbulent, chemically reacting, thermally radiating, unstructured-grid, and pressure-based formulation. The multiphysics invoked in this study include hydrogen dissociation kinetics and thermodynamics, turbulent flow, and convective, thermal-radiative, and conjugate heat transfer. Predicted hot-hydrogen jet and material surface temperatures were compared with measurements, and predicted solid temperatures were compared with those obtained from a standard heat transfer code. Interrogation of the physics revealed that hydrogen dissociation and recombination reactions are highly correlated with local temperature and are necessary for accurate prediction of the hot-hydrogen jet temperature.
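For a sense of the kind of baseline a standard heat transfer code would provide, the toy sketch below marches 1D transient conduction through a solid sample with a convective boundary driven by the hot jet; all material properties, the heat-transfer coefficient, and the jet temperature are illustrative assumptions, not values from the study.

```python
# Toy 1D transient conduction in a heated sample: explicit finite-volume
# march with a convective (hot-jet) front face and an insulated back face.
# Every numerical value here is an illustrative assumption.
import numpy as np

k, rho, cp = 20.0, 8000.0, 500.0       # W/m-K, kg/m^3, J/kg-K (assumed)
h, T_jet, T0 = 2000.0, 2500.0, 300.0   # W/m^2-K, jet K, initial K (assumed)
L, n = 0.01, 50                        # 1 cm slab, 50 cells
dx = L / n
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha               # stable explicit time step

T = np.full(n, T0)
for _ in range(20000):                 # march in time
    Tn = T.copy()
    # Interior cells: explicit central-difference diffusion.
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # Heated face: convective flux from the jet balances conduction inward.
    T[0] = Tn[0] + dt / (rho * cp * dx) * (
        h * (T_jet - Tn[0]) - k * (Tn[0] - Tn[1]) / dx)
    T[-1] = T[-2]                      # insulated back face
print(f"front-face temperature after heating: {T[0]:.0f} K")
```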