Relative Facial Action Unit Detection
This paper presents a subject-independent facial action unit (AU) detection
method by introducing the concept of relative AU detection, for scenarios where
the neutral face is not provided. We propose a new classification objective
function which analyzes the temporal neighborhood of the current frame to
decide if the expression recently increased, decreased or showed no change.
This approach is a significant change from the conventional absolute method
which decides about AU classification using the current frame, without an
explicit comparison with its neighboring frames. Our proposed method improves
robustness to individual differences such as face scale and shape, age-related
wrinkles, and transitions among expressions (e.g., lower intensity of
expressions). Our experiments on three publicly available datasets (Extended
Cohn-Kanade (CK+), Bosphorus, and DISFA databases) show significant improvement
of our approach over conventional absolute techniques.
Keywords: facial action coding system (FACS); relative facial action unit detection; temporal information
Comment: Accepted at IEEE Winter Conference on Applications of Computer Vision, Steamboat Springs, Colorado, USA, 201
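The core idea of relative AU detection — deciding whether an expression recently increased, decreased, or stayed the same by comparing the current frame with its temporal neighborhood rather than judging the frame in isolation — can be illustrated with a minimal sketch. The function name, window size, and threshold below are hypothetical simplifications, not the paper's actual objective function:

```python
import numpy as np

def relative_au_label(intensities, t, window=3, eps=0.1):
    """Label frame t as 'increase', 'decrease', or 'no-change' by
    comparing its AU intensity against the mean of the preceding
    temporal neighborhood (a toy stand-in for the paper's
    relative classification objective)."""
    past = np.mean(intensities[max(0, t - window):t])
    delta = intensities[t] - past
    if delta > eps:
        return "increase"
    if delta < -eps:
        return "decrease"
    return "no-change"
```

Because the decision depends only on the change relative to nearby frames, per-subject offsets in absolute intensity (face shape, wrinkles, baseline expression) cancel out, which is the robustness the abstract describes.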
SenteCon: Leveraging Lexicons to Learn Human-Interpretable Language Representations
Although deep language representations have become the dominant form of
language featurization in recent years, in many settings it is important to
understand a model's decision-making process. This necessitates not only an
interpretable model but also interpretable features. In particular, language
must be featurized in a way that is interpretable while still characterizing
the original text well. We present SenteCon, a method for introducing human
interpretability in deep language representations. Given a passage of text,
SenteCon encodes the text as a layer of interpretable categories in which each
dimension corresponds to the relevance of a specific category. Our empirical
evaluations indicate that encoding language with SenteCon provides high-level
interpretability at little to no cost to predictive performance on downstream
tasks. Moreover, we find that SenteCon outperforms existing interpretable
language representations with respect to both its downstream performance and
its agreement with human characterizations of the text.
Comment: Accepted to Findings of ACL 202
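The encoding SenteCon describes — a layer in which each dimension reflects the relevance of an interpretable lexicon category — can be sketched as a simple category-membership count. The function and the toy lexicon below are illustrative assumptions; the actual method learns the category representations rather than counting literal word matches:

```python
def lexicon_category_relevance(text, lexicon):
    """Encode text as a dict of category -> relevance score,
    where relevance is the fraction of tokens belonging to the
    category's vocabulary (a toy stand-in for SenteCon's
    interpretable category layer)."""
    words = text.lower().split()
    total = max(len(words), 1)
    return {cat: sum(w in vocab for w in words) / total
            for cat, vocab in lexicon.items()}
```

Each output dimension is directly nameable (e.g., "positive affect"), which is what makes the representation human-interpretable, in contrast to an opaque deep embedding.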
Hand2Face: Automatic Synthesis and Recognition of Hand Over Face Occlusions
A person's face discloses important information about their affective state.
Although there has been extensive research on recognition of facial
expressions, the performance of existing approaches is challenged by facial
occlusions. Facial occlusions are often treated as noise and discarded in
recognition of affective states. However, hand over face occlusions can provide
additional information for recognition of some affective states such as
curiosity, frustration and boredom. One of the reasons that this problem has
not gained attention is the lack of naturalistic occluded faces that contain
hand over face occlusions as well as other types of occlusions. Traditional
approaches for obtaining affective data are time-consuming and expensive, which
limits researchers in affective computing to work on small datasets. This
limitation affects the generalizability of models and prevents researchers from
taking advantage of recent advances in deep learning, which have shown great
success in many fields but require large volumes of data. In this paper, we
first introduce a novel framework for synthesizing naturalistic facial
occlusions from an initial dataset of non-occluded faces and separate images of
hands, reducing the costly process of data collection and annotation. We then
propose a model for facial occlusion type recognition to differentiate between
hand over face occlusions and other types of occlusions such as scarves, hair,
glasses and objects. Finally, we present a model to localize hand over face
occlusions and identify the occluded regions of the face.
Comment: Accepted to International Conference on Affective Computing and
Intelligent Interaction (ACII), 201
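The synthesis step — compositing a segmented hand image onto a non-occluded face to produce a naturalistic occlusion — can be sketched as masked alpha blending. This is a minimal stand-in with hypothetical names; the paper's pipeline additionally handles details such as scale, placement, and appearance matching that are omitted here:

```python
import numpy as np

def composite_hand_over_face(face, hand, mask, top, left):
    """Blend a segmented hand image onto a face image at (top, left)
    using a binary or soft mask: where mask is 1 the hand pixel is
    kept, where 0 the face shows through (toy occlusion synthesis)."""
    out = face.astype(float).copy()
    h, w = hand.shape[:2]
    region = out[top:top + h, left:left + w]
    m = mask[..., None] if mask.ndim == 2 else mask  # broadcast over channels
    out[top:top + h, left:left + w] = m * hand + (1 - m) * region
    return out.astype(face.dtype)
```

Generating occluded samples this way from existing face datasets and separate hand images is what lets the authors sidestep the costly collection and annotation of naturally occluded faces.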