Predictive biometrics: A review and analysis of predicting personal characteristics from biometric data
Interest in the exploitation of soft biometrics information has continued to develop over the last decade or so. In comparison with traditional biometrics, which focuses principally on person identification, the idea of soft biometrics processing is to study the utilisation of more general information regarding a system user, which is not necessarily unique. There are increasing indications that this type of data will have great value in providing complementary information for user authentication. However, the authors have also seen a growing interest in broadening the predictive capabilities of biometric data, encompassing both easily definable characteristics such as subject age and, most recently, 'higher-level' characteristics such as emotional or mental states. This study presents a selective review of the predictive capabilities, in the widest sense, of biometric data processing, providing an analysis of the key issues still to be adequately addressed if this concept of predictive biometrics is to be fully exploited in the future.
Machine Understanding of Human Behavior
A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, from enabling computers to understand human behavior.
Learnable PINs: Cross-Modal Embeddings for Person Identity
We propose and investigate an identity-sensitive joint embedding of face and voice. Such an embedding enables cross-modal retrieval from voice to face and from face to voice. We make the following four contributions: first, we show that the embedding can be learnt from videos of talking faces, without requiring any identity labels, using a form of cross-modal self-supervision; second, we develop a curriculum learning schedule for hard negative mining targeted to this task, which is essential for learning to proceed successfully; third, we demonstrate and evaluate cross-modal retrieval for identities unseen and unheard during training over a number of scenarios and establish a benchmark for this novel task; finally, we show an application of using the joint embedding for automatically retrieving and labelling characters in TV dramas.
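To make the idea concrete, here is a minimal sketch of a cross-modal face/voice embedding trained with an in-batch contrastive loss. The encoder architectures, embedding dimension, margin, and the hardest-in-batch negative selection (a crude stand-in for the paper's curriculum hard negative mining) are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the authors' code): face and voice
# encoders map into a shared embedding space; matched pairs are pulled
# together and the hardest in-batch negatives are pushed apart.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceEncoder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim))

    def forward(self, img):                  # img: (B, 3, H, W)
        return F.normalize(self.net(img), dim=-1)

class VoiceEncoder(nn.Module):
    def __init__(self, n_mels=40, dim=256):
        super().__init__()
        self.rnn = nn.LSTM(n_mels, 128, batch_first=True)
        self.proj = nn.Linear(128, dim)

    def forward(self, spec):                 # spec: (B, T, n_mels)
        _, (h, _) = self.rnn(spec)
        return F.normalize(self.proj(h[-1]), dim=-1)

def contrastive_loss(f, v, margin=0.6):
    """Matched (face_i, voice_i) pairs sit on the diagonal of the
    similarity matrix; everything off-diagonal is a negative."""
    sim = f @ v.t()                          # cosine similarities
    pos = sim.diag()
    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    hardest_neg = sim.masked_fill(eye, -1.0).max(dim=1).values
    return F.relu(margin - pos + hardest_neg).mean()
```

Because no identity labels are used, "matched" here means only that the face and voice come from the same talking-face video, which is exactly the self-supervision signal the abstract describes.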
Linking recorded data with emotive and adaptive computing in an eHealth environment
Telecare, and particularly lifestyle monitoring, currently relies on the ability to detect and respond to changes in individual behaviour using data derived from sensors around the home. This means that a significant aspect of behaviour, that of an individual's emotional state, is not accounted for in reaching a conclusion as to the form of response required. The linked concepts of emotive and adaptive computing offer an opportunity to include information about emotional state, and the paper considers how current developments in this area have the potential to be integrated within telecare and other areas of eHealth. In doing so, it looks at the development and current state of the art of both emotive and adaptive computing, including their conceptual background, and places them into an overall eHealth context for application and development.
End-to-End Audiovisual Fusion with LSTMs
Several end-to-end deep learning approaches have recently been presented which simultaneously extract visual features from the input images and perform visual speech classification. However, research on jointly extracting audio and visual features and performing classification is very limited. In this work, we present an end-to-end audiovisual model based on Bidirectional Long Short-Term Memory (BLSTM) networks. To the best of our knowledge, this is the first audiovisual fusion model which simultaneously learns to extract features directly from the pixels and spectrograms and to perform classification of speech and nonlinguistic vocalisations. The model consists of multiple identical streams, one for each modality, which extract features directly from mouth regions and spectrograms. The temporal dynamics in each stream/modality are modeled by a BLSTM, and the fusion of multiple streams/modalities takes place via another BLSTM. An absolute improvement of 1.9% in the mean F1 of 4 nonlinguistic vocalisations over audio-only classification is reported on the AVIC database. At the same time, the proposed end-to-end audiovisual fusion system improves the state-of-the-art performance on the AVIC database, leading to a 9.7% absolute increase in the mean F1 measure. We also perform audiovisual speech recognition experiments on the OuluVS2 database using different views of the mouth, from frontal to profile. The proposed audiovisual system significantly outperforms the audio-only model for all views when the acoustic noise is high.
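As a rough illustration of the described two-stream design (not the paper's exact model), each modality could feed its own BLSTM, with a second BLSTM fusing the streams. The per-frame front-ends below are simplified to linear layers standing in for the learned extractors over mouth regions and spectrograms, and all layer sizes are assumptions.

```python
# Illustrative sketch of the two-stream BLSTM fusion idea; sizes and
# the linear front-ends are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class Stream(nn.Module):
    """One modality: per-frame features, then a BLSTM over time."""
    def __init__(self, in_dim, feat=64, hidden=128):
        super().__init__()
        self.frontend = nn.Sequential(nn.Linear(in_dim, feat), nn.ReLU())
        self.blstm = nn.LSTM(feat, hidden, batch_first=True,
                             bidirectional=True)

    def forward(self, x):                    # x: (B, T, in_dim)
        out, _ = self.blstm(self.frontend(x))
        return out                           # (B, T, 2 * hidden)

class AVFusion(nn.Module):
    def __init__(self, audio_dim, video_dim, hidden=128, n_classes=4):
        super().__init__()
        self.audio = Stream(audio_dim, hidden=hidden)
        self.video = Stream(video_dim, hidden=hidden)
        self.fusion = nn.LSTM(4 * hidden, hidden, batch_first=True,
                              bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, spec, mouth):          # time-aligned (B, T, *) inputs
        fused, _ = self.fusion(
            torch.cat([self.audio(spec), self.video(mouth)], dim=-1))
        return self.head(fused[:, -1])       # logits from the last step
```

Concatenating the two streams' BLSTM outputs at each time step and passing them through a further BLSTM mirrors the fusion scheme the abstract describes; classifying from the final fused state is one simple choice among several.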
Towards a comprehensive 3D dynamic facial expression database
Human faces play an important role in everyday life, including the expression of person identity, emotion and intentionality, along with a range of biological functions. The human face has also become the subject of considerable research effort, and there has been a shift towards understanding it using stimuli of increasingly realistic formats. In the current work, we outline progress made in the production of a database of facial expressions in arguably the most realistic format: 3D dynamic. A suitable architecture for capturing such 3D dynamic image sequences is described and then used to record seven expressions (fear, disgust, anger, happiness, surprise, sadness and pain) by 10 actors at 3 levels of intensity (mild, normal and extreme). We also present details of a psychological experiment that was used to formally evaluate the accuracy of the expressions in a 2D dynamic format. The result is an initial, validated database for researchers and practitioners. The goal is to scale up the work with more actors and expression types.
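For orientation, the recording scheme above implies a simple grid over actors, expressions and intensities. The sketch below enumerates that grid under a purely hypothetical directory layout; the folder names and structure are assumptions, not the database's actual organisation.

```python
# Hypothetical enumeration of the described 7 x 10 x 3 design
# (expressions x actors x intensities); the directory layout is an
# assumption, not the database's actual structure.
from itertools import product
from pathlib import Path

EXPRESSIONS = ["fear", "disgust", "anger", "happiness",
               "surprise", "sadness", "pain"]
INTENSITIES = ["mild", "normal", "extreme"]
ACTORS = [f"actor{i:02d}" for i in range(1, 11)]

def sequence_dirs(root):
    """Yield the expected folder of each 3D dynamic sequence."""
    for actor, expr, level in product(ACTORS, EXPRESSIONS, INTENSITIES):
        yield Path(root) / actor / expr / level   # 210 sequences in total
```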