Face recognition technologies for evidential evaluation of video traces
Human recognition from video traces is an important task in forensic investigations and evidence evaluation. Compared with other biometric traits, the face is one of the most widely used modalities for human recognition because its collection is non-intrusive and requires little cooperation from subjects. Moreover, face images taken at a long distance can still provide reasonable resolution, whereas most biometric modalities, such as iris and fingerprint, lack this merit. In this chapter, we discuss automatic face recognition technologies for evidential evaluation of video traces. We first introduce the general concepts in both forensic and automatic face recognition, then analyse the difficulties of face recognition from videos. We summarise and categorise approaches for handling the uncontrollable factors that arise in difficult recognition conditions. Finally, we discuss challenges and trends in face recognition research in both forensics and biometrics. Given its merits, demonstrated in many deployed systems, and its great potential in other emerging applications, considerable research and development effort is expected to be devoted to face recognition in the near future.
Facial emotion recognition using min-max similarity classifier
Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent-systems applications. However, automatic recognition of facial expressions using image template matching suffers from the natural variability of facial features and recording conditions. Despite the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature selection and classification technique for emotion recognition remains an open problem. In this paper, we propose an efficient and straightforward facial emotion recognition algorithm that reduces the problem of inter-class pixel mismatch during classification. The proposed method applies pixel normalization to remove intensity offsets, followed by a Min-Max metric in a nearest-neighbour classifier that suppresses feature outliers. The results indicate an improvement in recognition performance from 92.85% to 98.57% for the proposed Min-Max classification method when tested on the JAFFE database. The proposed emotion recognition technique outperforms existing template matching methods.
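The pipeline the abstract describes (pixel normalization, then nearest-neighbour matching under a Min-Max metric) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the normalization steps and the use of the ratio-of-minima-to-maxima similarity are plausible readings of the abstract, and all function names are our own.

```python
import numpy as np

def normalize(img):
    # Pixel normalization: zero-center to remove the intensity offset,
    # then shift to a non-negative range so the Min-Max metric applies.
    x = img.astype(float).ravel()
    x = x - x.mean()
    return x - x.min()

def min_max_similarity(a, b):
    # Ratio of summed element-wise minima to summed maxima (in [0, 1]);
    # the bounded ratio damps the influence of outlier pixels.
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def classify(test_img, train_imgs, train_labels):
    # 1-nearest-neighbour: return the label of the gallery template
    # with the highest Min-Max similarity to the probe.
    t = normalize(test_img)
    sims = [min_max_similarity(t, normalize(g)) for g in train_imgs]
    return train_labels[int(np.argmax(sims))]
```

Because normalization removes constant intensity offsets, a probe that differs from a gallery template only by brightness maps to the identical feature vector and scores a similarity of 1.0.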
Challenges of Multi-Factor Authentication for Securing Advanced IoT (A-IoT) Applications
The unprecedented proliferation of smart devices, together with novel communication, computing, and control technologies, has paved the way for the Advanced Internet of Things (A-IoT). This development involves new categories of capable devices, such as high-end wearables, smart vehicles, and consumer drones, aimed at enabling efficient and collaborative utilization within the Smart City paradigm. While massive deployments of these objects may enrich people's lives, unauthorized access to such equipment is potentially dangerous. Hence, highly secure human authentication mechanisms have to be designed. At the same time, people desire comfortable daily interaction with the devices they own, demanding authentication procedures that are seamless and user-friendly, mindful of contemporary urban dynamics. In response to these unique challenges, this work advocates the adoption of multi-factor authentication for A-IoT, such that multiple heterogeneous methods, both well-established and emerging, are combined intelligently to grant or deny access reliably. We thus discuss the pros and cons of various solutions as well as introduce tools to combine the authentication factors, with an emphasis on challenging Smart City environments. We finally outline open questions to shape future research efforts in this emerging field.

Comment: 7 pages, 4 figures, 2 tables. The work has been accepted for publication in IEEE Network, 2019. Copyright may be transferred without notice, after which this version may no longer be accessible.
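One common way to "combine the authentication factors intelligently", as the abstract puts it, is weighted score-level fusion with an acceptance threshold. The sketch below is purely illustrative and not from the article: the factor names, weights, threshold, and the convention that each factor reports a match score in [0, 1] are all our assumptions.

```python
# Hypothetical weighted score-level fusion of heterogeneous
# authentication factors; names and weights are illustrative only.
FACTOR_WEIGHTS = {
    "face": 0.4,
    "gait": 0.2,
    "pin": 0.3,
    "wearable_proximity": 0.1,
}

def fuse_scores(scores, threshold=0.6):
    """Combine per-factor match scores (each in [0, 1]) into an
    accept/deny decision; factors absent from `scores` contribute
    nothing, so no single factor can pass the threshold alone."""
    total = sum(FACTOR_WEIGHTS[f] * s
                for f, s in scores.items() if f in FACTOR_WEIGHTS)
    return total >= threshold

# A perfect face match alone (0.4) is denied, while a strong face
# match combined with a correct PIN (0.38 + 0.30 = 0.68) is accepted.
```

Under this scheme the weights encode the trust placed in each modality, and the threshold sets the security/usability trade-off that the article frames as central to Smart City deployments.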
Multimodal Spontaneous Emotion Corpus for Human Behavior Analysis
Emotion is expressed in multiple modalities, yet most research has considered at most one or two. This stems in part from the lack of large, diverse, well-annotated, multimodal databases with which to develop and test algorithms. We present a well-annotated, multimodal, multidimensional spontaneous emotion corpus of 140 participants. Emotion inductions were highly varied. Data were acquired from a variety of facial sensors, including high-resolution 3D dynamic imaging, high-resolution 2D video, and thermal (infrared) sensing, and from contact physiological sensors measuring electrical conductivity of the skin, respiration, blood pressure, and heart rate. Facial expression was annotated for both the occurrence and the intensity of facial action units from 2D video by experts in the Facial Action Coding System (FACS). The corpus further includes derived features from the 3D, 2D, and IR (infrared) sensors, along with baseline results for facial expression and action unit detection. The entire corpus will be made available to the research community.