Automatic affective dimension recognition from naturalistic facial expressions based on wavelet filtering and PLS regression
Automatic recognition of affective dimensions from facial expressions, continuously and in naturalistic contexts, is a very challenging research topic but very important in human-computer interaction. In this paper, an automatic recognition system is proposed to continuously predict affective dimensions such as Arousal, Valence and Dominance in naturalistic facial expression videos. First, visual and vocal features are extracted from the image frames and audio segments of the videos. Second, a wavelet-transform-based digital filtering method is applied to remove irrelevant noise from the feature space. Third, Partial Least Squares (PLS) regression is used to predict the affective dimensions from both the video and audio modalities. Finally, the two modalities are combined in a decision fusion step to boost overall performance. The proposed method is tested on the dataset of the fourth international Audio/Visual Emotion Recognition Challenge (AVEC 2014) and compares well with other state-of-the-art methods in the affect recognition sub-challenge.
Signature Verification Approach using Fusion of Hybrid Texture Features
In this paper, a writer-dependent signature verification method is proposed.
Two different types of texture features, namely Wavelet and Local Quantized
Patterns (LQP) features, are employed to extract two kinds of transform and
statistical based information from signature images. For each writer two
separate one-class support vector machines (SVMs) corresponding to each set of
LQP and Wavelet features are trained to obtain two different authenticity
scores for a given signature. Finally, a score level classifier fusion method
is used to integrate the scores obtained from the two one-class SVMs to achieve
the verification score. In the proposed method only genuine signatures are used
to train the one-class SVMs. The proposed signature verification method has
been tested using four different publicly available datasets and the results
demonstrate the generality of the proposed method. The proposed system
outperforms other existing systems in the literature. Comment: Neural Computing and Applications
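The two-model, score-level fusion scheme can be sketched as follows. The feature arrays, dimensions, and equal-weight sum rule below are illustrative stand-ins, not the paper's actual configuration:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Stand-ins for Wavelet and LQP feature vectors extracted from one
# writer's genuine signatures (sizes are illustrative).
wavelet_train = rng.normal(size=(40, 32))
lqp_train = rng.normal(size=(40, 64))

# One one-class SVM per feature type, trained on genuine samples only.
svm_wavelet = OneClassSVM(gamma="scale", nu=0.1).fit(wavelet_train)
svm_lqp = OneClassSVM(gamma="scale", nu=0.1).fit(lqp_train)

def verify(wavelet_feat, lqp_feat, threshold=0.0):
    # decision_function > 0 means "looks genuine" to each model.
    s1 = svm_wavelet.decision_function(wavelet_feat.reshape(1, -1))[0]
    s2 = svm_lqp.decision_function(lqp_feat.reshape(1, -1))[0]
    score = 0.5 * (s1 + s2)      # simple sum-rule score fusion
    return score >= threshold

print(verify(wavelet_train[0], lqp_train[0]))
```

The key property matching the abstract is that only genuine signatures appear at training time; forgeries are only ever seen at verification time.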
Ensemble of Different Approaches for a Reliable Person Re-identification System
An ensemble of approaches for reliable person re-identification is proposed in this paper. The ensemble is built by combining widely used person re-identification systems based on different color spaces with several variants of state-of-the-art approaches that are proposed in this paper. Different descriptors are tested, and both texture and color features are extracted from the images; the descriptors are then compared using different distance measures (e.g., the Euclidean distance, the angle, and the Jeffrey distance). To improve performance, a method based on skeleton detection, extracted from the depth map, is also applied when a depth map is available. The proposed ensemble is validated on three widely used datasets (CAVIAR4REID, IAS, and VIPeR), keeping the parameter set of each approach constant across all tests to avoid overfitting and to demonstrate that the proposed system can be considered a general-purpose person re-identification system. Our experimental results show that the proposed system offers significant improvements over baseline approaches. The source code used for the approaches tested in this paper will be available at https://www.dei.unipd.it/node/2357 and http://robotics.dei.unipd.it/reid/
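The three distance measures named above can be written down directly. Note that "Jeffrey distance" is taken here as the symmetrised Kullback-Leibler divergence between normalised histograms, one common definition; the paper may use a variant:

```python
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def angle(a, b):
    # Angular distance between two descriptors, in radians.
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def jeffrey(p, q, eps=1e-12):
    # Symmetrised Kullback-Leibler divergence between histograms.
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum((p - q) * np.log(p / q)))

h1 = np.array([4.0, 3.0, 2.0, 1.0])   # toy color/texture histograms
h2 = np.array([1.0, 2.0, 3.0, 4.0])
print(euclidean(h1, h2), angle(h1, h2), jeffrey(h1, h2))
```

Each measure ranks gallery images against a probe; the ensemble combines the rankings produced by different descriptor/distance pairs.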
Efficient smile detection by Extreme Learning Machine
Smile detection is a specialized task in facial expression analysis with applications such as photo selection, user experience analysis, and patient monitoring. As one of the most important and informative expressions, a smile conveys underlying emotional states such as joy, happiness, and satisfaction. In this paper, an efficient smile detection approach based on the Extreme Learning Machine (ELM) is proposed. Faces are first detected, and a holistic flow-based face registration is applied that requires no manual labeling or key point detection. ELM is then used to train the classifier. The proposed smile detector is tested with different feature descriptors on publicly available databases, including real-world face images. Comparisons against benchmark classifiers, including the Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA), suggest that the proposed ELM-based smile detector generally performs better and is very efficient. Compared to a state-of-the-art smile detector, the proposed method achieves competitive results without preprocessing or manual registration.
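The core ELM idea, a random untrained hidden layer with output weights solved analytically by least squares, can be sketched in a few lines. The data, labels, and layer size below are toy stand-ins, not the paper's features:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins for face feature vectors with smile (1) / non-smile (0) labels.
X = rng.normal(size=(300, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Extreme Learning Machine: random hidden layer, analytic output weights.
n_hidden = 100
W = rng.normal(size=(20, n_hidden))   # random input weights, never trained
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                # hidden-layer activations

# Output weights via least squares (Moore-Penrose pseudo-inverse of H).
beta = np.linalg.pinv(H) @ y

pred = (H @ beta > 0.5).astype(float)
print("train accuracy:", (pred == y).mean())
```

Because only `beta` is solved (in closed form, no iterative training), ELM is typically much faster to train than an SVM on the same features, which is the efficiency argument the abstract makes.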
Textural features for fingerprint liveness detection
The main topic of my research during these three years concerned biometrics, and in particular
Fingerprint Liveness Detection (FLD), namely the recognition of fake fingerprints.
Fingerprint spoofing is a topical issue, as evidenced by the release of the latest iPhone and
Samsung Galaxy models with an embedded fingerprint reader as an alternative to passwords.
Several videos posted on YouTube show how to violate these devices by using fake
fingerprints, which demonstrates how the problem of vulnerability to spoofing constitutes a
threat to existing fingerprint recognition systems.
Despite the fact that many algorithms have been proposed so far, none of them has shown
the ability to clearly discriminate between real and fake fingertips. In my work, after a study
of the state of the art, I paid special attention to the so-called textural algorithms. I first
used the LBP (Local Binary Pattern) algorithm and then worked on introducing the
LPQ (Local Phase Quantization) and BSIF (Binarized Statistical Image Features) algorithms
into the FLD field.
In the last two years I worked especially on what we called the "user-specific" problem.
In the extracted features we noticed the presence of characteristics related not only to
liveness but also to the different users. We were able to improve the results by
identifying and removing, at least partially, this user-specific component.
Since 2009 the Department of Electrical and Electronic Engineering of the University of
Cagliari and the Department of Electrical and Computer Engineering of Clarkson University
have organized the Fingerprint Liveness Detection Competition (LivDet). I was
involved in the organization of both the second and third editions of the Fingerprint Liveness
Detection Competition (LivDet 2011 and LivDet 2013), and I am currently involved in the acquisition
of live and fake fingerprints that will be included in three of the LivDet 2015 datasets
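The basic 8-neighbour LBP operator that this line of work starts from can be sketched as follows; this is a minimal version without the uniform-pattern or multi-radius extensions used in practice:

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour Local Binary Pattern codes for a grayscale image.

    Each interior pixel is compared with its 8 neighbours; every neighbour
    that is >= the centre contributes one bit to an 8-bit code.
    """
    c = img[1:-1, 1:-1]
    # 8 neighbours, ordered clockwise from the top-left.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    return code

rng = np.random.default_rng(3)
patch = rng.integers(0, 256, size=(64, 64))   # stand-in fingerprint patch
codes = lbp_3x3(patch)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))
print(codes.shape, hist.sum())  # (62, 62) 3844
```

The 256-bin histogram of codes is the textural feature vector that a live/fake classifier is then trained on; LPQ and BSIF follow the same histogram-of-codes scheme with different local encodings.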
Fingerprint presentation attack detection utilizing spatio-temporal features
This article belongs to the Special Issue Biometric Sensing. This paper presents a novel mechanism for fingerprint dynamic presentation attack detection. We utilize five spatio-temporal feature extractors to efficiently eliminate and mitigate different presentation attack species. The feature extractors are selected such that the fingerprint ridge/valley pattern is consolidated with the temporal variations within the pattern in fingerprint videos. An SVM classification scheme, with a second-degree polynomial kernel, is used in our presentation attack detection subsystem to classify bona fide and attack presentations. The experiment protocol and evaluation are conducted following the ISO/IEC 30107-3:2017 standard. Our proposed approach demonstrates an efficient capability of detecting presentation attacks with a significantly low BPCER: 1.11% for an optical sensor and 3.89% for a thermal sensor, both at 5% APCER. This work was supported by the European Union's Horizon 2020 Research and Innovation Programme under Grant 675087 (AMBER).
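The ISO/IEC 30107-3 error rates quoted above can be computed from classifier scores as follows. The score distributions below are synthetic, and the higher-score-means-bona-fide convention is an assumption of this sketch:

```python
import numpy as np

def apcer_bpcer(scores_bona_fide, scores_attack, threshold):
    """Error rates from ISO/IEC 30107-3 presentation attack detection.

    APCER: fraction of attack presentations wrongly accepted as bona fide.
    BPCER: fraction of bona fide presentations wrongly rejected as attacks.
    Convention assumed here: higher score = more likely bona fide.
    """
    apcer = float(np.mean(np.asarray(scores_attack) >= threshold))
    bpcer = float(np.mean(np.asarray(scores_bona_fide) < threshold))
    return apcer, bpcer

rng = np.random.default_rng(4)
bona = rng.normal(1.0, 0.5, size=1000)      # illustrative classifier scores
attack = rng.normal(-1.0, 0.5, size=1000)

apcer, bpcer = apcer_bpcer(bona, attack, threshold=0.0)
print(round(apcer, 3), round(bpcer, 3))
```

Reporting "BPCER at 5% APCER", as the paper does, means sweeping the threshold until APCER hits 5% and quoting the BPCER at that operating point.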
Distinguishing Posed and Spontaneous Smiles by Facial Dynamics
A smile is one of the key elements in identifying emotions and the present state of
mind of an individual. In this work, we propose a cluster of approaches to
classify posed and spontaneous smiles using deep convolutional neural network
(CNN) face features, local phase quantization (LPQ), dense optical flow and
histogram of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for
micro-expression smile amplification, along with three normalization procedures
for distinguishing posed and spontaneous smiles. Although the deep CNN face
model is trained with a large number of face images, HOG features outperform
this model on the overall smile classification task. Using EVM to amplify
micro-expressions did not have a significant impact on classification accuracy,
while normalizing facial features improved it. Unlike
many manual or semi-automatic methodologies, our approach aims to automatically
classify all smiles into either `spontaneous' or `posed' categories, using
support vector machines (SVMs). Experimental results on the large UvA-NEMO smile
database show promising results compared to other relevant methods. Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial
Behavior Analysis
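A minimal, unnormalised version of the HOG descriptor referenced above can be sketched in NumPy; real HOG implementations add block normalisation and gradient interpolation, and the cell size here is illustrative:

```python
import numpy as np

def hog_features(img, cell=8, n_bins=9):
    """Minimal histogram-of-oriented-gradients sketch (no block
    normalisation), for a grayscale image whose sides divide by `cell`."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

rng = np.random.default_rng(5)
face_crop = rng.integers(0, 256, size=(64, 64))     # stand-in face crop
f = hog_features(face_crop)
print(f.shape)  # (576,) = (64/8)**2 cells * 9 bins
```

Descriptors like this, computed per frame or per facial region, are what the SVM in the abstract consumes to separate posed from spontaneous smiles.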
Micro Expression Classification Accuracy Assessment
The ability to identify and draw appropriate implications from non-verbal cues is a challenging task in facial expression recognition and has been investigated by various disciplines, particularly social science, medical science, psychology and technological sciences, for more than three decades. Some non-verbal cues last a few seconds and are obvious (macro), whereas others are very short and difficult to interpret (micro). This research addresses micro expression recognition, with the main focus on understanding and exploring the combined effect of various existing feature extraction techniques with one of the most renowned machine learning algorithms, the Support Vector Machine (SVM). Experiments are conducted on spatio-temporal descriptors extracted from the CASME II dataset using LBP-TOP, LBP-SIP, LPQ-TOP, HOG-TOP, HIGO-TOP and STLBP-IP. We consider two different cases for the CASME II dataset: the first measures performance over five classes (happiness, disgust, surprise, repression and others), and the second considers three classes (positive, negative and surprise). LPQ-TOP with SVM produced the highest accuracy among the approaches in this work.
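The "-TOP" descriptors listed above share one construction: a 2-D pattern operator applied on the Three Orthogonal Planes of a video cube, with the histograms concatenated. A simplified LBP-TOP (basic LBP, no uniform patterns or radius/neighbour parameters) can be sketched as:

```python
import numpy as np

def lbp_codes(plane):
    # 8-neighbour LBP codes for one 2-D plane (interior pixels only).
    c = plane[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = plane[1 + dy:plane.shape[0] - 1 + dy,
                  1 + dx:plane.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    return code

def lbp_top(video):
    """LBP-TOP sketch: LBP histograms on the XY, XT and YT planes of a
    (T, H, W) video cube, concatenated into one descriptor."""
    T, H, W = video.shape
    hists = []
    for planes in (
        [video[t] for t in range(T)],          # XY planes (appearance)
        [video[:, y, :] for y in range(H)],    # XT planes (horizontal motion)
        [video[:, :, x] for x in range(W)],    # YT planes (vertical motion)
    ):
        codes = np.concatenate([lbp_codes(p).ravel() for p in planes])
        hist, _ = np.histogram(codes, bins=256, range=(0, 256))
        hists.append(hist / hist.sum())
    return np.concatenate(hists)

rng = np.random.default_rng(6)
clip = rng.integers(0, 256, size=(16, 32, 32))   # stand-in face video cube
desc = lbp_top(clip)
print(desc.shape)  # (768,)
```

The resulting 768-dim vector (three 256-bin histograms) is the kind of spatio-temporal descriptor fed to the SVM; LPQ-TOP and HOG-TOP swap LBP for LPQ or HOG on the same three planes.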
High Order Volumetric Directional Pattern for Video-Based Face Recognition
Describing dynamic textures has attracted growing attention in computer vision and pattern recognition. In this paper, a novel approach for recognizing dynamic textures, the high order volumetric directional pattern (HOVDP), is proposed. It is an extension of the volumetric directional pattern (VDP), which extracts and fuses the temporal information (dynamic features) of three consecutive frames. HOVDP combines movement and appearance features, considering the nth-order volumetric directional variation patterns of all neighboring pixels across three consecutive frames. In experiments on two challenging video face databases, YouTube Celebrities and Honda/UCSD, HOVDP clearly outperformed a set of state-of-the-art approaches.
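The exact HOVDP formulation is not reproduced here. As an illustration of the directional-pattern family it extends, a single-frame LDP-style code built from the eight Kirsch compass responses (the usual building block of such descriptors) can be sketched as:

```python
import numpy as np

# The eight Kirsch compass masks, one per direction.
KIRSCH = [np.array(m) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],   # East
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],   # North-East
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],   # North
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],   # North-West
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],   # West
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],   # South-West
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],   # South
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],   # South-East
)]

def directional_pattern(img, k=3):
    """Per-pixel 8-bit code with bits set for the k strongest of the
    eight Kirsch directional responses (an LDP-style pattern)."""
    img = img.astype(float)
    H, W = img.shape
    resp = np.zeros((8, H - 2, W - 2))
    for d, mask in enumerate(KIRSCH):
        for dy in range(3):
            for dx in range(3):
                resp[d] += mask[dy, dx] * img[dy:H - 2 + dy, dx:W - 2 + dx]
        resp[d] = np.abs(resp[d])
    # Rank responses per pixel; set one bit per top-k direction.
    order = np.argsort(resp, axis=0)
    code = np.zeros((H - 2, W - 2), dtype=np.uint8)
    for rank in range(8 - k, 8):
        code |= np.uint8(1) << order[rank].astype(np.uint8)
    return code

rng = np.random.default_rng(7)
frame = rng.integers(0, 256, size=(32, 32))   # stand-in face frame
codes = directional_pattern(frame)
print(codes.shape)  # (30, 30)
```

Volumetric variants such as VDP and HOVDP apply the same rank-the-directions idea to neighbourhoods spanning three consecutive frames, so the code captures motion as well as appearance.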