Quadratic Projection Based Feature Extraction with Its Application to Biometric Recognition
This paper presents a novel quadratic projection based feature extraction framework, where a set of quadratic matrices is learned to distinguish each class from all other classes. We formulate quadratic matrix learning (QML) as a standard semidefinite programming (SDP) problem. However, conventional interior-point SDP solvers do not scale well to the QML problem for high-dimensional data. To address the scalability of QML, we develop an efficient algorithm, termed DualQML, based on Lagrange duality theory, to extract nonlinear features. To evaluate the feasibility and effectiveness of the proposed framework, we conduct extensive experiments on biometric recognition. Experimental results on three representative biometric recognition tasks, including face, palmprint, and ear recognition, demonstrate the superiority of the DualQML-based feature extraction algorithm over current state-of-the-art algorithms.
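The projection step underlying this framework can be sketched as follows. This is a minimal illustration, assuming the per-class quadratic matrices have already been learned; random positive semidefinite placeholders stand in for what DualQML would actually produce.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(dim, rng):
    """Build a random positive semidefinite matrix as a stand-in
    for a learned quadratic projection matrix."""
    A = rng.standard_normal((dim, dim))
    return A @ A.T

# One quadratic matrix per class (placeholders; DualQML would learn these).
n_classes, dim = 3, 8
Q = [random_psd(dim, rng) for _ in range(n_classes)]

def quadratic_features(x, Q):
    """Map a sample x to one nonlinear feature per class: f_c(x) = x^T Q_c x."""
    return np.array([x @ Qc @ x for Qc in Q])

x = rng.standard_normal(dim)
feats = quadratic_features(x, Q)
```

Each feature is a quadratic form in the input, which is what makes the extracted features nonlinear despite the learning problem being convex (an SDP) in the matrices themselves.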
Bimodal Biometric Verification Mechanism using Fingerprint and Face Images (BBVMFF)
Recent times have seen an increased demand for biometric authentication coupled with the automation of systems. Biometric recognition systems currently in use generally consider only a single biometric characteristic for verification or authentication. Researchers have demonstrated the inefficiencies of unimodal biometric systems and advocated the adoption of multimodal biometric systems for verification. This paper introduces a Bimodal Biometric Verification Mechanism using Fingerprint and Face images (BBVMFF). The BBVMFF considers the frontal face and fingerprint biometric characteristics of users for verification, using both Gabor phase and magnitude features as biometric trait definitions together with a simple, lightweight feature-level fusion algorithm. The proposed fusion algorithm enables the BBVMFF to operate in both unimodal and bimodal modes, as demonstrated by the experimental results presented.
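The kind of feature-level fusion described here can be sketched as below. This is a hedged illustration: a single toy complex Gabor kernel and FFT-based filtering stand in for the paper's actual filter bank and fusion algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def gabor_features(img, kernel):
    """Filter an image with a complex Gabor kernel (via FFT convolution)
    and return flattened magnitude and phase responses."""
    resp = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape))
    return np.abs(resp).ravel(), np.angle(resp).ravel()

def fuse_bimodal(face, fingerprint, kernel):
    """Feature-level fusion: concatenate magnitude and phase features
    of both modalities into a single template vector."""
    feats = []
    for img in (face, fingerprint):
        mag, phase = gabor_features(img, kernel)
        feats.extend([mag, phase])
    return np.concatenate(feats)

# Toy inputs and a single complex Gabor kernel (one orientation/frequency).
face = rng.standard_normal((16, 16))
fingerprint = rng.standard_normal((16, 16))
y, x = np.mgrid[-3:4, -3:4]
kernel = np.exp(-(x**2 + y**2) / 8) * np.exp(1j * 0.5 * x)

template = fuse_bimodal(face, fingerprint, kernel)
```

Because fusion here is plain concatenation, dropping one modality's features recovers a unimodal template, which is consistent with the abstract's claim that the mechanism works in both unimodal and bimodal modes.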
Learning Rich Features for Gait Recognition by Integrating Skeletons and Silhouettes
Gait recognition captures gait patterns from the walking sequence of an
individual for identification. Most existing gait recognition methods learn
features from silhouettes or skeletons for the robustness to clothing,
carrying, and other exterior factors. The combination of the two data
modalities, however, is not fully exploited. Previous multimodal gait
recognition methods mainly employ the skeleton to assist the local feature
extraction where the intrinsic discrimination of the skeleton data is ignored.
This paper proposes a simple yet effective Bimodal Fusion (BiFusion) network
which mines discriminative gait patterns in skeletons and integrates with
silhouette representations to learn rich features for identification.
Particularly, the inherent hierarchical semantics of body joints in a skeleton
is leveraged to design a novel Multi-Scale Gait Graph (MSGG) network for the
feature extraction of skeletons. Extensive experiments on CASIA-B and OUMVLP
demonstrate both the superiority of the proposed MSGG network in modeling
skeletons and the effectiveness of the bimodal fusion for gait recognition.
Under the most challenging condition of walking in different clothes on
CASIA-B, our method achieves a rank-1 accuracy of 92.1%.
Comment: The paper is under consideration at Multimedia Tools and Applications.
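The bimodal idea can be illustrated with a minimal fusion-and-retrieval sketch. The embeddings below are random placeholders for what the MSGG (skeleton) and silhouette branches would actually produce; only the fusion and rank-1 matching logic is shown.

```python
import numpy as np

rng = np.random.default_rng(2)

def fuse_gait_features(skeleton_emb, silhouette_emb):
    """Feature-level fusion: L2-normalize each modality embedding before
    concatenation so neither branch dominates the distance metric."""
    sk = skeleton_emb / np.linalg.norm(skeleton_emb)
    si = silhouette_emb / np.linalg.norm(silhouette_emb)
    return np.concatenate([sk, si])

# Toy gallery of fused embeddings for 5 identities (64-d skeleton branch,
# 128-d silhouette branch; the dimensions are illustrative assumptions).
gallery = np.stack([fuse_gait_features(rng.standard_normal(64),
                                       rng.standard_normal(128))
                    for _ in range(5)])

def rank1_identify(probe_fused, gallery):
    """Rank-1 identification: nearest gallery embedding by Euclidean distance."""
    return int(np.argmin(np.linalg.norm(gallery - probe_fused, axis=1)))

# A probe close to identity 3 should retrieve identity 3.
probe = gallery[3] + 0.01 * rng.standard_normal(gallery.shape[1])
match = rank1_identify(probe, gallery)
```

Normalizing each branch before concatenation is one simple way to balance two modalities with different scales; the paper's actual fusion module is more elaborate.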
On the analysis of EEG power, frequency and asymmetry in Parkinson's disease during emotion processing
Objective: While Parkinson’s disease (PD) has traditionally been described as a movement disorder, there is growing evidence of disruption in emotion information processing associated with the disease. The aim of this study was to investigate whether there are specific electroencephalographic (EEG) characteristics that discriminate PD patients and normal controls during emotion information processing.
Method: EEG recordings from 14 scalp sites were collected from 20 PD patients and 30 age-matched normal controls. Multimodal (audio-visual) stimuli were presented to evoke specific targeted emotional states such as happiness, sadness, fear, anger, surprise and disgust. Absolute and relative power, frequency and asymmetry measures derived from spectrally analyzed EEGs were subjected to repeated ANOVA measures for group comparisons as well as to discriminate function analysis to examine their utility as classification indices. In addition, subjective ratings were obtained for the used emotional stimuli.
Results: Behaviorally, PD patients showed no impairments in emotion recognition as measured by subjective ratings. Compared with normal controls, PD patients evidenced smaller overall relative delta, theta, alpha and beta power, and at bilateral anterior regions smaller absolute theta, alpha, and beta power and higher mean total spectrum frequency across different emotional states. Inter-hemispheric theta, alpha, and beta power asymmetry index differences were noted, with controls exhibiting greater right- than left-hemisphere activation, whereas patients exhibited reduced intra-hemispheric alpha power asymmetry bilaterally across all regions. Discriminant analysis correctly classified 95.0% of the patients and controls during emotional stimuli.
Conclusion: These distributed spectral powers in different frequency bands might provide meaningful information about emotional processing in PD patients
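The spectral measures used in this study (absolute and relative band power, inter-hemispheric asymmetry) can be sketched with a plain FFT periodogram. The sampling rate, band edges, and signals below are illustrative assumptions, not values taken from the study.

```python
import numpy as np

fs = 128  # sampling rate in Hz (assumed)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs):
    """Absolute power per band from a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

def relative_powers(signal, fs):
    """Band power as a fraction of total power across the defined bands."""
    p = band_powers(signal, fs)
    total = sum(p.values())
    return {k: v / total for k, v in p.items()}

def asymmetry_index(left, right, fs, band="alpha"):
    """Inter-hemispheric asymmetry: ln(right power) - ln(left power)."""
    return np.log(band_powers(right, fs)[band]) - np.log(band_powers(left, fs)[band])

# Toy left/right channels: a 10 Hz (alpha) rhythm, stronger on the right.
rng = np.random.default_rng(3)
t = np.arange(256) / fs
left = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
right = 1.5 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

rel = relative_powers(left, fs)
ai = asymmetry_index(left, right, fs)
```

A positive asymmetry index under this convention indicates greater right- than left-hemisphere power in the chosen band, matching the direction of comparison reported for the controls.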
LDA-PAFF: Linear Discriminant Analysis Based Personal Authentication using Finger Vein and Face Images
Biometric-based identification is widely used for personal identification in recognition systems. Unimodal recognition systems currently suffer from noisy data, spoofing attacks, poor biometric sensor data quality, and more. Robust personal recognition can be achieved by considering multimodal biometric traits. In this paper, Linear Discriminant Analysis (LDA) based Personal Authentication using Finger Vein and Face Images (LDA-PAFF) is introduced, considering the finger vein and face biometric traits. The magnitude and phase features obtained from Gabor kernels are used to define the biometric traits of individuals. The biometric feature space is reduced using the Fisher score and Linear Discriminant Analysis. Recognition is achieved using a weighted k-nearest neighbor classifier. The experimental study presented in the paper uses the SDUMLA-HMT (Group of Machine Learning and Applications, Shandong University - Homologous Multimodal Traits) multimodal biometric dataset. The performance of LDA-PAFF is compared with existing recognition systems, and the performance improvement is demonstrated by the results obtained.
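Two of the components named here, Fisher-score feature ranking and a distance-weighted k-nearest neighbor classifier, can be sketched on synthetic data as below. The Gabor feature extraction and LDA projection steps of the actual pipeline are omitted.

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature Fisher score: between-class variance over
    within-class variance; higher means more discriminative."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def weighted_knn(X_train, y_train, x, k=3):
    """Distance-weighted k-NN: each neighbor votes with weight 1/distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    votes = {}
    for i in idx:
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + 1.0 / (d[i] + 1e-12)
    return max(votes, key=votes.get)

# Two well-separated synthetic classes in a 4-d feature space.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 0.3, (20, 4)), rng.normal(2.0, 0.3, (20, 4))])
y = np.array([0] * 20 + [1] * 20)

scores = fisher_score(X, y)          # rank features by discriminativeness
pred = weighted_knn(X, y, np.full(4, 2.0), k=3)
```

In the actual system the Fisher score would be used to select a feature subset before the LDA projection, and the weighted k-NN would operate in the reduced space.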
Palmprint identification using an ensemble of sparse representations
Among various palmprint identification methods proposed in the literature, sparse representation for classification (SRC) is very attractive, offering high accuracy. Although SRC has good discriminative ability, its performance strongly depends on the quality of the training data. In particular, SRC suffers from two major problems: lack of training samples per class and large intra-class variations. In fact, palmprint images not only contain identity information but also other information, such as illumination and geometrical distortions due to unconstrained conditions and movement of the hand. In this case, the sparse representation assumption may not hold well in the original space, since samples from different classes may be considered as belonging to the same class. This paper aims to enhance palmprint identification performance through SRC by proposing a simple yet efficient method based on an ensemble of sparse representations through an ensemble of discriminative dictionaries satisfying the SRC assumption. Ensemble learning has the advantage of reducing the sensitivity due to the limited size of the training data, and is performed based on random subspace sampling over the 2D-PCA space while keeping the image's inherent structure and information. In order to obtain discriminative dictionaries satisfying the SRC assumption, a new space is learned by minimizing the intra-class and maximizing the inter-class variations using 2D-LDA. Extensive experiments are conducted on two publicly available palmprint data sets: multispectral and PolyU. The obtained results are very promising compared with both state-of-the-art holistic and coding methods. Besides these findings, we provide an empirical analysis of the parameters involved in the proposed technique to guide the neophyte. © 2018 IEEE.
This work was supported by the National Priority Research Program from the Qatar National Research Fund under Grant 6-249-1-053. The contents of this publication are solely the responsibility of the authors and do not necessarily represent the official views of the Qatar National Research Fund or Qatar University.
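The class-wise residual rule at the heart of SRC can be sketched as follows. This is a hedged simplification: a ridge least-squares solve stands in for the l1 sparse-coding step, and the paper's ensemble, 2D-PCA subspace sampling, and 2D-LDA machinery are omitted.

```python
import numpy as np

def src_classify(D, labels, x, lam=0.01):
    """Classify x by class-wise reconstruction residual: code x over the
    whole dictionary D, then keep only each class's atoms and pick the
    class whose atoms reconstruct x best. A ridge solve stands in for
    the l1 sparse coding of true SRC."""
    a = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(x - D[:, mask] @ a[mask])
    return min(residuals, key=residuals.get)

# Toy dictionary: 20-d atoms, 5 per class, unit-normalized columns.
rng = np.random.default_rng(6)
D = rng.standard_normal((20, 10))
D /= np.linalg.norm(D, axis=0)
labels = np.array([0] * 5 + [1] * 5)

# A probe lying exactly on a class-0 atom should be assigned class 0.
x = D[:, 0]
pred = src_classify(D, labels, x)
```

The paper's ensemble would repeat this classification over many dictionaries built from random subspaces and aggregate the decisions, reducing sensitivity to the limited training data.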
Multimodal biometrics score level fusion using non-confidence information
Multimodal biometrics refers to automatic authentication methods that depend on multiple modalities of measurable physical characteristics. It alleviates most of the restrictions of single biometrics. To combine multimodal biometric scores, three different categories of fusion approaches are available: rule-based, classification-based and density-based. When choosing an approach, one has to consider not only the fusion performance, but also system requirements and other circumstances. In the context of verification, classification errors arise from samples in the overlapping region (or non-confidence region) between genuine users and impostors. In score space, further separation of the samples outside the non-confidence region does not yield further verification improvements. Therefore, information contained in the non-confidence region might be useful for improving the fusion process. To date, no attempts have been reported in the literature to enhance the fusion process using this additional information. In this work, the use of this information is explored in the rule-based and density-based approaches mentioned above.
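The two notions used here, a rule-based fusion baseline and the non-confidence (overlap) region of the score distributions, can be sketched on synthetic scores. The score distributions and ranges below are illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy genuine/impostor similarity scores in [0, 1] for one modality
# (a real system would estimate these from held-out development data).
genuine = np.clip(rng.normal(0.75, 0.1, 500), 0, 1)
impostor = np.clip(rng.normal(0.35, 0.1, 500), 0, 1)

def non_confidence_region(genuine, impostor):
    """Overlap between the genuine and impostor score distributions;
    samples falling here account for most verification errors."""
    lo, hi = genuine.min(), impostor.max()
    return (lo, hi) if lo < hi else None

def sum_rule(scores, weights=None):
    """Baseline rule-based fusion: weighted mean of per-modality scores."""
    scores = np.asarray(scores, float)
    w = np.ones_like(scores) if weights is None else np.asarray(weights, float)
    return float(w @ scores / w.sum())

region = non_confidence_region(genuine, impostor)
fused = sum_rule([0.2, 0.8])
```

The paper's contribution is to exploit information from samples inside `region` when fusing; the sketch only shows how that region is delimited and what the plain sum rule it improves upon looks like.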
Multi-modal association learning using spike-timing dependent plasticity (STDP)
We propose an associative learning model that can integrate facial images with speech signals to target a subject in a reinforcement learning (RL) paradigm. Through this approach, the rules of learning will involve associating paired stimuli (stimulus–stimulus, i.e., face–speech), which is also known as predictor-choice pairs.
Prior to a learning simulation, we extract the features of the biometrics used in the study. For facial features, we experiment by using two approaches: principal component analysis (PCA)-based Eigenfaces and singular value decomposition (SVD). For speech features, we use wavelet packet decomposition (WPD). The
experiments show that the PCA-based Eigenfaces feature extraction approach produces better results than SVD. We implement the proposed learning model by using the Spike-Timing-Dependent Plasticity (STDP) algorithm, which depends on the time and rate of pre-post synaptic spikes. The key contribution of our study is the implementation of learning rules via STDP and firing rate in spatiotemporal neural networks based on the Izhikevich spiking model. In our learning, we implement learning for response group association by following the reward-modulated STDP in terms of RL, wherein the firing rate of the response groups determines the reward that will be given. We perform a number of experiments that use existing face samples from the Olivetti Research Laboratory (ORL) dataset, and speech samples from TIDigits. After several experiments and simulations are performed to recognize a subject, the results show that the proposed learning model can associate the
predictor (face) with the choice (speech) at optimum performance rates of 77.26% and 82.66% for training and testing, respectively. We also perform learning by using real data, that is, an experiment is conducted on a sample of face–speech data, which have been collected in a manner similar to that of the initial data. The performance results are 79.11% and 77.33% for training and testing, respectively. Based on these results, the proposed learning model can produce high learning performance in terms
of combining heterogeneous data (face-speech). This finding opens possibilities to expand RL in the field of biometric authentication.
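The pair-based STDP window and its reward-modulated variant described above can be sketched as follows; the time constants and amplitudes are illustrative values, not the paper's.

```python
import numpy as np

def stdp_delta_w(t_pre, t_post, a_plus=0.10, a_minus=0.12, tau=20.0):
    """Pair-based STDP window: potentiate when the presynaptic spike
    precedes the postsynaptic spike (dt > 0), depress otherwise.
    Times are in ms; a_plus, a_minus and tau are illustrative."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

def reward_modulated_update(w, t_pre, t_post, reward, lr=1.0):
    """Reward-modulated STDP in the RL sense: the plain STDP update is
    scaled by a reward signal, which in the paper is derived from the
    firing rate of the response groups."""
    return w + lr * reward * stdp_delta_w(t_pre, t_post)
```

With zero reward the weight is unchanged, so learning only consolidates face-speech associations when the response groups fire in the rewarded pattern.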
Intelligent Biosignal Processing in Wearable and Implantable Sensors
This reprint provides a collection of papers illustrating the state-of-the-art of smart processing of data coming from wearable, implantable or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Revealing examples are brain-machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master's students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.