
    Modified Firefly Optimization with Deep Learning based Multimodal Biometric Verification Model

    Biometric security has become a central concern in the data security field. Over the years, initiatives in the biometrics field have grown at an increasing rate, yet a multimodal biometric method with a high recognition and precision rate for smart cities remains a challenge. Compared with single-biometric recognition, we considered multimodal biometric recognition based on finger vein and fingerprint, since it offers high security, accurate recognition, and convenient sample collection. This article presents a Modified Firefly Optimization with Deep Learning based Multimodal Biometric Verification (MFFODL-MBV) model. The presented MFFODL-MBV technique performs biometric verification using multiple biometrics such as fingerprint, DNA, and microarray data. In the MFFODL-MBV technique, the EfficientNet model is employed for feature extraction. For biometric recognition, a long short-term memory (LSTM) model is applied, with the MFFO algorithm serving as its hyperparameter optimizer. To demonstrate the improved outcomes of the MFFODL-MBV approach, a widespread experimental analysis was performed, which reported improvements of the MFFODL-MBV technique over other models.
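    As a rough illustration of how a firefly-style optimizer can serve as a hyperparameter tuner (the abstract does not specify the modifications in MFFO, so this is a plain firefly algorithm on a toy objective; the function and all parameter names are assumptions):

```python
import numpy as np

def firefly_minimize(f, dim=2, n=15, iters=60, beta0=1.0, gamma=0.01,
                     alpha=0.2, lo=-5.0, hi=5.0, seed=0):
    """Plain firefly algorithm: dimmer fireflies move toward brighter
    (lower-loss) ones; attractiveness decays with squared distance."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n, dim))
    loss = np.array([f(p) for p in x])
    for t in range(iters):
        step = alpha * (1 - t / iters)          # shrink the random walk over time
        for i in range(n):
            for j in range(n):
                if loss[j] < loss[i]:           # j is brighter, so it attracts i
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + step * rng.uniform(-0.5, 0.5, dim)
                    x[i] = np.clip(x[i], lo, hi)
                    loss[i] = f(x[i])
    best = int(np.argmin(loss))
    return x[best], float(loss[best])

# Toy stand-in for a validation loss over two hyperparameters
# (e.g. an LSTM's learning rate and dropout, rescaled to [-5, 5]).
sphere = lambda p: float(np.sum(p ** 2))
best_x, best_loss = firefly_minimize(sphere)
```

    In a real MFFODL-MBV-style pipeline, `f` would train and evaluate the LSTM verifier on held-out data for each candidate hyperparameter vector rather than evaluate a closed-form function.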

    Multi-modal association learning using spike-timing dependent plasticity (STDP)

    We propose an associative learning model that can integrate facial images with speech signals to target a subject in a reinforcement learning (RL) paradigm. In this approach, the learning rules associate paired stimuli (stimulus–stimulus, i.e., face–speech), also known as predictor–choice pairs. Prior to the learning simulation, we extract the features of the biometrics used in the study. For facial features, we experiment with two approaches: principal component analysis (PCA)-based Eigenfaces and singular value decomposition (SVD). For speech features, we use wavelet packet decomposition (WPD). The experiments show that the PCA-based Eigenfaces feature extraction approach produces better results than SVD. We implement the proposed learning model using the spike-timing-dependent plasticity (STDP) algorithm, which depends on the timing and rate of pre- and post-synaptic spikes. The key contribution of our study is the implementation of learning rules via STDP and firing rate in spatiotemporal neural networks based on the Izhikevich spiking model. For response-group association, we follow reward-modulated STDP in the RL setting, wherein the firing rate of the response groups determines the reward that is given. We perform a number of experiments using existing face samples from the Olivetti Research Laboratory (ORL) dataset and speech samples from TIDigits. After several experiments and simulations to recognize a subject, the results show that the proposed learning model can associate the predictor (face) with the choice (speech) at optimum performance rates of 77.26% and 82.66% for training and testing, respectively. We also perform learning on real data: an experiment is conducted on a sample of face–speech data collected in a manner similar to that of the initial data. The performance results are 79.11% and 77.33% for training and testing, respectively. Based on these results, the proposed learning model can produce high learning performance when combining heterogeneous data (face–speech). This finding opens possibilities to expand RL in the field of biometric authentication.
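    The pairwise STDP rule the abstract builds on can be sketched as a weight-update window over the pre/post spike-time difference; the parameter values below are common textbook defaults, not the paper's settings:

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pairwise STDP window. delta_t = t_post - t_pre in ms.
    Pre-before-post (delta_t >= 0) potentiates the synapse;
    post-before-pre (delta_t < 0) depresses it, each decaying
    exponentially with the timing gap."""
    delta_t = np.asarray(delta_t, dtype=float)
    ltp = a_plus * np.exp(-delta_t / tau_plus)    # long-term potentiation branch
    ltd = -a_minus * np.exp(delta_t / tau_minus)  # long-term depression branch
    return np.where(delta_t >= 0, ltp, ltd)

# Causal pairing strengthens the weight, anti-causal pairing weakens it:
dw_causal = stdp_dw(10.0)    # positive update
dw_anticausal = stdp_dw(-10.0)  # negative update
```

    In a reward-modulated variant like the one described above, an update of this form would additionally be gated by a reward signal derived from the response groups' firing rates.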