
    Improving acoustic vehicle classification by information fusion

    We present an information fusion approach for ground vehicle classification based on the emitted acoustic signal. Many acoustic factors can contribute to the classification accuracy of working ground vehicles. Classification relying on a single feature set may lose some useful information if its underlying sound production model is not comprehensive. To improve classification accuracy, we consider an information fusion scheme in which various aspects of an acoustic signature are taken into account and emphasized separately by two different feature extraction methods. The first set of features aims to represent internal sound production, and a number of harmonic components are extracted to characterize the factors related to the vehicle’s resonance. The second set of features is extracted based on a computationally effective discriminatory analysis, and a group of key frequency components is selected by mutual information, accounting for the sound production from the vehicle’s exterior parts. In correspondence with this structure, we further put forward a modified Bayesian fusion algorithm, which takes advantage of matching each specific feature set with its favored classifier. To assess the proposed approach, experiments are carried out on a data set containing acoustic signals from different types of vehicles. Results indicate that the fusion approach can effectively increase classification accuracy compared to that achieved using each individual feature set alone. The Bayesian-based decision-level fusion is also found to outperform a feature-level fusion approach.
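    As a rough illustration of the decision-level fusion step described above, the sketch below combines the posterior outputs of two classifiers with a plain naive-Bayes product rule; the class count, the array values and the use of the basic product rule are illustrative assumptions, not the paper's modified Bayesian algorithm.

        import numpy as np

        def bayes_fuse(post_a, post_b, priors):
            # Under a conditional-independence assumption,
            # p(c | x_a, x_b) is proportional to p(c | x_a) * p(c | x_b) / p(c).
            fused = post_a * post_b / priors
            fused /= fused.sum(axis=1, keepdims=True)  # renormalise per sample
            return fused

        # Toy posteriors for two samples over three vehicle classes, e.g. from a
        # classifier on harmonic features (a) and one on key-frequency features (b).
        post_a = np.array([[0.6, 0.3, 0.1],
                           [0.2, 0.5, 0.3]])
        post_b = np.array([[0.5, 0.4, 0.1],
                           [0.1, 0.7, 0.2]])
        priors = np.full(3, 1.0 / 3.0)
        print(bayes_fuse(post_a, post_b, priors).argmax(axis=1))  # fused class decisions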

    An adaptive model for multi-modal biometrics decision fusion

    Master's thesis (Master of Engineering)

    An intelligent multimodal biometric authentication model for personalised healthcare services

    With the advent of modern technologies, the healthcare industry is moving towards a more personalised smart care model. The enablers of such care models are the Internet of Things (IoT) and Artificial Intelligence (AI). These technologies collect and analyse data from persons in care to alert relevant parties if any anomaly is detected in a patient’s regular pattern. However, such reliance on IoT devices to capture continuous data extends the attack surface and demands strong security measures. Both patients and devices need to be authenticated to mitigate a large number of attack vectors. Biometric authentication has been seen as a promising technique in these scenarios. To this end, this paper proposes an AI-based multimodal biometric authentication model for single and group-based users’ device-level authentication that offers stronger protection than the traditional single-modal approach. To test the efficacy of the proposed model, a series of AI models are trained and tested using physiological biometric features such as ECG (Electrocardiogram) and PPG (Photoplethysmography) signals from five public datasets available in the Physionet and Mendeley data repositories. The multimodal fusion authentication model shows promising results, with 99.8% accuracy and an Equal Error Rate (EER) of 0.16.
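    For readers unfamiliar with how such a fusion model is evaluated, the following sketch shows a generic weighted-sum fusion of per-modality match scores and an empirical Equal Error Rate computation; the weight, the simulated score distributions and all names are hypothetical and not taken from the paper.

        import numpy as np

        def fuse(ecg_scores, ppg_scores, w=0.5):
            # Weighted-sum score-level fusion; the weight w is an assumption.
            return w * ecg_scores + (1.0 - w) * ppg_scores

        def eer(genuine, impostor):
            # Empirical EER: threshold where false-accept and false-reject rates meet.
            thresholds = np.sort(np.concatenate([genuine, impostor]))
            far = np.array([(impostor >= t).mean() for t in thresholds])
            frr = np.array([(genuine < t).mean() for t in thresholds])
            i = np.argmin(np.abs(far - frr))
            return (far[i] + frr[i]) / 2.0

        # Simulated genuine/impostor scores standing in for real ECG/PPG matchers.
        rng = np.random.default_rng(1)
        genuine = fuse(rng.normal(2.0, 1.0, 500), rng.normal(2.0, 1.0, 500))
        impostor = fuse(rng.normal(0.0, 1.0, 500), rng.normal(0.0, 1.0, 500))
        print(f"EER = {eer(genuine, impostor):.3f}")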

    Towards Explaining the Success (Or Failure) of Fusion in Biometric Authentication

    Biometric authentication is a process of verifying an identity claim using a person's behavioral and physiological characteristics. Due to the vulnerability of the system to environmental noise and variation caused by the user, fusion of several biometric-enabled systems is identified as a promising solution. In the literature, various fixed rules (e.g. min, max, median, mean) and trainable classifiers (e.g. linear combination of scores or weighted sum) are used to combine the scores of several base systems. Despite the many empirical experiments reported in the literature, few works have studied the wide range of factors that can affect fusion performance, and most have treated these factors in isolation. Some of these factors are: 1) dependency among the features to be combined, 2) the choice of fusion classifier/operator, 3) the choice of decision threshold, 4) the relative base-system performance, 5) the presence of noise (or the degree of robustness of classifiers to noise), and 6) the type of classifier output. To understand these factors, we propose to model Equal Error Rate (EER), a commonly used performance measure in biometric authentication. Tackling factors 1-5 implies that the use of a class-conditional Gaussian distribution is imperative, at least to begin with. When the class-conditional scores (client or impostor) to be combined follow a multivariate Gaussian, factors 1, 3, 4 and 5 can be readily modeled. The challenge then lies in establishing the missing link between EER and the fusion classifiers mentioned above. Based on the EER framework, we can even derive such a missing link for non-linear fusion classifiers, a proposal that, to the best of our knowledge, has not been investigated before. The principal difference between the theoretical EER model proposed here and previous studies in this direction is that scores are considered log-likelihood ratios (of client versus impostor) and the decision threshold is considered a prior (or log-prior ratio). In the previous studies, scores are considered posterior probabilities, whereby the role of the adjustable threshold as a prior adjustment parameter is somewhat less emphasized. When the EER models (especially those based on order statistics) cannot adequately address factors 1 and 4, we rely on simulations, which are relatively easy to carry out and whose results can be interpreted more easily. There are, however, several issues left untreated by the EER models, namely: 1) what if the scores are known not to be approximately normally distributed (for instance those due to Multi-Layer Perceptron outputs); 2) what if scores among the classifiers to be combined are not comparable in range (their distributions differ from each other); 3) how to evaluate performance measures other than EER. For issue 1, we propose to reverse the squashing function such that the data (scores) are once again approximately normal. For issue 2, some score normalization procedures are proposed, namely F-ratio normalization (F-Norm) and margin normalization. F-Norm has the advantage that scores are made comparable while the relative shape of the distribution remains the same. Margin normalization has the advantage that no free parameter is required and the transformation relies entirely on the class-conditional scores. Finally, although the Gaussian assumption is central to this work, we show that it is possible to remove this assumption by modeling the scores to be combined with a mixture of Gaussians. Some 1186 BANCA experiments verify that such an approach can estimate the system performance better than using the Gaussian assumption.
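    A commonly cited closed form in this line of work expresses the EER of Gaussian class-conditional scores through the F-ratio; the sketch below computes it, together with a simplified F-ratio-style normalisation, and is meant as an assumption-laden illustration rather than a reproduction of the thesis's exact derivations.

        import numpy as np
        from scipy.special import erf

        def theoretical_eer(mu_c, sd_c, mu_i, sd_i):
            # With client scores ~ N(mu_c, sd_c^2) and impostor scores ~ N(mu_i, sd_i^2):
            #   F-ratio = (mu_c - mu_i) / (sd_c + sd_i)
            #   EER     = 0.5 - 0.5 * erf(F-ratio / sqrt(2))
            f_ratio = (mu_c - mu_i) / (sd_c + sd_i)
            return 0.5 - 0.5 * erf(f_ratio / np.sqrt(2.0))

        def f_norm(scores, mu_c, mu_i):
            # Simplified F-ratio-style normalisation: impostor mean -> 0, client mean -> 1.
            # (The thesis's exact F-Norm may differ in how the client mean is estimated.)
            return (scores - mu_i) / (mu_c - mu_i)

        print(theoretical_eer(mu_c=2.0, sd_c=1.0, mu_i=0.0, sd_i=1.0))  # approx 0.159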

    Performance analysis of multimodal biometric fusion

    Biometrics is a constantly evolving technology which has been widely used in many official and commercial identification applications. In fact, in recent years biometric-based authentication techniques have received more attention due to increased security concerns. Most biometric systems currently in use typically employ a single biometric trait; such systems are called unibiometric systems. Despite considerable advances in recent years, there are still challenges in authentication based on a single biometric trait, such as noisy data, restricted degrees of freedom, intra-class variability, non-universality, spoof attacks and unacceptable error rates. Some of these challenges can be handled by designing a multimodal biometric system. Multimodal biometric systems are those which utilize, or are capable of utilizing, more than one physiological or behavioural characteristic for enrolment, verification, or identification. In this thesis, we propose a novel fusion approach at a hybrid level between iris and online signature traits. Online signature and iris authentication techniques have been employed in a range of biometric applications. Besides improving accuracy, the fusion of both biometrics has several advantages, such as increasing population coverage, deterring spoofing activities and reducing enrolment failure. In this doctoral dissertation, we make a first attempt to combine online signature and iris biometrics. We principally explore the fusion of iris and online signature biometrics and their potential application as biometric identifiers. To address this issue, investigations are carried out into the relative performance of several statistical data fusion techniques for integrating the information in both unimodal and multimodal biometrics. We compare the results of the multimodal approach with the results of the individual online signature and iris authentication approaches. This dissertation describes research into the feature and decision fusion levels in multimodal biometrics. State of Kuwait – The Public Authority of Applied Education and Training.
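    To make the fusion levels concrete, here is a minimal sketch contrasting feature-level fusion (normalise and concatenate per-modality features) with a simple AND-rule decision fusion, using randomly generated stand-ins for iris and online-signature features; none of the data, dimensions or classifier choices come from the dissertation.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X_iris = rng.normal(size=(200, 32))       # stand-in iris features
        X_sig = rng.normal(size=(200, 16))        # stand-in online-signature features
        y = rng.integers(0, 2, size=200)          # 1 = genuine claim, 0 = impostor

        # Feature-level fusion: normalise each modality, then concatenate.
        X_fused = np.hstack([StandardScaler().fit_transform(X_iris),
                             StandardScaler().fit_transform(X_sig)])
        clf_fused = SVC().fit(X_fused, y)

        # Decision-level fusion (AND rule): accept only if both unimodal classifiers accept.
        clf_iris = SVC().fit(X_iris, y)
        clf_sig = SVC().fit(X_sig, y)
        accept = (clf_iris.predict(X_iris) == 1) & (clf_sig.predict(X_sig) == 1)
        # (Fitting and predicting on the same toy data here purely for illustration.)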

    Multimodal Biometrics Enhancement Recognition System based on Fusion of Fingerprint and PalmPrint: A Review

    This article is an overview of current multimodal biometrics research based on fingerprint and palm-print. It reviews the previous studies for each modality separately and its fusion technique with another biometric modality. The basic biometric system consists of four stages: firstly, the sensor, which is used for enrolment
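    The abstract is cut off after the first stage; purely as a generic illustration, the sketch below wires together a textbook four-stage pipeline (sensor, feature extraction, matching, decision). Everything beyond the sensor stage is an assumption based on the standard biometric-system structure, not on this review.

        from dataclasses import dataclass

        @dataclass
        class BiometricPipeline:
            threshold: float = 0.5

            def sense(self, subject):           # stage 1: sensor / acquisition (enrolment or probe)
                return subject["raw_sample"]

            def extract(self, raw):             # stage 2: feature extraction (assumed stage)
                return [x / 255.0 for x in raw]

            def match(self, probe, template):   # stage 3: matcher, toy similarity score (assumed stage)
                return 1.0 - sum(abs(p - t) for p, t in zip(probe, template)) / len(probe)

            def decide(self, score):            # stage 4: accept/reject decision (assumed stage)
                return score >= self.threshold

        pipe = BiometricPipeline()
        template = pipe.extract(pipe.sense({"raw_sample": [10, 200, 30]}))
        probe = pipe.extract(pipe.sense({"raw_sample": [12, 198, 33]}))
        print(pipe.decide(pipe.match(probe, template)))  # True for this near-identical toy sample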

    Resilient Infrastructure and Building Security


    Sparse Methods for Robust and Efficient Visual Recognition

    Visual recognition has been a subject of extensive research in computer vision. A vast literature exists on feature extraction and learning methods for recognition. However, due to large variations in visual data, robust visual recognition is still an open problem. In recent years, sparse representation-based methods have become popular for visual recognition. By learning a compact dictionary of data and exploiting the notion of sparsity, state-of-the-art results have been obtained on many recognition tasks. However, existing data-driven sparse model techniques may not be optimal for some challenging recognition problems. In this dissertation, we consider some of these recognition tasks and present approaches based on sparse coding for robust and efficient recognition in such cases. First, we study the problem of low-resolution face recognition. This is a challenging problem, and methods have been proposed using super-resolution and machine-learning-based techniques. However, these methods cannot handle variations like illumination changes, which can occur at low resolutions and degrade the performance. We propose a generative approach for classifying low-resolution faces by exploiting 3D face models. Further, we propose a joint sparse coding framework for robust classification at low resolutions. The effectiveness of the method is demonstrated on different face datasets. In the second part, we study a robust feature-level fusion method for multimodal biometric recognition. Although score-level and decision-level fusion methods exist in the biometric literature, feature-level fusion is challenging due to the different output formats of biometric modalities. In this work, we propose a novel sparse representation-based method for multimodal fusion, and present experimental results for a large multimodal dataset. Robustness to noise and occlusion is demonstrated. In the third part, we consider the problem of domain adaptation, where we want to learn effective classifiers for cases where the test images come from a different distribution than the training data. Typically, due to the high cost of human annotation, very few labeled samples are available for images in the test domain. Specifically, we study the problem of adapting sparse dictionary-based classification methods for such cases. We describe a technique which jointly learns projections of data in the two domains, and a latent dictionary which can succinctly represent both domains in the projected low-dimensional space. The proposed method is efficient and performs on par with or better than many competing state-of-the-art methods. Lastly, we study an emerging analysis framework of sparse coding for image classification. We show that analysis sparse coding can give similar performance to typical synthesis sparse coding methods, while being much faster at sparse encoding. We conclude the dissertation with discussions and possible future directions.
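    As a minimal sketch of the synthesis sparse-coding classification recipe referred to above (in the spirit of sparse representation-based classification), the code below solves an l1-regularised coding problem with scikit-learn's Lasso and assigns the class whose dictionary atoms best reconstruct the test sample; the dictionary, regularisation strength and data are illustrative and do not reproduce the dissertation's joint sparse coding or latent-dictionary methods.

        import numpy as np
        from sklearn.linear_model import Lasso

        def src_classify(D, labels, y, alpha=0.01):
            # Sparse-code y over the dictionary D of unit-norm training atoms (columns),
            # then pick the class whose atoms give the smallest reconstruction residual.
            lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
            lasso.fit(D, y)
            x = lasso.coef_
            residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                         for c in np.unique(labels)}
            return min(residuals, key=residuals.get)

        rng = np.random.default_rng(0)
        D = rng.normal(size=(64, 40))
        D /= np.linalg.norm(D, axis=0)              # unit-norm atoms, as is customary
        labels = np.repeat(np.arange(4), 10)        # 4 classes, 10 training atoms each
        y = D[:, 3] + 0.01 * rng.normal(size=64)    # noisy copy of a class-0 atom
        print(src_classify(D, labels, y))           # typically prints 0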