
    Efficient Data Driven Multi Source Fusion

    Data/information fusion is an integral component of many existing and emerging applications, e.g., remote sensing, smart cars, the Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than any one individual input can provide, the challenge is often to determine the underlying mathematics of aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature and decision level fusion for machine learning, with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^(N-1)) monotonicity constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables; the computational complexity of the learning algorithm is therefore proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (i.e., missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints, so there is no need to enforce them explicitly as traditional GAs require. In addition, this algorithm provides an efficient representation of the search space with the minimal set of vertices. Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves visual-clustering-based band group selection and Lp-norm multiple kernel learning based feature level fusion in hyperspectral image processing to enhance pixel level classification.
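
    To make the aggregation concrete, the following is a minimal sketch of the discrete Choquet integral with respect to a fuzzy measure, using the standard sorted-difference form. The dictionary-based measure and the example values are illustrative assumptions, not the dissertation's learned parameters; note how a single sample only touches the N measure values along one sort-order chain, which is what makes data-supported learning scale.

        # Minimal sketch: discrete Choquet integral of N inputs x with respect
        # to a fuzzy measure g defined on subsets of {0, ..., N-1}.
        # g maps frozensets to [0, 1], with g(empty) = 0, g(full set) = 1,
        # and is monotone with respect to set inclusion.

        def choquet_integral(x, g):
            n = len(x)
            # Sort indices so that x[pi[0]] >= x[pi[1]] >= ... >= x[pi[n-1]].
            pi = sorted(range(n), key=lambda i: x[i], reverse=True)
            total = 0.0
            subset = frozenset()
            for k, i in enumerate(pi):
                subset = subset | {i}
                # Difference form: (x_(k) - x_(k+1)) * g(A_k), with x_(n+1) = 0.
                nxt = x[pi[k + 1]] if k + 1 < n else 0.0
                total += (x[i] - nxt) * g[subset]
            return total

        # Example with N = 3; the measure values below are made up for
        # illustration. Each evaluation visits only one chain of 3 subsets.
        g = {
            frozenset(): 0.0,
            frozenset({0}): 0.3, frozenset({1}): 0.4, frozenset({2}): 0.2,
            frozenset({0, 1}): 0.8, frozenset({0, 2}): 0.5, frozenset({1, 2}): 0.6,
            frozenset({0, 1, 2}): 1.0,
        }
        print(choquet_integral([0.7, 0.2, 0.5], g))  # 0.41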

    Complementary Feature Level Data Fusion for Biometric Authentication Using Neural Networks

    Data fusion as a formal research area is referred to as multi-sensor data fusion. The premise is that data combined from multiple sources can provide more meaningful, accurate, and reliable information than data from a single source. There are many application areas in military and security as well as civilian domains. Multi-sensor data fusion as applied to biometric authentication is termed multi-modal biometrics. Though based on similar premises, and sharing many similarities with formal data fusion, multi-modal biometrics differs in relation to data fusion levels. The objective of the current study was to apply feature level fusion of fingerprint features and keystroke dynamics data for authentication purposes, utilizing Artificial Neural Networks (ANNs) as a classifier. Data fusion was performed adopting the complementary paradigm, which utilizes all processed data from both sources. Experimental results returned a false acceptance rate (FAR) of 0.0 and a worst-case false rejection rate (FRR) of 0.0004. This worst-case performance is at least as good as most other research in the field. The experimental results also demonstrated that data fusion gave a better outcome than either fingerprint or keystroke dynamics alone.
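
    A minimal sketch of the complementary paradigm described above: all processed features from both sources are concatenated into one vector and fed to a single ANN classifier. The feature dimensions and the random placeholder data are assumptions for illustration only, not the study's fingerprint or keystroke datasets.

        # Complementary feature-level fusion sketch: keep everything from
        # both modalities and train one ANN on the concatenated vectors.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        n_samples = 200
        fingerprint_feats = rng.random((n_samples, 64))  # placeholder fingerprint features
        keystroke_feats = rng.random((n_samples, 24))    # placeholder dwell/flight times
        labels = rng.integers(0, 2, n_samples)           # 1 = genuine, 0 = impostor

        # Complementary paradigm: concatenate all processed data from both sources.
        fused = np.hstack([fingerprint_feats, keystroke_feats])

        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
        clf.fit(fused, labels)
        print("training accuracy:", clf.score(fused, labels))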

    Multi-Source Data Fusion for Cyberattack Detection in Power Systems

    Cyberattacks can have a severe impact on power systems unless they are detected early. However, accurate and timely detection in critical infrastructure systems presents challenges, e.g., due to zero-day vulnerability exploitations and the cyber-physical nature of the system, coupled with the need for high reliability and resilience of the physical system. Conventional rule-based and anomaly-based intrusion detection system (IDS) tools are insufficient for detecting zero-day cyber intrusions in industrial control system (ICS) networks. Hence, in this work, we show that fusing information from multiple data sources can help identify cyber-induced incidents and reduce false positives. Specifically, we present how to recognize and address the barriers that can prevent the accurate use of multiple data sources for fusion-based detection. We perform multi-source data fusion for training an IDS in a cyber-physical power system testbed, where we collect cyber- and physical-side data from multiple sensors emulating real-world data sources that would be found in a utility, and synthesize these into features for algorithms to detect intrusions. Results are presented using the proposed data fusion application to infer False Data Injection and Command Injection-based Man-in-the-Middle (MiTM) attacks. Post collection, the data fusion application performs a time-synchronized merge and extracts features, followed by pre-processing such as imputation and encoding, before training supervised, semi-supervised, and unsupervised learning models to evaluate the performance of the IDS. A major finding is the improvement in detection accuracy from fusing features across the cyber, security, and physical domains. Additionally, we observed that the co-training technique performs on par with supervised learning methods when fed with our features.
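
    The fusion steps named in the abstract (time-synchronized merge, then imputation and encoding) can be sketched as below. The column names, timestamps, and tolerance window are illustrative assumptions, not the testbed's actual schema.

        # Sketch: align cyber-side and physical-side streams on time, then
        # impute missing measurements and encode categorical fields.
        import pandas as pd

        cyber = pd.DataFrame({
            "time": pd.to_datetime(["2021-01-01 00:00:00.10",
                                    "2021-01-01 00:00:00.35"]),
            "retransmissions": [0, 7],
            "protocol": ["DNP3", "DNP3"],
        })
        physical = pd.DataFrame({
            "time": pd.to_datetime(["2021-01-01 00:00:00.12",
                                    "2021-01-01 00:00:00.30"]),
            "bus_voltage": [1.02, None],  # a dropped measurement to impute
        })

        # Time-synchronized merge: pair each cyber record with the nearest
        # physical measurement inside a tolerance window.
        fused = pd.merge_asof(cyber.sort_values("time"),
                              physical.sort_values("time"),
                              on="time", direction="nearest",
                              tolerance=pd.Timedelta("100ms"))

        # Pre-processing: mean imputation and one-hot encoding.
        fused["bus_voltage"] = fused["bus_voltage"].fillna(fused["bus_voltage"].mean())
        fused = pd.get_dummies(fused, columns=["protocol"])
        print(fused)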

    Antimicrobial peptide identification using multi-scale convolutional network

    Background: Antibiotic resistance has become an increasingly serious problem in the past decades. As an alternative, antimicrobial peptides (AMPs) have attracted considerable attention. To identify new AMPs, machine learning methods have been commonly used, and more recently some deep learning methods have also been applied to this problem. Results: In this paper, we designed a deep learning model to identify AMP sequences. We employed an embedding layer and a multi-scale convolutional network in our model. The multi-scale convolutional network, which contains multiple convolutional layers of varying filter lengths, can utilize all latent features captured by those layers. To further improve performance, we also incorporated additional information into the designed model and proposed a fusion model. Results showed that our model outperforms the state-of-the-art models on two AMP datasets and the Antimicrobial Peptide Database (APD3) benchmark dataset. The fusion model also outperforms the state-of-the-art model on an anti-inflammatory peptides (AIPs) dataset in terms of accuracy. Conclusions: The multi-scale convolutional network is a novel addition to existing deep neural network (DNN) models. The proposed DNN model and the modified fusion model outperform the state-of-the-art models for new AMP discovery. The source code and data are available at https://github.com/zhanglabNKU/APIN.
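
    A minimal PyTorch sketch of the multi-scale idea: an embedding layer feeds parallel 1-D convolutions with different filter lengths, and the pooled outputs of all branches are concatenated for classification. The vocabulary size, channel counts, and kernel sizes are illustrative assumptions, not the paper's hyperparameters.

        # Multi-scale convolutional block for peptide sequences.
        import torch
        import torch.nn as nn

        class MultiScaleCNN(nn.Module):
            def __init__(self, vocab=21, emb=64, channels=32, kernels=(3, 5, 7)):
                super().__init__()
                self.embed = nn.Embedding(vocab, emb)  # amino-acid embedding
                self.convs = nn.ModuleList(
                    nn.Conv1d(emb, channels, k, padding=k // 2) for k in kernels)
                self.fc = nn.Linear(channels * len(kernels), 1)  # AMP / non-AMP logit

            def forward(self, seq):                  # seq: (batch, length) int tokens
                x = self.embed(seq).transpose(1, 2)  # -> (batch, emb, length)
                # One branch per filter length; global max-pool each branch.
                feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
                return self.fc(torch.cat(feats, dim=1))

        model = MultiScaleCNN()
        logits = model(torch.randint(0, 21, (4, 50)))  # 4 sequences of length 50
        print(logits.shape)  # torch.Size([4, 1])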

    Robust multi-modal and multi-unit feature level fusion of face and iris biometrics

    Multi-biometrics has recently emerged as a means of more robust and efficient personal verification and identification. By exploiting information from multiple sources at various levels, i.e., feature, score, rank, or decision, the false acceptance and rejection rates can be considerably reduced. Among these, feature level fusion is a relatively understudied problem. This paper addresses feature level fusion for multi-modal and multi-unit sources of information. For multi-modal fusion the face and iris biometric traits are considered, while multi-unit fusion is applied to merge the data from the left and right iris images. The proposed approach computes SIFT features from both biometric sources, either multi-modal or multi-unit. For each source, the extracted SIFT features are selected via spatial sampling. These selected features are then concatenated into a single feature super-vector using serial fusion, and the concatenated vector is used to perform classification. Experimental results on standard face and iris biometric databases are presented. The reported results clearly show the performance improvements in classification obtained by applying feature level fusion for both multi-modal and multi-unit biometrics in comparison to uni-modal classification and score level fusion.
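
    A rough OpenCV sketch of the serial fusion pipeline: SIFT descriptors are extracted per source, reduced to a fixed budget, and concatenated into one super-vector. The fixed keypoint budget and response-based selection below are a simplified stand-in for the paper's spatial sampling, and the random images are placeholders for real face and iris inputs.

        # Serial feature-level fusion of SIFT descriptors from two sources.
        import cv2  # requires opencv-python >= 4.4 for SIFT_create
        import numpy as np

        sift = cv2.SIFT_create()
        rng = np.random.default_rng(0)

        def sampled_descriptors(gray_img, n_keypoints=32):
            kps, desc = sift.detectAndCompute(gray_img, None)
            if desc is None:
                return np.zeros((n_keypoints, 128), dtype=np.float32)
            # Stand-in for spatial sampling: keep strongest keypoints, pad if short.
            order = np.argsort([-kp.response for kp in kps])[:n_keypoints]
            desc = desc[order]
            pad = np.zeros((n_keypoints - len(desc), 128), dtype=np.float32)
            return np.vstack([desc, pad])

        # Placeholder images standing in for a face image and an iris image.
        face = (rng.random((128, 128)) * 255).astype(np.uint8)
        iris = (rng.random((128, 128)) * 255).astype(np.uint8)

        # Serial fusion: concatenate per-source descriptors into one super-vector.
        super_vector = np.concatenate(
            [sampled_descriptors(face).ravel(), sampled_descriptors(iris).ravel()])
        print(super_vector.shape)  # (8192,) = 2 sources * 32 keypoints * 128 dims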