
    Antimicrobial peptide identification using multi-scale convolutional network

    Background: Antibiotic resistance has become an increasingly serious problem in recent decades. As an alternative, antimicrobial peptides (AMPs) have attracted considerable attention. Machine learning methods have commonly been used to identify new AMPs, and more recently deep learning methods have also been applied to this problem. Results: In this paper, we designed a deep learning model to identify AMP sequences. Our model employs an embedding layer and a multi-scale convolutional network. The multi-scale convolutional network, which contains multiple convolutional layers with filters of varying lengths, can exploit all latent features captured by those layers. To further improve performance, we also incorporated additional information into the designed model and proposed a fusion model. Results showed that our model outperforms the state-of-the-art models on two AMP datasets and the Antimicrobial Peptide Database version 3 (APD3) benchmark dataset. The fusion model also outperforms the state-of-the-art model on an anti-inflammatory peptides (AIPs) dataset in terms of accuracy. Conclusions: The multi-scale convolutional network is a novel addition to existing deep neural network (DNN) models. The proposed DNN model and the modified fusion model outperform the state-of-the-art models for new AMP discovery. The source code and data are available at https://github.com/zhanglabNKU/APIN.
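    The core idea of the multi-scale convolutional network, applying filters of several lengths in parallel and pooling each branch before concatenation, can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the filter sizes, the random filter weights, and the `multi_scale_features` helper are all illustrative assumptions.

```python
import numpy as np

def conv1d_valid(x, kernel):
    """Valid-mode 1-D cross-correlation of a sequence with a filter."""
    n = len(x) - len(kernel) + 1
    return np.array([np.dot(x[i:i + len(kernel)], kernel) for i in range(n)])

def multi_scale_features(x, kernel_sizes=(2, 4, 8)):
    """One random filter per scale; global max pooling per branch, then concat."""
    rng = np.random.default_rng(0)
    feats = [conv1d_valid(x, rng.standard_normal(k)).max() for k in kernel_sizes]
    return np.array(feats)

# A length-50 vector stands in for an (already embedded) peptide sequence.
seq = np.random.default_rng(1).standard_normal(50)
print(multi_scale_features(seq).shape)   # (3,) — one pooled feature per scale
```

    In the full model, each scale would use many learned filters rather than one random one, and the concatenated features would feed a classification head.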

    Robust multi-modal and multi-unit feature level fusion of face and iris biometrics

    Multi-biometrics has recently emerged as a means of more robust and efficient personal verification and identification. By exploiting information from multiple sources at various levels, i.e., feature, score, rank, or decision, the false acceptance and rejection rates can be considerably reduced. Among these, feature level fusion is a relatively understudied problem. This paper addresses feature level fusion for multi-modal and multi-unit sources of information. For multi-modal fusion the face and iris biometric traits are considered, while multi-unit fusion is applied to merge the data from the left and right iris images. The proposed approach computes SIFT features from both biometric sources, whether multi-modal or multi-unit. For each source, the extracted SIFT features are selected via spatial sampling. These selected features are then concatenated into a single feature super-vector using serial fusion, and this concatenated feature vector is used to perform classification. Experimental results on standard face and iris biometric databases are presented. The reported results clearly show the performance improvements in classification obtained by applying feature level fusion for both multi-modal and multi-unit biometrics, in comparison to uni-modal classification and score level fusion.
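    The select-then-concatenate pipeline above can be illustrated with a small sketch. Random arrays stand in for 128-dimensional SIFT descriptors, and `spatial_sample` is a hypothetical stand-in for the paper's spatial sampling step; the descriptor counts and the keep-every-other-keypoint rule are assumptions.

```python
import numpy as np

def spatial_sample(descriptors, keep_every=2):
    """Crude stand-in for spatial sampling: keep every k-th keypoint descriptor."""
    return descriptors[::keep_every]

def serial_fusion(*sources):
    """Concatenate the selected descriptors of each source into one super-vector."""
    return np.concatenate([spatial_sample(d).ravel() for d in sources])

rng = np.random.default_rng(0)
face_sift = rng.random((40, 128))   # 40 SIFT descriptors from a face image
iris_sift = rng.random((30, 128))   # 30 SIFT descriptors from an iris image
v = serial_fusion(face_sift, iris_sift)
print(v.shape)   # (4480,) = (20 + 15) sampled descriptors x 128 dims
```

    The same `serial_fusion` call works for multi-unit fusion by passing left- and right-iris descriptor sets instead; the resulting super-vector would then go to a classifier.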

    Target Type Tracking with PCR5 and Dempster's rules: A Comparative Analysis

    In this paper we consider and analyze the behavior of two combination rules for temporal (sequential) attribute data fusion for target type estimation. Our comparative analysis is based on Dempster's fusion rule proposed in Dempster-Shafer Theory (DST) and on the Proportional Conflict Redistribution rule no. 5 (PCR5) recently proposed in Dezert-Smarandache Theory (DSmT). We show, through a very simple scenario and Monte-Carlo simulation, how PCR5 allows very efficient Target Type Tracking and drastically reduces the latency delay for correct Target Type decisions with respect to Dempster's rule. For cases presenting some short Target Type switches, Dempster's rule proves unable to detect the switches and thus to track the Target Type changes correctly. The approach proposed here is totally new, efficient, and promising for incorporation in real-time Generalized Data Association - Multi Target Tracking (GDA-MTT) systems, and provides an important result on the behavior of PCR5 with respect to Dempster's rule. The MatLab source code is provided. Comment: 10 pages, 5 diagrams. Presented at the Fusion 2006 International Conference, Florence, Italy, July 2006.
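    On a two-hypothesis frame with only singleton masses, the two rules being compared reduce to a few lines each. The sketch below is a toy illustration, not the paper's MatLab code: the general DSmT formulation handles arbitrary frames and compound hypotheses, while this minimal version only shows how PCR5 redistributes each partial conflict back to its contributors instead of normalizing the conflict away as Dempster's rule does.

```python
def dempster(m1, m2):
    """Dempster's rule on frame {A, B} with singleton masses: renormalize by 1 - k."""
    k = m1['A'] * m2['B'] + m1['B'] * m2['A']          # total conflicting mass
    return {h: m1[h] * m2[h] / (1 - k) for h in ('A', 'B')}

def pcr5(m1, m2):
    """PCR5: redistribute each partial conflict proportionally to its two sources."""
    m = {h: m1[h] * m2[h] for h in ('A', 'B')}
    for x, y in (('A', 'B'), ('B', 'A')):              # partial conflict m1[x]*m2[y]
        c = m1[x] * m2[y]
        if c > 0:
            m[x] += c * m1[x] / (m1[x] + m2[y])
            m[y] += c * m2[y] / (m1[x] + m2[y])
    return m

m1 = {'A': 0.8, 'B': 0.2}   # belief from the previous time step
m2 = {'A': 0.3, 'B': 0.7}   # conflicting new sensor evidence
print(dempster(m1, m2))     # conflict absorbed by normalization
print(pcr5(m1, m2))         # conflict redistributed; masses still sum to 1
```

    Under sequential fusion of such conflicting reports, the normalization step in Dempster's rule is what makes it slow to react to a true type switch, which is the behavior the paper quantifies.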

    Multi-level fusion of hard and soft information

    Proceedings of: 17th International Conference on Information Fusion (FUSION 2014), Salamanca, Spain, 7-10 July 2014.
    Driven by the underlying need for a yet-to-be-developed framework for fusing heterogeneous data and information at different semantic levels, coming from both sensory and human sources, we present some results of the research being conducted within the NATO Research Task Group IST-106/RTG-051 on "Information Filtering and Multi Source Information Fusion". As part of this ongoing effort, we discuss here a first outcome of our investigation on multi-level fusion. It deals with removing the first hurdle between data/information sources and processes at different levels: representation. Our contention is that a common representation and description framework is the premise for enabling processing that spans different semantic levels. To this end we discuss the use of the Battle Management Language (BML) as a "lingua franca" to encode sensory data and a priori and contextual knowledge, both as hard and soft data.

    Multi-source hierarchical conditional random field model for feature fusion of remote sensing images and LiDAR data

    Feature fusion of remote sensing images and LiDAR point cloud data, which have strong complementarity, can effectively exploit the advantages of multi-class features to provide more reliable information support for remote sensing applications such as object classification and recognition. In this paper, we introduce a novel multi-source hierarchical conditional random field (MSHCRF) model to fuse features extracted from remote sensing images and LiDAR data for image classification. First, typical features are selected to obtain the regions of interest from the multi-source data; then the MSHCRF model is constructed to exploit the features, the category compatibility of images, and the category consistency of the multi-source data based on these regions, and the outputs of the model represent the optimal results of the image classification. Competitive results demonstrate the precision and robustness of the proposed method.

    Combining feature fusion and decision fusion for classification of hyperspectral and LiDAR data

    This paper proposes a method that combines feature fusion and decision fusion for multi-sensor data classification. First, morphological features, which contain elevation and spatial information, are generated on both the LiDAR data and the first few principal components (PCs) of the original hyperspectral (HS) image. Fused features are obtained by projecting the spectral (original HS image), spatial, and elevation features onto a lower-dimensional subspace through a graph-based feature fusion method. Then, four classification maps are produced by using the spectral features, spatial features, elevation features, and graph-fused features individually as input to an SVM classifier. The final classification map is obtained by fusing the four classification maps through weighted majority voting. Experimental results on the fusion of HS and LiDAR data from the 2013 IEEE GRSS Data Fusion Contest demonstrate the effectiveness of the proposed method. Compared to methods using a single data source or feature fusion only, the proposed method improved overall classification accuracy by 10% and 2%, respectively.
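    The final decision-fusion step, weighted majority voting over the four classification maps, can be sketched as follows. The tiny 2x2 label maps and the weights are illustrative assumptions (in practice the weights might come from each classifier's validation accuracy), not values from the paper.

```python
import numpy as np

def weighted_majority_vote(label_maps, weights, n_classes):
    """Each classifier casts its weight for its predicted class at every pixel."""
    votes = np.zeros((n_classes,) + label_maps[0].shape)
    for labels, w in zip(label_maps, weights):
        for c in range(n_classes):
            votes[c] += w * (labels == c)
    return votes.argmax(axis=0)           # per-pixel class with the most weight

# Four 2x2 label maps: spectral, spatial, elevation, and graph-fused classifiers.
maps = [np.array([[0, 1], [2, 1]]),
        np.array([[0, 1], [2, 2]]),
        np.array([[1, 1], [2, 1]]),
        np.array([[0, 2], [2, 1]])]
fused = weighted_majority_vote(maps, weights=[0.9, 0.8, 0.7, 0.95], n_classes=3)
print(fused)   # [[0 1] [2 1]] — majority weight wins at each pixel
```

    Note that with equal weights this degenerates to plain majority voting; unequal weights let a stronger classifier (here the graph-fused one) break ties in its favor.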