
    Recognition of Promoters in DNA Sequences Using Weightily Averaged One-dependence Estimators

    The completion of the human genome project in the last decade has generated strong demand for computational analysis techniques to fully exploit the acquired human genome database. The project produced a perplexing mass of genetic data that necessitates automatic genome annotation, and there is growing interest in gene finding and gene recognition from DNA sequences. In genetics, a promoter is a segment of DNA that marks the starting point of transcription of a particular gene; recognizing promoters is therefore one step towards gene finding in DNA sequences. Promoters also play a fundamental role in many other vital cellular processes, and aberrant promoters can cause a wide range of diseases, including cancers. This paper describes a state-of-the-art machine learning approach, weightily averaged one-dependence estimators, for recognizing promoters in genetic sequences. To lower the computational complexity and increase the generalization capability of the system, we employ an entropy-based feature extraction approach to select the nucleotides that are directly responsible for promoter recognition. We carried out proof-of-concept experiments on a dataset extracted from the biological literature. The proposed system achieved an accuracy of 97.17% in classifying promoters. The experimental results demonstrate the efficacy of our framework and encourage us to extend it to recognize promoter sequences in various species of higher eukaryotes.
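
    A minimal sketch (not the authors' code) of the entropy-based feature selection step described above: rank nucleotide positions by information gain with respect to the promoter / non-promoter label and keep only the most informative ones. The sequence data, `top_k` value, and toy labels below are illustrative assumptions.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(column, labels):
    """Reduction in label entropy obtained by splitting on one nucleotide position."""
    base = entropy(labels)
    n = len(labels)
    remainder = 0.0
    for nucleotide in set(column):
        subset = [lab for nuc, lab in zip(column, labels) if nuc == nucleotide]
        remainder += len(subset) / n * entropy(subset)
    return base - remainder

def select_positions(sequences, labels, top_k=10):
    """Return the indices of the top_k most informative nucleotide positions."""
    gains = []
    for pos in range(len(sequences[0])):
        column = [seq[pos] for seq in sequences]
        gains.append((information_gain(column, labels), pos))
    return [pos for _, pos in sorted(gains, reverse=True)[:top_k]]

# Toy usage: two promoter and two non-promoter sequences (illustrative only).
seqs = ["TATAATGC", "TATGATGC", "CGGCATAA", "CGGTATAA"]
labs = ["promoter", "promoter", "non-promoter", "non-promoter"]
print(select_positions(seqs, labs, top_k=3))
```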

    Local feature weighting in nearest prototype classification

    The distance metric is the cornerstone of nearest neighbor (NN)-based methods and, therefore, of nearest prototype (NP) algorithms, because these methods classify according to the similarity of the data. When the data are characterized by a set of features that may contribute to the classification task to different degrees, feature weighting or selection is required, sometimes in a local sense. However, local weighting is typically restricted to NN approaches. In this paper, we introduce local feature weighting (LFW) in NP classification. LFW provides each prototype with its own weight vector, in contrast to the typical global weighting methods found in the NP literature, where all prototypes share the same one. Providing each prototype with its own weight vector has a novel effect on the borders of the generated Voronoi regions: they become nonlinear. We have integrated LFW with a previously developed evolutionary nearest prototype classifier (ENPC). Experiments on both artificial and real data sets demonstrate that the resulting algorithm, which we call LFW in nearest prototype classification (LFW-NPC), avoids overfitting the training data in domains where the features may contribute differently to the classification task in different areas of the feature space. This generalization capability is also reflected in automatically obtaining an accurate and reduced set of prototypes.
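
    A minimal sketch of the core idea in local feature weighting for nearest prototype classification: each prototype carries its own weight vector, and a query is classified by the prototype with the smallest per-prototype weighted distance. The prototypes, weights, and labels below are illustrative assumptions; in LFW-NPC they would be learned by the evolutionary algorithm (ENPC).

```python
import numpy as np

def classify(x, prototypes, weights, labels):
    """Label of the prototype minimizing the locally weighted squared distance."""
    dists = [np.sum(w * (x - p) ** 2) for p, w in zip(prototypes, weights)]
    return labels[int(np.argmin(dists))]

# Two prototypes with different local feature weightings (toy values).
prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
weights = np.array([[1.0, 0.1],   # prototype 0: feature 1 barely matters here
                    [0.1, 1.0]])  # prototype 1: feature 0 barely matters here
labels = ["A", "B"]

print(classify(np.array([0.4, 0.9]), prototypes, weights, labels))
```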

    The Optimisation of Bayesian Classifier in Predictive Spatial Modelling for Secondary Mineral Deposits

    This paper discusses the general concept of the Bayesian network classifier and the optimisation of a predictive spatial model using Naive Bayes (NB) on secondary mineral deposit data. Different NB modelling approaches to the mineral distribution data were used to predict the occurrence of a particular mineral deposit in a given area, including predictive attribute sub-selection, normalised attribute selection, NB with dependent attributes, and strict adherence to the NB assumption of attribute independence. Model performance was assessed by selecting the model with the best predictive accuracy, and the NB classifier that violates the assumption of attribute independence was compared with the other forms of NB. The aim is to improve the general performance of the model through the best selection of predictive attribute data. The paper elaborates the workings of a Bayesian network learning model, the concept of NB, and its application to predicting mineral deposit potential. The result of the optimised NB model, based on predictive accuracy and the Receiver Operating Characteristic (ROC) value, is also reported.
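
    A minimal sketch (not the paper's pipeline) of comparing Naive Bayes models built on different predictive-attribute subsets and scoring them by accuracy and ROC AUC, as the abstract describes. The synthetic data and the particular attribute subsets are illustrative assumptions only.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))   # stand-in for geochemical / spatial attributes
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

subsets = {"all attributes": [0, 1, 2, 3, 4, 5],
           "sub-selection": [0, 2, 4]}   # a hypothetical reduced attribute set

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, cols in subsets.items():
    model = GaussianNB().fit(X_tr[:, cols], y_tr)
    proba = model.predict_proba(X_te[:, cols])[:, 1]
    print(name,
          "accuracy =", round(accuracy_score(y_te, model.predict(X_te[:, cols])), 3),
          "ROC AUC =", round(roc_auc_score(y_te, proba), 3))
```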

    Biometric Authentication System on Mobile Personal Devices

    We propose a secure, robust, and low-cost biometric authentication system on a mobile personal device for the personal network. The system consists of five key modules: 1) face detection; 2) face registration; 3) illumination normalization; 4) face verification; and 5) information fusion. For the complicated face authentication task on devices with limited resources, the emphasis is largely on the reliability and applicability of the system, and both theoretical and practical considerations are taken into account. The final system achieves an equal error rate of 2% under challenging testing protocols. The low hardware and software cost makes the system well suited to a large range of security applications.
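
    A minimal sketch of how an equal error rate (EER), the figure reported in this abstract, can be estimated from verification scores: sweep a threshold and find where the false acceptance rate and false rejection rate cross. The genuine and impostor score distributions below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate EER over thresholds drawn from the observed scores."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = (1.0, None)
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        gap = abs(far - frr)
        if gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

rng = np.random.default_rng(1)
genuine = rng.normal(0.8, 0.1, 1000)    # similarity scores for true matches (toy)
impostor = rng.normal(0.4, 0.15, 1000)  # similarity scores for impostors (toy)
print("EER ~", round(equal_error_rate(genuine, impostor), 3))
```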

    Multi-label learning by extended multi-tier stacked ensemble method with label correlated feature subset augmentation

    Classification is one of the basic and most important operations in data science and machine learning applications. Multi-label classification is an extension of the multi-class problem: whereas a multi-class problem associates a single class label with each instance, a multi-label problem associates a set of class labels with each instance. Although many stacked ensemble methods have been proposed, the complexity of multi-label problems leaves considerable scope for improving prediction accuracy. In this paper, we propose the novel extended multi-tier stacked ensemble (EMSTE) method, which selects label-correlated feature subsets and augments them while constructing the intermediate dataset, thereby improving prediction accuracy in the generalization phase of stacking. The performance of the proposed method has been compared with existing methods, and the results show that it outperforms them.
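
    A minimal two-tier stacking sketch for multi-label classification, in the spirit of the abstract: a base tier of per-label classifiers whose predictions augment the features of the intermediate dataset used by the meta tier. The label-correlation-based feature subset selection of EMSTE is not reproduced here; the synthetic data and the choice of logistic regression are assumptions.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, Y = make_multilabel_classification(n_samples=400, n_features=15, n_labels=3,
                                       n_classes=4, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Tier 1: one binary classifier per label (binary relevance).
base = [LogisticRegression(max_iter=500).fit(X_tr, Y_tr[:, j])
        for j in range(Y.shape[1])]
meta_tr = np.column_stack([clf.predict_proba(X_tr)[:, 1] for clf in base])
meta_te = np.column_stack([clf.predict_proba(X_te)[:, 1] for clf in base])

# Tier 2: meta classifiers see the original features augmented with tier-1
# outputs, so each label can exploit the other labels' predictions.
meta = [LogisticRegression(max_iter=500).fit(np.hstack([X_tr, meta_tr]), Y_tr[:, j])
        for j in range(Y.shape[1])]
pred = np.column_stack([clf.predict(np.hstack([X_te, meta_te])) for clf in meta])
print("subset accuracy:", np.mean(np.all(pred == Y_te, axis=1)))
```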