Locally weighted learning: How and when does it work in Bayesian networks?
© 2016, Taylor and Francis Ltd. All rights reserved. A Bayesian network (BN) is a simple graphical notation for conditional independence assertions and is well suited to representing probabilistic relationships, for example between diseases and symptoms. Learning the structure of a Bayesian network classifier (BNC) encodes conditional independence assumptions between attributes, which may deteriorate classification performance. One major approach to mitigating the BNC's primary weakness, the attribute independence assumption, is locally weighted learning, which has been shown to perform well for naive Bayes (NB), a BNC with a simple structure. However, it is not known whether, or how effectively, it improves the performance of BNCs with complex structures. In this paper, we first survey complex structure models for BNCs and their improvements, then carry out a systematic experimental analysis of the effectiveness of the locally weighted method for complex BNCs, e.g., tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE) and hidden naive Bayes (HNB), measured by classification accuracy (ACC) and the area under the ROC curve (AUC). Experiments and comparisons on 36 benchmark data sets from the University of California, Irvine (UCI) repository in the Weka system demonstrate that locally weighted techniques only slightly outperform unweighted complex BNCs on ACC and AUC. In other words, although local weighting can significantly improve the performance of NB, it does not work well on BNCs with complex structures, because the performance improvements of complex BNCs come from their structures rather than from local weighting.
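The locally weighted idea the abstract refers to can be illustrated with a minimal sketch: for each query, training instances are weighted by their distance to the query before the naive Bayes parameters are estimated. This is a generic illustration with Gaussian likelihoods and a linear distance kernel, not the exact weighting scheme evaluated in the paper:

```python
import numpy as np

def locally_weighted_nb_predict(X_train, y_train, x_query, k=20):
    """Predict the class of x_query with a locally weighted naive Bayes:
    training instances near the query get higher weight when estimating
    the per-class priors and the Gaussian likelihoods."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Linear kernel over the k nearest neighbours (weight 0 beyond them).
    h = np.sort(dists)[min(k, len(dists) - 1)] + 1e-12
    w = np.maximum(0.0, 1.0 - dists / h)
    classes = np.unique(y_train)
    log_post = []
    for c in classes:
        wc = w[y_train == c]
        Xc = X_train[y_train == c]
        prior = (wc.sum() + 1.0) / (w.sum() + len(classes))  # smoothed weighted prior
        if wc.sum() > 0:
            mu = np.average(Xc, axis=0, weights=wc)
            var = np.average((Xc - mu) ** 2, axis=0, weights=wc) + 1e-6
        else:  # no weighted mass for this class: fall back to global estimates
            mu, var = Xc.mean(axis=0), Xc.var(axis=0) + 1e-6
        loglik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x_query - mu) ** 2 / var)
        log_post.append(np.log(prior) + loglik)
    return classes[int(np.argmax(log_post))]
```

With a simple structure such as NB, the local re-estimation can change the decision boundary substantially near the query; the paper's finding is that this effect largely disappears once the BNC's structure already captures attribute dependencies.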
An outlier ranking tree selection approach to extreme pruning of random forests.
Random Forest (RF) is an ensemble classification technique developed by Breiman over a decade ago. Compared with other ensemble techniques, it has proved its accuracy and superiority. Many researchers, however, believe that there is still room to improve its predictive accuracy. This explains why, over the past decade, there have been many extensions of RF, each employing a variety of techniques and strategies to improve certain aspects of RF. Since it has been shown empirically that ensembles tend to yield better results when there is significant diversity among the constituent models, the objective of this paper is twofold. First, it investigates how an unsupervised learning technique, the Local Outlier Factor (LOF), can be used to identify diverse trees in an RF. Second, the trees with the highest LOF scores are used to create a new forest, termed LOFB-DRF, that is much smaller than the original RF and yet performs at least as well, mostly exhibiting higher accuracy. The latter is a known technique called ensemble pruning. Experimental results on 10 real datasets demonstrate the superiority of the proposed method over the traditional RF. Unprecedented pruning levels, reaching as high as 99%, have been achieved while boosting the predictive accuracy of the ensemble. This notably extreme pruning level makes the technique a good candidate for real-time applications.
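The pruning idea can be sketched as follows, under two assumptions not spelled out in the abstract: each tree is represented by its vector of predictions on a validation set, and the full LOF computation is replaced here by a simpler mean-distance-to-k-nearest-trees outlier score as a stand-in:

```python
import numpy as np

def prune_forest_by_outlierness(tree_preds, keep=5, k=3):
    """Rank trees by how much of an 'outlier' their prediction vector is
    among the other trees' prediction vectors, keeping the top `keep`.
    The mean distance to the k nearest trees is used as a simplified
    stand-in for the Local Outlier Factor score used by LOFB-DRF.
    tree_preds: (n_trees, n_val_instances) array of class predictions."""
    # Hamming-style distance between the trees' prediction vectors.
    dist = (tree_preds[:, None, :] != tree_preds[None, :, :]).mean(axis=2)
    np.fill_diagonal(dist, np.inf)           # ignore self-distance
    knn = np.sort(dist, axis=1)[:, :k]       # k nearest other trees
    score = knn.mean(axis=1)                 # higher = more diverse tree
    return np.argsort(score)[::-1][:keep]    # indices of the kept trees
```

Trees whose predictions diverge most from the rest of the forest score highest and survive the pruning, which is the diversity-driven selection the paper exploits.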
A Diversity-Accuracy Measure for Homogenous Ensemble Selection
Several selection methods in the literature are essentially based on an evaluation function that determines whether a model M contributes positively to the performance of the whole ensemble. In this paper, we propose a method called DIversity and ACcuracy for Ensemble Selection (DIACES), using an evaluation function based on both diversity and accuracy. The method is applied to homogeneous ensembles composed of C4.5 decision trees and is based on a hill-climbing strategy, which allows selecting ensembles with the best compromise between maximum diversity and minimum error rate. Comparative studies show that in most cases the proposed method generates reduced-size ensembles with better performance than the usual ensemble simplification methods.
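The abstract does not give DIACES's exact evaluation function, but the general shape of such a diversity-and-accuracy hill climb can be sketched as below, with majority-vote accuracy and mean pairwise disagreement as illustrative stand-ins and `alpha` as an assumed trade-off parameter:

```python
import numpy as np

def select_ensemble(preds, y_val, alpha=0.8):
    """Greedy (hill-climbing) ensemble selection sketch: repeatedly add
    the classifier that most improves a combined accuracy/diversity
    score of the growing ensemble, stopping when no candidate improves
    it. preds: (n_classifiers, n_instances) validation predictions."""
    def score(sel):
        P = preds[sel]
        vote = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, P)
        acc = (vote == y_val).mean()
        if len(sel) < 2:
            div = 0.0
        else:
            # mean pairwise disagreement as a simple diversity measure
            d = (P[:, None, :] != P[None, :, :]).mean(axis=2)
            div = d[np.triu_indices(len(sel), k=1)].mean()
        return alpha * acc + (1 - alpha) * div
    # start from the single most accurate classifier
    selected = [int(np.argmax([(p == y_val).mean() for p in preds]))]
    improved = True
    while improved:
        improved = False
        best, best_s = None, score(selected)
        for i in range(len(preds)):
            if i in selected:
                continue
            s = score(selected + [i])
            if s > best_s:
                best, best_s = i, s
        if best is not None:
            selected.append(best)
            improved = True
    return selected
```

The hill climb stops at a local optimum of the combined score, which is what yields the reduced-size ensembles the paper reports.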
Improving classifications for cardiac autonomic neuropathy using multi-level ensemble classifiers and feature selection based on random forest
This paper is devoted to an empirical investigation of novel multi-level ensemble meta-classifiers for the detection and monitoring of the progression of cardiac autonomic neuropathy (CAN) in diabetes patients. Our experiments relied on an extensive database and concentrated on ensembles of ensembles, or multi-level meta-classifiers, for the classification of CAN progression. First, we carried out a thorough investigation comparing the performance of various base classifiers for several known sets of the most essential features in this database and determined that Random Forest significantly and consistently outperforms all other base classifiers in this new application. Second, we used the feature selection and ranking implemented in Random Forest, which identified a new set of features that turned out better than all other sets previously considered for this large and well-known database. Random Forest remained the best classifier for the new set of features too. Third, we investigated meta-classifiers and new multi-level meta-classifiers based on Random Forest, which improved its performance. The results show that the novel multi-level meta-classifiers achieved further improvement and obtained outcomes that are significantly better than those previously published in the literature for cardiac autonomic neuropathy.
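The "ensembles of ensembles" structure can be sketched in miniature: base classifiers are partitioned into first-level ensembles, each group votes, and the group votes are combined by a second-level vote. The grouping and plain majority fusion here are illustrative assumptions, not the paper's exact meta-classifier construction:

```python
import numpy as np

def multilevel_vote(base_preds, groups):
    """Two-level meta-classification by voting: each group of base
    classifiers produces a first-level majority vote, and those group
    votes are combined by a second-level majority vote.
    base_preds: (n_classifiers, n_instances) integer class predictions;
    groups: list of index lists partitioning the base classifiers."""
    def vote(P):
        # column-wise majority vote over the rows of P
        return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, P)
    level1 = np.vstack([vote(base_preds[g]) for g in groups])
    return vote(level1)
```

In the paper, the levels are built from stronger learners (Random Forest as base and meta), but the layered-combination principle is the same.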
A new ensemble pruning method for classifiers based on the margin concept (Une nouvelle méthode d'élagage d'ensemble de classifieurs basée sur le concept de marge)
Ensemble methods have been successfully used as a classification scheme. The reduction of the complexity of this popular learning paradigm motivated the appearance of ensemble pruning algorithms. This paper presents a new, efficient ensemble pruning method which not only greatly reduces the complexity of ensemble methods but also performs better than the non-pruned version in terms of classification accuracy. The algorithm orders all the base classifiers with respect to their entropy, which exploits a new version of the margin of ensemble methods. Comparison with both the naive approach of randomly pruning base classifiers and another ordering-based pruning algorithm demonstrated its superiority in an extensive empirical analysis.
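The abstract does not specify the entropy-of-margin criterion itself, but ordering-based pruning of this kind can be sketched generically: classifiers are greedily ordered so that each prefix optimises a margin-based objective, and only a prefix is kept. An exponential loss of the voting margin is used below as a simpler stand-in for the paper's entropy measure:

```python
import numpy as np

def margin_ordered_pruning(preds, y_val, keep):
    """Ordered ensemble pruning sketch: classifiers are greedily ordered
    so that each prefix minimises an exponential loss of the per-instance
    voting margin on a validation set; only the first `keep` are kept.
    preds: (T, n) {0, 1} predictions; y_val: (n,) true labels."""
    T, n = preds.shape
    correct = (preds == y_val).astype(float)   # 1 where classifier is right
    order, remaining = [], list(range(T))
    right = np.zeros(n)                        # running count of right votes
    for t in range(1, T + 1):
        # margin of instance j for a prefix of size t: (right_j - wrong_j) / t
        losses = [np.exp(-(2 * (right + correct[i]) - t) / t).sum()
                  for i in remaining]
        best = remaining[int(np.argmin(losses))]
        right += correct[best]
        order.append(best)
        remaining.remove(best)
    return order[:keep]
```

Because low-margin (hard) instances dominate the exponential loss, the ordering favours classifiers that repair the current ensemble's weakest predictions, which is the intuition behind margin-driven ordered aggregation.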
Incremental construction of classifier and discriminant ensembles
We discuss approaches to incrementally construct an ensemble. The first constructs an ensemble of classifiers by choosing a subset from a larger set, and the second constructs an ensemble of discriminants, where a classifier is used for some classes only. We investigate criteria including accuracy, significant improvement, diversity, correlation, and the role of search direction. For discriminant ensembles, we test subset selection and trees. Fusion is by voting or by a linear model. Using 14 classifiers on 38 data sets, incremental search finds small, accurate ensembles in polynomial time. The discriminant ensemble uses a subset of discriminants and is simpler, interpretable, and accurate. We see that an incremental ensemble has higher accuracy than bagging and the random subspace method, and comparable accuracy to AdaBoost but with fewer classifiers.
We would like to thank the three anonymous referees and the editor for their constructive comments, pointers to related literature, and pertinent questions, which allowed us to better situate our work as well as organize the manuscript and improve the presentation. This work has been supported by the Turkish Academy of Sciences in the framework of the Young Scientist Award Program (EA-TUBA-GEBIP/2001-1-1), Bogazici University Scientific Research Project 05HA101 and Turkish Scientific Technical Research Council TUBITAK EEEAG 104EO79.
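The incremental (forward) construction the abstract describes can be sketched as follows, using majority-vote accuracy on a validation set as the selection criterion; the paper also considers other criteria (significant improvement, diversity, correlation) and fusion by a linear model, which this sketch omits:

```python
import numpy as np

def incremental_ensemble(preds, y_val, eps=0.0):
    """Forward ensemble construction sketch: start from the best single
    classifier and keep adding the candidate that most improves the
    validation accuracy of the majority vote, stopping when no addition
    improves it by more than eps. preds: (T, n) class predictions."""
    def acc(sel):
        vote = np.apply_along_axis(lambda c: np.bincount(c).argmax(),
                                   0, preds[sel])
        return (vote == y_val).mean()
    sel = [int(np.argmax([(p == y_val).mean() for p in preds]))]
    while True:
        cand = [i for i in range(len(preds)) if i not in sel]
        if not cand:
            break
        accs = [acc(sel + [i]) for i in cand]
        best = int(np.argmax(accs))
        if accs[best] <= acc(sel) + eps:   # no sufficient improvement: stop
            break
        sel.append(cand[best])
    return sel
```

Each step evaluates at most T candidates and there are at most T steps, which is the polynomial-time property the abstract mentions.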
Novel approaches for hierarchical classification with case studies in protein function prediction
A very large amount of research in the data mining, machine learning, statistical pattern recognition and related research communities has focused on flat classification problems. However, many problems in the real world, such as hierarchical protein function prediction, have their classes naturally organised into hierarchies. The task of hierarchical classification, however, needs to be better defined, as researchers in one application domain are often unaware of similar efforts developed in other research areas.
The first contribution of this thesis is to survey the task of hierarchical classification across different application domains and present a unifying framework for the task. After clearly defining the problem, we explore novel approaches to the task.
Based on the understanding gained by surveying the task of hierarchical classification, there are three major approaches to deal with hierarchical classification problems. The first approach is to use one of the many existing flat classification algorithms to predict only the leaf classes in the hierarchy. Note that, in the training phase, this approach completely ignores the hierarchical class relationships, i.e. the parent-child and sibling class relationships, but in the testing phase the ancestral classes of an instance can be inferred from its predicted leaf classes. The second approach is to build a set of local models, by training one flat classification algorithm for each local view of the hierarchy. The two main variations of this approach are: (a) training a local flat multi-class classifier at each non-leaf class node, where each classifier discriminates among the child classes of its associated class; or (b) training a local flat binary classifier at each node of the class hierarchy, where each classifier predicts whether or not a new instance has the classifier's associated class. In both these variations, in the testing phase a procedure is used to combine the predictions of the set of local classifiers in a coherent way, avoiding inconsistent predictions. The third approach is to use a global-model hierarchical classification algorithm, which builds one single classification model by taking into account all the hierarchical class relationships in the training phase. In the context of this categorization of hierarchical classification approaches, the other contributions of this thesis are as follows.
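The second approach's variation (a), a local classifier per parent node, can be sketched in a few lines: an instance is routed top-down through the hierarchy, with the classifier at each non-leaf node choosing one child, which guarantees hierarchically consistent predictions. The function and dictionary names below are hypothetical:

```python
def predict_top_down(hierarchy, classifiers, x, root="root"):
    """'Local classifier per parent node' prediction sketch.
    hierarchy: dict mapping each non-leaf node to its list of children;
    classifiers: dict mapping each non-leaf node to a function that,
    given x, returns one of that node's children.
    Returns the predicted path of classes from below the root to a leaf."""
    path, node = [], root
    while node in hierarchy:          # descend until a leaf class is reached
        node = classifiers[node](x)   # local classifier routes to one child
        path.append(node)
    return path                       # predicted ancestor classes + leaf
```

Variation (b) would instead attach an independent yes/no classifier to every node and then reconcile their outputs, while the global-model approach replaces the whole loop with a single model trained on the full hierarchy.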
The second contribution of this thesis is a novel algorithm which is based on the local classifier per parent node approach. The novel algorithm is the selective representation approach that automatically selects the best protein representation to use at each non-leaf class node.
The third contribution is a global-model hierarchical classification extension of the well-known naive Bayes algorithm. Given the good predictive performance of the global-model hierarchical-classification naive Bayes algorithm, we relax the naive Bayes assumption that attributes are independent of each other given the class by using the concept of k dependencies. Hence, we extend the flat-classification k-Dependence Bayesian network classifier to the task of hierarchical classification, which is the fourth contribution of this thesis.
Both the proposed global-model hierarchical classification naive Bayes and the proposed global-model hierarchical k-Dependence Bayesian network classifier achieved predictive accuracies that were, overall, significantly higher than the predictive accuracies obtained by their corresponding local hierarchical classification versions, across a number of datasets for the task of hierarchical protein function prediction.
E-banking operational risk assessment: a soft computing approach in the context of the Nigerian banking industry.
This study investigates E-banking Operational Risk Assessment (ORA) to enable the development of a new ORA framework and methodology. The general view is that E-banking systems have modified some of the traditional banking risks, particularly Operational Risk (OR), as suggested by the Basel Committee on Banking Supervision in 2003. In addition, recent E-banking financial losses, together with risk management principles and standards, raise the need for an effective ORA methodology and framework in the context of E-banking. Moreover, evaluation tools and/or methods for ORA are highly subjective, are still in their infancy, and have not yet reached a consensus. Therefore, it is essential to develop valid and reliable methods for effective ORA and evaluation.
The main contribution of this thesis is to apply a Fuzzy Inference System (FIS) and a Tree Augmented Naïve Bayes (TAN) classifier as standard tools for identifying OR and measuring OR exposure level. In addition, a new ORA methodology is proposed which consists of four major steps: a risk model, an assessment approach, an analysis approach and a risk assessment process. Further, a new ORA framework and measurement metrics are proposed with six factors: frequency of the triggering event, effectiveness of avoidance barriers, frequency of the undesirable operational state, effectiveness of recovery barriers before the risk outcome, approximate cost of Undesirable Operational State (UOS) occurrence, and severity of the risk outcome.
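How a fuzzy inference system turns such factors into a risk score can be shown with a toy Mamdani-style sketch. This is purely illustrative: it uses only two of the six factors, and the triangular membership functions and three rules below are assumptions, not the thesis's actual rule base:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def risk_score(freq, severity):
    """Toy Mamdani fuzzy inference: two inputs on [0, 1] (frequency of
    the triggering event, severity of the risk outcome), three rules,
    centroid defuzzification over a risk universe [0, 1]."""
    lo_f, hi_f = tri(freq, -0.5, 0.0, 0.6), tri(freq, 0.4, 1.0, 1.5)
    lo_s, hi_s = tri(severity, -0.5, 0.0, 0.6), tri(severity, 0.4, 1.0, 1.5)
    z = np.linspace(0.0, 1.0, 101)               # discretised risk universe
    low_risk  = tri(z, -0.5, 0.0, 0.5)
    med_risk  = tri(z, 0.0, 0.5, 1.0)
    high_risk = tri(z, 0.5, 1.0, 1.5)
    # Rules: low freq & low severity -> low risk; high severity -> high
    # risk; a mixed input -> medium risk. Aggregate by max, clip by min.
    agg = np.maximum.reduce([
        np.minimum(min(lo_f, lo_s), low_risk),
        np.minimum(hi_s, high_risk),
        np.minimum(max(min(hi_f, lo_s), min(lo_f, hi_s)), med_risk),
    ])
    return float((z * agg).sum() / (agg.sum() + 1e-12))  # centroid
```

A full framework along the thesis's lines would add membership functions and rules for the remaining four factors (avoidance barriers, UOS frequency, recovery barriers, UOS cost) and calibrate them from survey data.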
The study results were reported based on surveys conducted with senior Nigerian banking officers and banking customers. The study revealed that the framework and assessment tools gave good predictions for risk learning and inference in such systems. Thus, the results obtained can be considered promising and useful both for E-banking system adopters and for future researchers in this area.