
    Combining similarity in time and space for training set formation under concept drift

    Concept drift is a challenge in supervised learning for sequential data. It describes the phenomenon of data distributions changing over time. In such a case, the accuracy of a classifier benefits from selective sampling of the training data. We develop a method for training set selection that is particularly relevant when the expected drift is gradual. Training set selection at each time step is based on the distance to the target instance; the distance function combines similarity in space and similarity in time. The method determines an optimal training set size online at every time step using cross-validation. It is a wrapper approach, so different base classifiers can be plugged in. The proposed method shows the best accuracy in its peer group on real and artificial drifting data, and its complexity is reasonable for field applications.
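The selection scheme the abstract describes can be sketched as a distance that mixes spatial and temporal proximity, then a nearest-neighbor cut. This is a minimal illustration, not the published method: the mixing weight `alpha` and the min-max normalization are assumptions, and the online cross-validated choice of the set size is omitted.

```python
import numpy as np

def space_time_distance(x_query, t_query, X, t, alpha=0.5):
    """Combine spatial and temporal distance to the target instance.
    alpha is a hypothetical mixing weight, not taken from the paper."""
    d_space = np.linalg.norm(X - x_query, axis=1)
    d_time = np.abs(t - t_query).astype(float)
    # Normalize each component to [0, 1] so the two scales are comparable.
    d_space = d_space / (d_space.max() + 1e-12)
    d_time = d_time / (d_time.max() + 1e-12)
    return alpha * d_space + (1.0 - alpha) * d_time

def select_training_set(x_query, t_query, X, t, k):
    """Return indices of the k instances closest in combined space-time
    distance; in the paper k itself would be tuned by cross-validation."""
    d = space_time_distance(x_query, t_query, X, t)
    return np.argsort(d)[:k]
```

An instance that is both close in feature space and recent in time ranks ahead of one that is close in only one of the two respects.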

    Bagging and boosting classification trees to predict churn

    Keywords: Bagging; Boosting; Classification; Churn
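Only keywords survive for this entry, but the title names the technique: bagging classification trees. A minimal sketch of bagging, using a one-level decision stump as a stand-in for a full classification tree (the stump, the data, and the ensemble size are illustrative assumptions, not the paper's setup):

```python
import numpy as np

class Stump:
    """One-level decision tree: a stand-in for the classification
    trees bagged in the paper."""
    def fit(self, X, y):
        best = (0, 0.0, 1.0)  # (feature, threshold, error)
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                pred = (X[:, j] > thr).astype(int)
                err = np.mean(pred != y)
                if err < best[2]:
                    best = (j, thr, err)
        self.j, self.thr, _ = best
        return self

    def predict(self, X):
        return (X[:, self.j] > self.thr).astype(int)

def bagged_predict(X_train, y_train, X_test, n_estimators=25, seed=0):
    """Bagging: fit each stump on a bootstrap resample of the training
    set, then majority-vote the stumps' predictions."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(X_test))
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X_train), len(X_train))
        votes += Stump().fit(X_train[idx], y_train[idx]).predict(X_test)
    return (votes > n_estimators / 2).astype(int)
```

Boosting differs in that resampling (or reweighting) concentrates on previously misclassified instances rather than being uniform.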

    Predictive User Modeling with Actionable Attributes

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In traditional predictive modeling, instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target variable for unseen instances. For example, for marketing purposes a company may consider profiling a new user based on her observed web browsing behavior, referral keywords, or other relevant information. In many real-world applications the values of some attributes are not only observable, but can be actively decided by a decision maker. Furthermore, in some such applications the decision maker is interested not only in generating accurate predictions, but also in maximizing the probability of the desired outcome. For example, a direct marketing manager can choose which type of special offer to send to a client (an actionable attribute), hoping that the right choice will result in a positive response with higher probability. We study how to learn to choose the value of an actionable attribute in order to maximize the probability of a desired outcome in predictive modeling. We emphasize that not all instances are equally sensitive to changes in actions. Accurate choice of an action is critical for those instances which are on the borderline (e.g. users who do not have a strong opinion one way or the other). We formulate three supervised learning approaches for learning to select the value of an actionable attribute at the instance level. We also introduce a focused training procedure which puts more emphasis on the situations where varying the action is most likely to take effect. The proof-of-concept experimental validation on two real-world case studies in the web analytics and e-learning domains highlights the potential of the proposed approaches.
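The core decision step of the abstract can be illustrated as: estimate the probability of a positive outcome for each candidate value of the actionable attribute given the instance's context, then pick the value with the highest estimate. A minimal sketch under assumptions: a plain frequency estimate stands in for the paper's learned models, and the record format `(context, action, outcome)` is hypothetical.

```python
from collections import defaultdict

def fit_outcome_rates(records):
    """Estimate P(positive outcome | context, action) by counting.
    Each record is (context, action, outcome) with outcome in {0, 1};
    a frequency table stands in for the paper's trained classifiers."""
    counts = defaultdict(lambda: [0, 0])  # (context, action) -> [positives, total]
    for context, action, outcome in records:
        c = counts[(context, action)]
        c[0] += outcome
        c[1] += 1
    return counts

def best_action(counts, context, actions):
    """Choose the actionable-attribute value that maximizes the
    estimated probability of the desired outcome for this instance."""
    def rate(a):
        pos, total = counts[(context, a)]
        return pos / total if total else 0.0
    return max(actions, key=rate)
```

Borderline instances are exactly those where the estimated rates of the candidate actions are close, which is where the focused training procedure in the abstract concentrates its effort.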

    Predicting the outcome of patients with subarachnoid hemorrhage using machine learning techniques

    Background: Outcome prediction for subarachnoid hemorrhage (SAH) helps guide care and compare global management strategies. Logistic regression models for outcome prediction may be cumbersome to apply in clinical practice. Objective: To use machine learning techniques to build a model of outcome prediction that makes the knowledge discovered from the data explicit and communicable to domain experts. Material and methods: A derivation cohort (n = 441) of nonselected SAH cases was analyzed using different classification algorithms to generate decision trees and decision rules. Algorithms used were C4.5, fast decision tree learner, partial decision trees, repeated incremental pruning to produce error reduction, nearest neighbor with generalization, and ripple down rule learner. Outcome was dichotomized into favorable [Glasgow outcome scale (GOS) = I–II] and poor (GOS = III–V). An independent cohort (n = 193) was used for validation. An exploratory questionnaire was given to potential users (specialist doctors) to gather their opinion on the classifier and its usability in clinical routine. Results: The best classifier was obtained with the C4.5 algorithm. It uses only two attributes [World Federation of Neurological Surgeons (WFNS) and Fisher's scale] and leads to a simple decision tree. The accuracy of the classifier [area under the ROC curve (AUC) = 0.84; confidence interval (CI) = 0.80–0.88] is similar to that obtained by a logistic regression model (AUC = 0.86; CI = 0.83–0.89) derived from the same data and is considered a better fit for clinical use. This work was supported in part by the Spanish Ministries of Science under Grant TRA2007-67374-C02-02 and Health under Grant FIS PI 070152. The work of A. Lagares and J.F. Alen was supported by the Fundación Mutua Madrileña.
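C4.5 grows its tree by repeatedly choosing the attribute and threshold whose split most reduces class entropy, which is how a two-attribute tree like the one the abstract reports can emerge from a larger attribute set. A minimal sketch of that split criterion (information gain; C4.5 proper normalizes this into a gain ratio, and the toy data below is illustrative, not the SAH cohort):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(values, labels, threshold):
    """Entropy reduction from the binary split value <= threshold:
    the quantity C4.5 maximizes (up to its gain-ratio normalization)."""
    left = [y for v, y in zip(values, labels) if v <= threshold]
    right = [y for v, y in zip(values, labels) if v > threshold]
    n = len(labels)
    split_entropy = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - split_entropy
```

When one attribute (such as a clinical grading scale) separates the outcome classes almost perfectly, its gain dominates and the resulting tree stays small and readable, which is the property the authors value for clinical use.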

    Multi-test Decision Tree and its Application to Microarray Data Classification

    Objective: Desirable properties of tools used to investigate biological data are easy-to-understand models and predictive decisions. Decision trees are particularly promising in this regard due to their comprehensible nature, which resembles the hierarchical process of human decision making. However, existing algorithms for learning decision trees have a tendency to underfit gene expression data. The main aim of this work is to improve the performance and stability of decision trees with only a small increase in their complexity. Methods: We propose a multi-test decision tree (MTDT); our main contribution is the application of several univariate tests in each non-terminal node of the decision tree. We also search for alternative, lower-ranked features in order to obtain more stable and reliable predictions. Results: Experimental validation was performed on several real-life gene expression datasets. Comparison results with eight classifiers show that MTDT has a statistically significantly higher accuracy than popular decision tree classifiers, and it was highly competitive with ensemble learning algorithms. The proposed solution managed to outperform its baseline algorithm on 14 datasets by an average of 6 percent. A study performed on one of the datasets showed that the discovered genes used in the MTDT classification model are supported by biological evidence in the literature. Conclusion: This paper introduces a new type of decision tree which is more suitable for solving biological problems. MTDTs are relatively easy to analyze and much more powerful in modeling high-dimensional microarray data than their popular counterparts.
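The key idea, several univariate tests per internal node combined by vote, can be sketched as follows. This is a simplified illustration of the multi-test principle, not the published MTDT algorithm: test quality is scored here by plain accuracy, the fixed orientation of each threshold test and the number of tests per node are assumptions.

```python
import numpy as np

def multi_test_split(X, y, n_tests=3):
    """Pick the n_tests best univariate threshold tests (by training
    accuracy) for one tree node. Keeping several tests instead of one
    is the multi-test idea; lower-ranked features can enter the node."""
    tests = []
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j])[:-1]:
            pred = (X[:, j] > thr).astype(int)
            acc = np.mean(pred == y)
            tests.append((acc, j, thr))
    tests.sort(reverse=True)
    return [(j, thr) for _, j, thr in tests[:n_tests]]

def node_predict(tests, x):
    """Route an instance through the node by majority vote of its tests,
    which makes the split more stable than any single univariate test."""
    votes = sum(int(x[j] > thr) for j, thr in tests)
    return int(votes > len(tests) / 2)
```

On noisy, high-dimensional microarray data, a single best univariate split can change entirely under small perturbations of the training set; voting over several near-best tests dampens that instability.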