
    Accurate and Interpretable Regression Trees using Oracle Coaching

    No full text
    In many real-world scenarios, predictive models need to be interpretable, thus ruling out many machine learning techniques known to produce very accurate models, e.g., neural networks, support vector machines and all ensemble schemes. Most often, tree models or rule sets are used instead, typically resulting in significantly lower predictive performance. The overall purpose of oracle coaching is to reduce this accuracy vs. comprehensibility trade-off by producing interpretable models optimized for the specific production set at hand. The method requires production set inputs to be present when generating the predictive model, a demand fulfilled in most, but not all, predictive modeling scenarios. In oracle coaching, a highly accurate, but opaque, model is first induced from the training data. This model ("the oracle") is then used to label both the training instances and the production instances. Finally, interpretable models are trained using different combinations of the resulting data sets. In this paper, the oracle coaching produces regression trees, using neural networks and random forests as oracles. The experiments, using 32 publicly available data sets, show that the oracle coaching leads to significantly improved predictive performance, compared to standard induction. In addition, it is also shown that a highly accurate opaque model can be successfully used as a pre-processing step to reduce the noise typically present in data, even in situations where production inputs are not available. In fact, just augmenting or replacing training data with another copy of the training set, but with the predictions from the opaque model as targets, produced significantly more accurate and/or more compact regression trees.

    Sponsorship: This work was supported by the Swedish Foundation for Strategic Research through the project High-Performance Data Mining for Drug Effect Detection (IIS11-0053), the Swedish Retail and Wholesale Development Council through the project Innovative Business Intelligence Tools (2013:5) and the Knowledge Foundation through the project Big Data Analytics by Online Ensemble Learning (20120192).
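    The abstract describes the oracle-coaching procedure at a high level. Below is a minimal sketch of that procedure, assuming a scikit-learn setup: a random forest stands in as the opaque oracle, a shallow decision tree as the interpretable model, and the California housing data as a stand-in for the paper's 32 data sets. The data set, hyperparameters, and train/production split are illustrative and not the authors' experimental configuration.

```python
# Sketch of oracle coaching for regression trees (assumed pipeline, not the
# authors' exact implementation). A random forest serves as the opaque
# "oracle"; a shallow decision tree is the interpretable model.
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = fetch_california_housing(return_X_y=True)
# "Production" instances: their inputs are available when the model is built,
# their targets are not.
X_train, X_prod, y_train, y_prod = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# 1. Induce a highly accurate but opaque model (the oracle) from the training data.
oracle = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

# 2. Use the oracle to (re)label both the training and the production inputs.
y_train_oracle = oracle.predict(X_train)
y_prod_oracle = oracle.predict(X_prod)

# 3a. Standard induction: interpretable tree on the original training data.
standard = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_train, y_train)

# 3b. Oracle coaching: tree trained on the original training data plus
#     oracle-labelled production inputs (one of several possible combinations).
X_coach = np.vstack([X_train, X_prod])
y_coach = np.concatenate([y_train, y_prod_oracle])
coached = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_coach, y_coach)

# 3c. Noise-reduction variant (no production inputs needed): replace the
#     training targets with the oracle's predictions on the training set.
denoised = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_train, y_train_oracle)

for name, model in [("standard", standard), ("coached", coached), ("denoised", denoised)]:
    mse = mean_squared_error(y_prod, model.predict(X_prod))
    print(f"{name:9s} production MSE: {mse:.3f}")
```

    The "denoised" variant corresponds to the noise-reduction result mentioned at the end of the abstract: only the training set is relabelled with the oracle's predictions, so it can be used even when production inputs are unavailable.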
