38 research outputs found

    Developing Microprocessor Based Expert Models for Instrument Interpretation

    No full text
    We describe a scheme for developing expert interpretive systems and transferring them to a microprocessor environment. The scheme has been successfully implemented and tested by producing a program for interpreting results from a widely used medical laboratory instrument, a scanning densitometer. Specialists in the field of serum protein electrophoresis analysis provided the knowledge needed to build an interpretive model using EXPERT, a general production rule system for developing consultation models. By constraining a few of the structures used in the general model, it was possible to develop procedures for automatically translating the model to a specialized application program and then to a microprocessor assembly language program. Thus, the model development can take place on a large machine, using established techniques for capturing and conveniently updating expert knowledge structures, while the final interpretive program can be targeted to a microprocessor depending on the application. Our experience in carrying out the complete process illustrates many of the requirements involved in taking an expert system from its early development phase to actual implementation and use in a real-world application.
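
    As a rough illustration of the kind of constrained rule structure such a scheme relies on, the sketch below represents interpretive rules as plain data and mechanically emits a specialized, standalone interpretive procedure from them. It is only a hedged approximation: the rule syntax, findings, and thresholds are invented placeholders, not the actual EXPERT formalism or the densitometer application.

```python
# Hypothetical sketch: interpretive rules as data, plus a translator that
# emits a flat, self-contained procedure of the kind that could later be
# compiled for a small target. Findings and thresholds are placeholders.
rules = [
    # (finding, comparison, threshold, conclusion)
    ("albumin_fraction", "<", 0.35, "possible low albumin"),
    ("gamma_fraction",   ">", 0.22, "possible elevated gamma region"),
]

def translate(rules):
    """Emit source text for a standalone interpretive function."""
    lines = ["def interpret(findings):", "    conclusions = []"]
    for finding, op, threshold, conclusion in rules:
        lines.append(f"    if findings['{finding}'] {op} {threshold}:")
        lines.append(f"        conclusions.append('{conclusion}')")
    lines.append("    return conclusions")
    return "\n".join(lines)

print(translate(rules))
```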

    Briefly Noted

    No full text

    Predictive data mining : a practical guide

    No full text
    xii, 228 p. : ill. ; 23 cm

    Estimating Performance Gains for Voted Decision Trees

    No full text
    Decision tree induction is a prominent learning method, typically yielding quick results with competitive predictive performance. However, it is not unusual to find other automated learning methods that exceed the predictive performance of a decision tree on the same application. To achieve near-optimal classification results, resampling techniques can be employed to generate multiple decision-tree solutions. These decision trees are individually applied and their answers voted. The potential for exceptionally strong performance is counterbalanced by the substantial increase in computing time to induce many decision trees. We describe estimators of predictive performance for voted decision trees induced from bootstrap (bagged) or adaptive (boosted) resampling. The estimates are found by examining the performance of a single tree and its pruned subtrees over a single training set and a large test set. Using publicly available collections of data, we show that these estimate..
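
    For concreteness, the sketch below induces decision trees on bootstrap (bagged) resamples and combines their answers by majority vote, i.e. the procedure whose performance the paper estimates. It does not reproduce the paper's estimator itself; scikit-learn, the dataset, and the ensemble size are assumed for illustration.

```python
# Minimal bagged-and-voted decision trees (the procedure being estimated),
# using scikit-learn; the dataset and ensemble size are illustrative choices.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
trees = []
for _ in range(25):                                  # number of bagged trees
    idx = rng.integers(0, len(X_tr), len(X_tr))      # bootstrap resample
    trees.append(DecisionTreeClassifier().fit(X_tr[idx], y_tr[idx]))

votes = np.array([t.predict(X_te) for t in trees])   # one row of votes per tree
voted = (votes.mean(axis=0) > 0.5).astype(int)       # majority vote per example

single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("single tree accuracy:", single.score(X_te, y_te))
print("voted trees accuracy:", (voted == y_te).mean())
```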

    Solving Regression Problems with Rule-based Ensemble Classifiers

    No full text
    We describe a lightweight learning method that induces an ensemble of decision-rule solutions for regression problems. Instead of direct prediction of a continuous output variable, the method discretizes the variable by k-means clustering and solves the resultant classification problem. Predictions on new examples are made by averaging the mean values of classes with votes that are close in number to the most likely class. We provide experimental evidence that this indirect approach can often yield strong results for many applications, generally outperforming direct approaches such as regression trees and rivaling bagged regression trees.
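
    The prediction scheme is concrete enough to sketch. In the hedged example below, the continuous target is discretized by k-means, an ensemble classifier is trained on the resulting class labels, and a prediction averages the mean target values of classes whose vote shares are close to the winning class. A random forest stands in for the paper's decision-rule ensemble, and k, the closeness threshold, and the synthetic data are illustrative assumptions.

```python
# Regression via discretize-then-classify, following the abstract's outline.
# RandomForestClassifier stands in for a decision-rule ensemble; k and the
# vote-closeness threshold are illustrative, not the paper's settings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestClassifier

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

k = 8                                                # number of target clusters
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(y.reshape(-1, 1))
labels = km.labels_
class_means = np.array([y[labels == c].mean() for c in range(k)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

def predict(clf, X_new, close=0.8):
    means = class_means[clf.classes_]                # mean target value per class
    proba = clf.predict_proba(X_new)                 # fraction of tree votes per class
    preds = []
    for p in proba:
        keep = p >= close * p.max()                  # classes voted nearly as often as the winner
        preds.append(np.average(means[keep], weights=p[keep]))
    return np.array(preds)

print(predict(clf, X[:3]))
print(y[:3])
```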

    An empirical comparison of pattern recognition, neural nets, and machine learning classification methods

    No full text
    Classification methods from statistical pattern recognition, neural nets, and machine learning were applied to four real-world data sets. Each of these data sets has been previously analyzed and reported in the statistical, medical, or machine learning literature. The data sets are characterized by statistical uncertainty; there is no completely accurate solution to these problems. Training and testing or resampling techniques are used to estimate the true error rates of the classification methods. Detailed attention is given to the analysis of performance of the neural nets using back propagation. For these problems, which have relatively few hypotheses and features, the machine learning procedures for rule induction or tree induction clearly performed best.
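
    As a rough sketch of the evaluation protocol described (estimating true error rates by repeated train/test resampling and comparing classifier families), the example below scores a decision tree, a back-propagation network, and a nearest-neighbor classifier on resampled splits. The models, dataset, and split settings are illustrative stand-ins, not the original study's configurations.

```python
# Estimate error rates by repeated train/test resampling for several
# classifier families; all concrete choices here are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
cv = ShuffleSplit(n_splits=10, test_size=0.3, random_state=0)   # resampled splits

models = {
    "tree induction": DecisionTreeClassifier(random_state=0),
    "back propagation": make_pipeline(StandardScaler(),
                                      MLPClassifier(max_iter=1000, random_state=0)),
    "nearest neighbor": KNeighborsClassifier(),
}
for name, model in models.items():
    err = 1.0 - cross_val_score(model, X, y, cv=cv)
    print(f"{name}: estimated error rate {err.mean():.3f} (+/- {err.std():.3f})")
```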