42,377 research outputs found
A comparative survey of integrated learning systems
This paper presents the duction framework for unifying the three basic forms of inference - deduction, abduction, and induction - by specifying the possible relationships and influences among them in the context of integrated learning. Special assumptive forms of inference are defined that extend the use of these inference methods, and the properties of these forms are explored. A comparison to a related inference-based learning framework is made. Finally, several existing integrated learning programs are examined from the perspective of the duction framework.
Learning approximate diagnosis
Model-based diagnosis (MBD) provides several advantages over experiential rule-based systems. A principal shortcoming of MBD is that it learns nothing from any given example: an MBD system facing the same task a second time will incur the same computational effort as it did the first time. Our earlier work on incorporating explanation-based learning (EBL) in MBD [4] suggested a diagnostic architecture integrating EBL and MBD components. In this architecture, EBL was used to learn diagnostic rules. But the diagnoses proposed by the rules could be erroneous, so constraint suspension testing was used to check all proposed diagnoses. Insisting on perfect accuracy causes the performance of this scheme for "learning while doing" to deteriorate rapidly with the size of the device to be diagnosed. In this paper, we describe a method for trading off accuracy for efficiency. In this approach, most diagnosis problems are handled by the associational rules learned from previous problems; model-based reasoning and learning are activated only when performance drops below a given threshold. We present empirical results on circuits with increasing numbers of components, illustrating how this approach scales up.
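The trade-off described in this abstract can be illustrated with a short sketch: learned associational rules answer repeat problems cheaply, and the expensive model-based procedure runs only while measured rule accuracy sits below a threshold. All names, the toy fault model, and the threshold policy below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: a rule cache fielding most queries, with fallback to
# an expensive exact diagnoser while rule accuracy is below threshold.

def exact_diagnose(symptoms):
    # Toy stand-in for costly model-based diagnosis (e.g. constraint
    # suspension); here it just picks a deterministic "faulty" symptom.
    return max(symptoms)

def diagnose(symptoms, cache, stats, threshold=0.9):
    key = frozenset(symptoms)
    accuracy = stats["hits"] / stats["trials"] if stats["trials"] else 0.0
    if key in cache and accuracy >= threshold:
        return cache[key]                    # cheap associational answer
    fault = exact_diagnose(symptoms)         # fall back to the model
    stats["trials"] += 1
    stats["hits"] += int(cache.get(key) == fault)
    cache[key] = fault                       # learn a rule for next time
    return fault

cache, stats = {}, {"hits": 0, "trials": 0}
first = diagnose({"g1_bad", "g2_bad"}, cache, stats)   # model-based pass
second = diagnose({"g1_bad", "g2_bad"}, cache, stats)  # rule verified by model
```

Once accumulated accuracy reaches the threshold, repeat problems are answered from the cache without invoking the model at all, which is what lets the scheme scale with device size.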
Ensemble Learning for Free with Evolutionary Algorithms?
Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as the final result. Meanwhile, Ensemble Learning, one of the most efficient approaches in supervised Machine Learning over the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation thus receives increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing classifier diversity, is presented. Further, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble either from the final population only (Off-line) or incrementally along evolution (On-line). Experiments on a set of benchmark problems show that Off-line outperforms single-hypothesis evolutionary learning and state-of-the-art Boosting, and generates smaller classifier ensembles.
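The margin-based extraction idea in this abstract can be sketched as follows. Classifiers are plain prediction functions, and the greedy minimum-margin criterion is one illustrative reading of a margin-based selection rule, not the paper's exact algorithm.

```python
# Sketch of Off-line ensemble extraction: greedily grow an ensemble by
# adding the classifier that most improves the worst-case vote margin.

def margin(ensemble, x, y):
    """Vote margin on (x, y): votes for the true label minus votes
    for the strongest wrong label."""
    votes = {}
    for clf in ensemble:
        p = clf(x)
        votes[p] = votes.get(p, 0) + 1
    correct = votes.get(y, 0)
    wrong = max((v for p, v in votes.items() if p != y), default=0)
    return correct - wrong

def extract_ensemble(population, data, max_size=5):
    """Greedy margin-based selection from the final population."""
    ensemble = []
    while len(ensemble) < max_size:
        best, best_score = None, None
        for clf in population:
            if clf in ensemble:
                continue
            score = min(margin(ensemble + [clf], x, y) for x, y in data)
            if best_score is None or score > best_score:
                best, best_score = clf, score
        if best is None:
            break
        ensemble.append(best)
    return ensemble

# Toy demonstration: two constant classifiers and one perfect one.
data = [((0,), 0), ((1,), 1)]
pop = [lambda x: 0, lambda x: 1, lambda x: x[0]]
ens = extract_ensemble(pop, data, max_size=3)
```

The perfect classifier is selected first because it alone achieves a positive minimum margin; the On-line variant would apply the same criterion incrementally during evolution rather than once at the end.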
Learning multiple fault diagnosis
This paper describes two methods for integrating model-based diagnosis (MBD) and explanation-based learning. The first method (EBL) uses a generate-test-debug paradigm, generating diagnostic hypotheses using learned associational rules that summarize model-based diagnostic experiences. This strategy is a form of "learning while doing" model-based troubleshooting and could be called "online learning." The second diagnosis and learning method described here (EBL-STATIC) involves "learning in advance": learning begins in a training phase prior to performance or testing. Empirical results of computational experiments comparing the learning methods with MBD on two devices (the polybox and the binary full adder) are reported. For the same diagnostic performance, EBL-STATIC is several orders of magnitude faster than MBD, while EBL can cause performance slow-down.
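The generate-test-debug paradigm named in this abstract can be sketched in a few lines: learned rules generate candidate diagnoses, the model tests them, and a failure triggers full model-based diagnosis plus the learning of a new rule. The rule representation and the toy model checks below are stand-ins, not the paper's polybox or full-adder implementation.

```python
# Sketch of a generate-test-debug diagnosis loop with rule learning.

def generate_test_debug(symptoms, rules, model_check, model_diagnose):
    # Generate: propose candidates from learned associational rules.
    candidates = [fault for trigger, fault in rules if trigger <= symptoms]
    # Test: keep only candidates the model actually confirms.
    confirmed = [f for f in candidates if model_check(symptoms, f)]
    if confirmed:
        return confirmed
    # Debug: rules failed, so run full model-based diagnosis and
    # summarize the experience as a new rule for future problems.
    fault = model_diagnose(symptoms)
    rules.append((frozenset(symptoms), fault))
    return [fault]

# Toy driver: the model always implicates one hypothetical component.
rules = []
model_check = lambda s, f: f == "adder1"
model_diagnose = lambda s: "adder1"
out1 = generate_test_debug({"sum_wrong"}, rules, model_check, model_diagnose)
out2 = generate_test_debug({"sum_wrong"}, rules, model_check, model_diagnose)
```

The first call pays the model-based cost and learns a rule; the second is answered by the rule and confirmed cheaply, which is the "learning while doing" behaviour the abstract contrasts with training-phase (static) learning.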