    Concentration and Confidence for Discrete Bayesian Sequence Predictors

    Bayesian sequence prediction is a simple technique for predicting future symbols sampled from an unknown measure on infinite sequences over a countable alphabet. While strong bounds on the expected cumulative error are known, there are only limited results on the distribution of this error. We prove tight high-probability bounds on the cumulative error, which is measured in terms of the Kullback-Leibler (KL) divergence. We also consider the problem of constructing upper confidence bounds on the KL and Hellinger errors similar to those constructed from Hoeffding-like bounds in the i.i.d. case. The new results are applied to show that Bayesian sequence prediction can be used in the Knows What It Knows (KWIK) framework with bounds that match the state-of-the-art. (Comment: 17 pages)
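The setting above can be illustrated with a minimal sketch: a Bayes mixture over a finite class of Bernoulli (coin-bias) hypotheses, predicting each next symbol from the posterior-weighted mean and accumulating the instantaneous KL divergence between the true and predictive next-symbol distributions. The finite hypothesis class, uniform prior, and parameter values here are illustrative assumptions, not the paper's construction.

```python
import math
import random

def bayes_mixture_run(thetas, prior, true_theta, n, seed=0):
    """Sequentially predict Bernoulli symbols with a Bayes mixture over a
    finite class of coin-bias hypotheses; return the cumulative KL error."""
    rng = random.Random(seed)
    logw = [math.log(p) for p in prior]  # log posterior weights
    cum_kl = 0.0
    for _ in range(n):
        # Normalize weights and form the predictive probability P(x = 1).
        m = max(logw)
        w = [math.exp(l - m) for l in logw]
        z = sum(w)
        q = sum(wi * th for wi, th in zip(w, thetas)) / z
        # Instantaneous KL between the true and predictive next-symbol laws.
        cum_kl += (true_theta * math.log(true_theta / q)
                   + (1 - true_theta) * math.log((1 - true_theta) / (1 - q)))
        # Sample the symbol and do the Bayes update: multiply each weight
        # by that hypothesis's likelihood of the observed symbol.
        x = 1 if rng.random() < true_theta else 0
        logw = [l + math.log(th if x else 1.0 - th)
                for l, th in zip(logw, thetas)]
    return cum_kl

thetas = [0.1, 0.3, 0.5, 0.7, 0.9]   # hypothetical model class
prior = [1.0 / len(thetas)] * len(thetas)
kl = bayes_mixture_run(thetas, prior, true_theta=0.7, n=500)
```

When the true measure is in the class, the classical result bounds the *expected* cumulative KL error by the negative log prior weight of the truth (here ln 5 ≈ 1.61); the abstract's contribution concerns how the realized error concentrates around such bounds.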

    A Diversity-Accuracy Measure for Homogenous Ensemble Selection

    Several selection methods in the literature are essentially based on an evaluation function that determines whether a model M contributes positively to boosting the performance of the whole ensemble. In this paper, we propose a method called DIversity and ACcuracy for Ensemble Selection (DIACES) using an evaluation function based on both diversity and accuracy. The method is applied to homogeneous ensembles composed of C4.5 decision trees and is based on a hill climbing strategy. This allows selecting ensembles with the best compromise between maximum diversity and minimum error rate. Comparative studies show that in most cases the proposed method generates reduced-size ensembles with better performance than usual ensemble simplification methods.
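The abstract's hill-climbing selection over a diversity-accuracy evaluation function can be sketched as follows. This is not DIACES itself: the models are represented only by their binary prediction vectors, and the weighted sum of majority-vote accuracy and mean pairwise disagreement (with weight `alpha`) is an assumed stand-in for the paper's evaluation function.

```python
def majority_vote(preds, i):
    # Binary majority vote at position i; ties break to class 0.
    votes = sum(p[i] for p in preds)
    return 1 if 2 * votes > len(preds) else 0

def ensemble_accuracy(preds, y):
    # Accuracy of the ensemble's majority vote against labels y.
    hits = sum(majority_vote(preds, i) == y[i] for i in range(len(y)))
    return hits / len(y)

def diversity(preds):
    # Mean pairwise disagreement rate between ensemble members.
    if len(preds) < 2:
        return 0.0
    n = len(preds[0])
    total = pairs = 0
    for a in range(len(preds)):
        for b in range(a + 1, len(preds)):
            total += sum(pa != pb for pa, pb in zip(preds[a], preds[b])) / n
            pairs += 1
    return total / pairs

def hill_climb_select(all_preds, y, alpha=0.75):
    # Greedy forward hill climbing on a combined diversity-accuracy score:
    # keep adding the model that most improves the score; stop when none helps.
    selected, remaining = [], list(range(len(all_preds)))

    def score(idx):
        chosen = [all_preds[i] for i in idx]
        return (alpha * ensemble_accuracy(chosen, y)
                + (1 - alpha) * diversity(chosen))

    best = float("-inf")
    while remaining:
        s, j = max((score(selected + [j]), j) for j in remaining)
        if s <= best:
            break
        best, selected = s, selected + [j]
        remaining.remove(j)
    return selected
```

In a real pipeline the prediction vectors would come from trained C4.5-style trees evaluated on a validation set; the trade-off weight `alpha` controls the compromise between minimum error rate and maximum diversity that the abstract describes.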

    On-line learning with minimal degradation in feedforward networks

    Dealing with non-stationary processes requires quick adaptation while at the same time avoiding catastrophic forgetting. A neural learning technique that satisfies these requirements, without sacrificing the benefits of distributed representations, is presented. It relies on a formalization of the problem as the minimization of the error over the previously learned input-output (i-o) patterns, subject to the constraint of perfect encoding of the new pattern. This constrained optimization problem is then transformed into an unconstrained one with hidden-unit activations as variables. The new formulation naturally leads to an algorithm for solving the problem, which we call Learning with Minimal Degradation (LMD). Some experimental comparisons of the performance of LMD with back-propagation are provided which, besides showing the advantages of using LMD, reveal the dependence of forgetting on the learning rate in back-propagation. We also explain why overtraining affects forgetting and fault-tolerance, which are seen as related problems. (Peer Reviewed)
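The core formulation above, minimizing error over previously learned patterns subject to exactly encoding the new pattern, can be illustrated in a deliberately simplified linear analogue (not the LMD algorithm, which works through hidden-unit activations in a feedforward network). Here the equality-constrained least-squares problem is solved via its KKT system: minimize ||A w - b||^2 subject to c . w = d.

```python
import numpy as np

def fit_with_constraint(A, b, c, d):
    """Least-squares fit to the old patterns (A w ~ b) under the hard
    constraint c . w = d (perfect encoding of the new pattern).

    Solves the KKT system  [2 A^T A  c; c^T  0] [w; lam] = [2 A^T b; d],
    where lam is the Lagrange multiplier of the equality constraint."""
    n = A.shape[1]
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = 2.0 * A.T @ A
    K[:n, n] = c
    K[n, :n] = c
    rhs = np.concatenate([2.0 * A.T @ b, [d]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # drop the Lagrange multiplier

# Old patterns pull the weights toward (1, 1); the new pattern
# forces the first coordinate to 5 exactly.
A = np.eye(2)
b = np.array([1.0, 1.0])
w = fit_with_constraint(A, b, c=np.array([1.0, 0.0]), d=5.0)
```

LMD's key move, which this sketch does not capture, is to change variables from weights to hidden-unit activations, turning the constrained problem into an unconstrained one that is cheaper to solve inside a nonlinear network.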

    Food aid and child nutrition in rural Ethiopia

    "Food aid programs have become increasingly important for disaster relief in many developing countries. In Ethiopia, a drought-stricken economy with one of the lowest per capita incomes in the world, food aid amounted to almost 10 million metric tons (mt) from 1984 to 1998, almost 10 percent of annual cereal production. Because of the importance of food aid in Ethiopia, much effort has been devoted to evaluation of its effectiveness. … Many evaluations of food aid have examined its impact on household calorie availability. This paper focuses on the effects of food aid on individual nutritional status, as measured by indicators of child nutrition." (from the authors' abstract)