133,038 research outputs found

    Active Learning with Statistical Models

    For many types of machine learning algorithms, one can compute the statistically 'optimal' way to select training data. In this paper, we review how optimal data selection techniques have been used with feedforward neural networks. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are computationally expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate. Empirically, we observe that the optimality criterion sharply decreases the number of training examples the learner needs in order to achieve good performance.
    Comment: See http://www.jair.org/ for any accompanying file
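    The variance-minimising selection criterion is easy to illustrate for locally weighted regression. Below is a minimal sketch in a 1-D toy setting: it fits a locally weighted linear model, estimates the predictive variance at candidate query points, and queries where that variance is largest. This is a simplified uncertainty-sampling proxy for the paper's expected-variance criterion; the function names, variance formula and constants are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lwr_fit_predict(X, y, xq, h=0.3, noise=0.05):
    """Locally weighted linear regression at query xq (toy 1-D sketch)."""
    # Gaussian kernel weights centred on the query point
    w = np.exp(-0.5 * ((X - xq) / h) ** 2)
    A = np.column_stack([np.ones_like(X), X])   # local linear design matrix
    WA = A * w[:, None]
    # Weighted normal equations (A^T W A) beta = A^T W y, plus a tiny ridge
    # term for numerical stability
    G = A.T @ WA + 1e-8 * np.eye(2)
    beta = np.linalg.solve(G, WA.T @ y)
    phi = np.array([1.0, xq])
    pred = phi @ beta
    # Rough predictive variance of the local fit: sigma^2 * phi^T G^{-1} phi
    var = noise**2 * (phi @ np.linalg.solve(G, phi))
    return pred, var

# Active selection: among candidate inputs, query where the current model's
# predictive variance is largest -- a simple stand-in for minimising the
# learner's expected output variance.
X = rng.uniform(-1.0, 1.0, 5)
y = np.sin(3.0 * X) + 0.05 * rng.standard_normal(5)
candidates = np.linspace(-1.0, 1.0, 50)
variances = [lwr_fit_predict(X, y, c)[1] for c in candidates]
x_next = candidates[int(np.argmax(variances))]
print(f"next query at x = {x_next:.2f}")
```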

    Learning Discontinuities with Product-of-Sigmoids for Switching between Local Models

    Sensorimotor data from many interesting physical interactions comprises discontinuities. While existing locally weighted learning approaches aim at learning smooth function…
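    The abstract is cut off, but the title points to the core idea: gate between local models with a product of sigmoids, so that a near-discontinuous switch can be represented where smooth locally weighted fits struggle. A minimal sketch under that reading, with two hypothetical local linear models and a single steep sigmoid factor; the parameterisation is assumed, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def product_of_sigmoids_gate(x, factors):
    """Gate value in (0, 1): a product of sigmoids of linear functions.

    Each factor sigmoid(a*x + b) contributes one soft half-space; the
    product localises the gate to their intersection and approaches a
    sharp switching boundary as the slopes grow large. (Illustrative
    parameterisation, not the paper's exact model.)
    """
    g = 1.0
    for a, b in factors:
        g *= sigmoid(a * x + b)
    return g

def switched_prediction(x):
    # Two hypothetical local linear models on either side of x = 0
    f_left = 2.0 * x + 1.0
    f_right = -x + 0.5
    g = product_of_sigmoids_gate(x, [(50.0, 0.0)])  # steep switch near x = 0
    return (1.0 - g) * f_left + g * f_right

xs = np.linspace(-1.0, 1.0, 9)
print([round(float(switched_prediction(x)), 3) for x in xs])
```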

    Explicit Forgetting Algorithms for Memory Based Learning

    Memory-based learning algorithms lack a mechanism for tracking time-varying associative mappings. To widen their applicability, they must incorporate explicit forgetting algorithms to selectively delete observations. We describe Time-Weighted, Locally-Weighted and Performance-Error Weighted forgetting algorithms. These were evaluated with a Nearest-Neighbor Learner in a simple classification task. Locally-Weighted Forgetting outperformed Time-Weighted Forgetting under time-varying sampling distributions and mappings, and did equally well when only the mapping varied. Performance-Error Weighted Forgetting tracked about as well as the other algorithms, but was superior since it permitted the Nearest-Neighbor learner to approach the Bayes' misclassification rate when the input-output mapping became stationary.
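    Of the three schemes, Locally-Weighted Forgetting is the easiest to sketch: stored examples lose strength when new observations arrive nearby, and are deleted once their strength falls below a threshold, so the memory tracks a drifting mapping. The 1-NN sketch below is illustrative only; the class name, constants and decay rule are assumptions, not the paper's specification.

```python
import numpy as np

class LWFNearestNeighbor:
    """1-NN classifier with a Locally-Weighted Forgetting rule (sketch)."""

    def __init__(self, decay=0.8, radius=0.2, threshold=0.1):
        self.X, self.y, self.s = [], [], []  # memory: inputs, labels, strengths
        self.decay, self.radius, self.threshold = decay, radius, threshold

    def observe(self, x, label):
        x = np.asarray(x, dtype=float)
        kept = []
        for xi, yi, si in zip(self.X, self.y, self.s):
            # Stored examples near the new observation are weakened
            if np.linalg.norm(xi - x) < self.radius:
                si *= self.decay
            # Examples whose strength falls below threshold are forgotten
            if si >= self.threshold:
                kept.append((xi, yi, si))
        kept.append((x, label, 1.0))  # store the new example at full strength
        self.X, self.y, self.s = map(list, zip(*kept))

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        d = [np.linalg.norm(xi - x) for xi in self.X]
        return self.y[int(np.argmin(d))]

# Hypothetical drifting mapping: the label of inputs near 0.5 flips over time
nn = LWFNearestNeighbor()
for t in range(30):
    x = np.array([0.5 + 0.01 * np.sin(t)])
    nn.observe(x, label=0 if t < 15 else 1)
print(nn.predict(np.array([0.5])))  # tracks the recent label, 1
```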

    Kernel Carpentry for Online Regression using Randomly Varying Coefficient Model

    We present a Bayesian formulation of locally weighted learning (LWL) using the novel concept of a randomly varying coefficient model. Based on thi…
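    Though the abstract is truncated, the key qualitative property of a randomly varying coefficient model is that a local expert's regression coefficients are treated as random, with uncertainty that grows away from the expert's centre, so each expert is confident only locally. A toy sketch of one such expert's predictive mean and variance follows; the functional forms and parameter names are assumptions, not the paper's model.

```python
import numpy as np

def rvc_local_expert(x, centre, beta_mean, coeff_var=0.5, noise=0.05):
    """One local linear expert in the spirit of a randomly varying
    coefficient model: predictive variance is small near the expert's
    centre and inflates with distance from it. (Qualitative sketch;
    forms and constants are illustrative assumptions.)
    """
    phi = np.array([1.0, x - centre])  # local linear features
    mean = phi @ beta_mean             # predictive mean
    d2 = (x - centre) ** 2
    var = noise**2 + coeff_var * d2    # coefficient uncertainty grows with distance
    return mean, var

m, v = rvc_local_expert(0.3, centre=0.0, beta_mean=np.array([0.2, 1.5]))
print(f"mean = {m:.3f}, variance = {v:.3f}")
```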

    Bayesian locally weighted online learning

    Locally weighted regression is a non-parametric regression technique that is capable of coping with non-stationarity of the input distribution. Online algorithms like Receptive Field Weighted Regression and Locally Weighted Projection Regression use a sparse representation of the locally weighted model to approximate a target function, resulting in an efficient learning algorithm. However, these algorithms are fairly sensitive to parameter initializations and have multiple open learning parameters that are usually set using some insight into the problem and local heuristics. In this thesis, we attempt to alleviate these problems by using a probabilistic formulation of locally weighted regression followed by a principled Bayesian inference of the parameters. In the Randomly Varying Coefficient (RVC) model developed in this thesis, locally weighted regression is set up as an ensemble of regression experts that provide a local linear approximation to the target function. We train the individual experts independently and then combine their predictions using a Product of Experts formalism. Independent training of experts allows us to adapt the complexity of the regression model dynamically while learning in an online fashion. The local experts themselves are modeled using a hierarchical Bayesian probability distribution, with Variational Bayesian Expectation Maximization steps to learn the posterior distributions over the parameters. The Bayesian modeling of the local experts leads to an inference procedure that is fairly insensitive to parameter initializations and avoids problems like overfitting. We further exploit the Bayesian inference procedure to derive efficient online update rules for the parameters. Learning in the regression setting is also extended to handle a classification task by making use of logistic regression to model discrete class labels. The main contribution of the thesis is a spatially localised online learning algorithm, set up in a probabilistic framework with a principled Bayesian inference rule for the parameters of the model, that learns local models completely independently of each other, uses only local information and adapts the local model complexity in a data-driven fashion. This thesis, for the first time, brings together the computational efficiency and the adaptability of 'non-competitive' locally weighted learning schemes and the modelling guarantees of the Bayesian formulation.
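    The Product of Experts combination mentioned in the abstract has a closed form for Gaussian experts: the product of Gaussians is Gaussian, with precision equal to the sum of the experts' precisions, so the combined mean is precision-weighted and uncertain experts (typically those far from their centres) are effectively ignored. A minimal sketch of that combination step; the input values are hypothetical.

```python
import numpy as np

def product_of_experts(means, variances):
    """Combine independent Gaussian expert predictions by a Product of
    Experts: combined precision is the sum of expert precisions, and the
    combined mean is the precision-weighted average of expert means.
    """
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    var = 1.0 / precisions.sum()
    mean = var * (precisions @ means)
    return mean, var

# Hypothetical predictions from three local experts at one query point;
# the third expert is far from its centre and hence very uncertain.
mean, var = product_of_experts([1.0, 1.2, 3.0], [0.01, 0.05, 2.0])
print(f"combined: {mean:.3f} +/- {np.sqrt(var):.3f}")
```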