100,669 research outputs found

    Incremental online learning in high dimensions

    Get PDF
    this article, however, is problematic, as it requires a careful selection of initial ridge regression parameters to stabilize the highly rank-deficient full covariance matrix of the input data, and it is easy to create too much bias or too little numerical stabilization initially, which can trap the local distance metric adaptation in local minima. While LWPR takes only about 10 times longer to compute for the 20D experiment than for the 2D experiment, RFWR requires a 1000-fold increase in computation time, rendering that algorithm unsuitable for high-dimensional regression. To compare LWPR's results with other popular regression methods, we evaluated the 2D, 10D, and 20D cross data sets with Gaussian process (GP) regression and support vector machine (SVM) regression in addition to our LWPR method. It should be noted that neither the SVM nor the GP method is incremental, although both can be considered state-of-the-art for batch regression with relatively small numbers of training data and reasonable input dimensionality; their computational complexity is prohibitively high for real-time applications. The GP algorithm (Gibbs & MacKay, 1997) used a generic covariance function and optimized over the hyperparameters. The SVM regression was performed using a standard available package (Saunders et al., 1998) and optimized over kernel choices. Figure 6 compares the performance of LWPR and Gaussian processes for the above-mentioned data sets using 100, 300, and 500 training data points
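
    As a rough illustration of the kind of batch baselines mentioned in this excerpt, the sketch below fits GP and SVM regressors to a synthetic 2D "cross"-like function. The data generator, kernel choices, and hyperparameters are illustrative assumptions (using scikit-learn rather than the original GP and SVM packages cited above), not the exact experimental setup of the paper.
```python
# Hedged sketch: batch GP and SVR baselines on a synthetic 2D "cross"-style
# target. The target function and parameter choices are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def cross_2d(n):
    # Toy 2D "cross" target: max of two axis-aligned Gaussian ridges plus noise.
    X = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = np.maximum(np.exp(-10 * X[:, 0] ** 2),
                   np.exp(-10 * X[:, 1] ** 2)) + rng.normal(0, 0.1, n)
    return X, y

X_train, y_train = cross_2d(300)
X_test, y_test = cross_2d(1000)

# GP with a generic RBF covariance; hyperparameters are optimized by
# maximizing the marginal likelihood during fit().
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3) + WhiteKernel(0.01),
                              normalize_y=True)
gp.fit(X_train, y_train)

# Epsilon-SVR with an RBF kernel as the second batch baseline.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X_train, y_train)

for name, model in [("GP", gp), ("SVR", svr)]:
    mse = np.mean((model.predict(X_test) - y_test) ** 2)
    print(f"{name} test nMSE: {mse / y_test.var():.3f}")
```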

    Incremental Online Learning in High Dimensions

    Get PDF
    Locally weighted projection regression (LWPR) is a new algorithm for incremental non-linear function approximation in high dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models

    LWPR: A Scalable Method for Incremental Online Learning in High Dimensions

    Get PDF
    Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high dimensional spaces and compare various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it i) learns rapidly with second order learning methods based on incremental training, ii) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, iii) adjusts its weighting kernels based only on local information in order to minimize the danger of negative interference of incremental learning, iv) has a computational complexity that is linear in the number of inputs, and v) can deal with a large number of possibly redundant inputs, as shown in various empirical evaluations with up to 50-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high dimensional spaces
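
    To make the idea of spatially localized linear models concrete, the sketch below blends predictions from Gaussian receptive fields, each holding its own linear fit. It is a deliberately simplified stand-in (fixed centers, batch weighted ridge fits, no incremental PLS projections or distance-metric adaptation), not the LWPR algorithm itself; the class and parameter names are hypothetical.
```python
# Simplified illustration of prediction with localized linear models, in the
# spirit of the locally weighted approach described above (not LWPR proper).
import numpy as np

class LocalLinearEnsemble:
    def __init__(self, centers, bandwidth):
        self.centers = np.asarray(centers)        # receptive-field centers, shape (K, d)
        self.D = 1.0 / (2.0 * bandwidth ** 2)     # isotropic distance metric
        K, d = self.centers.shape
        self.beta = np.zeros((K, d + 1))          # per-model [bias, slope] parameters

    def _weights(self, x):
        # Gaussian activation of each receptive field at query point x.
        return np.exp(-self.D * np.sum((self.centers - x) ** 2, axis=1))

    def fit(self, X, y, ridge=1e-6):
        # Batch weighted ridge fit per local model (stands in for the
        # incremental univariate regressions used by LWPR).
        Xb = np.hstack([np.ones((len(X), 1)), X])
        for k, c in enumerate(self.centers):
            w = np.exp(-self.D * np.sum((X - c) ** 2, axis=1))
            A = Xb.T @ (w[:, None] * Xb) + ridge * np.eye(Xb.shape[1])
            self.beta[k] = np.linalg.solve(A, Xb.T @ (w * y))

    def predict(self, x):
        # Blend local predictions by normalized receptive-field weights.
        w = self._weights(x)
        preds = self.beta @ np.concatenate(([1.0], x))
        return np.sum(w * preds) / (np.sum(w) + 1e-12)

# Toy usage: 1-D sine with 30 local models on a uniform grid of centers.
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X[:, 0]) + 0.05 * np.random.randn(200)
model = LocalLinearEnsemble(centers=np.linspace(-3, 3, 30)[:, None], bandwidth=0.4)
model.fit(X, y)
print(model.predict(np.array([1.0])))
```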

    Incremental Training of a Detector Using Online Sparse Eigen-decomposition

    Full text link
    The ability to efficiently and accurately detect objects plays a crucial role in many computer vision tasks. Recently, offline object detectors have shown tremendous success. However, one major drawback of offline techniques is that a complete set of training data has to be collected beforehand. In addition, once learned, an offline detector cannot make use of newly arriving data. To alleviate these drawbacks, online learning has been adopted with the following objectives: (1) the technique should be computationally and storage efficient; (2) the updated classifier must maintain its high classification accuracy. In this paper, we propose an effective and efficient framework for learning an adaptive online greedy sparse linear discriminant analysis (GSLDA) model. Unlike many existing online boosting detectors, which usually apply exponential or logistic loss, our online algorithm makes use of LDA's learning criterion, which not only aims to maximize the class-separation criterion but also incorporates the asymmetrical property of training data distributions. We provide a better alternative to online boosting algorithms in the context of training a visual object detector. We demonstrate the robustness and efficiency of our methods on handwritten digit and face data sets. Our results confirm that object detection tasks benefit significantly when trained in an online manner
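
    The sketch below illustrates the general idea of updating an LDA criterion from streaming samples: running class means and a pooled within-class scatter are updated per sample, and the Fisher direction is recomputed on demand. It is a hedged, simplified illustration only; it does not implement the paper's greedy sparse feature selection or online eigen-decomposition, and the class name is hypothetical.
```python
# Hedged sketch: incremental two-class LDA-style classifier (not GSLDA).
import numpy as np

class IncrementalBinaryLDA:
    def __init__(self, dim, ridge=1e-3):
        self.n = np.zeros(2)                      # per-class sample counts
        self.mean = np.zeros((2, dim))            # per-class running means
        self.scatter = np.zeros((dim, dim))       # pooled within-class scatter
        self.ridge = ridge

    def partial_fit(self, x, label):
        # Welford-style update of the class mean and pooled scatter.
        c = int(label)
        self.n[c] += 1
        delta = x - self.mean[c]
        self.mean[c] += delta / self.n[c]
        self.scatter += np.outer(delta, x - self.mean[c])

    def decision(self, x):
        # Fisher direction w = Sw^{-1} (mu1 - mu0); threshold at the midpoint.
        Sw = self.scatter + self.ridge * np.eye(len(x))
        w = np.linalg.solve(Sw, self.mean[1] - self.mean[0])
        return float(w @ (x - 0.5 * (self.mean[0] + self.mean[1])))

# Toy usage on random 2-D data: positives shifted away from negatives.
rng = np.random.default_rng(0)
clf = IncrementalBinaryLDA(dim=2)
for _ in range(500):
    label = rng.integers(0, 2)
    clf.partial_fit(rng.normal(size=2) + 2.0 * label, label)
print(clf.decision(np.array([2.0, 2.0])) > 0)     # expected: True
```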

    Representation discovery using a fixed basis in reinforcement learning

    Get PDF
    A thesis presented for the degree of Doctor of Philosophy, School of Computer Science and Applied Mathematics, University of the Witwatersrand, South Africa. 26 August 2016. In the reinforcement learning paradigm, an agent learns by interacting with its environment. At each state, the agent receives a numerical reward. Its goal is to maximise the discounted sum of future rewards. One way it can do this is through learning a value function: a function which maps states to the discounted sum of future rewards. With an accurate value function and a model of the environment, the agent can take the optimal action in each state. In practice, however, the value function is approximated, and performance depends on the quality of the approximation. Linear function approximation is a commonly used approximation scheme, where the value function is represented as a weighted sum of basis functions or features. In continuous state environments, there are infinitely many such features to choose from, introducing the new problem of feature selection. Existing algorithms such as OMP-TD are slow to converge, scale poorly to high dimensional spaces, and have not been generalised to the online learning case. We introduce heuristic methods for reducing the search space in high dimensions that significantly reduce computational costs and also act as regularisers. We extend these methods and introduce feature regularisation for incremental feature selection in the batch learning case, and show that introducing a smoothness prior is effective with our SSOMP-TD and STOMP-TD algorithms. Finally, we generalise OMP-TD and our algorithms to the online case and evaluate them empirically
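
    For context, the sketch below shows the setting this thesis studies: a value function represented as a weighted sum of fixed basis functions, updated with TD(0). The radial basis features, 1-D state space, and step sizes are illustrative assumptions; the feature-selection algorithms discussed (OMP-TD and the proposed variants) are not implemented here.
```python
# Minimal sketch: linear value-function approximation with a fixed basis.
import numpy as np

centers = np.linspace(0.0, 1.0, 20)              # fixed RBF centers on [0, 1]
width = 0.05

def phi(s):
    # Feature vector: Gaussian radial basis functions of the (scalar) state.
    return np.exp(-((s - centers) ** 2) / (2 * width ** 2))

def td0_update(theta, s, r, s_next, done, alpha=0.1, gamma=0.99):
    # One TD(0) step on the linear value estimate V(s) = theta . phi(s).
    v = theta @ phi(s)
    v_next = 0.0 if done else theta @ phi(s_next)
    delta = r + gamma * v_next - v                # TD error
    return theta + alpha * delta * phi(s)

theta = np.zeros(len(centers))
# Example update for a single observed transition (s, r, s').
theta = td0_update(theta, s=0.3, r=1.0, s_next=0.35, done=False)
```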

    Stochastic Optimization of PCA with Capped MSG

    Full text link
    We study PCA as a stochastic optimization problem and propose a novel stochastic approximation algorithm which we refer to as "Matrix Stochastic Gradient" (MSG), as well as a practical variant, Capped MSG. We study the method both theoretically and empirically
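
    As a pointer to what "PCA as stochastic optimization" means in practice, the sketch below performs a simple Oja-style stochastic gradient update of a rank-k subspace estimate from one sample at a time. It only illustrates the problem setting; it is not the Matrix Stochastic Gradient (MSG) or Capped MSG algorithm proposed in the paper, and the toy data generator is an assumption.
```python
# Illustrative sketch: stochastic-gradient PCA via an Oja-style update
# (a stand-in for the setting studied by MSG / Capped MSG, not the algorithm).
import numpy as np

def oja_step(U, x, lr=0.01):
    # U: (d, k) orthonormal basis estimate; x: one data sample of dimension d.
    U = U + lr * np.outer(x, x @ U)               # stochastic gradient step
    Q, _ = np.linalg.qr(U)                        # re-orthonormalize
    return Q

rng = np.random.default_rng(0)
d, k = 50, 3
U = np.linalg.qr(rng.normal(size=(d, k)))[0]
for _ in range(2000):
    x = rng.normal(size=d) * np.linspace(3.0, 0.5, d)   # anisotropic toy data
    U = oja_step(U, x)
```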

    'Transformations towards sustainability': Emerging approaches, critical reflections, and a research agenda

    Get PDF
    Over the last two decades researchers have come to understand much about the global challenges confronting human society (e.g. climate change; biodiversity loss; water, energy and food insecurity; poverty and widening social inequality). However, the extent to which research and policy efforts are succeeding in steering human societies towards more sustainable and just futures is unclear. Attention is increasingly turning towards better understanding how to navigate processes of social and institutional transformation to bring about more desirable trajectories of change in various sectors of human society. A major knowledge gap concerns understanding how transformations towards sustainability are conceptualised, understood and analysed. Limited existing scholarship on this topic is fragmented, sometimes overly deterministic, and weak in its capacity to critically analyse transformation processes which are inherently political and contested. This paper aims to advance understanding of transformations towards sustainability, recognising it as both a normative and an analytical concept. We firstly review existing concepts of transformation in global environmental change literature, and the role of governance in relation to it. We then propose a framework for understanding and critically analysing transformations towards sustainability based on the existing 'Earth System Governance' framework (Biermann et al., 2009). We then outline a research agenda, and argue that transdisciplinary research approaches and a key role for early career researchers are vital for pursuing this agenda. Finally, we argue that critical reflexivity among global environmental change scholars, both individually and collectively, will be important for developing innovative research on transformations towards sustainability to meaningfully contribute to policy and action over time