
    Incremental Sparse Bayesian Ordinal Regression

    Ordinal Regression (OR) aims to model the ordering information between different data categories, which is a crucial topic in multi-label learning. An important class of approaches to OR models the problem as a linear combination of basis functions that map features to a high-dimensional non-linear space. However, most basis function-based algorithms are time-consuming. We propose an incremental sparse Bayesian approach to OR tasks and introduce an algorithm to sequentially learn the relevant basis functions in the ordinal scenario. Our method, called Incremental Sparse Bayesian Ordinal Regression (ISBOR), automatically optimizes the hyper-parameters via the type-II maximum likelihood method. By exploiting fast marginal likelihood optimization, ISBOR avoids large matrix inversions, which are the main bottleneck in applying basis function-based algorithms to OR tasks on large-scale datasets. We show that ISBOR can make accurate predictions with parsimonious basis functions while offering automatic estimates of the prediction uncertainty. Extensive experiments on synthetic and real-world datasets demonstrate the efficiency and effectiveness of ISBOR compared to other basis function-based OR approaches.
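    To make the setup concrete, here is a minimal Python sketch of the model class the abstract describes: an ordinal probit likelihood over a linear combination of RBF basis functions. This is not the ISBOR algorithm itself (no sparse prior, no incremental basis selection, no type-II hyper-parameter updates); the function names, thresholds, and toy data are illustrative assumptions.

        # Sketch only: ordinal probit likelihood over RBF basis functions.
        # Not the ISBOR algorithm; names, thresholds and data are illustrative.
        import numpy as np
        from scipy.stats import norm

        def rbf_basis(X, centers, width=1.0):
            """Map features into a non-linear space via RBF basis functions."""
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * width ** 2))

        def ordinal_log_likelihood(w, thresholds, Phi, y):
            """Log-likelihood of ordered labels y; class k occupies (b[k], b[k+1]]."""
            f = Phi @ w                                   # latent function values
            b = np.concatenate(([-np.inf], thresholds, [np.inf]))
            upper = norm.cdf(b[y + 1] - f)
            lower = norm.cdf(b[y] - f)
            return np.sum(np.log(np.clip(upper - lower, 1e-12, None)))

        # Toy usage: three ordered classes generated from a 1-D latent function.
        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(50, 1))
        latent = np.sin(X[:, 0])
        y = np.digitize(latent, [-0.3, 0.3])              # labels in {0, 1, 2}
        Phi = rbf_basis(X, centers=X[::5])                # a small subset as basis centres
        w = rng.normal(scale=0.1, size=Phi.shape[1])
        print(ordinal_log_likelihood(w, np.array([-0.3, 0.3]), Phi, y))

    A sparse Bayesian learner such as ISBOR would place an automatic-relevance prior on w and add or remove basis functions one at a time according to their contribution to the marginal likelihood, rather than fixing the centres up front as done here.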

    The Automatic Statistician: A Relational Perspective

    Gaussian Processes (GPs) provide a general and analytically tractable way of capturing complex time-varying, nonparametric functions. The time-varying behaviour of GPs can be explained as a composition of base kernels such as linear, smoothness, or periodicity, since covariance kernels are closed under addition and multiplication. The Automatic Bayesian Covariance Discovery (ABCD) system constructs natural-language descriptions of time-series data by treating unknown time-series data nonparametrically using GPs. Unfortunately, learning a composite covariance kernel from a single time-series dataset often results in less informative kernels rather than qualitatively distinct descriptions. We address this issue by proposing relational kernel learning, which can model relationships between sets of data and find structure shared among time-series datasets. We show that the shared structure helps learn more accurate models for sets of regression problems, using synthetic data, US top market capitalization stock data, and US house sales index data.
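    Since covariance kernels are closed under addition and multiplication, the compositional idea can be illustrated in a few lines. The sketch below uses scikit-learn kernels and a hand-picked trend + periodic + noise structure on synthetic monthly data; it is not the ABCD search or the relational kernel learning method, and every kernel choice and hyper-parameter here is an assumption for illustration.

        # Sketch: composing base kernels (linear trend + periodicity + noise)
        # for GP regression on a synthetic monthly series. Illustrative only.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import (
            RBF, ExpSineSquared, DotProduct, WhiteKernel)

        kernel = (DotProduct()                                         # linear trend
                  + ExpSineSquared(length_scale=1.0, periodicity=12.0)
                  * RBF(length_scale=50.0)                             # locally periodic
                  + WhiteKernel())                                     # observation noise

        t = np.arange(120, dtype=float).reshape(-1, 1)                 # monthly index
        y = (0.05 * t.ravel()
             + np.sin(2 * np.pi * t.ravel() / 12)
             + 0.1 * np.random.default_rng(1).normal(size=120))

        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)
        print(gp.kernel_)   # fitted composite kernel, readable as a structural description

    ABCD searches over such compositions automatically and translates the winning structure into natural language; the relational variant proposed here additionally ties the structure across several related time series.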

    Transfer of Training Between Tracking Tasks Employing Quickened and Unquickened Displays

    Man's propensity for solving complex non-linear operations during continuous tracking is high. If the situation demands it, he can readily learn to differentiate a displayed signal after the mechanism he is controlling has integrated it. If the tracking is made more and more complicated, however, accuracy falls off rapidly and learning time increases. For any specific task, it is usually possible to design a machine to assume the human operator's functions, but it is not always desirable to replace a man with another machine in a complex tracking task. Non-linear automatic systems are difficult and expensive to build. Most important, however, they are usually restricted to one application and do not share man's flexibility. For practicality and economy, then, it is desirable to try to simplify the operator's task in situations that call for more accuracy than a man ordinarily gives.

    Automatic LQR Tuning Based on Gaussian Process Global Optimization

    This paper proposes an automatic controller tuning framework based on linear optimal control combined with Bayesian optimization. With this framework, an initial set of controller gains is automatically improved according to a pre-defined performance objective evaluated from experimental data. The underlying Bayesian optimization algorithm is Entropy Search, which represents the latent objective as a Gaussian process and constructs an explicit belief over the location of the objective minimum. This is used to maximize the information gain from each experimental evaluation. Thus, this framework shall yield improved controllers with fewer evaluations compared to alternative approaches. A seven-degree-of-freedom robot arm balancing an inverted pole is used as the experimental demonstrator. Results of two- and four-dimensional tuning problems highlight the method's potential for automatic controller tuning on robotic platforms.

    Comment: 8 pages, 5 figures, to appear in IEEE 2016 International Conference on Robotics and Automation. Video demonstration of the experiments available at https://am.is.tuebingen.mpg.de/publications/marco_icra_201
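    The core loop the framework automates can be sketched as "tuning parameter -> LQR gain -> closed-loop cost". The Python fragment below uses a toy double-integrator plant and a naive grid search where the paper uses Entropy Search on a 7-DOF robot; the plant, weights, and search strategy are illustrative assumptions, not the paper's setup.

        # Sketch: map a scalar tuning parameter to an LQR gain and a simulated cost.
        # A Bayesian optimizer (e.g. Entropy Search) would pick the query points
        # from a GP belief over the objective instead of the grid used here.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0], [0.0, 0.0]])        # toy double-integrator plant
        B = np.array([[0.0], [1.0]])

        def lqr_gain(log_q):
            """Tuning parameter -> LQR state-feedback gain K = R^{-1} B^T P."""
            Q = np.diag([np.exp(log_q), 1.0])
            R = np.array([[1.0]])
            P = solve_continuous_are(A, B, Q, R)
            return np.linalg.solve(R, B.T @ P)

        def evaluate_cost(log_q, dt=0.01, steps=1000):
            """Quadratic cost of a simulated rollout (stand-in for experimental data)."""
            K = lqr_gain(log_q)
            x, cost = np.array([1.0, 0.0]), 0.0
            for _ in range(steps):
                u = -K @ x
                cost += (x @ x + float(u @ u)) * dt
                x = x + dt * (A @ x + B @ u).ravel()
            return cost

        candidates = np.linspace(-2.0, 4.0, 13)        # naive search over the knob
        best = min(candidates, key=evaluate_cost)
        print("best log_q:", best, "cost:", evaluate_cost(best))

    In the paper, the evaluated cost comes from hardware experiments and the next candidate is chosen to maximize information about the location of the minimum, which is what keeps the number of experiments small.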

    A Bode Sensitivity Integral for Linear Time-Periodic Systems

    Get PDF
    Bode's sensitivity integral is a well-known formula that quantifies some of the limitations in feedback control for linear time-invariant systems. In this note, we show that there is a similar formula for linear time-periodic systems. The harmonic transfer function is used to prove the result. We use the notion of roll-off 2, which means that the first time-varying Markov parameter is equal to zero. It then follows that the harmonic transfer function is an analytic operator and a trace-class operator. These facts are used to prove the result.
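    For context, the classical time-invariant statement that this note generalizes can be written as follows, where L(s) is the open-loop transfer function with roll-off at least two and p_k are its unstable poles; the time-periodic result replaces the scalar transfer function with the harmonic transfer function operator:

        S(s) = \frac{1}{1 + L(s)}, \qquad
        \int_0^{\infty} \ln \lvert S(j\omega) \rvert \, d\omega = \pi \sum_k \operatorname{Re}\, p_k

    The waterbed interpretation is unchanged: pushing the sensitivity below one in some frequency band necessarily pushes it above one elsewhere, and open-loop instability makes the trade-off strictly worse.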