
    Incremental Sparse Bayesian Ordinal Regression

    Ordinal Regression (OR) aims to model the ordering information between different data categories, a crucial topic in multi-label learning. An important class of approaches to OR models the problem as a linear combination of basis functions that map features to a high-dimensional non-linear space. However, most basis function-based algorithms are time-consuming. We propose an incremental sparse Bayesian approach to OR tasks and introduce an algorithm that sequentially learns the relevant basis functions in the ordinal scenario. Our method, called Incremental Sparse Bayesian Ordinal Regression (ISBOR), automatically optimizes the hyper-parameters via type-II maximum likelihood. By exploiting fast marginal likelihood optimization, ISBOR avoids large matrix inversions, the main bottleneck in applying basis function-based algorithms to OR tasks on large-scale datasets. We show that ISBOR makes accurate predictions with parsimonious basis functions while offering automatic estimates of the prediction uncertainty. Extensive experiments on synthetic and real-world datasets demonstrate the efficiency and effectiveness of ISBOR compared to other basis function-based OR approaches.
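
    The abstract names fast marginal likelihood optimization as the device that avoids large matrix inversions, but does not spell out ISBOR's ordinal-likelihood updates. The following is therefore a minimal sketch of the generic sequential scheme in the style of Tipping and Faul's fast marginal likelihood maximization, applied to a Gaussian-likelihood regression; the function name `fast_sbl`, the fixed noise precision `beta`, and the initialization heuristic are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def fast_sbl(Phi, t, beta=100.0, n_iter=500):
        """Sequential sparse Bayesian regression via fast marginal
        likelihood maximization (a sketch, not ISBOR itself).

        Phi  : (N, M) matrix of candidate basis functions
        t    : (N,) regression targets
        beta : noise precision, assumed known here for simplicity
        """
        N, M = Phi.shape
        alpha = np.full(M, np.inf)                 # np.inf = basis excluded
        alpha[np.argmax(np.abs(Phi.T @ t))] = 1.0  # crude starting point

        for it in range(n_iter):
            active = np.isfinite(alpha)
            Pa = Phi[:, active]
            # Posterior covariance of the few active weights: the only
            # matrix inverted is K x K (K = number of active bases), which
            # is what avoids the large inversions the abstract mentions.
            Sigma = np.linalg.inv(np.diag(alpha[active]) + beta * Pa.T @ Pa)

            # Sparsity (S) and quality (Q) factors for every candidate.
            C = beta * (Phi.T @ Pa)                              # (M, K)
            S = beta * np.sum(Phi**2, axis=0) - np.sum((C @ Sigma) * C, axis=1)
            Q = beta * (Phi.T @ t) - (C @ Sigma) @ (beta * (Pa.T @ t))
            s, q = S.copy(), Q.copy()
            s[active] = alpha[active] * S[active] / (alpha[active] - S[active])
            q[active] = alpha[active] * Q[active] / (alpha[active] - S[active])

            # Visit one candidate per step: add, re-estimate, or delete.
            m = it % M
            theta = q[m] ** 2 - s[m]
            if theta > 0:
                alpha[m] = s[m] ** 2 / theta       # add or re-estimate
            elif active[m] and active.sum() > 1:
                alpha[m] = np.inf                  # prune from the model

        active = np.isfinite(alpha)
        Pa = Phi[:, active]
        Sigma = np.linalg.inv(np.diag(alpha[active]) + beta * Pa.T @ Pa)
        mu = beta * Sigma @ (Pa.T @ t)             # posterior mean weights
        return np.flatnonzero(active), mu

    # Toy check: recover a sparse combination of random basis functions.
    rng = np.random.default_rng(0)
    Phi = rng.normal(size=(100, 50))
    t = 2.0 * Phi[:, 3] - 1.5 * Phi[:, 17] + 0.1 * rng.normal(size=100)
    print(fast_sbl(Phi, t))                        # expect columns 3 and 17
    ```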

    A Noise-Robust Fast Sparse Bayesian Learning Model

    This paper incorporates the hierarchical model structure of the Bayesian Lasso into the Sparse Bayesian Learning process to develop a new type of probabilistic supervised learning approach. The hierarchical structure in this Bayesian framework is designed so that the priors not only penalize unnecessary model complexity but are also conditioned on the variance of the random noise in the data. The hyperparameters of the model are estimated by the Fast Marginal Likelihood Maximization algorithm, which achieves sparsity, low computational cost, and a fast learning process. We compare our methodology with two other popular learning models: the Relevance Vector Machine and the Bayesian Lasso. We test our model on examples involving both simulated and empirical data, and the results show that this approach has several performance advantages: it is fast, sparse, and robust to the variance of the random noise. In addition, our method yields a more stable estimate of the noise variance than the other methods in the study.
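
    The distinctive design point in this abstract is that the weight priors are conditioned on the noise variance, which is exactly the structure of the Bayesian Lasso hierarchy (Park and Casella, 2008). The paper's estimation procedure, fast marginal likelihood maximization under these priors, is not reproduced here; the sketch below only makes the prior hierarchy concrete, and the function name and hyperparameter values are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_bayesian_lasso_prior(n_features, lam=1.0, a0=2.0, b0=1.0):
        """Draw (sigma2, tau2, w) from the Bayesian Lasso hierarchy.

        sigma2 : noise variance, here Inverse-Gamma(a0, b0)
        tau2_j : per-weight scale, Exponential(rate = lam**2 / 2)
        w_j    : weight, N(0, sigma2 * tau2_j) -- the prior variance is
                 tied to the noise variance sigma2, which is the
                 conditioning the abstract describes
        """
        sigma2 = 1.0 / rng.gamma(a0, 1.0 / b0)
        tau2 = rng.exponential(scale=2.0 / lam**2, size=n_features)
        w = rng.normal(0.0, np.sqrt(sigma2 * tau2))
        return sigma2, tau2, w

    # Integrating out tau2_j leaves a Laplace (double-exponential) prior,
    #   p(w_j | sigma2) = lam / (2*sqrt(sigma2)) * exp(-lam*|w_j| / sqrt(sigma2)),
    # which is the Lasso-style penalty that prunes unnecessary weights.
    sigma2, tau2, w = sample_bayesian_lasso_prior(n_features=5)
    print(sigma2, w)
    ```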