
    The Bayesian Backfitting Relevance Vector Machine

    Traditional non-parametric statistical learning techniques are often computationally attractive, but lack the generalization and model selection abilities of state-of-the-art Bayesian algorithms, which are usually computationally prohibitive. This paper makes several important contributions that allow Bayesian learning to scale to more complex, real-world learning scenarios. Firstly, we show that backfitting, a traditional non-parametric yet highly efficient regression tool, can be derived in a novel formulation within an expectation maximization (EM) framework and thus can finally be given a probabilistic interpretation. Secondly, we show that the general framework of sparse Bayesian learning, and in particular the relevance vector machine (RVM), can be derived as a highly efficient algorithm using a Bayesian version of backfitting at its core. As we demonstrate on several regression and classification benchmarks, Bayesian backfitting offers a compelling alternative to current regression methods, especially when the size and dimensionality of the data challenge computational resources.
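
    The core of classical backfitting, which the paper reinterprets probabilistically, is a cycle that fits each additive component to the partial residuals left by the others. A minimal R sketch of that cycle is given below, with made-up data and smooth.spline as the per-dimension smoother (neither is taken from the paper); the paper's contribution is to derive a probabilistic, EM-based counterpart of this deterministic procedure.

    ```r
    # Classical backfitting for an additive model y = f1(x1) + f2(x2) + noise.
    # Illustrative sketch only: data, smoother and iteration count are arbitrary.
    set.seed(1)
    n  <- 200
    x1 <- runif(n, -3, 3)
    x2 <- runif(n, -3, 3)
    y  <- sin(x1) + 0.5 * x2^2 + rnorm(n, sd = 0.2)

    alpha <- mean(y)                 # global intercept
    f1 <- rep(0, n)
    f2 <- rep(0, n)

    for (it in 1:20) {
      # fit each component to the partial residual of the others
      r1 <- y - alpha - f2
      f1 <- predict(smooth.spline(x1, r1), x1)$y
      f1 <- f1 - mean(f1)            # centre for identifiability

      r2 <- y - alpha - f1
      f2 <- predict(smooth.spline(x2, r2), x2)$y
      f2 <- f2 - mean(f2)
    }

    fitted_y <- alpha + f1 + f2
    ```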

    Fault classification in dynamic processes using multiclass relevance vector machine and slow feature analysis

    This paper proposes a modified relevance vector machine with slow feature analysis for fault classification in industrial processes. Traditional support vector machine classification does not work well when there are insufficient training samples. A relevance vector machine, which is a Bayesian learning-based probabilistic sparse model, is developed to provide probabilistic predictions and sparse solutions for the fault category. This approach has the benefits of good generalization ability and robustness to small training samples. To maximize the dynamic separability between classes and reduce the computational complexity, slow feature analysis is used to extract the inner dynamic features and reduce the dimension. Experiments comparing the proposed method, relevance vector machine and support vector machine classification are performed using the Tennessee Eastman process. Across all faults, the plain relevance vector machine achieves a classification rate of 39%, while the proposed algorithm achieves an overall classification rate of 76.1%. This shows the efficiency and advantages of the proposed method.
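
    As a rough illustration of the dimension-reduction step, linear slow feature analysis can be written as whitening followed by an eigendecomposition of the covariance of the time-differenced signals; the slowest directions are then passed to the classifier. The R sketch below is a generic linear SFA, not the authors' implementation, and the matrix X of process measurements is assumed rather than given.

    ```r
    # Linear slow feature analysis: whiten the measurements, then keep the
    # directions in which the whitened series changes most slowly (smallest
    # eigenvalues of the covariance of the time differences).
    # Assumes X is an n x p matrix with full-rank covariance; k slow features kept.
    sfa <- function(X, k = 2) {
      p  <- ncol(X)
      Xc <- scale(X, center = TRUE, scale = FALSE)
      e  <- eigen(cov(Xc))
      W  <- e$vectors %*% diag(1 / sqrt(e$values))     # whitening matrix
      Z  <- Xc %*% W
      dZ <- diff(Z)                                    # approximate time derivative
      ed <- eigen(cov(dZ))
      S  <- ed$vectors[, (p - k + 1):p, drop = FALSE]  # slowest directions
      list(features = Z %*% S, projection = W %*% S)
    }
    # The k slow features (with their fault labels) would then be fed to the
    # multiclass classifier in place of the raw high-dimensional measurements.
    ```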

    Healing the Relevance Vector Machine through Augmentation

    The Relevance Vector Machine (RVM) is a sparse approximate Bayesian kernel method. It provides full predictive distributions for test cases. However, the predictive uncertainties have the unintuitive property that they get smaller the further you move away from the training cases. We give a thorough analysis. Inspired by the analogy to non-degenerate Gaussian processes, we suggest augmentation to solve the problem. The purpose of the resulting model, RVM*, is primarily to corroborate the theoretical and experimental analysis. Although RVM* could be used in practical applications, it is no longer a truly sparse model. Experiments show that sparsity comes at the expense of worse predictive distributions.
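
    The pathology analysed here is easy to reproduce numerically: with localised basis functions centred on the relevance vectors, the weight-uncertainty term of the predictive variance vanishes as the test input moves away from the data, leaving only the noise floor. The R snippet below is a small illustration with made-up relevance vectors and posterior covariance, not the paper's RVM* model.

    ```r
    # RVM-style predictive variance sigma2 + phi(x)' Sigma phi(x) with Gaussian
    # basis functions centred on (hypothetical) relevance vectors.
    rbf <- function(x, centres, gamma = 1) {
      outer(x, centres, function(a, b) exp(-gamma * (a - b)^2))
    }
    centres <- c(-1, 0, 1.5)                 # pretend relevance vectors
    Sigma   <- diag(0.3, length(centres))    # pretend posterior weight covariance
    sigma2  <- 0.01                          # noise variance

    xs   <- seq(-10, 10, length.out = 401)
    Phi  <- rbf(xs, centres)
    pvar <- sigma2 + rowSums((Phi %*% Sigma) * Phi)   # diag(Phi Sigma Phi')
    # pvar decays to sigma2 as |x| grows: far from the data the model becomes
    # *more* confident, which is the counter-intuitive behaviour RVM* targets.
    ```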

    kernlab - An S4 Package for Kernel Methods in R

    kernlab is an extensible package for kernel-based machine learning methods in R. It takes advantage of R's new S4 object model and provides a framework for creating and using kernel-based algorithms. The package contains dot product primitives (kernels), implementations of support vector machines and the relevance vector machine, Gaussian processes, a ranking algorithm, kernel PCA, kernel CCA, and a spectral clustering algorithm. Moreover, it provides a general purpose quadratic programming solver and an incomplete Cholesky decomposition method.
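
    A minimal usage sketch for the package's regression interfaces follows; the function names rvm, ksvm and the rbfdot kernel are kernlab's, while the toy data and kernel parameter are illustrative.

    ```r
    library(kernlab)

    # toy 1-D regression problem
    x <- seq(-5, 5, length.out = 200)
    y <- sin(2 * x) + rnorm(length(x), sd = 0.1)

    # relevance vector machine regression with a Gaussian (RBF) kernel
    fit  <- rvm(x, y, kernel = "rbfdot", kpar = list(sigma = 0.5))
    yhat <- predict(fit, x)

    # the same kernel interface is shared by the package's SVM implementation
    svmfit <- ksvm(x, y, kernel = "rbfdot")
    ```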

    Fault Diagnosis of Reciprocating Compressors Using Relevance Vector Machines with a Genetic Algorithm Based on Vibration Data

    This paper focuses on the development of an advanced fault classifier for monitoring reciprocating compressors (RC) based on vibration signals. Many feature parameters can be used for fault diagnosis; here the classifier is developed based on a relevance vector machine (RVM) which is optimized with a genetic algorithm (GA) to determine a more effective subset of the parameters. Both a one-against-one scheme based RVM and a multiclass multi-kernel relevance vector machine (mRVM) have been evaluated to identify the more effective method for implementing the multiclass fault classification for the compressor. The accuracy of the two techniques is compared in order to determine an optimal fault classifier whose outputs can be correlated with the physical mechanisms underlying the features. The results show that both models perform well, with classification accuracy reaching 97% for both algorithms.
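
    The wrapper idea can be sketched as a binary-encoded genetic algorithm searching over feature subsets, with a fitness callback that would, for example, return the cross-validated accuracy of an RVM trained on the selected columns. The R sketch below is illustrative and not the authors' implementation; the same wrapper would apply to either the one-against-one RVM or the mRVM, with only the fitness evaluation changing.

    ```r
    # Binary-encoded GA for feature-subset selection (illustrative sketch).
    # `fitness` takes a 0/1 mask of length n_feat and returns a score to maximise,
    # e.g. cross-validated accuracy of an RVM on the selected features; it should
    # penalise an all-zero mask.
    ga_select <- function(fitness, n_feat, pop = 30, gens = 50, pmut = 0.05) {
      P <- matrix(rbinom(pop * n_feat, 1, 0.5), nrow = pop)  # random initial population
      for (g in seq_len(gens)) {
        f <- apply(P, 1, fitness)
        # tournament selection: the better of two random individuals becomes a parent
        parents <- replicate(pop, {
          i <- sample(pop, 2)
          if (f[i[1]] >= f[i[2]]) i[1] else i[2]
        })
        # single-point crossover between randomly paired parents
        Q <- t(sapply(seq_len(pop), function(k) {
          a <- parents[sample(pop, 1)]
          b <- parents[sample(pop, 1)]
          cut <- sample(n_feat - 1, 1)
          c(P[a, 1:cut], P[b, (cut + 1):n_feat])
        }))
        # bit-flip mutation
        P <- (Q + matrix(rbinom(pop * n_feat, 1, pmut), nrow = pop)) %% 2
      }
      P[which.max(apply(P, 1, fitness)), ]  # best feature mask found
    }
    ```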

    A Comparison of Machine Learning Algorithms for Evaporation Duct Estimation

    In this research, a comparison of the relevance vector machine (RVM), least squares support vector machine (LSSVM) and radial basis function neural network (RBFNN) for evaporation duct estimation is presented. The parabolic equation model is adopted as the forward propagation model and is used to establish the training database between the radar sea clutter power and the evaporation duct height. The comparison of the RVM, LSSVM and RBFNN for evaporation duct estimation is investigated via experimental and simulation studies, and statistical analysis is employed to assess the performance of the three machine learning algorithms in the simulation study. The analysis demonstrates that, in the experimental study, the M profile estimated by the RBFNN matches the measured profile relatively well; in the simulation study, the LSSVM is the most precise of the three machine learning algorithms, while the performance of the RVM is essentially identical to that of the RBFNN.
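
    The estimation scheme described above is an inverse problem: the forward propagation model generates (duct height, clutter power) training pairs, and a regressor learns the inverse mapping from clutter power back to duct height. The R sketch below uses a placeholder forward model in place of the parabolic equation model, with kernlab's rvm standing in for one of the compared regressors; everything else is illustrative.

    ```r
    library(kernlab)

    # placeholder forward model: clutter power (dB) at a few ranges as a smooth,
    # noisy function of evaporation duct height h (m); the study uses the
    # parabolic equation model instead
    forward_model <- function(h, ranges = seq(10, 60, by = 5)) {
      -40 - ranges / (0.5 + 0.05 * h) + rnorm(length(ranges), sd = 0.5)
    }

    heights <- runif(300, 0, 40)                   # sampled duct heights
    clutter <- t(sapply(heights, forward_model))   # corresponding clutter profiles

    # learn the inverse mapping clutter -> height; the LSSVM and RBFNN would be
    # trained on the same database for the comparison
    fit <- rvm(clutter, heights, kernel = "rbfdot")
    est <- predict(fit, t(forward_model(15)))      # height estimate from new clutter
    ```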

    MoBiL: A hybrid feature set for Automatic Human Translation quality assessment

    In this paper we introduce MoBiL, a hybrid Monolingual, Bilingual and Language modelling feature set together with a feature selection and evaluation framework. The set includes translation quality indicators that can be utilized to automatically predict the quality of human translations in terms of content adequacy and language fluency. We compare MoBiL with the QuEst baseline set by using them in classifiers trained with support vector machine and relevance vector machine learning algorithms on the same data set. We also report an experiment on feature selection to select fewer but more informative features from MoBiL. Our experiments show that classifiers trained on our feature set perform consistently better in predicting both adequacy and fluency than classifiers trained on the baseline feature set. MoBiL also performs well when used with both support vector machine and relevance vector machine algorithms.
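
    A minimal sketch of the comparison loop: train the same kernel classifier on the full feature set and on a reduced, more informative subset, then compare held-out accuracy. The simulated data, the filter-style feature scoring and the use of kernlab's ksvm as a stand-in for the SVM classifier are all assumptions for illustration; the RVM counterpart is not sketched.

    ```r
    library(kernlab)

    set.seed(2)
    n <- 400; p <- 30
    X <- matrix(rnorm(n * p), n, p)                       # stand-in feature matrix
    y <- factor(ifelse(X[, 1] + 0.5 * X[, 2] + rnorm(n) > 0, "good", "poor"))

    train <- sample(n, 300)
    # crude filter-style selection: rank features by class separation on the training set
    score <- apply(X[train, ], 2, function(col) abs(t.test(col ~ y[train])$statistic))
    top   <- order(score, decreasing = TRUE)[1:10]

    acc <- function(cols) {
      fit <- ksvm(X[train, cols], y[train], kernel = "rbfdot")
      mean(predict(fit, X[-train, cols]) == y[-train])
    }
    c(full = acc(1:p), selected = acc(top))
    ```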