10 research outputs found

    Soluble minimax groups and their representations

    No full text
    SIGLE: Available from British Library Document Supply Centre (BLDSC), DSC:D61054, United Kingdom

    Novelty Detection in Large-Vehicle Turbocharger Operation

    No full text
    Abstract. We develop novelty detection techniques for the analysis of data from a large-vehicle engine turbocharger in order to illustrate how abnormal events of operational significance may be identified with respect to a model of normality. Results are validated using polynomial function modelling and reduced-dimensionality visualisation techniques to show that system operation can be automatically classified into one of three distinct state spaces, each corresponding to a unique set of running conditions. This classification is used to develop a regression algorithm that is able to predict the dynamical operating parameters of the turbocharger and allow the automatic detection of periods of abnormal operation. Visualisation of system trajectories in high-dimensional space is communicated to the user using parameterised projection techniques, allowing ease of interpretation of changes in system behaviour.
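The residual-based idea behind a "model of normality" can be sketched briefly: fit a polynomial model to data from normal operation, then flag observations whose residuals are large relative to the training residual spread. The simulated signals, polynomial degree, and threshold rule below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "normal" turbocharger data: shaft speed vs. boost pressure
speed = np.linspace(0.0, 1.0, 200)
boost = 0.5 * speed + 2.0 * speed**2 + rng.normal(0.0, 0.02, size=speed.shape)

# Fit a degree-2 polynomial model of normality
coeffs = np.polyfit(speed, boost, deg=2)
model = np.poly1d(coeffs)

# Novelty score: residual normalised by the training residual spread
residuals = boost - model(speed)
sigma = residuals.std()

def is_novel(x, y, k=3.0):
    """Flag an observation whose residual exceeds k standard deviations."""
    return abs(y - model(x)) > k * sigma

print(is_novel(0.5, model(0.5)))        # on-model point: False
print(is_novel(0.5, model(0.5) + 0.5))  # large deviation: True
```

The same residual test generalises to multivariate regression models; only the model of normality changes.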

    Feature Selection for Value Function Approximation Using Bayesian Model Selection

    No full text
    Abstract. Feature selection in reinforcement learning (RL), i.e. choosing basis functions such that useful approximations of the unknown value function can be obtained, is one of the main challenges in scaling RL to real-world applications. Here we consider the Gaussian process based framework GPTD for approximate policy evaluation, and propose feature selection through marginal likelihood optimization of the associated hyperparameters. Our approach has two appealing benefits: (1) given just sample transitions, we can solve the policy evaluation problem fully automatically (without looking at the learning task, and, in theory, independently of the dimensionality of the state space), and (2) model selection allows us to consider more sophisticated kernels, which in turn enable us to identify relevant subspaces and eliminate irrelevant state variables, such that we can achieve substantial computational savings and improved prediction performance.
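The mechanism of identifying relevant subspaces via marginal-likelihood optimization can be sketched in plain GP regression with an ARD kernel: an irrelevant input dimension is driven to a long lengthscale, effectively removing it from the model. This is a minimal illustration in the spirit of the abstract; GPTD itself is not reproduced, and the data, noise level, and kernel parameterisation are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))          # dimension 0 relevant, dimension 1 irrelevant
y = np.sin(2.0 * X[:, 0]) + 0.05 * rng.normal(size=60)

def neg_log_marginal_likelihood(log_ell):
    """Negative GP log marginal likelihood over ARD log-lengthscales."""
    ell = np.exp(log_ell)
    d = (X[:, None, :] - X[None, :, :]) / ell
    K = np.exp(-0.5 * (d**2).sum(-1)) + 0.05**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

res = minimize(neg_log_marginal_likelihood, x0=np.zeros(2), method="L-BFGS-B")
ell = np.exp(res.x)
# The irrelevant dimension learns a much longer lengthscale than the
# relevant one, which is the feature-selection signal.
print(ell[1] > ell[0])
```

In the GPTD setting the same hyperparameter optimisation is carried out on the value-function posterior rather than on a plain regression model.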

    Extensions of the informative vector machine

    No full text
    Abstract. The informative vector machine (IVM) is a practical method for Gaussian process regression and classification. The IVM produces a sparse approximation to a Gaussian process by combining assumed density filtering with a heuristic for choosing points based on minimizing posterior entropy. This paper extends the IVM in several ways. First, we propose a novel noise model that allows the IVM to be applied to a mixture of labeled and unlabeled data. Second, we use the IVM on a block-diagonal covariance matrix, for “learning to learn” from related tasks. Third, we modify the IVM to incorporate prior knowledge from known invariances. All of these extensions are tested on artificial and real data.
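The entropy-based point-selection heuristic can be sketched for GP regression: greedily include the point whose inclusion most reduces posterior entropy, which under a Gaussian noise model is the point with the largest current predictive variance. This is a simplified stand-in for assumed density filtering, not the full IVM; the kernel, noise level, and active-set size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.linspace(0.0, 1.0, 100)
y = np.sin(6.0 * X) + 0.1 * rng.normal(size=100)

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

noise = 0.1**2
K = rbf(X, X)
var = np.diag(K).copy()       # current predictive variances
active = []

for _ in range(10):           # build a 10-point active set
    j = int(np.argmax(var))   # largest entropy reduction ~ largest variance
    active.append(j)
    # Posterior variance given the active set (recomputed for clarity;
    # the IVM does this with cheap rank-one updates)
    k_a = rbf(X, X[active])
    K_aa = rbf(X[active], X[active]) + noise * np.eye(len(active))
    var = np.diag(K) - np.diag(k_a @ np.linalg.solve(K_aa, k_a.T))
    var[active] = -np.inf     # never re-select an included point

print(sorted(active))
```

The selected points spread out over the input range, because each inclusion suppresses the predictive variance in its neighbourhood.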