Foot pressure distributions during walking in African elephants (Loxodonta africana)
Elephants, the largest living land mammals, have evolved a specialized foot morphology to help reduce locomotor pressures while supporting their large body mass. Peak pressures that could cause tissue damage are mitigated passively by the anatomy of elephants' feet, yet this mechanism does not seem to work well for some captive animals. This study tests how foot pressures vary among African and Asian elephants from habitats where natural substrates predominate but where foot care protocols differ. Variations in pressure patterns might be related to differences in husbandry, including but not limited to trimming and the substrates that elephants typically stand and move on. Both species' samples exhibited the highest concentration of peak pressures on the lateral digits of their feet (which tend to develop more disease in elephants) and lower pressures around the heel. The trajectories of the foot's centre of pressure were also similar, confirming that when walking at similar speeds, both species load their feet laterally at impact and then shift their weight medially throughout the step until toe-off. Overall, we found evidence of variations in foot pressure patterns that might be attributable to husbandry and other causes, deserving further examination using broader, more comparable samples.
Identification of genetic factors underpinning phenotypic heterogeneity in Huntington's disease and other neurodegenerative disorders
Neurodegenerative diseases including Huntington's disease (HD), the spinocerebellar ataxias and C9orf72-associated Amyotrophic Lateral Sclerosis / Frontotemporal dementia (ALS/FTD) do not present and progress in the same way in all patients. Instead, there is phenotypic variability in age at onset, progression and symptoms. Understanding this variability is not only clinically valuable, but identification of the genetic factors underpinning this variability has the potential to highlight genes and pathways which may be amenable to therapeutic manipulation, and hence help find drugs for these devastating and currently incurable diseases. Identification of genetic modifiers of neurodegenerative diseases is the overarching aim of this thesis. To identify genetic variants which modify disease progression it is first necessary to have a detailed characterization of the disease and its trajectory over time. In this thesis clinical data from the TRACK-HD studies, for which I collected data as a clinical fellow, were used to study disease progression over time in HD and to give subjects a progression score for subsequent analysis. In this thesis I show blood transcriptomic signatures of HD status and stage which parallel HD brain and overlap with Alzheimer's disease brain. Using the Huntington's disease progression score in a genome-wide association study, both a locus on chromosome 5 tagging MSH3, and DNA handling pathways more broadly, are shown to modify HD progression: these results are explored. Transcriptomic signatures associated with HD progression rate are also investigated. In this thesis I show that DNA repair variants also modify age at onset in spinocerebellar ataxias (1, 2, 3, 6, 7 and 17), which are, like HD, caused by triplet repeat expansions, suggesting a common mechanism.
Extending this thesis' examination of the relationship between phenotype and genotype, I show that the C9orf72 expansion, normally associated with ALS/FTD, is also the commonest cause of HD phenocopy presentations.
Nested Variational Compression in Deep Gaussian Processes
Deep Gaussian processes provide a flexible approach to probabilistic modelling of data using either supervised or unsupervised learning. For tractable inference, approximations to the marginal likelihood of the model must be made. The original approach to approximate inference in these models used variational compression to allow for approximate variational marginalization of the hidden variables, leading to a lower bound on the marginal likelihood of the model [Damianou and Lawrence, 2013]. In this paper we extend this idea with a nested variational compression. The resulting lower bound on the likelihood can be easily parallelized or adapted for stochastic variational inference.
Detecting mode-shape discontinuities without differentiation - Examining a Gaussian process approach
Detecting damage by inspection of mode-shape curvature is an enticing approach which is hindered by the requirement to differentiate the inferred mode-shape. Inaccuracies in the inferred mode-shapes are compounded by the numerical differentiation process; since these small inaccuracies are caused by noise in the data, the method is untenable for most real situations. This publication proposes a new method for detecting discontinuities in the smoothness of the function without directly calculating the curvature, i.e. without differentiation. We present this methodology and examine its performance on a finite element simulation of a cracked beam under random excitation. In order to demonstrate the advantages of the approach, increasing amounts of noise are added to the simulation data, and the benefits of the method with respect to simple curvature calculation are demonstrated. The method is based upon Gaussian Process Regression, a technique usually used for pattern recognition and closely related to neural network approaches. We develop a unique covariance function which allows for a non-smooth point. Simple optimisation of this point (by complete enumeration) is effective in detecting the damage location. We discuss extensions of the technique (to e.g. multiple damage locations) as well as pointing out some potential pitfalls.
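The recipe described above, a covariance function with a candidate non-smooth point whose location is found by complete enumeration of the marginal likelihood, can be sketched in a few lines. The toy below is illustrative only: it uses an assumed "split" kernel with zero covariance across the candidate point and a synthetic step signal, not the paper's covariance function or its cracked-beam simulation.

```python
import numpy as np

def rbf(a, b, ls=0.5, var=1.0):
    # Standard squared-exponential covariance
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def split_kernel(a, b, c):
    # RBF covariance on each side of candidate point c, zero covariance
    # across c, so the posterior is free to break there (assumed form).
    same_side = (a[:, None] <= c) == (b[None, :] <= c)
    return rbf(a, b) * same_side

def log_marginal(x, y, c, noise=1e-2):
    # GP log marginal likelihood with the candidate split at c
    K = split_kernel(x, x, c) + noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() \
           - 0.5 * len(x) * np.log(2 * np.pi)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 80)
y = (x > 0.6).astype(float) + 0.01 * rng.standard_normal(80)  # step at 0.6
candidates = np.linspace(0.1, 0.9, 33)       # complete enumeration
best = candidates[np.argmax([log_marginal(x, y, c) for c in candidates])]
print(f"estimated discontinuity at x = {best:.3f}")
```

Only the candidate that isolates the discontinuity lets both smooth blocks explain the data, so the marginal likelihood peaks there; the paper's kernel instead permits a break in smoothness while keeping the function itself continuous.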
Scalable variational Gaussian process classification
Gaussian process classification is a popular method with a number of appealing properties. We show how to scale the model within a variational inducing point framework, outperforming the state of the art on benchmark datasets. Importantly, the variational formulation can be exploited to allow classification in problems with millions of data points, as we demonstrate in experiments. JH was supported by an MRC fellowship, AM and ZG by EPSRC grant EP/I036575/1, and a Google Focussed Research award. This is the final version of the article. It was first available from JMLR via http://jmlr.org/proceedings/papers/v38/hensman15.pd
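The inducing-point construction at the heart of this framework can be illustrated with the standard sparse variational predictive equations. In the sketch below the inducing inputs Z and the variational parameters m, S are assumed toy values (in practice they are optimised against the variational bound), and the probit squashing is one common way of turning the Gaussian predictive into a class probability; none of this is the authors' code.

```python
import numpy as np
from math import erf

def rbf(a, b, ls=1.0, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def svgp_predict(Xs, Z, m, S, jitter=1e-8):
    # q(f*) implied by the sparse variational posterior q(u) = N(m, S)
    Kzz = rbf(Z, Z) + jitter * np.eye(len(Z))
    Kxz = rbf(Xs, Z)
    Kxx = rbf(Xs, Xs)
    A = np.linalg.solve(Kzz, Kxz.T).T            # Kxz Kzz^{-1}
    mean = A @ m
    cov = Kxx - A @ (Kzz - S) @ A.T
    return mean, np.clip(np.diag(cov), 0.0, None)

def bernoulli_prob(mean, var):
    # Squash the Gaussian predictive through the probit link
    z = mean / np.sqrt(1.0 + var)
    return np.array([0.5 * (1 + erf(v / np.sqrt(2))) for v in z])

Z = np.linspace(-2, 2, 5)                 # M = 5 inducing inputs (toy)
m = np.array([-2., -1., 0., 1., 2.])      # toy variational mean
S = 0.1 * np.eye(5)                       # toy variational covariance
Xs = np.linspace(-2, 2, 9)
mu, var = svgp_predict(Xs, Z, m, S)
p = bernoulli_prob(mu, var)
```

The key scaling property is visible in the shapes: all dense linear algebra involves the M inducing points rather than the full dataset, which is what makes minibatch training over millions of points feasible.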
Scalable transformed additive signal decomposition by non-conjugate Gaussian process inference
Many functions and signals of interest are formed by the addition of multiple underlying components, often nonlinearly transformed and modified by noise. Examples may be found in the literature on Generalized Additive Models [1] and Underdetermined Source Separation [2] or other mode decomposition techniques. Recovery of the underlying component processes often depends on finding and exploiting statistical regularities within them. Gaussian Processes (GPs) [3] have become the dominant way to model statistical expectations over functions. Recent advances make inference of the GP posterior efficient for large scale datasets and arbitrary likelihoods [4, 5]. Here we extend these methods to the additive GP case [6, 7], thus achieving scalable marginal posterior inference over each latent function in settings such as those above.
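In the conjugate special case (Gaussian noise, no nonlinear transformation) additive GP decomposition has a closed form: with y = f1 + f2 + noise and component kernels K1, K2, the posterior mean of component i at the training inputs is Ki (K1 + K2 + sigma^2 I)^{-1} y. The sketch below uses assumed toy components (a smooth trend plus a period-1 oscillation); the paper's contribution is extending such decompositions to non-conjugate, transformed settings at scale.

```python
import numpy as np

def rbf(x, ls, var):
    d = x[:, None] - x[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def periodic(x, period, ls, var):
    # Standard exp-sine-squared covariance for strictly periodic functions
    d = np.pi * np.abs(x[:, None] - x[None, :]) / period
    return var * np.exp(-2.0 * np.sin(d) ** 2 / ls ** 2)

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)
slow = np.sin(0.5 * x)                   # smooth trend component
fast = 0.3 * np.sin(2 * np.pi * x)       # periodic component (period 1)
y = slow + fast + 0.05 * rng.standard_normal(200)

K1 = rbf(x, ls=2.0, var=1.0)                     # prior for the trend
K2 = periodic(x, period=1.0, ls=1.0, var=0.2)    # prior for the oscillation
Ky = K1 + K2 + 0.05 ** 2 * np.eye(200)
# Posterior mean of each additive component at the training inputs:
f1 = K1 @ np.linalg.solve(Ky, y)
f2 = K2 @ np.linalg.solve(Ky, y)
```

Each component is recovered because its kernel encodes a statistical regularity (long lengthscale vs. strict periodicity) that the other component cannot mimic.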
MCMC for variationally sparse Gaussian processes
Gaussian process (GP) models form a core part of probabilistic machine learning. Considerable research effort has been made into attacking three issues with GP models: how to compute efficiently when the number of data points is large; how to approximate the posterior when the likelihood is not Gaussian; and how to estimate covariance function parameter posteriors. This paper simultaneously addresses these, using a variational approximation to the posterior which is sparse in support of the function but otherwise free-form. The result is a Hybrid Monte-Carlo sampling scheme which allows for a non-Gaussian approximation over the function values and covariance parameters simultaneously, with efficient computations based on inducing-point sparse GPs. Code to replicate each experiment in this paper will be available shortly. JH was funded by an MRC fellowship, AM and ZG by EPSRC grant EP/I036575/1 and a Google Focussed Research award. This is the final version of the article. It first appeared from the Neural Information Processing Systems Foundation via https://papers.nips.cc/paper/5875-mcmc-for-variationally-sparse-gaussian-processe
Practical constraints on real time Bayesian filtering for NDE applications
An experimental evaluation of Bayesian positional filtering algorithms applied to mobile robots for Non-Destructive Evaluation is presented using multiple positional sensing data: a real-time, on-robot implementation of an Extended Kalman filter and a Particle filter was used to control a robot performing representative raster scanning of a sample. Both absolute and relative positioning were employed, the absolute being an indoor acoustic GPS system that required careful calibration. The performance of the tracking algorithms is compared in terms of computational cost and the accuracy of trajectory estimates. It is demonstrated that for real-time NDE scanning, the Extended Kalman Filter is a more sensible choice given the high computational overhead of the Particle filter.
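The cost argument is visible in the Kalman recursion itself: each step is a handful of fixed-size matrix operations, whereas a particle filter must propagate, weight and resample N samples per step. Below is a minimal linear constant-velocity Kalman filter with assumed toy noise parameters, not the paper's robot implementation; the Extended variant it evaluates additionally linearises nonlinear transition and observation models via their Jacobians around the current estimate.

```python
import numpy as np

# Constant-velocity model: per-step cost is a few 2x2 matrix products,
# independent of how many measurements have been processed so far.
F = np.array([[1., 1.], [0., 1.]])   # state transition (dt = 1)
H = np.array([[1., 0.]])             # we only observe position
Q = 1e-3 * np.eye(2)                 # process noise (assumed)
R = np.array([[0.25]])               # measurement noise (assumed)

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
true_pos = np.arange(50, dtype=float)           # robot moving 1 unit/step
zs = true_pos + 0.5 * rng.standard_normal(50)   # noisy position readings
x, P = np.zeros(2), np.eye(2)
for z in zs:
    x, P = kf_step(x, P, np.array([z]))
```

After the run, the filtered position variance P[0, 0] sits well below the raw measurement variance R, showing the accuracy gain obtained at constant per-step cost.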
Overcoming mean-field approximations in recurrent Gaussian process models
We identify a new variational inference scheme for dynamical systems whose transition function is modelled by a Gaussian process. Inference in this setting has either employed computationally intensive MCMC methods, or relied on factorisations of the variational posterior. As we demonstrate in our experiments, the factorisation between latent system states and transition function can lead to a miscalibrated posterior and to learning unnecessarily large noise terms. We eliminate this factorisation by explicitly modelling the dependence between state trajectories and the Gaussian process posterior. Samples of the latent states can then be tractably generated by conditioning on this representation. The method we obtain (VCDT: variationally coupled dynamics and trajectories) gives better predictive performance and more calibrated estimates of the transition function, yet maintains the same time and space complexities as mean-field methods. Code is available at: github.com/ialong/GPt
- …