Foot pressure distributions during walking in African elephants (Loxodonta africana)
Elephants, the largest living land mammals, have evolved a specialized foot morphology that helps reduce locomotor pressures while supporting their large body mass. Peak pressures that could cause tissue damage are mitigated passively by the anatomy of elephants' feet, yet this mechanism does not seem to work well for some captive animals. This study tests how foot pressures vary among African and Asian elephants from habitats where natural substrates predominate but where foot care protocols differ. Variations in pressure patterns might be related to differences in husbandry, including but not limited to trimming and the substrates that elephants typically stand and move on. Both species' samples exhibited the highest concentration of peak pressures on the lateral digits of their feet (which tend to develop more disease in elephants) and lower pressures around the heel. The trajectories of the foot's centre of pressure were also similar, confirming that when walking at similar speeds, both species load their feet laterally at impact and then shift their weight medially throughout the step until toe-off. Overall, we found evidence of variations in foot pressure patterns that might be attributable to husbandry and other causes, deserving further examination using broader, more comparable samples.
Identification of genetic factors underpinning phenotypic heterogeneity in Huntington's disease and other neurodegenerative disorders
Neurodegenerative diseases including Huntington's disease (HD), the spinocerebellar ataxias and C9orf72-associated amyotrophic lateral sclerosis/frontotemporal dementia (ALS/FTD) do not present and progress in the same way in all patients. Instead, there is phenotypic variability in age at onset, progression and symptoms. Understanding this variability is not only clinically valuable; identifying the genetic factors underpinning it has the potential to highlight genes and pathways amenable to therapeutic manipulation, and hence to help find drugs for these devastating and currently incurable diseases. Identification of genetic modifiers of neurodegenerative diseases is the overarching aim of this thesis. To identify genetic variants which modify disease progression, it is first necessary to have a detailed characterization of the disease and its trajectory over time. In this thesis, clinical data from the TRACK-HD studies, for which I collected data as a clinical fellow, were used to study disease progression over time in HD and to give subjects a progression score for subsequent analysis. I show blood transcriptomic signatures of HD status and stage which parallel those of HD brain and overlap with those of Alzheimer's disease brain. Using the Huntington's disease progression score in a genome-wide association study, both a locus on chromosome 5 tagging MSH3 and DNA-handling pathways more broadly are shown to modify HD progression; these results are explored. Transcriptomic signatures associated with HD progression rate are also investigated. I also show that DNA repair variants modify age at onset in the spinocerebellar ataxias (1, 2, 3, 6, 7 and 17), which, like HD, are caused by triplet repeat expansions, suggesting a common mechanism. Extending this thesis' examination of the relationship between phenotype and genotype, I show that the C9orf72 expansion, normally associated with ALS/FTD, is also the commonest cause of HD phenocopy presentations.
Nested Variational Compression in Deep Gaussian Processes
Deep Gaussian processes provide a flexible approach to probabilistic modelling of data using either supervised or unsupervised learning. For tractable inference, approximations to the marginal likelihood of the model must be made. The original approach to approximate inference in these models used variational compression to allow approximate variational marginalization of the hidden variables, leading to a lower bound on the marginal likelihood of the model [Damianou and Lawrence, 2013]. In this paper we extend this idea with a nested variational compression. The resulting lower bound on the likelihood can be easily parallelized or adapted for stochastic variational inference.
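As a rough illustration of the building block involved, the Python sketch below evaluates the standard collapsed variational bound for a single sparse GP layer, the kind of compression that a nested construction stacks. The kernel, toy data and function names are illustrative assumptions, not code from the paper.

import numpy as np

def rbf(X1, X2, variance=1.0, lengthscale=1.0):
    # Squared-exponential covariance between two sets of 1-D inputs.
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def collapsed_bound(X, y, Z, noise_var=0.1):
    # Collapsed variational lower bound on log p(y) for one sparse GP layer
    # with inducing inputs Z: log N(y | 0, Qnn + s2*I) - tr(Knn - Qnn)/(2*s2).
    Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
    Kmn = rbf(Z, X)
    L = np.linalg.cholesky(Kmm)
    A = np.linalg.solve(L, Kmn)                 # Qnn = A.T @ A (Nystrom term)
    Qnn = A.T @ A
    cov = Qnn + noise_var * np.eye(len(X))
    Lc = np.linalg.cholesky(cov)
    alpha = np.linalg.solve(Lc, y)
    logdet = 2.0 * np.sum(np.log(np.diag(Lc)))
    log_gauss = -0.5 * (len(X) * np.log(2 * np.pi) + logdet + alpha @ alpha)
    # Trace term penalising variance the inducing points fail to explain.
    trace_term = -0.5 / noise_var * (rbf(X, X).diagonal().sum() - np.trace(Qnn))
    return log_gauss + trace_term

# Toy usage: 50 noisy observations of a sine, 10 inducing inputs.
rng = np.random.default_rng(0)
X = np.linspace(0, 5, 50)
y = np.sin(X) + 0.1 * rng.standard_normal(50)
print(collapsed_bound(X, y, Z=np.linspace(0, 5, 10)))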
Detecting mode-shape discontinuities without differentiation - Examining a Gaussian process approach
Detecting damage by inspection of mode-shape curvature is an enticing approach which is hindered by the requirement to differentiate the inferred mode-shape. Inaccuracies in the inferred mode-shapes are compounded by the numerical differentiation process; since these small inaccuracies are caused by noise in the data, the method is untenable for most real situations. This publication proposes a new method for detecting discontinuities in the smoothness of the function without directly calculating the curvature, i.e. without differentiation. We present this methodology and examine its performance on a finite element simulation of a cracked beam under random excitation. In order to demonstrate the advantages of the approach, increasing amounts of noise are added to the simulation data, and the benefits of the method with respect to simple curvature calculation are demonstrated. The method is based upon Gaussian Process Regression, a technique usually used for pattern recognition and closely related to neural network approaches. We develop a unique covariance function which allows for a non-smooth point. Simple optimisation of this point (by complete enumeration) is effective in detecting the damage location. We discuss extensions of the technique (to e.g. multiple damage locations) as well as pointing out some potential pitfalls.
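The sketch below illustrates the flavour of such an approach under simplifying assumptions: a covariance function that carries no correlation across a candidate non-smooth point x0, with x0 chosen by complete enumeration of the GP log marginal likelihood. The kernel form, toy mode shape and parameter values are assumptions for illustration and are not taken from the publication.

import numpy as np

def rbf(x1, x2, variance=1.0, lengthscale=0.5):
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def changepoint_kernel(x1, x2, x0):
    # Smooth on either side of x0 but with no correlation across it, so the
    # posterior mean is free to kink at x0 (an illustrative choice of kernel).
    same_side = (x1[:, None] <= x0) == (x2[None, :] <= x0)
    return rbf(x1, x2) * same_side

def log_marginal(x, y, x0, noise_var=1e-3):
    # GP log marginal likelihood under the changepoint covariance.
    K = changepoint_kernel(x, x, x0) + noise_var * np.eye(len(x))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L, y)
    return -0.5 * (a @ a) - np.sum(np.log(np.diag(L))) - 0.5 * len(x) * np.log(2 * np.pi)

# Toy "mode shape" with a kink at x = 0.6, plus measurement noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 60)
y = np.where(x < 0.6, x, 0.6 - 0.5 * (x - 0.6)) + 0.01 * rng.standard_normal(60)

# Complete enumeration over candidate damage locations.
candidates = np.linspace(0.05, 0.95, 19)
scores = [log_marginal(x, y, x0) for x0 in candidates]
print("estimated damage location:", candidates[int(np.argmax(scores))])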
Scalable variational Gaussian process classification
Gaussian process classification is a popular method with a number of
appealing properties. We show how to scale the model within a variational
inducing point framework, outperforming the state of the art on benchmark
datasets. Importantly, the variational formulation can be exploited to allow
classification in problems with millions of data points, as we demonstrate in
experiments. JH was supported by an MRC fellowship, AM and ZG by EPSRC grant EP/I036575/1, and a Google Focussed Research award. This is the final version of the article. It was first available from JMLR via http://jmlr.org/proceedings/papers/v38/hensman15.pd
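The following sketch shows how a model of this kind can be set up with GPflow, whose SVGP class implements this family of sparse variational classifiers; argument names follow GPflow 2.x and may differ between versions, and the toy data, inducing inputs and kernel choice are illustrative assumptions rather than the paper's experiments.

import numpy as np
import gpflow

# Toy binary classification data; a real benchmark would replace this.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
Y = (np.sin(2 * X) + 0.3 * rng.standard_normal(X.shape) > 0).astype(float)

Z = np.linspace(-3, 3, 20)[:, None]              # inducing input locations
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Bernoulli(),
    inducing_variable=gpflow.inducing_variables.InducingPoints(Z),
    num_data=X.shape[0],                         # lets the ELBO scale to minibatches
)

# Full-batch optimisation of the variational bound; for millions of points one
# would instead feed minibatches to a stochastic optimiser.
gpflow.optimizers.Scipy().minimize(
    model.training_loss_closure((X, Y)), model.trainable_variables
)
mean, var = model.predict_y(X[:5])
print(mean.numpy().ravel())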
Scalable transformed additive signal decomposition by non-conjugate Gaussian process inference
Many functions and signals of interest are formed by the addition of multiple underlying components, often nonlinearly transformed and modified by noise. Examples may be found in the literature on Generalized Additive Models [1] and Underdetermined Source Separation [2] or other mode decomposition techniques. Recovery of the underlying component processes often depends on finding and exploiting statistical regularities within them. Gaussian Processes (GPs) [3] have become the dominant way to model statistical expectations over functions. Recent advances make inference of the GP posterior efficient for large scale datasets and arbitrary likelihoods [4,5]. Here we extend these methods to the additive GP case [6, 7], thus achieving scalable marginal posterior inference over each latent function in settings such as those above.
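As a pointer to the underlying idea, the sketch below works the conjugate, small-data special case: an additive GP with one kernel per component and Gaussian noise, where each component's posterior mean follows in closed form. The kernels, toy signal and variable names are assumptions for illustration; the paper's contribution is extending such decompositions to nonlinear transformations, non-Gaussian likelihoods and large datasets.

import numpy as np

def rbf(X1, X2, variance, lengthscale):
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Observed signal = slow trend + fast oscillation + noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 0.5 * x + np.sin(5 * x) + 0.1 * rng.standard_normal(len(x))

# One kernel per latent component; the observed signal gets their sum.
K_slow = rbf(x, x, variance=4.0, lengthscale=3.0)
K_fast = rbf(x, x, variance=1.0, lengthscale=0.3)
K_sum = K_slow + K_fast + 0.1 ** 2 * np.eye(len(x))

# Posterior mean of each component: E[f_j | y] = K_j @ K_sum^{-1} @ y.
alpha = np.linalg.solve(K_sum, y)
f_slow, f_fast = K_slow @ alpha, K_fast @ alpha
print(np.round(f_slow[:3], 2), np.round(f_fast[:3], 2))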
MCMC for variationally sparse Gaussian processes
Gaussian process (GP) models form a core part of probabilistic machine
learning. Considerable research effort has been made into attacking three
issues with GP models: how to compute efficiently when the number of data is
large; how to approximate the posterior when the likelihood is not Gaussian; and
how to estimate covariance function parameter posteriors. This paper
addresses all three simultaneously, using a variational approximation to the
posterior which is sparse in support of the function but otherwise free-form.
The result is a Hybrid Monte-Carlo sampling scheme which allows for a
non-Gaussian approximation over the function values and covariance parameters
simultaneously, with efficient computations based on inducing-point sparse GPs.
Code to replicate each experiment in this paper will be available shortly. JH was funded by an MRC fellowship, AM and ZG by EPSRC grant EP/I036575/1 and a Google Focussed Research award. This is the final version of the article. It first appeared from the Neural Information Processing Systems Foundation via https://papers.nips.cc/paper/5875-mcmc-for-variationally-sparse-gaussian-processe
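The sketch below shows the generic Hybrid Monte Carlo move underlying such a scheme, applied to a toy target density standing in for the free-form posterior over inducing values and covariance parameters. Function names, step sizes and the toy target are illustrative assumptions rather than the paper's implementation.

import numpy as np

def hmc_sample(log_prob, grad_log_prob, x0, n_samples=1000, step_size=0.1, n_leapfrog=20, seed=0):
    # Plain Hybrid Monte Carlo with leapfrog integration; in the paper's scheme
    # the state vector would hold whitened inducing values and covariance
    # hyperparameters, here it is just a generic parameter vector.
    rng = np.random.default_rng(seed)
    x, samples = np.array(x0, dtype=float), []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)                      # fresh momentum
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * step_size * grad_log_prob(x_new)       # half momentum step
        for _ in range(n_leapfrog - 1):
            x_new += step_size * p_new
            p_new += step_size * grad_log_prob(x_new)
        x_new += step_size * p_new
        p_new += 0.5 * step_size * grad_log_prob(x_new)       # final half step
        # Metropolis accept/reject on the Hamiltonian (potential + kinetic).
        log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new) - (log_prob(x) - 0.5 * p @ p)
        if np.log(rng.uniform()) < log_accept:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Toy target: a correlated 2-D Gaussian standing in for the free-form posterior.
C = np.array([[1.0, 0.8], [0.8, 1.0]])
Ci = np.linalg.inv(C)
samples = hmc_sample(lambda z: -0.5 * z @ Ci @ z, lambda z: -Ci @ z, np.zeros(2))
print(samples.mean(axis=0))
print(np.cov(samples.T))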
Practical constraints on real time Bayesian filtering for NDE applications
An experimental evaluation of Bayesian positional filtering algorithms applied to mobile robots for Non-Destructive Evaluation is presented using multiple positional sensing data: a real-time, on-robot implementation of an Extended Kalman filter and a particle filter was used to control a robot performing representative raster scanning of a sample. Both absolute and relative positioning were employed, the absolute being an indoor acoustic GPS system that required careful calibration. The performance of the tracking algorithms is compared in terms of computational cost and the accuracy of trajectory estimates. It is demonstrated that for real-time NDE scanning, the Extended Kalman filter is the more sensible choice given the high computational overhead of the particle filter.
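For orientation, the sketch below implements one predict/update cycle of an Extended Kalman Filter for a simple unicycle robot with absolute position measurements; the motion model, noise covariances and toy trajectory are illustrative assumptions, not the models used on the NDE robot.

import numpy as np

def ekf_step(state, P, control, z, dt, Q, R):
    # One predict/update cycle of an Extended Kalman Filter for a unicycle
    # robot: state = [x, y, heading], control = [speed, turn rate],
    # measurement z = absolute (x, y) position (e.g. an acoustic GPS fix).
    x, y, th = state
    v, w = control
    pred = np.array([x + v * dt * np.cos(th),
                     y + v * dt * np.sin(th),
                     th + w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],      # motion Jacobian
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P = F @ P @ F.T + Q                                   # predicted covariance
    H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])      # position measurement
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
    state = pred + K @ (z - H @ pred)
    P = (np.eye(3) - K @ H) @ P
    return state, P

# Toy straight raster-scan leg with noisy absolute position fixes.
rng = np.random.default_rng(0)
state, P = np.zeros(3), np.eye(3) * 0.1
Q, R = np.eye(3) * 1e-4, np.eye(2) * 0.01
for k in range(50):
    z = np.array([0.05 * (k + 1), 0.0]) + 0.1 * rng.standard_normal(2)
    state, P = ekf_step(state, P, control=(0.5, 0.0), z=z, dt=0.1, Q=Q, R=R)
print(state)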
Adaptive Path Planning for Depth Constrained Bathymetric Mapping with an Autonomous Surface Vessel
This paper describes the design, implementation and testing of a suite of
algorithms to enable depth constrained autonomous bathymetric (underwater
topography) mapping by an Autonomous Surface Vessel (ASV). Given a target depth
and a bounding polygon, the ASV will find and follow the intersection of the
bounding polygon and the depth contour as modeled online with a Gaussian
Process (GP). This intersection, once mapped, will then be used as a boundary
within which a path will be planned for coverage to build a map of the
bathymetry. Methods for sequential updates to GPs are described, allowing
online fitting, prediction and hyper-parameter optimisation on a small embedded
PC. New algorithms are introduced for the partitioning of convex polygons to
allow efficient path planning for coverage. These algorithms are tested both in
simulation and in the field with a small twin hull differential thrust vessel
built for the task. Comment: 21 pages, 9 Figures, 1 Table. Submitted to The Journal of Field
Robotics.
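The sketch below illustrates one way such sequential GP updates can be done: each new depth sounding extends a Cholesky factorisation incrementally rather than refitting from scratch. The kernel, class design and toy bathymetry are illustrative assumptions; the vessel's implementation, including online hyper-parameter optimisation, is more involved.

import numpy as np

def rbf(X1, X2, variance=10.0, lengthscale=10.0):
    d = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

class OnlineGP:
    # GP regression with incremental Cholesky updates: each new sounding
    # extends the factorisation in O(n^2) rather than refitting from scratch.
    def __init__(self, noise_var=0.01):
        self.noise_var = noise_var
        self.X, self.y = np.empty((0, 2)), np.empty(0)
        self.L = np.empty((0, 0))

    def add(self, x_new, y_new):
        x_new = np.atleast_2d(x_new)
        k_self = rbf(x_new, x_new)[0, 0] + self.noise_var
        if len(self.y):
            k_new = rbf(self.X, x_new).ravel()
            l = np.linalg.solve(self.L, k_new)          # L^{-1} k_*
            d = np.sqrt(k_self - l @ l)                 # Schur complement
            self.L = np.block([[self.L, np.zeros((len(l), 1))],
                               [l[None, :], np.array([[d]])]])
        else:
            self.L = np.array([[np.sqrt(k_self)]])
        self.X = np.vstack([self.X, x_new])
        self.y = np.append(self.y, y_new)

    def predict(self, X_star):
        K_s = rbf(self.X, np.atleast_2d(X_star))
        alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, self.y))
        return K_s.T @ alpha                            # posterior mean depth

# Stream in depth soundings one at a time, as the vessel would while moving.
gp = OnlineGP()
rng = np.random.default_rng(0)
for _ in range(100):
    p = rng.uniform(0, 50, size=2)
    gp.add(p, 5 + 0.1 * p[0] + 0.05 * p[1] + 0.05 * rng.standard_normal())
print(gp.predict([[25.0, 25.0]]))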
- …