
    Analyzing shape and residual pose of subcortical structures in brains of subjects with schizophrenia

    This study focuses on four anatomical features of subcortical structures associated with schizophrenia: volume, surface area, shape, and residual pose. Schizophrenia is a chronic mental disorder that affects roughly 1% of the population and is one of the leading causes of disability worldwide. Because its symptoms emerge and progress gradually, robust mathematical and statistical models of disease progression can help identify meaningful biomarkers, which may aid researchers and clinicians in developing novel treatments. This study used the open-source SchizConnect dataset; scans were automatically segmented by the MRICloud pipeline and then mapped to a common surface template using unbiased diffeomorphic mapping. The first part of the study performs global volumetric and local surface analysis of six subcortical structures: the amygdala, hippocampus, caudate, putamen, globus pallidus, and thalamus. Significant total-volume and regional surface-area changes are seen in the hippocampus and thalamus. Compared to control subjects, diseased subjects show reduced atrophy in the hippocampus, globus pallidus, and thalamus, and increased atrophy in the amygdala, caudate, and putamen. The study also develops a mathematical formulation for residual pose analysis, describing a robust algorithm for obtaining residual pose parameters from MR scans using generalized orthogonal Procrustes analysis and modelling rigid transformation matrices as Lie groups. Cross-sectional and longitudinal analyses are performed on these residual pose parameters: significant differences are found in the amygdala, hippocampus, caudate, and globus pallidus in the cross-sectional study, and in the amygdala, hippocampus, and caudate in the longitudinal study. This study aims to be the first known use of residual pose to characterize the longitudinal development of schizophrenia, analyzing features complementary to traditional shape analysis that have previously been discarded in the study of this disease. By also providing a robust mathematical formulation for pose analysis, it aims to support further research toward biomarkers of disease onset and progression from non-invasive imaging modalities such as MRI.
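    The rigid-alignment step underlying pose analysis can be illustrated with the closed-form SVD (Kabsch) solution to the orthogonal Procrustes problem: given two matched point sets, recover the rotation and translation aligning one to the other. This is a minimal sketch of that single-pair alignment only; the function name and array shapes are illustrative and not taken from the study, which uses a generalized (multi-shape) Procrustes formulation.

```python
import numpy as np

def rigid_procrustes(X, Y):
    """Find rotation R and translation t minimizing ||(X @ R.T + t) - Y||_F
    for matched point sets X, Y of shape (n_points, 3), via the SVD
    (Kabsch) solution to the orthogonal Procrustes problem."""
    muX, muY = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - muX, Y - muY
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)      # SVD of the 3x3 cross-covariance
    # Correct for reflections so R is a proper rotation (det R = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = muY - R @ muX
    return R, t
```

The recovered (R, t) pairs are the pose parameters; analyzing them across subjects requires treating them as elements of the Lie group of rigid motions rather than as raw matrix entries, as the abstract describes.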

    Sampling from Gaussian Process Posteriors using Stochastic Gradient Descent

    Gaussian processes are a powerful framework for quantifying uncertainty and for sequential decision-making, but are limited by the requirement of solving linear systems. In general, this has a cubic cost in dataset size and is sensitive to conditioning. We explore stochastic gradient algorithms as a computationally efficient method of approximately solving these linear systems: we develop low-variance optimization objectives for sampling from the posterior and extend these to inducing points. Counterintuitively, stochastic gradient descent often produces accurate predictions, even in cases where it does not converge quickly to the optimum. We explain this through a spectral characterization of the implicit bias from non-convergence. We show that stochastic gradient descent produces predictive distributions close to the true posterior both in regions with sufficient data coverage, and in regions sufficiently far away from the data. Experimentally, stochastic gradient descent achieves state-of-the-art performance on sufficiently large-scale or ill-conditioned regression tasks. Its uncertainty estimates match the performance of significantly more expensive baselines on a large-scale Bayesian optimization task.
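    The linear system in question is (K + s²I)a = y, whose solution gives the GP posterior mean Ka. Its solution is also the minimizer of a convex quadratic, so it can be found by gradient descent instead of a direct solve. The sketch below uses the full-batch gradient for clarity; the paper's contribution is constructing low-variance *stochastic* estimates of this gradient (and of the corresponding sampling objective), which the sketch does not reproduce.

```python
import numpy as np

def gp_mean_via_gd(K, y, noise_var, lr=None, n_steps=2000):
    """Approximate the GP posterior mean at the training points by running
    gradient descent on the quadratic 0.5 * a^T (K + s2 I) a - y^T a,
    whose unique minimizer is a* = (K + s2 I)^{-1} y."""
    n = len(y)
    A = K + noise_var * np.eye(n)
    if lr is None:
        lr = 1.0 / np.linalg.eigvalsh(A)[-1]  # 1 / largest eigenvalue
    a = np.zeros(n)
    for _ in range(n_steps):
        a -= lr * (A @ a - y)                 # gradient of the quadratic
    return K @ a                              # posterior mean = K a
```

For ill-conditioned K this iteration converges slowly in the small-eigenvalue directions, which is exactly the regime where the paper's spectral analysis explains why the resulting predictions nevertheless remain accurate.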

    Stochastic Gradient Descent for Gaussian Processes Done Right

    We study the optimisation problem associated with Gaussian process regression using squared loss. The most common approach to this problem is to apply an exact solver, such as conjugate gradient descent, either directly or to a reduced-order version of the problem. Recently, driven by successes in deep learning, stochastic gradient descent has gained traction as an alternative. In this paper, we show that when done right, by which we mean using specific insights from the optimisation and kernel communities, this approach is highly effective. We thus introduce a particular stochastic dual gradient descent algorithm that can be implemented with a few lines of code using any deep learning framework. We explain our design decisions by illustrating their advantage against alternatives with ablation studies, and show that the new method is highly competitive. Our evaluations on standard regression benchmarks and a Bayesian optimisation task set our approach apart from preconditioned conjugate gradients, variational Gaussian process approximations, and a previous version of stochastic gradient descent for Gaussian processes. On a molecular binding affinity prediction task, our method places Gaussian process regression on par in terms of performance with state-of-the-art graph neural networks.
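    The core idea of working in the dual can be illustrated in a few lines: rather than forming the full gradient of the quadratic 0.5 * a^T (K + s²I) a - y^T a, each step samples a minibatch of dual coordinates and updates only those, using the matching block of the gradient. This is an illustrative sketch of that update rule only; the paper's actual algorithm additionally uses momentum and iterate averaging, and the function name and hyperparameters below are not from the paper.

```python
import numpy as np

def stochastic_dual_gd(K, y, noise_var, lr, batch_size, n_steps, seed=0):
    """Sketch of stochastic dual gradient descent for GP regression:
    minimize 0.5 * a^T (K + s2 I) a - y^T a by updating, at each step,
    a random minibatch of dual coordinates a[b] with the corresponding
    block of the exact gradient."""
    rng = np.random.default_rng(seed)
    n = len(y)
    a = np.zeros(n)
    for _ in range(n_steps):
        b = rng.choice(n, size=batch_size, replace=False)
        g = K[b] @ a + noise_var * a[b] - y[b]   # block b of the gradient
        a[b] -= lr * g
    return a                                     # dual weights: mean = K a
```

Each step touches only batch_size rows of K, so the per-step cost is O(batch_size * n) rather than O(n^2), which is what makes the approach attractive at scale.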