Characterising population variability in brain structure through models of whole-brain structural connectivity
Models of whole-brain connectivity are valuable for understanding neurological function. This thesis
seeks to develop an optimal framework for extracting models of whole-brain connectivity from clinically
acquired diffusion data. We propose new approaches for studying these models. The aim is to
develop techniques which can take models of brain connectivity and use them to identify biomarkers
or phenotypes of disease.
The models of connectivity are extracted using a standard probabilistic tractography algorithm, modified
to assess the structural integrity of tracts, through estimates of white matter anisotropy. Connections
are traced between 77 regions of interest, automatically extracted by label propagation from
multiple brain atlases followed by classifier fusion. The estimates of tissue integrity for each tract
are input as indices in 77x77 “connectivity” matrices, extracted for large populations of clinical data.
These are compared in subsequent studies.
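The matrix construction described above can be sketched as follows (a minimal numpy illustration with synthetic anisotropy values; the `tracts` dictionary and the uniform value range are assumptions for demonstration, not the thesis pipeline):

```python
import numpy as np

N_REGIONS = 77  # regions of interest from multi-atlas label propagation

# Hypothetical per-tract integrity estimates for illustration: tracts[(i, j)]
# stands in for the mean anisotropy along the tract linking regions i and j.
rng = np.random.default_rng(0)
tracts = {(i, j): rng.uniform(0.2, 0.8)
          for i in range(N_REGIONS) for j in range(i + 1, N_REGIONS)}

# Assemble the symmetric 77x77 "connectivity" matrix indexed by tissue integrity.
C = np.zeros((N_REGIONS, N_REGIONS))
for (i, j), fa in tracts.items():
    C[i, j] = C[j, i] = fa
```

Each subject yields one such matrix, and populations of matrices are then compared.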
To date, most whole-brain connectivity studies have characterised population differences using graph
theory techniques. However, these can be limited in their ability to pinpoint the locations of differences
in the underlying neural anatomy. Therefore, this thesis proposes new techniques. These include
a spectral clustering approach for comparing population differences in the clustering properties of
weighted brain networks. In addition, machine learning approaches are suggested for the first time.
These are particularly advantageous as they allow classification of subjects and extraction of features
which best represent the differences between groups.
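The spectral clustering idea can be illustrated with a toy sketch (a synthetic 6-node weighted network and a plain numpy implementation; this is not the comparison framework proposed in the thesis):

```python
import numpy as np

def spectral_bipartition(W):
    """Split a weighted network into two clusters using the second eigenvector
    (Fiedler vector) of the normalized graph Laplacian."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt  # normalized Laplacian
    _, vecs = np.linalg.eigh(L)                        # ascending eigenvalues
    return (vecs[:, 1] > 0).astype(int)                # sign gives the split

# Toy weighted network: two dense blocks joined by one weak edge.
W = np.zeros((6, 6))
W[:3, :3] = 0.8
W[3:, 3:] = 0.8
W[2, 3] = W[3, 2] = 0.1
np.fill_diagonal(W, 0.0)
labels = spectral_bipartition(W)
```

Comparing such cluster assignments across groups is what lets differences in network organisation be localised rather than summarised by a single graph statistic.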
One limitation of the proposed approach is that errors propagate from segmentation and registration
steps prior to tractography. This can culminate in the assignment of false positive connections, where
the contribution of these factors may vary across populations, causing the appearance of population
differences where there are none. The final contribution of this thesis is therefore to develop a common
co-ordinate space approach. This combines probabilistic models of voxel-wise diffusion for each subject
into a single probabilistic model of diffusion for the population. This allows tractography to be
performed only once, ensuring that there is one model of connectivity. Cross-subject differences can
then be identified by mapping individual subjects’ anisotropy data to this model. The approach is
used to compare populations separated by age and gender.
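The common co-ordinate space idea can be caricatured in a few lines (a hypothetical numpy sketch; the array sizes are arbitrary and the registration step that warps each subject into template space is not shown):

```python
import numpy as np

# Stand-in for subjects' voxel-wise anisotropy maps, assumed already warped
# into a common template space (10 subjects, 8x8x8 voxels).
rng = np.random.default_rng(2)
subjects_fa = rng.uniform(0.1, 0.9, size=(10, 8, 8, 8))

# One population-level model: the voxel-wise average across subjects,
# so tractography need only be run once, on this single model.
population_model = subjects_fa.mean(axis=0)

# Cross-subject differences: each subject's anisotropy mapped onto the
# common model as a voxel-wise deviation.
deviations = subjects_fa - population_model
```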
Medical image denoising using convolutional denoising autoencoders
Image denoising is an important pre-processing step in medical image
analysis. Different algorithms have been proposed over the past three decades with
varying denoising performance. More recently, having outperformed all
conventional methods, deep learning based models have shown great promise.
These methods are, however, limited by their requirement for a large training
sample size and high computational cost. In this paper we show that, using a small sample
size, denoising autoencoders constructed using convolutional layers can be used
for efficient denoising of medical images. Heterogeneous images can be combined
to boost sample size for increased denoising performance. The simplest of networks
can reconstruct images with corruption levels so high that noise and signal are
not differentiable to the human eye.

Comment: To appear: 6 pages, paper to be published at the Fourth Workshop on
Data Mining in Biomedical Informatics and Healthcare at ICDM, 201
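A convolutional denoising autoencoder of the kind described can be sketched in PyTorch (the layer sizes, noise level, and 64x64 input are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class ConvDenoisingAE(nn.Module):
    """Minimal convolutional denoising autoencoder sketch."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                # 64x64 -> 32x32
            nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                # 32x32 -> 16x16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 2, stride=2), nn.ReLU(),    # -> 32x32
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(), # -> 64x64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training objective sketch: corrupt the input, reconstruct the clean target.
model = ConvDenoisingAE()
clean = torch.rand(4, 1, 64, 64)    # stand-in batch for medical images
noisy = (clean + 0.5 * torch.randn_like(clean)).clamp(0, 1)
loss = nn.functional.mse_loss(model(noisy), clean)
```

Minimising this reconstruction loss over (noisy, clean) pairs is what trains the network to strip noise from unseen images.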
Enhancing Deep Learning Models through Tensorization: A Comprehensive Survey and Framework
The burgeoning growth of public domain data and the increasing complexity of
deep learning model architectures have underscored the need for more efficient
data representation and analysis techniques. This paper is motivated by the
work of (Helal, 2023) and aims to present a comprehensive overview of
tensorization. This transformative approach bridges the gap between the
inherently multidimensional nature of data and the simplified 2-dimensional
matrices commonly used in linear algebra-based machine learning algorithms.
This paper explores the steps involved in tensorization, multidimensional data
sources, various multiway analysis methods employed, and the benefits of these
approaches. A small example of Blind Source Separation (BSS) is presented
comparing 2-dimensional algorithms and a multiway algorithm in Python. Results
indicate that multiway analysis is more expressive. Contrary to the intuition
of the curse of dimensionality, utilising multidimensional datasets in their
native form and applying multiway analysis methods grounded in multilinear
algebra reveal a profound capacity to capture intricate interrelationships
among various dimensions while, surprisingly, reducing the number of model
parameters and accelerating processing. A survey of multiway analysis
methods and their integration with various deep neural network models is presented
using case studies in different application domains.

Comment: 34 pages, 8 figures, 4 tables
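The parameter-reduction claim can be made concrete with a small numpy sketch (a rank-1 toy tensor; the dimensions and the rank-1 CP assumption are illustrative only):

```python
import numpy as np

# A rank-1 three-way tensor built from the outer product of three vectors,
# a toy stand-in for natively multidimensional data.
I, J, K = 20, 30, 40
rng = np.random.default_rng(1)
a, b, c = rng.random(I), rng.random(J), rng.random(K)
T = np.einsum('i,j,k->ijk', a, b, c)

# Flattening to a 2-D matrix for classical linear algebra stores every entry,
# while the multilinear (rank-1 CP) model keeps only the factor vectors.
flattened = T.reshape(I, J * K)
dense_params = flattened.size     # I * J * K entries
multiway_params = I + J + K       # factor-vector entries only
```

Here the multiway model captures the same structure with I + J + K numbers instead of I * J * K, which is the kind of compression and expressiveness the survey discusses.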