On Invariance and Selectivity in Representation Learning
We discuss data representations that can be learned automatically from data,
are invariant to transformations, and are at the same time selective, in the sense
that two points have the same representation only if one is a transformation
of the other. The mathematical results here sharpen some of the
key claims of i-theory -- a recent theory of feedforward processing in sensory
cortex.
Deep generative modelling of the imaged human brain
Human-machine symbiosis is a very promising opportunity for the field of neurology, given that the interpretation of the imaged human brain is a trivial feat
for neither entity. However, before machine learning systems can be used in
real world clinical situations, many issues with automated analysis must first be
solved. In this thesis I aim to address what I consider the three biggest hurdles
to the adoption of automated machine learning interpretative systems. For each
issue, I will first explain to the reader its importance given the overarching
narratives of both neurology and machine learning, and then showcase my proposed solutions through the use of deep generative models of the
imaged human brain.
First, I start by addressing what is an uncontroversial and universal sign of intelligence: the ability to extrapolate knowledge to unseen cases. Human neuroradiologists have studied the anatomy of the healthy brain and can therefore,
with some success, identify most pathologies present on an imaged brain, even
without ever having encountered them before. Current discriminative
machine learning systems require vast amounts of labelled data in order to accurately identify diseases. In this first part I provide a generative framework that
permits machine learning models to more efficiently leverage unlabelled data for
better diagnoses with few or no labels.
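The framework described in the thesis is generative, but the underlying idea of letting unlabelled data sharpen a classifier trained on very few labels can be illustrated with a much simpler, entirely hypothetical self-training sketch (toy Gaussian data and a nearest-centroid classifier, not the thesis' model):

```python
import numpy as np

# Toy self-training sketch: NOT the thesis' generative framework, just a
# generic illustration of how unlabelled data can improve a classifier
# trained on very few labels. All data and class structure are invented.
rng = np.random.default_rng(0)

# Two well-separated Gaussian classes; only 5 labelled examples per class.
X_lab = np.concatenate([rng.normal(0, 1, (5, 2)), rng.normal(4, 1, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unl = np.concatenate([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y_true = np.array([0] * 200 + [1] * 200)  # held back, used only to score

def centroids(X, y):
    # Mean of each class's points.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(X, cents):
    # Assign each point to the nearest class centroid.
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    return d.argmin(axis=1)

# Iteratively pseudo-label the unlabelled pool and refit the class centroids.
cents = centroids(X_lab, y_lab)
for _ in range(5):
    pseudo = predict(X_unl, cents)
    cents = centroids(np.concatenate([X_lab, X_unl]),
                      np.concatenate([y_lab, pseudo]))

acc = (predict(X_unl, cents) == y_true).mean()
```

The refitted centroids are estimated from 410 points instead of 10, which is the essence of leveraging unlabelled data, whatever the actual model family.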
Secondly, I address a major ethical concern in medicine: equitable evaluation
of all patients, regardless of demographics or other identifying characteristics.
This is, unfortunately, something that even human practitioners fail at, making
the matter ever more pressing: unaddressed biases in data will become biases
in the models. To address this concern I suggest a framework through which
a generative model synthesises demographically counterfactual brain imaging
to successfully reduce the proliferation of demographic biases in discriminative
models.
Finally, I tackle the challenge of spatial anatomical inference, a task at the centre
of the field of lesion-deficit mapping, which given brain lesions and associated
cognitive deficits attempts to discover the true functional anatomy of the brain.
I provide a new Bayesian generative framework and implementation that allows
for greatly improved results on this challenge, hopefully paving part of the road
towards a greater and more complete understanding of the human brain.
Neurocognitive Informatics Manifesto.
Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. In this position paper, examples of neurocognitive inspirations and promising directions in this area are given.
Medical imaging analysis with artificial neural networks
Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.
Machine learning-based automated segmentation with a feedback loop for 3D synchrotron micro-CT
The development of third-generation synchrotron light sources laid the foundation for investigating the 3D structure of opaque samples at micrometre resolution and beyond. This led to the development of X-ray synchrotron micro-computed tomography, which fostered the creation of imaging facilities for studying samples of the most diverse kinds, e.g. model organisms, in order to better understand the physiology of complex living systems. Advances in modern control systems and robotics enabled the full automation of X-ray imaging experiments and the calibration of the experimental setup's parameters during operation. The continued development of digital detector systems brought improvements in resolution, dynamic range, sensitivity, and other essential properties. These improvements led to a considerable increase in the throughput of the imaging process, but, on the other hand, the experiments began to generate substantially larger volumes of data, up to tens of terabytes, which subsequently had to be processed manually. These technical advances thus paved the way for more efficient high-throughput experiments studying large numbers of samples and producing datasets of better quality. There is therefore a strong demand in the scientific community for an efficient, automated workflow for X-ray data analysis that can handle such a data load and deliver valuable insights to domain experts. Existing solutions for such a workflow are not directly applicable to high-throughput experiments because they were developed for ad-hoc scenarios in medical imaging; they are therefore neither optimised for high-throughput data streams nor able to exploit the hierarchical nature of samples.
The main contributions of this work are a new automated analysis workflow suited to the efficient processing of heterogeneous X-ray datasets of a hierarchical nature. The developed workflow is based on improved methods for data pre-processing, registration, localisation, and segmentation. Every stage of the workflow that involves a training phase can be tuned automatically to find the best hyperparameters for the specific dataset. For the analysis of fibrous structures in samples, a new, highly parallelisable 3D orientation-analysis method was developed, based on a novel concept of emitting rays, which enables more precise morphological analysis. All developed methods were thoroughly validated on synthetic datasets to quantitatively assess their applicability under different imaging conditions. The workflow was shown to be capable of processing a series of datasets of a similar kind. In addition, efficient CPU/GPU implementations of the developed workflow and methods are presented and made available to the community as modules for the Python language.
The developed automated analysis workflow was successfully applied to micro-CT datasets acquired in high-throughput X-ray experiments in developmental biology and materials science. In particular, the workflow was applied to the analysis of medaka fish datasets, enabling automated segmentation and subsequent morphological analysis of the brain, liver, head kidneys, and heart. Furthermore, the developed 3D orientation-analysis method was employed in the morphological analysis of polymer-scaffold datasets to steer a fabrication process towards desirable properties.
Quantitation in MRI: application to ageing and epilepsy
Multi-atlas propagation and label fusion techniques have recently been developed for segmenting
the human brain into multiple anatomical regions. In this thesis, I investigate
possible adaptations of these current state-of-the-art methods. The aim is to study ageing
on the one hand, and on the other hand temporal lobe epilepsy as an example for a
neurological disease.
Global effects are a confounding factor in such anatomical analyses. Intracranial volume
(ICV) is often preferred for normalizing global effects, as it reflects estimated
maximum brain size and is hence independent of global brain volume loss, as seen
in ageing and disease. I describe systematic differences in ICV measures obtained at 1.5T
versus 3T, and present an automated method of measuring intracranial volume, Reverse
MNI Brain Masking (RBM), based on tissue probability maps in MNI standard space. I
show that this is comparable to manual measurements and robust against field strength
differences.
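The idea behind an ICV measure derived from tissue probability maps can be sketched numerically: sum each voxel's probability of belonging to any intracranial tissue class (grey matter, white matter, CSF) and scale by the voxel volume. This is a minimal sketch of the general idea only, not the published RBM implementation; all arrays below are synthetic stand-ins, and in RBM the maps live in MNI standard space and the resulting mask is mapped back to the subject:

```python
import numpy as np

# Minimal numerical sketch of an ICV estimate from tissue probability maps.
# All maps here are invented toy data; the actual RBM method operates on
# tissue segmentations in MNI standard space.
rng = np.random.default_rng(1)
shape = (64, 64, 64)
gm, wm, csf = (rng.random(shape) * 0.3 for _ in range(3))  # toy probability maps

voxel_volume_mm3 = 1.0                                     # 1 mm isotropic voxels
p_intracranial = np.clip(gm + wm + csf, 0.0, 1.0)          # P(any intracranial tissue)
icv_ml = voxel_volume_mm3 * p_intracranial.sum() / 1000.0  # mm^3 -> ml
```

Because the estimate depends only on the probability maps and the voxel geometry, it is plausible that such a measure is less sensitive to scanner field strength than intensity-based skull stripping.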
Correct and robust segmentation of target brains which show gross abnormalities, such as
ventriculomegaly, is important for the study of ageing and disease. We achieved this by
incorporating tissue classification information into the image registration process. The
best results in elderly subjects, patients with TLE, and healthy controls were achieved with
a new approach, multi-atlas propagation with enhanced registration (MAPER).
I then applied MAPER to the problem of automatically distinguishing patients with TLE
with (TLE-HA) and without (TLE-N) hippocampal atrophy on MRI from controls, and
determining the side of seizure onset. MAPER-derived structural volumes were used for
a classification step consisting of selecting a set of discriminatory structures and applying
a support vector machine to the structural volumes as well as to morphological similarity
information, such as volume differences obtained with spectral analysis. Accuracies were
91-100%, indicating that the method might be clinically useful.
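As an illustration of such a classification step, the following sketch pairs univariate feature selection with a linear SVM on structural volumes. The data, feature count, and atrophy effect are all invented for the example, and the spectral morphological-similarity features used in the thesis are omitted:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative sketch only: synthetic "structural volumes" for controls and
# a TLE-HA-like group whose first structure (a stand-in for the hippocampus
# ipsilateral to seizure onset) is atrophied.
rng = np.random.default_rng(0)
n_per_group, n_structures = 30, 40

controls = rng.normal(1.0, 0.05, (n_per_group, n_structures))
patients = rng.normal(1.0, 0.05, (n_per_group, n_structures))
patients[:, 0] -= 0.3  # simulated hippocampal atrophy

X = np.vstack([controls, patients])
y = np.array([0] * n_per_group + [1] * n_per_group)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=5),  # keep the most discriminatory structures
    SVC(kernel="linear"),
)
clf.fit(X, y)
train_acc = clf.score(X, y)
```

In practice the selection and the SVM would be evaluated inside a cross-validation loop so that feature selection does not leak information into the test folds.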
Finally, I used the methods developed in the previous chapters to investigate brain regional
volume changes across the human lifespan in over 500 healthy subjects between 20
and 90 years of age, using data from three different scanners (two 1.5T, one 3T) from the IXI
database. We were able to confirm several known changes, indicating the validity of the
method. In addition, we describe the first multi-region, whole-brain database of normal
ageing.
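A lifespan analysis of this kind ultimately reduces, per region, to fitting a trajectory of volume against age. A minimal sketch on simulated data (the decline rate and noise level are invented, and the real analysis spans many regions and corrects for scanner and ICV):

```python
import numpy as np

# Toy lifespan-trajectory fit: regress one regional volume (ml) on age.
# Both the linear decline and the noise level are invented for illustration.
rng = np.random.default_rng(2)
age = rng.uniform(20, 90, 500)                         # ~IXI-sized cohort
volume = 5.0 - 0.01 * age + rng.normal(0, 0.2, 500)    # simulated decline

# Least-squares line; np.polyfit returns coefficients highest degree first.
slope, intercept = np.polyfit(age, volume, 1)
```

Repeating such a fit for every labelled structure, with appropriate covariates, yields exactly the kind of multi-region normative database the abstract describes.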