
    Advanced Image Analysis for Modeling the Aging Brain

    Both normal aging and neurodegenerative diseases such as Alzheimer’s disease (AD) cause morphological changes of the brain due to neurodegeneration. As disease-related neurodegeneration may be difficult to distinguish from that of normal aging, interpreting magnetic resonance (MR) brain images in the context of diagnosing neurodegenerative diseases is challenging, especially in the early stages of disease. This thesis presents comprehensive models of the aging brain and novel computer-aided diagnosis methods, based on advanced, quantitative analysis of brain MR images, facilitating the differentiation between normal and abnormal neurodegeneration. I aimed to evaluate and develop methods for clinical decision support using features derived from MR brain images: I evaluated a classification method to predict global cognitive decline in the general population, evaluated five brain segmentation methods, and developed a spatio-temporal model of morphological differences in the brain due to normal aging. To create this model I developed two novel techniques that enable non-rigid groupwise image registration on large imaging datasets. The novel aging-brain models and computer-aided diagnosis methods facilitate the differentiation between normal and abnormal neurodegeneration. This will help in establishing more accurate diagnoses of patients, and in identifying patients at risk of developing neurodegenerative disease before symptoms emerge. In the future, the methods’ performance and efficacy should be evaluated in clinical practice.
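    The groupwise idea behind the registration techniques mentioned above (no image serves as a fixed reference; all images are aligned to an evolving group mean) can be sketched with a deliberately toy example. Everything below is an illustrative assumption: integer circular shifts of 1-D signals stand in for the non-rigid 3-D transformations the thesis actually develops, and the function names are invented for this sketch.

```python
def shift(sig, s):
    """Circularly shift a 1-D signal right by s samples (toy transform)."""
    n = len(sig)
    return [sig[(i - s) % n] for i in range(n)]

def groupwise_align(signals, max_shift=2, iters=3):
    """Toy groupwise registration: align every signal to an evolving mean.

    Hypothetical sketch, not the thesis method: each iteration recomputes
    the group mean from the currently aligned signals, then re-estimates
    each signal's shift by exhaustive search over integer displacements.
    """
    shifts = [0] * len(signals)
    for _ in range(iters):
        aligned = [shift(sig, s) for sig, s in zip(signals, shifts)]
        mean = [sum(col) / len(signals) for col in zip(*aligned)]
        for k, sig in enumerate(signals):
            # pick the shift minimising squared distance to the group mean
            shifts[k] = min(
                range(-max_shift, max_shift + 1),
                key=lambda s: sum((a - b) ** 2
                                  for a, b in zip(shift(sig, s), mean)))
    return shifts
```

    The key design point is that the mean is re-estimated each iteration, so no single subject biases the template — the property that makes groupwise registration attractive for population models of aging.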

    Image synthesis for the attenuation correction and analysis of PET/MR data

    While magnetic resonance imaging (MRI) provides high-resolution anatomical information, positron emission tomography (PET) provides functional information. Combined PET/MR scanners are expected to offer a new range of clinical applications, but efforts are still necessary to mitigate some limitations of this promising technology. One of the factors limiting the use of PET/MR scanners, especially in neurology studies, is the imperfect attenuation correction, leading to a strong bias of the PET activity. Exploiting the simultaneous acquisition of both modalities, I explored a new family of methods to synthesise X-ray computed tomography (CT) images from MR images. The synthetic images are generated through a multi-atlas information propagation scheme, locally matching the patient's MRI-derived morphology to a database of MR/CT image pairs using a local image similarity measure. The proposed algorithm provides a significant improvement in PET reconstruction accuracy when compared with the current correction, allowing an unbiased analysis of the PET images. A similar image synthesis scheme was then used to better identify abnormalities in cerebral glucose metabolism measured by [18F]fluorodeoxyglucose (FDG) PET. This framework consists of creating a subject-specific healthy PET model based on the propagation of morphologically matched PET images, and comparing the subject's PET image to the model via a Z-score. By accounting for inter-subject morphological differences, the proposed method reduces the variance of the normal population used for comparison in the Z-score, thus increasing the sensitivity. To demonstrate that the applicability of the proposed CT synthesis method is not limited to PET/MR attenuation correction, I redesigned the synthesis process to derive tissue attenuation properties from MR images in the head and neck and pelvic regions to facilitate MR-based radiotherapy treatment planning.
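    The Z-score comparison against a subject-specific normal model can be sketched as follows. This is a minimal illustration, not the thesis implementation: the toy 1-D "images" and the function name are assumptions, and the real method first builds the normal model from morphologically matched PET images.

```python
import math

def zscore_map(subject, model_images):
    """Voxel-wise Z-score of a subject image against a normal model.

    subject: list of voxel intensities (toy 1-D "image").
    model_images: images from morphologically matched normal subjects.
    Returns one Z-score per voxel; large |Z| flags abnormal uptake.
    """
    n = len(model_images)
    zmap = []
    for v in range(len(subject)):
        values = [img[v] for img in model_images]
        mean = sum(values) / n
        var = sum((x - mean) ** 2 for x in values) / (n - 1)
        std = math.sqrt(var)
        zmap.append((subject[v] - mean) / std if std > 0 else 0.0)
    return zmap
```

    The sensitivity argument in the abstract is visible in the denominator: matching normals to the subject's morphology shrinks the per-voxel standard deviation, so a genuine metabolic deficit yields a larger |Z|.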

    Characterising population variability in brain structure through models of whole-brain structural connectivity

    Models of whole-brain connectivity are valuable for understanding neurological function. This thesis seeks to develop an optimal framework for extracting models of whole-brain connectivity from clinically acquired diffusion data. We propose new approaches for studying these models. The aim is to develop techniques which can take models of brain connectivity and use them to identify biomarkers or phenotypes of disease. The models of connectivity are extracted using a standard probabilistic tractography algorithm, modified to assess the structural integrity of tracts through estimates of white matter anisotropy. Connections are traced between 77 regions of interest, automatically extracted by label propagation from multiple brain atlases followed by classifier fusion. The estimates of tissue integrity for each tract are entered as entries of 77x77 "connectivity" matrices, extracted for large populations of clinical data. These are compared in subsequent studies. To date, most whole-brain connectivity studies have characterised population differences using graph theory techniques. However, these can be limited in their ability to pinpoint the locations of differences in the underlying neural anatomy. Therefore, this thesis proposes new techniques. These include a spectral clustering approach for comparing population differences in the clustering properties of weighted brain networks. In addition, machine learning approaches are suggested for the first time. These are particularly advantageous as they allow classification of subjects and extraction of the features which best represent the differences between groups. One limitation of the proposed approach is that errors propagate from segmentation and registration steps prior to tractography. This can culminate in the assignment of false positive connections, where the contribution of these factors may vary across populations, causing the appearance of population differences where there are none.
The final contribution of this thesis is therefore to develop a common co-ordinate space approach. This combines probabilistic models of voxel-wise diffusion for each subject into a single probabilistic model of diffusion for the population. This allows tractography to be performed only once, ensuring that there is one model of connectivity. Cross-subject differences can then be identified by mapping individual subjects’ anisotropy data to this model. The approach is used to compare populations separated by age and gender.
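    The construction of the 77x77 integrity matrices described above can be sketched as follows. The function name and the (region, region, anisotropy samples) input format are illustrative assumptions; the thesis derives the integrity estimates from probabilistic tractography, which is not reproduced here.

```python
def connectivity_matrix(tracts, n_regions):
    """Build a symmetric region-by-region matrix of tract integrity.

    tracts: list of (region_a, region_b, fa_samples) tuples, where
    fa_samples are anisotropy values sampled along the streamlines
    joining the two regions. Each entry stores the mean anisotropy
    as a proxy for the structural integrity of that connection.
    """
    M = [[0.0] * n_regions for _ in range(n_regions)]
    for a, b, fa in tracts:
        mean_fa = sum(fa) / len(fa)
        M[a][b] = mean_fa
        M[b][a] = mean_fa  # connections are undirected
    return M
```

    Matrices of this form are what the subsequent spectral clustering and machine learning comparisons operate on, one matrix per subject.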

    Visual and Camera Sensors

    This book includes 13 papers published in the Special Issue "Visual and Camera Sensors" of the journal Sensors. The goal of this Special Issue was to invite high-quality, state-of-the-art research papers dealing with challenging issues in visual and camera sensors.

    When Cardiac Biophysics Meets Groupwise Statistics: Complementary Modelling Approaches for Patient-Specific Medicine

    This habilitation manuscript contains research on biophysical and statistical modeling of the heart, as well as interactions between these two approaches.

    Graph-based deformable registration: slice-to-volume matching and contextual methods

    Image registration methods, which aim at aligning two or more images into one coordinate system, are among the oldest and most widely used algorithms in computer vision. Registration methods serve to establish correspondence relationships among images (captured at different times, from different sensors or from different viewpoints) which are not obvious for the human eye. A particular type of registration algorithm, known as graph-based deformable registration, has become popular during the last decade given its robustness, scalability, efficiency and theoretical simplicity. The range of problems to which it can be adapted is particularly broad. In this thesis, we propose several extensions to graph-based deformable registration theory, exploring new application scenarios and developing novel methodological contributions. Our first contribution is an extension of the graph-based deformable registration framework dealing with the challenging slice-to-volume registration problem. Slice-to-volume registration aims at registering a 2D image within a 3D volume, i.e. we seek a mapping function which optimally maps a tomographic slice to the 3D coordinate space of a given volume. We introduce a scalable, modular and flexible formulation accommodating low-rank and high-order terms, which simultaneously selects the plane and estimates the in-plane deformation through a single-shot optimization approach. The proposed framework is instantiated into different variants based on different graph topologies, label space definitions and energy constructions.
Experiments on simulated and real data in the context of ultrasound and magnetic resonance registration (considering both framework instantiations and different optimization strategies) demonstrate the potential of our method. The other two contributions included in this thesis are related to how semantic information can be incorporated within the registration process (independently of the dimensionality of the images). Currently, most methods rely on a single metric function explaining the similarity between the source and target images. We argue that incorporating semantic information to guide the registration process will further improve the accuracy of the results, particularly in the presence of semantic labels which make registration a domain-specific problem. We consider a first scenario where we are given a classifier inferring probability maps for different anatomical structures in the input images. Our method seeks to simultaneously register and segment a set of input images, incorporating this information within the energy formulation. The main idea is to use these estimated maps of semantic labels (provided by an arbitrary classifier) as a surrogate for unlabeled data, and to combine them with population deformable registration to improve both alignment and segmentation. Our last contribution also aims at incorporating semantic information into the registration process, but in a different scenario. In this case, instead of supposing that we have pre-trained arbitrary classifiers at our disposal, we are given a set of accurate ground-truth annotations for a variety of anatomical structures.
We present a methodological contribution that aims at learning context-specific matching criteria as an aggregation of standard similarity measures from the aforementioned annotated data, using an adapted version of the latent structured support vector machine (LSSVM) framework.
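    The discrete, graph-based formulation underlying this family of methods can be illustrated with a minimal energy over displacement labels at control points. This is a generic sketch of the MRF-style energy such methods minimise, not the thesis's exact formulation: the function name, the input layout, and the absolute-difference smoothness penalty are all illustrative assumptions.

```python
def registration_energy(labels, unary, edges, alpha):
    """Energy of a discrete graph-based deformable registration.

    labels: chosen displacement label for each control point.
    unary: unary[p][l] = dissimilarity cost of assigning label l to
           point p (e.g. a local matching criterion between images).
    edges: list of (p, q) pairs of neighbouring control points.
    alpha: weight of the pairwise term penalising label jumps,
           which enforces a smooth deformation field.
    Registration amounts to finding the labelling minimising this energy.
    """
    data = sum(unary[p][labels[p]] for p in range(len(labels)))
    smooth = sum(abs(labels[p] - labels[q]) for p, q in edges)
    return data + alpha * smooth
```

    In practice such energies are minimised with discrete optimisation (e.g. message passing or move-making algorithms) rather than enumeration; the sketch only shows what is being scored.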

    Intelligent data mining using artificial neural networks and genetic algorithms: techniques and applications

    Data Mining (DM) refers to the analysis of observational datasets to find relationships and to summarize the data in ways that are both understandable and useful. Many DM techniques exist. Compared with other DM techniques, Intelligent Systems (ISs) based approaches, which include Artificial Neural Networks (ANNs), fuzzy set theory, approximate reasoning, and derivative-free optimization methods such as Genetic Algorithms (GAs), are tolerant of imprecision, uncertainty, partial truth, and approximation. They provide flexible information processing capability for handling real-life situations. This thesis is concerned with the ideas behind the design, implementation, testing and application of a novel ISs based DM technique. The unique contribution of this thesis is in the implementation of a hybrid IS DM technique (Genetic Neural Mathematical Method, GNMM) for solving novel practical problems, the detailed description of this technique, and the illustration of several applications solved by this novel technique. GNMM consists of three steps: (1) GA-based input variable selection, (2) Multi-Layer Perceptron (MLP) modelling, and (3) mathematical programming based rule extraction. In the first step, GAs are used to evolve an optimal set of MLP inputs. An adaptive method based on the average fitness of successive generations is used to adjust the mutation rate, and hence the exploration/exploitation balance. In addition, GNMM uses the elite group and appearance percentage to minimize the randomness associated with GAs. In the second step, MLP modelling serves as the core DM engine in performing classification/prediction tasks. An Independent Component Analysis (ICA) based weight initialization algorithm is used to determine optimal weights before the commencement of training algorithms. The Levenberg-Marquardt (LM) algorithm is used to achieve a second-order speedup compared to conventional Back-Propagation (BP) training.
In the third step, mathematical programming based rule extraction is used not only to identify the premises of multivariate polynomial rules, but also to explore features from the extracted rules based on the data samples associated with each rule. Therefore, the methodology can provide regression rules and features not only in the polyhedrons with data instances, but also in the polyhedrons without data instances. A total of six datasets from environmental and medical disciplines were used as case study applications. These datasets involve the prediction of longitudinal dispersion coefficient, classification of electrocorticography (ECoG)/electroencephalogram (EEG) data, eye bacteria Multisensor Data Fusion (MDF), and diabetes classification (denoted by Data I through to Data VI). GNMM was applied to all six datasets to explore its effectiveness, but the emphasis differed between datasets. For example, the emphasis of Data I and II was to give a detailed illustration of how GNMM works; Data III and IV aimed to show how to deal with difficult classification problems; the aim of Data V was to illustrate the averaging effect of GNMM; and finally Data VI was concerned with GA parameter selection and benchmarking GNMM against other IS DM techniques such as Adaptive Neuro-Fuzzy Inference System (ANFIS), Evolving Fuzzy Neural Network (EFuNN), Fuzzy ARTMAP, and Cartesian Genetic Programming (CGP). In addition, datasets obtained from published works (i.e. Data II & III) or public domains (i.e. Data VI), where previous results were present in the literature, were also used to benchmark GNMM's effectiveness. As a closely integrated system, GNMM has the merit that it needs little human interaction. With some predefined parameters, such as the GA's crossover probability and the shape of the ANNs' activation functions, GNMM is able to process raw data until human-interpretable rules are extracted.
This is an important practical feature, as users of a DM system often have little need to fully understand its internal components. Through the case study applications, it has been shown that the GA-based variable selection stage is capable of filtering out irrelevant and noisy variables, improving the accuracy of the model, making the ANN structure less complex and easier to understand, and reducing the computational complexity and memory requirements. Furthermore, rule extraction ensures that the MLP training results are easily understandable and transferable.
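    The adaptive mutation-rate idea from the GA step can be sketched as a simple update rule. This is a hypothetical rule in the spirit of GNMM's scheme, not the thesis's exact formula: the function name, the multiplicative factor, and the clamping bounds are illustrative assumptions.

```python
def adapt_mutation_rate(rate, prev_avg_fitness, curr_avg_fitness,
                        factor=1.5, low=0.001, high=0.3):
    """Adjust a GA mutation rate from successive generations' average fitness.

    If average fitness improves, lower the rate to favour exploitation;
    if it stagnates or drops, raise the rate to favour exploration.
    Clamping keeps the rate within sensible bounds.
    """
    if curr_avg_fitness > prev_avg_fitness:
        rate /= factor  # population improving: reduce random perturbation
    else:
        rate *= factor  # stagnating: increase exploration
    return min(max(rate, low), high)
```

    Coupling the rate to the fitness trend is what balances exploration against exploitation: the search widens only when progress stalls.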

    Proceedings of the Third International Workshop on Mathematical Foundations of Computational Anatomy - Geometrical and Statistical Methods for Modelling Biological Shape Variability

    Computational anatomy is an emerging discipline at the interface of geometry, statistics and image analysis which aims at modeling and analyzing the biological shape of tissues and organs. The goal is to estimate representative organ anatomies across diseases, populations, species or ages, to model organ development across time (growth or aging), to establish their variability, and to correlate this variability with other functional, genetic or structural information. The Mathematical Foundations of Computational Anatomy (MFCA) workshop aims at fostering interactions between the mathematical community around shapes and the MICCAI community in view of computational anatomy applications. It particularly targets researchers investigating the combination of statistical and geometrical aspects in modeling the variability of biological shapes. The workshop is a forum for the exchange of theoretical ideas and aims at being a source of inspiration for new methodological developments in computational anatomy. A special emphasis is put on theoretical developments; applications and results are welcomed as illustrations. Following the successful first edition of this workshop in 2006 and the second edition in New York in 2008, the third edition was held in Toronto on September 22, 2011. Contributions were solicited in Riemannian and group-theoretical methods, geometric measurements of the anatomy, advanced statistics on deformations and shapes, metrics for computational anatomy, statistics of surfaces, and modeling of growth and longitudinal shape changes. 22 submissions were reviewed by three members of the program committee. To guarantee a high-level program, only 11 papers were selected for oral presentation in 4 sessions. Two of these sessions group classical themes of the workshop: statistics on manifolds, and diffeomorphisms for surface or longitudinal registration.
One session gathers papers exploring new mathematical structures beyond Riemannian geometry, while the last oral session deals with the emerging theme of statistics on graphs and trees. Finally, a poster session of 5 papers addresses more application-oriented works on computational anatomy.