5 research outputs found

    Geometric deep learning for Alzheimer's disease analysis

    Alzheimer's Disease (AD) accounts for 50-70% of dementia cases, which translates to around 25-35 million people affected by the disease. During its development, patients suffering from AD experience an irreversible cognitive decline that limits their autonomy in daily life. While many of the causes of AD are still unknown, researchers have observed abnormal amyloid deposition and neurofibrillary tangles that begin to affect the patient's short-term memory, together with other cognitive functions. In fact, these pathophysiological changes start taking place even before the patient experiences the first symptoms. One of the structures affected earliest by the disease is the hippocampus. During the development of AD, this part of the brain undergoes an irregular deformation that impairs its ability to form new memories. Therefore, much clinical work has focused on studying this structure and its evolution over the course of the disease. Identifying the changes it undergoes can help us better understand the causes of the patient's cognitive decline. Given the complexity that characterizes AD, identifying patterns during its development is still a cumbersome task for physicians. Thus, aiding the diagnosis and prognosis of the disease with Deep Learning methods can be highly beneficial, as seen in other medical applications. In particular, if the focus is set on single structures (e.g. the hippocampus), Geometric Deep Learning offers a set of models that are well suited for 3D shape representations. We believe these methods can help doctors identify abnormalities in the structure that can lead to AD in the future. In this work, we first study the capabilities of current Geometric Deep Learning methods in diagnosing patients suffering from AD by looking only at the hippocampus. We start by studying one of the simplest 3D representations, point clouds. We then compare this representation to other non-Euclidean representations, such as meshes, as well as Euclidean ones (e.g. 3D masks). We observe that meshes are among the best ways of representing 3D structures for capturing fine-grained changes, but they require additional pre-processing steps that Euclidean representations do not. Finally, once we have confirmed that Geometric Deep Learning, particularly mesh neural networks, can properly capture the effects of AD on the hippocampus, we extend their application to longitudinal analysis of the structure. We propose a new temporal model based on Spiral ResNet and Transformers that sets a new state of the art for the task of predicting longitudinal trajectories of the hippocampus. We also evaluate the effect that imputing missing longitudinal data has on detecting subjects that are developing AD. Our experiments show an increase of 3% in distinguishing between converting and stable trajectories.
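    As a rough illustration of the point-cloud baseline mentioned above (not the thesis's actual model), the sketch below shows a minimal PointNet-style classifier that maps a hippocampus point cloud to a diagnosis. The point count, channel widths, and two-class setup (AD vs. cognitively normal) are assumptions made purely for illustration.

    # Illustrative sketch, assuming a PointNet-style setup; not the thesis code.
    import torch
    import torch.nn as nn

    class PointCloudClassifier(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            # Shared per-point MLP, implemented as 1x1 convolutions over points.
            self.point_mlp = nn.Sequential(
                nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
                nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
                nn.Conv1d(128, 256, 1), nn.BatchNorm1d(256), nn.ReLU(),
            )
            # Classification head applied to the global (max-pooled) feature.
            self.head = nn.Sequential(
                nn.Linear(256, 128), nn.ReLU(), nn.Dropout(0.3),
                nn.Linear(128, num_classes),
            )

        def forward(self, points: torch.Tensor) -> torch.Tensor:
            # points: (batch, num_points, 3) hippocampus surface samples.
            x = self.point_mlp(points.transpose(1, 2))   # (batch, 256, num_points)
            x = x.max(dim=2).values                      # permutation-invariant pooling
            return self.head(x)                          # (batch, num_classes) logits

    if __name__ == "__main__":
        model = PointCloudClassifier()
        cloud = torch.randn(4, 1024, 3)                  # 4 synthetic hippocampi
        print(model(cloud).shape)                        # torch.Size([4, 2])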

    FSS-2019-nCov: A deep learning architecture for semi-supervised few-shot segmentation of COVID-19 infection

    The newly discovered coronavirus (COVID-19) pneumonia presents major challenges to research in terms of diagnosis and disease quantification. Deep-learning (DL) techniques allow extremely precise image segmentation; yet, they require huge volumes of manually labeled data to be trained in a supervised manner. Few-Shot Learning (FSL) paradigms tackle this issue by learning a novel category from a small number of annotated instances. We present an innovative semi-supervised few-shot segmentation (FSS) approach for efficient segmentation of 2019-nCov infection (FSS-2019-nCov) from only a small number of annotated lung CT scans. The key challenge of this study is to provide accurate segmentation of COVID-19 infection from a limited number of annotated instances. For that purpose, we propose a novel dual-path deep-learning architecture for FSS. Each path contains an encoder-decoder (E-D) architecture to extract high-level information while maintaining the channel information of COVID-19 CT slices. The E-D architecture primarily consists of three main modules: a feature encoder module, a context enrichment (CE) module, and a feature decoder module. We utilize the pre-trained ResNet34 as an encoder backbone for feature extraction. The CE module comprises a newly proposed Smoothed Atrous Convolution (SAC) block and a Multi-scale Pyramid Pooling (MPP) block. The conditioner path takes pairs of CT images and their labels as input and produces a relevant knowledge representation that is transferred to the segmentation path to be used for segmenting new images. To enable effective collaboration between the two paths, we propose an adaptive recombination and recalibration (RR) module that permits intensive knowledge exchange between the paths with only a trivial increase in computational complexity. The model is extended to multi-class labeling for various types of lung infections. This contribution overcomes the limitation posed by the lack of large numbers of annotated COVID-19 CT scans. It also provides a general framework for lung disease diagnosis in limited-data situations.
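    The following is a hypothetical PyTorch sketch of a context-enrichment block in the spirit of the SAC and MPP blocks named above: parallel dilated (atrous) convolutions combined with pyramid pooling. The channel sizes, dilation rates, and pooling scales are assumptions rather than the paper's actual configuration.

    # Illustrative sketch of a context-enrichment block; assumptions only,
    # not the released FSS-2019-nCov code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ContextEnrichment(nn.Module):
        def __init__(self, channels: int = 256, dilations=(1, 2, 4), pool_sizes=(1, 2, 4)):
            super().__init__()
            # Parallel dilated 3x3 convolutions capture context at several receptive fields.
            self.atrous = nn.ModuleList(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations
            )
            # Pyramid pooling branches summarise the feature map at several grid scales.
            self.pools = nn.ModuleList(
                nn.Sequential(nn.AdaptiveAvgPool2d(p), nn.Conv2d(channels, channels, 1))
                for p in pool_sizes
            )
            n_branches = 1 + len(dilations) + len(pool_sizes)
            self.fuse = nn.Conv2d(n_branches * channels, channels, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            h, w = x.shape[-2:]
            branches = [x] + [conv(x) for conv in self.atrous]
            branches += [F.interpolate(p(x), size=(h, w), mode="bilinear",
                                       align_corners=False) for p in self.pools]
            return self.fuse(torch.cat(branches, dim=1))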

    A patch-based convolutional neural network for localized MRI brain segmentation.

    Masters Degree. University of KwaZulu-Natal, Pietermaritzburg. Accurate segmentation of the brain is an important prerequisite for effective diagnosis, treatment planning, and patient monitoring. The use of manual Magnetic Resonance Imaging (MRI) segmentation in treating brain medical conditions is slowly being phased out in favour of fully-automated and semi-automated segmentation algorithms, which are more efficient and objective. Manual segmentation has, however, remained the gold standard for supervised training in image segmentation. The advent of deep learning ushered in a new era in image segmentation, object detection, and image classification. The convolutional neural network has contributed the most to the success of deep learning models. Also, the increased amount of training data available when using Patch-Based Segmentation (PBS) has facilitated improved neural network performance. On the other hand, even though deep learning models have achieved successful results, they still suffer from over-segmentation and under-segmentation for several reasons, including visually unclear object boundaries. Even though there have been significant improvements, there is still room for better results, as all proposed algorithms still fall short of a 100% accuracy rate. In the present study, experiments were carried out to improve the performance of neural network models used in previous studies. The revised algorithm was then used for segmenting the brain into three regions of interest: White Matter (WM), Grey Matter (GM), and Cerebrospinal Fluid (CSF). Particular emphasis was placed on localized component-based segmentation because both disease diagnosis and treatment planning require localized information, and there is a need to improve local segmentation results, especially for small components. In the evaluation of the segmentation results, several metrics indicated the effectiveness of the localized approach. The localized segmentation increased accuracy, recall, precision, null-error, false-positive rate, true-positive rate, and F1-score by 1.08%, 2.52%, 5.43%, 16.79%, -8.94%, 8.94%, and 3.39% respectively. Also, when compared against state-of-the-art algorithms, the proposed algorithm achieved an average predictive accuracy of 94.56%, while the next best algorithm had an accuracy of 90.83%.
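    To make the patch-based idea concrete, the sketch below (assumptions only, not the dissertation's code) extracts a fixed-size patch around each voxel of a 2D MRI slice and classifies the centre voxel into one of four tissue classes; the patch size, network width, and class count are illustrative.

    # Illustrative patch-based segmentation sketch; parameters are assumptions.
    import numpy as np
    import torch
    import torch.nn as nn

    class PatchClassifier(nn.Module):
        def __init__(self, num_classes: int = 4):
            super().__init__()
            # Small CNN that predicts the tissue class of the patch centre voxel.
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, patches: torch.Tensor) -> torch.Tensor:
            # patches: (batch, 1, patch, patch) -> per-patch class logits.
            return self.classifier(self.features(patches).flatten(1))

    def extract_patches(slice_2d: np.ndarray, patch: int = 27) -> torch.Tensor:
        """Slide a window over an MRI slice and return one patch per interior voxel."""
        half = patch // 2
        out = []
        for i in range(half, slice_2d.shape[0] - half):
            for j in range(half, slice_2d.shape[1] - half):
                out.append(slice_2d[i - half:i + half + 1, j - half:j + half + 1])
        return torch.from_numpy(np.stack(out)).unsqueeze(1).float()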

    Deep learning for medical image processing

    Medical image segmentation represents a fundamental aspect of medical image computing. It facilitates measurements of anatomical structures, such as organ volume and tissue thickness, which are critical for many classification algorithms and can be instrumental for clinical diagnosis. Consequently, enhancing the efficiency and accuracy of segmentation algorithms could lead to considerable improvements in patient care and diagnostic precision. In recent years, deep learning has become the state-of-the-art approach in various domains of medical image computing, including medical image segmentation. The key advantages of deep learning methods are their speed and efficiency, which have the potential to transform clinical practice significantly. Traditional algorithms might require hours to perform complex computations, but with deep learning such computational tasks can be executed much faster, often within seconds. This thesis focuses on two distinct segmentation strategies: voxel-based and surface-based. Voxel-based segmentation assigns a class label to each individual voxel of an image. Surface-based segmentation techniques, on the other hand, involve reconstructing a 3D surface from the input images and then segmenting that surface into different regions. This thesis presents multiple methods for voxel-based image segmentation, focusing on segmenting brain structures, white matter hyperintensities, and abdominal organs. Our approaches confront challenges such as domain adaptation, learning with limited data, and optimizing network architectures to handle 3D images. Additionally, the thesis discusses ways to handle the failure cases of standard deep learning approaches, such as dealing with rare cases like patients who have undergone organ resection surgery. Finally, the thesis turns its attention to cortical surface reconstruction and parcellation. Here, deep learning is used to extract cortical surfaces from MRI scans as triangular meshes and to parcellate these surfaces at the vertex level. The challenges posed by this approach include handling irregular and topologically complex structures. This thesis presents novel deep learning strategies for voxel-based and surface-based medical image segmentation. By addressing specific challenges in each approach, it aims to contribute to the ongoing advancement of medical image computing.
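    As a minimal illustration of voxel-based segmentation as described above (not the thesis's networks), the following PyTorch sketch assigns a class label to every voxel of a 3D volume; the channel counts and number of classes are assumptions.

    # Toy voxel-wise segmentation sketch; assumptions only.
    import torch
    import torch.nn as nn

    class TinyVoxelSegmenter(nn.Module):
        def __init__(self, num_classes: int = 4):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            )
            # 1x1x1 convolution produces per-voxel class logits.
            self.head = nn.Conv3d(32, num_classes, 1)

        def forward(self, volume: torch.Tensor) -> torch.Tensor:
            # volume: (batch, 1, D, H, W) -> logits: (batch, num_classes, D, H, W)
            return self.head(self.encoder(volume))

    if __name__ == "__main__":
        net = TinyVoxelSegmenter()
        vol = torch.randn(1, 1, 32, 32, 32)
        labels = net(vol).argmax(dim=1)      # per-voxel class labels
        print(labels.shape)                  # torch.Size([1, 32, 32, 32])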