9 research outputs found

    Liver segmentation using 3D CT scans.

    Get PDF
    Master of Science in Computer Science. University of KwaZulu-Natal, Durban, 2018. Abstract available in PDF file.

    An Investigation of Methods for CT Synthesis in MR-only Radiotherapy

    Get PDF

    Perception of Unstructured Environments for Autonomous Off-Road Vehicles

    Get PDF
    Autonomous vehicles require perception as a necessary prerequisite for controllable and safe interaction, in order to sense and understand their environment. Perception for structured indoor and outdoor environments covers economically lucrative areas such as autonomous passenger transport and industrial robotics, while the perception of unstructured environments is strongly underrepresented in the field of environment perception research. The unstructured environments analysed here pose a particular challenge, since the existing natural, organically grown geometries rarely exhibit a homogeneous structure and are dominated by similar textures and objects that are hard to separate. This complicates both the sensing of these environments and their interpretation, so perception methods must be designed and optimised specifically for this application domain. This dissertation proposes novel and optimised perception methods for unstructured environments and combines them in a holistic, three-stage pipeline for autonomous off-road vehicles: low-level, mid-level, and high-level perception. The proposed classical and machine learning (ML) perception methods complement each other. Furthermore, combining perception and validation methods at each level enables reliable perception of the potentially unknown environment, where loosely and tightly coupled validation methods are combined to guarantee a sufficient yet flexible assessment of the proposed perception methods. All methods were developed as individual modules within the perception and validation pipeline proposed in this work, and their flexible combination enables different pipeline designs for a wide range of off-road vehicles and use cases as required. 
Low-level perception provides a tightly coupled confidence assessment for raw 2D and 3D sensor data in order to detect sensor failures and guarantee sufficient sensor-data accuracy. In addition, novel calibration and registration approaches for multi-sensor perception systems are presented which use only the structure of the environment to register the captured sensor data: a semi-automatic approach for registering multiple 3D Light Detection and Ranging (LiDAR) sensors, and a confidence-based framework that combines different registration methods and enables the registration of sensors with different measurement principles. Here, the combination of several registration methods validates the registration results in a tightly coupled manner. Mid-level perception enables the 3D reconstruction of unstructured environments with two methods for estimating disparity from stereo images: a classical, correlation-based method for hyperspectral images, which requires only a limited amount of test and validation data, and a second method that estimates disparity from grayscale images with convolutional neural networks (CNNs). Novel disparity error metrics and an evaluation toolbox for the 3D reconstruction of stereo images complement the proposed disparity estimation methods and enable their loosely coupled validation. High-level perception focuses on the interpretation of single 3D point clouds for traversability analysis, object detection, and obstacle avoidance. A domain transfer analysis of state-of-the-art 3D semantic segmentation methods provides recommendations for achieving the most accurate possible segmentation in new target domains without generating new training data. 
The presented training approach for 3D segmentation methods with CNNs can further reduce the required amount of training data. Explainable artificial intelligence methods applied before and after modelling enable a loosely coupled validation of the proposed high-level methods through dataset assessment and model-agnostic explanations of CNN predictions. Remediation of contaminated sites and military logistics are the two main use cases in unstructured environments addressed in this work. These application scenarios also show how to close the gap between developing individual methods and integrating them into the processing chain of autonomous off-road vehicles, together with localisation, mapping, planning, and control. In summary, the proposed pipeline offers flexible perception solutions for autonomous off-road vehicles, and the accompanying validation guarantees an accurate and trustworthy perception of unstructured environments.
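The classical, correlation-based disparity estimation mentioned in the abstract can be illustrated with a minimal block-matching sketch. This is a generic textbook approach under assumed parameters (patch size, disparity range, sum-of-absolute-differences cost), not the dissertation's actual method:

```python
import numpy as np

def block_match_disparity(left, right, patch=3, max_disp=8):
    """For each pixel in the rectified left image, find the horizontal
    shift whose patch in the right image has the lowest
    sum-of-absolute-differences cost."""
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            best, best_cost = 0, np.inf
            # Scan candidate disparities along the epipolar line
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best, best_cost = d, cost
            disp[y, x] = best
    return disp
```

For a synthetic pair where the right image is the left shifted by a constant disparity, the interior of the map recovers that shift exactly; real stereo pairs additionally need cost aggregation and sub-pixel refinement.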

    Patch-based segmentation with spatial context for medical image analysis

    Get PDF
    Accurate segmentations in medical imaging play a crucial role in many applications, from patient diagnosis to population studies. As the amount of data generated from medical images increases, the ability to perform this task without human intervention becomes ever more desirable. One approach, known broadly as atlas-based segmentation, is to propagate labels from images which have already been manually labelled by clinical experts. Methods using this approach have been shown to be effective in many applications, demonstrating great potential for the automatic labelling of large datasets. However, these methods usually require the use of image registration and are dependent on its outcome. Any registration errors that occur are also propagated to the segmentation process and are likely to have an adverse effect on segmentation accuracy. Recently, patch-based methods have been shown to allow a relaxation of the required image alignment whilst achieving similar results. In general, these methods label each voxel of a target image by comparing the image patch centred on the voxel with neighbouring patches from an atlas library and assigning the most likely label according to the closest matches. The main contributions of this thesis focus on this approach, providing accurate segmentation results whilst minimising the dependency on registration quality. In particular, this thesis proposes a novel kNN patch-based segmentation framework, which utilises both intensity and spatial information, and explores the use of spatial context in a diverse range of applications. The proposed methods extend the potential for patch-based segmentation to tolerate registration errors by redefining the "locality" for patch selection and comparison, whilst also allowing similar-looking patches from different anatomical structures to be differentiated. 
The methods are evaluated on a wide variety of image datasets, ranging from the brain to the knees, demonstrating their potential with results which are competitive with state-of-the-art techniques.
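The patch-comparison step described in the abstract can be sketched as follows. This is a generic illustration of kNN patch-based label fusion, with all names, the distance measure, and parameters assumed rather than taken from the thesis:

```python
import numpy as np

def knn_patch_label(target_patch, atlas_patches, atlas_labels, k=5):
    """Label a voxel by comparing the patch centred on it against an
    atlas library and majority-voting over the k closest matches."""
    # Sum-of-squared-differences between the target patch and every atlas patch
    dists = np.sum((atlas_patches - target_patch) ** 2, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k best matches
    votes = atlas_labels[nearest]
    # Most frequent label among the k nearest patches wins
    return np.bincount(votes).argmax()

# Toy example: 3x3 patches flattened to length-9 vectors
rng = np.random.default_rng(0)
atlas_patches = rng.normal(size=(100, 9))
atlas_labels = rng.integers(0, 2, size=100)
label = knn_patch_label(atlas_patches[0], atlas_patches, atlas_labels)
```

In the framework proposed by the thesis, the candidate set is additionally constrained by spatial context, so that similar-looking patches from different anatomical structures do not enter the vote.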

    U-Net based deep convolutional neural network models for liver segmentation from CT scan images

    Get PDF
    Liver segmentation is a critical task for the diagnosis, treatment, and follow-up of liver cancer. Computed Tomography (CT) scans are the common medical imaging modality for the segmentation task. Liver segmentation is considered a very hard task for many reasons. Medical images are of limited availability to researchers. Liver shape changes with the patient's position during the CT scan and varies from one patient to another depending on health conditions. The liver and other organs, for example the heart, stomach, and pancreas, share a similar grayscale range in CT images. Surgical treatment of the liver is very critical because the liver contains a significant amount of blood and lies very close to critical organs such as the heart, lungs, and stomach, and to crucial blood vessels. Therefore, segmentation accuracy is critical for defining the shape and position of the liver and its tumours, especially when the treatment surgery is conducted using radio-frequency heating or cryoablation needles. In the literature, convolutional neural networks (CNNs) have achieved very high accuracy on liver segmentation, and the U-Net model is considered the state of the art for medical image segmentation. Many researchers have developed CNN models based on U-Net and stacked U-Nets with or without bridged connections. However, CNN models need a significant number of labelled samples for training and validation, which are not commonly available in the case of liver CT images. Generating manually annotated masks for the training samples is time consuming and requires the involvement of expert clinicians. Data augmentation has thus been widely used to boost the sample size for model training. Using rotation in steps of 15° together with horizontal and vertical flipping as augmentation techniques addresses the shortage of datasets and training samples. 
Rotation and flipping were chosen because, in real-life situations, most CT scans are recorded while the patient lies face down, or at 45°, 60°, or 90° on the right side, according to the location of the tumour. Nonetheless, this process introduced a new issue for liver segmentation: due to the rotation and flipping augmentations, the trained model detected part of the heart as liver when it appeared on the wrong side of the body. The first part of this research conducted an extensive experimental study of U-Net-based models in terms of depth and width, and of variant bridging and skip connections, in order to give recommendations for using U-Net-based models. Top-down and bottom-up approaches were used to construct variations of deeper models, whilst two, three, and four stacked U-Nets were applied to construct the wider U-Net models. The variation of the skip connections between two and three U-Nets is the key factor in the study. The proposed model uses two bridged U-Nets with three extra skip connections between the U-Nets to overcome the flipping issue. A new loss function based on minimising the distance between the centres of mass of the predicted blobs has also enhanced liver segmentation accuracy. Finally, the deep-supervision concept was integrated with the new loss functions, where the total loss was calculated as the weighted sum of the loss functions over each deep-supervision output. This achieved a segmentation accuracy of up to 90%. The proposed model of two bridged U-Nets with compound skip connections and a specific number of levels, layers, filters, and image size increased the accuracy of liver segmentation to ~90%, whereas the original U-Net and bridged nets recorded a segmentation accuracy of ~85%. 
Although applying extra deeply supervised layers and a weighted compound of Dice-coefficient and centroid loss functions solved the flipping issue with ~93% accuracy, there is still room for improvement by applying image enhancement as a pre-processing stage.
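The compound loss described above, a weighted sum of a Dice term and a centre-of-mass distance term, can be sketched as follows. The weights, helper names, and the use of plain NumPy are assumptions for illustration, and the sketch omits the deep-supervision weighting:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 minus the Dice coefficient of the two masks."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def centroid(mask, eps=1e-6):
    """Centre of mass of a (soft) 2-D mask."""
    ys, xs = np.indices(mask.shape)
    total = mask.sum() + eps
    return np.array([(ys * mask).sum() / total, (xs * mask).sum() / total])

def compound_loss(pred, target, w_dice=1.0, w_cent=0.1):
    """Weighted sum of the Dice loss and the distance between the
    centres of mass of the predicted and reference masks; the centroid
    term penalises blobs that appear on the wrong side of the body."""
    cent_dist = np.linalg.norm(centroid(pred) - centroid(target))
    return w_dice * dice_loss(pred, target) + w_cent * cent_dist

# Toy check: identical masks give a near-zero loss
mask = np.zeros((8, 8))
mask[2:5, 2:5] = 1.0
loss_same = compound_loss(mask, mask)
```

A prediction shifted towards the wrong location scores a strictly higher loss than a correct one, which is what lets the centroid term discourage mirror-image mistakes introduced by flipping augmentations.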

    Use of Serial Block Face-Scanning Electron Microscopy to Study the Ultrastructure of Vertebrate and Invertebrate Biology

    Get PDF
    PhD Thesis. The development of Serial Block Face Scanning Electron Microscopy (SBF-SEM) allows for the acquisition of serially sectioned, imaged data of ultrastructure at high resolution. In this project, optimisation of both SBF-SEM methodology and 3-D image segmentation analysis was applied to the ultrastructural examination of two types of biological tissue, each requiring a different experimental approach. The first project was a connectomics-based study to determine the relationship between the neurons that synapse upon the Lobula Giant Movement Detector 2 (LGMD 2) neuron within the optic lobe of the locust. A substantial portion of the LGMD 2 neuron was reconstructed along with the afferent neurons, enabling the discovery of retinotopic mapping from the photoreceptors of the eye onto the LGMD 2 neuron. A sub-class of afferent neurons was also found, most likely vital in the process of signal integration across the large LGMD 2 neuron. For the second project, two types of skeletal muscle (psoas and soleus) obtained from fetal and adult guinea pigs were analysed to assess tissue-specific changes in mitochondrial morphology with muscle maturation. Distinct mitochondrial shapes were found across both muscles and age groups, and a classification system was developed. It was found that, in both muscles, by late fetal gestation the mitochondrial network is well developed and akin to that found in the adult. Quantitative and qualitative differences in mitochondrial morphology and complexity were found between the two muscles in the adult group. These differences are likely related to functional specialisation. All data collected during the experiments, roughly 240 GB, have also been made available online on Zenodo and can be used for further studies. Overall, SBF-SEM was proven to be a robust method of gaining new insights into ultrastructure in both models and has wide-ranging capabilities for a variety of experimental objectives.