
    Automated Characterisation and Classification of Liver Lesions From CT Scans

    Cancer is a general term for a wide range of diseases that can affect any part of the body through the rapid creation of abnormal cells that grow beyond their normal boundaries. Liver cancer is one of the most common of these diseases, causing more than 600,000 deaths each year. Early detection is important for diagnosis and for reducing mortality. Liver lesions are examined with various medical imaging modalities such as ultrasound (US), computed tomography (CT) and magnetic resonance imaging (MRI). Improvements in medical imaging and image processing techniques have significantly enhanced the interpretation of medical images. Computer-Aided Diagnosis (CAD) systems based on these techniques play a vital role in the early detection of liver disease and hence reduce the liver cancer death rate. Moreover, CAD systems can help physicians, as a second opinion, in characterising lesions and making the diagnostic decision; they have therefore become an important research area, since they can provide diagnostic assistance to doctors and improve overall diagnostic accuracy. Traditional methods of characterising liver lesions and differentiating normal liver tissue from abnormal tissue depend largely on the radiologist's experience. CAD systems based on image processing and artificial intelligence techniques have thus gained a lot of attention, since they can provide constructive diagnostic suggestions to clinicians for decision making. Liver lesions are characterised in two ways: (1) using a content-based image retrieval (CBIR) approach to assist the radiologist in liver lesion characterisation; (2) calculating high-level features that describe/characterise the liver lesion in a way that can be interpreted by humans, particularly radiologists/clinicians, based on hand-crafted/engineered computational features (low-level features) and a learning process.
The research gap, however, lies in deriving a high-level understanding and interpretation of medical image content from low-level pixel analysis using mathematical processing and artificial intelligence methods. In our work, this gap is bridged by establishing a relation between image content and medical meaning, in analogy to a radiologist's understanding. This thesis explores an automated system for the classification and characterisation of liver lesions in CT scans. Firstly, the liver is segmented automatically using anatomical medical knowledge, a histogram-based adaptive threshold and morphological operations. The lesions and vessels are then extracted from the segmented liver by applying AFCM and a Gaussian mixture model through a region-growing process, respectively. Secondly, the proposed framework categorises the high-level features into two groups: the first group comprises high-level features extracted directly from the image content (lesion location, lesion focality, calcification, scar, ...); the second comprises high-level features inferred from the low-level features through a machine learning process to characterise the lesion (lesion density, lesion rim, lesion composition, lesion shape, ...). A novel multiple-ROI selection approach is proposed, in which regions are derived from an abnormality-level map generated from the intensity difference and the proximity distance of each voxel with respect to normal liver tissue. The associations between low-level features, high-level features and the appropriate ROI are then derived by assigning to each ROI the ability to represent a set of lesion characteristics. Finally, a novel feature vector is built from the high-level features and fed into an SVM for lesion classification. In contrast with most existing research, which uses low-level features only, the use of high-level features and characterisation helps in interpreting and explaining the diagnostic decision.
The methods are evaluated on a dataset containing 174 CT scans. The experimental results demonstrate the efficacy of the proposed framework in the successful characterisation and classification of liver lesions in CT scans: the average accuracy was 95.56% for liver lesion characterisation, while the lesion classification accuracy was 97.1% for the entire dataset. The proposed framework provides a more robust and efficient lesion characterisation pipeline by interpreting the low-level features to generate semantic features, and the use of high-level features (characterisation) helps in better interpretation of CT liver images. In addition, difference-of-features over multiple ROIs was developed to capture lesion characteristics robustly and reliably. This contrasts with the current research trend of extracting features from the lesion only, without paying much attention to the relation between the lesion and its surrounding area. The design of the liver lesion characterisation framework is based on prior medical knowledge, so as to obtain a better and clearer understanding of liver lesion characteristics in medical CT images.
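The first stage above (histogram-based thresholding followed by morphological clean-up and largest-component selection) can be illustrated with a minimal 2D sketch. This is not the thesis's implementation: the Hounsfield window, the structuring element and the helper name `segment_largest_soft_tissue` are illustrative assumptions, and the real system additionally incorporates anatomical knowledge and an adaptive threshold.

```python
# Illustrative sketch only: fixed HU window and 3x3 structuring element
# are assumptions, not the thesis's parameters.
import numpy as np
from scipy import ndimage

def segment_largest_soft_tissue(slice_hu, hu_window=(0, 200)):
    """Threshold a CT slice (in Hounsfield units) around soft tissue,
    clean the mask morphologically, and keep the largest component."""
    lo, hi = hu_window
    mask = (slice_hu >= lo) & (slice_hu <= hi)
    # Morphological opening removes small spurious regions.
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    # Fill interior holes (e.g. vessels with different attenuation).
    mask = ndimage.binary_fill_holes(mask)
    # Keep only the largest connected component (assumed to be the liver).
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```

In a full pipeline the window would be derived adaptively from the slice histogram rather than hard-coded.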

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain, because the contrast between grey and white matter is reversed; this causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated by performing a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning the cortical surface is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment.
Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
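The EM framework the thesis builds on can be sketched as a plain two-class Gaussian-mixture fit to voxel intensities. This shows only the standard algorithm; the novel explicit correction for mislabelled partial-volume voxels is not reproduced, and the function name and percentile-based initialisation are illustrative assumptions.

```python
# Minimal two-class Gaussian-mixture EM over 1D intensities -- the standard
# baseline only, without the thesis's partial-volume correction.
import numpy as np

def em_two_class(x, iters=50):
    x = np.asarray(x, float)
    mu = np.percentile(x, [25, 75])            # illustrative initial means
    sd = np.full(2, x.std() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: per-voxel class responsibilities.
        d = (x[:, None] - mu) / sd
        p = pi * np.exp(-0.5 * d**2) / (sd * np.sqrt(2 * np.pi))
        r = p / p.sum(1, keepdims=True)
        # M-step: update weights, means and standard deviations.
        n = r.sum(0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(0) / n
        sd = np.sqrt((r * (x[:, None] - mu)**2).sum(0) / n) + 1e-6
    return mu, sd, r.argmax(1)
```

The partial-volume extension would add explicit mixture components for voxels containing both tissue classes.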

    Machine learning-based automated segmentation with a feedback loop for 3D synchrotron micro-CT

    The development of third-generation synchrotron light sources laid the foundation for investigating the 3D structure of opaque samples at micrometre resolution and beyond. This led to the development of X-ray synchrotron micro-computed tomography, which fostered the creation of imaging facilities for studying samples of many kinds, e.g. model organisms, in order to better understand the physiology of complex living systems. The development of modern control systems and robotics enabled full automation of X-ray imaging experiments and on-line calibration of the parameters of the experimental setup. Advances in digital detector systems brought improvements in resolution, dynamic range, sensitivity and other key properties. These improvements led to a considerable increase in the throughput of the imaging process, but at the same time the experiments began to generate substantially larger volumes of data, up to tens of terabytes, which were subsequently processed manually. These technical advances thus paved the way for more efficient high-throughput experiments studying large numbers of samples and producing datasets of better quality. There is therefore a strong need in the scientific community for an efficient, automated X-ray data-analysis workflow that can handle such a data load and deliver valuable insights to domain experts. Existing solutions for such a workflow are not directly applicable to high-throughput experiments, since they were developed for ad-hoc scenarios in medical imaging; they are therefore neither optimised for high-throughput data streams nor able to exploit the hierarchical nature of samples.
The main contribution of this work is a new automated analysis workflow suited to the efficient processing of heterogeneous, hierarchically structured X-ray datasets. The developed workflow builds on improved methods for data pre-processing, registration, localisation and segmentation. Every stage of the workflow that involves a training phase can be fine-tuned automatically to find the best hyperparameters for the specific dataset. For the analysis of fibre structures in samples, a new, highly parallelisable 3D orientation-analysis method was developed, based on a novel concept of emitting rays, which enables more precise morphological analysis. All developed methods were thoroughly validated on synthetic datasets to quantitatively assess their applicability under different imaging conditions. The workflow was shown to be capable of processing series of datasets of a similar kind. In addition, efficient CPU/GPU implementations of the developed workflow and methods are presented and made available to the community as modules for the Python language. The developed automated analysis workflow was successfully applied to micro-CT datasets acquired in high-throughput X-ray experiments in developmental biology and materials science. In particular, the workflow was applied to the analysis of medaka fish datasets, enabling automated segmentation and subsequent morphological analysis of the brain, liver, head kidney and heart. Furthermore, the developed 3D orientation-analysis method was used in the morphological analysis of polymer-scaffold datasets in order to steer a fabrication process towards desirable properties.
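The emitting-rays method itself is not detailed in the abstract. As a standard point of comparison, local 3D fibre orientation is classically estimated with a structure tensor: the smoothed outer product of intensity gradients, whose eigenvector for the smallest eigenvalue points along the fibre. The sketch below implements that classical baseline, not the thesis's method; the function name and smoothing scale are illustrative.

```python
# Classical structure-tensor orientation analysis (baseline, NOT the
# emitting-rays method described in the abstract).
import numpy as np
from scipy import ndimage

def structure_tensor_orientation(vol, sigma=2.0):
    """Return, per voxel, the unit vector of least intensity variation
    (the local fibre direction for fibre-like structures)."""
    g = np.gradient(vol.astype(float))
    # Build and Gaussian-smooth the 3x3 gradient outer-product tensor.
    T = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            T[..., i, j] = ndimage.gaussian_filter(g[i] * g[j], sigma)
    w, v = np.linalg.eigh(T)   # eigenvalues ascending along last axis
    return v[..., 0]           # eigenvector of the smallest eigenvalue
```

Per-voxel eigendecomposition is embarrassingly parallel, which is one reason orientation analysis maps well onto the CPU/GPU implementations mentioned above.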

    Towards automated three-dimensional tracking of nephrons through stacked histological image sets

    A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, for the degree of Master of Science in Engineering. August 2015.
The three-dimensional microarchitecture of the mammalian kidney is of keen interest in the fields of cell biology and biomedical engineering, as it plays a crucial role in renal function. This study presents a novel approach to the automatic tracking of individual nephrons through three-dimensional histological image sets of mouse and rat kidneys. The image database forms part of a previous study carried out at the University of Aarhus, Denmark, which involved manually tracking a few hundred nephrons through the image sets in order to explore the renal microarchitecture; its results form the gold standard for this study. The purpose of the current research is to develop methods which contribute towards creating an automated, intelligent system as a standard tool for such image sets. This would reduce the excessive time and human effort previously required for the tracking task, enabling a larger sample of nephrons to be tracked. It would also be desirable, in future, to explore the renal microstructure of various species and diseased specimens. The developed algorithm is robust, able to isolate closely packed nephrons and track their convoluted paths despite a number of non-ideal conditions such as local image distortions, artefacts and connective tissue interference. The system consists of initial image pre-processing steps such as background removal, adaptive histogram equalisation and image segmentation. A feature extraction stage achieves data abstraction and information concentration by extracting shape descriptors, radial shape profiles and key coordinates for each nephron cross-section. A custom graph-based tracking algorithm is implemented to track the nephrons using the extracted coordinates.
A rule-base and machine learning algorithms, including an Artificial Neural Network and a Support Vector Machine, are used to evaluate the shape features and other information, validating the algorithm's results through each of its iterations. The validation steps prove highly effective in rejecting incorrect tracking moves, with the rule-base achieving greater than 90% accuracy and the Artificial Neural Network and Support Vector Machine both producing 93% classification accuracy. Comparison of a selection of automatically and manually tracked nephrons yielded 95% accuracy and 98% tracking extent for the proximal convoluted tubule, proximal straight tubule and ascending thick limb of the loop of Henle. The ascending and descending thin limbs of the loop of Henle pose a challenge, with low accuracy and low tracking extent due to the low resolution, narrow diameter and high density of cross-sections in the inner medulla. Limited manual intervention is proposed as a solution to these limitations, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron. The developed semi-automatic system saves a considerable amount of time and effort in comparison with the manual task. Furthermore, the developed methodology forms a foundation for future development towards a fully automated tracking system for nephrons.
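The core tracking idea, linking cross-section centroids on consecutive slices, can be sketched as a greedy nearest-neighbour assignment with a distance gate. This is a deliberate simplification: the thesis's tracker validates each move with a rule-base, an ANN and an SVM, none of which appear here, and the coordinates and gate value below are illustrative, not from the study.

```python
# Greedy, gated nearest-neighbour linking of cross-section centroids
# across slices -- a simplified sketch of graph-based tubule tracking.
import math

def track(sections, max_dist=15.0):
    """sections: list of centroid lists [(x, y), ...], one list per slice.
    Returns paths as lists of (slice_index, centroid)."""
    paths = [[(0, c)] for c in sections[0]]
    for z in range(1, len(sections)):
        candidates = list(sections[z])
        for path in paths:
            last_z, (lx, ly) = path[-1]
            if last_z != z - 1 or not candidates:
                continue  # path already terminated, or nothing left to assign
            best = min(candidates,
                       key=lambda c: math.hypot(c[0] - lx, c[1] - ly))
            if math.hypot(best[0] - lx, best[1] - ly) <= max_dist:
                path.append((z, best))
                candidates.remove(best)  # enforce one-to-one assignment
    return paths
```

In the full system each candidate link would additionally be scored against the extracted shape features before being accepted.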

    General Dynamic Surface Reconstruction: Application to the 3D Segmentation of the Left Ventricle

    This thesis describes a contribution to the three-dimensional reconstruction of the internal and external surfaces of the human left ventricle. The reconstruction is the first stage of a complete Virtual Reality application designed as an important diagnostic tool for hospitals. Starting from the reconstructed surfaces, the application provides the expert with interactive real-time manipulation of the model, together with volume calculations and other parameters of interest. The surface-recovery process is characterised by its speed of convergence, the smoothness of the final meshes and its precision with respect to the recovered data. Since diagnosing cardiac pathologies requires experience, time and extensive professional knowledge, simulation is a key process that improves efficiency.
The algorithms and implementations have been applied to both synthetic and real datasets that differ in the amount of missing data, a situation that arises in pathological and abnormal cases. The datasets include single-instant acquisitions and complete cardiac cycles. The quality of the reconstruction system has been evaluated with medical parameters, in order to compare our final results with those derived from typical software used by medical professionals.
Besides the direct application to medical diagnosis, our methodology supports generic reconstructions in the field of 3D computer graphics. Our reconstructions can generate three-dimensional models at low cost in terms of the manual interaction required and the associated computational load. Furthermore, our method can be understood as a robust tessellation algorithm that builds surfaces from point clouds obtainable from laser scanners or magnetic sensors, among other hardware.
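As a minimal, generic illustration of turning a point cloud into a triangle mesh, the sketch below uses a convex hull via SciPy. Unlike the dynamic-surface method described above, a hull only handles convex shapes; this is a standard stand-in, not the thesis's algorithm, and the synthetic sphere samples replace real scanner data.

```python
# Triangulating a point cloud with a convex hull -- a convex-only stand-in
# for the general tessellation method described in the abstract.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(42)
# Sample 200 points on a unit sphere as a stand-in for scanner data.
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

hull = ConvexHull(pts)
# hull.simplices is an (n_triangles, 3) array of vertex indices; together
# these triangles form a closed surface mesh over the point cloud.
print(hull.simplices.shape[1])  # 3 vertices per triangle
```

Handling concave anatomy such as a ventricle is precisely what requires the more general surface-evolution approach.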

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and the digitalisation of medical care have generated enormous amounts of medical images in recent years. In this big-data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.