99 research outputs found

    Novel Approaches to the Representation and Analysis of 3D Segmented Anatomical Districts

    Nowadays, image processing and 3D shape analysis are an integral part of clinical practice and have the potential to support clinicians with advanced analysis and visualization techniques. Both approaches provide visual and quantitative information to medical practitioners, although from different points of view: shape analysis studies the morphology of anatomical structures, while image processing focuses on the tissue or functional information carried by the pixel/voxel intensity levels. Despite the progress made by research in both fields, a junction between these two complementary worlds is missing. When analyzing shape features on 3D models, the information of the volume surrounding the structure is lost, since a segmentation process is needed to obtain the 3D shape model; the 3D nature of the anatomical structure, however, is represented explicitly. With volume images, instead, the tissue information of the imaged volume is the core of the analysis, while the shape and morphology of the structure are only implicitly represented and thus not clearly exposed. The aim of this Thesis is the integration of these two approaches in order to increase the amount of information available to physicians, allowing a more accurate analysis of each patient. An augmented visualization tool able to provide information on both the anatomical structure shape and the surrounding volume through a hybrid representation could reduce the gap between the two approaches and provide a more complete anatomical rendering of the subject. To this end, given a segmented anatomical district, we propose a novel mapping of volumetric data onto the segmented surface. The grey levels of the image voxels are mapped through a volume-surface correspondence map, which defines a grey-level texture on the segmented surface. The resulting texture mapping is coherent with the local morphology of the segmented anatomical structure and provides an enhanced visual representation of the anatomical district. The integration of volume-based and surface-based information in a single 3D representation also supports the identification and characterization of morphological landmarks and pathology evaluations. The main research contributions of the Ph.D. activities and Thesis are:
    • the development of a novel integration algorithm that combines surface-based (segmented 3D anatomical structure meshes) and volume-based (MRI volumes) information; the integration supports different criteria for mapping the grey levels onto the segmented surface;
    • the development of methodological approaches for using the grey-level mapping together with morphological analysis, with the final goal of solving real clinical tasks, such as the identification of (patient-specific) ligament insertion sites on bones from segmented MR images, the characterization of the local morphology of bones/tissues, and the early diagnosis, classification, and monitoring of musculoskeletal pathologies;
    • the analysis of segmentation procedures, with a focus on the tissue classification process, in order to reduce operator dependency and to overcome the absence of a real gold standard for the evaluation of automatic segmentations;
    • the evaluation and comparison of (unsupervised) segmentation methods, aimed at defining a novel segmentation method for low-field MR images and at the local correction/improvement of a given segmentation.
The proposed method is simple but effectively integrates information derived from medical image analysis and 3D shape analysis. Moreover, the algorithm is general enough to be applied to different anatomical districts, independently of the segmentation method, imaging technique (such as CT), or image resolution. The volume information can easily be integrated into different shape analysis applications, taking into consideration not only the morphology of the input shape but also the real context in which it is embedded, in order to solve clinical tasks. The results obtained by this combined analysis have been evaluated through statistical analysis.
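As a rough illustration of the general idea (not the thesis's actual correspondence map), the sketch below samples MRI grey levels at the vertices of a segmented surface mesh by trilinear interpolation, yielding a per-vertex grey-level texture; the function name, coordinate conventions and nearest-voxel boundary handling are assumptions made for this example.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def map_grey_levels_to_surface(volume, vertices_mm, spacing, origin):
        """Sample voxel grey levels at surface vertex positions (trilinear).

        volume      : 3D numpy array of MRI intensities, axis order (z, y, x)
        vertices_mm : (N, 3) vertex coordinates in physical space (x, y, z), mm
        spacing     : voxel spacing (sx, sy, sz) in mm
        origin      : physical position of voxel (0, 0, 0) in mm
        Returns an (N,) array of grey levels, one per vertex.
        """
        # Convert physical coordinates to continuous voxel indices (x, y, z).
        idx = (vertices_mm - np.asarray(origin)) / np.asarray(spacing)
        # map_coordinates expects index order matching the array axes (z, y, x).
        coords = idx[:, ::-1].T
        # Trilinear interpolation (order=1); points outside the volume take the edge value.
        return map_coordinates(volume, coords, order=1, mode='nearest')

    # Hypothetical usage: the per-vertex grey levels become a texture on the mesh.
    # grey = map_grey_levels_to_surface(mri_volume, mesh_vertices, (1.0, 1.0, 1.0), (0.0, 0.0, 0.0))

Richer correspondence criteria, for instance averaging intensities along the local surface normal, would replace the single-point lookup in this sketch.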

    Development of procedures for the design, optimization and manufacturing of customized orthopaedic and trauma implants: Geometrical/anatomical modelling from 3D medical imaging

    Doctoral Thesis (Doctoral Programme in Biomedical Engineering). The introduction of imaging techniques in the 1970s is one of the most relevant historical milestones in modern medicine. Medical imaging techniques have dramatically changed our understanding of human anatomy and physiology. The ability to non-invasively extract visual information has allowed not only the three-dimensional representation of the internal organs and the musculo-skeletal system, but also the simulation of surgical procedures, the execution of computer-aided surgeries, the development of more accurate biomechanical models, and the development of custom-made implants, among others. The combination of the most advanced medical imaging systems with the most advanced CAD and CAM techniques may allow the development of custom-made implants that meet patient-specific traits. The geometrical and functional optimization of these devices may increase implant life expectancy, especially in patients with marked deviations from the anatomical standards. In the implant customization protocol from medical image data, several steps need to be followed sequentially, namely: Medical Image Processing and Recovery; Accurate Image Segmentation and 3D Surface Model Generation; Geometrical Customization based on CAD and CAE techniques; FEA Optimization of the Implant Geometry; and Manufacturing using CAD-CAM Technologies. This work aims to develop the necessary procedures for custom implant development from medical image data. This includes the extraction of highly accurate three-dimensional representations of the musculo-skeletal system from Computed Tomography imaging, and the development of customized implants, given the specific requirements of the target anatomy and the applicable best practices found in the literature. A two-step segmentation protocol is proposed. In the first step, the region of interest is pre-segmented in order to obtain a good approximation to the desired geometry. Next, a fully automatic segmentation refinement is applied to obtain a more accurate representation of the target domain. The refinement step is composed of several sub-steps, more precisely: the recovery of the original image, considering the limiting resolution of the imaging system; image cropping; image interpolation; and segmentation refinement over the up-sampled domain. Highly accurate segmentations of the target domain were obtained with the proposed pipeline. The limiting factor for the accurate description of the domain is the image acquisition process, rather than the subsequent image processing, segmentation and surface meshing steps. The new segmentation pipeline was used in the development of three tailor-made implants, namely, a tibial nailing system, a mandibular implant, and a Total Hip Replacement system. Implant optimization is carried out with Finite Element Analysis, considering the critical loading conditions that may be applied to each implant in working conditions. The new tibial nailing system is able to sustain critical loads without implant failure; the new mandibular endoprosthesis allows the recovery of the natural stress and strain fields observed in intact mandibles; and the Total Hip Replacement system shows strain shielding levels comparable to those of commercially available stems. In summary, in the present thesis the necessary procedures for custom implant design are investigated, and new algorithms are proposed.
The guidelines for the characterization of the image acquisition, image processing, image segmentation and 3D reconstruction steps are presented and discussed. This new image processing pipeline is applied and validated in the development of the three above-mentioned customized implants, for different medical applications and satisfying specific anatomical needs. Fundação para a Ciência e Tecnologia (FCT)
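A highly simplified sketch of such a crop, up-sample and refine loop is shown below; the zoom factor, the narrow-band width and the plain intensity-based refinement stand in for the thesis's image recovery and Level-Sets steps and are illustrative assumptions only.

    import numpy as np
    from scipy import ndimage

    def refine_segmentation(image, rough_mask, zoom=2.0, margin=5, iterations=10):
        """Crop around a rough pre-segmentation, up-sample, and refine the boundary.

        image      : 3D grey-level volume
        rough_mask : boolean pre-segmentation of the region of interest
        Returns a refined boolean mask at the up-sampled resolution.
        """
        # 1. Crop a bounding box around the pre-segmented region (plus a safety margin).
        zs, ys, xs = np.nonzero(rough_mask)
        sl = tuple(slice(max(c.min() - margin, 0), c.max() + margin + 1) for c in (zs, ys, xs))
        img_roi, mask_roi = image[sl], rough_mask[sl]

        # 2. Interpolate the cropped image and mask onto a finer grid.
        img_hi = ndimage.zoom(img_roi.astype(float), zoom, order=3)
        mask_hi = ndimage.zoom(mask_roi.astype(float), zoom, order=1) > 0.5

        # 3. Simple boundary refinement: re-label voxels in a narrow band around the
        #    current contour according to the mean intensity inside/outside the mask.
        for _ in range(iterations):
            band = ndimage.binary_dilation(mask_hi, iterations=2) & ~ndimage.binary_erosion(mask_hi, iterations=2)
            mu_in, mu_out = img_hi[mask_hi].mean(), img_hi[~mask_hi].mean()
            closer_to_inside = np.abs(img_hi - mu_in) < np.abs(img_hi - mu_out)
            mask_hi = np.where(band, closer_to_inside, mask_hi)
        return mask_hi

A proper level-set refinement would evolve a signed distance function with curvature regularization instead of the band re-labelling used here; the sketch only mirrors the crop/interpolate/refine structure of the pipeline.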

    Challenges and Opportunities of End-to-End Learning in Medical Image Classification

    The paradigm of end-to-end learning has revolutionized image recognition in recent years, but clinical application lags behind. Image-based computer-aided diagnosis systems are still largely built on highly engineered, domain-specific pipelines composed of independent rule-based models that mirror the sub-tasks of image classification: localization of salient regions, feature extraction, and decision making. The promise of superior decision making with end-to-end learning arises from removing domain-specific constraints of limited complexity and instead optimizing all system components simultaneously, directly from the raw data, and with respect to the final task. The reasons why these advantages have not yet found their way into the clinic, i.e. the challenges faced when developing deep learning-based diagnosis systems, are manifold: the fact that the generalization ability of learning algorithms depends on how well the available training data reflect the true underlying data distribution proves to be a profound problem in medical applications. Annotated data sets in this domain are notoriously small, since annotation requires costly expert assessment and the pooling of smaller data sets is often hindered by data protection regulations and patient rights. In addition, medical data sets exhibit drastically different characteristics with respect to imaging modalities, imaging protocols, or anisotropies, and the often ambiguous evidence in medical images can propagate into inconsistent or erroneous training annotations. While the shift of data distributions between the research environment and reality leads to reduced model robustness and is therefore currently regarded as the main obstacle to the clinical application of learning algorithms, this gap is often widened further by confounding factors such as hardware limitations or the granularity of the given annotations, which lead to discrepancies between the modeled task and the underlying clinical question. This thesis investigates the potential of end-to-end learning in clinical diagnosis systems and presents contributions to some of the key challenges that currently prevent broad clinical application. First, the last part of the classification pipeline is examined: the categorization into clinical pathologies. We demonstrate how replacing the current clinical standard of rule-based decisions with large-scale feature extraction followed by learning-based classifiers significantly improves breast cancer classification in MRI and achieves human-level performance. This approach is further demonstrated on cardiac diagnosis. Second, following the paradigm of end-to-end learning, we replace the biophysical model used for image normalization in MRI, as well as the extraction of hand-crafted features, with a dedicated CNN architecture and provide an in-depth analysis that reveals the hidden potential of learned image normalization and a complementary value of the learned features over the hand-crafted features.
While this approach operates on annotated regions and therefore relies on manual annotation, in the third part we include the task of localizing these regions in the learning process, enabling true end-to-end diagnosis based on the raw images. In doing so, we identify a largely neglected trade-off between the aspiration to evaluate models at clinically relevant scales on the one hand, and the optimization for efficient training under data scarcity on the other. We present a deep learning model that contributes to resolving this trade-off, provide extensive experiments on three medical data sets as well as a series of toy experiments that examine the behavior under limited training data in detail, and publish a comprehensive framework that includes, among other things, the first 3D implementations of common object detection models. We identify further leverage points in existing end-to-end learning systems where domain knowledge can serve as a constraint to increase the robustness of models in medical image analysis, which should ultimately help pave the way for application in clinical practice. To this end, we address the challenge of erroneous training annotations by replacing the classification component in end-to-end object detection with regression, which makes it possible to train models directly on the continuous scale of the underlying pathological processes and thus increases the robustness of the models against erroneous training annotations. We further address the challenge of input heterogeneities that trained models face when deployed at different clinical sites by proposing a model-based domain adaptation that makes it possible to recover the original training domain from altered inputs and thus ensures robust generalization. Finally, we tackle the highly unsystematic, laborious and subjective trial-and-error process of finding robust hyperparameters for a given task by transferring domain knowledge into a set of systematic rules that enable automated and robust configuration of deep learning models on a variety of medical data sets. In summary, the work presented here demonstrates the enormous potential of end-to-end learning algorithms compared to the clinical standard of multi-part, highly engineered diagnosis pipelines, and presents proposed solutions to some of the key challenges to broad application under real-world conditions, such as data scarcity, discrepancy between the task addressed by the model and the underlying clinical question, ambiguities in training annotations, or shifts of data domains between clinical sites. These contributions can be seen as part of the overarching goal of automating medical image classification, an integral component of the transformation required to shape the future of healthcare.
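As a toy illustration of one of these ideas, trading a categorical detection head for a continuous one, the sketch below shows a per-anchor regression head trained with a smooth L1 loss instead of cross-entropy; the framework, layer sizes and anchor layout are assumptions for illustration and not the thesis code.

    import torch
    import torch.nn as nn

    class ContinuousScoreHead(nn.Module):
        """Per-anchor regression head: predicts a continuous malignancy score
        (e.g. on the scale of the underlying pathological process) instead of
        discrete class logits, which tolerates noisy categorical labels."""

        def __init__(self, in_channels=256, num_anchors=9):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(in_channels, in_channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(in_channels, num_anchors, 3, padding=1),  # one score per anchor
            )

        def forward(self, feature_map):
            return self.conv(feature_map)  # (B, num_anchors, H, W)

    # Hypothetical training step: targets are continuous scores assigned to anchors.
    head = ContinuousScoreHead()
    features = torch.randn(2, 256, 32, 32)   # dummy backbone/FPN features
    targets = torch.rand(2, 9, 32, 32)        # dummy continuous per-anchor targets
    loss = nn.functional.smooth_l1_loss(head(features), targets)
    loss.backward()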

    Development, Implementation and Pre-clinical Evaluation of Medical Image Computing Tools in Support of Computer-aided Diagnosis: Respiratory, Orthopedic and Cardiac Applications

    Over the last decade, image processing tools have become crucial components of all clinical and research efforts involving medical imaging and associated applications. The imaging data available to radiologists continue to increase their workload, raising the need for efficient identification and visualization of the image data required for clinical assessment. Computer-aided diagnosis (CAD) in medical imaging has evolved in response to the need for techniques that can assist radiologists in increasing throughput while reducing human error and bias, without compromising the outcome of the screening, diagnosis or disease assessment. More intelligent, yet simple, consistent and less time-consuming methods will become more widespread, reducing user variability while also revealing information in a clearer, more visual way. Several routine image processing approaches, including localization, segmentation, registration, and fusion, are critical for enhancing and enabling the development of CAD techniques. However, changes in clinical workflow require significant adjustments and re-training and, despite the efforts of the academic research community to develop state-of-the-art algorithms and high-performance techniques, their footprint often hampers their clinical use. Currently, the main challenge seems to be not the lack of tools and techniques for medical image processing, analysis, and computing, but rather the lack of clinically feasible solutions that leverage the already developed and existing tools and techniques, as well as a demonstration of the potential clinical impact of such tools. Recently, more and more effort has been dedicated to devising new algorithms for localization, segmentation or registration, while their intended clinical use and actual utility are dwarfed by scientific, algorithmic and developmental novelty that results only in incremental improvements over existing algorithms. In this thesis, we propose and demonstrate the implementation and evaluation of several methodological guidelines that ensure the development of image processing tools (localization, segmentation and registration) and illustrate their use across several medical imaging modalities (X-ray, computed tomography, ultrasound and magnetic resonance imaging) and several clinical applications:
    • lung CT image registration in support of the assessment of pulmonary nodule growth rate and disease progression from thoracic CT images;
    • automated reconstruction of standing X-ray panoramas from multi-sector X-ray images for the assessment of long-limb mechanical axis and knee misalignment;
    • left and right ventricle localization, segmentation, reconstruction and ejection fraction measurement from cine cardiac MRI or multi-plane trans-esophageal ultrasound images for cardiac function assessment.
When devising and evaluating the developed tools, we use clinical patient data to illustrate the inherent clinical challenges associated with highly variable imaging data that need to be addressed before potential pre-clinical validation and implementation. In an effort to provide plausible solutions to the selected applications, the proposed methodological guidelines ensure the development of image processing tools that achieve sufficiently reliable solutions which not only have the potential to address the clinical needs, but are also sufficiently streamlined to be translated into eventual clinical tools, provided proper implementation. The four guidelines are the following:
G1: Reduce the number of degrees of freedom (DOF) of the designed tool; a plausible example is avoiding the use of inefficient non-rigid image registration methods. This guideline addresses the risk of artificial deformation during registration and clearly aims at reducing complexity and the number of degrees of freedom.
G2: Use shape-based features to represent the image content most efficiently, for instance by using edges instead of, or in addition to, intensities and motion, where useful. Edges capture the most useful information in the image and can be used to identify the most important image features. As a result, this guideline ensures more robust performance when key image information is missing.
G3: Implement the method efficiently. This guideline focuses on efficiency in terms of the minimum number of steps required and on avoiding the recalculation of terms that only need to be calculated once in an iterative process. An efficient implementation leads to reduced computational effort and improved performance.
G4: Commence the workflow by establishing an optimized initialization and gradually converge toward the final acceptable result. This guideline aims to ensure reasonable outcomes in consistent ways; it avoids convergence to local minima while gradually ensuring convergence to the global minimum solution.
These guidelines lead to the development of interactive, semi-automated or fully automated approaches that still enable clinicians to perform final refinements, while reducing the overall inter- and intra-observer variability, reducing ambiguity, increasing accuracy and precision, and having the potential to yield mechanisms that will aid in providing a more consistent diagnosis in a timely fashion.
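A minimal sketch of how guidelines G1, G2 and G4 might combine in practice is given below: a translation-only (low-DOF) 2D registration driven by edge magnitude, initialized at coarse resolution and refined at full resolution; all parameter values and function names are illustrative assumptions rather than the tools developed in the thesis.

    import numpy as np
    from scipy import ndimage

    def edge_magnitude(img):
        """G2: represent the image by its edge content rather than raw intensities."""
        gx, gy = ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1)
        return np.hypot(gx, gy)

    def register_translation(fixed, moving, search=10):
        """G1: exhaustive search over a translation only (2 degrees of freedom)."""
        f, m = edge_magnitude(fixed), edge_magnitude(moving)
        best, best_shift = -np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                shifted = np.roll(np.roll(m, dy, axis=0), dx, axis=1)
                score = (f * shifted).sum()          # simple edge-correlation score
                if score > best:
                    best, best_shift = score, (dy, dx)
        return best_shift

    def coarse_to_fine(fixed, moving):
        """G4: optimized initialization at coarse scale, then refinement at full scale."""
        coarse = register_translation(ndimage.zoom(fixed, 0.25), ndimage.zoom(moving, 0.25), search=10)
        init = (coarse[0] * 4, coarse[1] * 4)
        moving_init = np.roll(np.roll(moving, init[0], axis=0), init[1], axis=1)
        fine = register_translation(fixed, moving_init, search=4)
        return (init[0] + fine[0], init[1] + fine[1])

Restricting the search to a translation and pre-computing the edge maps once per level also reflects G3, since no term is recalculated inside the inner loop beyond the shifted correlation itself.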

    Developing and Applying CAD-generated Image Markers to Assist Disease Diagnosis and Prognosis Prediction

    Developing computer-aided detection and/or diagnosis (CAD) schemes has been an active research topic in medical imaging informatics (MII) over the last two decades, with promising results in assisting clinicians in making better diagnostic and/or clinical decisions. To build robust CAD schemes, we need to develop state-of-the-art image processing and machine learning (ML) algorithms to optimize each step of the CAD pipeline, including detection and segmentation of the region of interest, optimal feature generation, and integration into ML classifiers. In my dissertation, I conducted multiple studies investigating the feasibility of developing several novel CAD schemes for different medical purposes. The first study aims to investigate how to optimally develop a CAD scheme for contrast-enhanced digital mammography (CEDM) images to classify breast masses. CEDM includes both low-energy (LE) and dual-energy subtracted (DES) images. A CAD scheme was applied to segment mass regions depicted on LE and DES images separately. Optimal segmentation results generated from DES images were also mapped to LE images, and vice versa. After computing image features, multilayer perceptron-based ML classifiers integrated with a correlation-based feature subset evaluator and a leave-one-case-out cross-validation method were built to classify mass regions. The study demonstrated that DES images eliminate the overlapping effect of dense breast tissue, which helps improve mass segmentation accuracy. By mapping mass regions segmented from DES images to LE images, CAD yields significantly improved performance. The second study aims to develop a new quantitative image marker computed from pre-intervention computed tomography perfusion (CTP) images and to evaluate its feasibility for predicting clinical outcome among acute ischemic stroke (AIS) patients undergoing endovascular mechanical thrombectomy after diagnosis of large vessel occlusion. A CAD scheme is first developed to pre-process the CTP images of the different scanning series of each study case, perform image segmentation, quantify contrast-enhanced blood volumes in the bilateral cerebral hemispheres, and compute image features related to asymmetric cerebral blood flow patterns based on the cumulative cerebral blood flow curves of the two hemispheres. Next, image markers based on a single optimal feature and ML models fusing multiple features are developed and tested to classify AIS cases into two classes of good and poor prognosis based on the Modified Rankin Scale. The study results show that the ML model trained using multiple features yields significantly higher classification performance than the image marker using the best single feature (p<0.01). This study demonstrates the feasibility of developing a new CAD scheme to predict the prognosis of AIS patients in the hyperacute stage, which has the potential to assist clinicians in optimally treating and managing AIS patients. The third study aims to develop and test a new CAD scheme to predict prognosis in aneurysmal subarachnoid hemorrhage (aSAH) patients using brain CT images. Each patient had two sets of CT images, acquired at admission and prior to discharge. The CAD scheme was applied to segment intracranial brain regions into four subregions, namely, cerebrospinal fluid (CSF), white matter (WM), gray matter (GM), and extraparenchymal blood (EPB).
The CAD scheme then computed nine image features: the five volumes of the segmented sulci, EPB, CSF, WM and GM, and the four ratios of the EPB, CSF, WM and GM volumes to the sulci volume. Subsequently, 16 ML models were built using multiple features computed either from CT images acquired at admission or from CT images acquired prior to discharge to predict eight prognosis-related parameters. The results show that ML models trained using CT images acquired at admission yielded higher accuracy in predicting short-term clinical outcomes, while ML models trained using CT images acquired prior to discharge had higher accuracy in predicting long-term clinical outcomes. Thus, this study demonstrated the feasibility of predicting the prognosis of aSAH patients using new ML model-generated quantitative image markers. The fourth study aims to develop and test a new interactive computer-aided detection (ICAD) tool to quantitatively assess hemorrhage volumes. After loading each case, the ICAD tool first segments the intracranial brain volume and labels each voxel of the CT scan. Next, contour-guided image-thresholding techniques based on CT Hounsfield units are used to estimate and segment hemorrhage-associated voxels (ICH). Then, two experienced neurology residents examine and correct the markings of ICH, categorized into either intraparenchymal hemorrhage (IPH) or intraventricular hemorrhage (IVH), to obtain the true markings. Additionally, the volume and maximum two-dimensional diameter of each sub-type of hemorrhage are computed for understanding ICH prognosis. The hemorrhage segmentation performance of the semi-automated ICAD tool is evaluated against the neurology residents' verified true markings using the Dice similarity coefficient (DSC). The data analysis results of the study demonstrate that the new ICAD tool can segment and quantify ICH and other hemorrhage volumes with a high DSC. Finally, the fifth study aims to bridge the gap between traditional radiomics and deep learning systems by comparing and assessing these two technologies in classifying breast lesions. First, one CAD scheme is applied to segment lesions and compute radiomics features, while another scheme applies a pre-trained residual network architecture (ResNet50) as a transfer learning model to extract automated features. Next, principal component analysis processes both the initially computed radiomics features and the automated features to create optimal feature vectors. Then, several support vector machine (SVM) classifiers are built using the optimized radiomics or automated features. This study indicates that (1) a CAD scheme built using only deep transfer learning yields higher classification performance than the traditional radiomics-based model, (2) an SVM trained using the fused radiomics and automated features does not yield significantly higher AUC, and (3) radiomics and automated features contain highly correlated information for lesion classification. In summary, across these studies I developed and investigated several key components of the CAD pipeline, including (i) pre-processing algorithms, (ii) automatic detection and segmentation schemes, (iii) feature extraction and optimization methods, and (iv) ML and data analysis models. All developed CAD models are embedded in interactive, visually aided graphical user interfaces (GUIs) to provide user functionality. These techniques present innovative approaches for building quantitative image markers and optimal ML models.
The study results indicate the potential of the underlying CAD schemes to assist radiologists with their assessments in clinical settings, helping them diagnose disease and improve their overall performance.
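To make the thresholding and evaluation steps concrete, the sketch below windows a CT volume at an illustrative Hounsfield-unit range for acute blood, restricts the result to an intracranial mask, and scores it against a reference marking with the Dice similarity coefficient; the HU window and the morphological clean-up are assumptions, not the ICAD tool's actual parameters.

    import numpy as np
    from scipy import ndimage

    def segment_hemorrhage(ct_hu, brain_mask, hu_low=50, hu_high=100):
        """Threshold-based hemorrhage candidate segmentation inside the brain mask.
        Acute blood on CT is often reported at roughly 50-100 HU; the exact window
        used here is an illustrative assumption."""
        candidate = (ct_hu >= hu_low) & (ct_hu <= hu_high) & brain_mask
        # Small-object removal as a simple stand-in for the contour-guided cleanup.
        candidate = ndimage.binary_opening(candidate, iterations=1)
        return candidate

    def dice_coefficient(pred, truth):
        """Dice similarity coefficient between two binary masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        denom = pred.sum() + truth.sum()
        return 2.0 * intersection / denom if denom else 1.0

    # Hypothetical usage with a CT volume in Hounsfield units and a reference marking:
    # ich_mask = segment_hemorrhage(ct_volume, brain_mask)
    # print("DSC vs. resident marking:", dice_coefficient(ich_mask, true_marking))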

    Computer-aided Detection of Breast Cancer in Digital Tomosynthesis Imaging Using Deep and Multiple Instance Learning

    Breast cancer is the most common cancer among women in the world. Nevertheless, early detection of breast cancer improves the chance of successful treatment. Digital breast tomosynthesis (DBT) is a new tomographic technique developed to minimize the limitations of conventional digital mammography screening. A DBT is a quasi-three-dimensional image reconstructed from a small number of two-dimensional (2D) low-dose X-ray images, which are acquired over a limited angular range around the breast. Our research aims to introduce computer-aided detection (CAD) frameworks to detect early signs of breast cancer in DBTs. In this thesis, we propose three CAD frameworks for the detection of breast cancer in DBTs. The first CAD framework is based on hand-crafted feature extraction; targeting the early signs of breast cancer (masses, micro-calcifications, and bilateral asymmetry between the left and right breast), it includes three separate channels, one to detect each sign. The next two CAD frameworks automatically learn complex patterns of 2D slices using a deep convolutional neural network and deep cardinality-restricted Boltzmann machines. Finally, the CAD frameworks employ a multiple-instance learning approach with a randomized trees algorithm to classify DBT images based on the information extracted from the 2D slices. The frameworks operate on 2D slices generated from DBT volumes, and are developed and evaluated using 5,040 2D image slices obtained from 87 DBT volumes. We demonstrate the validity and usefulness of the proposed CAD frameworks within empirical experiments for detecting breast cancer in DBTs.
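The multiple-instance idea, in which a DBT volume is a bag of 2D slice instances and the volume-level label is driven by its most suspicious slices, can be sketched as follows; the small CNN, the max-pooling aggregation and the tensor shapes are assumptions for illustration, whereas the thesis frameworks use a randomized trees algorithm for the bag-level classifier.

    import torch
    import torch.nn as nn

    class SliceEncoder(nn.Module):
        """Small CNN that scores a single 2D slice for suspiciousness."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.score = nn.Linear(32, 1)

        def forward(self, x):                    # x: (num_slices, 1, H, W)
            h = self.features(x).flatten(1)      # (num_slices, 32)
            return self.score(h).squeeze(1)      # (num_slices,) per-slice scores

    def classify_volume(encoder, slices):
        """Bag-level decision: a DBT volume is called positive if its highest-scoring
        slice is positive (max-pooling over instance scores)."""
        slice_scores = encoder(slices)
        return torch.sigmoid(slice_scores.max())

    encoder = SliceEncoder()
    dbt_slices = torch.randn(40, 1, 128, 128)    # dummy volume of 40 slices
    print(float(classify_volume(encoder, dbt_slices)))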

    Machine Learning towards General Medical Image Segmentation

    The quality of patient care associated with diagnostic radiology is proportionate to a physician's workload. Segmentation is a fundamental limiting precursor to diagnostic and therapeutic procedures. Advances in machine learning aim to increase diagnostic efficiency and to replace single-application solutions with generalized algorithms. We approached segmentation as a multitask shape regression problem, simultaneously predicting coordinates on an object's contour while jointly capturing global shape information. Shape regression models the inherent point correlations to recover ambiguous boundaries not supported by clear edges and region homogeneity. Its capabilities were investigated using multi-output support vector regression (MSVR) on head and neck (HaN) CT images. Subsequently, we incorporated multiplane and multimodality spinal images and presented the first deep learning multiapplication framework for shape regression, the holistic multitask regression network (HMR-Net). The performance of MSVR and HMR-Net was comparable or superior to state-of-the-art algorithms. Multiapplication frameworks bridge technical knowledge gaps and increase workflow efficiency.
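A minimal sketch of the multi-output shape-regression setup is given below, using scikit-learn's generic multi-output wrapper around SVR; note that this wrapper fits one regressor per coordinate and therefore only approximates the joint MSVR formulation, and the synthetic features, number of contour points and data sizes are assumptions for illustration.

    import numpy as np
    from sklearn.multioutput import MultiOutputRegressor
    from sklearn.svm import SVR

    # Synthetic stand-in data: each sample is an image feature vector, and the target
    # is a contour of 20 (x, y) landmark points flattened into a 40-dimensional vector,
    # so all coordinates are predicted jointly by the same regression model.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 64))                        # 100 cases, 64 image features each
    true_W = rng.normal(size=(64, 40))
    Y = X @ true_W + 0.1 * rng.normal(size=(100, 40))     # flattened contour coordinates

    model = MultiOutputRegressor(SVR(kernel='rbf', C=1.0))
    model.fit(X[:80], Y[:80])
    pred_contours = model.predict(X[80:]).reshape(-1, 20, 2)   # back to (cases, points, xy)
    print(pred_contours.shape)                            # (20, 20, 2)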