125 research outputs found

    Automating the multimodal analysis of musculoskeletal imaging in the presence of hip implants

    In patients treated with hip arthroplasty, muscular condition and the presence of inflammatory reactions are assessed using magnetic resonance imaging (MRI). As MRI lacks contrast for bony structures, computed tomography (CT) is preferred for the clinical evaluation of bone tissue and for orthopaedic surgical planning. Combining the complementary information of MRI and CT could improve current clinical practice for diagnosis, monitoring and treatment planning. In particular, the different contrasts of these modalities could help better quantify fatty infiltration to characterise muscular condition after hip replacement. In this thesis, I developed automated processing tools for the joint analysis of CT and MR images of patients with hip implants. To combine the multimodal information, a novel nonlinear registration algorithm was introduced, which imposes rigidity constraints on bony structures to ensure realistic deformation. I implemented and thoroughly validated a fully automated framework for the multimodal segmentation of healthy and pathological musculoskeletal structures, as well as implants. This framework combines the proposed registration algorithm with tailored image quality enhancement techniques and a multi-atlas-based segmentation approach, providing robustness against the large anatomical variability across the population and the presence of noise and artefacts in the images. The automation of muscle segmentation enabled the derivation of a measure of fatty infiltration, the Intramuscular Fat Fraction, useful for characterising muscle atrophy. The proposed imaging biomarker was shown to correlate strongly with the atrophy radiological score currently used in clinical practice. Finally, preliminary work on multimodal metal artefact reduction, using an unsupervised deep learning strategy, showed promise for improving the postprocessing of CT and MR images heavily corrupted by metal artefacts.
This work represents a step forward towards the automation of image analysis in hip arthroplasty, supporting and quantitatively informing decision-making about patient management.
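The Intramuscular Fat Fraction biomarker described above can be illustrated with a minimal sketch. This is an illustration under stated assumptions, not the thesis's actual implementation: it assumes a binary muscle segmentation and a per-voxel fat-fraction map are already available, and takes the biomarker to be the mean fat content inside the segmented muscle.

```python
import numpy as np

def intramuscular_fat_fraction(muscle_mask: np.ndarray,
                               fat_fraction_map: np.ndarray) -> float:
    """Mean fat fraction over the voxels of a segmented muscle.

    muscle_mask      -- boolean array, True inside the muscle segmentation
    fat_fraction_map -- per-voxel fat fraction in [0, 1] (hypothetical input,
                        e.g. derived from multimodal MR/CT intensities)
    """
    voxels = fat_fraction_map[muscle_mask]
    if voxels.size == 0:
        raise ValueError("empty muscle mask")
    return float(voxels.mean())

# Toy example: a three-voxel muscle with 25%, 50% and 75% fat
mask = np.array([True, True, True, False])
fat = np.array([0.25, 0.50, 0.75, 0.90])
print(intramuscular_fat_fraction(mask, fat))  # 0.5
```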

    An efficient beam-hardening correction algorithm for metal artifact reduction in CT

    Doctoral dissertation (Ph.D.) -- Seoul National University, College of Engineering, Department of Electrical and Computer Engineering, February 2021. Advisor: Yeong-Gil Shin. Beam hardening in X-ray computed tomography (CT) is an inevitable problem, arising because CT systems use polychromatic X-rays and the attenuation coefficients of materials are energy dependent. It causes artifacts in CT images as a result of underestimation of the projection data, especially in metal regions. Metal artifact reduction is the process of reducing these artifacts and restoring the actual information they hide. To obtain accurate CT images for diagnosis and radiotherapy treatment planning in clinical practice, it is essential to reduce metal artifacts. State-of-the-art approaches that effectively reduce metal artifacts using numerical methods based on iterative reconstruction have been presented, but their heavy computational burden makes them difficult to apply in the clinic.
In this dissertation, we propose an efficient beam-hardening estimation model and a metal artifact reduction method built on it to address this computational issue. The proposed model reflects the geometric information of metal objects and the physical characteristics of beam hardening during the transmission of polychromatic X-rays through a material. Most of the associated parameters are obtained numerically from the initial uncorrected CT image and the CT system, without additional optimization. Only the single unknown parameter related to the beam-hardening artifact is fine-tuned by linear optimization, performed entirely in the reconstructed image domain. Two additional refinement methods are presented to reduce residual artifacts in images corrected by the proposed method. The effectiveness of the proposed method was systematically assessed through qualitative and quantitative comparisons on numerical simulations and real data. The proposed algorithm showed strong accuracy and robustness and, compared to existing methods, delivered improved image quality as well as execution times fast enough for clinical use.
This work may have significant implications for improving the accuracy of diagnosis and radiotherapy treatment planning through CT imaging. Table of contents: Chapter 1, Introduction (background and motivation; scope and aim; main contribution; contents organization). Chapter 2, Related Works (CT physics: fundamentals of X-rays, CT reconstruction algorithms; CT artifacts: physics-based and patient-based; metal artifact reduction: sinogram-completion based, sinogram-correction based and deep-learning based MAR). Chapter 3, Constrained Beam-hardening Estimator for Polychromatic X-ray (characteristics of polychromatic X-rays; the constrained beam-hardening estimator, CBHE). Chapter 4, Metal Artifact Reduction with Constrained Beam-hardening Estimator (metal segmentation; X-ray transmission length; artifact estimation for single and multiple metal types; refinement methods: collaboration with ADN, application of CBHE to bone). Chapter 5, Experimental Results (data preparation and quantitative measures; verification of the estimator's accuracy and robustness; evaluations with simulated and hardware phantoms and of the refinement methods). Chapter 6, Conclusion.
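The single-parameter, image-domain linear optimisation described in the abstract can be sketched roughly as follows. This is a hypothetical illustration, not the dissertation's CBHE: it assumes an uncorrected image and a precomputed artifact template (in the thesis, such information is derived from metal geometry and X-ray transmission lengths), and the one linear unknown is the closed-form least-squares scale of that template.

```python
import numpy as np

def fit_artifact_scale(uncorrected: np.ndarray,
                       artifact_template: np.ndarray) -> float:
    """Closed-form least-squares scale c minimising
    ||uncorrected - c * artifact_template|| over the image domain."""
    t = artifact_template.ravel().astype(np.float64)
    y = uncorrected.ravel().astype(np.float64)
    return float(t @ y / (t @ t))

# Toy check: an image that is exactly 2.5 times the template
template = np.array([0.0, 1.0, 2.0, 1.0])
image = 2.5 * template
print(fit_artifact_scale(image, template))  # 2.5
```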

    Machine learning-based automated segmentation with a feedback loop for 3D synchrotron micro-CT

    The development of third-generation synchrotron light sources laid the foundation for investigating the 3D structure of opaque samples at micrometre resolution and beyond. This led to the development of X-ray synchrotron micro-computed tomography, which fostered the creation of imaging facilities for studying samples of many kinds, e.g. model organisms, to better understand the physiology of complex living systems. Modern control systems and robotics enabled the full automation of X-ray imaging experiments and the calibration of the experimental setup's parameters during operation. Advances in digital detector systems brought improvements in resolution, dynamic range, sensitivity and other essential properties. These improvements considerably increased the throughput of the imaging process, but on the other hand the experiments began to generate substantially larger volumes of data, up to tens of terabytes, which were subsequently processed manually. These technical advances thus paved the way for more efficient high-throughput experiments that study large numbers of samples and produce datasets of better quality. The scientific community therefore has a strong need for an efficient, automated workflow for X-ray data analysis that can handle such a data load and deliver valuable insights to domain experts. Existing solutions for such a workflow are not directly applicable to high-throughput experiments, as they were developed for ad hoc scenarios in medical imaging; they are therefore neither optimised for high-throughput data streams nor able to exploit the hierarchical nature of samples.
The main contribution of this thesis is a new automated analysis workflow suited to the efficient processing of heterogeneous X-ray datasets of a hierarchical nature. The workflow is based on improved methods for data preprocessing, registration, localisation and segmentation. Every workflow stage that involves a training phase can be automatically fine-tuned to find the best hyperparameters for the specific dataset. For the analysis of fibre structures in samples, a new, highly parallelisable 3D orientation analysis method was developed, based on a novel concept of emitting rays, enabling more precise morphological analysis. All developed methods were thoroughly validated on synthetic datasets to quantitatively assess their applicability under different imaging conditions. The workflow was shown to be capable of processing a series of datasets of a similar kind. Furthermore, efficient CPU/GPU implementations of the workflow and methods are presented and made available to the community as Python modules. The automated analysis workflow was successfully applied to micro-CT datasets acquired in high-throughput X-ray experiments in developmental biology and materials science. In particular, it was applied to the analysis of medaka fish datasets, enabling automated segmentation and subsequent morphological analysis of the brain, liver, head kidneys and heart. In addition, the developed 3D orientation analysis method was used in the morphological analysis of polymer scaffold datasets to steer a fabrication process towards desirable properties.

    Generative Adversarial Network (GAN) for Medical Image Synthesis and Augmentation

    Medical image processing aided by artificial intelligence (AI) and machine learning (ML) significantly improves medical diagnosis and decision making. However, the difficulty of accessing well-annotated medical images has become one of the main constraints on further improving this technology. A generative adversarial network (GAN) is a deep neural network framework for data synthesis, which provides a practical solution for medical image augmentation and translation. In this study, we first perform a quantitative survey of the studies on GANs for medical image processing published since 2017. Then a novel adaptive cycle-consistent adversarial network (Ad CycleGAN) is proposed. We use a malaria blood cell dataset (19,578 images) and a COVID-19 chest X-ray dataset (2,347 images) to test the new Ad CycleGAN. The quantitative metrics include mean squared error (MSE), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), universal image quality index (UIQI), spatial correlation coefficient (SCC), spectral angle mapper (SAM), visual information fidelity (VIF), Fréchet inception distance (FID), and the classification accuracy of the synthetic images. A CycleGAN and a variational autoencoder (VAE) are also implemented and evaluated for comparison. The experimental results on malaria blood cell images indicate that Ad CycleGAN generates more valid images than CycleGAN or the VAE, and that the synthetic images produced by Ad CycleGAN or CycleGAN are of better quality than those produced by the VAE. The synthetic images by Ad CycleGAN achieve the highest classification accuracy, 99.61%. In the experiment on COVID-19 chest X-rays, the synthetic images by Ad CycleGAN or CycleGAN again have higher quality than those generated by the VAE; however, the synthetic images generated through the homogeneous image augmentation process have better quality than those synthesized through the image translation process.
The synthetic images by Ad CycleGAN achieve a higher accuracy (95.31%) than those by CycleGAN (93.75%). In conclusion, the proposed Ad CycleGAN provides a new path for synthesizing medical images with desired diagnostic or pathological patterns. It can be regarded as a conditional GAN with effective control over the synthetic image domain. The findings offer a new path to improving deep neural network performance in medical image processing.
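Two of the quality metrics listed above, MSE and PSNR, can be computed directly. The sketch below is a generic implementation, not the study's evaluation code, and assumes 8-bit images (peak value 255); RMSE is simply the square root of MSE.

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer images."""
    err = mse(a, b)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / err)

# Hypothetical real/synthetic image pair
real = np.array([[100, 120], [140, 160]], dtype=np.uint8)
fake = np.array([[102, 118], [141, 158]], dtype=np.uint8)
print(mse(real, fake))              # 3.25
print(round(psnr(real, fake), 2))   # 43.01
```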

    Quantitative imaging in radiation oncology

    Artificially intelligent eyes, built on machine and deep learning technologies, can empower our capability to analyse patients' images. By revealing information invisible to our eyes, we can build decision aids that help clinicians provide more effective treatment while reducing side effects. The power of these decision aids rests on biologically unique properties of each patient's tumour, referred to as biomarkers. To fully translate this technology into the clinic, we need to overcome barriers related to the reliability of image-derived biomarkers, trust in AI algorithms, and privacy-related issues that hamper the validation of the biomarkers. This thesis developed methodologies to solve these issues, defining a road map for the responsible use of quantitative imaging in the clinic as a decision support system for better patient care.

    Data-driven quantitative photoacoustic tomography

    Spatial information about the 3D distribution of blood oxygen saturation (sO2) in vivo is of clinical interest, as it encodes important physiological information about tissue health and pathology. Photoacoustic tomography (PAT) is a biomedical imaging modality that, in principle, can be used to acquire this information. Images are formed by illuminating the sample with a laser pulse; after multiple scattering events, the optical energy is absorbed, and the subsequent rise in temperature induces an increase in pressure (the photoacoustic initial pressure p0) that propagates to the sample surface as an acoustic wave. These acoustic waves are detected as pressure time series by sensor arrays and used to reconstruct images of the sample's p0 distribution, which encodes information about the sample's absorption distribution and can be used to estimate sO2. However, an ill-posed nonlinear inverse problem stands in the way of acquiring such estimates in vivo. Current approaches to solving this problem fall short of being widely and successfully applied to in vivo tissues because they rely on simplifying assumptions about the tissue, prior knowledge of its optical properties, or the formulation of a forward model that accurately describes image acquisition with a specific imaging system. Here, we investigate the use of data-driven approaches (deep convolutional networks) to solve this problem. Networks require only a dataset of examples to learn a mapping from PAT data to images of the sO2 distribution. We show the results of training a 3D convolutional network to estimate the 3D sO2 distribution within model tissues from 3D multiwavelength simulated images. However, acquiring a realistic training set to enable successful in vivo application is non-trivial, given the challenges associated with estimating ground truth sO2 distributions and the current limitations of simulating training data.
We suggest and test several methods to (1) acquire more realistic training data or (2) improve network performance in the absence of adequate quantities of realistic training data. For (1), we describe how training data may be acquired from an organ perfusion system and outline a possible design; separately, we describe how training data may be generated synthetically using a variant of generative adversarial networks called ambientGANs. For (2), we show how the accuracy of networks trained with limited training data can be improved with self-training, and demonstrate how the domain gap between training and test sets can be minimised with unsupervised domain adaptation to improve quantification accuracy. Overall, this thesis clarifies the advantages of data-driven approaches and suggests concrete steps towards overcoming the challenges of in vivo application.
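As context for the simplifying assumptions the abstract says current approaches rely on, a common baseline is linear spectral unmixing: if the per-voxel absorption coefficient at each wavelength were known exactly (i.e. ignoring the unknown light fluence, which is precisely what makes the real problem ill posed), sO2 would follow from a per-voxel least-squares fit of haemoglobin concentrations. The sketch below is illustrative only, and the extinction coefficients are made-up numbers, not reference spectra.

```python
import numpy as np

def so2_linear_unmixing(mu_a, eps_hb, eps_hbo2):
    """Estimate sO2 per voxel from absorption coefficients mu_a
    (shape: n_wavelengths x n_voxels) by solving
    mu_a = eps_hbo2 * [HbO2] + eps_hb * [Hb] in the least-squares sense."""
    E = np.stack([eps_hbo2, eps_hb], axis=1)          # (n_wavelengths, 2)
    conc, *_ = np.linalg.lstsq(E, mu_a, rcond=None)   # (2, n_voxels)
    hbo2, hb = conc
    return hbo2 / (hbo2 + hb)

# Illustrative (non-physical) extinction coefficients at two wavelengths
eps_hb = np.array([3.0, 1.0])
eps_hbo2 = np.array([1.0, 2.0])
true_so2 = 0.8
c_true = np.array([true_so2, 1.0 - true_so2])              # [HbO2], [Hb]
mu_a = np.stack([eps_hbo2, eps_hb], axis=1) @ c_true[:, None]
print(so2_linear_unmixing(mu_a, eps_hb, eps_hbo2))         # ≈ [0.8]
```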