348 research outputs found

    Latent Disentanglement for the Analysis and Generation of Digital Human Shapes

    Analysing and generating digital human shapes is crucial for a wide variety of applications ranging from movie production to healthcare. The most common approaches for the analysis and generation of digital human shapes involve the creation of statistical shape models. At the heart of these techniques is the definition of a mapping between shapes and a low-dimensional representation. However, making these representations interpretable is still an open challenge. This thesis explores latent disentanglement as a powerful technique to make the latent space of geometric deep learning based statistical shape models more structured and interpretable. In particular, it introduces two novel techniques to disentangle the latent representation of variational autoencoders and generative adversarial networks with respect to the local shape attributes characterising the identity of the generated body and head meshes. This work was inspired by a shape completion framework that was proposed as a viable alternative to intraoperative registration in minimally invasive surgery of the liver. In addition, one of these methods for latent disentanglement was also applied to plastic surgery, where it was shown to improve the diagnosis of craniofacial syndromes and aid surgical planning.

    Semiautomated 3D liver segmentation using computed tomography and magnetic resonance imaging

    The liver is a vital abdominal organ known for its remarkable regenerative capacity and fundamental role in organism viability. Assessment of liver volume is an important tool which physicians use as a biomarker of disease severity. 
Liver volumetry is clinically indicated prior to major hepatectomy, portal vein embolization and transplantation. The most popular method to determine liver volume from computed tomography (CT) and magnetic resonance imaging (MRI) examinations involves contouring the liver on consecutive imaging slices, a process called “segmentation”. Segmentation can be performed either manually or in an automated fashion. We present the design concept and validation strategy for an innovative semiautomated liver segmentation method developed at our institution. Our method represents a model-based approach using variational shape interpolation and Laplacian mesh optimization techniques. It is independent of training data, requires limited user interactions and is robust to a variety of pathological cases. Further, it was designed for compatibility with both CT and MRI examinations. We evaluated the repeatability, agreement and efficiency of our semiautomated method in two retrospective cross-sectional studies. The results of our validation studies suggest that semiautomated liver segmentation can provide strong agreement and repeatability when compared to manual segmentation. Further, segmentation automation significantly shortens interaction time, thus making it suitable for daily clinical practice. Future studies may incorporate liver volumetry to determine volume-averaged biomarkers of liver disease, such as fat, iron or fibrosis measurements per unit volume. Segmental volumetry could also be assessed based on subsegmentation of vascular anatomy.
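The volumetric endpoint described above reduces to counting segmented voxels and scaling by the physical voxel size. A minimal sketch (function and variable names are illustrative, not taken from the thesis):

```python
import numpy as np

def liver_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask, given voxel spacing (dz, dy, dx) in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Hypothetical example: a 100x100x100 volume at 1 mm isotropic spacing, half segmented
mask = np.zeros((100, 100, 100), dtype=np.uint8)
mask[:50] = 1
print(liver_volume_ml(mask, (1.0, 1.0, 1.0)))  # 500.0
```

In practice the spacing would come from the CT or MRI header (e.g., DICOM PixelSpacing and slice thickness) rather than being hard-coded.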

    Interactive Segmentation of 3D Medical Images with Implicit Surfaces

    To cope with a variety of clinical applications, research in medical image processing has led to a large spectrum of segmentation techniques that extract anatomical structures from volumetric data acquired with 3D imaging modalities. Despite continuing advances in mathematical models for automatic segmentation, many medical practitioners still rely on 2D manual delineation, due to the lack of intuitive semi-automatic tools in 3D. In this thesis, we propose a methodology and associated numerical schemes enabling the development of 3D image segmentation tools that are reliable, fast and interactive. These properties are key factors for clinical acceptance. Our approach derives from the framework of variational methods: segmentation is obtained by solving an optimization problem that translates the expected properties of target objects in mathematical terms. Such variational methods involve three essential components that constitute our main research axes: an objective criterion, a shape representation and an optional set of constraints. As objective criterion, we propose a unified formulation that extends existing homogeneity measures in order to model the spatial variations of statistical properties that are frequently encountered in medical images, without compromising efficiency. Within this formulation, we explore several shape representations based on implicit surfaces with the objective to cover a broad range of typical anatomical structures. Firstly, to model tubular shapes in vascular imaging, we introduce convolution surfaces in the variational context of image segmentation. Secondly, compact shapes such as lesions are described with a new representation that generalizes Radial Basis Functions with non-Euclidean distances, which enables the design of basis functions that naturally align with salient image features. 
Finally, we estimate geometric non-rigid deformations of prior templates to recover structures that have a predictable shape, such as whole organs. Interactivity is ensured by restricting admissible solutions with additional constraints. Translating user input into constraints on the sign of the implicit representation at prescribed points in the image leads us to consider inequality-constrained optimization.
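The implicit-surface machinery described above can be illustrated with the classic biharmonic RBF interpolant over point constraints. Note the thesis generalizes Radial Basis Functions with non-Euclidean distances; the plain Euclidean version below, with made-up constraint points, is only a baseline sketch:

```python
import numpy as np

def fit_rbf_implicit(points, values):
    """Weights w such that f(x) = sum_j w_j * ||x - x_j|| interpolates the constraints.
    The distance matrix of distinct points is nonsingular for the biharmonic kernel."""
    diff = points[:, None, :] - points[None, :, :]
    A = np.linalg.norm(diff, axis=-1)  # kernel phi(r) = r
    return np.linalg.solve(A, values)

def eval_rbf_implicit(x, points, weights):
    """Evaluate the implicit function; its zero level set is the reconstructed surface."""
    return float(np.sum(weights * np.linalg.norm(points - x, axis=-1)))

# Hypothetical constraints: six on-surface points of the unit sphere (value 0)
# and one interior point at the origin (value -1), mimicking a user's sign constraints.
pts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                [0, -1, 0], [0, 0, 1], [0, 0, -1], [0, 0, 0]], dtype=float)
vals = np.array([0, 0, 0, 0, 0, 0, -1.0])
w = fit_rbf_implicit(pts, vals)
```

Interpolation is exact at the constraint points, so user-prescribed signs are honored there by construction; the thesis's inequality-constrained formulation relaxes this to sign constraints only.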

    Variational methods for shape and image registrations.

    Estimating and analysing deformation, either rigid or non-rigid, is an active area of research in various medical imaging and computer vision applications. Its importance stems from the inherent inter- and intra-subject variability in biological and biomedical object shapes and from the dynamic nature of the scenes usually dealt with in computer vision research. For instance, quantifying the growth of a tumor, recognizing a person's face, tracking a facial expression, or retrieving an object inside a database requires the estimation of some sort of motion or deformation undergone by the object of interest. To solve these and similar problems, registration comes into play. This is the process of bringing two or more data sets into correspondence. Depending on the application at hand, these data sets can be, for instance, gray scale/color images or objects' outlines. In the latter case, one talks about shape registration; in the former, image/volume registration. In some situations, different types of data can be combined to establish point correspondences. One of the most important image analysis tools that greatly benefits from registration, and which is addressed in this dissertation, is image segmentation. This process consists of localizing objects in images. Several challenges are encountered in image segmentation, including noise, gray scale inhomogeneities, and occlusions. To cope with such issues, shape information is often incorporated as a statistical model into the segmentation process. Building such statistical models requires a good and accurate shape alignment approach. In addition, segmenting anatomical structures can be accurately solved through the registration of the input data set with a predefined anatomical atlas. Variational approaches for shape/image registration and segmentation have received huge interest in the past few years. 
Unlike traditional discrete approaches, variational methods are based on continuous modelling of the input data through the use of Partial Differential Equations (PDEs). This brings into play the extensive literature on theory and numerical methods proposed to solve PDEs. This dissertation addresses the registration problem from a variational point of view, with more focus on shape registration. First, a novel variational framework for global-to-local shape registration is proposed. The input shapes are implicitly represented through their signed distance maps. A new Sum-of-Squared-Differences (SSD) criterion, which measures the disparity between the implicit representations of the input shapes, is introduced to recover the global alignment parameters. This new criterion has an advantage over some existing ones in accurately handling scale variations, and the proposed alignment model is less expensive computationally. Complementary to the global registration field, the local deformation field is explicitly established between the two globally aligned shapes by minimizing a new energy functional. This functional incrementally and simultaneously updates the displacement field while keeping the corresponding implicit representation of the globally warped source shape as close to a signed distance function as possible, under regularization constraints that enforce the smoothness of the recovered deformations. The overall process leads to a coupled set of equations that are simultaneously solved through a gradient descent scheme. Several applications, where the developed tools play a major role, are addressed throughout this dissertation. For instance, some insight is given as to how one can solve the challenging problem of three-dimensional face recognition in the presence of facial expressions. Statistical modelling of shapes is presented as a way of benefiting from the proposed shape registration framework. 
Second, this dissertation will visit th
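The building blocks of the global-alignment criterion above — signed distance maps and an SSD disparity between them — can be sketched in a few lines. This is a generic illustration (brute-force distances on a toy image), not the thesis's scale-handling variant:

```python
import numpy as np

def signed_distance_map(mask):
    """Signed Euclidean distance to the shape: negative inside, positive outside.
    Brute-force version for small images; scipy.ndimage.distance_transform_edt scales better."""
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    sdf = np.zeros(mask.shape)
    # outside pixels: distance to nearest foreground; inside: -distance to nearest background
    d_out = np.sqrt(((bg[:, None, :] - fg[None, :, :]) ** 2).sum(-1)).min(1)
    d_in = np.sqrt(((fg[:, None, :] - bg[None, :, :]) ** 2).sum(-1)).min(1)
    sdf[tuple(bg.T)] = d_out
    sdf[tuple(fg.T)] = -d_in
    return sdf

def ssd(phi1, phi2):
    """Sum-of-squared-differences between two implicit shape representations."""
    return float(((phi1 - phi2) ** 2).sum())

# Two toy binary shapes: identical squares give zero disparity, a shifted one does not
a = np.zeros((32, 32), dtype=bool); a[8:24, 8:24] = True
b = np.zeros((32, 32), dtype=bool); b[10:26, 8:24] = True
print(ssd(signed_distance_map(a), signed_distance_map(a)))  # 0.0
```

A registration scheme would minimize this SSD over transformation parameters applied to one of the maps; that optimization loop is omitted here.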

    Generalizable deep learning based medical image segmentation

    Deep learning is revolutionizing medical image analysis and interpretation. However, its real-world deployment is often hindered by poor generalization to unseen domains (new imaging modalities and protocols). This lack of generalization ability is further exacerbated by the scarcity of labeled datasets for training: data collection and annotation can be prohibitively expensive in terms of labor and cost because label quality heavily depends on the expertise of radiologists. Additionally, unreliable predictions caused by poor model generalization pose safety risks to clinical downstream applications. To mitigate labeling requirements, we investigate and develop a series of techniques to strengthen the generalization ability and the data efficiency of deep medical image computing models. We further improve model accountability and identify unreliable predictions made on out-of-domain data by designing probability calibration techniques. In the first and second parts of the thesis, we discuss two types of problems for handling unexpected domains: unsupervised domain adaptation and single-source domain generalization. For domain adaptation we present a data-efficient technique that adapts a segmentation model trained on a labeled source domain (e.g., MRI) to an unlabeled target domain (e.g., CT), using a small number of unlabeled training images from the target domain. For domain generalization, we focus on both image reconstruction and segmentation. For image reconstruction, we design a simple and effective domain generalization technique for cross-domain MRI reconstruction by reusing image representations learned from natural image datasets. For image segmentation, we perform a causal analysis of the challenging cross-domain image segmentation problem. Guided by this causal analysis, we propose an effective data-augmentation-based generalization technique for single-source domains. 
The proposed method outperforms existing approaches on a large variety of cross-domain image segmentation scenarios. In the third part of the thesis, we present a novel self-supervised method for learning generic image representations that can be used to analyze unexpected objects of interest. The proposed method is designed together with a novel few-shot image segmentation framework that can segment unseen objects of interest by taking only a few labeled examples as references. Our few-shot framework demonstrates superior flexibility over conventional fully-supervised models: it does not require any fine-tuning on novel objects of interest. We further build a publicly available comprehensive evaluation environment for few-shot medical image segmentation. In the fourth part of the thesis, we present a novel probability calibration model. To ensure safety in clinical settings, a deep model is expected to be able to alert human radiologists if it has low confidence, especially when confronted with out-of-domain data. To this end we present a plug-and-play model to calibrate prediction probabilities on out-of-domain data, aligning the prediction probability with the actual accuracy on the test data. We evaluate our method on both artifact-corrupted images and images from an unforeseen MRI scanning protocol. Our method demonstrates improved calibration accuracy compared with the state-of-the-art method. Finally, we summarize the major contributions and limitations of our work, and suggest future research directions that will benefit from the work in this thesis.
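The thesis's plug-and-play calibration model is not reproduced here; a standard baseline such calibration work is usually compared against is temperature scaling, which fits a single scalar T on held-out logits so that softened probabilities match observed accuracy. A minimal sketch with synthetic, deliberately overconfident logits (all names and data are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of the labels under temperature-scaled probabilities."""
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.25, 10.0, 196)):
    """Single-parameter post-hoc calibration: pick T minimising held-out NLL."""
    return float(grid[np.argmin([nll(logits, labels, T) for T in grid])])

# Overconfident classifier: very peaked logits, but only 70% of predictions are correct,
# so the fitted temperature should come out well above 1 (softening the probabilities).
logits = np.zeros((100, 3))
logits[np.arange(100), np.arange(100) % 3] = 10.0
labels = np.where(np.arange(100) < 70, np.arange(100) % 3, (np.arange(100) + 1) % 3)
T = fit_temperature(logits, labels)
```

Temperature scaling assumes the test distribution matches the calibration set; handling out-of-domain data, as the thesis targets, requires more than this single global parameter.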

    Liver and Vessel Segmentation Methods in Abdominal CT

    Doctoral dissertation, Department of Computer Science and Engineering, College of Engineering, Seoul National University, February 2020 (advisor: Shin Yeong-Gil). Accurate segmentation of the liver and its vessels on abdominal computed tomography (CT) images is one of the most important prerequisites for computer-aided diagnosis (CAD) systems such as volumetric measurement, treatment planning, and further augmented reality-based surgical guidance. 
In recent years, the application of deep learning in the form of convolutional neural networks (CNNs) has improved the performance of medical image segmentation, but it remains difficult to provide the high generalization performance required for actual clinical practice. Furthermore, although contour features are an important factor in image segmentation, they are hard to employ in a CNN because many boundaries in the image are unclear. For liver vessel segmentation, a deep learning approach is impractical because it is difficult to obtain training data from complex vessel images. Moreover, thin vessels are hard to identify in the original image owing to weak intensity contrast and noise. In this dissertation, a CNN with high generalization performance and a contour learning scheme is first proposed for liver segmentation. Secondly, a liver vessel segmentation algorithm is presented that accurately segments even thin vessels. To build a CNN with high generalization performance, the auto-context algorithm is employed. The auto-context algorithm goes through two pipelines: the first predicts the overall area of the liver, and the second predicts the final liver using the first prediction as a prior. This process improves generalization performance because the network internally estimates a shape prior. In addition to auto-context, a contour learning method is proposed that uses only sparse contours rather than the entire contour. Sparse contours are obtained and trained by using only the mispredicted part of the network's final prediction. Experimental studies show that the proposed network is superior in accuracy to other modern networks. Multiple N-fold tests are also performed to verify the generalization performance. An algorithm for accurate liver vessel segmentation is also proposed by introducing vessel candidate points. To obtain confident vessel candidates, the 3D image is first reduced to 2D through maximum intensity projection. 
Subsequently, vessel segmentation is performed on the 2D images and the segmented pixels are back-projected into the original 3D space. Finally, a new level set function is proposed that utilizes both the original image and the vessel candidate points. The proposed algorithm can segment thin vessels with high accuracy by mainly using vessel candidate points, whose reliability is improved by performing robust segmentation in the projected 2D images, where complex structures are simplified and thin vessels are more visible. Experimental results show that the proposed algorithm is superior to other active contour models. The proposed algorithms present a new method of segmenting the liver and its vessels. The auto-context algorithm shows that a human-designed curriculum (i.e., shape-prior learning) can improve generalization performance. The proposed contour learning technique can increase the accuracy of a CNN for image segmentation by focusing on its failures, represented by sparse contours. The vessel segmentation shows that minor vessel branches can be successfully segmented through vessel candidate points obtained by reducing the image dimension. 
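The projection and back-projection steps can be sketched as follows. The thesis works with maximum intensity slab images; for simplicity this sketch projects the whole volume at once, and the toy data is illustrative:

```python
import numpy as np

def mip_with_depth(volume):
    """Maximum intensity projection along the first axis, plus the depth (slice index)
    at which each projected pixel attained its maximum."""
    return volume.max(axis=0), volume.argmax(axis=0)

def back_project(mask_2d, depth, shape_3d):
    """Lift pixels segmented in the 2D projection back to their 3D voxels of
    maximal intensity, producing sparse vessel candidate points."""
    out = np.zeros(shape_3d, dtype=bool)
    ys, xs = np.nonzero(mask_2d)
    out[depth[ys, xs], ys, xs] = True
    return out

# Toy volume with a single bright "vessel" voxel at (3, 5, 5)
vol = np.zeros((8, 10, 10))
vol[3, 5, 5] = 1.0
mip, depth = mip_with_depth(vol)
candidates = back_project(mip > 0.5, depth, vol.shape)
```

The resulting candidate voxels would then seed the level-set evolution described in the abstract; the 2D segmentation itself (thresholding here) stands in for the more robust method used in the dissertation.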
The algorithms presented in this dissertation can be employed for later analysis of liver anatomy that requires accurate segmentation techniques.

    Exploring variability in medical imaging

    Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, the application of these methods in the medical imaging pipeline remains a very challenging task. One of the main reasons for this is the amount of variability that is encountered and encapsulated in human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages in modern medical imaging processing pipelines. The variability of human anatomy makes it virtually impossible to build large labeled and annotated datasets for each disease for fully supervised machine learning. An efficient way to cope with this is to learn only from normal samples, since such data is much easier to collect. A case study of such an automatic anomaly detection system based on normative learning is presented in this work: a framework for detecting fetal cardiac anomalies during ultrasound screening using generative models trained using only normal/healthy subjects. However, despite the significant improvement in automatic abnormality detection systems, clinical routine continues to rely exclusively on overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical imaging processing pipeline entails uncertainty, which is mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it is still an open issue to what extent this kind of variability and the resulting uncertainty are introduced during the training of a model and how they affect the final performance of the task. Consequently, it is very important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task. 
A thorough investigation of this issue is presented in this work by leveraging automated estimates of machine learning model uncertainty, inter-observer variability and segmentation task performance in lung CT scan images. Finally, an overview of existing anomaly detection methods in medical imaging is presented. This state-of-the-art survey includes both conventional pattern recognition methods and deep learning based methods, and is one of the first literature surveys attempted in this specific research area.
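One widely used automated estimate of model uncertainty in studies like the one above is the entropy of the predicted class probabilities. A minimal sketch (not necessarily the estimator used in this work):

```python
import numpy as np

def predictive_entropy(probs, axis=-1):
    """Shannon entropy of predicted class probabilities; higher values flag voxels
    (or images) where the model is less certain."""
    p = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return -(p * np.log(p)).sum(axis=axis)

# A confident prediction vs. a maximally uncertain one (two classes)
print(predictive_entropy(np.array([0.99, 0.01])))  # ~0.056
print(predictive_entropy(np.array([0.5, 0.5])))    # ~0.693 = ln 2, the two-class maximum
```

Applied per voxel to a segmentation network's softmax output, this yields an uncertainty map that can be compared against maps of inter-observer disagreement.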