2,165 research outputs found
Atlas construction and image analysis using statistical cardiac models
This paper presents a brief overview of current trends in the construction of population and multi-modal heart atlases in our group and their application to atlas-based cardiac image analysis. The technical challenges of constructing these atlases are organized along two main axes: groupwise image registration of anatomical, motion and fiber images, and construction of statistical shape models. On the application side, this paper focuses on the extraction of atlas-based biomarkers for the detection of local shape or motion abnormalities, addressing several cardiac applications where the extracted information is used to study and grade different pathologies. The paper concludes with a discussion of the role of statistical atlases in the integration of multiple information sources and the potential this brings to in-silico simulations.
Multimodal Image Fusion and Its Applications.
Image fusion integrates images from different modalities to provide comprehensive information about the image content, increasing interpretation capability and producing more reliable results. Combining multi-modal images offers several advantages, including improved geometric correction, complementary data for better classification, and enhanced features for analysis.
This thesis develops the image fusion idea in the context of two domains: material microscopy and biomedical imaging. The proposed methods include image modeling, image indexing, image segmentation, and image registration. The common theme behind all proposed methods is the use of complementary information from multi-modal images to achieve better registration, feature extraction, and detection performances.
In material microscopy, we propose an anomaly-driven image fusion framework to perform the task of material microscopy image analysis and anomaly detection. This framework is based on a probabilistic model that enables us to index, process and characterize the data with systematic and well-developed statistical tools. In biomedical imaging, we focus on the multi-modal registration problem for functional MRI (fMRI) brain images, which improves the performance of brain activation detection.
PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120701/1/yuhuic_1.pd
NiftyNet: a deep-learning platform for medical imaging
Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis and adapting them for this application requires
substantial implementation effort. Thus, there has been substantial duplication
of effort and incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications including segmentation, regression, image generation and
representation learning applications. Components of the NiftyNet pipeline
including data loading, data augmentation, network architectures, loss
functions and evaluation metrics are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
We present 3 illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications.
Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures; update includes additional applications, updated author list and formatting for journal submission.
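The modular pipeline the abstract describes (swappable data loading, augmentation, network, and loss components) can be sketched in a few lines. This is a hypothetical design in the spirit of that component structure, not NiftyNet's actual API; the names `Pipeline`, `train_step`, and the toy lambda components are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Pipeline:
    """Hypothetical modular pipeline: each stage is a pluggable callable."""
    load: Callable       # key -> (inputs, targets)
    augment: Callable    # inputs -> augmented inputs
    network: Callable    # inputs -> predictions
    loss: Callable       # (predictions, targets) -> scalar loss

    def train_step(self, key):
        x, y = self.load(key)
        x = self.augment(x)
        pred = self.network(x)
        return self.loss(pred, y)

# Toy components; a real system would plug in file readers, spatial
# augmentations, a CNN, and a segmentation loss here.
pipe = Pipeline(
    load=lambda k: ([0.2, 0.8], [0, 1]),
    augment=lambda x: [v + 0.01 for v in x],
    network=lambda x: x,  # identity "network" for the sketch
    loss=lambda p, y: sum((pi - yi) ** 2 for pi, yi in zip(p, y)),
)
val = pipe.train_step("case_001")
```

Swapping any one component (e.g. a different loss) leaves the rest of the pipeline untouched, which is the point of the modular design.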
Cardiac displacement tracking with data assimilation combining a biomechanical model and an automatic contour detection
Data assimilation in computational models represents an essential step in building patient-specific simulations. This work aims at circumventing one major bottleneck in the practical use of data assimilation strategies in cardiac applications, namely, the difficulty of formulating and effectively computing an adequate data-fitting term for cardiac imaging such as cine MRI. We here provide a proof-of-concept study of data assimilation based on automatic contour detection. The tissue motion simulated by the data assimilation framework is then assessed against displacements extracted from tagged MRI in six subjects, and the results illustrate the performance of the proposed method, including for circumferential displacements, which are not well extracted from cine MRI alone.
Physics-Informed Computer Vision: A Review and Perspectives
Incorporating physical information into machine learning frameworks is opening
up and transforming many application domains. Here the learning process is
augmented with fundamental knowledge and governing physical laws. In this work
we explore their utility for computer vision tasks in
interpreting and understanding visual data. We present a systematic literature
review of formulation and approaches to computer vision tasks guided by
physical laws. We begin by decomposing the popular computer vision pipeline
into a taxonomy of stages and investigate approaches to incorporate governing
physical equations in each stage. Existing approaches in each task are
analyzed with regard to which governing physical processes are modeled, how
they are formulated, and how they are incorporated, i.e., by modifying data
(observation bias), modifying networks (inductive bias), or modifying losses
(learning bias). The taxonomy offers a
unified view of the application of the physics-informed capability,
highlighting where physics-informed learning has been conducted and where the
gaps and opportunities are. Finally, we highlight open problems and challenges
to inform future research. While still in its early days, the study of
physics-informed computer vision has the promise to develop better computer
vision models that can improve physical plausibility, accuracy, data efficiency
and generalization in increasingly realistic applications.
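The "learning bias" route mentioned above, modifying the loss, is the easiest to sketch: a data-fitting term is augmented with a penalty on the residual of a governing equation. The helper names (`physics_informed_loss`, `residual_fn`) and the toy conservation residual below are assumptions for illustration, not a formulation taken from the review.

```python
import numpy as np

def physics_informed_loss(pred, target, residual_fn, weight=0.1):
    """Learning bias: data-fitting MSE plus a weighted physics-residual
    penalty. `residual_fn` returns the violation of a governing law,
    which should be zero for physically plausible predictions."""
    data_loss = np.mean((pred - target) ** 2)
    physics_loss = np.mean(residual_fn(pred) ** 2)
    return data_loss + weight * physics_loss

# Toy governing law: each prediction row should sum to zero (a mock
# conservation constraint), so the residual is the row sum.
residual = lambda p: p.sum(axis=1)
pred = np.array([[0.5, -0.5], [1.0, -0.9]])
target = np.array([[0.4, -0.4], [1.1, -1.0]])
loss = physics_informed_loss(pred, target, residual, weight=0.1)
```

Observation bias (physics-consistent training data) and inductive bias (physics-constrained architectures) would enter at the data and network stages instead, leaving this loss unchanged.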
Machine Learning in Robotic Ultrasound Imaging: Challenges and Perspectives
This article reviews the recent advances in intelligent robotic ultrasound
(US) imaging systems. We commence by presenting the commonly employed robotic
mechanisms and control techniques in robotic US imaging, along with their
clinical applications. Subsequently, we focus on the deployment of machine
learning techniques in the development of robotic sonographers, emphasizing
crucial developments aimed at enhancing the intelligence of these systems. The
methods for achieving autonomous action reasoning are categorized into two sets
of approaches: those relying on implicit environmental data interpretation and
those using explicit interpretation. Throughout this exploration, we also
discuss practical challenges, including those related to the scarcity of
medical data, the need for a deeper understanding of the physical aspects
involved, and effective data representation approaches. Moreover, we conclude
by highlighting the open problems in the field and analyzing different possible
perspectives on how the community could move forward in this research area.
Comment: Accepted by Annual Review of Control, Robotics, and Autonomous Systems.
Automatic segmentation of wall structures from cardiac images
One important topic in medical image analysis is segmenting wall structures from different cardiac imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). This task is typically performed by radiologists either manually or semi-automatically, which is very time-consuming. To reduce this laborious human effort, automatic methods have become popular in this research area. In this thesis, features insensitive to data variations are explored to segment the ventricles from CT images and extract the left atrium from MR images. As applications, the segmentation results are used to facilitate cardiac disease analysis. Specifically,
1. An automatic method is proposed to extract the ventricles from CT images by integrating surface decomposition with contour evolution techniques. In particular, the ventricles are first identified on a surface extracted from patient-specific image data. Then, the contour evolution is employed to refine the identified ventricles. The proposed method is robust to variations of ventricle shapes, volume coverages, and image quality.
2. A variational region-growing method is proposed to segment the left atrium from MR images. Because of the localized property of this formulation, the proposed method is insensitive to data variabilities that are hard to handle by globalized methods.
3. In applications, a geometrical computational framework is proposed to estimate the myocardial mass at risk caused by stenoses. In addition, the segmentation of the left atrium is used to identify scars in post-ablation MR images.
PhD thesis. Committee Chair: Yezzi, Anthony; Committee Co-Chair: Tannenbaum, Allen; Committee Members: Egerstedt, Magnus; Fedele, Francesco; Stillman, Arthur; Vela, Patricio.
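The region-growing idea in item 2 can be illustrated with a plain seeded variant: start from a seed voxel and absorb neighbours whose intensity stays close to the seed's. This is a deliberately simplified stand-in for the variational, localized formulation the thesis describes; the function name, tolerance rule, and toy image are all invented for illustration.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=0.2):
    """Seeded region growing on a 2D image: breadth-first expansion
    over 4-neighbours whose intensity is within `tol` of the seed."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = image[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(image[ny, nx] - seed_val) <= tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask

# Toy "atrium" of bright pixels in a dark background.
img = np.array([[0.9, 0.9, 0.1],
                [0.9, 0.8, 0.1],
                [0.1, 0.1, 0.1]])
mask = region_grow(img, (0, 0), tol=0.2)  # grows over the 4 bright pixels
```

A variational formulation replaces the hard intensity test with an energy that is minimized locally, which is what makes it robust to the global intensity variability mentioned above.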
Towards Robust and Accurate Image Registration by Incorporating Anatomical and Appearance Priors
Ph.D. (Doctor of Philosophy).
Generalizable deep learning based medical image segmentation
Deep learning is revolutionizing medical image analysis and interpretation. However, its real-world deployment is often hindered by poor generalization to unseen domains (new imaging modalities and protocols). This lack of generalization ability is further exacerbated by the scarcity of labeled datasets for training: data collection and annotation can be prohibitively expensive in terms of labor and cost because label quality heavily depends on the expertise of radiologists. Additionally, unreliable predictions caused by poor model generalization pose safety risks to clinical downstream applications.
To mitigate labeling requirements, we investigate and develop a series of techniques to strengthen the generalization ability and the data efficiency of deep medical image computing models. We further improve model accountability and identify unreliable predictions made on out-of-domain data, by designing probability calibration techniques.
In the first and second parts of the thesis, we discuss two types of problems for handling unexpected domains: unsupervised domain adaptation and single-source domain generalization. For domain adaptation, we present a data-efficient technique that adapts a segmentation model trained on a labeled source domain (e.g., MRI) to an unlabeled target domain (e.g., CT) using a small number of unlabeled training images from the target domain.
For domain generalization, we focus on both image reconstruction and segmentation. For image reconstruction, we design a simple and effective domain generalization technique for cross-domain MRI reconstruction by reusing image representations learned from natural image datasets. For image segmentation, we perform a causal analysis of the challenging cross-domain image segmentation problem. Guided by this causal analysis, we propose an effective data-augmentation-based generalization technique for the single-source setting. The proposed method outperforms existing approaches in a large variety of cross-domain image segmentation scenarios.
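One common shape such data-augmentation-based single-source generalization takes is appearance randomization: perturb gamma, brightness, and noise so a model cannot latch onto scanner-specific intensity statistics. The sketch below is an illustrative stand-in, not the thesis's causally-derived method; the function name and parameter ranges are assumptions.

```python
import numpy as np

def appearance_augment(image, rng):
    """Randomize image appearance (gamma, brightness shift, noise) so a
    segmentation model trained on one domain cannot overfit to that
    domain's intensity distribution. Output stays in [0, 1]."""
    gamma = rng.uniform(0.5, 2.0)          # random contrast curve
    shift = rng.uniform(-0.1, 0.1)         # random brightness offset
    noise = rng.normal(0.0, 0.02, size=image.shape)
    out = np.clip(image, 0.0, 1.0) ** gamma + shift + noise
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, size=(4, 4))   # toy normalized image patch
aug = appearance_augment(img, rng)
```

Crucially, only the appearance is perturbed; the anatomy (and hence the segmentation label) is unchanged, which is what makes the augmentation label-preserving.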
In the third part of the thesis, we present a novel self-supervised method for learning generic image representations that can be used to analyze unexpected objects of interest. The proposed method is designed together with a novel few-shot image segmentation framework that can segment unseen objects of interest by taking only a few labeled examples as references. Superior flexibility over conventional fully-supervised models is demonstrated by our few-shot framework: it does not require any fine-tuning on novel objects of interest. We further build a publicly available comprehensive evaluation environment for few-shot medical image segmentation.
In the fourth part of the thesis, we present a novel probability calibration model. To ensure safety in clinical settings, a deep model is expected to alert human radiologists when it has low confidence, especially when confronted with out-of-domain data. To this end, we present a plug-and-play model that calibrates prediction probabilities on out-of-domain data, bringing them in line with the actual accuracy on the test data. We evaluate our method on both artifact-corrupted images and images from an unforeseen MRI scanning protocol, demonstrating improved calibration accuracy compared with the state-of-the-art method.
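To make the calibration goal concrete, the standard post-hoc baseline is temperature scaling: fit a single temperature T on held-out data so that softened probabilities match observed accuracy. This sketch shows that baseline only, not the thesis's plug-and-play model; the grid-search helper and toy logits are invented for illustration.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / t
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Grid-search the temperature T minimizing negative log-likelihood
    on held-out data. T > 1 softens overconfident predictions."""
    best_t, best_nll = 1.0, np.inf
    for t in grid:
        p = softmax(logits, t)
        nll = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t

# Overconfident toy logits: huge margins, but one sample is wrong,
# so calibration should push the temperature above 1.
logits = np.array([[8.0, 0.0], [0.0, 8.0], [6.0, 0.0], [0.0, 2.0]])
labels = np.array([0, 1, 0, 0])  # last sample is misclassified
t = fit_temperature(logits, labels)  # t > 1: probabilities are softened
```

A single scalar T cannot adapt to out-of-domain shift, which is the gap a more expressive calibration model, like the one described above, targets.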
Finally, we summarize the major contributions and limitations of our work. We also suggest future research directions that will benefit from the work in this thesis.
Open Access