Numerical methods for coupled reconstruction and registration in digital breast tomosynthesis.
Digital Breast Tomosynthesis (DBT) provides an insight into the fine details of normal fibroglandular tissues and abnormal lesions by reconstructing a pseudo-3D image of the breast. In this respect, DBT overcomes a major limitation of conventional X-ray mammography by reducing the confounding effects caused by the superposition of breast tissue. In a breast cancer screening or diagnostic context, a radiologist is interested in detecting change, which might be indicative of malignant disease. To help automate this task, image registration is required to establish spatial correspondence between time points. Typically, images, such as MRI or CT, are first reconstructed and then registered. This approach can be effective if reconstructing using a complete set of data. However, for ill-posed, limited-angle problems such as DBT, estimating the deformation is complicated by the significant artefacts associated with the reconstruction, leading to severe inaccuracies in the registration. This paper presents a mathematical framework which couples the two tasks and jointly estimates both image intensities and the parameters of a transformation. Under this framework, we compare an iterative method and a simultaneous method, both of which tackle the problem of comparing DBT data by combining reconstruction of a pair of temporal volumes with their registration. We evaluate our methods using various computational digital phantoms, uncompressed breast MR images, and in-vivo DBT simulations. Firstly, we compare both iterative and simultaneous methods to the conventional, sequential method using an affine transformation model. We show that jointly estimating image intensities and parametric transformations gives superior results with respect to reconstruction fidelity and registration accuracy. Also, we incorporate a non-rigid B-spline transformation model into our simultaneous method.
The results demonstrate a visually plausible recovery of the deformation while preserving reconstruction fidelity.
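Schematically, the coupling of reconstruction and registration described above can be written as a single joint objective. The following is an illustrative sketch only (the symbols are generic, not the paper's notation):

```latex
\min_{u_1,\, u_2,\, \theta}\;
\tfrac{1}{2}\|P u_1 - g_1\|_2^2
+ \tfrac{1}{2}\|P u_2 - g_2\|_2^2
+ \alpha\,\|u_2 - T_\theta u_1\|_2^2
+ \beta \big( R(u_1) + R(u_2) \big)
```

Here $P$ is the limited-angle DBT projection operator, $g_1, g_2$ the projection data at the two time points, $T_\theta$ a transformation (affine or B-spline) with parameters $\theta$, and $R$ a regularizer. An iterative scheme alternates between updating the intensities $u_1, u_2$ and the parameters $\theta$, whereas a simultaneous scheme updates all unknowns jointly.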
Multimodal breast imaging: Registration, visualization, and image synthesis
The benefit of registration and fusion of functional images with anatomical images is well appreciated with the advent of combined positron emission tomography and x-ray computed tomography scanners (PET/CT). This is especially true in breast cancer imaging, where modalities such as high-resolution and dynamic contrast-enhanced magnetic resonance imaging (MRI) and F-18-FDG positron emission tomography (PET) have steadily gained acceptance in addition to x-ray mammography, the primary detection tool. The increased interest in combined PET/MRI images has facilitated the demand for appropriate registration and fusion algorithms. A new approach to MRI-to-PET non-rigid breast image registration was developed and evaluated based on the location of a small number of fiducial skin markers (FSMs) visible in both modalities. The observed FSM displacement vectors between MRI and PET, distributed piecewise linearly over the breast volume, produce a deformed Finite-Element mesh that reasonably approximates non-rigid deformation of the breast tissue between the MRI and PET scans. The method does not require a biomechanical breast tissue model, and is robust and fast. The method was evaluated both qualitatively and quantitatively on patients and a deformable breast phantom. The procedure yields quality images with average target registration error (TRE) below 4 mm. The importance of appropriately jointly displaying (i.e. fusing) the registered images has often been neglected and underestimated. A combined MRI/PET image has the benefits of directly showing the spatial relationships between the two modalities, increasing the sensitivity, specificity, and accuracy of diagnosis.
Additional information on morphology and on dynamic behavior of the suspicious lesion can be provided, allowing more accurate lesion localization including mapping of hyper- and hypo-metabolic regions as well as better lesion-boundary definition, improving accuracy when grading the breast cancer and assessing the need for biopsy. Eight promising fusion-for-visualization techniques were evaluated by radiologists from University Hospital in Syracuse, NY. Preliminary results indicate that the radiologists were better able to perform a series of tasks when reading the fused PET/MRI data sets using color tables generated by a newly developed genetic algorithm, as compared to other commonly used schemes. The lack of a known ground truth hinders the development and evaluation of new algorithms for tasks such as registration and classification. A preliminary mesh-based breast phantom containing 12 distinct tissue classes along with tissue properties necessary for the simulation of dynamic positron emission tomography scans was created. The phantom contains multiple components which can be separately manipulated, utilizing geometric transformations, to represent populations or a single individual being imaged in multiple positions. This phantom will support future multimodal breast imaging work.
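The target registration error (TRE) reported above has a simple definition: the mean Euclidean distance between registered landmark positions and their known reference positions. A minimal sketch with hypothetical marker coordinates (not the study's data):

```python
import numpy as np

def target_registration_error(warped_pts, reference_pts):
    """Mean Euclidean distance (here in mm) between landmark positions
    after registration and their known reference positions."""
    return float(np.linalg.norm(warped_pts - reference_pts, axis=1).mean())

# Hypothetical landmark positions after an MRI-to-PET registration (mm).
warped = np.array([[10.0, 20.0, 5.0], [13.0, 24.0, 5.0]])
truth = np.array([[10.0, 20.0, 5.0], [10.0, 20.0, 5.0]])
tre = target_registration_error(warped, truth)  # mean of 0 and 5 -> 2.5 mm
```

A study-level criterion such as "average TRE below 4 mm" then reduces to checking this statistic over all evaluated landmarks.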
Reconstruction Methods for Free-Breathing Dynamic Contrast-Enhanced MRI
Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is a valuable diagnostic tool due to the combination of anatomical and physiological information it provides. However, the sequential sampling of MRI presents an inherent tradeoff between spatial and temporal resolution. Compressed Sensing (CS) methods have been applied to undersampled MRI to reconstruct full-resolution images at sub-Nyquist sampling rates. In exchange for shorter data acquisition times, CS-MRI requires more computationally intensive iterative reconstruction methods.
We present several model-based image reconstruction (MBIR) methods to improve the spatial and temporal resolution of MR images and/or the computational time for multi-coil MRI reconstruction. We propose efficient variable splitting (VS) methods for support-constrained MRI reconstruction, image reconstruction and denoising with non-circulant boundary conditions, and improved temporal regularization for breast DCE-MRI. These proposed VS algorithms decouple the system model and sparsity terms of the convex optimization problem. By leveraging matrix structures in the system model and sparsifying operator, we perform alternating minimization over a set of auxiliary variables, each step of which can be performed efficiently. We demonstrate the computational benefits of our proposed VS algorithms compared to similar previously proposed methods. We also demonstrate convergence guarantees for two proposed methods, ADMM-tridiag and ADMM-FP-tridiag. With simulation experiments, we demonstrate lower error in spatial and temporal dimensions for these VS methods compared to other object models.
We also propose a method for indirect motion compensation in 5D liver DCE-MRI. 5D MRI separates temporal changes due to contrast from anatomical changes due to respiratory motion into two distinct dimensions. This work applies a pre-computed motion model to perform motion-compensated regularization across the respiratory dimension and improve the conditioning of this highly sparse 5D reconstruction problem. We demonstrate a proof of concept using a digital phantom with contrast and respiratory changes, and we show preliminary results for motion model-informed regularization on in vivo patient data.
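The decoupling idea behind the variable-splitting algorithms can be illustrated on a toy problem. The sketch below runs ADMM on a tiny 1-D total-variation-regularized reconstruction, splitting the sparsity term from the system term; it follows the generic ADMM pattern and is not the dissertation's ADMM-tridiag or ADMM-FP-tridiag algorithm:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1-norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_tv1d(A, y, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5*||Ax - y||^2 + lam*||Dx||_1 with the
    splitting z = Dx, where D is the 1-D finite-difference operator."""
    n = A.shape[1]
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # (n-1) x n differences
    x, z, u = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
    H = A.T @ A + rho * (D.T @ D)              # fixed system matrix
    for _ in range(iters):
        x = np.linalg.solve(H, A.T @ y + rho * D.T @ (z - u))  # quadratic step
        z = soft(D @ x + u, lam / rho)                         # sparsity step
        u = u + D @ x - z                                      # dual update
    return x, z

A = np.eye(4)                          # trivial system model for the demo
y = np.array([1.0, 1.1, 3.0, 3.2])     # noisy piecewise-constant signal
x, z = admm_tv1d(A, y, lam=0.1)
```

In the MRI setting, A would be the (sub-sampled Fourier times coil-sensitivity) system matrix; the point of the matrix-structure-aware splittings is that each sub-problem above stays cheap.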
Deep learning in medical image registration: introduction and survey
Image registration (IR) is a process that deforms images to align them with
respect to a reference space, making it easier for medical practitioners to
examine various medical images in a standardized reference frame, such as
having the same rotation and scale. This document introduces image registration
using a simple numeric example. It provides a definition of image registration
along with a space-oriented symbolic representation. This review covers various
aspects of image transformations, including affine, deformable, invertible, and
bidirectional transformations, as well as medical image registration algorithms
such as Voxelmorph, Demons, SyN, Iterative Closest Point, and SynthMorph. It
also explores atlas-based registration and multistage image registration
techniques, including coarse-fine and pyramid approaches. Furthermore, this
survey paper discusses medical image registration taxonomies, datasets,
evaluation measures, such as correlation-based metrics, segmentation-based
metrics, processing time, and model size. It also explores applications in
image-guided surgery, motion tracking, and tumor diagnosis. Finally, the
document addresses future research directions, including the further
development of transformers.
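Of the evaluation measures listed, the correlation-based ones are the simplest to state. A minimal sketch of normalized cross-correlation (NCC), a standard similarity metric between a registered image pair:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-shape images.
    Invariant to positive linear intensity changes (gain and offset)."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

img = np.arange(12.0).reshape(3, 4)
same = ncc(img, 2.0 * img + 3.0)   # ~1.0: perfectly correlated
flip = ncc(img, -img)              # ~-1.0: anti-correlated
```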
Discontinuity preserving image registration for breathing induced sliding organ motion
Image registration is a powerful tool in medical image analysis and facilitates
the clinical routine in several aspects. It became an indispensable device for
many medical applications including image-guided therapy systems. The
basic goal of image registration is to spatially align two images that show a
similar region of interest. More specifically, a displacement field, or equivalently
a transformation, is estimated that relates the positions of the pixels or
feature points in one image to the corresponding positions in the other.
The resulting alignment of the images assists the doctor in comparing and
diagnosing them. There are different kinds of image registration methods:
those capable of estimating a rigid or, more generally, an affine transformation
between the images, and those able to capture more complex motion by
estimating a non-rigid transformation. There are many well-established
non-rigid registration methods, but those able to preserve discontinuities
in the displacement field are rather rare. These discontinuities appear in
particular at organ boundaries during breathing-induced organ motion.
In this thesis, we use the idea of combining motion segmentation with
registration to tackle the problem of preserving discontinuities in the
resulting displacement field. We introduce a binary function to represent
the motion segmentation, and the proposed discontinuity-preserving
non-rigid registration method is then formulated in a variational framework.
Thus, an energy functional is defined whose minimisation with respect to
the displacement field and the motion segmentation leads to the desired
result. In theory, one can prove that a global minimiser of the energy
functional with respect to the motion segmentation can be found if the
displacement field is given. The overall minimisation problem, however, is
non-convex, and a suitable optimisation strategy has to be chosen. Furthermore,
depending on whether we use the pure L1-norm or an approximation of it in the
formulation of the energy functional, we use different numerical methods to
solve the minimisation problem. More specifically, when using an approximation
of the L1-norm, the minimisation of the energy functional with respect to the
displacement field is performed through the fixed-point iteration scheme of
Brox et al., and the minimisation with respect to the motion segmentation
with the dual algorithm of Chambolle. On the other hand, when we use the
pure L1-norm in the energy functional, the primal-dual algorithm of Chambolle
and Pock is used for both the minimisation with respect to the displacement
field and that with respect to the motion segmentation. This approach is
clearly faster than the one using the approximation of the L1-norm, and also
theoretically more appealing. Finally, to support the registration method
during the minimisation process, a later approach additionally incorporates
the positions of certain landmarks into the formulation of the energy
functional that uses the pure L1-norm. As before, the primal-dual algorithm
of Chambolle and Pock is then used for both minimisations. All the proposed
non-rigid discontinuity-preserving registration methods delivered promising
results in experiments with synthetic images and real MR images of
breathing-induced liver motion.
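An energy functional of the kind described can be sketched as follows (an illustrative form with generic symbols, not the thesis's exact model):

```latex
E(u, \chi) \;=\; \int_\Omega \big| I_1(x + u(x)) - I_0(x) \big| \, dx
\;+\; \alpha \int_\Omega w\big(\chi(x)\big)\, |\nabla u(x)| \, dx
\;+\; \beta\, \mathrm{TV}(\chi)
```

where $I_0, I_1$ are the images, $u$ the displacement field, $\chi$ the binary motion segmentation, and $w(\chi)$ a weight that relaxes the smoothness penalty across the motion boundary so that sliding discontinuities are not smoothed away; the $\mathrm{TV}(\chi)$ term keeps the segmentation boundary regular. With the pure $L^1$ data term, both minimisations are amenable to the primal-dual algorithm of Chambolle and Pock.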
Information Fusion of Magnetic Resonance Images and Mammographic Scans for Improved Diagnostic Management of Breast Cancer
Medical imaging is critical to non-invasive diagnosis and treatment of a wide spectrum
of medical conditions. However, different modalities of medical imaging employ
different contrast mechanisms and, consequently, provide different depictions of bodily
anatomy. As a result, there is a frequent problem where the same pathology can be
detected by one type of medical imaging while being missed by others. This problem brings
forward the importance of the development of image processing tools for integrating the
information provided by different imaging modalities via the process of information fusion.
One particularly important example of clinical application of such tools is in the diagnostic
management of breast cancer, which is a prevailing cause of cancer-related mortality in
women. Currently, the diagnosis of breast cancer relies mainly on X-ray mammography and
Magnetic Resonance Imaging (MRI), which are both important throughout different stages
of detection, localization, and treatment of the disease. The sensitivity of mammography,
however, is known to be limited in the case of relatively dense breasts, while contrast enhanced
MRI tends to yield frequent 'false alarms' due to its high sensitivity. Given this
situation, it is critical to find reliable ways of fusing the mammography and MRI scans in
order to improve the sensitivity of the former while boosting the specificity of the latter.
Unfortunately, fusing the above types of medical images is known to be a difficult computational
problem. Indeed, while MRI scans are usually volumetric (i.e., 3-D), digital
mammograms are always planar (2-D). Moreover, mammograms are invariably acquired
under the force of compression paddles, thus making the breast anatomy undergo sizeable
deformations. In the case of MRI, on the other hand, the breast is rarely constrained and
imaged in a pendulous state. Finally, X-ray mammography and MRI exploit two completely
different physical mechanisms, which produce distinct diagnostic contrasts that
are related in a non-trivial way. Under such conditions, the success of information fusion
depends on one's ability to establish spatial correspondences between mammograms
and their related MRI volumes in a cross-modal cross-dimensional (CMCD) setting in the
presence of spatial deformations (+SD). Solving the problem of information fusion in the
CMCD+SD setting is a very challenging analytical/computational problem, still in need
of efficient solutions.
In the literature, there is a lack of a generic and consistent solution to the problem of
fusing mammograms and breast MRIs and using their complementary information. Most
of the existing MRI to mammogram registration techniques are based on a biomechanical
approach which builds a speci c model for each patient to simulate the effect of mammographic
compression. The biomechanical model is not optimal as it ignores the common
characteristics of breast deformation across different cases. Breast deformation is essentially the planarization of a 3-D volume between two paddles, which is common in all
patients. Regardless of the size, shape, or internal con guration of the breast tissue, one
can predict the major part of the deformation only by considering the geometry of the
breast tissue. In contrast with complex standard methods relying on patient-speci c biomechanical
modeling, we developed a new and relatively simple approach to estimate the
deformation and nd the correspondences. We consider the total deformation to consist of
two components: a large-magnitude global deformation due to mammographic compression
and a residual deformation of relatively smaller amplitude. We propose a much simpler
way of predicting the global deformation which compares favorably to FEM in terms of
its accuracy. The residual deformation, on the other hand, is recovered in a variational
framework using an elastic transformation model.
The proposed algorithm provides us with a computational pipeline that takes breast
MRIs and mammograms as inputs and returns the spatial transformation which establishes
the correspondences between them. This spatial transformation can be applied in different
applications, e.g., producing 'MRI-enhanced' mammograms (which can improve
the quality of surgical care) and correlating different types of mammograms.
We investigate the performance of our proposed pipeline on the application of enhancing
mammograms by means of MRIs, and we show improvements over the state of the art.
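The geometric argument above (deformation dominated by the planarization of the breast between the paddles) can be illustrated with a toy incompressible-flattening map. This is a hypothetical sketch for intuition, not the global deformation model actually developed in the thesis:

```python
import numpy as np

def flatten_between_paddles(points, thickness_before, thickness_after):
    """Toy global deformation: compress along z to the paddle separation
    and expand uniformly in-plane so total volume is preserved.
    Hypothetical illustration only."""
    s = thickness_after / thickness_before   # z compression factor, < 1
    xy = 1.0 / np.sqrt(s)                    # in-plane expansion factor
    out = np.asarray(points, dtype=float).copy()
    out[:, :2] *= xy
    out[:, 2] *= s
    return out

pts = np.array([[10.0, 10.0, 60.0]])
flat = flatten_between_paddles(pts, thickness_before=60.0, thickness_after=45.0)
```

A residual, smaller-amplitude deformation would then be estimated on top of such a global prediction, as the abstract describes.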
Variational Multi-Task Models for Image Analysis: Applications to Magnetic Resonance Imaging
This thesis deals with the study and development of several variational multi-task models for solving inverse problems in imaging, with a particular focus on Magnetic Resonance Imaging (MRI). In most image processing problems, one usually deals with the reconstruction task, i.e., the task of reconstructing an image from indirect measurements, and then performs various operations, one after the other (i.e. sequentially), to improve the quality of the reconstruction and to extract useful information.
However, recent developments in a variational context have shown that performing those tasks jointly (i.e. in a multi-task framework) offers great benefits, and this is the perspective that we follow in this thesis. We go beyond traditional sequential approaches and set a new basis for variational multi-task methods for MRI analysis. We demonstrate that by sharing representations between tasks and carefully interconnecting them, one can create synergies across challenging problems and reduce error propagation.
More precisely, firstly we propose a multi-task variational model to tackle the problems of image reconstruction and image segmentation using non-convex Bregman iteration. We describe theoretical and numerical details of the problem and its optimisation scheme. Moreover, we show that our multi-task model achieves better results in several examples and MRI applications than existing approaches in the same context.
Secondly, we show that our approach can be extended to a multi-task reconstruction and segmentation model for the nonlinear inverse problem of velocity-encoded MRI. In this context, the aim is to estimate not only the magnitude from MRI data, but also the phase and its flow information, whilst simultaneously identifying regions of interest through the segmentation task.
Finally, we go beyond two-task frameworks and introduce for the first time a variational multi-task model to handle three imaging tasks. To this end, we design a variational multi-task framework addressing reconstruction, super-resolution and registration for improving the quality of MRI reconstruction. We demonstrate that our model is theoretically well-motivated and that it outperforms sequential models while requiring less computation. Furthermore, we show through experimental results the potential of this approach for clinical applications.
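As an illustration of the two-task case, a joint reconstruction-segmentation problem can be posed variationally as follows (a generic sketch, not the thesis's exact formulation):

```latex
\min_{u,\, v}\; \tfrac{1}{2}\,\|\mathcal{A}u - f\|_2^2
\;+\; \alpha\,\mathrm{TV}(u)
\;+\; \beta\, \mathcal{C}(u, v)
```

where $\mathcal{A}$ is the (undersampled Fourier) MRI forward operator, $f$ the measured k-space data, $u$ the reconstruction, $v$ the segmentation, and $\mathcal{C}$ a coupling term that rewards agreement between the reconstructed intensities and the segmented regions. Alternating (e.g. Bregman) updates on $u$ and $v$ let each task inform the other, which is the synergy exploited by the multi-task approach.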
Advanced Algorithms for 3D Medical Image Data Fusion in Specific Medical Problems
Image fusion is one of today's most common and still challenging tasks in medical imaging, and it plays a crucial role in all areas of medical care such as diagnosis, treatment and surgery. Three projects crucially dependent on image fusion are introduced in this thesis. The first project deals with 3D CT subtraction angiography of the lower limbs. It combines pre-contrast and contrast-enhanced data to extract the blood vessel tree. The second project fuses DTI and T1-weighted MRI brain data.
The aim of this project is to combine the brain structural and functional information that provide improved knowledge about intrinsic brain connectivity. The third project deals with time series of CT spine data in which metastases occur. In this project the progression of metastases within the vertebrae is studied based on fusion of the successive elements of the image series. This thesis introduces a new methodology for classifying metastatic tissue. All the projects mentioned in this thesis were carried out within the medical image analysis group led by Prof. Jiří Jan. This dissertation concerns primarily the registration part of the first project and the classification part of the third project. The second project is described completely. The other parts of the first and third projects, including the specific preprocessing of the data, are introduced in detail in the dissertation thesis of my colleague Roman Peter, M.Sc.
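For the first project, once the pre-contrast and contrast-enhanced volumes are registered, vessel extraction is, schematically, a subtraction with thresholding. A minimal illustrative sketch (not the project's actual pipeline):

```python
import numpy as np

def subtraction_angiography(contrast, precontrast, threshold):
    """Schematic CT subtraction angiography: subtract the registered
    pre-contrast volume from the contrast-enhanced one and keep only
    enhancement above `threshold` (attributed to vessels)."""
    diff = contrast.astype(float) - precontrast.astype(float)
    return np.where(diff > threshold, diff, 0.0)

pre = np.array([[100.0, 100.0], [100.0, 100.0]])    # pre-contrast values
post = np.array([[100.0, 350.0], [100.0, 100.0]])   # one enhancing voxel
vessels = subtraction_angiography(post, pre, threshold=50.0)
```

In practice the subtraction is only as good as the preceding registration, which is why the registration part is the focus of the dissertation.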
Efficient dense non-rigid registration using the free-form deformation framework
Medical image registration consists of finding spatial correspondences between two images or more. It
is a powerful tool which is commonly used in various medical image processing tasks. Even though
medical image registration has been an active topic of research for the last two decades, significant
challenges in the field remain to be solved. This thesis addresses some of these challenges through
extensions to the Free-Form Deformation (FFD) registration framework, which is one of the most widely
used and well-established non-rigid registration algorithms.
Medical image registration is a computationally expensive task because of the high number of degrees
of freedom of non-rigid transformations. In this work, the FFD algorithm has been re-factored to enable
fast processing, while maintaining the accuracy of the results. In addition, parallel computing paradigms
have been employed to provide near real-time image registration capabilities. Further modifications have
been performed to improve the registration's robustness to artifacts such as tissue non-uniformity. The
plausibility of the generated deformation field has been improved through the use of
biomechanical-model-based regularization. Additionally, diffeomorphic extensions to the algorithm were also developed.
The work presented in this thesis has been extensively validated using brain magnetic resonance
imaging of patients diagnosed with dementia or patients undergoing brain resection. It has also been
applied to lung X-ray computed tomography and imaging of small animals.
Alongside this thesis, an open-source package, NiftyReg, has been developed to release the
presented work to the medical imaging community.
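The FFD framework mentioned above models the deformation with a lattice of control points blended by cubic B-splines. The following 1-D sketch shows the core evaluation step (the generic textbook form, not NiftyReg's implementation):

```python
import numpy as np

def bspline_weights(t):
    """Cubic B-spline basis weights B0..B3 at local coordinate t in [0, 1).
    The four weights always sum to 1."""
    return np.array([
        (1 - t) ** 3 / 6.0,
        (3 * t**3 - 6 * t**2 + 4) / 6.0,
        (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0,
        t**3 / 6.0,
    ])

def ffd_displacement_1d(x, control, spacing):
    """Displacement at position x from a 1-D lattice of control-point
    displacements with uniform spacing; the support spans 4 nodes."""
    i = int(np.floor(x / spacing))
    t = x / spacing - i
    return float(bspline_weights(t) @ control[i - 1 : i + 3])

# A lattice of identical displacements must reproduce that displacement
# everywhere, since the cubic B-spline weights sum to 1.
ctrl = np.full(8, 2.5)
d = ffd_displacement_1d(7.3, ctrl, spacing=2.0)  # ~2.5
```

The computational cost the abstract refers to comes from evaluating this blend (4x4x4 = 64 control points per voxel in 3-D) at every voxel, which is what the re-factoring and parallel-computing work targets.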