Adversarial Deformation Regularization for Training Image Registration Neural Networks
We describe an adversarial learning approach to constrain convolutional
neural network training for image registration, replacing heuristic smoothness
measures of displacement fields often used in these tasks. Using
minimally-invasive prostate cancer intervention as an example application, we
demonstrate the feasibility of utilizing biomechanical simulations to
regularize a weakly-supervised anatomical-label-driven registration network for
aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural
transrectal ultrasound (TRUS) images. A discriminator network is optimized to
distinguish the registration-predicted displacement fields from the motion data
simulated by finite element analysis. During training, the registration network
simultaneously aims to maximize similarity between anatomical labels that
drives image alignment and to minimize an adversarial generator loss that
measures divergence between the predicted- and simulated deformation. The
end-to-end trained network enables efficient and fully-automated registration
that only requires an MR and TRUS image pair as input, without anatomical
labels or simulated data during inference. 108 pairs of labelled MR and TRUS
images from 76 prostate cancer patients and 71,500 nonlinear finite-element
simulations from 143 different patients were used for this study. We show that,
with only gland segmentation as training labels, the proposed method can help
predict physically plausible deformation without any other smoothness penalty.
Based on cross-validation experiments using 834 pairs of independent validation
landmarks, the proposed adversarial-regularized registration achieved a target
registration error of 6.3 mm, significantly lower than those of several other
regularization methods.
Comment: Accepted to MICCAI 201
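The training objective described above can be sketched in a few lines: a label-similarity term (Dice over anatomical labels) that drives alignment, plus a weighted adversarial generator term that penalizes displacement fields the discriminator recognizes as non-simulated. This is a minimal numpy sketch of the idea only; the function names, the non-saturating generator loss, and the weight `alpha` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dice(a, b, eps=1e-6):
    # Soft Dice overlap between two label volumes, in [0, 1].
    inter = np.sum(a * b)
    return (2.0 * inter + eps) / (np.sum(a) + np.sum(b) + eps)

def generator_loss(d_score, eps=1e-6):
    # Non-saturating GAN generator term: small when the discriminator
    # scores the predicted displacement field as "simulated" (near 1).
    return -np.log(d_score + eps)

def registration_loss(warped_label, fixed_label, d_score, alpha=0.1):
    # Label-driven similarity term plus adversarial regularization term.
    return (1.0 - dice(warped_label, fixed_label)) + alpha * generator_loss(d_score)

fixed = np.ones((4, 4, 4))
loss_fooled = registration_loss(fixed, fixed, d_score=0.99)  # discriminator fooled
loss_caught = registration_loss(fixed, fixed, d_score=0.01)  # field flagged implausible
```

A field the discriminator cannot distinguish from finite-element simulations incurs almost no extra penalty, which is what replaces the hand-crafted smoothness term.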
State of the Art of Level Set Methods in Segmentation and Registration of Medical Imaging Modalities
Segmentation of medical images is an important step in various applications such as visualization, quantitative analysis and image-guided surgery. Numerous segmentation methods have been developed in the past two decades for extraction of organ contours on medical images. Low-level segmentation methods, such as pixel-based clustering, region growing, and filter-based edge detection, require additional pre-processing and post-processing as well as considerable amounts of expert intervention or information about the objects of interest. Furthermore, the subsequent analysis of segmented objects is hampered by the primitive, pixel- or voxel-level representations produced by those region-based segmentations. Deformable models, on the other hand, provide an explicit representation of the boundary and the shape of the object. They combine several desirable features such as inherent connectivity and smoothness, which counteract noise and boundary irregularities, as well as the ability to incorporate knowledge about the object of interest. However, parametric deformable models have two main limitations. First, in situations where the initial model and desired object boundary differ greatly in size and shape, the model must be re-parameterized dynamically to faithfully recover the object boundary. Second, they have difficulty dealing with topological adaptation such as splitting or merging model parts, a useful property for recovering either multiple objects or objects with unknown topology. This difficulty arises because a new parameterization must be constructed whenever the topology changes, which requires sophisticated schemes. Level set deformable models, also referred to as geometric deformable models, provide an elegant solution to these primary limitations of parametric deformable models. These methods have drawn a great deal of attention since their introduction in 1988.
Advantages of the implicit contour formulation of the deformable model over the parametric formulation include: (1) no parameterization of the contour, (2) topological flexibility, (3) good numerical stability, and (4) straightforward extension of the 2D formulation to n-D. Recent reviews on the subject include papers by Suri. In this chapter we give a general overview of level set segmentation methods, with emphasis on new frameworks recently introduced in the context of medical imaging problems. We then introduce novel approaches that aim at combining segmentation and registration in a level set formulation. Finally, we review a selective set of clinical works with detailed validation of the level set methods for several clinical applications.
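The level set evolution at the core of these methods can be sketched as an explicit update of an implicit function phi under a speed field, phi_t = -F |grad(phi)|, so the zero level set moves along its normal. A minimal numpy sketch, assuming a signed-distance initialization and a constant outward speed; the grid size, time step, and function names are illustrative.

```python
import numpy as np

def evolve_level_set(phi, speed, dt=0.5, steps=1):
    # Explicit evolution of the implicit contour: phi_t = -speed * |grad(phi)|.
    # Where speed > 0, the zero level set expands outward along its normal.
    for _ in range(steps):
        gy, gx = np.gradient(phi)
        phi = phi - dt * speed * np.sqrt(gx**2 + gy**2)
    return phi

# Signed distance to a circle of radius 10 on a 64x64 grid (negative inside).
y, x = np.mgrid[0:64, 0:64]
phi0 = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 10.0
phi1 = evolve_level_set(phi0, speed=1.0, steps=10)
```

Because the contour is only ever the zero level of phi, splitting or merging needs no re-parameterization: the topological flexibility noted above comes for free.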
Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation
In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is
required for subsurface visualisation to characterise the state of the tissue.
However, scanning of large tissue surfaces in the presence of deformation is a
challenging task for the surgeon. Recently, robot-assisted local tissue
scanning has been investigated for motion stabilisation of imaging probes to
facilitate the capturing of good quality images and reduce the surgeon's
cognitive load. Nonetheless, these approaches require the tissue surface to be
static or deform with periodic motion. To eliminate these assumptions, we
propose a visual servoing framework for autonomous tissue scanning, able to
deal with free-form tissue deformation. The 3D structure of the surgical scene
is recovered and a feature-based method is proposed to estimate the motion of
the tissue in real-time. A desired scanning trajectory is manually defined on a
reference frame and continuously updated using projective geometry to follow
the tissue motion and control the movement of the robotic arm. The advantage of
the proposed method is that it does not require the learning of the tissue
motion prior to scanning and can deal with free-form deformation. We deployed
this framework on the da Vinci surgical robot using the da Vinci Research Kit
(dVRK) for Ultrasound tissue scanning. Since the framework does not rely on
information from the Ultrasound data, it can be easily extended to other
probe-based imaging modalities.
Comment: 7 pages, 5 figures, ICRA 202
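The trajectory update via projective geometry can be illustrated by mapping the scanning waypoints through a homography in homogeneous coordinates. A minimal numpy sketch; the pure-translation homography and the function name are illustrative assumptions, not the paper's motion estimator.

```python
import numpy as np

def update_trajectory(points, H):
    # Map 2-D scanning trajectory points through a 3x3 homography H so the
    # desired trajectory follows the estimated tissue motion.
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                    # back to Euclidean

traj = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
shift = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, -0.2],
                  [0.0, 0.0, 1.0]])  # pure-translation homography, for illustration
moved = update_trajectory(traj, shift)
```

In the framework above, H would be re-estimated continuously from the tracked tissue features, so the robot's reference trajectory deforms with the tissue rather than assuming a static or periodic surface.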
SearchMorph: Multi-scale Correlation Iterative Network for Deformable Registration
Deformable image registration can obtain dynamic information about images,
which is of great significance in medical image analysis. The unsupervised deep
learning registration method can quickly achieve high registration accuracy
without labels. However, these methods generally suffer from uncorrelated
features, poor ability to register large deformations and details, and
unnatural deformation fields. To address the issues above, we propose an
unsupervised multi-scale correlation iterative registration network
(SearchMorph). In the proposed network, we introduce a correlation layer to
strengthen the relevance between features and construct a correlation pyramid
to provide multi-scale relevance information for the network. We also design a
deformation field iterator, which improves the ability of the model to register
details and large deformations through the search module and GRU while ensuring
that the deformation field is realistic. We use single-temporal brain MR images
and multi-temporal echocardiographic sequences to evaluate the model's ability
to register large deformations and details. The experimental results
demonstrate that the proposed method achieves the highest registration
accuracy and the lowest folding-point ratio, with a shorter elapsed time than
state-of-the-art methods.
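A correlation layer of the kind described can be sketched as an all-pairs dot product between two feature maps over a local search window. A minimal numpy sketch (single scale, no learned features, no pyramid or GRU); the window size and names are illustrative.

```python
import numpy as np

def correlation_layer(f1, f2, max_disp=2):
    # For each pixel of f1, dot-product its feature vector with f2's
    # features over a (2*max_disp+1)^2 displacement window, producing a
    # local correlation volume that relates the two images.
    H, W, C = f1.shape
    d = max_disp
    f2p = np.pad(f2, ((d, d), (d, d), (0, 0)))  # zero-pad the search borders
    corr = np.zeros((H, W, 2 * d + 1, 2 * d + 1))
    for di in range(2 * d + 1):
        for dj in range(2 * d + 1):
            corr[:, :, di, dj] = np.sum(f1 * f2p[di:di + H, dj:dj + W], axis=-1)
    return corr

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8, 4))
f /= np.linalg.norm(f, axis=-1, keepdims=True)  # unit-norm features
corr = correlation_layer(f, f)  # identical images: peak at zero displacement
```

Stacking such volumes at several resolutions yields the correlation pyramid that supplies the multi-scale relevance information mentioned above.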
Meta-Learning Initializations for Interactive Medical Image Registration
We present a meta-learning framework for interactive medical image
registration. Our proposed framework comprises three components: a
learning-based medical image registration algorithm, a form of user interaction
that refines registration at inference, and a meta-learning protocol that
learns a rapidly adaptable network initialization. This paper describes a
specific algorithm that implements the registration, interaction and
meta-learning protocol for our exemplar clinical application: registration of
magnetic resonance (MR) imaging to interactively acquired, sparsely-sampled
transrectal ultrasound (TRUS) images. Our approach obtains comparable
registration error (4.26 mm) to the best-performing non-interactive
learning-based 3D-to-3D method (3.97 mm) while requiring only a fraction of the
data and running in real time during acquisition. Applying sparsely sampled
data to non-interactive methods yields higher registration errors (6.26 mm),
demonstrating the effectiveness of interactive MR-TRUS registration, which may
be applied intraoperatively given the real-time nature of the adaptation
process.
Comment: 11 pages, 10 figures. Paper accepted to IEEE Transactions on Medical Imaging (October 26, 2022).
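The meta-learning protocol, learning an initialization that adapts rapidly after a few inner gradient steps, can be illustrated on a toy one-parameter problem where the meta-gradient is available in closed form. A minimal MAML-style sketch in numpy; the quadratic task loss, learning rates, and names are illustrative, not the paper's network or objective.

```python
import numpy as np

def inner_adapt(w, task, lr_inner=0.1):
    # One inner gradient step on the task loss L(w) = (w - task)^2,
    # i.e. the user-interaction-time refinement.
    return w - lr_inner * 2.0 * (w - task)

def meta_train(tasks, w0=0.0, lr_inner=0.1, lr_meta=0.05, steps=500):
    # MAML-style outer loop: update the initialization w so that ONE inner
    # step performs well on every task. For this toy loss,
    # inner_adapt(w, t) - t = (1 - 2*lr_inner) * (w - t), so the
    # meta-gradient of mean_t (inner_adapt(w, t) - t)^2 is closed-form.
    w = w0
    c = (1.0 - 2.0 * lr_inner) ** 2
    for _ in range(steps):
        grad = np.mean([2.0 * c * (w - t) for t in tasks])
        w -= lr_meta * grad
    return w

w_meta = meta_train([1.0, 2.0, 3.0])  # converges toward the task mean
```

From the meta-learned initialization, a single adaptation step lands far closer to a new task's optimum than the same step from a naive start, which is the property that makes real-time interactive refinement feasible.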
Multiscale medial shape-based analysis of image objects
Pre-print
Medial representation of a three-dimensional (3-D) object or an ensemble of 3-D objects involves capturing the object interior as a locus of medial atoms, each atom being two vectors of equal length joined at the tail at the medial point. Medial representation has a variety of beneficial properties, among the most important of which are: 1) its inherent geometry provides an object-intrinsic coordinate system and thus provides correspondence between instances of the object in and near the object(s); 2) it captures the object interior and is, thus, very suitable for deformation; and 3) it provides the basis for an intuitive object-based multiscale sequence leading to efficiency of segmentation algorithms and trainability of statistical characterizations with limited training sets. As a result of these properties, medial representation is particularly suitable for the following image analysis tasks (how each operates will be described and illustrated by results): 1) segmentation of objects and object complexes via deformable models; 2) segmentation of tubular trees, e.g., of blood vessels, by following height ridges of measures of fit of medial atoms to target images; 3) object-based image registration via medial loci of such blood vessel trees; and 4) statistical characterization of shape differences between control and pathological classes of structures. These analysis tasks are made possible by a new form of medial representation called m-reps, which is described.
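The medial atom described above, a medial point with two equal-length spoke vectors joined at the tail, maps naturally onto a small data structure. A minimal sketch, assuming unit spoke directions scaled by a common radius; the class and attribute names are illustrative, not the m-reps implementation.

```python
import numpy as np

class MedialAtom:
    # A medial atom: medial point p, common spoke length r, and two unit
    # spoke directions u1, u2 whose tips are the implied boundary points.
    def __init__(self, p, r, u1, u2):
        self.p = np.asarray(p, dtype=float)
        self.r = float(r)
        self.u1 = np.asarray(u1, dtype=float) / np.linalg.norm(u1)
        self.u2 = np.asarray(u2, dtype=float) / np.linalg.norm(u2)

    def boundary_points(self):
        # The two implied boundary points, each at distance r from p, which
        # is what gives the representation its object-intrinsic coordinates.
        return self.p + self.r * self.u1, self.p + self.r * self.u2

atom = MedialAtom(p=[0.0, 0.0, 0.0], r=2.0,
                  u1=[1.0, 1.0, 0.0], u2=[1.0, -1.0, 0.0])
b1, b2 = atom.boundary_points()
```

A mesh of such atoms captures the interior directly, so deforming the object is just moving atoms and re-emitting the boundary from their spokes.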
LV Volume Quantification via Spatiotemporal Analysis of Real-Time 3-D Echocardiography
This paper presents a method of four-dimensional (4-D) (3-D+Time) space-frequency analysis for directional denoising and enhancement of real-time three-dimensional (RT3D) ultrasound and quantitative measures in diagnostic cardiac ultrasound. Expansion of echocardiographic volumes is performed with complex exponential wavelet-like basis functions called brushlets. These functions offer good localization in time and frequency and decompose a signal into distinct patterns of oriented harmonics, which are invariant to intensity and contrast range. Deformable-model segmentation is carried out on denoised data after thresholding of transform coefficients. This process attenuates speckle noise while preserving cardiac structure location. The superiority of 4-D over 3-D analysis for decorrelating additive white noise and multiplicative speckle noise on a 4-D phantom volume expanding in time is demonstrated. Quantitative validation, computed for contours and volumes, is performed on in vitro balloon phantoms. Clinical applications of this spatiotemporal analysis tool are reported for six patient cases, providing measures of left ventricular volumes and ejection fraction.
Medical image registration using unsupervised deep neural network: A scoping literature review
In medicine, image registration is vital in image-guided interventions and
other clinical applications. However, it is a difficult problem; with the
advent of machine learning, considerable progress in algorithmic performance
has recently been achieved for medical image registration. Deep neural
networks offer the opportunity, in some medical applications, to conduct
image registration in less time with high accuracy, playing a key role in
treating tumors during the operation. The current study presents a
comprehensive scoping review of the state-of-the-art literature on medical
image registration based on unsupervised deep neural networks, encompassing
all related studies published in this field to date. Here, we summarize the
latest developments and applications of unsupervised deep learning-based
registration methods in the medical field. Fundamental concepts, techniques,
statistical analyses from different viewpoints, novelties, and future
directions are discussed in detail. We hope this review helps readers
interested in this field gain deep insight into it.