To Learn or Not to Learn Features for Deformable Registration?
Feature-based registration has been popular with a variety of features
ranging from voxel intensity to Self-Similarity Context (SSC). In this paper,
we examine how features learnt using various Deep Learning (DL)
frameworks can be used for deformable registration, and whether this feature
learning is necessary at all. We investigate the use of features learned by
different DL methods in the current state-of-the-art discrete registration
framework and analyze its performance on 2 publicly available datasets. We draw
insights into the type of DL framework useful for feature learning and the
impact, if any, of the complexity of different DL models and brain parcellation
methods on the performance of discrete registration. Our results indicate that
registration performance with DL features and with SSC is comparable and stable
across datasets, whereas this does not hold for low-level features.
Comment: 9 pages, 4 figures
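The data term of a discrete registration framework is largely agnostic to where the per-voxel features come from, whether raw intensity, SSC, or a learned DL embedding; that is what makes the comparison above possible. A minimal sketch of such a data term, assuming dense feature maps of shape (channels, H, W) and with `data_cost` and the displacement list as hypothetical names, might look like:

```python
import numpy as np

def data_cost(feat_fixed, feat_moving, disps):
    """Per-voxel dissimilarity (sum of squared feature differences)
    between the fixed image's features and the moving image's features
    shifted by each candidate displacement label -- the data term of a
    discrete registration, independent of how the features were obtained."""
    C, H, W = feat_fixed.shape
    costs = np.empty((len(disps), H, W))
    for k, (dy, dx) in enumerate(disps):
        # apply candidate displacement (dy, dx) to the moving features
        shifted = np.roll(feat_moving, (dy, dx), axis=(1, 2))
        costs[k] = ((feat_fixed - shifted) ** 2).sum(axis=0)
    return costs  # shape (labels, H, W); the optimiser picks one label per voxel
```

Swapping SSC descriptors for a DL embedding only changes `feat_fixed` and `feat_moving`; the cost volume handed to the discrete optimiser keeps the same shape.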
Symmetric image registration with directly calculated inverse deformation field
This paper presents a novel technique for symmetric deformable image registration based on a new method for fast and accurate direct inversion of a large-motion-model deformation field. The proposed algorithm maintains a one-to-one mapping between the registered images by symmetrically warping them to each other and by enforcing the inverse consistency criterion at each iteration. This makes the final estimates of the forward and backward deformation fields anatomically plausible. The method has been validated quantitatively on magnetic resonance data of the pelvic area, demonstrating its applicability to adaptive prostate radiotherapy. The experiments show improved robustness in terms of inverse consistency error compared to previously proposed methods for symmetric image registration.
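The abstract does not spell out the direct inversion formula, so as context only: a widely used baseline for inverting a dense deformation field is the fixed-point iteration v(x) = -u(x + v(x)), against which direct methods are typically compared. A minimal sketch for a 2-D field, with `invert_field` as a hypothetical name:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def invert_field(u, n_iter=30):
    """Approximate the inverse v of a dense 2-D displacement field u
    (shape (2, H, W)) via the fixed-point iteration v(x) = -u(x + v(x)).
    The inverse consistency error is then ||u(x + v(x)) + v(x)||."""
    v = np.zeros_like(u)
    grid = np.mgrid[0:u.shape[1], 0:u.shape[2]].astype(float)
    for _ in range(n_iter):
        coords = grid + v  # sample u at the displaced positions x + v(x)
        u_warp = np.stack([
            map_coordinates(u[c], coords, order=1, mode='nearest')
            for c in range(2)
        ])
        v = -u_warp
    return v
```

A symmetric scheme would run such an inversion (or a direct one, as in the paper) at every iteration to keep the forward and backward fields consistent.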
Precise localization for aerial inspection using augmented reality markers
The final publication is available at link.springer.com. This chapter is devoted to explaining a method for precise localization using augmented reality markers. The method achieves a precision better than 5 mm in position at a distance of 0.7 m, using a visual marker of 17 mm × 17 mm, and it can be used by the controller while the aerial robot performs a manipulation task. The localization method is based on optimizing the alignment of deformable contours from textureless images, working from the raw vertices of the observed contour. The algorithm optimizes the alignment of the XOR area, computed by means of computer-graphics clipping techniques, and can run at 25 frames per second.
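The XOR area above is the area of the symmetric difference of the two contours, which is zero exactly when they coincide. The chapter computes it with polygon clipping; a simple rasterization-based stand-in (not the authors' method, and with `rasterize`, `xor_area`, and the grid size as hypothetical names) illustrates the cost being minimised:

```python
import numpy as np

def rasterize(poly, shape):
    """Boolean mask of the pixels whose centres lie inside polygon `poly`
    (an N x 2 array of (x, y) vertices), via even-odd ray casting."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    px, py = xs + 0.5, ys + 0.5
    inside = np.zeros(shape, dtype=bool)
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        crosses = (y0 <= py) != (y1 <= py)           # edge spans the scanline
        x_int = x0 + (py - y0) * (x1 - x0) / (y1 - y0 + 1e-12)
        inside ^= crosses & (px < x_int)             # toggle parity per crossing
    return inside

def xor_area(poly_a, poly_b, shape=(64, 64)):
    """Pixel-count approximation of the XOR (symmetric-difference) area
    used as the contour-alignment cost: zero iff the contours coincide."""
    return int(np.count_nonzero(rasterize(poly_a, shape) ^ rasterize(poly_b, shape)))
```

Clipping-based computation, as in the chapter, gives the exact polygonal area and is what allows the 25 fps rate; rasterization trades accuracy for simplicity.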
A Combinatorial Solution to Non-Rigid 3D Shape-to-Image Matching
We propose a combinatorial solution for the problem of non-rigidly matching a
3D shape to 3D image data. To this end, we model the shape as a triangular mesh
and allow each triangle of this mesh to be rigidly transformed to achieve a
suitable matching to the image. By penalising the distance and the relative
rotation between neighbouring triangles our matching compromises between image
and shape information. In this paper, we resolve two major challenges: Firstly,
we address the resulting large and NP-hard combinatorial problem with a
suitable graph-theoretic approach. Secondly, we propose an efficient
discretisation of the unbounded 6-dimensional Lie group SE(3). To our knowledge,
this is the first combinatorial formulation for non-rigid 3D shape-to-image
matching. In contrast to existing local (gradient descent) optimisation
methods, we obtain solutions that do not require a good initialisation and that
are within a bound of the optimal solution. We evaluate the proposed method on
the two problems of non-rigid 3D shape-to-shape and non-rigid 3D shape-to-image
registration and demonstrate that it provides promising results.
Comment: 10 pages, 7 figures
A Deep Learning Framework for Unsupervised Affine and Deformable Image Registration
Image registration, the process of aligning two or more images, is the core
technique of many (semi-)automatic medical image analysis tasks. Recent studies
have shown that deep learning methods, notably convolutional neural networks
(ConvNets), can be used for image registration. Thus far, training of ConvNets
for registration has been supervised using predefined example registrations.
However, obtaining example registrations is not trivial. To circumvent the need
for predefined examples, and thereby to increase convenience of training
ConvNets for image registration, we propose the Deep Learning Image
Registration (DLIR) framework for \textit{unsupervised} affine and deformable
image registration. In the DLIR framework ConvNets are trained for image
registration by exploiting image similarity analogous to conventional
intensity-based image registration. After a ConvNet has been trained with the
DLIR framework, it can be used to register pairs of unseen images in one shot.
We propose flexible ConvNet designs for affine image registration and for
deformable image registration. By stacking multiple of these ConvNets into a
larger architecture, we are able to perform coarse-to-fine image registration.
We show, for registration of cardiac cine MRI and of chest CT, that the
performance of the DLIR framework is comparable to conventional image
registration while being several orders of magnitude faster.
Comment: Accepted at Medical Image Analysis, Elsevier
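The key ingredient that removes the need for example registrations is a differentiable, intensity-based similarity between the fixed image and the warped moving image, used directly as the training loss. A minimal sketch of one common choice, negative normalised cross-correlation (the exact loss used in DLIR may differ; `warp` and `ncc_loss` are hypothetical names):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(img, disp):
    """Warp a 2-D image by a dense displacement field of shape (2, H, W)."""
    grid = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    return map_coordinates(img, grid + disp, order=1, mode='nearest')

def ncc_loss(fixed, moving, disp):
    """Negative normalised cross-correlation between the fixed image and
    the warped moving image -- the kind of unsupervised, intensity-based
    similarity a DLIR-style ConvNet can be trained to minimise, with no
    predefined example registrations required."""
    w = warp(moving, disp)
    f = fixed - fixed.mean()
    m = w - w.mean()
    return -(f * m).sum() / (np.linalg.norm(f) * np.linalg.norm(m) + 1e-8)
```

In the DLIR setting the ConvNet predicts `disp` (affine or deformable) from the image pair, and this loss (perfect alignment approaches -1) is backpropagated through the differentiable warp; at test time a trained network registers an unseen pair in one forward pass.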