16,284 research outputs found
A Deep Learning Framework for Unsupervised Affine and Deformable Image Registration
Image registration, the process of aligning two or more images, is the core
technique of many (semi-)automatic medical image analysis tasks. Recent studies
have shown that deep learning methods, notably convolutional neural networks
(ConvNets), can be used for image registration. Thus far, training of ConvNets
for registration has been supervised using predefined example registrations.
However, obtaining example registrations is not trivial. To circumvent the need
for predefined examples, and thereby to increase convenience of training
ConvNets for image registration, we propose the Deep Learning Image
Registration (DLIR) framework for \textit{unsupervised} affine and deformable
image registration. In the DLIR framework ConvNets are trained for image
registration by exploiting image similarity analogous to conventional
intensity-based image registration. After a ConvNet has been trained with the
DLIR framework, it can be used to register pairs of unseen images in one shot.
We propose flexible ConvNet designs for affine image registration and for
deformable image registration. By stacking multiple of these ConvNets into a
larger architecture, we are able to perform coarse-to-fine image registration.
We show for registration of cardiac cine MRI and registration of chest CT that
performance of the DLIR framework is comparable to conventional image
registration while being several orders of magnitude faster.
Comment: Accepted: Medical Image Analysis - Elsevier
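The unsupervised training signal described above, optimizing an intensity-based similarity instead of predefined example registrations, can be illustrated with a minimal sketch. Normalized cross-correlation stands in for the similarity metric, and a toy integer translation stands in for the ConvNet-predicted transformation; all names below are illustrative, not the DLIR implementation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation, the kind of intensity-based
    similarity an unsupervised registration loss can maximize."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def warp_translate(img, dx, dy):
    """Toy 'deformation': integer translation via np.roll, a hypothetical
    stand-in for applying a ConvNet-predicted displacement."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

rng = np.random.default_rng(0)
fixed = rng.normal(size=(32, 32))
moving = warp_translate(fixed, 3, 0)  # misaligned copy of the fixed image

# Exhaustive search over translations stands in for gradient descent
# on ConvNet weights: pick the transform that maximizes similarity.
best = max(((ncc(fixed, warp_translate(moving, -dx, 0)), dx) for dx in range(8)))
print(best[1])  # recovered shift: 3
```

In the actual framework the search over transformations is replaced by optimizing ConvNet weights, so a trained network registers unseen image pairs in one shot.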
Registration and Fusion of Multi-Spectral Images Using a Novel Edge Descriptor
In this paper we introduce a fully end-to-end approach for multi-spectral
image registration and fusion. Our method for fusion combines images from
different spectral channels into a single fused image by different approaches
for low and high frequency signals. A prerequisite of fusion is a stage of
geometric alignment between the spectral bands, commonly referred to as
registration. Unfortunately, common methods for image registration of a single
spectral channel do not yield reasonable results on images from different
modalities. To that end, we introduce a new algorithm for multi-spectral image
registration, based on a novel edge descriptor of feature points. Our method
achieves an accurate alignment of a level that allows us to further fuse the
images. As our experiments show, our method produces high-quality multi-spectral
image registration and fusion under many challenging scenarios.
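The key difficulty named above is that raw intensities disagree across spectral bands while edge structure is shared. A minimal sketch of an edge-based descriptor with that property folds gradient orientation modulo pi, so a contrast reversal between bands leaves the descriptor unchanged; the paper's actual descriptor differs, and every name here is hypothetical:

```python
import numpy as np

def edge_descriptor(img, y, x, r=4, nbins=8):
    """Hypothetical edge descriptor: magnitude-weighted histogram of
    gradient orientations in a patch, with orientation folded mod pi so
    contrast reversals between spectral bands do not change it."""
    patch = img[y - r:y + r, x - r:x + r]
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # keep edge direction, drop sign
    hist, _ = np.histogram(ang, bins=nbins, range=(0, np.pi), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

rng = np.random.default_rng(1)
band_a = rng.normal(size=(32, 32))
band_b = -band_a  # contrast-inverted "other modality"

d1 = edge_descriptor(band_a, 16, 16)
d2 = edge_descriptor(band_b, 16, 16)
print(round(float(d1 @ d2), 3))  # near 1: descriptor survives the inversion
```

Matching such descriptors at feature points, rather than raw intensities, is what makes cross-band alignment feasible.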
Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database
Radiologists in their daily work routinely find and annotate significant
abnormalities on a large number of radiology images. Such abnormalities, or
lesions, have been collected over the years and stored in hospitals' picture
archiving and communication systems. However, they are largely unsorted and lack
semantic annotations like type and location. In this paper, we aim to organize
and explore them by learning a deep feature representation for each lesion. A
large-scale and comprehensive dataset, DeepLesion, is introduced for this task.
DeepLesion contains bounding boxes and size measurements of over 32K lesions.
To model their similarity relationship, we leverage multiple supervision
information including types, self-supervised location coordinates and sizes.
They require little manual annotation effort but describe useful attributes of
the lesions. Then, a triplet network is utilized to learn lesion embeddings
with a sequential sampling strategy to depict their hierarchical similarity
structure. Experiments show promising qualitative and quantitative results on
lesion retrieval, clustering, and classification. The learned embeddings can be
further employed to build a lesion graph for various clinically useful
applications. We propose algorithms for intra-patient lesion matching and
missing annotation mining. Experimental results validate their effectiveness.
Comment: Accepted by CVPR 2018. DeepLesion URL added
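The triplet supervision mentioned above can be sketched generically: an anchor lesion is pulled toward a similar lesion and pushed away from a dissimilar one by at least a margin. The margin value and toy embeddings below are illustrative, and the paper's sequential sampling strategy is not reproduced:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors: pull
    similar lesions together, push dissimilar ones apart."""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to negative
    return max(0.0, float(d_pos - d_neg + margin))

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])  # similar lesion (e.g. same type, nearby location)
n = np.array([3.0, 0.0])  # dissimilar lesion
print(triplet_loss(a, p, n))  # 0.0: this triplet already satisfies the margin
```

Supervision signals such as type, location, and size decide which lesions serve as positives and negatives, which is why little manual annotation is needed.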
Registration of Standardized Histological Images in Feature Space
In this paper, we propose three novel and important methods for the
registration of histological images for 3D reconstruction. First, possible
intensity variations and nonstandardness in images are corrected by an
intensity standardization process which maps the image scale onto a standard
scale in which similar intensities correspond to similar tissues.
Second, 2D histological images are mapped into a feature space where continuous
variables are used as high confidence image features for accurate registration.
Third, we propose an automatic best reference slice selection algorithm that
improves reconstruction quality based on both image entropy and mean square
error of the registration process. We demonstrate that the choice of reference
slice has a significant impact on registration error, standardization, feature
space and entropy information. After 2D histological slices are registered
through an affine transformation with respect to an automatically chosen
reference, the 3D volume is reconstructed by co-registering 2D slices
elastically.
Comment: SPIE Medical Imaging 2008 - submission
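The reference-slice selection above scores candidates by image entropy and by the registration error they induce. A toy scoring along those lines, in which the weighting of the two terms is an assumption and not the paper's exact formula, might look like:

```python
import numpy as np

def entropy(img, nbins=32):
    """Shannon entropy of the intensity histogram, in bits."""
    hist, _ = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pick_reference(slices):
    """Toy criterion: favour slices with high entropy and low mean squared
    error to the other slices (equal weighting is an assumption here)."""
    scores = []
    for i, s in enumerate(slices):
        mse = np.mean([np.mean((s - t) ** 2)
                       for j, t in enumerate(slices) if j != i])
        scores.append(entropy(s) - mse)
    return int(np.argmax(scores))

rng = np.random.default_rng(2)
base = rng.normal(size=(16, 16))
# Four mutually consistent slices plus one outlier that registers poorly.
slices = [base + 0.1 * rng.normal(size=base.shape) for _ in range(4)]
slices.append(base + 2.0 * rng.normal(size=base.shape))
print(pick_reference(slices) != 4)  # the outlier is not chosen as reference
```

A poorly chosen reference propagates its error into every pairwise registration, which is why the selection measurably affects reconstruction quality.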