Automated multimodal volume registration based on supervised 3D anatomical landmark detection
We propose a new method for automatic 3D multimodal registration based on anatomical landmark detection. Landmark detectors are learned independently in the two imaging modalities using Extremely Randomized Trees and multi-resolution voxel windows. A least-squares fitting algorithm is then used for rigid registration based on the landmark positions predicted by these detectors in the two imaging modalities. Experiments are carried out with this method on a dataset of pelvis CT and CBCT scans from 45 patients. On this dataset, our fully automatic approach yields results competitive with a manually assisted state-of-the-art rigid registration algorithm.
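The least-squares rigid fitting step described above can be sketched with the standard SVD-based (Kabsch) solution. This is a minimal illustration, not the paper's implementation; the function name and the synthetic landmark data are our own:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping landmark set P onto Q.

    P, Q: (N, 3) arrays of corresponding landmark positions, one row per
    landmark (e.g. as predicted by the detectors in each modality).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)        # centroids
    H = (P - cP).T @ (Q - cQ)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # correct for a possible reflection so R is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Toy check: recover a known rotation/translation from noiseless landmarks.
rng = np.random.default_rng(0)
P = rng.normal(size=(6, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_fit(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

With noisy, real landmark predictions the fit is only approximate, but the same closed-form solution minimises the sum of squared residuals.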
Keypoint Transfer for Fast Whole-Body Segmentation
We introduce an approach for image segmentation based on sparse
correspondences between keypoints in testing and training images. Keypoints
represent automatically identified distinctive image locations, where each
keypoint correspondence suggests a transformation between images. We use these
correspondences to transfer label maps of entire organs from the training
images to the test image. The keypoint transfer algorithm includes three steps:
(i) keypoint matching, (ii) voting-based keypoint labeling, and (iii)
keypoint-based probabilistic transfer of organ segmentations. We report
segmentation results for abdominal organs in whole-body CT and MRI, as well as
in contrast-enhanced CT and MRI. Our method offers a speed-up of about three
orders of magnitude in comparison to common multi-atlas segmentation, while
achieving an accuracy that compares favorably. Moreover, keypoint transfer does
not require the registration to an atlas or a training phase. Finally, the
method allows for the segmentation of scans with highly variable field-of-view.
Comment: Accepted for publication at IEEE Transactions on Medical Imaging
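Steps (i) and (ii) of the keypoint transfer algorithm can be sketched as nearest-neighbour descriptor matching followed by majority voting. This is a simplified illustration under our own assumptions (brute-force matching, k-nearest-neighbour vote, synthetic descriptors), not the authors' exact formulation:

```python
import numpy as np

def label_keypoints(test_desc, train_desc, train_labels, k=5):
    """(i) match each test keypoint to its k nearest training keypoints
    by descriptor distance, then (ii) label it by majority vote.

    test_desc:    (M, D) descriptors of test-image keypoints
    train_desc:   (N, D) descriptors of training-image keypoints
    train_labels: (N,) organ label of each training keypoint
    """
    # (i) brute-force descriptor matching
    d = np.linalg.norm(test_desc[:, None, :] - train_desc[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    # (ii) voting-based labeling: most frequent label among the k matches
    voted = []
    for row in train_labels[nn]:
        vals, counts = np.unique(row, return_counts=True)
        voted.append(vals[np.argmax(counts)])
    return np.array(voted)

# Toy check with two well-separated descriptor clusters (labels 1 and 2).
train_desc = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 5.1]])
train_labels = np.array([1, 1, 2, 2])
test_desc = np.array([[0.05, 0.05], [5.05, 5.05]])
assert label_keypoints(test_desc, train_desc, train_labels, k=2).tolist() == [1, 2]
```

Step (iii), the probabilistic transfer of organ segmentations, would then propagate each matched keypoint's label map into the test image, weighted by these votes.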
Towards multiple 3D bone surface identification and reconstruction using few 2D X-ray images for intraoperative applications
This article discusses a possible method to use a small number, e.g. 5, of conventional 2D X-ray images to reconstruct multiple 3D bone surfaces intraoperatively. Each bone's edge contours in the X-ray images are automatically identified. Sparse 3D landmark points of each bone are automatically reconstructed by pairing the 2D X-ray images. The reconstructed landmark point distribution on a surface is approximately optimal, covering the main characteristics of the surface. A statistical shape model, the dense point distribution model (DPDM), is then used to fit the reconstructed optimal landmark vertices and reconstruct a full surface of each bone separately. The reconstructed surfaces can then be visualised and manipulated by surgeons or used by surgical robotic systems.
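Reconstructing a 3D landmark from a pair of 2D X-ray views, as described above, is commonly done by linear (DLT) triangulation when the projection matrices are known. The sketch below assumes calibrated views; it illustrates generic two-view triangulation, not the article's specific pairing procedure:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D landmark from two views.

    P1, P2: (3, 4) projection matrices of two calibrated X-ray views.
    x1, x2: (2,) pixel coordinates of the same landmark in each image.
    """
    # each observation contributes two homogeneous linear constraints on X
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # least-squares null vector of A via SVD
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenise

# Toy check: two views related by a pure translation along x.
X_true = np.array([1.0, 2.0, 3.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])
x1 = np.array([1.0 / 3.0, 2.0 / 3.0])
x2 = np.array([2.0 / 3.0, 2.0 / 3.0])
assert np.allclose(triangulate(P1, P2, x1, x2), X_true)
```

With more than two views, the extra rows are simply stacked into the same homogeneous system.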
2D Reconstruction of Small Intestine's Interior Wall
Examining and interpreting a large number of wireless endoscopic images
from the gastrointestinal tract is a tiresome task for physicians. A practical
solution is to automatically construct a two dimensional representation of the
gastrointestinal tract for easy inspection. However, little has been done on
wireless endoscopic image stitching, let alone systematic investigation. The
proposed new wireless endoscopic image stitching method consists of two main
steps to improve the accuracy and efficiency of image registration. First, the
keypoints are extracted by Principal Component Analysis and Scale Invariant
Feature Transform (PCA-SIFT) algorithm and refined with Maximum Likelihood
Estimation SAmple Consensus (MLESAC) outlier removal to find the most reliable
keypoints. Second, the optimal transformation parameters obtained from the first
step are fed to the Normalised Mutual Information (NMI) algorithm as an initial
solution. With a modified Marquardt-Levenberg search strategy in a multiscale
framework, the NMI can find the optimal transformation parameters in the
shortest time. The proposed methodology has been tested on two different
datasets - one with real wireless endoscopic images and another with images
obtained from Micro-Ball (a new wireless cubic endoscopy system with six image
sensors). The results have demonstrated the accuracy and robustness of the
proposed methodology both visually and quantitatively.
Comment: Journal draft
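The Normalised Mutual Information criterion used in the second registration step above can be written as NMI(A, B) = (H(A) + H(B)) / H(A, B), which is maximal when the images are aligned. The sketch below is a minimal histogram-based estimate of this similarity measure; the bin count and the synthetic test images are our own assumptions, not from the paper:

```python
import numpy as np

def nmi(a, b, bins=32):
    """Normalised mutual information (H(A)+H(B))/H(A,B) between two
    equally sized images; higher values indicate better alignment."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint intensity distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)    # marginals
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))    # joint entropy
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (h_x + h_y) / h_xy

# Toy check: an image is maximally similar to itself (NMI = 2) and less
# similar to a shuffled copy of itself.
rng = np.random.default_rng(1)
a = rng.normal(size=(64, 64))
b = rng.permutation(a.ravel()).reshape(a.shape)
assert abs(nmi(a, a) - 2.0) < 1e-9
assert nmi(a, a) > nmi(a, b)
```

In the paper's pipeline this score is the objective that the modified Marquardt-Levenberg search optimises over the transformation parameters, starting from the PCA-SIFT/MLESAC initial solution.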
Atlas-Based Prostate Segmentation Using a Hybrid Registration
Purpose: This paper presents the preliminary results of a semi-automatic
method for prostate segmentation of Magnetic Resonance Images (MRI) which aims
to be incorporated in a navigation system for prostate brachytherapy. Methods:
The method is based on the registration of an anatomical atlas computed from a
population of 18 MRI exams onto a patient image. A hybrid registration
framework which couples an intensity-based registration with a robust
point-matching algorithm is used for both atlas building and atlas
registration. Results: The method has been validated, using the
"leave-one-out" method, on the same dataset as the one used to construct the
atlas. It gives a mean error of 3.39 mm and a standard deviation of 1.95 mm with respect
to expert segmentations. Conclusions: We think that this segmentation tool may
be a valuable aid to the clinician for routine quantitative image
exploitation.
Comment: International Journal of Computer Assisted Radiology and Surgery
(2008) 000-99
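The "leave-one-out" validation protocol above (each of the 18 exams is in turn excluded from atlas construction and used as the test case) can be sketched generically. The `build_atlas`, `register`, and `error` callables below are hypothetical placeholders for the paper's atlas-construction, hybrid-registration, and distance-to-expert-segmentation routines:

```python
import numpy as np

def leave_one_out_errors(subjects, build_atlas, register, error):
    """Leave-one-out validation: for each subject, build the atlas from
    all remaining subjects, register it to the held-out one, and measure
    the resulting segmentation error."""
    errs = []
    for i, test in enumerate(subjects):
        train = subjects[:i] + subjects[i + 1:]   # exclude the test subject
        atlas = build_atlas(train)
        seg = register(atlas, test)
        errs.append(error(seg, test))
    return np.mean(errs), np.std(errs)

# Toy check with scalar "subjects" and trivial placeholder routines.
subjects = [0.0, 1.0, 2.0, 3.0]
mean_err, std_err = leave_one_out_errors(
    subjects,
    build_atlas=lambda train: sum(train) / len(train),  # toy "atlas" = mean
    register=lambda atlas, test: atlas,                 # toy "registration"
    error=lambda seg, test: abs(seg - test),
)
assert abs(mean_err - 4.0 / 3.0) < 1e-9
assert abs(std_err - 2.0 / 3.0) < 1e-9
```

The reported 3.39 mm mean and 1.95 mm standard deviation are exactly the two statistics this loop computes, with the expert segmentations as ground truth.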
A statistical shape model for deformable surface
This short paper presents a deformable surface registration scheme which is based on the statistical shape
modelling technique. The method consists of two major processing stages, model building and model
fitting. A statistical shape model is first built using a set of training data. Then the model is deformed and
matched to the new data by a modified iterative closest point (ICP) registration process. The proposed
method is tested on real 3-D facial data from the BU-3DFE database. It is shown
that the proposed method can achieve a reasonable result on surface
registration. It can be used for patient position monitoring in radiation
therapy and, potentially, for monitoring radiation therapy progress for head
and neck patients through analysis of facial articulation.
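The iterative closest point process in the model-fitting stage alternates between pairing each model point with its nearest data point and solving the best rigid transform for those pairs. The paper uses a modified ICP whose details are not given here; the sketch below is only the vanilla point-to-point loop, with a synthetic point cloud of our own:

```python
import numpy as np

def icp(source, target, iters=20):
    """Vanilla point-to-point ICP: repeatedly match each source point to
    its nearest target point, then apply the best rigid transform."""
    src = source.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force)
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        # best rigid transform for the current pairs (SVD / Kabsch step)
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (matched - cm))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        src = (src - cs) @ R.T + cm
    return src

# Toy check: a slightly rotated and shifted copy of a cloud is pulled back.
rng = np.random.default_rng(2)
target = rng.normal(size=(30, 3)) * 10
c, s = np.cos(0.05), np.sin(0.05)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
source = target @ Rz.T + 0.1
before = np.linalg.norm(source - target, axis=1).mean()
after = np.linalg.norm(icp(source, target) - target, axis=1).mean()
assert after < before
```

In the statistical-shape-model setting, the rigid update above would be interleaved with re-estimating the model's shape parameters, which is where the paper's modification comes in.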