    Intensity-Based Registration of Freehand 3D Ultrasound and CT-scan Images of the Kidney

    This paper presents a method for registering a pre-operative Computed Tomography (CT) volume to a sparse set of intra-operative Ultrasound (US) slices. In the context of percutaneous renal puncture, the aim is to transfer planning information into an intra-operative coordinate system. The spatial position of the US slices is measured by optically localizing a calibrated probe. Assuming that kidney motion during breathing is reproducible and that the organ does not deform, the method optimizes a rigid 6-Degree-Of-Freedom (DOF) transform by evaluating, at each step, the similarity between the set of US images and the CT volume. Because the correlation between CT and US images is naturally rather poor, the images are preprocessed to increase their similarity. Among the similarity measures previously studied for medical image registration, the Correlation Ratio (CR) turned out to be one of the most accurate and appropriate, particularly with the chosen non-derivative minimization scheme, Powell-Brent's method. The resulting matching transforms are compared to a standard rigid surface registration involving segmentation, in terms of both accuracy and repeatability, and the results are presented and discussed.
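
    As a rough illustration of the similarity measure and optimizer named above, the following sketch computes the Correlation Ratio between two images and drives a toy 2-D rigid registration with Powell's method via SciPy. It is a minimal sketch only: the paper's actual pipeline is 3-D with a 6-DOF transform, tracked US slices, and preprocessing, none of which is reproduced here, and the bin count and toy images are assumptions.

        import numpy as np
        from scipy.ndimage import affine_transform
        from scipy.optimize import minimize

        def correlation_ratio(fixed, moving, bins=32):
            """CR(fixed | moving) = 1 - E[Var(fixed | bin of moving)] / Var(fixed)."""
            total_var = fixed.var()
            if total_var == 0:
                return 0.0
            # Hard-bin the moving image's intensities.
            edges = np.linspace(moving.min(), moving.max(), bins + 1)
            labels = np.clip(np.digitize(moving.ravel(), edges) - 1, 0, bins - 1)
            f = fixed.ravel()
            cond_var = 0.0
            for b in range(bins):
                vals = f[labels == b]
                if vals.size > 1:
                    cond_var += vals.size * vals.var()
            return 1.0 - cond_var / (f.size * total_var)

        def rigid_cost(params, fixed, moving):
            """Negative CR under a 2-D rigid transform (angle, tx, ty)."""
            theta, tx, ty = params
            c, s = np.cos(theta), np.sin(theta)
            warped = affine_transform(moving, np.array([[c, -s], [s, c]]),
                                      offset=[tx, ty], order=1)
            return -correlation_ratio(fixed, warped)

        # Toy usage: recover a small translation between an image and itself.
        rng = np.random.default_rng(0)
        fixed = rng.random((64, 64))
        moving = np.roll(fixed, (3, -2), axis=(0, 1))
        result = minimize(rigid_cost, x0=[0.0, 0.0, 0.0],
                          args=(fixed, moving), method="Powell")
        print(result.x)  # estimated (angle, tx, ty)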

    Parallel mutual information estimation for inferring gene regulatory networks on GPUs

    Background: Mutual information is a measure of similarity between two variables. It has been widely used in application domains including computational biology, machine learning, statistics, image processing, and financial computing. Simple histogram-based mutual information estimators lack the precision of kernel-based methods. The more recently introduced B-spline mutual information estimator is competitive with kernel-based methods in quality, but at a lower computational complexity.
    Results: We present a new approach to accelerating the B-spline mutual information estimation algorithm on commodity graphics hardware. To derive an efficient mapping onto this architecture, we used the Compute Unified Device Architecture (CUDA) programming model to design and implement a new parallel algorithm. Our implementation, called CUDA-MI, achieves speedups of up to 82 in double precision on a single GPU, compared to a multi-threaded implementation on a quad-core CPU, for large microarray datasets. We used the results obtained by CUDA-MI to infer gene regulatory networks (GRNs) from microarray data. Comparisons with existing methods, including ARACNE and TINGe, show that CUDA-MI produces GRNs of higher quality in less time.
    Conclusions: CUDA-MI is publicly available open-source software, written in CUDA and C++. It obtains significant speedups over a multi-threaded CPU implementation by fully exploiting the compute capability of commonly used, low-cost CUDA-enabled GPUs.
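
    For orientation, here is a minimal NumPy/SciPy sketch of a B-spline-weighted mutual information estimator in the spirit of the method CUDA-MI accelerates: each sample contributes to several bins with B-spline basis weights instead of a single hard histogram count. CUDA-MI parallelizes this over all gene pairs on the GPU; this sketch computes one pair on the CPU, and the bin count and spline order are illustrative assumptions.

        import numpy as np
        from scipy.interpolate import BSpline

        def bspline_weights(x, bins=10, order=3):
            """Soft bin memberships: B-spline basis weights for each sample."""
            d = order - 1                                  # spline degree
            # Scale samples onto the clamped-knot domain [0, bins - d].
            x = (x - x.min()) / (x.max() - x.min() + 1e-12) * (bins - d)
            knots = np.concatenate([np.zeros(d), np.arange(bins - d + 1),
                                    np.full(d, bins - d)])
            w = np.empty((len(x), bins))
            for b in range(bins):
                coeff = np.zeros(bins)
                coeff[b] = 1.0
                w[:, b] = BSpline(knots, coeff, d)(x)      # b-th basis function
            return w

        def bspline_mi(x, y, bins=10, order=3):
            wx = bspline_weights(x, bins, order)
            wy = bspline_weights(y, bins, order)
            px, py = wx.mean(axis=0), wy.mean(axis=0)      # marginals
            pxy = wx.T @ wy / len(x)                       # joint distribution
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

        # Dependent variables should score higher than independent ones.
        rng = np.random.default_rng(1)
        a = rng.normal(size=500)
        b = a + 0.5 * rng.normal(size=500)
        print(bspline_mi(a, b), bspline_mi(a, rng.normal(size=500)))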

    Feasibility of Using Ultra-High Field (7 T) MRI for Clinical Surgical Targeting

    Ultra-high field (7 Tesla) MRI has proven invaluable for basic science research and neuroscience applications. Structural and functional MR images of the human brain acquired at 7 T exhibit rich information content with potential utility for clinical applications. However, (1) substantial increases in susceptibility artifacts and (2) geometrical distortions at 7 T would be detrimental for stereotactic surgeries such as deep brain stimulation (DBS), which typically use 1.5 T images for surgical planning. Here, we explore whether these issues can be addressed, making the use of 7 T MRI to guide surgical planning feasible. Twelve patients with Parkinson's disease, candidates for DBS, were scanned on a standard clinical 1.5 T scanner and on a 7 T scanner. Qualitative and quantitative assessments of global and regional distortion were performed based on anatomical landmarks and transformation matrix values. Our analyses show that distances between identical landmarks on 1.5 T vs. 7 T images in the mid-brain region were less than one voxel, indicating successful co-registration between the 1.5 T and 7 T images under these specific imaging parameter sets. In the regional analysis, the central part of the brain showed minimal distortion, while inferior and frontal areas exhibited larger distortion due to their proximity to air-filled cavities. We conclude that 7 T MR images of the central brain regions exhibit distortion comparable to that observed at 1.5 T, and that clinical applications targeting structures such as the subthalamic nucleus (STN) are feasible with information-rich 7 T imaging.
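
    The landmark-based distortion check described above can be pictured with a small sketch: after co-registration, measure the Euclidean distance between corresponding anatomical landmarks on the two scans and compare it to the voxel size. The coordinates and voxel spacing below are illustrative placeholders, not the study's data.

        import numpy as np

        voxel_mm = np.array([1.0, 1.0, 1.0])    # assumed 1 mm isotropic voxels

        # Corresponding landmark positions (mm) picked on the co-registered scans.
        landmarks_15t = np.array([[12.3, 40.1, 22.8],
                                  [15.7, 38.9, 25.0]])
        landmarks_7t = np.array([[12.9, 40.4, 22.5],
                                 [15.2, 39.3, 25.6]])

        dist_mm = np.linalg.norm(landmarks_15t - landmarks_7t, axis=1)
        print(dist_mm)                           # per-landmark discrepancy
        print(np.all(dist_mm < voxel_mm.min()))  # sub-voxel agreement?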

    Mutual information based registration of medical images

    Crowd disagreement of medical images is informative

    Classifiers for medical image analysis are often trained with a single consensus label, based on combining labels given by experts or crowds. However, disagreement between annotators may be informative, and thus removing it may not be the best strategy. As a proof of concept, we predict whether a skin lesion from the ISIC 2017 dataset is a melanoma or not, based on crowd annotations of visual characteristics of that lesion. We compare using the mean annotations, illustrating consensus, with using standard deviations and other distribution moments, illustrating disagreement. We show that the mean annotations perform best, but that the disagreement measures are still informative. We also make the crowd annotations used in this paper available at https://figshare.com/s/5cbbce14647b66286544.
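
    A minimal sketch of the consensus-versus-disagreement comparison, under assumptions not taken from the paper: per-lesion crowd annotations are summarized by their mean (consensus) and by higher distribution moments (disagreement), and each feature set is fed to a simple classifier. The synthetic data, feature choices, and logistic-regression model are illustrative only.

        import numpy as np
        from scipy.stats import skew, kurtosis
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_lesions, n_annotators = 200, 6
        # Synthetic stand-in for crowd scores of one visual characteristic.
        annotations = rng.random((n_lesions, n_annotators))
        y = rng.integers(0, 2, n_lesions)       # melanoma yes/no labels

        consensus = annotations.mean(axis=1, keepdims=True)
        disagreement = np.column_stack([annotations.std(axis=1),
                                        skew(annotations, axis=1),
                                        kurtosis(annotations, axis=1)])

        for name, X in [("mean", consensus), ("disagreement", disagreement),
                        ("both", np.hstack([consensus, disagreement]))]:
            auc = cross_val_score(LogisticRegression(), X, y, scoring="roc_auc")
            print(name, auc.mean())             # ~0.5 here: labels are random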

    Pulmonary CT registration through supervised learning with convolutional neural networks

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a deformable registration method based on a 3-D convolutional neural network, together with a framework for training such a network. The network directly learns transformations between pairs of 3-D images. The network is trained on synthetic random transformations which are applied to a small set of representative images for the desired application. Training, therefore, does not require manually annotated ground truth information on the deformation. The framework for generating training transformations applies a sequence of multiple transformations at different scales to the image. This way, complex transformations with large displacements can be modeled without folding or tearing images. The methodology is demonstrated on public data sets of inhale-exhale lung CT image pairs which come with landmarks for evaluating registration quality. We show that a small training set can be used to train the network, while still allowing generalization to a separate pulmonary CT data set containing data from a different patient group, acquired using a different scanner and scan protocol. This approach results in an accurate and very fast deformable registration method, without a requirement for parameterization at test time or manually annotated data for training.
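
    The training-data generation step can be sketched as follows: random displacement fields are drawn on coarse grids at several scales, upsampled to full resolution, combined, and applied to an image, yielding (image, warped image, known deformation) training examples without manual annotation. This is a 2-D simplification under assumed grid sizes and amplitudes; the paper composes transformations sequentially in 3-D to avoid folding, whereas this sketch simply sums smooth fields.

        import numpy as np
        from scipy.ndimage import map_coordinates, zoom

        def random_multiscale_field(shape, scales=(4, 8, 16), rng=None):
            """Sum of smooth random displacement fields, coarse to fine."""
            rng = rng or np.random.default_rng()
            field = np.zeros((2, *shape))
            for s in scales:
                coarse = rng.normal(0, 16.0 / s, size=(2, s, s))
                for d in range(2):
                    # Upsample the coarse grid to a smooth full-resolution field.
                    field[d] += zoom(coarse[d], np.array(shape) / s, order=3)
            return field

        def warp(image, field):
            ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(float)
            return map_coordinates(image, [ys + field[0], xs + field[1]], order=1)

        rng = np.random.default_rng(0)
        image = rng.random((64, 64))
        field = random_multiscale_field(image.shape, rng=rng)
        pair = (image, warp(image, field))      # network input; field is known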

    Supervised local error estimation for nonlinear image registration using convolutional neural networks

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
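
    A small sketch of how such training examples can be constructed: deform an image with a known random field u, pair the original and deformed images as if they were a registration result, and use the per-pixel norm of u as the regression target for patches centered at each pixel. The patch size and deformation generator are assumptions; the paper trains a CNN on examples of this form.

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        rng = np.random.default_rng(0)
        image = rng.random((64, 64))

        # Known smooth random deformation u (two components, a few pixels).
        u = np.stack([gaussian_filter(rng.normal(0, 8, image.shape), 6)
                      for _ in range(2)])
        ys, xs = np.mgrid[0:64, 0:64].astype(float)
        deformed = map_coordinates(image, [ys + u[0], xs + u[1]], order=1)

        error_map = np.linalg.norm(u, axis=0)   # per-pixel target ||u||

        def patch_examples(a, b, target, size=15):
            """Yield ((2, size, size) patch pair, error) for interior pixels."""
            r = size // 2
            for i in range(r, a.shape[0] - r):
                for j in range(r, a.shape[1] - r):
                    patches = np.stack([a[i-r:i+r+1, j-r:j+r+1],
                                        b[i-r:i+r+1, j-r:j+r+1]])
                    yield patches, target[i, j]

        X, y = zip(*patch_examples(image, deformed, error_map))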