18,009 research outputs found

    Learning Deep Similarity Metric for 3D MR-TRUS Registration

    Purpose: The fusion of transrectal ultrasound (TRUS) and magnetic resonance (MR) images for guiding targeted prostate biopsy has significantly improved the biopsy yield of aggressive cancers. A key component of MR-TRUS fusion is image registration. However, it is very challenging to obtain a robust automatic MR-TRUS registration due to the large appearance difference between the two imaging modalities. The work presented in this paper aims to tackle this problem by addressing two challenges: (i) the definition of a suitable similarity metric and (ii) the determination of a suitable optimization strategy. Methods: This work proposes the use of a deep convolutional neural network to learn a similarity metric for MR-TRUS registration. We also use a composite optimization strategy that explores the solution space in order to search for a suitable initialization for the second-order optimization of the learned metric. Further, a multi-pass approach is used in order to smooth the metric for optimization. Results: The learned similarity metric outperforms the classical mutual information and also the state-of-the-art MIND feature based methods. The results indicate that the overall registration framework has a large capture range. The proposed deep similarity metric based approach obtained a mean TRE of 3.86 mm (with an initial TRE of 16 mm) for this challenging problem. Conclusion: A similarity metric that is learned using a deep neural network can be used to assess the quality of any given image registration and can be used in conjunction with the aforementioned optimization framework to perform automatic registration that is robust to poor initialization.
    Comment: To appear in IJCAR
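
    As a hedged illustration of the kind of learned similarity metric described above (not the authors' code; the PyTorch framing, the patch size, and the network architecture are assumptions), the sketch below maps an MR/TRUS patch pair to a scalar alignment score; an optimizer would then search the transformation parameters that maximize this score.

    # Minimal sketch, assuming PyTorch and a toy 3D CNN; not the paper's model.
    import torch
    import torch.nn as nn

    class SimilarityCNN(nn.Module):
        """Maps a 2-channel (MR, TRUS) 3D patch pair to a scalar alignment score."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.head = nn.Linear(32, 1)

        def forward(self, mr, trus):
            x = torch.cat([mr, trus], dim=1)   # (B, 2, D, H, W) patch pair
            x = self.features(x).flatten(1)    # (B, 32) pooled features
            return self.head(x).squeeze(1)     # higher score = better alignment

    # Usage: score one candidate registration of a 32^3 patch pair.
    metric = SimilarityCNN()
    mr_patch = torch.randn(1, 1, 32, 32, 32)
    trus_patch = torch.randn(1, 1, 32, 32, 32)
    score = metric(mr_patch, trus_patch)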

    Automatic Image Registration in Infrared-Visible Videos using Polygon Vertices

    In this paper, an automatic method is proposed to perform image registration on visible and infrared pairs of video sequences containing multiple targets. In multimodal image analysis such as image fusion systems, color and IR sensors are placed close to each other and capture the same scene simultaneously, but the videos are not properly aligned by default because of differing fields of view, image-capture parameters, working principles, and other camera specifications. Because the scenes are usually not planar, alignment needs to be performed continuously by extracting relevant common information. In this paper, we approximate the shape of the targets by polygons and use an affine transformation to align the two video sequences. After background subtraction, keypoints on the contour of the foreground blobs are detected using the Discrete Curve Evolution (DCE) technique. These keypoints are then described by the local shape at each point of the obtained polygon. The keypoints are matched based on the convexity of the polygon's vertices and the Euclidean distance between them. Only good matches for each local shape polygon in a frame are kept. To achieve a global affine transformation that maximises the overlapping of infrared and visible foreground pixels, the matched keypoints of each local shape polygon are stored in a temporal buffer over a few frames. The affine matrix is evaluated at each frame using the temporal buffer, and the best matrix is selected based on an overlapping ratio criterion. Our experimental results demonstrate that this method can provide highly accurate registered images and that it outperforms a previous related method.
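
    A minimal sketch (assuming OpenCV and NumPy; the function names, the RANSAC-based affine estimator, and the intersection-over-union form of the overlap ratio are assumptions, not the paper's implementation) of the two final steps described above: fitting one global affine matrix to the keypoint matches accumulated in the temporal buffer and scoring it by the overlap of the infrared and visible foreground masks.

    # Minimal sketch, assuming OpenCV >= 3.2 for estimateAffine2D.
    import numpy as np
    import cv2

    def estimate_affine(buffered_ir_pts, buffered_vis_pts):
        """Fit one affine matrix to all matches accumulated over recent frames."""
        src = np.asarray(buffered_ir_pts, dtype=np.float32)
        dst = np.asarray(buffered_vis_pts, dtype=np.float32)
        matrix, _inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
        return matrix  # 2x3 affine matrix, or None if the fit failed

    def overlap_ratio(ir_mask, vis_mask, matrix):
        """One possible overlap criterion: IoU of the warped IR and visible masks."""
        h, w = vis_mask.shape
        warped = cv2.warpAffine(ir_mask, matrix, (w, h), flags=cv2.INTER_NEAREST)
        inter = np.logical_and(warped > 0, vis_mask > 0).sum()
        union = np.logical_or(warped > 0, vis_mask > 0).sum()
        return inter / union if union else 0.0

    At each frame, the candidate matrix with the highest overlap ratio over the buffer would be retained, mirroring the selection criterion described in the abstract.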

    Three-dimensional reconstruction of the tissue-specific multielemental distribution within Ceriodaphnia dubia via multimodal registration using laser ablation ICP-mass spectrometry and X-ray spectroscopic techniques

    In this work, the three-dimensional elemental distribution profile within the freshwater crustacean Ceriodaphnia dubia was constructed at a spatial resolution down to 5 mu m via a data fusion approach employing state-of-the-art laser ablation inductively coupled plasma-time-of-flight mass spectrometry (LA-ICP-TOFMS) and laboratory-based absorption microcomputed tomography (mu-CT). C. dubia was exposed to elevated Cu, Ni, and Zn concentrations, chemically fixed, dehydrated, stained, and embedded prior to mu-CT analysis. Subsequently, the sample was cut into 5 mu m thin sections that were subjected to LA-ICP-TOFMS imaging. Multimodal image registration was performed to spatially align the 2D LA-ICP-TOFMS images relative to the corresponding slices of the 3D mu-CT reconstruction. Mass channels corresponding to the isotopes of a single element were merged to improve the signal-to-noise ratios within the elemental images. In order to aid the visual interpretation of the data, LA-ICP-TOFMS data were projected onto the mu-CT voxels representing tissue. Additionally, the image resolution and elemental sensitivity were compared to those obtained with synchrotron radiation based 3D confocal mu-X-ray fluorescence imaging upon a chemically fixed and air-dried C. dubia specimen.
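
    A minimal sketch (assuming NumPy; the array and function names are illustrative, not the authors' pipeline) of two of the processing steps mentioned above: summing the mass channels of a single element to improve the signal-to-noise ratio, and projecting the resulting elemental image onto the voxels that the registered mu-CT reconstruction labels as tissue.

    # Minimal sketch with synthetic data; not the published workflow.
    import numpy as np

    def merge_isotope_channels(la_icp_stack, isotope_indices):
        """Sum the selected isotope channels (e.g. 63Cu and 65Cu) into one image.

        la_icp_stack: (n_channels, H, W) array of per-isotope intensity maps.
        """
        return la_icp_stack[list(isotope_indices)].sum(axis=0)

    def project_onto_tissue(element_image, tissue_mask):
        """Zero out pixels that fall outside the co-registered tissue mask."""
        return np.where(tissue_mask, element_image, 0.0)

    # Usage with synthetic data: merge two hypothetical Cu channels, then mask them.
    stack = np.random.poisson(5.0, size=(10, 64, 64)).astype(float)
    cu_image = merge_isotope_channels(stack, isotope_indices=(2, 3))
    cu_on_tissue = project_onto_tissue(cu_image, tissue_mask=np.ones((64, 64), bool))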