
    Mesh-to-raster based non-rigid registration of multi-modal images

    Region of interest (ROI) alignment in medical images plays a crucial role in diagnostics, procedure planning, treatment, and follow-up. Frequently, a model is represented as a triangulated mesh while the patient data is provided from CAT scanners as pixel or voxel data. Previously, we presented a 2D method for curve-to-pixel registration. This paper contributes (i) a general mesh-to-raster (M2R) framework to register ROIs in multi-modal images; (ii) a 3D surface-to-voxel application; and (iii) a comprehensive quantitative evaluation in 2D using ground truth provided by the simultaneous truth and performance level estimation (STAPLE) method. The registration is formulated as a minimization problem whose objective consists of a data term, which involves the signed distance function of the ROI from the reference image, and a higher-order elastic regularizer for the deformation. The evaluation is based on quantitative light-induced fluorescence (QLF) and digital photography (DP) of decalcified teeth. STAPLE is computed on 150 image pairs from 32 subjects, each showing one corresponding tooth in both modalities. The ROI in each image is manually marked by three experts (900 curves in total). In the QLF-DP setting, our approach significantly outperforms the mutual information-based registration algorithm implemented in the Insight Segmentation and Registration Toolkit (ITK) and Elastix.
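    In outline, the objective described above can be written as the following energy (a minimal sketch; the curvature form of the regularizer and the weight $\alpha$ are assumptions, since the abstract does not give the exact formula):

```latex
E[\varphi] \;=\; \int_{\mathcal{M}} \big( d_R(\varphi(x)) \big)^2 \, dx
\;+\; \alpha \int_{\Omega} \lVert \Delta \varphi(x) \rVert^2 \, dx
```

    Here $d_R$ is the signed distance function of the ROI in the reference image, $\varphi$ is the sought deformation applied to the mesh points, and the second term is a second-order (curvature-type) elastic regularizer weighted by $\alpha$.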

    Face Recognition from Sequential Sparse 3D Data via Deep Registration

    Previous works have shown that face recognition with highly accurate 3D data is more reliable and insensitive to pose and illumination variations. Recently, low-cost and portable 3D acquisition techniques like ToF (Time of Flight) and DoE-based structured light systems enable us to access 3D data easily, e.g., via a mobile phone. However, such devices only provide sparse (limited speckles in the structured light system) and noisy 3D data which cannot support face recognition directly. In this paper, we aim at achieving high-performance face recognition for devices equipped with such modules, which is very meaningful in practice as such devices will be very popular. We propose a framework to perform face recognition by fusing a sequence of low-quality 3D data. As the 3D data are sparse and noisy and cannot be well handled by conventional methods like the ICP algorithm, we design a PointNet-like Deep Registration Network (DRNet) which works with ordered 3D point coordinates while preserving the ability of mining local structures via convolution. Meanwhile, we develop a novel loss function to optimize our DRNet based on the quaternion expression, which clearly outperforms other widely used functions. For face recognition, we design a deep convolutional network which takes the fused 3D depth map as input, based on the AMSoftmax model. Experiments show that our DRNet can achieve a rotation error of 0.95° and a translation error of 0.28 mm for registration. Face recognition on the fused data also achieves 99.2% rank-1 accuracy and 97.5% at FAR = 0.001 on the Bosphorus dataset, which is comparable with state-of-the-art recognition performance based on high-quality data.
    Comment: To appear in ICB201
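    The abstract does not spell out the quaternion-based loss; the sketch below shows one common formulation of such a registration loss (the function name, the weight w, and the exact distance are assumptions for illustration, not the paper's definition):

```python
import torch

def registration_loss(q_pred, q_gt, t_pred, t_gt, w=1.0):
    """Hypothetical quaternion-based registration loss (not the paper's exact formula).

    q_*: (..., 4) quaternions, t_*: (..., 3) translations.
    """
    q_pred = q_pred / q_pred.norm(dim=-1, keepdim=True)  # project onto unit quaternions
    # |<q_pred, q_gt>| handles the double cover: q and -q encode the same rotation
    rot_err = 1.0 - (q_pred * q_gt).sum(dim=-1).abs()
    trans_err = (t_pred - t_gt).norm(dim=-1)             # Euclidean translation error
    return (rot_err + w * trans_err).mean()
```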

    Medical image registration by neural networks: a regression-based registration approach

    This thesis focuses on the development and evaluation of a registration-by-regression approach for the 3D/2D registration of coronary Computed Tomography Angiography (CTA) and X-ray angiography. This regression-based method relates image features of 2D projection images to the transformation parameters of the 3D image by nonlinear regression. It treats registration as a regression problem, as an alternative to the traditional iterative approach, which often comes with high computational costs and limited capture range. First, we presented a survey of methods with a regression-based registration approach for medical applications, as well as a summary of their main characteristics (Chapter 2). Second, we studied the registration methodology, addressing the input features and the choice of regression model (Chapters 3 and 4). For that purpose, we evaluated different options using simulated X-ray images generated from coronary artery tree models derived from 3D CTA scans. We also compared the registration-by-regression results with a method based on iterative optimization. Different image features of 2D projections and seven regression techniques were considered. The regression approach for simulated X-rays was shown to be slightly less accurate, but much more robust, than the method based on iterative optimization. Neural networks obtained accurate results and proved robust to large initial misalignment. Third, we evaluated the registration-by-regression method using clinical data, integrating the 3D preoperative CTA of the coronary arteries with intraoperative 2D X-ray angiography images (Chapter 5). For the evaluation of the image registration, a gold standard registration was established using an exhaustive search followed by a multi-observer visual scoring procedure. The influence of preprocessing options for the simulated images and the real X-rays was studied. Several image features were also compared. The coronary registration-by-regression results were not satisfactory, resembling manual initialization accuracy. Therefore, the proposed method, for this concrete problem and in its current configuration, is not sufficiently accurate to be used in clinical practice. The framework developed enables us to better understand the dependency of the proposed method on the differences between simulated and real images. The main difficulty lies in the substantial differences in appearance between the images used for training (simulated X-rays from 3D coronary models) and the actual images obtained during the intervention (real X-ray angiography). We suggest alternative solutions and recommend evaluating the registration-by-regression approach in other applications where training data is available that has a similar appearance to the eventual test data.
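    As a toy illustration of the registration-by-regression idea (all shapes, the feature extraction step, and the network configuration are assumptions; the thesis compares seven regression techniques and several feature sets):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Training set: features extracted from simulated X-ray projections of 3D CTA-derived
# coronary models, paired with the known transformation parameters used to render them.
X_train = np.random.rand(2000, 64)   # placeholder: 64-D projection-image features
y_train = np.random.rand(2000, 6)    # placeholder: 3 rotations + 3 translations

# A neural network maps image features directly to transformation parameters,
# replacing the iterative optimization loop of conventional 3D/2D registration.
model = MLPRegressor(hidden_layer_sizes=(100, 50), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

X_test = np.random.rand(1, 64)       # placeholder features from a real X-ray angiogram
theta = model.predict(X_test)        # estimated pose in one shot, no iterative search
```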

    Learning to Find Eye Region Landmarks for Remote Gaze Estimation in Unconstrained Settings

    Conventional feature-based and model-based gaze estimation methods have proven to perform well in settings with controlled illumination and specialized cameras. In unconstrained real-world settings, however, such methods are surpassed by recent appearance-based methods due to difficulties in modeling factors such as illumination changes and other visual artifacts. We present a novel learning-based method for eye region landmark localization that enables conventional methods to be competitive with the latest appearance-based methods. Despite having been trained exclusively on synthetic data, our method exceeds the state of the art for iris localization and eye shape registration on real-world imagery. We then use the detected landmarks as input to iterative model-fitting and lightweight learning-based gaze estimation methods. Our approach outperforms existing model-fitting and appearance-based methods in the context of person-independent and personalized gaze estimation.
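    To make the model-fitting step concrete, here is a minimal sketch of recovering a gaze direction from detected eye-region landmarks (the two-landmark parameterization and the angle convention are assumptions for illustration, not the paper's exact pipeline):

```python
import numpy as np

def gaze_from_landmarks(eyeball_center, iris_center):
    """Gaze as the unit ray from an estimated 3D eyeball center through the iris center."""
    g = iris_center - eyeball_center
    g = g / np.linalg.norm(g)
    # Convert to pitch/yaw under an assumed camera frame (x right, y down, z forward).
    pitch = np.arcsin(-g[1])
    yaw = np.arctan2(-g[0], -g[2])
    return g, pitch, yaw

g, pitch, yaw = gaze_from_landmarks(np.array([0.0, 0.0, 0.0]),
                                    np.array([0.1, -0.05, -1.0]))
```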