3 research outputs found

    Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet

    Full text link
    Catheter segmentation in 3D ultrasound is important for computer-assisted cardiac intervention. However, training a deep convolutional neural network (CNN) to segment the catheter requires a large number of labeled images, which are expensive and time-consuming to obtain. In this paper, we propose a novel catheter segmentation approach that requires fewer annotations than supervised learning methods yet achieves better performance. Our scheme uses deep Q-learning as a pre-localization step, which avoids voxel-level annotation and efficiently localizes the target catheter. Given the detected catheter, a patch-based Dual-UNet is applied to segment the catheter in the 3D volumetric data. To train the Dual-UNet with limited labeled images and leverage information from unlabeled images, we propose a novel semi-supervised scheme that exploits unlabeled images through hybrid constraints on the predictions. Experiments show that the proposed scheme achieves higher performance than state-of-the-art semi-supervised methods and demonstrate that our method is able to learn from large-scale unlabeled images. Comment: Accepted by MICCAI 2020
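
    The abstract above describes a three-part pipeline: deep Q-learning pre-localization, a patch-based Dual-UNet, and a semi-supervised loss built from hybrid constraints on the predictions. As a rough illustration of the semi-supervised part only, the PyTorch sketch below trains two small 3D segmentation branches with a supervised loss on labeled patches and a simple agreement term on unlabeled patches. The module names, network depth, loss weight, and the plain MSE agreement term are illustrative assumptions, not the paper's actual formulation.

    # Hypothetical sketch of the semi-supervised idea described above: two
    # segmentation branches are trained with a supervised loss on labeled
    # patches and a consistency term that encourages the branches to agree
    # on unlabeled patches. All names and hyperparameters are placeholders.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyUNet3D(nn.Module):
        """Stand-in 3D encoder-decoder; a real Dual-UNet branch is deeper."""
        def __init__(self, in_ch=1, out_ch=2):
            super().__init__()
            self.enc = nn.Sequential(nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv3d(16, 32, 3, padding=1), nn.ReLU())
            self.dec = nn.Conv3d(32, out_ch, 1)

        def forward(self, x):
            return self.dec(self.enc(x))

    def semi_supervised_step(branch_a, branch_b, labeled_x, labeled_y,
                             unlabeled_x, lam=0.1):
        """Supervised cross-entropy on labeled patches plus an
        agreement (MSE-on-softmax) term on unlabeled patches."""
        sup = F.cross_entropy(branch_a(labeled_x), labeled_y) + \
              F.cross_entropy(branch_b(labeled_x), labeled_y)
        pa = F.softmax(branch_a(unlabeled_x), dim=1)
        pb = F.softmax(branch_b(unlabeled_x), dim=1)
        consistency = F.mse_loss(pa, pb)
        return sup + lam * consistency

    if __name__ == "__main__":
        a, b = TinyUNet3D(), TinyUNet3D()
        xl = torch.randn(2, 1, 16, 16, 16)           # labeled patches
        yl = torch.randint(0, 2, (2, 16, 16, 16))    # voxel-level labels
        xu = torch.randn(4, 1, 16, 16, 16)           # unlabeled patches
        loss = semi_supervised_step(a, b, xl, yl, xu)
        loss.backward()
        print(loss.item())

    The agreement term here is ordinary prediction consistency; the paper's "hybrid constraints from predictions" combine richer prediction-based signals, so treat this only as a structural outline of how labeled and unlabeled patches contribute to the same loss.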

    Catheter detection in 3D ultrasound using triplanar-based convolutional neural networks

    No full text
    3D Ultrasound (US) image-based catheter detection can potentially reduce the cost of extra equipment and training. At the same time, accurate catheter detection can shorten the operation and improve its outcome. In this paper, we propose a catheter detection method based on convolutional neural networks (CNNs) in 3D US. Voxels in US images are classified as catheter or background using triplanar-based CNNs. Our proposed CNN employs two-stage training with a weighted loss function, which copes with highly imbalanced training data and improves classification accuracy. Compared to state-of-the-art handcrafted features on ex-vivo datasets, our proposed method improves the F2-score by at least 31%. Based on the classified volumes, the catheters are localized with an average position error of less than 3 voxels in the examined datasets, indicating that catheters are always detected in noisy and low-resolution images.
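
    To make the triplanar classification and the weighted loss mentioned above concrete, the PyTorch sketch below crops three orthogonal planes around a candidate voxel, classifies the stacked planes with a small CNN, and applies a class-weighted cross-entropy to counteract the catheter/background imbalance. Patch size, network depth, and the class weights are illustrative assumptions, and the two-stage training schedule is not shown.

    # Minimal sketch of the triplanar idea: three orthogonal 2D patches
    # (axial, coronal, sagittal) are cropped around each candidate voxel and
    # classified jointly; a class-weighted cross-entropy offsets the rarity
    # of catheter voxels. All sizes and weights are illustrative only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def extract_triplanar(volume, z, y, x, half=16):
        """Crop three orthogonal planes of size (2*half, 2*half) around (z, y, x).
        `volume` is a 3D tensor; boundary handling is omitted for brevity."""
        axial    = volume[z, y - half:y + half, x - half:x + half]
        coronal  = volume[z - half:z + half, y, x - half:x + half]
        sagittal = volume[z - half:z + half, y - half:y + half, x]
        return torch.stack([axial, coronal, sagittal])  # (3, 2*half, 2*half)

    class TriplanarCNN(nn.Module):
        """Stand-in classifier over the stacked planes (catheter vs. background)."""
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
            self.head = nn.Linear(32, num_classes)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    if __name__ == "__main__":
        vol = torch.randn(64, 64, 64)
        patch = extract_triplanar(vol, 32, 32, 32).unsqueeze(0)   # (1, 3, 32, 32)
        model = TriplanarCNN()
        target = torch.tensor([1])                                # catheter voxel
        # Heavier weight on the rare catheter class to offset the imbalance.
        weights = torch.tensor([1.0, 20.0])
        loss = F.cross_entropy(model(patch), target, weight=weights)
        loss.backward()
        print(loss.item())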