
    A Novel Sep-Unet Architecture of Convolutional Neural Networks to Improve Dermoscopic Image Segmentation by Training Parameters Reduction

    Dermoscopic imaging is one of the standard modalities for diagnosing skin lesions such as skin cancer. However, noise and other artifacts, including hair around the lesion, make automatic and reliable segmentation methods necessary, and the diversity in the color and structure of skin lesions makes automatic lesion segmentation challenging. In this study, we use convolutional neural networks (CNNs) for dermoscopic image segmentation. The main goal of this research is to propose a novel deep neural network architecture for lesion segmentation in dermoscopic images, improved by replacing standard convolutional layers with separable convolutional layers. By factorizing the convolution kernels in this way, the algorithm runs faster and the number of trainable parameters decreases. We also apply a suitable preprocessing step before feeding the images into the network. The network is built from standard convolutional layers, separable convolutional layers, and transposed convolutions in the downsampling and upsampling paths. The resulting architecture, named Sep-Unet, segments the images with a Dice coefficient of 98%.
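    The abstract does not give the exact layer configuration of Sep-Unet; the following is only a minimal PyTorch sketch of how a depthwise separable convolution cuts trainable parameters relative to a standard convolution. The class name and channel sizes are illustrative assumptions, not taken from the paper.

```python
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) 3x3
    convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def n_params(m):
    return sum(p.numel() for p in m.parameters())

# Illustrative channel sizes (not the paper's).
standard = nn.Conv2d(64, 128, kernel_size=3, padding=1)
separable = SeparableConv2d(64, 128)
print(n_params(standard))   # 73,856 parameters
print(n_params(separable))  # 8,960 parameters, roughly 8x fewer
```

    In a U-Net-style network the upsampling path would recover spatial resolution with transposed convolutions (nn.ConvTranspose2d), as the abstract describes.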

    AutoSNAP: Automatically Learning Neural Architectures for Instrument Pose Estimation

    Despite recent successes, the advances in Deep Learning have not yet been fully translated to Computer Assisted Intervention (CAI) problems such as pose estimation of surgical instruments. Currently, neural architectures designed for classification and segmentation tasks are adopted, ignoring significant discrepancies between those tasks and CAI. We propose an automatic framework (AutoSNAP) for instrument pose estimation problems, which discovers and learns the architectures for neural networks. We introduce 1) an efficient testing environment for pose estimation, 2) a powerful architecture representation based on novel Symbolic Neural Architecture Patterns (SNAPs), and 3) an optimization of the architecture using an efficient search scheme. Using AutoSNAP, we discover an improved architecture (SNAPNet) which outperforms both the hand-engineered i3PosNet and the state-of-the-art architecture search method DARTS.
    Comment: Accepted at MICCAI 2020. Preparing code for release at https://github.com/MECLabTUDA/AutoSNA
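    The abstract only names the components of the framework. As a rough illustration of the general idea of encoding candidate blocks as symbol sequences and searching over them, here is a toy random-search loop; the symbol vocabulary, proxy score, and search strategy are hypothetical placeholders, not the actual SNAP grammar or AutoSNAP optimizer.

```python
import random

# Hypothetical symbol vocabulary for a candidate block (not the paper's grammar).
VOCABULARY = ["conv3x3", "conv1x1", "sepconv3x3", "skip", "relu", "batchnorm"]

def sample_pattern(length=6):
    """Sample a symbolic pattern: an ordered list of layer symbols."""
    return [random.choice(VOCABULARY) for _ in range(length)]

def evaluate(pattern):
    """Placeholder proxy score. In practice this would build the block,
    train it briefly in a pose-estimation test environment, and return
    the validation error."""
    return random.random()

def random_search(n_trials=100):
    best_pattern, best_score = None, float("inf")
    for _ in range(n_trials):
        pattern = sample_pattern()
        score = evaluate(pattern)
        if score < best_score:
            best_pattern, best_score = pattern, score
    return best_pattern, best_score

if __name__ == "__main__":
    pattern, score = random_search()
    print("best pattern:", pattern, "score:", round(score, 3))
```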

    Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data

    The data-driven nature of deep learning (DL) models for semantic segmentation requires a large number of pixel-level annotations. However, large-scale and fully labeled medical datasets are often unavailable for practical tasks. Recently, partially supervised methods have been proposed to utilize images with incomplete labels in the medical domain. To bridge the methodological gaps in partially supervised learning (PSL) under data scarcity, we propose Vicinal Labels Under Uncertainty (VLUU), a simple yet efficient framework utilizing the human structure similarity for partially supervised medical image segmentation. Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels. We systematically evaluate VLUU under the challenges of small-scale data, dataset shift, and class imbalance on two commonly used segmentation datasets for the tasks of chest organ segmentation and optic disc-and-cup segmentation. The experimental results show that VLUU consistently outperforms previous partially supervised models in these settings. Our research suggests a new research direction in label-efficient deep learning with partial supervision.
    Comment: Accepted by Applied Soft Computing
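    The abstract does not spell out how vicinal labels are generated. The NumPy sketch below illustrates one simple way, in the spirit of vicinal risk minimization, to turn two partially labeled samples into a fully labeled "vicinal" training pair by mixing the images and merging their label maps; the function name, mixing rule, and class indices are assumptions for illustration only, not the paper's VLUU procedure.

```python
import numpy as np

def make_vicinal_pair(img_a, mask_a, img_b, mask_b, alpha=0.5):
    """Combine two partially labeled samples into one 'vicinal' sample.

    img_a / img_b : (H, W) grayscale images
    mask_a        : labels only structure 1 (values in {0, 1})
    mask_b        : labels only structure 2 (values in {0, 2})
    The mixed image carries both structures' labels, giving a fully
    labeled target for multi-structure training.
    """
    mixed_img = alpha * img_a + (1.0 - alpha) * img_b
    mixed_mask = np.where(mask_a > 0, mask_a, mask_b)  # merge label maps
    return mixed_img, mixed_mask

# Toy example: 4x4 images, each annotated for a different structure.
img_a, img_b = np.random.rand(4, 4), np.random.rand(4, 4)
mask_a = np.zeros((4, 4), dtype=int); mask_a[:2, :2] = 1   # structure 1 only
mask_b = np.zeros((4, 4), dtype=int); mask_b[2:, 2:] = 2   # structure 2 only
vic_img, vic_mask = make_vicinal_pair(img_a, mask_a, img_b, mask_b)
print(np.unique(vic_mask))  # [0 1 2] -- both structures labeled
```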

    Deep Learning-based Radiomics Framework for Multi-Modality PET-CT Images

    Multimodal positron emission tomography-computed tomography (PET-CT) imaging is widely regarded as the imaging modality of choice for cancer management. This is because PET-CT combines the high sensitivity of PET in detecting regions of abnormal function with the specificity of CT in depicting the underlying anatomy where the abnormal function is occurring. Radiomics is an emerging research field that enables the extraction and analysis of quantitative features from medical images, providing valuable insights into underlying pathophysiology that cannot be discerned by the naked eye. This information can assist decision-making in clinical practice, leading to better personalised treatment planning, patient outcome prediction, and therapy response assessment. The aim of this thesis is to propose a new deep learning-based radiomics framework for multimodal PET-CT images. The proposed framework comprises three methods: 1) a tumour segmentation method via a self-supervision enabled false positive and false negative reduction network; 2) a constrained hierarchical multi-modality feature learning method to predict patient outcome from multimodal PET-CT images; 3) an automatic neural architecture search method to find the optimal network architecture for both patient outcome prediction and tumour segmentation. Extensive experiments have been conducted on three datasets: one public soft-tissue sarcoma dataset, one public challenge dataset, and one in-house lung cancer dataset. The results demonstrate that the proposed methods obtain better performance in all tasks when compared to state-of-the-art methods.
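    The thesis abstract describes the constrained hierarchical multi-modality feature learning only at a high level. As a rough illustration of the basic pattern of learning separate PET and CT feature streams and fusing them for outcome prediction, here is a minimal PyTorch sketch; the 2D inputs, layer sizes, and fusion by concatenation are assumptions, not the thesis architecture.

```python
import torch
import torch.nn as nn

class TwoStreamPETCT(nn.Module):
    """Illustrative two-stream network: separate encoders for PET and CT
    (2D slices for brevity), with fused features feeding an outcome head."""
    def __init__(self, n_outcomes=2):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.pet_encoder = encoder()
        self.ct_encoder = encoder()
        self.head = nn.Linear(32 * 2, n_outcomes)

    def forward(self, pet, ct):
        # Concatenate per-modality features before the prediction head.
        feats = torch.cat([self.pet_encoder(pet), self.ct_encoder(ct)], dim=1)
        return self.head(feats)

model = TwoStreamPETCT()
pet = torch.randn(1, 1, 64, 64)   # toy PET slice
ct = torch.randn(1, 1, 64, 64)    # toy CT slice
print(model(pet, ct).shape)       # torch.Size([1, 2])
```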