3 research outputs found

    A multi-stage GAN for multi-organ chest X-ray image generation and segmentation

    Full text available
    Multi-organ segmentation of X-ray images is of fundamental importance for computer-aided diagnosis systems. However, the most advanced semantic segmentation methods rely on deep learning and require a huge amount of labeled images, which are rarely available due to both the high cost of human resources and the time required for labeling. In this paper, we present a novel multi-stage generation algorithm based on Generative Adversarial Networks (GANs) that can produce synthetic images along with their semantic labels and can be used for data augmentation. The main feature of the method is that, unlike other approaches, generation occurs in several stages, which simplifies the procedure and allows it to be used on very small datasets. The method has been evaluated on the segmentation of chest radiographic images, showing promising results. The multi-stage approach achieves state-of-the-art results and, when very few images are used to train the GANs, outperforms the corresponding single-stage approach.
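    The staged-generation idea can be illustrated with a minimal sketch (all function names are hypothetical, and the stage "generators" below are NumPy placeholders standing in for trained GAN generators): a first stage produces a multi-organ semantic label map, and a second, conditional stage renders the image from that label map, so each network solves a simpler sub-problem and the two outputs form a paired training sample for augmentation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def stage1_generate_labels(z, shape=(64, 64), n_organs=3):
        """Placeholder for a stage-1 GAN generator: takes a noise vector z
        and returns a multi-organ label map (0 = background). Here the
        "organs" are just random circles for illustration."""
        labels = np.zeros(shape, dtype=np.int64)
        for organ in range(1, n_organs + 1):
            cy, cx = rng.integers(8, shape[0] - 8, size=2)
            yy, xx = np.ogrid[: shape[0], : shape[1]]
            labels[(yy - cy) ** 2 + (xx - cx) ** 2 < 36] = organ
        return labels

    def stage2_generate_image(labels):
        """Placeholder for a stage-2 conditional GAN: renders an X-ray-like
        image from the label map (per-organ intensity plus noise)."""
        intensities = np.array([0.1, 0.5, 0.7, 0.9])  # one per label value
        img = intensities[labels] + 0.05 * rng.standard_normal(labels.shape)
        return np.clip(img, 0.0, 1.0)

    def generate_pair(z_dim=100):
        """Multi-stage generation: label map first, then the paired image,
        yielding a synthetic (image, mask) sample for data augmentation."""
        z = rng.standard_normal(z_dim)
        labels = stage1_generate_labels(z)
        image = stage2_generate_image(labels)
        return image, labels

    image, labels = generate_pair()
    print(image.shape, labels.shape)
    ```

    The point of the decomposition is that each stage's target distribution (label maps alone, or images given labels) is simpler than the joint image-and-mask distribution, which is what makes training feasible on very small datasets.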

    Fusion of Visual and Anamnestic Data for the Classification of Skin Lesions with Deep Learning

    No full text
    Early diagnosis of skin lesions is essential for a positive outcome, since the disease can be resolved only by surgical treatment. In this manuscript, a deep learning method is proposed for the classification of cutaneous lesions based on their visual appearance and on the patient’s anamnestic data, which include the patient’s age and gender and the position of the lesion. The classifier discriminates between benign and malignant lesions, mimicking a typical procedure in dermatological diagnostics. Good preliminary results on the ISIC Dataset demonstrate the importance of the information fusion process, which significantly improves the classification accuracy.
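    The fusion of visual and anamnestic data can be sketched as a simple late-fusion head (names, dimensions, and the site vocabulary below are illustrative assumptions, not the paper's actual architecture): the anamnestic fields are encoded as a small numeric vector, concatenated with a CNN image embedding, and passed to a benign-vs-malignant classifier.

    ```python
    import numpy as np

    def encode_anamnesis(age, gender, site, sites=("head", "trunk", "limb")):
        """Encode patient metadata as numeric features: normalized age,
        one-hot gender, one-hot lesion position (site list is illustrative)."""
        age_feat = [age / 100.0]
        gender_feat = [1.0, 0.0] if gender == "female" else [0.0, 1.0]
        site_feat = [1.0 if site == s else 0.0 for s in sites]
        return np.array(age_feat + gender_feat + site_feat)

    def fuse_and_classify(image_embedding, meta, weights, bias):
        """Late fusion: concatenate the visual embedding with the anamnestic
        vector and apply a logistic benign/malignant head."""
        fused = np.concatenate([image_embedding, meta])
        logit = fused @ weights + bias
        return 1.0 / (1.0 + np.exp(-logit))  # probability of "malignant"

    rng = np.random.default_rng(0)
    img_emb = rng.standard_normal(128)          # stand-in for CNN features
    meta = encode_anamnesis(55, "male", "trunk")
    w = rng.standard_normal(img_emb.size + meta.size) * 0.01
    p = fuse_and_classify(img_emb, meta, w, 0.0)
    print(f"P(malignant) = {p:.3f}")
    ```

    Because the metadata enters only at the final classification head, this design lets the image encoder and the fusion layer be trained (or fine-tuned) independently, which is one common way such information fusion improves accuracy over an image-only classifier.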