
    Multi-Adversarial Variational Autoencoder Networks

    The unsupervised training of GANs and VAEs has enabled them to generate realistic images mimicking real-world distributions and to perform image-based unsupervised clustering or semi-supervised classification. Combining the power of these two generative models, we introduce Multi-Adversarial Variational autoEncoder Networks (MAVENs), a novel network architecture that incorporates an ensemble of discriminators in a VAE-GAN network, with simultaneous adversarial learning and variational inference. We apply MAVENs to the generation of synthetic images and propose a new distribution measure to quantify the quality of the generated images. Our experimental results on datasets from the computer vision and medical imaging domains (Street View House Numbers, CIFAR-10, and Chest X-Ray) demonstrate competitive performance against state-of-the-art semi-supervised models in both image generation and classification tasks.
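
    A minimal sketch of the core idea, a VAE-GAN whose generator is trained against an ensemble of discriminators, in PyTorch. The layer sizes, the number of discriminators, and the equal weighting of the three loss terms are illustrative assumptions, not the paper's exact configuration:

```python
# Hedged sketch of a VAE-GAN with an ensemble of discriminators (MAVEN-style).
# Sizes, ensemble count, and loss weighting are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT, IMG = 32, 28 * 28  # assumed latent size and flattened image size

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT)
        self.logvar = nn.Linear(256, LATENT)
    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

encoder = Encoder()
decoder = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                        nn.Linear(256, IMG), nn.Sigmoid())
# The ensemble: several independent discriminators scoring real vs. generated.
discriminators = nn.ModuleList(
    nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
    for _ in range(3))

def generator_step(x):
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
    x_hat = decoder(z)
    # VAE terms: reconstruction plus KL divergence to the unit Gaussian prior.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Adversarial term averaged over the ensemble: the decoder tries to make
    # every discriminator score its samples as real.
    adv = torch.stack([F.binary_cross_entropy_with_logits(
        d(x_hat), torch.ones(x.size(0), 1)) for d in discriminators]).mean()
    return recon + kl + adv

loss = generator_step(torch.rand(8, IMG))
loss.backward()
# A full training loop would alternate this with updates that train each
# discriminator to separate real images from decoder samples.
```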

    Multiclass Classification Application using SVM Kernel to Classify Chest X-ray Images Based on Nodule Location in Lung Zones

    The Support Vector Machine (SVM) has long been known as an excellent approach for image classification. Although many studies have reported on its achievements, it remains weak at multiclass classification because it was originally designed as a binary classification technique. Adapting SVM to solve multiclass problems, such as classifying chest X-ray images by lung zone location, is therefore a challenging task. Classified X-ray images improve image retrieval, reducing the time needed to look up the images later. Recognizing these difficulties, we propose an application method for multiclass classification using SVM kernels to classify chest X-ray images based on nodule location in lung zones. The multiclass classification experiment is performed using four popular SVM kernels, namely linear, polynomial, radial basis function (RBF), and sigmoid. Overall, we obtained high classification accuracy (>90%) for three of the classifiers (RBF, polynomial, and linear kernels), while the sigmoid kernel classifier was only moderately good at 82.7% accuracy. Moreover, the confusion matrices revealed that the RBF and polynomial classifiers managed to classify test data into all classes, whereas the linear- and sigmoid-kernel classifiers missed at least one class. Since each classifier works differently depending on its kernel type, we found it better to view the kernels as complementary rather than as competing options. These results also show that the original SVM classification method can be adapted to handle multiclass classification problems.
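
    A short illustration of the kernel comparison described above, using scikit-learn's SVC (which handles multiclass problems internally via a one-vs-one scheme). The synthetic features are a stand-in for the lung-zone nodule features, which the abstract does not specify:

```python
# Compare the four SVM kernels on a synthetic 4-class problem and inspect
# the confusion matrices for "missed" classes, as in the study above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(kernel, "accuracy:", accuracy_score(y_te, pred))
    # An all-zero column means the classifier never predicted that class --
    # the "missed at least one class" behavior noted in the abstract.
    print(confusion_matrix(y_te, pred))
```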

    Deep Learning-based Patient Re-identification Is able to Exploit the Biometric Nature of Medical Chest X-ray Data

    With the rise and ever-increasing potential of deep learning techniques in recent years, publicly available medical datasets have become a key factor in enabling reproducible development of diagnostic algorithms in the medical domain. Medical data contains sensitive patient-related information and is therefore usually anonymized by removing patient identifiers, e.g., patient names, before publication. To the best of our knowledge, we are the first to show that a well-trained deep learning system is able to recover the patient identity from chest X-ray data. We demonstrate this using the publicly available large-scale ChestX-ray14 dataset, a collection of 112,120 frontal-view chest X-ray images from 30,805 unique patients. Our verification system is able to identify whether two frontal chest X-ray images are from the same person with an AUC of 0.9940 and a classification accuracy of 95.55%. We further highlight that the proposed system is able to reveal the same person even ten or more years after the initial scan. When pursuing a retrieval approach, we observe an mAP@R of 0.9748 and a precision@1 of 0.9963. Furthermore, we achieve an AUC of up to 0.9870 and a precision@1 of up to 0.9444 when evaluating our trained networks on external datasets such as CheXpert and the COVID-19 Image Data Collection. Given this high identification rate, a potential attacker could leak patient-related information and additionally cross-reference images to obtain more information. Thus, there is a great risk of sensitive content falling into unauthorized hands or being disseminated against the will of the concerned patients. Especially during the COVID-19 pandemic, numerous chest X-ray datasets have been published to advance research, and such data may be vulnerable to attacks by deep learning-based re-identification algorithms.
    Comment: Published in Scientific Reports
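
    A hedged sketch of the verification setup: embed two chest X-rays with a shared network and score their similarity, then measure AUC over same/different-patient pairs. The tiny stand-in backbone, the embedding size, and the use of cosine similarity are assumptions; the paper's actual model and metric-learning training are not reproduced here:

```python
# Verification sketch: shared embedding network + cosine similarity + AUC.
# Backbone, embedding size, and similarity measure are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

backbone = nn.Sequential(  # stand-in CNN; a ResNet would be more typical
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))

def embed(x):
    return F.normalize(backbone(x), dim=1)  # unit-norm embeddings

@torch.no_grad()
def verify(img_a, img_b):
    # Cosine similarity of the two embeddings; higher = more likely the
    # two scans belong to the same patient.
    return (embed(img_a) * embed(img_b)).sum(dim=1)

# Toy evaluation on random pairs with fixed same/different labels.
a, b = torch.randn(32, 1, 64, 64), torch.randn(32, 1, 64, 64)
labels = torch.tensor([0, 1] * 16)  # 1 = same patient, 0 = different
scores = verify(a, b)
print("AUC:", roc_auc_score(labels.numpy(), scores.numpy()))
```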

    PadChest: A large chest x-ray image dataset with multi-label annotated reports

    We present a labeled, large-scale, high-resolution chest x-ray dataset for the automated exploration of medical images along with their associated reports. This dataset includes more than 160,000 images obtained from 67,000 patients, interpreted and reported by radiologists at Hospital San Juan (Spain) from 2009 to 2017, covering six different position views and additional information on image acquisition and patient demography. The reports were labeled with 174 different radiographic findings, 19 differential diagnoses, and 104 anatomic locations organized as a hierarchical taxonomy and mapped onto standard Unified Medical Language System (UMLS) terminology. Of these reports, 27% were manually annotated by trained physicians, and the remaining set was labeled using a supervised method based on a recurrent neural network with attention mechanisms. The generated labels were then validated on an independent test set, achieving a 0.93 Micro-F1 score. To the best of our knowledge, this is one of the largest public chest x-ray databases suitable for training supervised models on radiographs, and the first to contain radiographic reports in Spanish. The PadChest dataset can be downloaded from http://bimcv.cipf.es/bimcv-projects/padchest/
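
    A minimal sketch of an RNN-with-attention multi-label report labeler of the kind described above, together with the Micro-F1 evaluation that is quoted. The vocabulary size, hidden size, and tokenization are placeholders; only the 174-label output space comes from the abstract:

```python
# Sketch: bidirectional GRU with additive attention producing one logit per
# radiographic finding, evaluated with micro-averaged F1. Hyperparameters
# other than the 174-label space are assumptions.
import torch
import torch.nn as nn
from sklearn.metrics import f1_score

VOCAB, HID, LABELS = 5000, 128, 174

class ReportLabeler(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HID)
        self.rnn = nn.GRU(HID, HID, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * HID, 1)      # attention score per token
        self.out = nn.Linear(2 * HID, LABELS)  # one logit per finding
    def forward(self, tokens):
        h, _ = self.rnn(self.emb(tokens))       # (batch, time, 2*HID)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over tokens
        ctx = (w * h).sum(dim=1)                # attention-weighted summary
        return self.out(ctx)                    # multi-label logits

model = ReportLabeler()
logits = model(torch.randint(0, VOCAB, (4, 40)))  # 4 reports, 40 tokens each
pred = (torch.sigmoid(logits) > 0.5).int().numpy()
true = torch.randint(0, 2, (4, LABELS)).numpy()   # stand-in gold labels
print("Micro-F1:", f1_score(true, pred, average="micro"))
```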