    Thoracic Disease Identification and Localization with Limited Supervision

    Accurate identification and localization of abnormalities in radiology images play an integral part in clinical diagnosis and treatment planning. Building a highly accurate prediction model for these tasks usually requires a large number of images manually annotated with class labels and sites of abnormalities. In reality, however, such annotated data are expensive to acquire, especially data with location annotations. We therefore need methods that work well with only a small amount of location annotation. To address this challenge, we present a unified approach that simultaneously performs disease identification and localization through the same underlying model for all images. We demonstrate that our approach can effectively leverage both class labels and limited location annotations, and significantly outperforms the comparative reference baseline in both classification and localization tasks.
    Comment: Conference on Computer Vision and Pattern Recognition 2018 (CVPR 2018). V1: CVPR submission; V2: +supplementary; V3: CVPR camera-ready; V4: correction, updated reference baseline results according to their latest post; V5: minor correction; V6: identification results using NIH data splits and various image models

    Utilizing Chest X-rays for Age Prediction and Gender Classification

    In this paper, we present a framework for automatically predicting the gender and age of a patient from chest x-rays (CXRs). The work derives from common situations in medical imaging where the gender or age of a patient may be missing, or where the x-ray is of poor quality, leaving the medical practitioner unable to treat the patient appropriately. The proposed framework comprises training a large CNN that jointly outputs the gender and age for a CXR. For feature extraction, transfer learning was employed using the EfficientNetB0 architecture, with a custom trainable top layer for both classification and prediction. This framework was applied to a combination of publicly available data, which collectively represents a heterogeneous dataset varying in race, location, patient health, and image quality. Our results are robust with respect to these factors, as none of them was used as input to improve the results. In conclusion, deep learning can be applied in the medical imaging domain to automatically predict characteristics of a patient.
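    The joint gender/age setup described above amounts to a shared backbone with two output heads, one for binary classification and one for regression. A minimal sketch of that structure is below; note this is an illustration, not the paper's implementation — the authors use a pretrained EfficientNetB0 backbone, whereas this sketch substitutes a tiny stand-in CNN so it runs without downloaded weights, and all layer sizes are assumptions.

    ```python
    import torch
    import torch.nn as nn

    class AgeGenderNet(nn.Module):
        """Shared CNN backbone feeding two heads: gender (logit) and age (years).

        Stand-in backbone for illustration only; the paper's framework uses
        transfer learning from EfficientNetB0 with a custom trainable top.
        """
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.gender_head = nn.Linear(32, 1)  # binary classification logit
            self.age_head = nn.Linear(32, 1)     # age regression, in years

        def forward(self, x):
            feats = self.backbone(x)
            return self.gender_head(feats), self.age_head(feats)

    model = AgeGenderNet()
    cxr = torch.randn(4, 1, 224, 224)  # batch of 4 single-channel CXRs
    gender_logits, age_pred = model(cxr)
    ```

    Training such a model typically combines the two objectives, e.g. a weighted sum of binary cross-entropy on the gender logits and mean-squared error on the age output, so both heads update the shared features.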