
    Radiology and Global Health: The Case for a New Subspecialty

    In high- and medium-income countries, the use of radiology has grown substantially in recent decades, but in the developing world access to medical imaging remains a critical problem. Unlike more structured efforts in the field of global health, interventions in global radiology have been largely unplanned, fragmented, and sometimes irrelevant to the needs of the recipient society, and they have not resulted in significant progress: access to medical imaging around the world remains dismal. There is therefore a clear and urgent need for the radiology community to develop a vision for global radiology, beginning with defining the scope of the subject and establishing measurable goals. Agreement must be reached to declare global radiology a bona fide subspecialty of radiology, followed soon after by the establishment of divisions of global radiology in academic radiology departments. Residents and medical students should be taught how physicians in low-income countries practice medicine without access to adequate radiology and, as part of training and electives, should accompany global health teams to countries where the need for radiology services is great. Global scholar exchange and sabbatical opportunities should be offered to staff radiologists. Successful implementation of a unified vision of global radiology has the potential to improve access to medical imaging on a large scale, and radiology journals dedicated to the promotion of global radiology can play an important role by providing forums for discussion, analysis, and the sharing of field experiences. In this discussion we have attempted to make the case for assigning global radiology subspecialty status.

    Cross-Modal Data Programming Enables Rapid Medical Machine Learning

    Labeling training datasets has become a key barrier to building medical machine learning models. One strategy is to generate training labels programmatically, for example by applying natural language processing pipelines to text reports associated with imaging studies. We propose cross-modal data programming, which generalizes this intuitive strategy in a theoretically grounded way that enables simpler, clinician-driven input, reduces required labeling time, and improves with additional unlabeled data. In this approach, clinicians generate training labels for models defined over a target modality (e.g. images or time series) by writing rules over an auxiliary modality (e.g. text reports). The resulting technical challenge consists of estimating the accuracies and correlations of these rules; we extend a recent unsupervised generative modeling technique to handle this cross-modal setting in a provably consistent way. Across four applications in radiography, computed tomography, and electroencephalography, and using only several hours of clinician time, our approach matches or exceeds the efficacy of physician-months of hand-labeling with statistical significance, demonstrating a fundamentally faster and more flexible way of building machine learning models in medicine.
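    The core mechanic described above can be sketched in a few lines: clinicians write simple rules over the auxiliary modality (text reports), and the rules' noisy votes are combined into a training label for the paired target-modality example (the image). The paper combines rules with an unsupervised generative model that estimates their accuracies and correlations; as a simplified stand-in, this sketch uses a majority vote. The rule names and report phrases are illustrative assumptions, not the authors' code.

```python
# Cross-modal weak labeling sketch: rules read the report, the resulting
# label is attached to the paired image. Majority vote stands in for the
# paper's generative label model.
ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

def rule_mentions_pneumothorax(report: str) -> int:
    return POSITIVE if "pneumothorax" in report.lower() else ABSTAIN

def rule_explicit_negation(report: str) -> int:
    return NEGATIVE if "no pneumothorax" in report.lower() else ABSTAIN

def rule_normal_study(report: str) -> int:
    return NEGATIVE if "normal study" in report.lower() else ABSTAIN

RULES = [rule_mentions_pneumothorax, rule_explicit_negation, rule_normal_study]

def weak_label(report: str) -> int:
    """Majority vote over non-abstaining rules; abstain on ties or no votes."""
    votes = [r(report) for r in RULES if r(report) != ABSTAIN]
    if not votes:
        return ABSTAIN
    pos, neg = votes.count(POSITIVE), votes.count(NEGATIVE)
    if pos == neg:
        return ABSTAIN
    return POSITIVE if pos > neg else NEGATIVE
```

    A majority vote treats all rules as equally reliable; the point of the paper's generative model is precisely to replace this assumption with learned rule accuracies and correlations.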

    BiomedJourney: Counterfactual Biomedical Image Generation by Instruction-Learning from Multimodal Patient Journeys

    Rapid progress has been made in instruction-learning for image editing with natural-language instruction, as exemplified by InstructPix2Pix. In biomedicine, such methods can be applied to counterfactual image generation, which helps differentiate causal structure from spurious correlation and facilitates robust image interpretation for disease progression modeling. However, generic image-editing models are ill-suited for the biomedical domain, and counterfactual biomedical image generation is largely underexplored. In this paper, we present BiomedJourney, a novel method for counterfactual biomedical image generation by instruction-learning from multimodal patient journeys. Given a patient with two biomedical images taken at different time points, we use GPT-4 to process the corresponding imaging reports and generate a natural language description of disease progression. The resulting triples (prior image, progression description, new image) are then used to train a latent diffusion model for counterfactual biomedical image generation. Given the relative scarcity of image time series data, we introduce a two-stage curriculum that first pretrains the denoising network using the much more abundant single image-report pairs (with dummy prior image), and then continues training using the counterfactual triples. Experiments using the standard MIMIC-CXR dataset demonstrate the promise of our method. In a comprehensive battery of tests on counterfactual medical image generation, BiomedJourney substantially outperforms prior state-of-the-art methods in instruction image editing and medical image generation such as InstructPix2Pix and RoentGen. To facilitate future study in counterfactual medical generation, we plan to release our instruction-learning code and pretrained models.

    Comment: Project page & demo: https://aka.ms/biomedjourne
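    The data setup the abstract describes can be sketched as follows: consecutive studies of the same patient are paired into (prior image, progression description, new image) triples, while single image-report pairs with a dummy prior image supply the abundant stage-one pretraining data. The record fields, the placeholder progression text (the paper generates it with GPT-4 from the two reports), and the helper names are illustrative assumptions, not the released code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Study:
    patient_id: str
    timestamp: int        # e.g. days since first visit
    image_path: str
    report: str

@dataclass
class Triple:
    prior_image: Optional[str]   # None stands in for the dummy prior image
    progression: str
    new_image: str

def make_triples(studies: list[Study]) -> list[Triple]:
    """Form counterfactual triples from consecutive studies per patient."""
    by_patient: dict[str, list[Study]] = {}
    for s in studies:
        by_patient.setdefault(s.patient_id, []).append(s)
    triples = []
    for series in by_patient.values():
        series.sort(key=lambda s: s.timestamp)
        for prev, curr in zip(series, series[1:]):
            # In the paper, GPT-4 summarizes the change between the two
            # reports; here the two reports are concatenated as a placeholder.
            desc = f"From: {prev.report} To: {curr.report}"
            triples.append(Triple(prev.image_path, desc, curr.image_path))
    return triples

def stage_one_pairs(studies: list[Study]) -> list[Triple]:
    """Stage 1 of the curriculum: abundant single image-report pairs with a
    dummy (None) prior image, used to pretrain the denoising network."""
    return [Triple(None, s.report, s.image_path) for s in studies]
```

    The two-stage curriculum then trains the diffusion model first on the `stage_one_pairs` output and only afterwards on the scarcer `make_triples` output.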

    INSPECT: A Multimodal Dataset for Pulmonary Embolism Diagnosis and Prognosis

    Synthesizing information from multiple data sources plays a crucial role in the practice of modern medicine. Current applications of artificial intelligence in medicine often focus on single-modality data due to a lack of publicly available, multimodal medical datasets. To address this limitation, we introduce INSPECT, which contains de-identified longitudinal records from a large cohort of patients at risk for pulmonary embolism (PE), along with ground truth labels for multiple outcomes. INSPECT contains data from 19,402 patients, including CT images, radiology report impression sections, and structured electronic health record (EHR) data (i.e. demographics, diagnoses, procedures, vitals, and medications). Using INSPECT, we develop and release a benchmark for evaluating several baseline modeling approaches on a variety of important PE-related tasks. We evaluate image-only, EHR-only, and multimodal fusion models. Trained models and the de-identified dataset are made available for non-commercial use under a data use agreement. To the best of our knowledge, INSPECT is the largest multimodal dataset integrating 3D medical imaging and EHR for reproducible methods evaluation and research.
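    One common way to combine the image-only and EHR-only baselines the abstract mentions is late fusion, where each modality produces its own risk score and the scores are combined. The convex-combination scheme below is a generic illustration, an assumption rather than the benchmark's actual fusion method.

```python
def fuse_late(image_prob: float, ehr_prob: float, w_image: float = 0.5) -> float:
    """Late fusion sketch: convex combination of per-modality probabilities
    for a PE-related outcome. w_image weights the imaging model."""
    assert 0.0 <= w_image <= 1.0
    assert 0.0 <= image_prob <= 1.0 and 0.0 <= ehr_prob <= 1.0
    return w_image * image_prob + (1.0 - w_image) * ehr_prob
```

    In practice the weight (or a small fusion head over both models' features) would be tuned on a validation split of the dataset.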