70 research outputs found

    Radiologists remember mountains better than radiographs, or do they?

    Expertise with encoding material has been shown to aid long-term memory for that material. It is not clear how relevant this expertise is to image memorability (e.g. radiologists’ memory for radiographs), nor how robust the effect is over time. In two studies we tested scene memory using a standard long-term memory paradigm. The first compared the performance of radiologists to naïve observers on two image sets, chest radiographs and everyday scenes; the second compared radiologists’ memory on an immediate versus a delayed recognition test using musculoskeletal radiographs and forest scenes. Radiologists’ memory was better than novices’ for images of expertise but no different for everyday scenes. With the heterogeneity of the image sets equated, radiologists’ expertise with radiographs afforded them better memory for the musculoskeletal radiographs than for the forest scenes. Enhanced memory for images of expertise disappeared over time, resulting in chance-level performance for both image sets after weeks of delay. Expertise with the material is important for visual memorability, but not to the same extent as the idiosyncratic detail and variability of the image set. A similar memory decline over time for images of expertise as for everyday scenes further suggests that extended familiarity with an image is not a robust factor in visual memorability.

    Comparative effectiveness of intensity modulated radiation therapy to 3-dimensional conformal radiation in locally advanced lung cancer: pathological and clinical outcomes.

    OBJECTIVE: Intensity-modulated radiotherapy (IMRT) has better normal-tissue sparing compared with 3-dimensional conformal radiation (3DCRT). We sought to assess the impact of radiation technique on pathological and clinical outcomes in locally advanced non-small cell lung cancer (LANSCLC) treated with a trimodality strategy. METHODS: Retrospective review of LANSCLC patients treated from August 2012 to August 2018 at Sheba Medical Center, Israel. The trimodality strategy consisted of concomitant chemoradiation to 60 Gray (Gy) followed by completion surgery. The planning target volume (PTV) was defined by co-registered PET/CT. Here we compare pathological regression, surgical margin status, local control (LC) rates, disease-free survival (DFS) and overall survival (OS) between 3DCRT and IMRT. RESULTS: Our cohort consisted of 74 patients with mean age 62.9 years; male in 51/74 (69%), adenocarcinoma in 46/74 (62.1%), stage 3 in 59/74 (79.7%) and chemotherapy in 72/74 (97.3%). Radiation mean dose: 59.2 Gy (SD ± 3.8). Radiation technique: 3DCRT in 51/74 (68.9%), IMRT in 23/74 (31%). Other variables were similar between groups. Major pathological response (pathological complete response or less than 10% residual tumor cells) was similar: 32/51 (62.7%) in 3DCRT and 15/23 (65.2%) in IMRT, p=0.83. Pathological complete response (pCR) rates were similar: 17/51 (33.3%) in 3DCRT and 8/23 (34.8%) in IMRT, p=0.9. Surgical margins were negative in 46/51 (90.1%) in 3DCRT vs. 17/19 (89.4%) in IMRT (p=1.0). The 2-year LC rate was 81.6% (95% CI 69-89.4%); DFS 58.3% (95% CI 45.5-69%) and 3-year OS 70% (95% CI 57-80%). Comparing radiation techniques, there were no significant differences in LC (p=0.94), DFS (p=0.33) or OS (p=0.72). CONCLUSION: When used to treat LANSCLC in the neoadjuvant setting, both IMRT and 3DCRT produce comparable pathological and clinical outcomes. ADVANCES IN KNOWLEDGE: This study validates the real-world effectiveness of IMRT compared with 3DCRT.

    The IASLC Lung Cancer Staging Project: A Renewed Call to Participation

    Over the past two decades, the International Association for the Study of Lung Cancer (IASLC) Staging Project has been a steady source of evidence-based recommendations for the TNM classification for lung cancer published by the Union for International Cancer Control and the American Joint Committee on Cancer. The Staging and Prognostic Factors Committee of the IASLC is now issuing a call for participation in the next phase of the project, which is designed to inform the ninth edition of the TNM classification for lung cancer. Following the case recruitment model for the eighth edition database, volunteer site participants are asked to submit data on patients whose lung cancer was diagnosed between January 1, 2011, and December 31, 2019, to the project by means of a secure, electronic data capture system provided by Cancer Research And Biostatistics in Seattle, Washington. Alternatively, participants may transfer existing data sets. The continued success of the IASLC Staging Project in achieving its objectives will depend on the extent of international participation, the degree to which cases are entered directly into the electronic data capture system, and how closely externally submitted cases conform to the data elements for the project.

    Invited Commentary

    T1 Lung Cancers: Sensitivity of Diagnosis with Fluorodeoxyglucose PET

    Multimodal fusion models for pulmonary embolism mortality prediction

    Pulmonary embolism (PE) is a common, life-threatening cardiovascular emergency. Risk stratification is one of the core principles of acute PE management and determines the choice of diagnostic and therapeutic strategies. In routine clinical practice, clinicians rely on the patient’s electronic health record (EHR) to provide context for their medical imaging interpretation. Most deep learning models for radiology applications consider only pixel-value information, without the clinical context; only a few integrate both clinical and imaging data. In this work, we develop and compare multimodal fusion models that combine volumetric pixel data and clinical patient data for automatic risk stratification of PE. Our best performing model is an intermediate fusion model that incorporates both bilinear attention and TabNet, and can be trained in an end-to-end manner. The results show that multimodality boosts performance by up to 14%, with an area under the curve (AUC) of 0.96 for assessing PE severity, a sensitivity of 90% and a specificity of 94%, pointing to the value of using multimodal data to automatically assess PE severity.
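    The abstract describes, but does not show, an intermediate fusion architecture. The sketch below is a minimal, hypothetical PyTorch illustration of that general pattern, not the authors' implementation: a small 3D CNN stands in for the volumetric CT encoder, a plain MLP stands in for TabNet, and nn.Bilinear stands in for the bilinear-attention interaction between modalities; all layer sizes, class names and shapes are illustrative assumptions.

```python
# Hedged sketch of intermediate multimodal fusion for PE severity (assumed shapes/sizes).
import torch
import torch.nn as nn


class VolumeEncoder(nn.Module):
    """Encodes a CT volume (B, 1, D, H, W) into a feature vector."""
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # global pooling -> (B, 32, 1, 1, 1)
        )
        self.proj = nn.Linear(32, out_dim)

    def forward(self, x):
        return self.proj(self.features(x).flatten(1))


class TabularEncoder(nn.Module):
    """Stand-in for TabNet: a small MLP over clinical/EHR features."""
    def __init__(self, n_features: int, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, out_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class IntermediateFusionPE(nn.Module):
    """Fuses image and tabular embeddings with a bilinear layer, then classifies."""
    def __init__(self, n_tabular: int, emb_dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.image_enc = VolumeEncoder(emb_dim)
        self.tab_enc = TabularEncoder(n_tabular, emb_dim)
        # nn.Bilinear approximates the bilinear-attention interaction in the paper.
        self.fusion = nn.Bilinear(emb_dim, emb_dim, emb_dim)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(emb_dim, n_classes))

    def forward(self, volume, tabular):
        z_img = self.image_enc(volume)
        z_tab = self.tab_enc(tabular)
        return self.head(self.fusion(z_img, z_tab))   # severity logits


if __name__ == "__main__":
    model = IntermediateFusionPE(n_tabular=20)
    ct = torch.randn(2, 1, 32, 64, 64)   # toy CT volumes
    ehr = torch.randn(2, 20)             # toy clinical features
    print(model(ct, ehr).shape)          # torch.Size([2, 2])
```

    The point of the intermediate-fusion design is that both encoders and the fusion layer receive gradients from the same severity loss, so the whole model can be trained end-to-end, as the abstract states.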