19 research outputs found

    Feature-adapted fast slant stack

    ABSTRACT: […] O(N log N), where N = n² is the number of pixels. This new method leads to an efficient implementation of both the Feature-adapted Radon and Beamlet transforms, which outperforms our previous work.
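
    The quoted complexity is easier to appreciate with a rough operation count. The sketch below is a back-of-the-envelope illustration, not from the paper; it assumes a direct Radon evaluation over an n × n image costs on the order of n³ operations, versus N log₂ N for the fast transform:

    ```python
    import math

    # Rough operation-count comparison for an n x n image (N = n^2 pixels).
    # Assumption (not from the abstract): a direct Radon evaluation costs ~n^3
    # operations, while the fast slant stack costs ~N*log2(N) operations.
    for n in (256, 512, 1024):
        N = n * n
        direct = n ** 3
        fast = N * math.log2(N)
        print(f"n={n:5d}  direct~{direct:.1e}  fast~{fast:.1e}  ratio~{direct / fast:.0f}x")
    ```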

    Vinculin–actin interaction couples actin retrograde flow to focal adhesions, but is dispensable for focal adhesion growth

    In migrating cells, integrin-based focal adhesions (FAs) assemble in protruding lamellipodia in association with rapid filamentous actin (F-actin) assembly and retrograde flow. How dynamic F-actin is coupled to FAs is not known. We analyzed the role of vinculin in integrating F-actin and FA dynamics by vinculin gene disruption in primary fibroblasts. Vinculin slowed F-actin flow in maturing FAs to establish a lamellipodium–lamellum border and generate high extracellular matrix (ECM) traction forces. In addition, vinculin promoted nascent FA formation and turnover in lamellipodia and inhibited the frequency and rate of FA maturation. Characterization of a vinculin point mutant that specifically disrupts F-actin binding showed that the vinculin–F-actin interaction is critical for these functions. However, FA growth rate correlated with F-actin flow speed independently of vinculin. Thus, vinculin functions as a molecular clutch, organizing leading-edge F-actin, generating ECM traction, and promoting FA formation and turnover, but it is dispensable for FA growth.

    Extracorporeal Membrane Oxygenation for Severe Acute Respiratory Distress Syndrome associated with COVID-19: An Emulated Target Trial Analysis.

    RATIONALE: Whether patients with COVID-19 may benefit from extracorporeal membrane oxygenation (ECMO) compared with conventional invasive mechanical ventilation (IMV) remains unknown. OBJECTIVES: To estimate the effect of ECMO on 90-day mortality versus IMV alone. METHODS: Among 4,244 critically ill adult patients with COVID-19 included in a multicenter cohort study, we emulated a target trial comparing the treatment strategies of initiating ECMO vs. no ECMO within 7 days of IMV in patients with severe acute respiratory distress syndrome (PaO2/FiO2 < 80 or PaCO2 ≥ 60 mmHg). We controlled for confounding using a multivariable Cox model based on predefined variables. MAIN RESULTS: 1,235 patients met the full eligibility criteria for the emulated trial, among whom 164 initiated ECMO. The ECMO strategy had a higher survival probability at day 7 from the onset of eligibility (87% vs. 83%; risk difference: 4%, 95% CI: 0 to 9%), which decreased during follow-up (survival at day 90: 63% vs. 65%; risk difference: -2%, 95% CI: -10 to 5%). However, ECMO was associated with higher survival when performed in high-volume ECMO centers or in regions where a specific ECMO network organization was set up to handle high demand, and when initiated within the first 4 days of IMV and in profoundly hypoxemic patients. CONCLUSIONS: In an emulated trial based on a nationwide COVID-19 cohort, survival under an ECMO strategy differed over time compared with a no-ECMO strategy. However, ECMO was consistently associated with better outcomes when performed in high-volume centers and in regions with ECMO capacities specifically organized to handle high demand.
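
    The abstract does not give implementation details for the confounder adjustment. Purely as a hedged sketch, a multivariable Cox model of the kind described could be fit as below; the lifelines library choice, the toy data, and all column names are assumptions, not the study's actual code or variables:

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical toy cohort (values and column names are illustrative only):
    # one row per eligible patient.
    df = pd.DataFrame({
        "ecmo":          [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],   # ECMO within 7 days of IMV
        "age":           [52, 61, 45, 58, 70, 49, 66, 55, 60, 63],
        "pao2_fio2":     [65, 75, 70, 60, 78, 55, 72, 68, 62, 74],
        "followup_days": [90, 30, 90, 45, 12, 90, 90, 60, 90, 25],  # time to death/censoring
        "death_90d":     [0, 1, 0, 1, 1, 0, 0, 1, 0, 1],            # 1 = died, 0 = censored
    })

    # Multivariable Cox model: hazard of death as a function of ECMO and confounders.
    cph = CoxPHFitter()
    cph.fit(df, duration_col="followup_days", event_col="death_90d")
    cph.print_summary()  # hazard ratios with 95% CIs for ECMO and the confounders
    ```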

    Automatic Feature Selection for Improved Interpretability on Whole Slide Imaging

    Deep learning methods are widely used in medical applications to assist medical doctors in their daily routine. While performance reaches expert level, interpretability (highlighting how and what a trained model learned and why it makes a specific decision) is the next important challenge that deep learning methods need to address in order to be fully integrated into the medical field. In this paper, we address the question of interpretability in the context of whole slide image (WSI) classification by formalizing the design of WSI classification architectures, and we propose a piece-wise interpretability approach relying on gradient-based methods, feature visualization, and the multiple instance learning context. After training two WSI classification architectures on the Camelyon-16 WSI dataset, highlighting the discriminative features learned, and validating our approach with pathologists, we propose a novel way of computing slide-level interpretability heat-maps, based on the extracted features, that improves tile-level classification performance. We measure the improvement using a tile-level AUC, which we call the Localization AUC, and show an improvement of more than 0.2. We also validate our results with a RemOve And Retrain (ROAR) measure. Then, after studying the impact of the number of features used for heat-map computation, we propose a corrective approach, relying on activation colocalization of selected features, that improves the performance and stability of our proposed method.
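
    The Localization AUC mentioned above is a tile-level AUC computed from heat-map scores against tile-level annotations; a minimal sketch follows (the variable names and toy values are illustrative, and the paper's exact tile-scoring procedure is not specified here):

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical inputs: one score and one label per tile of a slide.
    heatmap_scores = np.array([0.91, 0.12, 0.77, 0.05, 0.64])  # heat-map value per tile
    tile_labels = np.array([1, 0, 1, 0, 0])                     # 1 if tile overlaps annotated tumour

    localization_auc = roc_auc_score(tile_labels, heatmap_scores)
    print(f"tile-level (Localization) AUC: {localization_auc:.3f}")
    ```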

    Diagnosis with Confidence: Deep Learning for Reliable Classification of Squamous Lesions of the Upper Aerodigestive Tract

    Diagnosis of head and neck squamous dysplasia and carcinomas is critical for patient care, cure, and follow-up. It can be challenging, especially for intraepithelial lesions. Even though the latest WHO classification simplified the grading of dysplasia to only two grades (except for oral or oropharyngeal lesions), inter- and intra-observer variability remains substantial, especially for non-specialized pathologists. In this study, we investigated the potential of deep learning to assist the pathologist with automatic and reliable classification of head and neck squamous lesions following the 2022 WHO classification system for the hypopharynx, larynx, trachea, and parapharyngeal space. We created, for the first time, a large-scale database of histological samples intended for developing an automatic diagnostic tool. We developed and trained a weakly supervised model performing classification from whole-slide images. A dual blind review was carried out to define a gold-standard test set, on which our model was able to classify lesions with high accuracy for every class (average AUC: 0.878, 95% CI: 0.834-0.918). Finally, we defined a confidence score for the model predictions, which can be used to identify ambiguous or difficult cases. When the algorithm is applied as a screening tool, such cases can then be referred to pathologists as a priority. Our results demonstrate that the model, associated with confidence measurements, can help in the difficult task of classifying head and neck squamous lesions.
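
    The paper's confidence score is not defined in this abstract. Purely as an illustrative assumption, one common way to build such a score is from the entropy of the slide-level class probabilities, flagging low-confidence slides for priority review:

    ```python
    import numpy as np

    def confidence_score(class_probs: np.ndarray) -> float:
        """Toy confidence score: 1 minus the normalized entropy of the predicted
        class distribution (1 = fully confident, 0 = uniform). This is an
        illustrative choice, not the score defined in the paper."""
        p = np.clip(class_probs, 1e-12, 1.0)
        entropy = -(p * np.log(p)).sum()
        return float(1.0 - entropy / np.log(len(p)))

    # Hypothetical slide-level softmax outputs over lesion classes.
    print(confidence_score(np.array([0.92, 0.05, 0.03])))  # confident prediction: high score
    print(confidence_score(np.array([0.40, 0.35, 0.25])))  # ambiguous prediction: low score
    # Slides below a chosen threshold would be routed to a pathologist first.
    ```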

    Diagnosis with confidence: deep learning for reliable classification of laryngeal dysplasia

    BACKGROUND: Diagnosis of head and neck (HN) squamous dysplasias and carcinomas is critical for patient care, cure, and follow-up. It can be challenging, especially for grading intraepithelial lesions. Despite recent simplification in the latest WHO grading system, inter- and intraobserver variability remains substantial, particularly for nonspecialized pathologists, highlighting the need for new tools to support pathologists. METHODS: In this study, we investigated the potential of deep learning to assist the pathologist with automatic and reliable classification of HN lesions following the 2022 WHO classification system. We created, for the first time, a large-scale database of histological samples (>2,000 slides) intended for developing an automatic diagnostic tool. We developed and trained a weakly supervised model performing classification from whole-slide images (WSI). We evaluated our model on both internal and external test sets, and we defined and validated a new confidence score to assess the predictions, which can be used to identify difficult cases. RESULTS: Our model demonstrated high classification accuracy across all lesion types on both internal and external test sets (average area under the curve [AUC]: 0.878 [95% confidence interval (CI): 0.834-0.918] and 0.886 [95% CI: 0.813-0.947], respectively), and the confidence score allowed for accurate differentiation between reliable and uncertain predictions. CONCLUSION: Our results demonstrate that the model, associated with confidence measurements, can help in the difficult task of classifying HN squamous lesions by limiting variability and detecting ambiguous cases, taking us one step closer to a wider adoption of AI-based assistive tools.
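
    The AUCs above are reported with 95% confidence intervals; one standard way to obtain such an interval is a percentile bootstrap over slides. The sketch below assumes that approach and a binary setting; the resampling scheme actually used in the paper is not stated here:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
        """Percentile-bootstrap 95% CI for a binary AUC (illustrative only)."""
        rng = np.random.default_rng(seed)
        y_true, y_score = np.asarray(y_true), np.asarray(y_score)
        aucs = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(y_true), len(y_true))  # resample slides with replacement
            if len(np.unique(y_true[idx])) < 2:               # skip degenerate resamples
                continue
            aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
        lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return roc_auc_score(y_true, y_score), (lo, hi)

    # Hypothetical slide-level labels and predicted scores.
    auc, (lo, hi) = bootstrap_auc_ci([0, 1, 1, 0, 1, 0, 1, 0],
                                     [0.2, 0.8, 0.7, 0.3, 0.9, 0.4, 0.6, 0.1])
    print(f"AUC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
    ```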

    Automatic grading of cervical biopsies by combining full and self-supervision

    In computational pathology, the application of deep learning to the analysis of whole slide images (WSI) has provided results of unprecedented quality. Due to their enormous size, WSIs have to be split into small images (tiles), which are first encoded and whose representations are then aggregated in order to solve prediction tasks such as prognosis or treatment response. The choice of encoding strategy plays a key role in such algorithms. Current approaches include the use of encodings trained on unrelated data sources, full supervision, or self-supervision. In particular, self-supervised learning (SSL) offers a great opportunity to exploit all the unlabelled data available. However, it often requires large computational resources and can be challenging to train. At the other end of the spectrum, fully supervised methods make use of valuable prior knowledge about the data but involve a costly amount of expert time. This paper proposes a framework to reconcile SSL and full supervision, and measures the trade-off between long SSL training and annotation effort, showing that a combination of both has the potential to substantially increase performance. On a recently organized challenge on grading cervical biopsies, we show that our mixed-supervision scheme reaches high performance (weighted accuracy (WA): 0.945), outperforming both SSL (WA: 0.927) and transfer learning from ImageNet (WA: 0.877). We further provide insights and guidelines for training a clinically impactful classifier with a limited expert and/or computational workload budget. We expect that the combination of full and self-supervision is an interesting strategy for many tasks in computational pathology and will be widely adopted by the field.
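
    The weighted accuracy (WA) quoted above is a class-weighted metric. As an illustrative sketch only (the exact weighting used by the cervical-biopsy challenge is not given here, so this assumes per-class weights supplied by the evaluator):

    ```python
    import numpy as np

    def weighted_accuracy(y_true, y_pred, class_weights):
        """Illustrative class-weighted accuracy: each sample counts with the
        weight of its true class, so errors on heavily weighted (e.g. high-grade)
        classes cost more. Not necessarily the challenge's exact definition."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        w = np.array([class_weights[c] for c in y_true], dtype=float)
        return float(np.sum(w * (y_true == y_pred)) / np.sum(w))

    # Hypothetical 4-grade example with heavier weight on higher grades.
    y_true = [0, 1, 2, 3, 3, 1, 0, 2]
    y_pred = [0, 1, 2, 2, 3, 1, 1, 2]
    print(weighted_accuracy(y_true, y_pred, class_weights={0: 1, 1: 2, 2: 3, 3: 4}))  # 0.75
    ```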