
    FastPathology: An open-source platform for deep learning-based research and decision support in digital pathology

    Deep convolutional neural networks (CNNs) are the current state of the art for digital analysis of histopathological images. The large size of whole-slide microscopy images (WSIs) requires advanced memory handling to read, display and process these images. There are several open-source platforms for working with WSIs, but few support deployment of CNN models. These applications use third-party solutions for inference, making them less user-friendly and unsuitable for high-performance image analysis. To make deployment of CNNs user-friendly and feasible on low-end machines, we have developed a new platform, FastPathology, using the FAST framework and C++. It minimizes memory usage for reading and processing WSIs, and supports deployment of CNN models with real-time interactive visualization of results. Runtime experiments were conducted on four different use cases, using different architectures, inference engines, hardware configurations and operating systems. Memory usage for reading, visualizing, zooming and panning a WSI was measured for FastPathology and three existing platforms. FastPathology's memory usage was similar to that of the other C++-based application and considerably lower than that of the two Java-based platforms. The choice of neural network model, inference engine, hardware and processors influenced runtime considerably. FastPathology thus includes all steps needed for efficient visualization and processing of WSIs in a single application, including inference of CNNs with real-time display of the results. Source code, binary releases and test data are available on GitHub at https://github.com/SINTEFMedtek/FAST-Pathology/.
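
    To make the memory argument concrete, here is a minimal sketch of patch-wise WSI reading in Python, using the openslide-python library and a hypothetical file name. FastPathology itself implements this in C++ with FAST, so this only illustrates why tiled access keeps memory bounded, not the platform's actual implementation.

```python
import numpy as np
import openslide

slide = openslide.OpenSlide("example_slide.svs")  # hypothetical file name
level = 0                                         # full-resolution level
width, height = slide.level_dimensions[level]
tile = 1024

for y in range(0, height, tile):
    for x in range(0, width, tile):
        # Only one 1024x1024 RGBA tile is resident in memory at a time,
        # instead of the full multi-gigapixel image (edge tiles are padded).
        patch = np.asarray(slide.read_region((x, y), level, (tile, tile)))
        # ... run CNN inference on `patch` here ...

slide.close()
```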

    Code-Free Development and Deployment of Deep Segmentation Models for Digital Pathology

    Application of deep learning to histopathological whole slide images (WSIs) holds promise for improving diagnostic efficiency and reproducibility, but is largely dependent on the ability to write computer code or purchase commercial solutions. We present a code-free pipeline utilizing free-to-use, open-source software (QuPath, DeepMIB, and FastPathology) for creating and deploying deep learning-based segmentation models for computational pathology. We demonstrate the pipeline on a use case of separating epithelium from stroma in colonic mucosa. A dataset of 251 annotated WSIs, comprising 140 hematoxylin-eosin (HE)-stained and 111 CD3-immunostained colon biopsy WSIs, was developed through active learning using the pipeline. On a hold-out test set of 36 HE-stained and 21 CD3-stained WSIs, mean intersection-over-union scores of 95.5% and 95.3%, respectively, were achieved for epithelium segmentation. We demonstrate pathologist-level segmentation accuracy and clinically acceptable runtime performance, and show that pathologists without programming experience can create near state-of-the-art segmentation solutions for histopathological WSIs using only free-to-use software. The study further demonstrates the strength of open-source solutions in their ability to create generalizable, open pipelines, from which trained models and predictions can be seamlessly exported in open formats and used in external solutions. All scripts, trained models, a video tutorial, and the full dataset of 251 WSIs with ~31,000 epithelium annotations are made openly available to accelerate research in the field.
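
    For reference, the reported metric can be sketched as follows; the function and mask names are hypothetical, and the study itself used QuPath/FastPathology exports rather than custom code.

```python
import numpy as np

def iou(pred: np.ndarray, ref: np.ndarray) -> float:
    """Intersection over union of two boolean masks (1.0 if both are empty)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    union = np.logical_or(pred, ref).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(pred, ref).sum() / union)
```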

    H2G-Net: A multi-resolution refinement approach for segmentation of breast cancer region in gigapixel histopathological images

    Over the past decades, histopathological cancer diagnostics has become more complex, and the increasing number of biopsies is a challenge for most pathology laboratories. The development of automatic methods for evaluating histopathological cancer sections would therefore be of value. In this study, we used 624 whole slide images (WSIs) of breast cancer from a Norwegian cohort. We propose a cascaded convolutional neural network design, called H2G-Net, for segmentation of the breast cancer region from gigapixel histopathological images. The design involves a detection stage using a patch-wise method and a refinement stage using a convolutional autoencoder. To validate the design, we conducted an ablation study to assess the impact of selected components in the pipeline on tumor segmentation. Guiding segmentation, using hierarchical sampling and deep heatmap refinement, proved to be beneficial when segmenting the histopathological images. We found a significant improvement when using a refinement network for post-processing the generated tumor segmentation heatmaps. The overall best design achieved a Dice similarity coefficient of 0.933±0.069 on an independent test set of 90 WSIs. The design outperformed single-resolution approaches, such as cluster-guided, patch-wise high-resolution classification using MobileNetV2 (0.872±0.092) and a low-resolution U-Net (0.874±0.128). In addition, the design performed consistently on WSIs across all histological grades, and segmentation of a representative ×400 WSI took ~58 s using only the central processing unit. The findings demonstrate the potential of utilizing a refinement network to improve patch-wise predictions. The solution is efficient and does not require overlapping patch inference or ensembling. Furthermore, we showed that deep neural networks can be trained using a random sampling scheme that balances multiple different labels simultaneously, without the need to store patches on disk. Future work should involve more efficient patch generation and sampling, as well as improved clustering.
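
    The two-stage cascade can be sketched schematically in PyTorch as below. The layer sizes and module names are stand-ins (hypothetical), not the published H2G-Net architecture or weights, but they show the idea: a patch-wise classifier yields a low-resolution heatmap, which a refinement network post-processes together with a downsampled copy of the image.

```python
import torch
import torch.nn as nn

patch_clf = nn.Sequential(               # stand-in for, e.g., MobileNetV2
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1), nn.Sigmoid(),
)
refiner = nn.Sequential(                 # stand-in for the conv. autoencoder
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)

patches = torch.rand(64, 3, 256, 256)          # dummy 8x8 grid of patches
heatmap = patch_clf(patches).view(1, 1, 8, 8)  # stage 1: patch-wise heatmap
lowres = torch.rand(1, 3, 8, 8)                # WSI downsampled to heatmap size
refined = refiner(torch.cat([lowres, heatmap], dim=1))  # stage 2: refinement
```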


    Noninvasive intracranial pressure assessment by optic nerve sheath diameter: automated measurements as an alternative to clinician-performed measurements

    Introduction: Optic nerve sheath diameter (ONSD) has shown promise as a noninvasive parameter for estimating intracranial pressure (ICP). In this study, we evaluated a novel automated method of measuring the ONSD in transorbital ultrasound imaging. Methods: From adult traumatic brain injury (TBI) patients with invasive ICP monitoring, bedside manual ONSD measurements and ultrasound videos of the optic nerve sheath complex were simultaneously acquired. Automatic ONSD measurements were obtained by processing the ultrasound videos with novel software based on a machine learning approach for segmentation of the optic nerve sheath. Agreement between manual and automated measurements, as well as their correlation to invasive ICP, was evaluated. Furthermore, the ability to distinguish dichotomized ICP for manual and automatic measurements of ONSD was compared, both for ICP dichotomized at 20 mmHg and at the 50th percentile (14 mmHg). Finally, we performed an exploratory subgroup analysis based on the software's judgment of optic nerve axis alignment to elucidate the reasons for variation in the agreement between automatic and manual measurements. Results: A total of 43 ultrasound examinations were performed on 25 adult patients with TBI, resulting in 86 image sequences covering the right and left eyes. The median pairwise difference between automatically and manually measured ONSD was 0.06 mm (IQR −0.44 to 0.38 mm; p = 0.80). The manually measured ONSD showed a positive correlation with ICP, while the automatically measured ONSD showed a trend toward, but not a statistically significant, correlation with ICP. When examining the ability to distinguish dichotomized ICP, manual and automatic measurements performed with similar accuracy both for an ICP cutoff at 20 mmHg (manual: AUC 0.74, 95% CI 0.58–0.88; automatic: AUC 0.83, 95% CI 0.66–0.93) and for an ICP cutoff at 14 mmHg (manual: AUC 0.70, 95% CI 0.52–0.85; automatic: AUC 0.68, 95% CI 0.48–0.83). In the exploratory subgroup analysis, we found that agreement between measurements was higher in the subgroup in which the software rated the optic nerve axis alignment as good than in the subgroup rated intermediate/poor. Conclusion: The novel automated method of measuring the ONSD on ultrasound videos using segmentation of the optic nerve sheath showed reasonable agreement with manual measurements and performed equally well in distinguishing high and low ICP. Data availability: the raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Funding: the South-Eastern Norway Regional Health Authority.
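
    As a rough illustration of what an automated ONSD measurement involves, a sketch follows. The function, mask layout and pixel spacing are assumptions for illustration only; the paper's software measures from a machine-learning segmentation of the sheath, and ONSD is conventionally measured about 3 mm behind the globe.

```python
import numpy as np

def onsd_at_depth(mask: np.ndarray, row: int, mm_per_px: float) -> float:
    """Lateral extent of a binary sheath mask at one image row, in mm."""
    cols = np.flatnonzero(mask[row])     # columns labeled as sheath
    if cols.size == 0:
        return 0.0
    return (cols[-1] - cols[0] + 1) * mm_per_px
```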

    Real-Time Echocardiography Guidance for Optimized Apical Standard Views

    Measurements of cardiac function such as left ventricular ejection fraction and myocardial strain are typically based on 2-D ultrasound imaging. The reliability of these measurements depends on correct posing of the transducer, such that the 2-D imaging plane properly aligns with the heart in the standard measurement views; it is thus dependent on the operator's skill. We propose a deep learning tool that suggests transducer movements to help users navigate toward the required standard views while scanning. The tool can simplify echocardiography for less experienced users and improve image standardization for more experienced users. Training data were generated by slicing 3-D ultrasound volumes, which permits simulation of the movements of a 2-D transducer. Neural networks were then trained to regress the transducer position from the resulting images. The method was validated and tested on 2-D images from several data sets representative of a prospective clinical setting. The method proposed an adequate transducer movement 75% of the time when averaging over all degrees of freedom, and 95% of the time when considering transducer rotation alone. Real-time application examples illustrate the direct relation between the transducer movements, the ultrasound image and the provided feedback.
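
    The training-data generation can be illustrated with a small sketch: extracting an oblique 2-D plane from a 3-D volume simulates a transducer movement. The function below handles only one rotation axis and uses made-up names; the actual tool covers the full six degrees of freedom.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def slice_volume(vol, center, angle_deg, size=48):
    """Sample a 2-D plane through `center`, rotated about the z-axis."""
    a = np.deg2rad(angle_deg)
    u = np.array([np.cos(a), np.sin(a), 0.0])   # in-plane row direction
    v = np.array([0.0, 0.0, 1.0])               # in-plane column direction
    i, j = np.meshgrid(np.arange(size) - size / 2,
                       np.arange(size) - size / 2, indexing="ij")
    pts = (np.asarray(center, float)[:, None, None]
           + u[:, None, None] * i + v[:, None, None] * j)
    return map_coordinates(vol, pts, order=1)   # linear interpolation

vol = np.random.rand(64, 64, 64)                # dummy "ultrasound" volume
img = slice_volume(vol, center=(32, 32, 32), angle_deg=15)  # one 2-D slice
```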

    Deep Learning for Improved Precision and Reproducibility of Left Ventricular Strain in Echocardiography: A Test-Retest Study

    Aims: Assessment of left ventricular (LV) function by echocardiography is hampered by modest test-retest reproducibility. A novel artificial intelligence (AI) method based on deep learning provides fully automated measurements of LV global longitudinal strain (GLS) and may improve the clinical utility of echocardiography by reducing user-related variability. The aim of this study was to assess within-patient test-retest reproducibility of LV GLS measured by the novel AI method in repeated echocardiograms recorded by different echocardiographers and to compare the results to manual measurements. Methods: Two test-retest data sets (n = 40 and n = 32) were obtained at separate centers. Repeated recordings were acquired in immediate succession by 2 different echocardiographers at each center. For each data set, 4 readers measured GLS in both recordings using a semiautomatic method to construct test-retest interreader and intrareader scenarios. Agreement, mean absolute difference, and minimal detectable change (MDC) were compared to analyses by AI. In a subset of 10 patients, beat-to-beat variability in 3 cardiac cycles was assessed by 2 readers and AI. Results: Test-retest variability was lower with AI compared with interreader scenarios (data set I: MDC = 3.7 vs 5.5, mean absolute difference = 1.4 vs 2.1, respectively; data set II: MDC = 3.9 vs 5.2, mean absolute difference = 1.6 vs 1.9, respectively; all P < .05). There was bias in GLS measurements in 13 of 24 test-retest interreader scenarios (largest bias, 3.2 strain units). In contrast, there was no bias in measurements by AI. Beat-to-beat MDCs were 1.5, 2.1, and 2.3 for AI and the 2 readers, respectively. Processing time for analyses of GLS by the AI method was 7.9 ± 2.8 seconds. Conclusion: A fast AI method for automated measurements of LV GLS reduced test-retest variability and removed bias between readers in both test-retest data sets. By improving the precision and reproducibility, AI may increase the clinical utility of echocardiography.
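
    For readers unfamiliar with the MDC statistic, here is a sketch of one common formulation computed from paired test-retest measurements; the values and names below are made up, and the study's exact statistical procedure may differ.

```python
import numpy as np

def mdc(test: np.ndarray, retest: np.ndarray) -> float:
    """MDC at 95% confidence: 1.96 * sqrt(2) * SEM, with SEM = SD_diff / sqrt(2)."""
    diff = np.asarray(test) - np.asarray(retest)
    sem = diff.std(ddof=1) / np.sqrt(2)
    return 1.96 * np.sqrt(2) * sem

gls_a = np.array([-18.2, -16.5, -20.1, -17.8])  # made-up GLS values (test)
gls_b = np.array([-17.6, -17.0, -19.4, -18.3])  # made-up GLS values (retest)
print(mdc(gls_a, gls_b), np.abs(gls_a - gls_b).mean())  # MDC, mean abs diff
```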

    Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms for identifying vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer-aided processing of data generated by 3-D imaging modalities. As manual vessel segmentation is prohibitively time-consuming, any real-world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories for a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at the International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website, http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) a performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. Rudyanto RD, Kerkstra S, van Rikxoort EM, Fetita C, Brillet P, Lefevre C, Xue W, et al. (2014). Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study. Medical Image Analysis 18(7):1217–1232. doi:10.1016/j.media.2014.07.003
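
    One simple way to score a vessel-probability map against annotated voxels is ROC analysis, sketched below with dummy data. VESSEL12's actual scoring spans nine categories and is defined on the challenge website, so this is only indicative of the general approach.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

prob_map = np.random.rand(32, 32, 32)             # dummy algorithm output
coords = np.random.randint(0, 32, size=(200, 3))  # annotated voxel positions
labels = np.random.randint(0, 2, size=200)        # 1 = vessel, 0 = non-vessel

scores = prob_map[coords[:, 0], coords[:, 1], coords[:, 2]]
print("Az =", roc_auc_score(labels, scores))
```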

    GPU-Based Airway Tree Segmentation and Centerline Extraction

    Lung cancer is one of the deadliest and most common types of cancer in Norway. Early and precise diagnosis is crucial for improving the survival rate. Diagnosis is often done by extracting a tissue sample in the lung through the mouth and throat. It is difficult to navigate to the tissue because of the complexity of the airways inside the lung and the reduced visibility. Our goal is to make a program that can automatically extract a map of the airways directly from X-ray computed tomography (CT) images of the patient. This is a complex task and requires time-consuming processing. In this thesis we explore different methods for extracting the airways from CT images. We also investigate parallel processing and the use of modern graphics processing units (GPUs) for speeding up the computations. We rate several methods in terms of reported performance and the possibility of parallel processing. The best-rated method is implemented in a parallel framework called the Open Computing Language (OpenCL). The results show that our implementation is able to extract large parts of the airway tree, but struggles with the smaller airways and with airways that deviate from a perfect circular cross-section. Our implementation is able to process a full CT scan in less than a minute on a modern GPU. The implementation is very general and is able to extract other tubular structures as well. To show this, we also ran our implementation on a magnetic resonance angiography dataset to find blood vessels in the brain, and achieved good results. We see a lot of potential in this method for extracting tubular structures. The method struggles the most with noise handling and with tubes that deviate from a circular cross-sectional shape. We believe that this can be improved by using another method than ridge traversal for the centerline extraction step, because this is a local, greedy algorithm that often terminates prematurely due to noise and other image artifacts.
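
    To illustrate why ridge traversal is described as local and greedy, here is a sketch of the idea on a tube-detection-filter (TDF) response volume. The names, threshold and termination rule are simplified assumptions, not the thesis's OpenCL implementation; the sketch also shows why noise along the ridge can end the traversal prematurely.

```python
import numpy as np

def ridge_traversal(tdf: np.ndarray, seed, threshold=0.5, max_steps=10000):
    """Greedily follow the ridge of a tube-likeness (TDF) volume from a seed."""
    centerline, pos, visited = [], tuple(seed), set()
    for _ in range(max_steps):
        centerline.append(pos)
        visited.add(pos)
        best, best_val = None, threshold
        for dz in (-1, 0, 1):                # scan the 26-neighborhood
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    nb = (pos[0] + dz, pos[1] + dy, pos[2] + dx)
                    if nb == pos or nb in visited:
                        continue
                    if min(nb) < 0 or any(n >= s for n, s in zip(nb, tdf.shape)):
                        continue
                    if tdf[nb] > best_val:
                        best, best_val = nb, tdf[nb]
        if best is None:                     # ridge lost: noise or end of tube
            break
        pos = best
    return centerline

path = ridge_traversal(np.random.rand(32, 32, 32), seed=(16, 16, 16))
```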
