Tissue Cross-Section and Pen Marking Segmentation in Whole Slide Images
Tissue segmentation is a routine preprocessing step to reduce the
computational cost of whole slide image (WSI) analysis by excluding background
regions. Traditional image processing techniques are commonly used for tissue
segmentation, but often require manual adjustments to parameter values for
atypical cases, fail to exclude all slide and scanning artifacts from the
background, and are unable to segment adipose tissue. Pen marking artifacts in
particular can be a potential source of bias for subsequent analyses if not
removed. In addition, several applications require the separation of individual
cross-sections, which can be challenging due to tissue fragmentation and
adjacent positioning. To address these problems, we develop a convolutional
neural network for tissue and pen marking segmentation using a dataset of 200
H&E stained WSIs. For separating tissue cross-sections, we propose a novel
post-processing method based on clustering predicted centroid locations of the
cross-sections in a 2D histogram. On an independent test set, the model
achieved a mean Dice score of 0.981 ± 0.033 for tissue segmentation and a
mean Dice score of 0.912 ± 0.090 for pen marking segmentation. The mean
absolute difference between the number of annotated and separated
cross-sections was 0.075 ± 0.350. Our results demonstrate that the proposed
model can accurately segment H&E stained tissue cross-sections and pen markings
in WSIs while being robust to many common slide and scanning artifacts. The
model, trained model parameters, and post-processing method are made
publicly available as a Python package called SlideSegmenter.
Comment: 6 pages, 3 figures
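The cross-section separation step described above can be sketched as follows. This is a minimal illustration of clustering predicted centroid locations in a 2D histogram, assuming the network emits a per-pixel centroid prediction for every tissue pixel; the function name, bin size, and vote threshold are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy import ndimage

def count_cross_sections(centroid_preds, bin_size=32, min_votes=5):
    """Cluster predicted centroid locations in a 2D histogram.

    `centroid_preds` is an (N, 2) array of per-pixel predicted centroid
    coordinates (y, x). Predictions belonging to the same cross-section
    pile up in nearby histogram bins; connected groups of well-supported
    bins are counted as one cross-section each.
    """
    ys, xs = centroid_preds[:, 0], centroid_preds[:, 1]
    h_bins = int(np.ceil((ys.max() + 1) / bin_size))
    w_bins = int(np.ceil((xs.max() + 1) / bin_size))
    hist, _, _ = np.histogram2d(
        ys, xs,
        bins=(h_bins, w_bins),
        range=[[0, h_bins * bin_size], [0, w_bins * bin_size]],
    )
    # Label connected regions of bins that received enough votes;
    # each labeled region is treated as one tissue cross-section.
    _, n_clusters = ndimage.label(hist >= min_votes)
    return n_clusters
```

In practice the cluster assignments (rather than only the count) would then be propagated back to the tissue mask to separate the cross-sections.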
Corneal Pachymetry by AS-OCT after Descemet's Membrane Endothelial Keratoplasty
Corneal thickness (pachymetry) maps can be used to monitor restoration of
corneal endothelial function, for example after Descemet's membrane endothelial
keratoplasty (DMEK). Automated delineation of the corneal interfaces in
anterior segment optical coherence tomography (AS-OCT) can be challenging for
corneas that are irregularly shaped due to pathology, or as a consequence of
surgery, leading to incorrect thickness measurements. In this research, deep
learning is used to automatically delineate the corneal interfaces and measure
corneal thickness with high accuracy in post-DMEK AS-OCT B-scans. Three
different deep learning strategies were developed based on 960 B-scans from 50
patients. On an independent test set of 320 B-scans, corneal thickness could be
measured with an error of 13.98 to 15.50 micrometers for the central 9 mm range,
which is less than 3% of the average corneal thickness. The accurate thickness
measurements were used to construct detailed pachymetry maps. Moreover,
follow-up scans could be registered based on anatomical landmarks to obtain
differential pachymetry maps. These maps may enable a more comprehensive
understanding of the restoration of the endothelial function after DMEK, where
thickness often varies throughout different regions of the cornea, and
subsequently contribute to a standardized postoperative regimen.
Comment: Fixed typo in abstract: the development set consists of 960 B-scans
from 50 patients (instead of 68). The B-scans from the other 18 patients were
used for testing only.
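Once the two corneal interfaces have been delineated, thickness follows directly from their separation. The sketch below shows this measurement for one B-scan, assuming one axial position per A-scan column for each interface; the function name and axial pixel spacing are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pachymetry_profile(anterior, posterior, axial_res_um=10.0):
    """Per-A-scan corneal thickness from two delineated interfaces.

    `anterior` and `posterior` give the axial pixel position of the
    anterior and posterior corneal interface for each A-scan column of
    a B-scan. `axial_res_um` is an assumed axial pixel spacing in
    micrometers. Returns thickness in micrometers per column.
    """
    anterior = np.asarray(anterior, dtype=float)
    posterior = np.asarray(posterior, dtype=float)
    # Thickness = interface separation (pixels) * axial pixel spacing (um).
    return (posterior - anterior) * axial_res_um
```

Repeating this over all B-scans of a radial or raster acquisition yields the pachymetry map; registering two such maps from follow-up scans and subtracting them gives a differential map.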
Deep Learning for Detection and Localization of B-Lines in Lung Ultrasound
Lung ultrasound (LUS) is an important imaging modality used by emergency
physicians to assess pulmonary congestion at the patient bedside. B-line
artifacts in LUS videos are key findings associated with pulmonary congestion.
Not only can the interpretation of LUS be challenging for novice operators, but
visual quantification of B-lines remains subject to observer variability. In
this work, we investigate the strengths and weaknesses of multiple deep
learning approaches for automated B-line detection and localization in LUS
videos. We curate and publish BEDLUS, a new ultrasound dataset comprising
1,419 videos from 113 patients with a total of 15,755 expert-annotated B-lines.
Based on this dataset, we present a benchmark of established deep learning
methods applied to the task of B-line detection. To pave the way for
interpretable quantification of B-lines, we propose a novel "single-point"
approach to B-line localization using only the point of origin. Our results
show that (a) the area under the receiver operating characteristic curve ranges
from 0.864 to 0.955 for the benchmarked detection methods, (b) within this
range, the best performance is achieved by models that leverage multiple
successive frames as input, and (c) the proposed single-point approach for
B-line localization reaches an F1-score of 0.65, performing on par with the
inter-observer agreement. The dataset and developed methods can facilitate
further biomedical research on automated interpretation of lung ultrasound with
the potential to expand its clinical utility.
Comment: 10 pages, 4 figures
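The reported F1-score for single-point B-line localization implies matching predicted origin points against annotated ones. A minimal sketch of such a metric is shown below, assuming greedy nearest-first matching within a pixel tolerance; the tolerance value and matching scheme are assumptions for illustration, not the paper's exact evaluation protocol.

```python
import numpy as np

def localization_f1(pred_points, gt_points, tol=10.0):
    """F1-score for point-based localization.

    A predicted point counts as a true positive when it lies within
    `tol` pixels of a not-yet-matched annotated point. Each annotated
    point can be matched at most once.
    """
    gt = [np.asarray(g, dtype=float) for g in gt_points]
    matched_gt = set()
    tp = 0
    for p in pred_points:
        p = np.asarray(p, dtype=float)
        # Distances to all annotated points that are still unmatched.
        dists = [(np.linalg.norm(p - g), i)
                 for i, g in enumerate(gt) if i not in matched_gt]
        if dists:
            d, i = min(dists)
            if d <= tol:
                matched_gt.add(i)
                tp += 1
    fp = len(pred_points) - tp
    fn = len(gt) - tp
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

With one prediction close to an annotation and one far from any, this yields precision = recall = 0.5 and thus F1 = 0.5.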