Expert-quality Dataset Labeling via Gamified Crowdsourcing on Point-of-Care Lung Ultrasound Data
Deep Learning for Detection and Localization of B-Lines in Lung Ultrasound
Lung ultrasound (LUS) is an important imaging modality used by emergency
physicians to assess pulmonary congestion at the patient bedside. B-line
artifacts in LUS videos are key findings associated with pulmonary congestion.
Interpretation of LUS can be challenging for novice operators, and visual
quantification of B-lines remains subject to observer variability. In
this work, we investigate the strengths and weaknesses of multiple deep
learning approaches for automated B-line detection and localization in LUS
videos. We curate and publish BEDLUS, a new ultrasound dataset comprising
1,419 videos from 113 patients with a total of 15,755 expert-annotated B-lines.
Based on this dataset, we present a benchmark of established deep learning
methods applied to the task of B-line detection. To pave the way for
interpretable quantification of B-lines, we propose a novel "single-point"
approach to B-line localization using only the point of origin. Our results
show that (a) the area under the receiver operating characteristic curve ranges
from 0.864 to 0.955 for the benchmarked detection methods, (b) within this
range, the best performance is achieved by models that leverage multiple
successive frames as input, and (c) the proposed single-point approach for
B-line localization reaches an F1-score of 0.65, performing on par with the
inter-observer agreement. The dataset and developed methods can facilitate
further biomedical research on automated interpretation of lung ultrasound with
the potential to expand their clinical utility.
Comment: 10 pages, 4 figures
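The two headline metrics in the abstract above, the area under the receiver operating characteristic curve (AUROC) for detection and the F1-score for localization, can be sketched in a few lines of plain Python. The labels and scores below are illustrative placeholders, not BEDLUS data, and the 0.5 decision threshold is an assumption for the example only.

```python
def auroc(y_true, y_score):
    """Probability that a random positive outranks a random negative (ties count 0.5)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1(y_true, y_pred):
    """F1-score: harmonic mean of precision and recall for binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn)

# Hypothetical per-video B-line labels and model scores.
y_true  = [0, 0, 1, 1, 1, 0, 1, 0]
y_score = [0.1, 0.7, 0.8, 0.9, 0.4, 0.2, 0.6, 0.3]
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]

print(auroc(y_true, y_score))  # 0.875
print(f1(y_true, y_pred))      # 0.75
```

Note that AUROC is threshold-free (it ranks scores), whereas the F1-score depends on the chosen decision threshold, which is why the abstract reports them for different tasks.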
Mediastinal monophasic synovial sarcoma with pericardial extension causing hemodynamic instability
Primary pleuropulmonary synovial sarcoma on fluorodeoxyglucose positron emission tomography-computed tomography scan
Utility and reproducibility of 3-dimensional printed models in pre-operative planning of complex thoracic tumors
Background and objectives: 3D-printed models are increasingly used for surgical planning. We assessed the utility, accuracy, and reproducibility of 3D printing to assist visualization of complex thoracic tumors for surgical planning.

Methods: Models were created from pre-operative images for three patients using a standard radiology 3D workstation. Operating surgeons assessed model utility using the Gillespie scale (1 = inferior to 4 = superior) and accuracy compared to intraoperative findings. Model variability was assessed for one patient for whom two models were created independently. The models were compared subjectively by surgeons and quantitatively based on overlap of depicted tissues and on differences in tumor volume and proximity to tissues.

Results: Models were superior to imaging and 3D visualization for surgical planning (mean score = 3.4), particularly for determining surgical approach (score = 4) and resectability (score = 3.7). Model accuracy was good to excellent. In the two models created for one patient, tissue volumes overlapped by >86.5%, and tumor volume and the area of tissues ≤1 mm from the tumor differed by <15% and <1.8 cm², respectively. Surgeons considered these differences to have a negligible effect on surgical planning.

Conclusion: 3D printing assists surgical planning for complex thoracic tumors. Models can be created by radiologists using routine practice tools with sufficient accuracy and clinically negligible variability.
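The quantitative model-to-model comparison described above hinges on a volume-overlap measure between the two independently created models. The abstract does not name the metric, so the sketch below assumes a Dice-style overlap coefficient over binary voxel sets; the cube-shaped "tumor models" are synthetic stand-ins, not patient data.

```python
def dice_overlap(voxels_a, voxels_b):
    """Dice coefficient between two voxel sets: 2*|A ∩ B| / (|A| + |B|)."""
    inter = len(voxels_a & voxels_b)
    return 2.0 * inter / (len(voxels_a) + len(voxels_b))

# Two 6x6x6 cube-shaped "tumor models"; the second is shifted one voxel in x,
# mimicking small segmentation differences between independently built models.
cube_a = {(x, y, z) for x in range(2, 8) for y in range(2, 8) for z in range(2, 8)}
cube_b = {(x, y, z) for x in range(3, 9) for y in range(2, 8) for z in range(2, 8)}

print(dice_overlap(cube_a, cube_b))  # ~0.833, i.e. 83.3% volume overlap
```

In practice the voxel sets would come from the segmented tissue masks used to build each printed model; an overlap above a pre-agreed threshold (the study reports >86.5%) indicates the two models are clinically interchangeable.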