18 research outputs found
Quantifying Graft Detachment after Descemet's Membrane Endothelial Keratoplasty with Deep Convolutional Neural Networks
Purpose: We developed a method to automatically locate and quantify graft
detachment after Descemet's Membrane Endothelial Keratoplasty (DMEK) in
Anterior Segment Optical Coherence Tomography (AS-OCT) scans. Methods: 1280
AS-OCT B-scans were annotated by a DMEK expert. Using the annotations, a deep
learning pipeline was developed to localize scleral spur, center the AS-OCT
B-scans and segment the detached graft sections. Detachment segmentation model
performance was evaluated per B-scan by comparing (1) length of detachment and
(2) horizontal projection of the detached sections with the expert annotations.
Horizontal projections were used to construct graft detachment maps. All final
evaluations were done on a test set that was set apart during training of the
models. A second DMEK expert annotated the test set to determine inter-rater
performance. Results: Mean scleral spur localization error was 0.155 mm,
whereas the inter-rater difference was 0.090 mm. The estimated graft detachment
lengths were in 69% of the cases within a 10-pixel (~150 µm) difference from
the ground truth (77% for the second DMEK expert). Dice scores for the
horizontal projections of all B-scans with detachments were 0.896 and 0.880 for
our model and the second DMEK expert respectively. Conclusion: Our deep
learning model can be used to automatically and instantly localize graft
detachment in AS-OCT B-scans. Horizontal detachment projections can be
determined with the same accuracy as a human DMEK expert, allowing for the
construction of accurate graft detachment maps. Translational Relevance:
Automated localization and quantification of graft detachment can support DMEK
research and standardize clinical decision making.
Comment: To be published in Translational Vision Science & Technology
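As an illustration of the evaluation described above, the following minimal sketch compares a predicted per-B-scan detachment mask with an expert annotation via detachment length and the Dice score of the horizontal projections. The function name, mask layout, and the ~15 µm/pixel spacing (derived from the stated 10-pixel ≈ 150 µm equivalence) are assumptions, not the authors' code.

```python
import numpy as np

def detachment_metrics(pred_mask, ref_mask, px_size_um=15.0):
    """Compare a predicted detachment mask with a reference annotation
    (both binary arrays of shape height x width) for one B-scan."""
    # Horizontal projection: a column counts as detached if any pixel in
    # that A-scan column belongs to a detached graft section.
    pred_proj = pred_mask.any(axis=0)
    ref_proj = ref_mask.any(axis=0)

    # Detachment length approximated from the number of detached columns.
    pred_len_um = pred_proj.sum() * px_size_um
    ref_len_um = ref_proj.sum() * px_size_um

    # Dice score of the two horizontal projections.
    intersection = np.logical_and(pred_proj, ref_proj).sum()
    dice = 2.0 * intersection / (pred_proj.sum() + ref_proj.sum() + 1e-8)
    return pred_len_um, ref_len_um, dice
```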
Corneal Pachymetry by AS-OCT after Descemet's Membrane Endothelial Keratoplasty
Corneal thickness (pachymetry) maps can be used to monitor restoration of
corneal endothelial function, for example after Descemet's membrane endothelial
keratoplasty (DMEK). Automated delineation of the corneal interfaces in
anterior segment optical coherence tomography (AS-OCT) can be challenging for
corneas that are irregularly shaped due to pathology, or as a consequence of
surgery, leading to incorrect thickness measurements. In this research, deep
learning is used to automatically delineate the corneal interfaces and measure
corneal thickness with high accuracy in post-DMEK AS-OCT B-scans. Three
different deep learning strategies were developed based on 960 B-scans from 50
patients. On an independent test set of 320 B-scans, corneal thickness could be
measured with an error of 13.98 to 15.50 micrometer for the central 9 mm range,
which is less than 3% of the average corneal thickness. The accurate thickness
measurements were used to construct detailed pachymetry maps. Moreover,
follow-up scans could be registered based on anatomical landmarks to obtain
differential pachymetry maps. These maps may enable a more comprehensive
understanding of the restoration of the endothelial function after DMEK, where
thickness often varies throughout different regions of the cornea, and
subsequently contribute to a standardized postoperative regime.
Comment: Fixed typo in abstract: The development set consists of 960 B-scans from 50 patients (instead of 68). The B-scans from the other 18 patients were used for testing only.
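As a rough sketch of how thickness follows from the delineated interfaces, the snippet below converts per-column interface positions into a thickness profile and a differential map. It assumes vertical (per-A-scan) distances and ignores refraction correction and surface normals, so it is illustrative rather than the method used in the paper.

```python
import numpy as np

def pachymetry_profile(anterior_px, posterior_px, axial_px_um):
    """Per-A-scan corneal thickness from delineated interfaces.
    anterior_px, posterior_px: row indices of the anterior and posterior
    corneal interface for each image column; axial_px_um: axial pixel
    spacing in micrometers."""
    return (np.asarray(posterior_px) - np.asarray(anterior_px)) * axial_px_um

def differential_map(followup_um, baseline_um):
    """Differential pachymetry: registered follow-up map minus baseline."""
    return np.asarray(followup_um) - np.asarray(baseline_um)
```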
Direct Classification of Type 2 Diabetes From Retinal Fundus Images in a Population-based Sample From The Maastricht Study
Type 2 Diabetes (T2D) is a chronic metabolic disorder that can lead to
blindness and cardiovascular disease. Information about early stage T2D might
be present in retinal fundus images, but to what extent these images can be
used for a screening setting is still unknown. In this study, deep neural
networks were employed to differentiate between fundus images from individuals
with and without T2D. We investigated three methods to achieve high
classification performance, measured by the area under the receiver operating
characteristic curve (ROC-AUC). A multi-target learning approach to simultaneously output
retinal biomarkers as well as T2D works best (AUC = 0.746 [0.001]).
Furthermore, the classification performance can be improved when images with
high prediction uncertainty are referred to a specialist. We also show that the
combination of images of the left and right eye per individual can further
improve the classification performance (AUC = 0.758 [0.003]), using a
simple averaging approach. The results are promising, suggesting the
feasibility of screening for T2D from retinal fundus images.
Comment: To be published in the proceedings of SPIE - Medical Imaging 2020, 6 pages, 1 figure
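The two performance-boosting steps mentioned above, averaging the left- and right-eye predictions and referring uncertain cases to a specialist, can be sketched as follows; the function names, the referral fraction, and the use of scikit-learn's AUC are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def combine_eyes(prob_left, prob_right):
    """Simple per-individual averaging of left- and right-eye T2D probabilities."""
    return (np.asarray(prob_left) + np.asarray(prob_right)) / 2.0

def auc_with_referral(probs, labels, uncertainty, refer_fraction=0.1):
    """ROC-AUC after referring the most uncertain fraction of cases to a specialist."""
    probs, labels, uncertainty = map(np.asarray, (probs, labels, uncertainty))
    keep = np.argsort(uncertainty)[: int(len(probs) * (1 - refer_fraction))]
    return roc_auc_score(labels[keep], probs[keep])
```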
Deep Learning for Detection and Localization of B-Lines in Lung Ultrasound
Lung ultrasound (LUS) is an important imaging modality used by emergency
physicians to assess pulmonary congestion at the patient bedside. B-line
artifacts in LUS videos are key findings associated with pulmonary congestion.
Not only can the interpretation of LUS be challenging for novice operators, but
visual quantification of B-lines remains subject to observer variability. In
this work, we investigate the strengths and weaknesses of multiple deep
learning approaches for automated B-line detection and localization in LUS
videos. We curate and publish BEDLUS, a new ultrasound dataset comprising
1,419 videos from 113 patients with a total of 15,755 expert-annotated B-lines.
Based on this dataset, we present a benchmark of established deep learning
methods applied to the task of B-line detection. To pave the way for
interpretable quantification of B-lines, we propose a novel "single-point"
approach to B-line localization using only the point of origin. Our results
show that (a) the area under the receiver operating characteristic curve ranges
from 0.864 to 0.955 for the benchmarked detection methods, (b) within this
range, the best performance is achieved by models that leverage multiple
successive frames as input, and (c) the proposed single-point approach for
B-line localization reaches an F1-score of 0.65, performing on par with the
inter-observer agreement. The dataset and developed methods can facilitate
further biomedical research on automated interpretation of lung ultrasound with
the potential to expand the clinical utility.
Comment: 10 pages, 4 figures
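A minimal sketch of how the single-point localization could be scored: predicted points of origin are greedily matched to annotated points within a distance tolerance, and precision, recall, and F1 follow from the matches. The tolerance and matching rule are assumptions, not necessarily the criterion used in the paper.

```python
import numpy as np

def point_f1(pred_pts, gt_pts, tol_px=20.0):
    """F1 for B-line point-of-origin localization: a prediction is a true
    positive if it lies within tol_px of a not-yet-matched annotated point."""
    pred_pts = np.asarray(pred_pts, dtype=float).reshape(-1, 2)
    gt_pts = np.asarray(gt_pts, dtype=float).reshape(-1, 2)
    matched = np.zeros(len(gt_pts), dtype=bool)
    tp = 0
    for p in pred_pts:
        if len(gt_pts) == 0:
            break
        d = np.linalg.norm(gt_pts - p, axis=1)
        d[matched] = np.inf  # each annotation can be matched only once
        j = int(np.argmin(d))
        if d[j] <= tol_px:
            matched[j] = True
            tp += 1
    precision = tp / max(len(pred_pts), 1)
    recall = tp / max(len(gt_pts), 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)
```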
A medical device-grade T1 and ECV phantom for global T1 mapping quality assurance - the T1 Mapping and ECV Standardization in cardiovascular magnetic resonance (T1MES) program
T1 mapping and extracellular volume (ECV) have the potential to guide patient care and serve as surrogate end-points in clinical trials, but measurements differ between cardiovascular magnetic resonance (CMR) scanners and pulse sequences. To help deliver T1 mapping to global clinical care, we developed a phantom-based quality assurance (QA) system for verification of measurement stability over time at individual sites, with further aims of generalization of results across sites, vendor systems, software versions and imaging sequences. We thus created T1MES: The T1 Mapping and ECV Standardization Program.
A design collaboration consisting of a specialist MRI small-medium enterprise, clinicians, physicists and national metrology institutes was formed. A phantom was designed covering clinically relevant ranges of T1 and T2 in blood and myocardium, pre- and post-contrast, for 1.5 T and 3 T. Reproducible mass manufacture was established. The device received regulatory clearance by the Food and Drug Administration (FDA) and Conformité Européenne (CE) marking.
The T1MES phantom is an agarose gel-based phantom using nickel chloride as the paramagnetic relaxation modifier. It was reproducibly specified and mass-produced with a rigorously repeatable process. Each phantom contains nine differently-doped agarose gel tubes embedded in a gel/beads matrix. Phantoms were free of air bubbles and susceptibility artifacts at both field strengths and T1 maps were free from off-resonance artifacts. The incorporation of high-density polyethylene beads in the main gel fill was effective at flattening the field. T1 and T2 values measured in T1MES showed coefficients of variation of 1 % or less between repeat scans, indicating good short-term reproducibility. Temperature dependency experiments confirmed that over the range 15-30 °C the short-T1 tubes were more stable with temperature than the long-T1 tubes. A batch of 69 phantoms was mass-produced, with random sampling of ten of these showing coefficients of variation for T1 of 0.64 ± 0.45 % and 0.49 ± 0.34 % at 1.5 T and 3 T respectively.
The T1MES program has developed a T1 mapping phantom to CE/FDA manufacturing standards. An initial 69 phantoms with a multi-vendor user manual are now being scanned fortnightly in centers worldwide. Future results will explore T1 mapping sequences, platform performance, stability and the potential for standardization.
This project has been funded by a European Association of Cardiovascular Imaging (EACVI, part of the ESC) Imaging Research Grant, a UK National Institute of Health Research (NIHR) Biomedical Research Center (BRC) Cardiometabolic Research Grant at University College London (UCL, #BRC/199/JM/101320), and a Barts Charity Research Grant (#1107/2356/MRC0140). G.C. is supported by the National Institute for Health Research Rare Diseases Translational Research Collaboration (NIHR RD-TRC) and by the NIHR UCL Hospitals Biomedical Research Center. J.C.M. is directly and indirectly supported by the UCL Hospitals NIHR BRC and Biomedical Research Unit at Barts Hospital respectively. This work was in part supported by an NIHR BRC award to Cambridge University Hospitals NHS Foundation Trust and NIHR Cardiovascular Biomedical Research Unit support at Royal Brompton Hospital, London, UK.
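The short-term reproducibility figures quoted above are coefficients of variation between repeat scans; a minimal sketch of that calculation is given below, with made-up T1 readings for illustration.

```python
import numpy as np

def coefficient_of_variation(values):
    """Coefficient of variation (%) of repeated T1 measurements of one tube:
    sample standard deviation expressed as a percentage of the mean."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# Example: hypothetical fortnightly T1 readings (ms) of a single phantom tube.
print(coefficient_of_variation([1020.0, 1012.5, 1018.0, 1009.0]))  # ~0.5 %
```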
Quantifying Graft Detachment after Descemet's Membrane Endothelial Keratoplasty with Deep Convolutional Neural Networks
Abstract Purpose: We developed a method to automatically locate and quantify graft detachment after Descemet's membrane endothelial keratoplasty (DMEK) in anterior segment optical coherence tomography (AS-OCT) scans. Methods: A total of 1280 AS-OCT B-scans were annotated by a DMEK expert. Using the annotations, a deep learning pipeline was developed to localize scleral spur, center the AS-OCT B-scans and segment the detached graft sections. Detachment segmentation model performance was evaluated per B-scan by comparing (1) length of detachment and (2) horizontal projection of the detached sections with the expert annotations. Horizontal projections were used to construct graft detachment maps. All final evaluations were done on a test set that was set apart during training of the models. A second DMEK expert annotated the test set to determine interrater performance. Results: Mean scleral spur localization error was 0.155 mm, whereas the interrater difference was 0.090 mm. The estimated graft detachment lengths were in 69% of the cases within a 10-pixel (∼150 µm) difference from the ground truth (77% for the second DMEK expert). Dice scores for the horizontal projections of all B-scans with detachments were 0.896 and 0.880 for our model and the second DMEK expert, respectively. Conclusions: Our deep learning model can be used to automatically and instantly localize graft detachment in AS-OCT B-scans. Horizontal detachment projections can be determined with the same accuracy as a human DMEK expert, allowing for the construction of accurate graft detachment maps. Translational Relevance: Automated localization and quantification of graft detachment can support DMEK research and standardize clinical decision-making.
Few‐shot learning for satellite characterisation from synthetic inverse synthetic aperture radar images
Abstract Space situational awareness systems primarily focus on detecting and tracking space objects, providing crucial positional data. However, understanding the complex space domain requires characterising satellites, often involving estimation of bus and solar panel sizes. While inverse synthetic aperture radar allows satellite visualisation, developing deep learning models for substructure segmentation in inverse synthetic aperture radar images is challenging due to the high costs and hardware requirements. The authors present a framework addressing the scarcity of inverse synthetic aperture radar data through synthetic training data. The authors' approach utilises a few‐shot domain adaptation technique, leveraging thousands of rapidly simulated low‐fidelity inverse synthetic aperture radar images and a small set of inverse synthetic aperture radar images from the target domain. The authors validate their framework by simulating a real‐case scenario, fine‐tuning a deep learning‐based segmentation model using four inverse synthetic aperture radar images generated through the backprojection algorithm from simulated raw radar data (simulated at the analogue‐to‐digital converter level) as the target domain. The authors' results demonstrate the effectiveness of the proposed framework, significantly improving inverse synthetic aperture radar image segmentation across diverse domains. This enhancement enables accurate characterisation of satellite bus and solar panel sizes as well as their orientation, even when the images are sourced from different domains.
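A sketch of the few-shot adaptation step, under the assumption of a PyTorch segmentation model pre-trained on the simulated low-fidelity ISAR images and then fine-tuned on the handful of target-domain images; the frozen-encoder choice, parameter names, and hyperparameters are illustrative, not the authors' exact recipe.

```python
import torch
from torch import nn, optim

def few_shot_finetune(model, target_loader, epochs=50, lr=1e-4):
    """Fine-tune a segmentation model (pre-trained on simulated ISAR images)
    on a few target-domain (image, mask) pairs.

    model: nn.Module returning per-pixel class logits.
    target_loader: DataLoader over the few target-domain ISAR images,
    e.g. the four backprojected images mentioned in the abstract."""
    # Only update the decoder head; freezing the rest limits overfitting
    # to the tiny target set (assumed parameter naming convention).
    for name, p in model.named_parameters():
        p.requires_grad = name.startswith("decoder")
    optimizer = optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for images, masks in target_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()
    return model
```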
TOPAAS: a structured approach to failure probability analysis of software-intensive systems
Rijkswaterstaat is introducing probabilistic management for all primary flood defences and other hydraulic structures. Central to the probabilistic management approach is the risk analysis, which drives the test intervals, guaranteed repair times and modifications. The failure of the software in use has also been modelled. The TDT method was developed for the initial estimate of the software failure probability, but in practice it turns out to produce unreliable results.
Commissioned by Rijkswaterstaat, a consortium of Det Norske Veritas, Movares, Technische Universiteit Eindhoven, Logica, Refis and Intermedion has developed an improved method that provides guidelines both for modelling software failure in fault trees and for estimating the probability that a software module fails during a task execution.
This method has been reported in [8] and named TOPAAS. On that basis, a number of experiments (pilots) were carried out. The results of those pilots are described in an evaluation [16], which makes several recommendations for improvement. These recommendations have been incorporated into this second version of [8]. The text has also been editorially revised in places, in particular to make it clearer for readers without an IT background. It is further recommended that a short manual for applying TOPAAS be produced.
The core of TOPAAS is that software can be divided into modules and that the (possible) failure of these modules can be included as basic events in a fault tree. Failure of a software module can in turn be split into failure due to unexpected input and failure of the decision logic of the software module itself.
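Treating these two failure causes as independent basic events under an OR gate, the module's failure probability per task execution can be combined as in this minimal sketch; the probabilities shown are made up for illustration and are not TOPAAS calibration values.

```python
def module_failure_probability(p_unexpected_input, p_decision_logic):
    """Failure probability of a software module per task execution, modelled
    as an OR gate over two independent basic events: failure due to
    unexpected input and failure of the module's own decision logic."""
    return 1.0 - (1.0 - p_unexpected_input) * (1.0 - p_decision_logic)

# Example with illustrative (made-up) probabilities per task execution:
print(module_failure_probability(1e-4, 5e-5))  # ~1.5e-4
```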
Estimating the failure probability of a software component (module) is difficult: methods exist, but without exception they require input that is often not (sufficiently) available. To arrive at a failure probability estimate for a software module nonetheless, an estimate is made on the basis of expert opinion, following a Bayesian line of reasoning. This estimate is then captured in a parametric model, in which the factors taken into account originate from the expert group and from international research. The influence of the factors was estimated by experts and subsequently calibrated against some twenty reference projects. The conclusion is that the outcomes of the parametric model correlate very strongly with the experts' estimates.
In conclusion, it can be stated that, in the absence of better estimation approaches, this method yields a reasonably reliable Bayesian estimate of the failure probability.
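To make the structure of such a factor-based estimate concrete, the sketch below shifts a base order of magnitude on a log10 scale according to expert-scored factors. The factor names, weights and base value are hypothetical; they mimic only the general shape of an expert-opinion parameter model, not the calibrated TOPAAS model.

```python
def parametric_failure_estimate(factor_scores, weights, base_log10=-4.0):
    """Illustrative per-task-execution failure probability: each expert-scored
    factor shifts a base order of magnitude up or down on a log10 scale."""
    log10_p = base_log10 + sum(weights[f] * s for f, s in factor_scores.items())
    return 10.0 ** min(log10_p, 0.0)  # cap at probability 1

# Hypothetical factors scored by experts on a -1..+1 scale:
scores = {"complexity": 0.5, "test_coverage": -0.5, "operational_history": -1.0}
weights = {"complexity": 1.0, "test_coverage": 1.0, "operational_history": 0.5}
print(parametric_failure_estimate(scores, weights))  # ~3.2e-5
```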