
    Utilization of deep learning to quantify fluid volume of neovascular age-related macular degeneration patients based on swept-source OCT imaging: The ONTARIO study.

    PURPOSE: To evaluate the predictive ability of a deep learning-based algorithm to determine long-term best-corrected distance visual acuity (BCVA) outcomes in neovascular age-related macular degeneration (nARMD) patients using baseline swept-source optical coherence tomography (SS-OCT) and OCT-angiography (OCT-A) data. METHODS: In this phase IV, retrospective, proof-of-concept, single-center study, SS-OCT data from 17 previously treated nARMD eyes were used to assess retinal layer thicknesses, as well as to quantify intraretinal fluid (IRF), subretinal fluid (SRF), and serous pigment epithelium detachments (PEDs) using a novel deep learning-based macular fluid segmentation algorithm. Baseline OCT and OCT-A morphological features and fluid measurements were correlated, using the Pearson correlation coefficient (PCC), with changes in BCVA from baseline to week 52. RESULTS: Total retinal fluid (IRF, SRF and PED) volume at baseline had the strongest correlation to improvement in BCVA at month 12 (PCC = 0.652, p = 0.005). Fluid was subsequently sub-categorized into IRF, SRF and PED, with PED volume having the next highest correlation (PCC = 0.648, p = 0.005) to BCVA improvement. Average total retinal thickness in isolation demonstrated poor correlation (PCC = 0.334, p = 0.189). When two features, mean choroidal neovascular membrane (CNVM) size and total fluid volume, were combined and correlated with visual outcomes, the highest correlation increased to PCC = 0.695 (p = 0.002). CONCLUSIONS: In isolation, total fluid volume most closely correlates with change in BCVA values between baseline and week 52. In combination with complementary information from OCT-A, an improvement in the linear correlation score was observed. Average total retinal thickness provided a lower correlation, and thus a lower predictive value, than the alternative metrics assessed. Clinically, a machine-learning approach to analyzing fluid metrics in combination with lesion size may provide an advantage in personalizing therapy and predicting BCVA outcomes at week 52.
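The study's core analysis is a Pearson correlation between a baseline OCT-derived metric (e.g., total fluid volume) and the week-52 BCVA change. A minimal sketch of that computation follows; all data values are invented for illustration and are not the study's data or code:

```python
# Minimal sketch: Pearson correlation between a baseline fluid metric and
# BCVA change, mirroring the analysis described above. Values are illustrative.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical baseline total fluid volumes (mm^3) and week-52 BCVA letter gains
fluid_volume = [0.12, 0.45, 0.30, 0.80, 0.05, 0.60, 0.25, 0.50]
bcva_change = [2, 9, 5, 14, 1, 11, 4, 8]

print(f"PCC = {pearson(fluid_volume, bcva_change):.3f}")
```

In practice one would also report a p-value (e.g., via `scipy.stats.pearsonr`), as the abstract does for each PCC.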

    The Virgin Hodegetria: an Iconic Formula for Miracle Illustrations in the West?


    Head tracking using stereo


    Automated Deep Learning-based Multi-class Fluid Segmentation in Swept-Source Optical Coherence Tomography Images

    Purpose: To evaluate the performance of a deep learning-based, fully automated, multi-class, macular fluid segmentation algorithm relative to expert annotations in a heterogeneous population of confirmed wet age-related macular degeneration (wAMD) subjects. Methods: Twenty-two swept-source optical coherence tomography (SS-OCT) volumes of the macula from 22 different individuals with wAMD were manually annotated by two expert graders. These results were compared using cross-validation (CV) to automated segmentations from a deep learning-based algorithm that encodes spatial information about retinal tissue as an additional input to the network. The algorithm detects and delineates fluid regions in the OCT data, differentiating between intraretinal and subretinal fluid (IRF, SRF), as well as fluid resulting from serous pigment epithelial detachments (PED). Standard metrics for fluid detection and quantification were used to evaluate performance. Results: The per-slice receiver operating characteristic (ROC) areas under the curve (AUCs) were 0.90, 0.94 and 0.94 for IRF, SRF and PED, respectively. Per-volume results were 0.94 and 0.88 for IRF and PED (SRF being present in all cases). The correlations of fluid volume between the expert graders and the algorithm were 0.99 for IRF, 0.99 for SRF and 0.82 for PED. Conclusions: Automated, deep learning-based segmentation is able to accurately detect and quantify different macular fluid types in SS-OCT data on par with expert graders.
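The per-slice ROC AUCs reported above can be computed directly from predicted fluid-presence scores via the Mann-Whitney U statistic: the probability that a randomly chosen positive slice is scored higher than a randomly chosen negative one. A minimal sketch with invented labels and scores (not the paper's data):

```python
# Minimal sketch: ROC AUC for binary per-slice fluid detection,
# computed as the Mann-Whitney U statistic. Data values are illustrative.
def roc_auc(labels, scores):
    """AUC = P(random positive scored higher than random negative),
    counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-slice ground truth (fluid present?) and model scores
labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.92, 0.30, 0.85, 0.35, 0.40, 0.10, 0.75, 0.55]

print(f"AUC = {roc_auc(labels, scores):.3f}")
```

For real evaluations, `sklearn.metrics.roc_auc_score` implements the same quantity with better numerical handling of large inputs.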

    A unified graphical models framework for automated human embryo tracking in time lapse microscopy

    Time-lapse microscopy has emerged as an important modality for studying early human embryo development. Detection of certain events can provide insight into embryo health and fate. Embryo tracking is challenged by a high-dimensional search space, weak features, outliers, occlusions, missing data, multiple interacting deformable targets, changing topology, and a weak motion model. We address these with a data-driven approach that uses a rich set of discriminative image and geometric features and their spatiotemporal context. We pose the mitosis detection problem as augmented simultaneous segmentation and classification in a conditional random field framework that combines tracking-based and tracking-free elements. For 275 clinical image sequences, we measured division events during the first 48 hours of embryo development to within 30 minutes, resulting in an improvement of 24.2% over a tracking-based approach, a 35.7% improvement over a tracking-free approach, and more than an order of magnitude improvement over a traditional particle filter, demonstrating the success of our framework.
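The evaluation above scores a sequence as correct when a division event is localized to within 30 minutes of ground truth. A minimal sketch of that tolerance-based scoring, with invented event times (not the study's data):

```python
# Minimal sketch: fraction of division events predicted within a 30-minute
# tolerance of ground truth, as in the evaluation above. Times are illustrative.
TOLERANCE_MIN = 30

def within_tolerance(predicted, actual, tol=TOLERANCE_MIN):
    """Fraction of events whose predicted time is within `tol` minutes of truth."""
    hits = sum(abs(p - a) <= tol for p, a in zip(predicted, actual))
    return hits / len(actual)

# Hypothetical division times (minutes post start of imaging), one per sequence
actual = [1500, 2250, 2310, 2900]
predicted = [1490, 2295, 2340, 2700]

print(f"{within_tolerance(predicted, actual):.0%} of events within 30 min")
```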

    The Alabama Chronic Respiratory Disease Program


    A Unified Graphical Models Framework for Automated Mitosis Detection in Human Embryos

    Abstract—Time-lapse microscopy has emerged as an important modality for studying human embryo development, as mitosis events can provide insight into embryo health and fate. Mitosis detection can happen through tracking of embryonic cells (tracking based), or from low-level image features and classifiers (tracking free). Tracking-based approaches are challenged by a high-dimensional search space, weak features, outliers, missing data, multiple deformable targets, and a weak motion model. Tracking-free approaches are data driven and complement tracking-based approaches. We pose mitosis detection as augmented simultaneous segmentation and classification in a conditional random field (CRF) framework that combines both approaches. It uses a rich set of discriminative features and their spatiotemporal context. It performs a dual-pass approximate inference that addresses the high dimensionality of tracking and combines results from both components. For 312 clinical sequences we measured division events to within 30 min and observed improvements of 25.6% and 32.9% over purely tracking-based and tracking-free approaches, respectively, and close to an order of magnitude over a traditional particle filter. While our work was motivated by human embryo development, it can be extended to other detection problems in image sequences of evolving cell populations. Index Terms—Data driven Monte Carlo, embryo tracking, graphical models, mitosis detection.

    Person Detection and Head Tracking to Detect Falls in Depth Maps
