3 research outputs found

    Development of Machine-Learning Algorithms to Predict Attainment of Minimal Clinically Important Difference After Hip Arthroscopy for Femoroacetabular Impingement Yield Fair Performance and Limited Clinical Utility

    © 2023 The Author(s). Purpose: To determine whether machine-learning (ML) techniques developed using registry data could predict which patients will achieve the minimum clinically important difference (MCID) on the International Hip Outcome Tool 12 (iHOT-12) patient-reported outcome measure (PROM) after arthroscopic management of femoroacetabular impingement syndrome (FAIS), and, secondly, to determine which preoperative factors contribute to the predictive power of these models. Methods: A retrospective cohort of patients was selected from the UK's Non-Arthroplasty Hip Registry. Inclusion criteria were a diagnosis of FAIS, management via an arthroscopic procedure, and a minimum follow-up of 6 months after index surgery between August 2012 and June 2021. Exclusion criteria were non-arthroscopic procedures and the absence of FAIS. ML models were developed to predict MCID attainment, and model performance was assessed using the area under the receiver operating characteristic curve (AUROC). Results: In total, 1,917 patients were included. The random forest, logistic regression, neural network, support vector machine, and gradient boosting models achieved AUROCs of 0.75 (0.68-0.81), 0.69 (0.63-0.76), 0.69 (0.63-0.76), 0.70 (0.64-0.77), and 0.70 (0.64-0.77), respectively. Demographic factors and disease features did not confer high predictive performance. Baseline PROM scores alone provided predictive performance comparable to models trained on the whole dataset: combined EuroQoL 5-Dimension 5-Level and iHOT-12 baseline scores, and iHOT-12 baseline scores alone, yielded AUROCs of 0.74 (0.68-0.80) and 0.72 (0.65-0.78), respectively, with random forest models. Conclusions: ML models predicted attainment of the iHOT-12 MCID at the 6-month postoperative assessment with fair accuracy. The most successful models were those trained on all patient variables, on all baseline PROMs, and on baseline iHOT-12 responses alone. These models are not currently accurate enough to warrant routine clinical use. Level of Evidence: Level III, retrospective cohort design; prognostic study.
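    As a rough illustration of the modelling approach described in this abstract, the sketch below trains a random forest on baseline PROM scores and evaluates it with AUROC using scikit-learn. The file name, column names (ihot12_baseline, eq5d5l_baseline, mcid_achieved), and hyperparameters are illustrative assumptions, not the registry's actual fields or the authors' pipeline.

```python
# Minimal sketch (assumed data layout): predict MCID attainment from
# baseline PROM scores with a random forest, scored by AUROC.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("nahr_fais_cohort.csv")          # placeholder file name
X = df[["ihot12_baseline", "eq5d5l_baseline"]]    # baseline PROMs only (illustrative columns)
y = df["mcid_achieved"]                           # 1 = iHOT-12 MCID reached at 6 months

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=500, random_state=42)
model.fit(X_train, y_train)

# AUROC on held-out patients, mirroring the paper's evaluation metric
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUROC: {auc:.2f}")
```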

    Accurate delineation of individual tree crowns in tropical forests from aerial RGB imagery using Mask R‐CNN

    Tropical forests are a major component of the global carbon cycle and home to two-thirds of terrestrial species. Upper-canopy trees store the majority of forest carbon and can be vulnerable to drought events and storms. Monitoring their growth and mortality is essential to understanding forest resilience to climate change, but in the context of forest carbon storage, large trees are underrepresented in traditional field surveys, so estimates are poorly constrained. Aerial photographs provide spectral and textural information to discriminate between tree crowns in diverse, complex tropical canopies, potentially opening the door to landscape monitoring of large trees. Here we describe a new deep convolutional neural network method, Detectree2, which builds on the Mask R-CNN computer vision framework to recognize the irregular edges of individual tree crowns from airborne RGB imagery. We trained and evaluated this model with 3797 manually delineated tree crowns at three sites in Malaysian Borneo and one site in French Guiana. As an example application, we combined the delineations with repeat lidar surveys (taken between 3 and 6 years apart) of the four sites to estimate the growth and mortality of upper-canopy trees. Detectree2 delineated 65 000 upper-canopy trees across 14 km² of aerial images. The skill of the automatic method in delineating unseen test trees was good (F1 score = 0.64) and for the tallest category of trees was excellent (F1 score = 0.74). As predicted from previous field studies, we found that growth rate declined with tree height and tall trees had higher mortality rates than intermediate-size trees. Our approach demonstrates that deep learning methods can automatically segment trees in widely accessible RGB imagery. This tool (provided as an open-source Python package) has many potential applications in forest ecology and conservation, from estimating carbon stocks to monitoring forest phenology and restoration.
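    As a rough illustration of the kind of pipeline this abstract describes, the sketch below runs Mask R-CNN instance segmentation on a single RGB aerial tile using detectron2, the framework Detectree2 builds on. The weight file, tile name, class count, and threshold are placeholder assumptions for illustration, not Detectree2's actual API or trained models.

```python
# Minimal sketch (assumed inputs): delineate tree crowns in one RGB aerial
# tile with a Mask R-CNN model built in detectron2.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(
    model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1            # single class: tree crown
cfg.MODEL.WEIGHTS = "trained_crown_model.pth"  # placeholder for fine-tuned weights
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5    # keep reasonably confident detections

predictor = DefaultPredictor(cfg)
tile = cv2.imread("aerial_rgb_tile.png")       # one tile of the orthomosaic (placeholder)
outputs = predictor(tile)

# Each predicted instance is a binary crown mask plus a confidence score
masks = outputs["instances"].pred_masks.cpu().numpy()
print(f"Delineated {len(masks)} candidate tree crowns in this tile")
```

    In practice, large orthomosaics would be split into overlapping tiles before prediction and the per-tile crown polygons merged afterwards; the F1 scores reported above refer to the authors' evaluation against manually delineated crowns, not to this sketch.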