
    Surgical Skill Assessment on In-Vivo Clinical Data via the Clearness of Operating Field

    Surgical skill assessment is important for surgery training and quality control. Prior work on this task has largely focused on basic surgical tasks, such as suturing and knot tying, performed in simulation settings. In contrast, this paper studies surgical skill assessment on a real clinical dataset, which consists of fifty-seven in-vivo laparoscopic surgeries and corresponding skill scores annotated by six surgeons. Analyses of this dataset identify the clearness of operating field (COF) as a good proxy for overall surgical skill, given its strong correlation with overall skill and its high inter-annotator consistency. An objective and automated framework based on a neural network is then proposed to predict surgical skill through the proxy of COF. The neural network is jointly trained with a supervised regression loss and an unsupervised rank loss. In experiments, the proposed method achieves 0.55 Spearman's correlation with the ground truth of overall technical skill, which is comparable to the performance of junior surgeons. Comment: MICCAI 201
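    The abstract gives no implementation details for the joint training objective. The sketch below is a minimal, hypothetical illustration of combining a regression loss with a margin-based rank loss in PyTorch; the loss weighting, margin, and the use of score-derived pair orderings are all assumptions (the paper describes its rank loss as unsupervised, so its actual pairing strategy differs):

```python
import torch
import torch.nn as nn

# Hypothetical sketch: jointly train with a regression loss and a
# pairwise rank loss, as the abstract describes at a high level.
mse = nn.MSELoss()
rank = nn.MarginRankingLoss(margin=0.1)  # margin value is an assumption

def joint_loss(pred_a, pred_b, score_a, score_b, alpha=0.5):
    """pred_*: predicted COF scores for a pair of videos;
    score_*: reference skill scores (used here only for illustration)."""
    # Regression term: match predictions to the reference scores.
    reg = mse(pred_a, score_a) + mse(pred_b, score_b)
    # Rank term: preserve the pair ordering (+1 if a should rank above b;
    # ties yield target 0, which contributes a constant margin penalty).
    target = torch.sign(score_a - score_b)
    rnk = rank(pred_a, pred_b, target)
    return reg + alpha * rnk
```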

    Hydrogeological and hydrogeochemical aspects of the Jalo area, Libya

    Domestic, agricultural, and industrial water needs drive the Jalo area of Libya to better understand its groundwater aquifer system

    Early-, late-, and very late-term prediction of target lesion failure in coronary artery stent patients: An international multi-site study.

    The main intervention for coronary artery disease is stent implantation. We aim to predict post-intervention target lesion failure (TLF) months before its onset, an extremely challenging task in the clinic. This post-intervention decision-support tool helps physicians identify at-risk patients much earlier and informs their follow-up care. We developed a novel machine-learning model with three components: a TLF predictor at discharge built from a combination of nine conventional models and a super learner, a risk-score predictor for time-to-TLF, and an update function to manage the size of the at-risk cohort. We collected data in a prospective study from 120 medical centers in over 25 countries. All 1975 patients were enrolled during Phase I (2016–2020) and were followed up for five years post-intervention. During Phase I, 151 patients (7.6%) developed TLF, and these cases were used for training. An additional 12 patients developed TLF after Phase I (right-censored). Our algorithm correctly classifies 1635 patients as not at risk (TNR = 90.23%) and predicts TLF for 86 patients (TPR = 52.76%), generalizing beyond its training data by identifying 33% of the right-censored patients. We also compared our model against five state-of-the-art models, outperforming them all. Our prediction tool is able to optimize for both achieving higher sensitivity and maintaining a reasonable size for the at-risk cohort over time
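    The abstract does not name the nine conventional models or the super learner's meta-model. As a hedged illustration of the general technique, a stacked "super learner" can be assembled with scikit-learn; the base learners and hyperparameters below are placeholders, not the study's configuration:

```python
from sklearn.ensemble import (StackingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Hypothetical base learners standing in for the study's nine conventional models.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200)),
    ("gb", GradientBoostingClassifier()),
    ("svm", SVC(probability=True)),
]

# The super learner combines base-model predictions through a meta-model,
# fit with internal cross-validation to avoid leakage into the meta-features.
super_learner = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    cv=5,
    stack_method="predict_proba",
)

# Usage (assumed feature matrix X and binary TLF labels y):
# super_learner.fit(X_train, y_train)
# tlf_risk = super_learner.predict_proba(X_test)[:, 1]
```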

    Artificial intelligence for prognostic scores in oncology: A benchmarking study.

    Introduction: Prognostic scores are important tools in oncology that facilitate clinical decision-making based on patient characteristics. To date, classical survival analysis using Cox proportional hazards regression has been employed to develop these prognostic scores. With the advance of analytical methods, this study aimed to determine whether more complex machine-learning algorithms could outperform classical survival analysis. Methods: In this benchmarking study, two datasets were used to develop and compare prognostic models for overall survival in pan-cancer populations: a nationwide EHR-derived de-identified database for training and in-sample testing, and the OAK (phase III clinical trial) dataset for out-of-sample testing. The real-world database comprised 136K first-line-treated cancer patients across multiple cancer types and was split into 90% training and 10% testing sets. The OAK dataset comprised 1,187 patients diagnosed with non-small cell lung cancer. To assess the effect of the number of covariates on prognostic performance, we formed three feature sets with 27, 44, and 88 covariates. In terms of methods, we benchmarked ROPRO, a prognostic score based on the Cox model, against eight complex machine-learning models: regularized Cox, Random Survival Forests (RSF), Gradient Boosting (GB), DeepSurv (DS), Autoencoder (AE), and Super Learner (SL). The C-index was used as the performance metric to compare the models. Results: For in-sample testing on the real-world database, the resulting C-index [95% CI] values for RSF 0.720 [0.716, 0.725], GB 0.722 [0.718, 0.727], DS 0.721 [0.717, 0.726], and SL 0.723 [0.718, 0.728] showed significantly better performance than ROPRO 0.701 [0.696, 0.706]. Similar results were obtained across all feature sets. However, in the out-of-sample validation on OAK, the stronger performance of the more complex models was no longer apparent. Consistently, increasing the number of prognostic covariates did not increase model performance. Discussion: The stronger performance of the more complex models did not generalize to the out-of-sample dataset. We hypothesize that future research may benefit from adding multimodal data to exploit the advantages of more complex models
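    The C-index benchmarking workflow described in the abstract can be sketched with a standard survival-analysis library. The example below uses lifelines with an assumed data schema (column names, file paths, and the penalizer value are hypothetical; it fits a plain Cox model as the classical baseline, not the ROPRO score itself):

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Assumed schema: one row per patient, covariates plus
# 'duration' (overall survival time) and 'event' (1 = death observed).
train = pd.read_csv("train.csv")  # hypothetical 90% split
test = pd.read_csv("test.csv")    # hypothetical 10% split

# Classical baseline: Cox proportional hazards with light regularization.
cph = CoxPHFitter(penalizer=0.01)  # penalizer value is an assumption
cph.fit(train, duration_col="duration", event_col="event")

# C-index on held-out data: the probability that the model correctly
# orders survival times. Higher hazard implies shorter expected survival,
# so the score passed in is the negated partial hazard.
risk_score = -cph.predict_partial_hazard(test)
cindex = concordance_index(test["duration"], risk_score, test["event"])
print(f"out-of-sample C-index: {cindex:.3f}")
```

    A complex model (e.g. a random survival forest) would be evaluated the same way on the same splits, so its C-index is directly comparable to the baseline's.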