
    Preserving privacy in surgical video analysis using a deep learning classifier to identify out-of-body scenes in endoscopic videos

    Surgical video analysis facilitates education and research. However, video recordings of endoscopic surgeries can contain privacy-sensitive information, especially when the endoscopic camera is moved out of the patient's body and out-of-body scenes are recorded. Identifying out-of-body scenes in endoscopic videos is therefore essential to preserving the privacy of patients and operating room staff. This study developed and validated a deep learning model for the identification of out-of-body images in endoscopic videos. The model was trained and evaluated on an internal dataset covering 12 types of laparoscopic and robotic surgeries and was externally validated on two independent multicentric test datasets of laparoscopic gastric bypass and cholecystectomy surgeries. Model performance was measured against human ground-truth annotations using the area under the receiver operating characteristic curve (ROC AUC). The internal dataset comprised 356,267 annotated images from 48 videos; the two multicentric test datasets comprised 54,385 and 58,349 annotated images from 10 and 20 videos, respectively. The model identified out-of-body images with 99.97% ROC AUC on the internal test dataset. Mean +/- standard deviation ROC AUC was 99.94 +/- 0.07% on the multicentric gastric bypass dataset and 99.71 +/- 0.40% on the multicentric cholecystectomy dataset. The model reliably identifies out-of-body images in endoscopic videos and is publicly shared. This facilitates privacy preservation in surgical video analysis.
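
    The abstract does not specify the released model's architecture or interface, so the following is only a minimal sketch of the general setup it describes: a binary per-frame classifier whose scores are evaluated against human annotations with ROC AUC. The backbone choice, preprocessing, and all names below are assumptions, not the authors' implementation.

    # Hedged sketch: per-frame out-of-body scoring and ROC AUC evaluation.
    # Backbone, preprocessing, and thresholds are illustrative assumptions.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from sklearn.metrics import roc_auc_score

    # Binary classifier head on a pretrained backbone (hypothetical setup).
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # out-of-body logit
    backbone.eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def score_frames(frames):
        """Return an out-of-body probability for each PIL frame."""
        batch = torch.stack([preprocess(f) for f in frames])
        return torch.sigmoid(backbone(batch)).squeeze(1).tolist()

    # Evaluation against human ground-truth annotations, as in the paper:
    # labels: 1 = out-of-body, 0 = inside the body.
    # auc = roc_auc_score(labels, scores)

    At deployment time, the per-frame probabilities could be thresholded to blank or cut out-of-body segments before a recording is shared.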

    Real-time analysis and characterization of surgical videos: application to cataract surgery

    Huge amounts of medical data are recorded every day, and those data can be very helpful for medical practice. The LaTIM has acquired solid know-how in analyzing such data for decision support. In this PhD thesis, we propose to reuse annotated surgical videos, previously recorded and stored in a dataset, for computer-aided surgery. To provide relevant information, we first need to recognize which surgical gesture is being performed at each instant of the surgery, based on the monitoring video. This challenging task is the aim of this thesis. We propose an automatic solution for analyzing cataract surgeries in real time, while the video is being recorded. A content-based video retrieval (CBVR) method is used to categorize the monitoring video, in combination with a statistical model of the surgical process that brings contextual information. The system performs an on-line analysis of the surgical process at two levels of description, for a complete and precise analysis. The methods developed during this thesis were evaluated on a dataset of cataract surgery videos collected in close collaboration with the ophthalmology department of Brest University Hospital (CHRU de Brest). Promising results were obtained for the automatic analysis of cataract surgeries and surgical gesture recognition. The multi-scale statistical model allows an analysis that is both fine-grained and comprehensive. The general approach proposed in this thesis could readily be used for computer-aided surgery, for instance to alert the surgeon to risky operative courses or to provide real-time recommendations and video sequence examples of recognized courses of action. The methods developed could also be used to automatically annotate archived surgical videos for indexing purposes.
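
    The abstract names two ingredients, CBVR over archived annotated frames and a statistical model of the surgical process, without giving implementation details. Below is a minimal sketch assuming nearest-neighbour retrieval over frame features combined with an HMM-style forward update over surgical phases; the feature dimension, phase count, and all probabilities are illustrative placeholders, not the thesis's actual model.

    # Hedged sketch: CBVR retrieval combined with a statistical process model
    # for on-line gesture/phase recognition. All data here is synthetic.
    import numpy as np

    # Database of feature vectors from archived, annotated videos (stand-in).
    db_features = np.random.rand(1000, 128)    # one row per archived frame
    db_labels = np.random.randint(0, 4, 1000)  # gesture/phase id per frame

    # Process model: transitions[i, j] = P(next phase j | current phase i).
    transitions = np.array([
        [0.9, 0.1, 0.0, 0.0],
        [0.0, 0.9, 0.1, 0.0],
        [0.0, 0.0, 0.9, 0.1],
        [0.1, 0.0, 0.0, 0.9],
    ])

    def retrieve_likelihood(query, k=10):
        """CBVR step: vote among the k nearest archived frames."""
        dists = np.linalg.norm(db_features - query, axis=1)
        votes = db_labels[np.argsort(dists)[:k]]
        counts = np.bincount(votes, minlength=4).astype(float)
        return counts / counts.sum()

    def update_belief(belief, query):
        """On-line step: propagate the belief through the process model,
        then weight it by the retrieval evidence (HMM forward update)."""
        predicted = transitions.T @ belief
        posterior = predicted * retrieve_likelihood(query)
        return posterior / posterior.sum()

    belief = np.full(4, 0.25)  # uniform prior over the four phases
    for frame_feature in np.random.rand(5, 128):  # stand-in for live video
        belief = update_belief(belief, frame_feature)
        print("current phase estimate:", belief.argmax())

    The design point this illustrates is the one the abstract makes: retrieval alone labels each frame independently, while the process model adds the contextual constraint that gestures follow a plausible surgical order.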