5 research outputs found

    Learning Semantic and Visual Similarity for Endomicroscopy Video Retrieval

    Traditional Content-Based Image Retrieval (CBIR) systems deliver only visual outputs, which are not directly interpretable by physicians. Our objective is to provide an endomicroscopy video retrieval system that delivers visual and semantic outputs consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval that computes a visual signature for each video. In this study, we first leverage semantic ground-truth data to transform these visual signatures into semantic signatures that reflect how strongly each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that our visual-word-based semantic signatures achieve a recall performance significantly higher than that of several state-of-the-art CBIR methods. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived-similarity ground truth. Our distance-learning method improves, with statistical significance, the correlation with the perceived similarity. The resulting retrieval system efficiently provides both visual and semantic information that are correlated with each other and clinically interpretable by endoscopists.
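The visual-to-semantic mapping described in this abstract could be sketched as follows. This is a minimal illustration only: the function names, the co-occurrence estimator, and the normalization scheme are assumptions for exposition, not the authors' exact method.

```python
import numpy as np

def fit_concept_matrix(word_histograms, concept_labels):
    """Estimate how much each visual word expresses each semantic concept.

    word_histograms: (n_videos, n_words) bag-of-visual-words signatures
    concept_labels:  (n_videos, n_concepts) binary ground-truth annotations
    Returns a (n_words, n_concepts) matrix of word-to-concept weights.
    (Illustrative estimator, not necessarily the one used in the paper.)
    """
    # Word/concept co-occurrence over the annotated training videos,
    # normalized per word so each row is a distribution over concepts.
    cooc = word_histograms.T @ concept_labels      # (n_words, n_concepts)
    row_sums = cooc.sum(axis=1, keepdims=True)
    return np.divide(cooc, row_sums,
                     out=np.zeros_like(cooc, dtype=float),
                     where=row_sums > 0)

def semantic_signature(word_histogram, concept_matrix):
    """Project a video's visual signature into the semantic space."""
    s = word_histogram @ concept_matrix
    total = s.sum()
    return s / total if total > 0 else s
```

Retrieval would then compare videos by a distance between these semantic signatures (the paper additionally learns an adjusted distance from perceived-similarity ground truth).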

    Assessing emphysema in CT scans of the lungs:Using machine learning, crowdsourcing and visual similarity



    Analyse et caractérisation temps réel de vidéos chirurgicales. Application à la chirurgie de la cataracte (Real-time analysis and characterization of surgical videos: application to cataract surgery)

    Huge amounts of medical data are recorded every day, and these data could be very helpful for medical practice. The LaTIM has acquired solid know-how in analyzing such data for decision support. In this PhD thesis, we propose to reuse annotated surgical videos, previously recorded and stored in a dataset, for computer-aided surgery. To provide relevant information, the system first needs to recognize which surgical gesture is being performed at each instant of the surgery, based on the monitoring video. This challenging task is the aim of this thesis. We propose an automatic solution to analyze cataract surgeries in real time, while the video is being recorded. A content-based video retrieval (CBVR) method is used to categorize the monitoring video, in combination with a statistical model of the surgical process that brings contextual information. The system performs an online analysis of the surgical process at two levels of description, for a complete and precise analysis. The methods developed during this thesis were evaluated on a dataset of cataract surgery videos collected in close collaboration with the ophthalmology department of Brest University Hospital (CHRU de Brest). Promising results were obtained for the automatic analysis of cataract surgeries and surgical gesture recognition. The multi-scale statistical model allows an analysis that is both fine-grained and comprehensive. The general approach proposed in this thesis could readily be used for computer-aided surgery, by alerting the surgeon to risky operative situations or by providing real-time recommendations and video sequence examples. The methods developed could also be used to automatically annotate archived surgical videos for indexing purposes.
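The combination of CBVR categorization with a statistical model of the surgical process can be sketched as an online (forward-filtering) recursion: the retrieval step scores the current frame against annotated videos, and the process model constrains which phase transitions are plausible. Everything below — function names, the kNN-vote emission, the transition matrix — is an illustrative assumption, not the thesis's exact model.

```python
import numpy as np

def online_phase_filter(knn_votes, transition, prior):
    """Online estimate of the current surgical phase (forward algorithm).

    knn_votes:  (T, n_phases) per-frame scores from CBVR, e.g. the fraction
                of nearest-neighbour video segments labelled with each phase
                (illustrative emission model).
    transition: (n_phases, n_phases) phase-transition probabilities
                (rows sum to 1), encoding the surgical-process model.
    prior:      (n_phases,) initial phase distribution.
    Returns a (T, n_phases) matrix of filtered phase posteriors.
    """
    T, n = knn_votes.shape
    posteriors = np.zeros((T, n))
    belief = prior.astype(float)
    for t in range(T):
        if t > 0:
            # Predict: propagate yesterday's belief through the process model.
            belief = belief @ transition
        # Correct: weight by the retrieval-based evidence for this frame.
        belief = belief * knn_votes[t]
        belief = belief / belief.sum()
        posteriors[t] = belief
    return posteriors
```

Because the recursion only uses frames up to time t, it can run while the video is being recorded; a second pass at a coarser temporal scale would give the two levels of description mentioned in the abstract.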