20 research outputs found

    Pixel-Wise Recognition for Holistic Surgical Scene Understanding

    This paper presents the Holistic and Multi-Granular Surgical Scene Understanding of Prostatectomies (GraSP) dataset, a curated benchmark that models surgical scene understanding as a hierarchy of complementary tasks with varying levels of granularity. Our approach enables a multi-level comprehension of surgical activities, encompassing long-term tasks such as surgical phase and step recognition and short-term tasks including surgical instrument segmentation and atomic visual action detection. To exploit our proposed benchmark, we introduce the Transformers for Actions, Phases, Steps, and Instrument Segmentation (TAPIS) model, a general architecture that combines a global video feature extractor with localized region proposals from an instrument segmentation model to tackle the multi-granularity of our benchmark. Through extensive experimentation, we demonstrate the impact of including segmentation annotations in short-term recognition tasks, highlight the varying granularity requirements of each task, and establish TAPIS's superiority over previously proposed baselines and conventional CNN-based models. Additionally, we validate the robustness of our method across multiple public benchmarks, confirming the reliability and applicability of our dataset. This work represents a significant step forward in Endoscopic Vision, offering a novel and comprehensive framework for future research towards a holistic understanding of surgical procedures. Comment: Preprint submitted to Medical Image Analysis. Official extension of previous MICCAI 2022 (https://link.springer.com/chapter/10.1007/978-3-031-16449-1_42) and ISBI 2023 (https://ieeexplore.ieee.org/document/10230819) orals. Data and code are available at https://github.com/BCV-Uniandes/GraS
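
    The architecture described above, a global video feature extractor combined with localized region proposals from an instrument segmentation model, can be sketched roughly as follows. This is not the official TAPIS code from the GraSP repository; the module names, feature dimensions, and class counts are placeholder assumptions, chosen only to show how long-term heads (phase, step) can read a clip-level embedding while short-term heads (instrument, action) read region features fused with that clip context.

        import torch
        import torch.nn as nn

        class TwoBranchSurgicalModel(nn.Module):
            # Illustrative only: a global clip embedding feeds the long-term heads;
            # per-region features from an external instrument-segmentation model
            # feed the short-term heads after fusion with the clip context.
            def __init__(self, video_dim=768, region_dim=256,
                         n_phases=11, n_steps=21, n_instruments=7, n_actions=14):
                super().__init__()
                self.phase_head = nn.Linear(video_dim, n_phases)
                self.step_head = nn.Linear(video_dim, n_steps)
                self.region_proj = nn.Linear(region_dim + video_dim, 256)
                self.instrument_head = nn.Linear(256, n_instruments)
                self.action_head = nn.Linear(256, n_actions)

            def forward(self, clip_feat, region_feats):
                # clip_feat: (B, video_dim) embedding from a video feature extractor
                # region_feats: (B, R, region_dim) features pooled over segmentation proposals
                ctx = clip_feat.unsqueeze(1).expand(-1, region_feats.size(1), -1)
                fused = torch.relu(self.region_proj(torch.cat([region_feats, ctx], dim=-1)))
                return {
                    "phase": self.phase_head(clip_feat),
                    "step": self.step_head(clip_feat),
                    "instrument": self.instrument_head(fused),
                    "action": self.action_head(fused),
                }

        # Example: a batch of 2 clips with 5 region proposals each.
        model = TwoBranchSurgicalModel()
        out = model(torch.randn(2, 768), torch.randn(2, 5, 256))
        print({k: tuple(v.shape) for k, v in out.items()})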

    Hierarchical, informed and robust machine learning for surgical tool management

    This thesis focuses on the development of a computer vision and deep learning based system for the intelligent management of surgical tools. The work included the development of a new dataset, the creation of state-of-the-art techniques to cope with volume, variety and vision problems, and the design or adaptation of algorithms to address specific surgical tool recognition issues. The system was trained to cope with a wide variety of tools with very subtle differences in shape, and was designed to work with high volumes as well as varying illuminations and backgrounds. The methodology adopted in this thesis included the creation of a surgical tool image dataset and the development of a surgical tool attribute matrix, or knowledge base. This was significant because there are no large-scale publicly available surgical tool datasets, nor are there established annotations or datasets of textual descriptions of surgical tools that can be used for machine learning. The work resulted in a new hierarchical architecture for multi-level predictions at surgical speciality, pack, set and tool level. Additional work evaluated the use of synthetic data to improve the robustness of the CNN, and the infusion of knowledge to improve predictive performance.
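
    As a rough illustration of the multi-level prediction idea described above, the sketch below shows one common way to realize a speciality/pack/set/tool hierarchy: a shared image encoder with one classification head per level, plus a binary set-to-tool membership matrix that suppresses tool predictions inconsistent with the predicted set. The thesis does not publish this exact code; the encoder, dimensions, and masking rule are assumptions made for illustration only.

        import torch
        import torch.nn as nn

        class HierarchicalToolClassifier(nn.Module):
            # Shared image encoder with one head per level (speciality -> pack -> set -> tool).
            # A fixed binary set-to-tool matrix masks tool logits that contradict the
            # predicted set, so finer predictions stay consistent with coarser ones.
            def __init__(self, encoder, feat_dim, n_spec, n_pack, n_set, n_tool, set_to_tool):
                super().__init__()
                self.encoder = encoder
                self.heads = nn.ModuleDict({
                    "speciality": nn.Linear(feat_dim, n_spec),
                    "pack": nn.Linear(feat_dim, n_pack),
                    "set": nn.Linear(feat_dim, n_set),
                    "tool": nn.Linear(feat_dim, n_tool),
                })
                self.register_buffer("set_to_tool", set_to_tool)  # (n_set, n_tool) 0/1 membership

            def forward(self, images):
                feat = self.encoder(images)                       # (B, feat_dim)
                logits = {level: head(feat) for level, head in self.heads.items()}
                set_idx = logits["set"].argmax(dim=1)             # most likely set per image
                mask = self.set_to_tool[set_idx]                  # (B, n_tool)
                logits["tool"] = logits["tool"].masked_fill(mask == 0, float("-inf"))
                return logits

        # Example with a trivial encoder and a random set-to-tool hierarchy.
        encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
        membership = (torch.rand(4, 20) > 0.5).float()
        model = HierarchicalToolClassifier(encoder, 128, n_spec=3, n_pack=5, n_set=4,
                                           n_tool=20, set_to_tool=membership)
        out = model(torch.randn(2, 3, 64, 64))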

    Surgical Data Science - from Concepts toward Clinical Translation

    Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives as well as available standards and tools for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.

    Model-driven and Data-driven Methods for Recognizing Compositional Interactions from Videos

    The ability to accurately understand how humans interact with their surroundings is critical for many vision-based intelligent systems. Compared to simple atomic actions (e.g., raise hand), many interactions found in our daily lives are defined as a composition of an atomic action with a variety of arguments (e.g., pick up a pen). Despite recent progress in the literature, fundamental challenges unique to recognizing interactions from videos remain. First, most of the action recognition literature assumes a problem setting where a pre-defined set of action labels is supported by a large and relatively balanced set of training examples for those labels. There are many realistic cases where this data assumption breaks down, either because the application demands fine-grained classification of a potentially combinatorial number of activities, and/or because the problem at hand is an "open-set" problem where new labels may be defined at test time. Second, many deep video models simply represent video as a three-dimensional tensor and ignore the differences between the spatial and temporal dimensions during the representation learning stage. As a result, data-driven bottom-up action models frequently over-fit to the static content of the video and fail to accurately capture the dynamic changes in relations among actors in the video.

    In this dissertation, we address the aforementioned challenges of recognizing fine-grained interactions from videos by developing solutions that explicitly represent interactions as compositions of simpler static and dynamic elements. By exploiting the power of composition, our "detection by description" framework expresses a very rich space of interactions using only a small set of static visual attributes and a few dynamic patterns. A definition of an interaction is constructed on the fly from first-principles state machines which leverage bottom-up deep-learned components such as object detectors. Compared to existing model-driven methods for video understanding, we introduce the notion of dynamic action signatures, which allows a practitioner to express the expected temporal behavior of the various elements of an interaction. We show that our model-driven approach using dynamic action signatures outperforms other zero-shot methods on multiple public action classification benchmarks, and even some fully supervised baselines, under realistic problem settings.

    Next, we extend our approach to a setting where the static and dynamic action signatures are not given by the user but rather learned from data. We do so by borrowing ideas from data-driven, two-stream action recognition and model-driven, structured human-object interaction detection. The key idea behind our approach is that we can learn the static and dynamic decomposition of an interaction with a dual-pathway network by leveraging object detections. To do so, we introduce the Motion Guided Attention Fusion mechanism, which transfers the motion-centric features learned using object detections to the representation learned from the RGB-based motion pathway.

    Finally, we conclude with a comprehensive case study on vision-based activity detection applied to video surveillance. Using the methods presented in this dissertation, we step towards an intelligent vision system that can detect a particular interaction instance given only a description from a user, departing from the requirement for massive datasets of labeled training videos. Moreover, as our framework naturally defines a decompositional structure of activities into detectable static/visual attributes, we show that we can simulate the necessary training data to acquire attribute detectors when the desired detector is otherwise unavailable. Our approach achieves competitive or superior performance over existing approaches for recognizing fine-grained interactions in realistic videos.
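
    To make the "detection by description" idea above concrete, here is a toy sketch of an interaction defined as a first-principles state machine over per-frame detector output. The dissertation's actual dynamic action signatures are richer than this; the Detection fields, the "pick up a pen" states, and the transition predicates below are simplified assumptions meant only to show how an interaction can be declared rather than learned from labeled clips.

        from dataclasses import dataclass

        @dataclass
        class Detection:
            label: str        # object class from a bottom-up detector
            near_hand: bool   # static attribute: object is close to a hand
            moving: bool      # dynamic attribute: object is in motion

        class PickUpInteraction:
            # Illustrative state machine for "pick up <object>": an ordered
            # sequence of predicates over detector output, declared up front
            # instead of learned end-to-end from labeled training clips.
            def __init__(self, target="pen"):
                self.target = target
                self.state = "idle"

            def step(self, det: Detection) -> bool:
                if det.label != self.target:
                    return False
                if self.state == "idle" and det.near_hand:
                    self.state = "approach"
                elif self.state == "approach" and det.near_hand and not det.moving:
                    self.state = "contact"
                elif self.state == "contact" and det.moving:
                    self.state = "lift"   # dynamic signature: object starts moving with the hand
                return self.state == "lift"

        # Usage: feed one detection of the target object per frame; True marks recognition.
        fsm = PickUpInteraction("pen")
        frames = [Detection("pen", False, False), Detection("pen", True, False),
                  Detection("pen", True, False), Detection("pen", True, True)]
        print([fsm.step(d) for d in frames])   # [False, False, False, True]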

    Online courses for healthcare professionals: is there a role for social learning?

    Background: All UK postgraduate medical trainees receive supervision from trained supervisors. Training has traditionally been delivered via face-to-face courses, but with increasing time pressures and complex shift patterns, access to these is difficult. To meet this challenge, we developed a two-week massive open online course (MOOC) for faculty development of clinical supervisors. Summary of Work: The MOOC was developed by a group of experienced medical educators and delivered via the FutureLearn (FL) platform, which promotes social learning through interaction. This facilitates the building of communities of practice, learner interaction and collaboration. We explored learner perceptions of the course, in particular the value of social learning in the context of busy healthcare professionals. We analysed responses to pre- and post-course surveys for each run of the MOOC in 2015, FL course statistics, and learner discussion board comments. Summary of Results: Over 2015, 7,225 learners registered for the course, though 6% left the course without starting. Of the 3,055 learners who began the course, 35% (1,073/3,055) were social learners who interacted with other participants. Around 31% (960/3,055) of learners participated fully in the course; this is significantly higher than the FL average of 22%. Survey responses suggest that 68% of learners worked full-time, with over 75% accessing the course at home or while commuting, using laptops, smartphones and tablet devices. Discussion: Learners found the course very accessible due to the bite-sized videos, animations, etc., which were manageable at the end of a busy working day. Inter-professional discussions and social learning made the learning environment more engaging. Discussions were rated as high quality as they facilitated sharing of narratives and personal reflections, as well as relevant resources. Conclusion: Social learning added value to the course by promoting sharing of resources and improved interaction between learners within the online environment. Take Home Messages: 1) MOOCs can provide faculty development efficiently with a few caveats. 2) Social learning added a new dimension to the online environment.

    LWA 2013. Lernen, Wissen & Adaptivität; Workshop Proceedings, Bamberg, 7.-9. October 2013

    LWA Workshop Proceedings: LWA stands for "Lernen, Wissen, Adaption" (Learning, Knowledge, Adaptation). It is the joint forum of four special interest groups of the German Computer Science Society (GI). Following the tradition of previous years, LWA provides a joint forum for experienced and young researchers to share insights into recent trends, technologies and applications, and to promote interaction among the SIGs.