276 research outputs found

    Computer-Assisted Liver Surgery: from preoperative 3D patient modelling to peroperative guidance

    Surgery offers the best survival rate for hepatic cancer. However, such interventions cannot be undertaken for all patients, as the eligibility rules for liver surgery lack accuracy and admit many exceptions. Medical image processing can lead to a major improvement in patient care by guiding the surgical gesture. We present here a new computer-assisted surgical procedure including preoperative 3D patient modelling, followed by virtual surgical planning and finalized by intraoperative computer guidance through the use of augmented reality (AR). First evaluations, including clinical applications, validate the expected benefit. The next step will consist of automating the intraoperative augmented reality system through the development of a hybrid operating room.
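    The intraoperative guidance described above boils down to registering the preoperative 3D model with the live camera view and drawing it over the image. The sketch below (not taken from the paper) illustrates only the projection step with a pinhole camera model; the intrinsics K, the rigid pose R, t, and the mesh vertices are placeholder values, assuming NumPy.

        import numpy as np

        def project_model(vertices, K, R, t):
            """Project 3D model vertices (N x 3, in the model frame) into pixel
            coordinates using a pinhole camera model."""
            cam = vertices @ R.T + t           # model frame -> camera frame
            uvw = cam @ K.T                    # camera frame -> homogeneous pixels
            return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

        # Placeholder preoperative liver mesh vertices (metres, model frame)
        vertices = np.array([[0.00, 0.00, 0.00],
                             [0.05, 0.00, 0.01],
                             [0.00, 0.04, 0.02]])

        # Placeholder camera intrinsics and patient-to-camera rigid registration
        K = np.array([[800.0,   0.0, 320.0],
                      [  0.0, 800.0, 240.0],
                      [  0.0,   0.0,   1.0]])
        R = np.eye(3)
        t = np.array([0.0, 0.0, 0.3])          # model placed 30 cm from the camera

        pixels = project_model(vertices, K, R, t)
        print(pixels)                          # 2D points to overlay on the endoscopic image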

    Multiple Synchronous Squamous Cell Cancers of the Skin and Esophagus: Differential Management of Primary Versus Secondary Tumor

    Multiple primary tumors are uncommon in patients with squamous cell esophageal cancer, and conventional imaging methods have limitations in detecting them. Although 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) increases the detection of multiple synchronous tumors in patients with other malignancies, its contribution in patients with squamous cell esophageal cancer has not been assessed, as it is not systematically performed. The detection of synchronous skin squamous cell tumors in patients with squamous cell esophageal cancer presents a challenge for diagnostic and therapeutic decision-making. A metastatic tumor leads to palliative management, whereas the diagnosis of a primary skin tumor requires curative treatment of both squamous cell tumors. Pathological evaluation therefore appears crucial to the decision.

    ST(OR)2: Spatio-Temporal Object Level Reasoning for Activity Recognition in the Operating Room

    Surgical robotics holds much promise for improving patient safety and clinician experience in the Operating Room (OR). However, it also comes with new challenges, requiring strong team coordination and effective OR management. Automatic detection of surgical activities is a key requirement for developing AI-based intelligent tools to tackle these challenges. Current state-of-the-art surgical activity recognition methods, however, operate on image-based representations and depend on large-scale labeled datasets whose collection is time-consuming and resource-expensive. This work proposes a new sample-efficient and object-based approach for surgical activity recognition in the OR. Our method focuses on the geometric arrangements between clinicians and surgical devices, thus leveraging the significant object interaction dynamics in the OR. We conduct experiments in a low-data regime for long-video activity recognition. We also benchmark our method against other object-centric approaches on clip-level action classification and show superior performance.
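    The object-based formulation above can be pictured with a small model: each frame is reduced to a set of detected objects (class identity plus a normalized bounding box), a shared encoder embeds and pools them, and a recurrent layer aggregates the frames into a clip-level activity prediction. This is a minimal sketch of that general idea, not the ST(OR)2 architecture; the class counts, dimensions, and PyTorch layers are placeholder choices.

        import torch
        import torch.nn as nn

        class ObjectLevelActivityNet(nn.Module):
            """Toy object-centric clip classifier: objects are embedded from their
            class id and normalized box, mean-pooled per frame, and a GRU
            aggregates the frame summaries into a clip-level prediction."""
            def __init__(self, num_obj_classes=10, num_activities=8, dim=64):
                super().__init__()
                self.obj_embed = nn.Embedding(num_obj_classes, dim)
                self.geom_mlp = nn.Sequential(nn.Linear(4, dim), nn.ReLU())
                self.temporal = nn.GRU(dim, dim, batch_first=True)
                self.head = nn.Linear(dim, num_activities)

            def forward(self, obj_classes, obj_boxes):
                # obj_classes: (B, T, N) int ids; obj_boxes: (B, T, N, 4) in [0, 1]
                feats = self.obj_embed(obj_classes) + self.geom_mlp(obj_boxes)
                frame_feats = feats.mean(dim=2)        # pool over the N objects per frame
                clip_feats, _ = self.temporal(frame_feats)
                return self.head(clip_feats[:, -1])    # clip-level activity logits

        model = ObjectLevelActivityNet()
        logits = model(torch.randint(0, 10, (2, 16, 5)), torch.rand(2, 16, 5, 4))
        print(logits.shape)                            # torch.Size([2, 8])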

    Latent Graph Representations for Critical View of Safety Assessment

    Assessing the critical view of safety (CVS) in laparoscopic cholecystectomy requires accurate identification and localization of key anatomical structures, reasoning about their geometric relationships to one another, and determining the quality of their exposure. Prior works have approached this task by including semantic segmentation as an intermediate step, using the predicted segmentation masks to then predict the CVS. While these methods are effective, they rely on extremely expensive ground-truth segmentation annotations and tend to fail when the predicted segmentation is incorrect, limiting generalization. In this work, we propose a method for CVS prediction wherein we first represent a surgical image using a disentangled latent scene graph, then process this representation using a graph neural network. Our graph representations explicitly encode semantic information (object location, class information, geometric relations) to improve anatomy-driven reasoning, as well as visual features to retain differentiability and thereby provide robustness to semantic errors. Finally, to address annotation cost, we propose to train our method using only bounding box annotations, incorporating an auxiliary image reconstruction objective to learn fine-grained object boundaries. We show that our method not only outperforms several baseline methods when trained with bounding box annotations, but also scales effectively when trained with segmentation masks, maintaining state-of-the-art performance.
    Comment: 12 pages, 4 figures
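    A scene-graph formulation like the one above can be sketched concretely: nodes carry a box and a visual feature for each detected structure, edges encode which structures are related, one round of message passing mixes neighbouring nodes, and a graph-level readout scores the CVS criteria as a multi-label problem. The code below is an illustrative toy, not the paper's model; the adjacency construction, feature dimensions, and single message-passing round are placeholder assumptions.

        import torch
        import torch.nn as nn

        class SceneGraphCVS(nn.Module):
            """Toy graph network for CVS-style multi-label prediction: each node is
            a detected structure (normalized box + visual feature); one round of
            mean message passing over a given adjacency matrix, then a readout."""
            def __init__(self, visual_dim=128, dim=64, num_criteria=3):
                super().__init__()
                self.node_enc = nn.Linear(4 + visual_dim, dim)
                self.msg = nn.Linear(dim, dim)
                self.head = nn.Linear(dim, num_criteria)

            def forward(self, boxes, visual, adj):
                # boxes: (B, N, 4), visual: (B, N, visual_dim), adj: (B, N, N)
                h = torch.relu(self.node_enc(torch.cat([boxes, visual], dim=-1)))
                deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
                h = torch.relu(h + (adj @ self.msg(h)) / deg)   # aggregate neighbours
                return self.head(h.mean(dim=1))                 # one logit per CVS criterion

        model = SceneGraphCVS()
        boxes, visual = torch.rand(2, 6, 4), torch.rand(2, 6, 128)
        adj = (torch.rand(2, 6, 6) > 0.5).float()
        logits = model(boxes, visual, adj)                      # train with BCEWithLogitsLoss
        print(torch.sigmoid(logits).shape)                      # torch.Size([2, 3])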