A deep learning framework for real-time 3D model registration in robot-assisted laparoscopic surgery
Introduction
The current study presents a deep learning framework to determine, in real time, the position and rotation of a target organ from endoscopic video. The inferred pose is used to overlay the 3D model of the patient's organ on its real counterpart, and the resulting augmented video stream is sent back to the surgeon as support during robot-assisted laparoscopic procedures.
Methods
The framework first applies semantic segmentation; two techniques, one based on Convolutional Neural Networks and one on motion analysis, are then used to infer the rotation.
Results
Segmentation achieves high accuracy, with a mean IoU score greater than 80% in all tests. Rotation estimation performance varies with the surgical procedure.
Discussion
Although the precision of the presented methodology varies with the testing scenario, this work is a first step towards the adoption of deep learning and augmented reality to generalise the automatic registration process.
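The mean IoU score reported above is the intersection-over-union averaged across segmentation classes. As a minimal illustration of that metric only (the function name and the toy masks below are illustrative, not from the original work), a Python sketch:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes for a segmentation mask."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:  # class absent in both masks: skip it
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# toy 2x2 masks with two classes
pred   = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))  # ≈ 0.583
```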
Development of a Training Tool for Endotracheal Intubation: Distributed Augmented Reality
The authors introduce a tool referred to as the Ultimate Intubation Head (UIH) to train medical practitioners’ hand-eye coordination in performing endotracheal intubation with the help of augmented reality methods. In this paper we describe the integration of a deployable UIH and present methods for augmented reality registration of real and virtual anatomical models. For the 52-degree field-of-view optics of the custom-designed and built head-mounted display, blur and astigmatism, the two limiting optical aberrations, are less than 1.5 arc minutes, and distortion is less than 2.5%. Preliminary registration of a physical phantom mandible onto its virtual counterpart yields an error of less than 3 mm RMS. Finally, we describe an approach to distributed visualization in which a given training procedure may be visualized and shared at various remote locations. Basic assessments of delays in two data-distribution scenarios were conducted and are reported.
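The registration figure quoted above is a root-mean-square error over corresponding fiducial points on the real and virtual models. A minimal Python sketch of that computation (the point coordinates are made up for illustration):

```python
import numpy as np

def rms_error(real_pts, virtual_pts):
    """Root-mean-square distance (mm) between corresponding fiducial points."""
    d = np.linalg.norm(real_pts - virtual_pts, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# three corresponding landmarks on the physical and virtual mandible (mm)
real    = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
virtual = np.array([[1.0, 0.0, 0.0], [10.0, 2.0, 0.0], [0.0, 10.0, 2.0]])
print(rms_error(real, virtual))  # ≈ 1.732
```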
DEFORM'06 - Proceedings of the Workshop on Image Registration in Deformable Environments
Preface
These are the proceedings of DEFORM'06, the Workshop on Image Registration in Deformable Environments, associated with BMVC'06, the 17th British Machine Vision Conference, held in Edinburgh, UK, in September 2006. The goal of DEFORM'06 was to bring together people from different domains with interests in deformable image registration. In response to our Call for Papers, we received 17 submissions and selected 8 for oral presentation at the workshop. In addition to the regular papers, Andrew Fitzgibbon from Microsoft Research Cambridge gave an invited talk at the workshop. The conference website, including the online proceedings, remains open; see http://comsee.univ-bpclermont.fr/events/DEFORM06. We would like to thank the BMVC'06 co-chairs, Mike Chantler, Manuel Trucco and especially Bob Fisher for his great help with the local arrangements, Andrew Fitzgibbon, and the Programme Committee members, who provided insightful reviews of the submitted papers. Special thanks go to Marc Richetin, head of the CNRS Research Federation TIMS, which sponsored the workshop. August 2006 Adrien Bartoli Nassir Navab Vincent Lepetit
Application of Augmented Reality to Visualizing Anatomical Airways
Visualizing information in three dimensions provides an increased understanding of the data presented, and the ability to manipulate and interact with data is greater when it is visualized in three dimensions. Within the medical community, augmented reality is being used for interactive, three-dimensional (3D) visualization. This type of visualization, which enhances the real world with computer-generated information, requires a display device, a computer to generate the 3D data, and a system to track the user. In addition to these requirements, the hardware must be properly integrated to ensure correct visualization. To this end, we present the components of an integrated augmented reality system consisting of a novel head-mounted projective display, a Linux-based PC, and a commercially available optical tracking system. We demonstrate the system by visualizing anatomical airways superimposed on a human patient simulator.
Image-Guided Simulation for Augmented Reality in Hepatic Surgery (Simulation Guidée par l’Image pour la Réalité Augmentée durant la Chirurgie Hépatique)
The main objective of this thesis is to provide surgeons with pre- and intra-operative decision-support tools during minimally invasive hepatic surgery. These interventions are usually based on laparoscopic techniques or, more recently, flexible endoscopy. During such operations, the surgeon tries to remove a significant number of liver tumors while preserving the functional role of the liver. This involves defining an optimal hepatectomy, i.e. ensuring that the volume of the post-operative liver is at least 55% of the original liver while preserving the hepatic vasculature as far as possible. Although intervention planning can now be considered on the basis of preoperative patient-specific data, significant movements and deformations of the liver during surgery make this planning very difficult to use in practice. The work proposed in this thesis aims to provide augmented reality tools that can be used in intra-operative conditions to visualize, at any time, the position of tumors and hepatic vascular networks.
Articulated Statistical Shape Modelling of the Shoulder Joint
The shoulder joint is the most mobile and unstable joint in the human body, which makes it vulnerable to soft tissue pathologies and dislocation. Insight into the kinematics of the joint may enable improved diagnosis and treatment of different shoulder pathologies. Shoulder joint kinematics can be influenced by the articular geometry of the joint. The aim of this project was to develop an analysis framework for shoulder joint kinematics using articulated statistical shape models (ASSMs). Articulated statistical shape models extend conventional statistical shape models by combining the shape variability of anatomical objects collected from different subjects (statistical shape models) with the physical variation of pose between the same objects (articulation). The developed pipeline involved manual annotation of anatomical landmarks selected on 3D surface meshes of scapulae and humeri, and establishing dense surface correspondence across these data through a registration process. The registration was performed using a Gaussian process morphable model fitting approach. To register the two objects separately while keeping their shape and kinematic relationship intact, one object (the scapula) was fixed, leaving the other (the humerus) mobile. All pairs of registered humeri and scapulae were brought back to their native imaged position using the inverse of the associated registration transformation. The glenohumeral rotational center and the local anatomic coordinate systems of the humeri and scapulae were determined using the definitions suggested by the International Society of Biomechanics. Three motions (flexion, abduction, and internal rotation) were generated using Euler angle sequences. The ASSM was built using principal component analysis and validated. The validation results show that the model adequately estimated the shape and pose encoded in the training data.
Developing an ASSM of the shoulder joint helps to define the statistical shape and pose parameters of the glenohumeral articulating surfaces. An ASSM of the shoulder joint has potential applications in the analysis and investigation of population-wide joint posture variation and kinematics. Such analyses may include determining and quantifying abnormal articulation of the joint based on the range of motion; understanding detailed glenohumeral joint function and internal joint measurements; and diagnosing shoulder pathologies. Future work will involve developing a protocol for encoding the shoulder ASSM with real, rather than handcrafted, pose variation.
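The core shape-modelling step described above, principal component analysis over corresponded landmark coordinates, can be sketched as follows. The array sizes, function names, and random toy data are illustrative assumptions, not the actual thesis pipeline:

```python
import numpy as np

def build_ssm(shapes):
    """PCA statistical shape model.
    shapes: (n_subjects, n_points * 3) array of corresponded landmark coords."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data gives the principal modes of shape variation
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    variances = S ** 2 / (shapes.shape[0] - 1)  # variance explained per mode
    return mean, Vt, variances

def synthesize(mean, modes, b):
    """Generate a new shape from mode weights b (one weight per retained mode)."""
    return mean + b @ modes[: len(b)]

rng = np.random.default_rng(0)
shapes = rng.normal(size=(10, 15))  # 10 toy subjects, 5 landmarks in 3D
mean, modes, var = build_ssm(shapes)
new_shape = synthesize(mean, modes, np.array([0.5, -0.2]))
print(new_shape.shape)  # (15,)
```

A real pipeline would weight the mode coefficients by the per-mode standard deviations and, for an articulated model, concatenate pose parameters with the shape vector before the PCA.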