
    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
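    As a hedged illustration of the passive-stereo family of techniques surveyed here, the sketch below recovers depth from disparity for a rectified stereo pair; the function name and the example numbers (focal length, baseline) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def triangulate_rectified(disparity_px, focal_px, baseline_mm):
    """Depth from disparity for a rectified stereo pair: Z = f * B / d.

    disparity_px : per-pixel disparity map (pixels), zeros where no match
    focal_px     : focal length in pixels
    baseline_mm  : distance between the two camera centres in mm
    """
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > 0  # a disparity of 0 means no correspondence found
    depth[valid] = focal_px * baseline_mm / disparity[valid]
    return depth  # depth map in mm

# Example: with a 5 mm stereo baseline and f = 800 px, a feature matched
# with 16 px disparity lies ~250 mm from the camera.
print(triangulate_rectified(np.array([[16.0]]), 800.0, 5.0))
```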

    Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery

    Minimally invasive surgery is playing an increasingly important role in patient care. Whilst its direct patient benefit in terms of reduced trauma, improved recovery and shortened hospitalisation has been well established, there is a sustained need for improved training of the existing procedures and for new smart instruments that tackle the issues of visualisation, ergonomic control, and haptic and tactile feedback. For endoscopic intervention, the small field of view in the presence of complex anatomy can easily disorient the operator, as the tortuous access pathway is not always easy to predict and control with standard endoscopes. Effective training through simulation devices, based on either virtual-reality or mixed-reality simulators, can help to improve the spatial awareness, consistency and safety of these procedures. This thesis examines the use of endoscopic videos for both simulation and navigation purposes. More specifically, it addresses the challenging problem of how to build high-fidelity, subject-specific simulation environments for improved training and skills assessment. Issues related to mesh parameterisation and texture blending are investigated. With the maturity of computer vision in terms of both 3D shape reconstruction and localisation and mapping, vision-based techniques have enjoyed significant interest in recent years for surgical navigation. The thesis also tackles the problem of how to use vision-based techniques to provide a detailed 3D map and a dynamically expanded field of view, improving spatial awareness and avoiding operator disorientation. The key advantage of this approach is that it does not require additional hardware, and thus introduces minimal interference to the existing surgical workflow. The derived 3D map can be effectively integrated with pre-operative data, allowing both global and local 3D navigation by taking into account tissue structural and appearance changes. Both simulation and laboratory-based experiments are conducted throughout this research to assess the practical value of the proposed method.
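    Texture blending across overlapping views is one of the issues investigated; as a rough sketch of one generic approach (distance-weighted feathering, not necessarily the thesis' own method), the snippet below blends two texture patches so their seam fades out. All names here are hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def feather_blend(tex_a, tex_b, mask_a, mask_b):
    """Blend two overlapping texture patches with distance-based weights.

    tex_a, tex_b  : HxWx3 float images, zero outside their masks
    mask_a, mask_b: HxW booleans marking where each patch has valid texels
    Weights grow with the distance to each patch border, so seams fade out.
    """
    w_a = distance_transform_edt(mask_a)  # distance to patch border
    w_b = distance_transform_edt(mask_b)
    total = w_a + w_b
    total[total == 0] = 1.0  # avoid division by zero outside both patches
    w_a = (w_a / total)[..., None]
    w_b = (w_b / total)[..., None]
    return tex_a * w_a + tex_b * w_b
```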

    Medical SLAM in an autonomous robotic system

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This thesis addresses the ambitious goal of achieving surgical autonomy through the study of the anatomical environment, starting from the technology needed to analyse the scene: vision sensors. The first part of this thesis presents a novel endoscope for autonomous surgical task execution, which combines a standard stereo camera with a depth sensor. This solution introduces several key advantages, such as the possibility of reconstructing the 3D surface at a greater distance than traditional endoscopes. The problem of hand-eye calibration is then tackled, uniting the vision system and the robot in a single reference system and increasing the accuracy of the surgical work plan. The second part of the thesis addresses the problem of 3D reconstruction and the algorithms currently in use. In MIS, simultaneous localization and mapping (SLAM) can be used to localize the pose of the endoscopic camera and build a 3D model of the tissue surface. Another key element for MIS is to have real-time knowledge of the pose of surgical tools with respect to the surgical camera and underlying anatomy. Starting from the ORB-SLAM algorithm, we modified the architecture to make it usable in an anatomical environment by registering the pre-operative information of the intervention to the map obtained from SLAM. Once the SLAM algorithm was shown to be usable in an anatomical environment, it was improved by adding semantic segmentation to distinguish dynamic features from static ones. All the results in this thesis are validated on training setups that mimic some of the challenges of real surgery, and on setups that simulate the human body, within the Autonomous Robotic Surgery (ARS) and Smart Autonomous Robotic Assistant Surgeon (SARAS) projects.
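    The hand-eye calibration step described above is the classic AX = XB problem; the following sketch shows one plausible way to solve it with OpenCV's cv2.calibrateHandEye on synthetic, noiseless poses. It is an assumption-laden illustration of the technique, not the thesis' actual pipeline.

```python
import cv2
import numpy as np

def rand_pose(rng):
    """Random rigid transform as a 4x4 homogeneous matrix."""
    R, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, 3))
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, rng.uniform(-0.1, 0.1, 3)
    return T

rng = np.random.default_rng(0)
X_true = rand_pose(rng)          # unknown camera-to-gripper transform
T_base_target = rand_pose(rng)   # fixed calibration target in robot base

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):              # ten simulated robot stations
    T_base_gripper = rand_pose(rng)
    # Target pose seen by the camera, generated consistently with X_true:
    T_cam_target = np.linalg.inv(T_base_gripper @ X_true) @ T_base_target
    R_g2b.append(T_base_gripper[:3, :3]); t_g2b.append(T_base_gripper[:3, 3])
    R_t2c.append(T_cam_target[:3, :3]);   t_t2c.append(T_cam_target[:3, 3])

# Solve AX = XB; the result expresses the camera in the robot's frame,
# which is what lets SLAM output be used in robot coordinates.
R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print(np.allclose(R_est, X_true[:3, :3], atol=1e-4))  # rotation recovered
```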


    Impact of Imaging and Distance Perception in VR Immersive Visual Experience

    Virtual reality (VR) headsets have evolved to offer unprecedented viewing quality. Meanwhile, they have become lightweight, wireless, and low-cost, which has opened up new applications and a much wider audience. VR headsets can now provide users with a greater understanding of events and accuracy of observation, making decision-making faster and more effective. However, the spread of immersive technologies has seen a slow take-up, with the adoption of virtual reality limited to a few applications, typically related to entertainment. This reluctance appears to be due to the often-necessary change of operating paradigm and some scepticism towards the "VR advantage". The need therefore arises to evaluate the contribution that a VR system can make to user performance, for example in monitoring and decision-making. This will help system designers understand when immersive technologies can be proposed to replace or complement standard display systems such as a desktop monitor. In parallel to the evolution of VR headsets there has been that of 360° cameras, which can now instantly acquire photographs and videos in stereoscopic 3D (S3D) at very high resolutions. 360° images are innately suited to VR headsets, where the captured view can be observed and explored through the natural rotation of the head. Acquired views can even be experienced and navigated from the inside as they are captured. The combination of omnidirectional images and VR headsets has opened up a new way of creating immersive visual representations, which we call photo-based VR. This is a new methodology that combines traditional model-based rendering with high-quality omnidirectional texture mapping. Photo-based VR is particularly suitable for applications related to remote visits and realistic scene reconstruction, useful for monitoring and surveillance systems, control panels and operator training. The presented PhD study investigates the potential of photo-based VR representations. It starts by evaluating the role of immersion and user performance in today's graphical visual experience, then uses this as a reference to develop and evaluate new photo-based VR solutions. With the current literature on photo-based VR experience and the associated user performance being very limited, this study builds new knowledge from the proposed assessments. We conduct five user studies on a few representative applications, examining how visual representations are affected by system factors (camera- and display-related) and how they influence human factors (such as realism, presence, and emotions). Particular attention is paid to realistic depth perception, in support of which we develop targeted solutions for photo-based VR. They are intended to provide users with a correct perception of space dimensions and object size, which we call true-dimensional visualization. The presented work contributes to unexplored fields including photo-based VR and true-dimensional visualization, offering immersive-system designers a thorough comprehension of the benefits, potential, and type of applications in which these new methods can make the difference. This thesis manuscript and its findings have been partly presented in scientific publications. In particular, five conference papers in Springer and IEEE symposia, [1], [2], [3], [4], [5], and one journal article in an IEEE periodical [6], have been published.
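    Photo-based VR relies on mapping omnidirectional images onto the view sphere; as a minimal sketch of the standard equirectangular convention (assumed here, the thesis may use a different parameterisation), the function below converts a pixel coordinate in a 360° image to a unit viewing direction.

```python
import numpy as np

def equirect_to_direction(u, v, width, height):
    """Map equirectangular pixel (u, v) to a unit viewing direction.

    The 360-degree image spans longitude [-pi, pi] across its width and
    latitude [pi/2, -pi/2] down its height; a VR headset samples this
    texture along the gaze direction obtained from head rotation.
    """
    lon = (u / width - 0.5) * 2.0 * np.pi   # yaw, -pi..pi
    lat = (0.5 - v / height) * np.pi        # pitch, pi/2..-pi/2
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

# The centre pixel looks straight ahead along +z:
print(equirect_to_direction(2048, 1024, 4096, 2048))  # ~ [0, 0, 1]
```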

    Microoptical multi aperture imaging systems

    The miniaturisation of digital single-aperture imaging systems is currently reaching physical and technical limits. Miniaturisation reduces both the resolving power and the signal-to-noise ratio. A way out is offered by the principles of the smallest vision systems known in nature: compound eyes. The parallelised arrangement of a large number of optical channels makes it possible, despite the small overall size, to transfer a large amount of information from an extended field of view. The aim is to analyse the advantages of natural compound eyes and adapt them to overcome the current limits of digital-camera miniaturisation. Through the synergy of optics, opto-electronics and image processing, miniaturisation is pursued while achieving practically relevant parameters. To this end, a systematic classification of known and novel principles of multi-aperture imaging systems was carried out. A fundamental understanding of the advantages, disadvantages and scaling behaviour of the various approaches enabled the detailed investigation of the two most promising system classes. For the design of the multi-aperture optics, a combination of classical optical-design approaches and new semi-automated simulation and optimisation methods based on ray tracing was applied. The size of the optics, comparable to natural compound eyes, allowed the use of micro-optical wafer-scale fabrication processes. Prototypes were examined experimentally, and the simulated system parameters were confirmed with measurement methods adapted to the multi-aperture arrangements. The presented solutions demonstrate fundamentally new approaches in the field of high-resolution, miniaturised imaging optics that achieve the smallest system lengths for a given resolving power. They are thus able to overcome the scaling limits of single-aperture imaging optics.
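    The scaling limit mentioned above can be made concrete with a back-of-envelope diffraction calculation; the sketch below (illustrative numbers, not from the thesis) shows that isomorphically shrinking a diffraction-limited camera reduces the number of resolvable points linearly with its size.

```python
def resolvable_points_per_dim(aperture_mm, focal_mm, sensor_mm,
                              wavelength_nm=550.0):
    """Rough space-bandwidth estimate for a diffraction-limited lens.

    The Airy spot diameter on the sensor is ~2.44 * lambda * N with
    N = f / D; shrinking the whole camera keeps N constant but shrinks
    the sensor, so the number of resolvable points falls linearly.
    """
    f_number = focal_mm / aperture_mm
    spot_mm = 2.44 * wavelength_nm * 1e-6 * f_number
    return sensor_mm / spot_mm

# Halving every dimension of an f/2.8 module halves the point count:
print(resolvable_points_per_dim(1.0, 2.8, 4.0))   # full-size module, ~1064
print(resolvable_points_per_dim(0.5, 1.4, 2.0))   # isomorphic scale-down, ~532
```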

    Wide-Field Spatial Frequency Domain Imaging, Diffuse Reflectance and Endogenous Fluorescence Spectroscopy System for Quantitative Tissue Biomarkers in Radical Prostatectomy Specimens

    The only current surgical approach to the treatment of prostate cancer is radical prostatectomy, the complete resection of the organ. As with other forms of cancer resection, the success of the surgery is directly influenced by the completeness of cancer removal, as residual cancer increases the risk of recurrence. Because the prostate is close to critical genitourinary structures with a high impact on patient quality of life, achieving complete resection with a minimum of side effects on the surrounding tissues is a challenging task. Surgical guidance in the prostate requires vital improvement due to the lack of proper co-registration imaging methods and the inaccuracy of diagnostic techniques, which give very limited spatial information. Consequently, there is a need for new tools to characterize tissue during radical prostatectomy and guide resection. Endogenous fluorescence provides molecular information from the tissue and has therefore found use for tissue characterization in multiple applications. Although the technique has been applied to prostate tissue with point probes, it has never been explored in macroscopic geometries. Fluorescence also competes with other optical events such as absorption and elastic scattering, which makes it only indirectly related to tissue molecular content. Optical properties derived from diffuse reflectance measurements can be used to correct for these optical events and compute quantified fluorescence. Such correction models were limited to point geometries until the development of spatial frequency domain imaging (SFDI), which allows the extraction of absorption (μa) and reduced scattering (μs′) coefficients in wide-field configurations. However, SFDI-based quantification has only been applied to external fluorescence agents. This project presents the development of a wide-field multimodal system that combines endogenous fluorescence, diffuse reflectance, and SFDI to obtain quantitative molecular information from radical prostatectomy samples. First, the fluorescence, reflectance, and SFDI modalities were integrated into a single system with a field of view of 5.5 × 5.5 cm, a spatial resolution of 70 μm, and a depth of field of 1.5 cm, adapted to prostate samples. Calibration processes and acquisition parameters were determined through experiments on optical phantoms. SFDI reconstruction was accurate within 5.2% and 4.4% for absorption and scattering respectively, a performance similar to other systems in the literature. Quantification of fluorescence also resulted in a significant increase in the correlation of measured intensity to fluorophore concentration.
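    SFDI acquisitions conventionally use three phase-shifted sinusoidal projections per spatial frequency; the sketch below implements the standard three-phase demodulation (a textbook formula, not necessarily this system's exact processing chain) whose AC amplitude feeds the lookup that recovers μa and μs′.

```python
import numpy as np

def sfdi_demodulate(i1, i2, i3):
    """AC/DC demodulation from three phase-shifted SFDI frames.

    i1, i2, i3: images acquired at the same spatial frequency projected
    with 0, 2*pi/3 and 4*pi/3 phase offsets. The AC amplitude is the
    quantity used to recover absorption and reduced scattering.
    """
    i1, i2, i3 = (np.asarray(i, dtype=np.float64) for i in (i1, i2, i3))
    m_ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    m_dc = (i1 + i2 + i3) / 3.0
    return m_ac, m_dc

# Sanity check on a synthetic pixel with DC = 1.0 and AC = 0.3:
phases = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
frames = [1.0 + 0.3 * np.cos(0.7 + p) for p in phases]
print(sfdi_demodulate(*frames))  # ~ (0.3, 1.0)
```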

    On-the-fly dense 3D surface reconstruction for geometry-aware augmented reality.

    Augmented Reality (AR) is an emerging technology that makes seamless connections between virtual space and the real world by superimposing computer-generated information onto the real-world environment. AR can provide additional information in a more intuitive and natural way than any other information-delivery method that humans have ever invented. Camera tracking is the enabling technology for AR and has been well studied for the last few decades. Apart from the tracking problem, sensing and perception of the surrounding environment are also very important and challenging problems. Although there are existing hardware solutions such as Microsoft Kinect and HoloLens that can sense and build the environmental structure, they are either too bulky or too expensive for AR. In this thesis, challenging real-time dense 3D surface reconstruction technologies are studied and reformulated to take basic position-aware AR towards geometry-aware AR, with an outlook on context-aware AR. We initially propose to reconstruct the dense environmental surface using the sparse points from Simultaneous Localisation and Mapping (SLAM), but this approach is prone to fail in challenging Minimally Invasive Surgery (MIS) scenes featuring deformation and surgical smoke. We subsequently adopt stereo vision with SLAM for more accurate and robust results. With the success of deep learning in recent years, we present learning-based single-image reconstruction and achieve state-of-the-art results. Moreover, we propose context-aware AR, one step beyond purely geometry-aware AR, towards high-level conceptual interaction modelling in complex AR environments for an enhanced user experience. Finally, a learning-based smoke removal method is proposed to ensure accurate and robust reconstruction under extreme conditions such as the presence of surgical smoke.
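    For the stereo-vision route mentioned above, a conventional baseline is semi-global matching; the following hedged sketch computes a dense disparity map with OpenCV's StereoSGBM (file names and parameter values are illustrative assumptions, not the thesis' configuration), which converts to depth as in the triangulation sketch earlier.

```python
import cv2
import numpy as np

# Rectified stereo pair; these file names are hypothetical placeholders.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,      # search range; must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,            # penalty for small disparity changes
    P2=32 * 5 * 5,           # penalty for large disparity changes
    uniquenessRatio=10,
)
# compute() returns 16-bit fixed-point disparities scaled by 16:
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0
# depth = focal_px * baseline / disparity, as in the earlier stereo sketch.
```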