16 research outputs found

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robot-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
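
    As a hedged illustration of one technique family covered by such reviews, the sketch below reconstructs a tissue surface from a rectified stereo laparoscope pair with OpenCV's semi-global matching; the file names, intrinsics, and baseline are placeholder assumptions, not values from the paper.

        # Minimal passive-stereo reconstruction sketch (one of several reviewed
        # technique families). Assumes a calibrated, rectified stereo laparoscope;
        # file names and camera parameters below are placeholders.
        import cv2
        import numpy as np

        left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

        # Semi-global block matching estimates a disparity map from the pair.
        matcher = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=128,   # must be divisible by 16
            blockSize=5,
        )
        disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

        # Q is the 4x4 disparity-to-depth matrix from stereo rectification
        # (assumed intrinsics; baseline in metres).
        f, baseline, cx, cy = 800.0, 4.5e-3, 320.0, 240.0
        Q = np.float32([[1, 0, 0, -cx],
                        [0, 1, 0, -cy],
                        [0, 0, 0,  f],
                        [0, 0, -1.0 / baseline, 0]])
        points_3d = cv2.reprojectImageTo3D(disparity, Q)   # HxWx3 surface geometry
        surface = points_3d[disparity > 0]                 # keep valid disparities only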

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the focus of most devices remains on improving end-effector dexterity and precision, as well as on improving access to minimally invasive surgery. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
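
    One of the mitigations discussed above is tool-to-organ collision detection. A minimal sketch of the underlying proximity check, assuming a tracked tool tip and a segmented organ surface represented as a point cloud (all coordinates and the safety threshold below are invented), could look like this:

        # Nearest-neighbour clearance check between a tracked tool tip and an
        # organ surface point cloud. The random cloud stands in for a segmented
        # organ mesh; units are metres and the threshold is illustrative.
        import numpy as np
        from scipy.spatial import cKDTree

        organ_surface = np.random.rand(5000, 3) * 0.1
        tree = cKDTree(organ_surface)

        def clearance_alarm(tool_tip_xyz, threshold_m=0.005):
            """Return (distance to organ surface, True if within the safety margin)."""
            dist, _ = tree.query(tool_tip_xyz)
            return dist, dist < threshold_m

        d, collision = clearance_alarm(np.array([0.05, 0.05, 0.05]))
        print(f"clearance: {d * 1000:.1f} mm, alarm: {collision}")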

    On uncertainty propagation in image-guided renal navigation: Exploring uncertainty reduction techniques through simulation and in vitro phantom evaluation

    Image-guided interventions (IGIs) entail the use of imaging to augment or replace direct vision during therapeutic interventions, with the overall goal of providing effective treatment in a less invasive manner, as an alternative to traditional open surgery, while reducing patient trauma and shortening post-procedure recovery time. IGIs rely on pre-operative images, surgical tracking and localization systems, and intra-operative images to provide correct views of the surgical scene. Pre-operative images are used to generate patient-specific anatomical models that are then registered to the patient using the surgical tracking system, and often complemented with real-time, intra-operative images. IGI systems are subject to uncertainty from several sources, including surgical instrument tracking / localization uncertainty, model-to-patient registration uncertainty, user-induced navigation uncertainty, and the uncertainty associated with the calibration of various surgical instruments and intra-operative imaging devices (e.g., a laparoscopic camera) instrumented with surgical tracking sensors. All these uncertainties impact the overall targeting accuracy, which represents the error associated with navigating a surgical instrument to a specific target to be treated under image guidance provided by the IGI system. Understanding the overall uncertainty of an IGI system is therefore paramount to the outcome of the intervention, as procedure success entails achieving the accuracy tolerances specific to individual procedures. This work focused on studying the navigation uncertainty, along with techniques to reduce it, for an IGI platform dedicated to image-guided renal interventions. We constructed life-size replica patient-specific kidney models from pre-operative images using 3D printing and tissue-emulating materials, and conducted experiments to characterize the uncertainty of both optical and electromagnetic surgical tracking systems, the uncertainty associated with the virtual model-to-physical phantom registration, and the uncertainty associated with live augmented reality (AR) views of the surgical scene achieved by enhancing the pre-procedural model and tracked surgical instrument views with live video views acquired using a camera tracked in real time. To better understand the effects of the tracked instrument calibration, registration fiducial configuration, and tracked camera calibration on the overall navigation uncertainty, we conducted Monte Carlo simulations that enabled us to identify optimal configurations, which were subsequently validated experimentally using patient-specific phantoms in the laboratory. To mitigate the inherent accuracy limitations associated with the pre-procedural model-to-patient registration and their effect on the overall navigation, we also demonstrated the use of tracked video imaging to update the registration, enabling us to restore targeting accuracy to within its acceptable range. Lastly, we conducted several validation experiments using patient-specific kidney-emulating phantoms, with post-procedure CT imaging as reference ground truth, to assess the accuracy of AR-guided navigation in the context of in vitro renal interventions. This work helped answer key questions about uncertainty propagation in image-guided renal interventions and led to the development of key techniques and tools to help reduce and optimize the overall navigation / targeting uncertainty.
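
    A minimal sketch of the kind of Monte Carlo simulation described above: fiducial localizations are perturbed with Gaussian noise, the rigid model-to-patient registration is re-estimated on each trial, and the distribution of the targeting error at a chosen target is recorded. The fiducial layout, noise level, and target position are assumed values, not those used in the study.

        # Monte Carlo propagation of fiducial localization error (FLE) into
        # target registration error (TRE) for rigid point-based registration.
        import numpy as np

        rng = np.random.default_rng(0)
        fiducials = np.array([[0, 0, 0], [60, 0, 0], [0, 60, 0], [0, 0, 60]], float)  # mm
        target = np.array([30.0, 30.0, 30.0])  # mm, e.g. a renal lesion (assumed)
        fle_sigma = 0.5                        # assumed per-axis FLE (mm)

        def rigid_fit(src, dst):
            """Least-squares rigid transform (Kabsch) mapping src onto dst."""
            sc, dc = src.mean(0), dst.mean(0)
            U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, dc - R @ sc

        tre = []
        for _ in range(10_000):
            noisy = fiducials + rng.normal(0, fle_sigma, fiducials.shape)
            R, t = rigid_fit(fiducials, noisy)   # registration from noisy fiducials
            tre.append(np.linalg.norm((R @ target + t) - target))
        print(f"TRE: mean {np.mean(tre):.3f} mm, 95th pct {np.percentile(tre, 95):.3f} mm")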

    Evaluation of HMDs by QFD for Augmented Reality Applications in the Maxillofacial Surgery Domain

    Today, surgical operations are less invasive than they were a few decades ago and, in medicine, there is a growing trend towards precision surgery. Among many technological advancements, augmented reality (AR) can be a powerful tool for improving surgical practice through its ability to superimpose the 3D geometrical information of the pre-planned operation over the surgical field, as well as medical and instrumental information gathered from operating room equipment. AR is fundamental to reaching new standards in maxillofacial surgery. Surgeons will no longer have to shift their focus from the patient to look at monitors. Osteotomies will not require physical tools fixed to the patient's bones as guides for making resections. Handling grafts and 3D models directly in the operating room will permit fine-tuning of the procedure before harvesting the implant. This article aims to study the application of AR head-mounted displays (HMDs) in three operative scenarios (oncological and reconstructive surgery, orthognathic surgery, and maxillofacial trauma surgery) by means of quantitative logic, using the Quality Function Deployment (QFD) tool to determine their requirements. The article provides an evaluation of the degree of readiness of HMDs currently on the market and highlights the features they still lack.
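
    For readers unfamiliar with QFD, the toy sketch below shows the mechanics of such an evaluation: weighted user requirements are mapped to technical characteristics of an HMD through a relationship matrix (classically scored 0/1/3/9), yielding an importance score per characteristic. All requirement names, weights, and matrix entries are invented for illustration and are not the study's data.

        # Toy QFD-style scoring: requirement weights times a requirement-to-
        # characteristic relationship matrix gives technical importance scores.
        import numpy as np

        requirements = ["overlay accuracy", "field of view", "comfort/weight", "sterility workflow"]
        req_weights = np.array([5, 4, 3, 4])          # assumed 1-5 importance ratings

        characteristics = ["display resolution", "tracking latency", "headset mass", "hands-free UI"]
        # relationship[i, j]: strength linking requirement i to characteristic j
        relationship = np.array([
            [9, 9, 0, 1],
            [3, 1, 0, 0],
            [0, 0, 9, 3],
            [0, 1, 3, 9],
        ])

        scores = req_weights @ relationship           # importance per characteristic
        for name, s in sorted(zip(characteristics, scores), key=lambda p: -p[1]):
            print(f"{name:20s} {s}")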

    Validation of an augmented reality-based wearable device for maxillary repositioning

    Aim: We present a newly designed, localiser-free, head-mounted system featuring augmented reality (AR) as an aid to maxillofacial bone surgery, and assess the potential utility of the device by conducting a feasibility study and validation. We also implement a novel and ergonomic strategy designed to present AR information to the operating surgeon (hPnP). Methods: The head-mounted wearable system was developed as a stand-alone, video-based, see-through device in which the visual features were adapted to facilitate maxillofacial bone surgery. The system is designed to exhibit the virtual planning overlaying the details of the real patient. We implemented a method allowing performance of waferless, AR-assisted maxillary repositioning. In vitro testing was conducted on a physical replica of a human skull. Surgical accuracy was measured, and the outcomes were compared with those expected to be achievable in a three-dimensional environment. Data were derived using three levels of surgical planning, of increasing complexity, and for nine different operators with varying levels of surgical skill. Results: The mean linear error was 1.70±0.51 mm. The axial errors were 0.89±0.54 mm on the sagittal axis, 0.60±0.20 mm on the frontal axis, and 1.06±0.40 mm on the craniocaudal axis. The mean angular errors were 3.13°±1.89° (pitch), 1.99°±0.95° (roll), and 3.25°±2.26° (yaw). No significant difference in error was noticed among operators, despite variations in surgical experience. Feedback from surgeons was positive; all tests were completed within 15 min and the tool was considered to be both comfortable and usable in practice. Conclusion: Our device appears to be accurate when used to assist in waferless maxillary repositioning. Our results suggest that the method can potentially be extended for use with many surgical procedures on the facial skeleton. Further, it would be appropriate to proceed to in vivo testing to assess surgical accuracy under real clinical conditions.
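
    As a hedged illustration of how such linear and angular errors can be computed, the sketch below compares a planned and an achieved maxillary pose expressed as 4x4 matrices; the example poses are placeholders, and the mapping of rotation axes to pitch/roll/yaw is a convention assumption, not taken from the study.

        # Linear and angular error between a planned and an achieved rigid pose.
        import numpy as np
        from scipy.spatial.transform import Rotation

        def pose_errors(T_planned, T_achieved):
            """Linear error (norm + per-axis, mm) and residual rotation angles (deg)."""
            d = T_achieved[:3, 3] - T_planned[:3, 3]          # translation residual
            R_err = T_achieved[:3, :3] @ T_planned[:3, :3].T  # residual rotation
            # Naming x/y/z as pitch/roll/yaw is a convention assumption.
            pitch, roll, yaw = Rotation.from_matrix(R_err).as_euler("xyz", degrees=True)
            return np.linalg.norm(d), d, (pitch, roll, yaw)

        T_plan = np.eye(4)                                    # placeholder planned pose
        T_real = np.eye(4)                                    # placeholder achieved pose
        T_real[:3, 3] = [0.9, 0.6, 1.1]                       # assumed mm offsets per axis
        T_real[:3, :3] = Rotation.from_euler("xyz", [3.1, 2.0, 3.2], degrees=True).as_matrix()

        lin, axes, (pitch, roll, yaw) = pose_errors(T_plan, T_real)
        print(f"linear {lin:.2f} mm, per-axis {axes}, "
              f"pitch {pitch:.2f} deg, roll {roll:.2f} deg, yaw {yaw:.2f} deg")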

    Patient-specific simulation for autonomous surgery

    An Autonomous Robotic Surgical System (ARSS) has to interact with a complex anatomical environment that deforms and whose properties are often uncertain. Within this context, an ARSS can benefit from the availability of a patient-specific simulation of the anatomy. For example, simulation can provide a safe and controlled environment for the design, testing, and validation of autonomous capabilities. Moreover, it can be used to generate large amounts of patient-specific data that can be exploited to learn models and/or tasks. The aim of this Thesis is to investigate the different ways in which simulation can support an ARSS and to propose solutions that favor its adoption in robotic surgery. We first address all the phases needed to create such a simulation, from model choice in the pre-operative phase, based on the available knowledge, to its intra-operative update to compensate for inaccurate parametrization. We propose to rely on deep neural networks trained with synthetic data both to generate a patient-specific model and to design a strategy to update the model parametrization directly from intra-operative sensor data. Afterwards, we test how simulation can assist the ARSS, both for task learning and during task execution. We show that simulation can be used to efficiently train approaches that require multiple interactions with the environment, compensating for the risk of acquiring data on real surgical robotic systems. Finally, we propose a modular framework for autonomous surgery that includes deliberative functions to handle real anatomical environments with uncertain parameters. The integration of a personalized simulation proves fundamental both for optimal task planning and for enhancing and monitoring real execution. The contributions presented in this Thesis have the potential to introduce significant step changes in the development and actual performance of autonomous robotic surgical systems, bringing them closer to applicability in real clinical conditions.
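
    A conceptual sketch of the "learn from synthetic simulation data" idea described above, under strong simplifying assumptions: a toy one-parameter "simulator" stands in for a patient-specific model, a small network is trained on synthetic observations to regress the uncertain parameter, and the trained network is then applied to a new observation as an intra-operative update. Everything below is invented for illustration and is not the thesis's actual pipeline.

        # Regressing an uncertain model parameter from synthetic simulation data.
        import torch
        import torch.nn as nn

        def simulate(stiffness, force=1.0, n_points=16):
            """Toy stand-in for a patient-specific simulator: displacement profile."""
            x = torch.linspace(0, 1, n_points)
            return (force / stiffness).unsqueeze(-1) * torch.sin(torch.pi * x)

        # Generate synthetic training data by sweeping the unknown parameter.
        k_true = torch.rand(2048) * 9 + 1                       # stiffness in [1, 10]
        obs = simulate(k_true) + 0.01 * torch.randn(2048, 16)   # noisy "sensor" data

        net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                            nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(500):
            opt.zero_grad()
            loss = nn.functional.mse_loss(net(obs).squeeze(-1), k_true)
            loss.backward()
            opt.step()

        # "Intra-operative" update: estimate the parameter from a new observation.
        k_est = net(simulate(torch.tensor([4.2]))).item()
        print(f"estimated stiffness: {k_est:.2f} (true 4.2)")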

    Computer-assisted laparoscopy with fluorescent markers

    Get PDF
    In medicine, and in urology in particular, minimally invasive laparoscopic procedures are performed increasingly often because they are gentler on patients. However, the use of a laparoscope makes orientation and navigation of the surgical instruments more difficult, since no direct view of the surgical scene is possible and the field of view is restricted; the sense of touch is lost as well. The location of relevant structures must be transferred from the preoperative data onto the laparoscopic image through the surgeon's experience and powers of visualization. Augmented reality (AR) methods can overlay preoperative data onto the laparoscopic image, easing intra-operative orientation. This requires finding a geometric transformation between the preoperative data and the laparoscopic image, a process referred to as registration. In laparoscopy, however, AR systems are not yet used in routine clinical practice, because all existing approaches to intra-operative registration are very laborious to integrate into the workflow, cannot display their results in real time, or register unreliably during an operation. The goal of this doctoral thesis was to develop an approach for robust intra-operative registration in laparoscopy. To this end, a registration method based on near-infrared (NIR) fluorescence was developed and applied for the first time. This new approach is considerably more robust to occlusion by smoke, blood, and tissue, is real-time capable, and additionally offers the prospect of very simple integration into the medical workflow. Possible implementations of this new concept were investigated for both partial nephrectomy and prostatectomy. For partial nephrectomy, fluorescent markers made of indocyanine green (ICG) and a computed tomography (CT) contrast agent were developed; these can be attached to an organ with tissue adhesive, and their positions relative to the organs can be determined from CT scans. A 2D/3D registration then allows the CT data to be overlaid on the laparoscopic image. Several ex vivo experiments demonstrated the feasibility and accuracy of the registration method with these markers. Owing to their NIR fluorescence signal, the markers are clearly superior to conventional needle markers for registration when occluded by smoke, blood, or tissue: with needle markers, only 83% of laparoscopic images could be registered successfully under occlusion by smoke, at most 5% under blood, and none at all under tissue, where the needle markers could not be detected. With fluorescent markers, depending on the degree of occlusion, this proportion rose to at least 88% under occlusion by blood and 93% under occlusion by tissue, and it was always 100% when smoke was in the laparoscope's field of view. Furthermore, the arrangement of the markers was studied in computer simulations to analyze the influence of the marker positions relative to one another and to the laparoscope. It emerged that a minimum distance between the laparoscope and the markers should be maintained for successful registration.
    In animal experiments, fluorescent markers were used for in vivo registration for the first time, and their robustness was demonstrated. The registration error averaged only 3 to 12 pixels, and the overlaid CT image matched the corresponding laparoscopic image very well. The markers proved highly suitable for registration and robust to camera motion and to occlusion by smoke, blood, or tissue. For prostatectomy, an approach was developed that is intended to use a fluorescent variant of the tracer 68Ga-PSMA-11, which binds to the PSMA receptor and therefore accumulates at strongly elevated concentrations in prostate cancer cells. In this way, lymph nodes infiltrated by prostate cancer cells can also be visualized by fluorescence and used for registration. The challenges and requirements of this concept for clinical translation were discussed in detail: it should integrate into the clinical workflow without significant additional effort and can also reduce the radiation exposure of medical staff compared with other methods. Both applications presented in this thesis have great potential for clinical use. However, hurdles remain before clinical transfer, such as regulatory approval of the markers, adaptation of the registration software to the distribution of affected lymph nodes in the patient, and the handling of deformations. The ex vivo and in vivo applications showed that the presented concept based on fluorescent markers is suitable for accurate, real-time intra-operative registration and that, owing to the increased robustness provided by NIR fluorescence, it overcomes the drawbacks of conventional registration methods. The next important step is the approval of a suitable marker so that the system can be used on patients, easing intra-operative orientation and the identification of relevant structures in the laparoscopic image. This offers the chance to make laparoscopic procedures simpler for the surgeon while improving patients' chances of recovery.
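
    A hedged sketch of the 2D/3D registration step described above: bright NIR marker blobs are detected in the laparoscopic image, matched to their CT-derived 3D positions, and cv2.solvePnP recovers the camera pose so that CT data can be overlaid. Marker coordinates, camera intrinsics, the file name, and the detection threshold are placeholders; the real system's detection and matching are more involved.

        # Marker-based 2D/3D registration: detect NIR blobs, then solve PnP
        # against their CT-derived 3D positions to recover the camera pose.
        import cv2
        import numpy as np

        markers_ct = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0], [15, 15, 20]],
                              np.float32)                     # mm, from CT (assumed)

        nir = cv2.imread("nir_channel.png", cv2.IMREAD_GRAYSCALE)
        _, mask = cv2.threshold(nir, 200, 255, cv2.THRESH_BINARY)  # fluorescence is bright
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 0:
                centroids.append([m["m10"] / m["m00"], m["m01"] / m["m00"]])
        markers_2d = np.array(centroids, np.float32)

        # A real system must match detections to markers; here we assume the
        # detections are already ordered to correspond to markers_ct.
        K = np.array([[900, 0, 320], [0, 900, 240], [0, 0, 1]], np.float32)  # assumed intrinsics
        if len(markers_2d) == len(markers_ct):
            ok, rvec, tvec = cv2.solvePnP(markers_ct, markers_2d, K, None)
            if ok:
                # Reproject CT-space points with the recovered pose for the AR overlay.
                overlay_2d, _ = cv2.projectPoints(markers_ct, rvec, tvec, K, None)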