27 research outputs found

    Augmented Reality in Surgical Navigation: A Review of Evaluation and Validation Metrics

    Pre-operative imaging has traditionally been used to guide surgical navigation systems. Over the last decade, considerable effort has gone into integrating augmented reality (AR) into the operating room to assist surgeons intra-operatively. An AR-based navigation system overlays a clear three-dimensional view of the areas of interest onto the patient to aid surgical navigation and operations, which is a promising approach. The goal of this study is to review the application of AR technology across surgical fields and how it performs in each. The paper assesses the available AR-assisted navigation systems used in surgery and discusses the evaluation and validation metrics these systems require. It comprehensively reviews the literature since 2008 on applying AR technology to training, planning, and surgical navigation, and describes the limitations that must be addressed before this technology can be fully relied upon for surgery. Additional research is therefore desirable in this emerging field, particularly to evaluate and validate the use of AR technology for surgical navigation. © 2023 by the authors. This publication was made possible by NPRP-11S-1219-170106 from the Qatar National Research Fund (a member of Qatar Foundation).

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the focus of most devices remains on improving end-effector dexterity and precision, as well as access to minimally invasive surgeries. This paper provides a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.

    Development of a dental handpiece angle correction device

    Background: Preparation of a uniform wall angle is essential for achieving an ideal convergence angle in fixed prosthodontics. We developed a de novo detachable angle-correction apparatus for dental handpiece drills that can help achieve ideal tooth preparation. Methods: We used a gyro sensor, which measures angular velocities and estimates the slope of an object by integrating them; an acceleration sensor, which estimates the slope of an object by measuring acceleration relative to gravity; and a Kalman filter algorithm to fuse the two. The angulation of the handpiece body was converted to that of its drill part by a specific matrix formulation set on two reference points (2° and 6°). A flexible printed circuit board was used to minimize the size of the device. For the convergence angle investigation, 16 volunteers were randomly divided into two groups and performed tooth preparation on a mandibular first molar resin tooth. All abutments were scanned with a 3D scanner (D700®, 3Shape Co.), and the convergence angle and tooth-axis deviation were analyzed with a CAD program (SolidWorks 2013®, Dassault Systèmes Co., USA); statistical analysis used the Wilcoxon signed-rank test (α = 0.05) in SPSS (Version 16.0, SPSS Inc.). Results: The device maintained a stable zero point (less than 1° deviation) at different angles (0°, 30°, 60°, 80°) for the first 30 min. In single-tooth preparation without the apparatus, the average bucco-lingual (BL) convergence angle was 20.26° (SD 7.85) and the average mesio-distal (MD) convergence angle was 17.88° (SD 7.64). With the apparatus, the average BL convergence angle improved to 13.21° (SD 4.77) and the average MD convergence angle to 10.79° (SD 4.48). The angle-correction device had a statistically significant effect on reducing the convergence angle in both directions, regardless of their order.
    Conclusions: The angle-correction device developed in this study can guide practitioners with accuracy comparable to that of commercial navigation surgery, while being much smaller than any commercial navigation surgery system. It is expected to be widely used in various fields of orofacial surgery. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2017R1D1A1B03036054).
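The gyro/accelerometer fusion described in the abstract can be sketched as a standard one-dimensional Kalman filter that integrates the bias-corrected gyro rate and corrects it with the accelerometer-derived tilt. This is a generic textbook formulation, not the authors' implementation; the noise constants `q_angle`, `q_bias`, and `r_meas` are illustrative placeholders.

```python
class TiltKalman:
    """1-D Kalman filter fusing gyro rate (deg/s) with accelerometer tilt (deg).

    A minimal sketch of the sensor-fusion idea in the abstract; the noise
    constants are illustrative, not the values used by the authors.
    """

    def __init__(self, q_angle=0.001, q_bias=0.003, r_meas=0.03):
        self.angle = 0.0    # fused tilt estimate (deg)
        self.bias = 0.0     # estimated gyro bias (deg/s)
        self.P = [[0.0, 0.0], [0.0, 0.0]]  # error covariance
        self.q_angle, self.q_bias, self.r = q_angle, q_bias, r_meas

    def update(self, gyro_rate, accel_angle, dt):
        # Predict: integrate the bias-corrected gyro rate.
        self.angle += (gyro_rate - self.bias) * dt
        P = self.P
        P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + self.q_angle)
        P[0][1] -= dt * P[1][1]
        P[1][0] -= dt * P[1][1]
        P[1][1] += self.q_bias * dt
        # Correct with the accelerometer-derived angle measurement.
        S = P[0][0] + self.r
        K0, K1 = P[0][0] / S, P[1][0] / S
        y = accel_angle - self.angle
        self.angle += K0 * y
        self.bias += K1 * y
        p00, p01 = P[0][0], P[0][1]
        P[0][0] -= K0 * p00
        P[0][1] -= K0 * p01
        P[1][0] -= K1 * p00
        P[1][1] -= K1 * p01
        return self.angle
```

Fed a stationary accelerometer reading, the estimate converges to the accelerometer tilt while absorbing any gyro bias.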

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most detrimental factors relate to suboptimal communication among staff, poor information flow, staff workload and fatigue, and ergonomics and sterility in the operating room. As technological developments transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for designing systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework for a "smart" operating suite that enhances the operator's ergonomics by allowing perceptually-enabled, touchless, and natural interaction with the environment. The main feature of the proposed framework is its ability to acquire and use the wealth of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope, and a gaze-guided assistive robotic system are proposed. First, the gaze-guided robotic scrub nurse is presented: surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in delivering surgical instruments following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced: experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze.
    Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
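Gaze selection of an instrument, as in the robotic scrub nurse above, is commonly implemented with a dwell-time criterion: a target counts as selected once the gaze stays near it for long enough. The sketch below illustrates that generic mechanism only; the function, its parameters, and the thresholds are hypothetical and not taken from the thesis.

```python
import math

def dwell_select(gaze, targets, radius=40.0, dwell_samples=30):
    """Return the name of the target fixated for `dwell_samples` consecutive
    gaze points, or None.

    `gaze` is a sequence of (x, y) gaze points in pixels; `targets` maps a
    name to an on-screen (x, y) position. A generic dwell-time selection
    sketch, not the thesis's actual implementation.
    """
    counts = {name: 0 for name in targets}
    for gx, gy in gaze:
        for name, (tx, ty) in targets.items():
            if math.hypot(gx - tx, gy - ty) <= radius:
                counts[name] += 1
                if counts[name] >= dwell_samples:
                    return name
            else:
                # Any sample off-target resets that target's dwell counter.
                counts[name] = 0
    return None
```

At a 60 Hz eye tracker, `dwell_samples=30` corresponds to a half-second fixation, a typical trade-off between responsiveness and accidental "Midas touch" selections.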

    Robot Assisted Laser Osteotomy

    Within the scope of this thesis, the world's first robot system was developed that facilitates laser osteotomy in arbitrary geometries with an overall accuracy below 0.5 mm. Methods of computer- and robot-assisted surgery were reconsidered and composed into a workflow, and adequate calibration and registration methods are proposed. Furthermore, a methodology is developed for transferring geometrically defined cutting trajectories into pulse sequences and optimized execution plans.
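The core of the trajectory-to-pulse-sequence step is resampling a geometric cutting path at a fixed pulse pitch. The helper below is a hypothetical illustration of that idea (the thesis's actual planner, which also optimizes execution, is not reproduced here).

```python
import math

def pulse_positions(path, pitch):
    """Resample a 2-D cutting trajectory into equally spaced laser pulse
    positions.

    `path` is a list of (x, y) way-points in mm; `pitch` is the desired
    pulse spacing in mm. A hypothetical sketch of converting a geometric
    trajectory into a pulse sequence.
    """
    pulses = [path[0]]
    carried = 0.0  # arc length covered since the last emitted pulse
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        d = pitch - carried
        while d <= seg:
            t = d / seg  # interpolate along the current segment
            pulses.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += pitch
        carried = seg - (d - pitch)
    return pulses
```

For a straight 10 mm cut with a 2 mm pitch this yields six pulse positions, from the start point to the end point inclusive.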

    Advancing fluorescent contrast agent recovery methods for surgical guidance applications

    Fluorescence-guided surgery (FGS) utilizes fluorescent contrast agents and specialized optical instruments to help surgeons intraoperatively identify tissue-specific characteristics such as perfusion, malignancy, and molecular function. FGS thus represents a powerful surgical navigation tool for solving clinical challenges not easily addressed by other conventional imaging methods. With growing translational efforts, major hurdles in the FGS field include insufficient tools for understanding contrast agent uptake behaviors, the inability to image tissue beyond a couple of millimeters, and the performance limitations of currently approved contrast agents in accurately and rapidly labeling disease. The developments presented in this thesis aim to address these shortcomings. Current preclinical fluorescence imaging tools often sacrifice either 3D scale or spatial resolution. To address this gap in available high-resolution, whole-body preclinical imaging tools, the crux of this work lies in the development of a hyperspectral cryo-imaging system and image-processing techniques that accurately recapitulate high-resolution 3D biodistributions in whole-animal experiments. Specifically, the goal is to correct each cryo-imaging dataset so that it becomes a useful reporter of whole-body biodistributions in relevant disease models. To investigate the potential benefits of seeing deeper during FGS, we investigated short-wave infrared (SWIR) imaging for recovering fluorescence beyond the conventional top few millimeters. Through phantom, preclinical, and clinical SWIR imaging, we were able to 1) validate the capability of SWIR imaging with conventional NIR-I fluorophores, 2) demonstrate the translational benefits of SWIR-ICG angiography in a large animal model, and 3) detect micro-dose levels of an EGFR-targeted NIR-I probe during a Phase 0 clinical trial.
    Lastly, we evaluated contrast agent performance for FGS glioma resection and breast cancer margin assessment. To evaluate the glioma-labeling performance of untargeted contrast agents, 3D agent biodistributions were compared voxel-by-voxel to gold-standard Gd-MRI and pathology slides. Finally, building on expertise in dual-probe ratiometric imaging at Dartmouth, a 10-patient clinical pilot study was carried out to assess the technique's efficacy for rapid margin assessment. In summary, this thesis advances FGS by introducing novel fluorescence imaging devices, techniques, and agents that overcome challenges in understanding whole-body agent biodistributions, recovering agent distributions at greater depths, and verifying agents' performance for specific FGS applications.
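Dual-probe ratiometric imaging, mentioned above, normalizes a targeted agent's signal by a co-administered untargeted agent to cancel non-specific delivery effects. The pixel-wise map below is a generic sketch of that normalization; the function name and the simple `(targeted - untargeted) / untargeted` form are illustrative, and the exact model used in the pilot study is not reproduced here.

```python
def paired_agent_ratio(targeted, untargeted, eps=1e-9):
    """Pixel-wise dual-probe ratiometric map.

    `targeted` and `untargeted` are same-length lists of co-registered pixel
    intensities; `eps` guards against division by zero. A generic sketch of
    the ratiometric idea, not the study's actual processing pipeline.
    """
    return [(t - u) / (u + eps) for t, u in zip(targeted, untargeted)]
```

Pixels where the targeted signal merely tracks the untargeted one map to zero, so elevated values highlight specific binding rather than perfusion differences.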

    Development of a new implantology navigation system based on inertial measurement units

    Computer-guided dental implant placement is considered safer, more accurate, and less invasive than freehand implant surgery. Currently, two types of surgical guiding systems are available: static templates and dynamic navigation. Both make the intervention more complex and costly, which may be why their use remains limited. A state-of-the-art review of the different implant placement guiding systems, their use, precision, and sources of error was conducted. The inertial measurement unit (IMU) and its early applications in general surgery are described; the IMU proved to be a versatile, simple, and economical alternative to existing surgical guidance technologies. A prototype surgical handpiece was assembled with IMU sensors integrated into a computerized interface to guide implant placement, and implants were placed in models using this prototype and standard CAD-CAM splints. Statistically similar average results were obtained with the two systems. The mean deviations were 1.48 ± 0.2 mm (SD 0.58; 95% CI 1.27-1.69) versus 1.42 ± 0.2 mm (SD 0.61; 95% CI 1.2-1.64) coronally, 2.00 ± 0.33 mm (SD 0.93; 95% CI 1.67-2.33) versus 2.07 ± 0.35 mm (SD 0.97; 95% CI 1.72-2.42) apically, and 7.13° ± 1.47° (SD 4.1; 95% CI 5.66-8.6) versus 5.63° ± 1.41° (SD 3.94; 95% CI 4.22-7.04) angularly, for the IMU and stereolithographic splints respectively. These results are consistent with the precision reported in the literature for guided surgery, both static and dynamic. Operator perception and comfort of use were also assessed: handling the system requires prior training, which would be eased by improving the graphic interface. This opens a path of investigation to adapt this new surgical navigation system for routine clinical use.
    To this end, the system's handling conditions must first be improved in vitro, and further studies on patients are needed.
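The deviation statistics above pair each mean with its standard deviation and 95% confidence interval. Under the usual normal approximation, the CI half-width is 1.96·SD/√n; the helper below shows that computation on generic data (the study's own measurements are not reproduced here).

```python
import math
import statistics

def mean_ci95(values):
    """Return (mean, (lo, hi)): the sample mean and its normal-approximation
    95% confidence interval, half-width 1.96 * SD / sqrt(n).

    A generic sketch of how the reported mean/SD/CI triples relate; the
    study's own sample data are not reproduced here.
    """
    m = statistics.mean(values)
    sd = statistics.stdev(values)          # sample standard deviation
    half = 1.96 * sd / math.sqrt(len(values))
    return m, (m - half, m + half)
```

For small samples a Student-t multiplier would be slightly wider than 1.96; the normal factor is used here purely for illustration.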

    Autostereoscopy on mobile devices to facilitate learning

    Autostereoscopy is a three-dimensional visualization technique that lets the user perceive the depth of objects without wearing glasses or any other device. Most effort in the field has focused on screens of 15 inches or larger, and very few mobile devices incorporate autostereoscopy. In this thesis, an autostereoscopic mobile device, the LG Optimus 3D, was used. Three-dimensional models are important when the learning content is very difficult to acquire from 2D images or other traditional learning methods. Our hypothesis is that combining three-dimensional models with three-dimensional visualization facilitates learning. This thesis focuses on dental learning: specifically, an application for learning dental morphology has been developed and validated with students of dental specialties. Carrasco Torres, C. (2014). Autoestereoscopía en dispositivos móviles para facilitar el aprendizaje. http://hdl.handle.net/10251/49125