1,109 research outputs found

    Autofluorescence lifetime augmented reality as a means for real-time robotic surgery guidance in human patients.

    Due to loss of tactile feedback, the assessment of tumor margins during robotic surgery is based only on visual inspection, which is neither sufficiently sensitive nor specific. Here we demonstrate time-resolved fluorescence spectroscopy (TRFS) as a novel technique to complement the visual inspection of oral cancers during transoral robotic surgery (TORS), in real time and without the need for exogenous contrast agents. TRFS enables identification of cancerous tissue by its distinct autofluorescence signature, which is associated with alterations in tissue structure and biochemical profile. A prototype TRFS instrument was integrated synergistically with the da Vinci Surgical robot, and the combined system was validated in swine and in human patients. Label-free, real-time assessment and visualization of tissue biochemical features during robotic surgery, as demonstrated here, has the potential to improve intraoperative decision making not only during TORS but also during other robotic procedures, without modification of conventional clinical protocols.
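    As an illustrative sketch of the lifetime analysis underlying TRFS (not the authors' instrument pipeline), the snippet below fits a mono-exponential decay model to a synthetic time-resolved signal to recover an autofluorescence lifetime; real TRFS processing additionally involves instrument-response deconvolution and multi-spectral channels, which are omitted here.

```python
# Minimal sketch: estimate a fluorescence lifetime by fitting
# I(t) = A * exp(-t / tau) + B to a time-resolved decay.
# The decay below is synthetic; it is not data from the cited study.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, amplitude, tau, offset):
    """Mono-exponential fluorescence decay model."""
    return amplitude * np.exp(-t / tau) + offset

t = np.linspace(0.0, 25.0, 256)                      # time axis in nanoseconds
clean = 1000.0 * np.exp(-t / 4.0) + 20.0             # true lifetime: 4 ns
noisy = np.random.poisson(clean).astype(float)       # shot-noise-like counts

# Fit the model; the recovered tau is the lifetime estimate for this channel.
popt, _ = curve_fit(mono_exp, t, noisy, p0=(noisy.max(), 2.0, 0.0))
print(f"Estimated lifetime: {popt[1]:.2f} ns")
```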

    Optical coherence tomography-based consensus definition for lamellar macular hole.

    Background: A consensus on an optical coherence tomography definition of lamellar macular hole (LMH) and similar conditions is needed. Methods: The panel reviewed relevant peer-reviewed literature to reach an accord on the LMH definition and to differentiate LMH from other similar conditions. Results: The panel reached a consensus on the definition of three clinical entities: LMH, epiretinal membrane (ERM) foveoschisis and macular pseudohole (MPH). The LMH definition is based on three mandatory criteria and three optional anatomical features. The three mandatory criteria are the presence of an irregular foveal contour, the presence of a foveal cavity with undermined edges and the apparent loss of foveal tissue. Optional anatomical features include the presence of epiretinal proliferation, the presence of a central foveal bump and the disruption of the ellipsoid zone. The ERM foveoschisis definition is based on two mandatory criteria: the presence of ERM and the presence of schisis at the level of Henle's fibre layer. Three optional anatomical features can also be present: the presence of microcystoid spaces in the inner nuclear layer (INL), an increase of retinal thickness and the presence of retinal wrinkling. The MPH definition is based on three mandatory criteria and two optional anatomical features. Mandatory criteria include the presence of a foveal-sparing ERM, the presence of a steepened foveal profile and an increased central retinal thickness. Optional anatomical features are the presence of microcystoid spaces in the INL and a normal retinal thickness. Conclusions: The use of the proposed definitions may provide uniform language for clinicians and future research.
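    The mandatory criteria above lend themselves to a simple rule-based encoding. The sketch below is a hypothetical illustration of how the three consensus definitions could be expressed in code; the field names and the classify helper are invented for this example, and it is not a validated diagnostic tool.

```python
# Hypothetical encoding of the mandatory consensus criteria (illustrative only).
from dataclasses import dataclass

@dataclass
class OCTFindings:
    irregular_foveal_contour: bool
    foveal_cavity_undermined_edges: bool
    apparent_loss_of_foveal_tissue: bool
    epiretinal_membrane: bool
    henle_fibre_layer_schisis: bool
    foveal_sparing_erm: bool
    steepened_foveal_profile: bool
    increased_central_thickness: bool

def classify(f: OCTFindings) -> list[str]:
    """Return the consensus definitions whose mandatory criteria are all met."""
    labels = []
    if (f.irregular_foveal_contour and f.foveal_cavity_undermined_edges
            and f.apparent_loss_of_foveal_tissue):
        labels.append("lamellar macular hole (LMH)")
    if f.epiretinal_membrane and f.henle_fibre_layer_schisis:
        labels.append("ERM foveoschisis")
    if (f.foveal_sparing_erm and f.steepened_foveal_profile
            and f.increased_central_thickness):
        labels.append("macular pseudohole (MPH)")
    return labels
```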

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotic systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.

    Prevalence of haptic feedback in robot-mediated surgery : a systematic review of literature

    © 2017 Springer-Verlag. This is a post-peer-review, pre-copyedit version of an article published in Journal of Robotic Surgery; the final authenticated version is available online at https://doi.org/10.1007/s11701-017-0763-4. With the successful uptake and inclusion of robotic systems in minimally invasive surgery, and with the increasing application of robotic surgery (RS) in numerous surgical specialities worldwide, there is now a need to develop and enhance the technology further. One such improvement is the implementation and amalgamation of haptic feedback technology into RS, which will permit the operating surgeon at the console to receive haptic information on the type of tissue being operated on. The main advantage is to allow the operating surgeon to feel and control the amount of force applied to different tissues during surgery, thus minimising the risk of tissue damage due to both the direct and indirect effects of excessive tissue force or tension being applied during RS. We performed a two-rater systematic review to identify the latest developments and potential avenues for improving the application and implementation of haptic feedback technology for the operating surgeon at the console during RS. This review provides a summary of technological enhancements in RS, considering different stages of work, from proof of concept to cadaver tissue testing, surgery in animals, and finally real implementation in surgical practice. We identify that, at the time of this review, while there is unanimous agreement regarding the need for haptic and tactile feedback, there are no solutions or products available that address this need. There is scope and need for new developments in haptic augmentation for robot-mediated surgery, with the aim of improving patient care and robotic surgical technology further.

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the focus of most devices remains on improving end-effector dexterity and precision, as well as on improved access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty with depth perception in two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) whilst providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
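    As a rough illustration of the tool-to-organ collision-detection problem mentioned above (not a method taken from any reviewed system), the sketch below approximates the instrument tip as a sphere and the organ surface as a point cloud, flagging a collision when the nearest surface point falls inside the tip radius; the tip_collides helper and the toy data are assumptions made for this example.

```python
# Minimal sketch of sphere-vs-point-cloud collision checking (illustrative only).
import numpy as np
from scipy.spatial import cKDTree

def tip_collides(organ_points: np.ndarray, tip_position: np.ndarray,
                 tip_radius: float) -> bool:
    """Flag a collision when any surface point lies inside the tip sphere."""
    tree = cKDTree(organ_points)
    nearest_distance, _ = tree.query(tip_position)
    return bool(nearest_distance < tip_radius)

# Toy usage: 1000 random surface points (in mm) and a 2 mm tip radius.
surface = np.random.rand(1000, 3) * 100.0
print(tip_collides(surface, np.array([50.0, 50.0, 50.0]), tip_radius=2.0))
```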

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
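    To make the reconstruction problem concrete, the sketch below implements textbook linear (DLT) triangulation for a stereo laparoscope: given two camera projection matrices and one matched pixel pair, it recovers a 3D surface point. The camera parameters and the test point are illustrative placeholders, not values from any reviewed system.

```python
# Minimal sketch of linear (DLT) triangulation for one stereo correspondence.
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray,
                x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """Recover a 3D point from pixels x1, x2 and 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                                  # de-homogenise

# Toy check: project a known point with two cameras, then recover it.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-5.0], [0.0], [0.0]])])  # 5 mm baseline
X_true = np.array([10.0, -5.0, 80.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))                       # approximately [10, -5, 80]
```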

    Augmented navigation

    Spinal fixation procedures carry an inherent risk of damage to vulnerable anatomical structures such as the spinal cord, nerve roots, and blood vessels. To prevent complications, several technological aids have been introduced. Surgical navigation is the most widely used and guides the surgeon by providing the position of the surgical instruments and implants in relation to the patient anatomy, based on radiographic images. Navigation can be extended by the addition of a robotic arm to replace the surgeon's hand and increase accuracy. Another line of surgical aids is tissue sensing equipment that recognizes different tissue types and provides a warning system built into surgical instruments. All these technologies are under continuous development and the optimal solution is yet to be found. The aim of this thesis was to study the use of Augmented Reality (AR), Virtual Reality (VR), Artificial Intelligence (AI), and tissue sensing technology in spinal navigation to improve precision and prevent surgical errors. The aim of Paper I was to develop and validate an algorithm for automating the intraoperative planning of pedicle screws. An AI algorithm for automatic segmentation of the spine and screw path suggestion was developed and evaluated. In a clinical study of advanced deformity cases, the algorithm provided correct suggestions for 86% of all pedicles, or 95% when cases with extremely altered anatomy were excluded. Paper II evaluated the accuracy of pedicle screw placement using a novel augmented reality surgical navigation (ARSN) system harboring the above-developed algorithm. Twenty consecutively enrolled patients, eligible for deformity correction surgery in the thoracolumbar region, were operated on using the ARSN system. In this cohort, we found a pedicle screw placement accuracy of 94%, as measured according to the Gertzbein grading scale. The primary goal of Paper III was to validate an extension of the ARSN system for placing pedicle screws using instrument tracking and VR. In a porcine cadaver model, it was demonstrated that VR instrument tracking could successfully be integrated with the ARSN system, resulting in pedicle devices placed within 1.7 ± 1.0 mm of the planned path. Paper IV examined the feasibility of a robot-guided system for semi-automated, minimally invasive pedicle screw placement in a cadaveric model. Using the robotic arm, pedicle devices were placed within 0.94 ± 0.59 mm of the planned path. The use of a semi-automated surgical robot was feasible, providing higher technical accuracy than non-robotic solutions. Paper V investigated the use of a tissue sensing technology, diffuse reflectance spectroscopy (DRS), for detecting the cortical bone boundary in vertebrae during pedicle screw insertions. The technology could accurately differentiate between cancellous and cortical bone and warn the surgeon before a cortical breach. Using machine learning models, the technology demonstrated a sensitivity of 98% [range: 94-100%] and a specificity of 98% [range: 91-100%]. In conclusion, several technological aids can be used to improve accuracy during spinal fixation procedures. In this thesis, the advantages of adding AR, VR, AI and tissue sensing technology to conventional navigation solutions were studied.
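    As a small illustration of how the reported sensitivity and specificity figures are defined (using synthetic labels, not the thesis data), the sketch below computes both metrics from binary cortical-versus-cancellous predictions.

```python
# Minimal sketch: sensitivity and specificity of a binary breach classifier.
import numpy as np

def sensitivity_specificity(y_true: np.ndarray, y_pred: np.ndarray):
    """1 = cortical bone (breach imminent), 0 = cancellous bone."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic example labels, not measurements from the thesis.
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 1])
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```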

    Robotics in neurosurgery: A literature review

    Robotic surgery has been the forte of minimally invasive stereotactic procedures for some decades now. Ongoing advancements and evolutionary developments require substantial evidence to build consensus about its efficacy in the field of neurosurgery. The main obstacles to obtaining successful results in neurosurgery are the fine neural structures involved and other anatomical limitations. Currently, human rationalisation and robotic precision work in symbiosis to provide improved results. We reviewed the current data on recent interventions. Robots are capable of providing virtual data, superior spatial resolution and geometric accuracy, superior dexterity, faster manoeuvring and non-fatigability with steady motion. Robotic surgery also allows simulation of virtual procedures, which is of great help to young apprentice surgeons, allowing them to practise their surgical skills in a safe environment. It also allows senior professionals to rehearse difficult cases before undertaking considerably risky procedures.

    Augmented reality for computer assisted orthopaedic surgery

    In recent years, computer assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. The benefits of computer assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, with improvements in clinical outcomes found through increased control and precision over surgical actions. However, human-computer interaction in CAOS remains an evolving field, through emerging display technologies including augmented reality (AR), a fused view of the real environment with virtual, computer-generated holograms. Interactions between clinicians and the patient-specific data generated during CAOS are limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. The work described in this thesis sought to explore the benefits of AR in CAOS through: an integration between commercially available AR and CAOS systems; a novel AR-centric surgical workflow supporting various tasks of computer-assisted knee arthroplasty; and three pre-clinical studies exploring the impact of the new AR workflow on both existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the 2D user interface of an existing CAOS system onto a virtual AR screen and investigating any resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets with optical trackers and calculating a spatial transformation between surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty. The reported workflow provided 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted cutting. Pre-clinical experimental validation of these contributions on a commercial system (NAVIO®, Smith & Nephew) demonstrates encouraging early-stage results, showing successful deployment of AR in CAOS systems and promising indications that AR can enhance the clinician's interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations and future research opportunities.
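    The calibration between surgical and holographic coordinate frames described above amounts to a rigid point-set registration. The sketch below shows one standard way to compute such a transform (the Kabsch/SVD method) from paired fiducial positions; the function name and the assumption of pre-matched fiducials are illustrative, not the thesis' exact protocol.

```python
# Minimal sketch: least-squares rigid registration between two N x 3 point sets,
# mapping optical-tracker coordinates into AR-headset (holographic) coordinates.
import numpy as np

def rigid_transform(tracker_pts: np.ndarray, headset_pts: np.ndarray):
    """Return R, t such that headset_pts ~= tracker_pts @ R.T + t."""
    c_a = tracker_pts.mean(axis=0)
    c_b = headset_pts.mean(axis=0)
    H = (tracker_pts - c_a).T @ (headset_pts - c_b)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = c_b - R @ c_a
    return R, t

# Toy usage: recover a known rotation about the z-axis plus a translation.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
tracker = np.random.rand(6, 3) * 100.0
headset = tracker @ R_true.T + np.array([10.0, -5.0, 2.0])
R_est, t_est = rigid_transform(tracker, headset)
print(np.allclose(R_est, R_true), np.round(t_est, 3))
```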