
    OPTICAL NAVIGATION TECHNIQUES FOR MINIMALLY INVASIVE ROBOTIC SURGERIES

    Minimally invasive surgery (MIS) involves small incisions in a patient's body, leading to reduced medical risk and shorter hospital stays compared with open surgery. For these reasons, MIS has experienced increasing demand across many types of surgery. MIS sometimes utilizes robotic instruments to complement human surgical manipulation and achieve higher precision than traditional surgery can offer. Modern surgical robots operate within a master-slave paradigm, in which a robotic slave replicates the control gestures of a master tool manipulated by a human surgeon. Presently, certain human errors due to hand tremor or unintended motions are moderately compensated at the tool-manipulation console. However, errors arising from robotic vision and the display presented to the surgeon are not equivalently addressed. Current vision capabilities within the master-slave robotic paradigm rely on perceptual vision through a limited binocular view, which considerably impairs the surgeon's hand-eye coordination and provides no quantitative geometric localization for robot targeting. These limitations lead to unexpected surgical outcomes and longer operating times compared with open surgery. To improve vision capabilities in an endoscopic setting, we designed and built several image-guided robotic systems that achieved sub-millimeter accuracy. With this improved accuracy, we developed a corresponding surgical planning method for robotic automation. As a demonstration, we prototyped an autonomous electro-surgical robot that employed quantitative 3D structural reconstruction with near-infrared registration and tissue-classification methods to localize optimal targeting and suturing points for minimally invasive surgery. Results from validation of the cooperative control and of the registration of the vision system in a series of in vivo and in vitro experiments are presented, and the potential enhancement to autonomous robotic minimally invasive surgery offered by our technique is discussed.
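    As an illustration of the kind of geometric registration such a vision system depends on, the following is a minimal 2D sketch of least-squares rigid registration between two point sets, the planar analogue of the Kabsch algorithm. This is an illustrative example, not the authors' implementation; the function name is hypothetical.

```python
import math

def rigid_register_2d(src, dst):
    """Estimate the rotation theta and translation (tx, ty) that map the
    src points onto the dst points in a least-squares sense."""
    n = len(src)
    # Centroids of both point sets.
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Accumulate dot and cross products of the centered point pairs;
    # their ratio gives the optimal rotation angle.
    s_cos = s_sin = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty
```

    In practice such a fit is computed in 3D from fiducial or feature correspondences, but the centroid-then-rotation structure is the same.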

    Software Framework for Customized Augmented Reality Headsets in Medicine

    The growing availability of self-contained and affordable augmented reality headsets such as the Microsoft HoloLens is encouraging the adoption of these devices in the healthcare sector as well. However, technological and human-factor limitations still hinder their routine use in clinical practice. Chief among these drawbacks are their general-purpose nature and the lack of a standardized framework suited for medical applications that is free of platform-dependent tracking techniques and complex calibration procedures. To overcome these limitations, in this paper we present a software framework designed to support the development of augmented reality applications for custom-made head-mounted displays intended to aid high-precision manual tasks. The software platform is highly configurable and computationally efficient, and it allows the deployment of augmented reality applications capable of supporting in situ visualization of medical imaging data. The framework can provide both optical and video see-through-based augmentations, and it features a robust optical tracking algorithm. An experimental study was designed to assess the efficacy of the platform in guiding a simulated surgical incision task. In the experiments, the user was asked to perform a digital incision task with and without the aid of the augmented reality headset. Task accuracy was evaluated by measuring the similarity between the traced curve and the planned one. The average error in the augmented reality tests was < 1 mm. The results confirm that the proposed framework, coupled with the new-concept headset, may boost the integration of augmented reality headsets into routine clinical practice.
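    The reported accuracy measure, the similarity between the traced curve and the planned one, can be sketched as a mean point-to-polyline distance. This is a hypothetical minimal implementation for illustration, not the paper's actual metric code.

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to the line segment a-b (all 2D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                      # degenerate segment
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def mean_curve_error(traced, planned):
    """Mean distance from each traced sample to the planned polyline."""
    return sum(min(point_segment_dist(p, planned[i], planned[i + 1])
                   for i in range(len(planned) - 1))
               for p in traced) / len(traced)
```

    With curves sampled in millimeters, a mean error below 1.0 from this function corresponds to the sub-millimeter accuracy reported in the study.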

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the focus of most devices remains on improving end-effector dexterity and precision, as well as access to minimally invasive surgery. This paper provides a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty with depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) while proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.

    Ranging of Aircraft Using Wide-baseline Stereopsis

    The purpose of this research was to investigate the efficacy of wide-baseline stereopsis as a method of ranging aircraft, specifically as a possible sense-and-avoid solution for Unmanned Aerial Systems. Two studies were performed: the first was an experimental pilot study examining the ability of humans to range in-flight aircraft, and the second a wide-baseline study of stereopsis for ranging in-flight aircraft using a baseline of 14.32 meters and two 640 x 480 pixel charge-coupled device cameras. An experimental research design was used in both studies. Humans in the pilot study ranged aircraft with a mean absolute error of 50.34%. The wide-baseline stereo system ranged aircraft within 2 kilometers with a mean absolute error of 17.62%. A t-test showed a significant difference between the mean absolute error of the humans in the pilot study and that of the wide-baseline stereo system. The results suggest that the wide-baseline system is both more consistent and more accurate than humans.
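    The core of stereo ranging is the pinhole relation Z = f·B/d, where B is the baseline, f the focal length in pixels, and d the disparity in pixels. A minimal sketch follows; the 14.32 m baseline is from the study, but the focal-length and disparity values in the test are purely illustrative, as is the error metric used to score the study.

```python
def stereo_range(focal_px, baseline_m, disparity_px):
    """Range to a target from stereo disparity: Z = f * B / d
    (pinhole stereo model, rectified cameras)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_px * baseline_m / disparity_px

def mean_absolute_error_pct(estimates, truths):
    """Mean absolute percentage error, the statistic used to compare
    human and stereo-system ranging performance."""
    return 100.0 * sum(abs(e - t) / t
                       for e, t in zip(estimates, truths)) / len(estimates)
```

    The relation also explains why a wide baseline helps: for a fixed range, disparity grows linearly with B, so quantization error in d translates into a smaller relative range error.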

    AUTOMATIC PERFORMANCE LEVEL ASSESSMENT IN MINIMALLY INVASIVE SURGERY USING COORDINATED SENSORS AND COMPOSITE METRICS

    Skills assessment in Minimally Invasive Surgery (MIS) has long been a challenge for training centers. The emerging maturity of camera-based systems has the potential to transform problems into solutions in many different areas, including MIS. The current evaluation techniques for assessing the performance of surgeons and trainees are direct observation, global assessments, and checklists. These techniques are mostly subjective and can therefore involve a margin of bias. Current automated approaches are all implemented using mechanical or electromagnetic sensors, which suffer limitations and influence the surgeon's motion. Thus, objectively evaluating the skills of MIS surgeons and trainees has become an increasing concern. In this work, we integrate and coordinate multiple camera sensors to assess the performance of MIS trainees and surgeons. This study aims to develop an objective, data-driven assessment that takes advantage of multiple coordinated sensors. The technical framework for the study is a synchronized network of sensors that captures large sets of measures from the training environment. The measures are then processed to produce a reliable set of individual and composite metrics, coordinated in time, that suggest patterns of skill development. The sensors are non-invasive, real-time, and coordinated over many cues, such as eye movement, external shots of the body and instruments, and internal shots of the operative field. The platform is validated by a case study of 17 subjects and 70 sessions. The results show that the platform output is highly accurate and reliable in detecting patterns of skill development and predicting the skill level of trainees.
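    Composite motion metrics of the kind such platforms derive, for example path length and economy of motion computed from a tracked tool-tip trajectory, can be sketched as follows. This is an illustrative example under assumed metric definitions, not the platform's actual metric set.

```python
import math

def path_length(traj):
    """Total tool-tip path length over a sampled 3D trajectory
    (list of (x, y, z) positions)."""
    return sum(math.dist(traj[i], traj[i + 1]) for i in range(len(traj) - 1))

def economy_of_motion(traj):
    """Ratio of straight-line displacement to total path length.
    Values near 1.0 indicate efficient, direct movements; novices
    typically score lower than experts on such measures."""
    straight = math.dist(traj[0], traj[-1])
    total = path_length(traj)
    return straight / total if total > 0 else 1.0
```

    Time-synchronized streams from several sensors (eye tracker, external cameras, endoscope) would each yield such per-session metrics, which are then combined into a composite skill score.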

    Toward Real-Time Video-Enhanced Augmented Reality for Medical Visualization and Simulation

    In this work we demonstrate two separate forms of augmented reality environments for use with minimally invasive surgical techniques. Chapter 2 demonstrates how a video feed from a webcam, which could mimic a laparoscopic or endoscopic camera used during an interventional procedure, can be used to identify the pose of the camera with respect to the viewed scene and to augment the video feed with computer-generated information, such as renderings of internal anatomy not visible beyond the image surface, resulting in a simple augmented reality environment. Chapter 3 details our implementation of a similar system that instead uses an external tracking system, specifically the Polaris Spectra optical tracker, and discusses the challenges and considerations involved in supporting it. Because the tracking origin is relocated to a point other than the camera center, an additional registration step is necessary to establish the position of all components within the scene. This modification is expected to increase the accuracy and robustness of the system.
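    The extra registration step amounts to composing rigid transforms between the tracker, camera, and scene coordinate frames. A minimal pure-Python sketch using 4x4 homogeneous matrices is shown below; the function names and frame labels are hypothetical, and a real system would also carry rotations estimated from calibration.

```python
def mat_mul(A, B):
    """Multiply two 4x4 homogeneous transforms (row-major lists of lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply_transform(T, p):
    """Apply a homogeneous transform T to a 3D point p."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(T[i][k] * v[k] for k in range(4)) for i in range(3))

def translation(tx, ty, tz):
    """Pure-translation homogeneous transform."""
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

# Chaining frames: a point expressed in the tracker frame is mapped into
# the camera frame by composing the intermediate transforms once the
# registration step has estimated them.
T_cam_from_tracker = mat_mul(translation(1.0, 0.0, 0.0),
                             translation(0.0, 2.0, 0.0))
```

    The key design point is that once every device is registered into a common frame, augmentations can be rendered consistently regardless of which sensor observed them.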

    Smart Surgical Microscope based on Optical Coherence Domain Reflectometry

    Department of Biomedical Engineering
    Over the past several decades, clinical needs have driven demand for advanced technologies in medicine. Optical coherence tomography (OCT), one of the more recently established medical imaging modalities, provides non-invasive, high-resolution cross-sectional images and is used mainly in ophthalmology. However, its limited penetration depth of 1-2 mm in biological samples has restricted its wider adoption. To integrate easily with existing medical tools and remain convenient for users, the sample unit of an OCT system should be compact and simple. In this study, we developed a high-speed swept-source OCT (SS-OCT) system for advanced screening in otolaryngology. Signal sampling with a high-speed digitizer is synchronized by a clock signal from the swept laser source, whose trigger signal also synchronizes the movement of the scanning mirror. The SS-OCT system reliably provides high-throughput images, and two-axis scanning with galvano mirrors enables real-time acquisition of 3D data. A graphics processing unit (GPU) performs high-speed data processing through parallel programming and implements perspective-projection 3D OCT visualization with optimized ray-casting techniques. In a clinical study in otolaryngology, OCT was applied to identify microscopic extrathyroidal extension (mETE) of papillary thyroid cancer (PTC). Whereas conventional ultrasonography detects mETE with an accuracy of around 60%, our approach improved this to 84.1%; the detection ratio of mETE was determined by a pathologist analyzing the histologic images.
    In chapter 3, we present a novel study combining an OCT system with a conventional surgical microscope. In the current surgical-microscope set-up, only two-dimensional microscopic images are provided to the surgeon through the eyepiece view. Thus, image-guided surgery, which provides real-time image information about the tissues or organs, has been developed as an advanced surgical technique. This study describes a newly designed optical set-up for a smart surgical microscope that combines the sample arm of the OCT with an existing microscope. Specifically, we used a beam projector to overlay OCT images on the existing eyepiece view and demonstrated augmented reality images. In chapter 4, optical coherence domain reflectometry (OCDR) was applied to develop novel microsurgical instruments. We introduce smart surgical forceps that use OCDR as a sensor providing high-speed, high-resolution distance information within the tissue. To attach the sensor to the forceps, a lensed fiber, which is small and highly sensitive, was fabricated; the results show that it is only weakly affected by tilt angle. In addition, a piezo actuator compensates for hand tremor, reducing the 5-15 Hz component of human hand tremor. Finally, an M-mode OCT needle is proposed for microsurgery guidance in ophthalmic surgery. A stepwise transitional core (STC) fiber was applied as a sensor to measure information within the tissue and attached to a 26-gauge needle. We describe the modified OCT system, the position-guided needle design on the sample stage, and the algorithm flowchart of the M-mode OCT imaging software. The developed M-mode OCT needle was applied in animal studies using rabbit eyes and demonstrated big-bubble deep anterior lamellar keratoplasty (DALK) surgery for corneal transplantation. Through this study, we propose a novel microsurgical instrument for lamellar keratoplasty and evaluate its feasibility against images from a conventional OCT system. In conclusion, as a fundamental study toward augmented-reality-guided surgery with a smart surgical microscope, we expect that OCT combined with a surgical microscope can find wide use.
    We demonstrated a novel microsurgical instrument that shares the light source and various optical components with the imaging system. The information acquired through our integrated system could be a key means of meeting a wide range of clinical needs in the real world.
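    In swept-source OCT, the depth of a reflector corresponds to the fringe frequency of the sampled interferogram, so a basic A-scan reduces to locating the peak of the magnitude spectrum. The toy sketch below uses a direct DFT for self-containedness; real systems use an FFT and must also handle multiple reflectors, wavenumber linearization, and windowing, all omitted here.

```python
import cmath, math

def dft_magnitudes(signal):
    """Magnitude spectrum of a real signal via a direct DFT (O(N^2);
    production code would use an FFT, but the result is identical)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def ascan_peak_bin(interferogram):
    """Depth bin of the strongest reflector: the peak of the
    positive-frequency half of the spectrum (bin 0, DC, is excluded)."""
    mags = dft_magnitudes(interferogram)
    half = mags[1:len(mags) // 2]
    return 1 + half.index(max(half))
```

    The same spectral-peak idea underlies the OCDR distance sensor in the smart forceps: higher fringe frequency means a more distant reflecting interface.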

    Intraoperative Planning and Execution of Arbitrary Orthopedic Interventions Using Handheld Robotics and Augmented Reality

    The focus of this work is a generic, intraoperative, and image-free planning and execution application for arbitrary orthopedic interventions using a novel handheld robotic device and optical see-through augmented reality (AR) glasses. This medical CAD application enables the surgeon to plan the intervention intraoperatively, directly on the patient's bone. The glasses and all other instruments are accurately calibrated using new techniques. Several interventions demonstrate the effectiveness of this approach.

    Engineering precision surgery: Design and implementation of surgical guidance technologies

    In the quest for precision surgery, this thesis introduces several novel detection and navigation modalities for the localization of cancer-related tissues in the operating room. The engineering efforts have focused on image-guided surgery modalities that use the complementary tracer signatures of nuclear and fluorescence radiation. The first part of the thesis covers the use of "GPS-like" navigation concepts to navigate fluorescence cameras during surgery, based on SPECT images of the patient. The second part introduces several new imaging modalities, such as a hybrid device for 3D freehand fluorescence and freehand SPECT imaging and navigation. Furthermore, to improve the detection of radioactive tracer emissions during robot-assisted laparoscopic surgery, a tethered DROP-IN gamma probe is introduced. The clinical indications used to evaluate the new technologies were all focused on sentinel lymph node procedures in urology (i.e. prostate and penile cancer). Nevertheless, all presented techniques are of such a nature that they can be applied to different surgical indications, including sentinel lymph node and tumor-receptor-targeted procedures and localization of the primary tumor and metastatic spread. This will hopefully contribute toward more precise, less invasive, and more effective surgical procedures in the field of oncology.

    The Future of the Operating Room: Surgical Preplanning and Navigation using High Accuracy Ultra-Wideband Positioning and Advanced Bone Measurement

    This dissertation embodies the diversity and creativity of my research, of which much has been peer-reviewed, published in archival quality journals, and presented nationally and internationally. Portions of the work described herein have been published in the fields of image processing, forensic anthropology, physical anthropology, biomedical engineering, clinical orthopedics, and microwave engineering. The problem studied is primarily that of developing the tools and technologies for a next-generation surgical navigation system. The discussion focuses on the underlying technologies of a novel microwave positioning subsystem and a bone analysis subsystem. The methodologies behind each of these technologies are presented in the context of the overall system with the salient results helping to elucidate the difficult facets of the problem. The microwave positioning system is currently the highest accuracy wireless ultra-wideband positioning system that can be found in the literature. The challenges in producing a system with these capabilities are many, and the research and development in solving these problems should further the art of high accuracy pulse-based positioning
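    Ultra-wideband positioning ultimately reduces range measurements to a position fix. As an illustration of the principle (not the dissertation's solver), the sketch below performs 2D trilateration from three anchors by subtracting the circle equations, which linearizes the problem, and solving the resulting 2x2 system with Cramer's rule.

```python
def trilaterate_2d(anchors, ranges):
    """Solve for (x, y) given three anchor positions and the measured
    ranges to each. Subtracting the first circle equation from the other
    two cancels the quadratic terms, leaving a linear 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("anchors are collinear; position is ambiguous")
    # Cramer's rule for the 2x2 system.
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

    A practical UWB system measures ranges from pulse time-of-flight, uses more than three anchors, and solves the overdetermined system in a least-squares sense, but the geometry is the same.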