
    Augmented Reality Visualization for Image-Guided Surgery: A Validation Study Using a Three-Dimensional Printed Phantom

    Background Oral and maxillofacial surgery currently relies on virtual surgical planning based on image data (CT, MRI). Three-dimensional (3D) visualizations are typically used to plan and predict the outcome of complex surgical procedures. To translate the virtual surgical plan to the operating room, it is either converted into physical 3D-printed guides or directly translated using real-time navigation systems. Purpose This study aims to improve the translation of the virtual surgical plan to a surgical procedure, such as oncologic or trauma surgery, in terms of accuracy and speed. Here we report an augmented reality visualization technique for image-guided surgery. It describes how surgeons can visualize and interact with the virtual surgical plan and navigation data while in the operating room. User friendliness and usability were assessed in a formal user study that compared our augmented reality assisted technique to the gold-standard setup of a perioperative navigation system (Brainlab). Moreover, the accuracy of typical navigation tasks, such as reaching landmarks and following trajectories, was compared. Results Overall completion time of navigation tasks was 1.71 times faster using augmented reality (P = .034). Accuracy improved significantly using augmented reality (P < .001); for reaching physical landmarks, a weaker correlation was found (P = .087). Although the participants were relatively unfamiliar with VR/AR (rated 2.25/5) and gesture-based interaction (rated 2/5), they reported that navigation tasks became easier to perform using augmented reality (difficulty rated 3.25/5 for Brainlab, 2.4/5 for HoloLens). Conclusion The proposed workflow can be used in a wide range of image-guided surgery procedures as an addition to existing verified image guidance systems. The results of this user study imply that our technique enables typical navigation tasks to be performed faster and more accurately than the current gold standard.
In addition, qualitative feedback on our augmented reality assisted technique was more positive than for the standard setup. (C) 2021 The Author. Published by Elsevier Inc. on behalf of The American Association of Oral and Maxillofacial Surgeons.

    Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces

    This paper contributes a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.

    Three-dimensional ultrasound image-guided robotic system for accurate microwave coagulation of malignant liver tumours

    Background The further application of conventional ultrasound (US) image-guided microwave (MW) ablation of liver cancer is often limited by two-dimensional (2D) imaging, inaccurate needle placement and the resulting skill requirement. A three-dimensional (3D) image-guided robotic-assisted system provides an appealing alternative, enabling the physician to perform consistent, accurate therapy with improved treatment effectiveness. Methods Our robotic system is constructed by integrating an imaging module, a needle-driven robot, a MW thermal field simulation module, and surgical navigation software in a practical and user-friendly manner. The robot executes precise needle placement based on the 3D model reconstructed from freehand-tracked 2D B-scans. A qualitative slice guidance method for fine registration is introduced to reduce the placement error caused by target motion. By incorporating the 3D MW specific absorption rate (SAR) model into the heat transfer equation, the MW thermal field simulation module determines the MW power level and the coagulation time for improved ablation therapy. Two types of wrists are developed for the robot: a ‘remote centre of motion’ (RCM) wrist and a non-RCM wrist, the latter being preferred in real applications. Results The needle placement accuracies were < 3 mm for both wrists in the mechanical phantom experiment. The target accuracy for the robot with the RCM wrist was improved to 1.6 ± 1.0 mm when real-time 2D US feedback was used in the artificial-tissue phantom experiment. By using the slice guidance method, the robot with the non-RCM wrist achieved an accuracy of 1.8 ± 0.9 mm in the ex vivo experiment, even when target motion was introduced. In the thermal field experiment, a 5.6% relative mean error was observed between the experimental coagulated necrosis volume and the simulation result.
Conclusion The proposed robotic system holds promise to enhance the clinical performance of percutaneous MW ablation of malignant liver tumours. Copyright © 2010 John Wiley & Sons, Ltd.
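The abstract states that the 3D SAR model is incorporated into the heat transfer equation but does not give the model itself; a standard form for this coupling is the Pennes bioheat equation with a SAR source term (the symbols below are the conventional ones, not taken from the paper):

```latex
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot (k \nabla T)
  + \rho\,\mathrm{SAR}
  - \rho_b c_b \omega_b \,(T - T_b)
  + Q_m
```

where $T$ is tissue temperature; $\rho$, $c$, $k$ are tissue density, specific heat, and thermal conductivity; the subscript $b$ denotes blood (with perfusion rate $\omega_b$ and arterial temperature $T_b$); and $Q_m$ is metabolic heat generation. The simulated coagulation volume then follows from thresholding the resulting temperature (or thermal dose) field.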

    FPGA-based High-Performance Collision Detection: An Enabling Technique for Image-Guided Robotic Surgery

    Collision detection, which refers to the computational problem of finding the relative placement or configuration of two or more objects, is an essential component of many applications in computer graphics and robotics. In image-guided robotic surgery, real-time collision detection is critical for preserving healthy anatomical structures during the surgical procedure. However, the computational complexity of the problem usually results in algorithms that operate at low speed. In this paper, we present a fast and accurate algorithm for collision detection between Oriented Bounding Boxes (OBBs) that is suitable for real-time implementation. Our proposed Sweep and Prune algorithm performs a preliminary filtering step that reduces the number of objects to be tested by the classical Separating Axis Test algorithm, while preserving the OBB pairs of interest. These OBB pairs are then re-checked by the Separating Axis Test algorithm to obtain their accurate overlapping status. To accelerate execution, our Sweep and Prune algorithm is tailor-made for the proposed method. Meanwhile, a high-performance scalable hardware architecture is derived by analyzing the intrinsic parallelism of our algorithm, and is implemented on an FPGA platform. Results show that our hardware design on the FPGA platform achieves around 8X higher running speed than the software design on a CPU platform. As a result, the proposed algorithm can achieve a collision frame rate of 1 kHz, and fulfill the requirement of the medical surgery scenario of Robot-Assisted Laparoscopy.
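As a rough illustration of the broad-phase idea described above, here is a minimal one-dimensional sweep-and-prune sketch in Python (an illustrative toy, not the paper's FPGA design; the function and variable names are hypothetical):

```python
def sweep_and_prune(boxes):
    """Broad-phase filter on one axis.

    boxes: list of (id, min_x, max_x) interval projections of bounding boxes.
    Returns candidate pairs whose x-intervals overlap; only these pairs
    would go on to the exact (e.g. Separating Axis) narrow-phase test.
    """
    pairs = []
    active = []                                   # boxes whose interval is still open
    for box in sorted(boxes, key=lambda b: b[1]):  # sweep in order of min_x
        # drop boxes whose interval ended before this one starts
        active = [a for a in active if a[2] >= box[1]]
        # every remaining active box overlaps the new one on this axis
        pairs.extend((a[0], box[0]) for a in active)
        active.append(box)
    return pairs
```

Running the sweep on each of the three axes and intersecting the candidate sets gives the usual 3-D broad phase; the exact Separating Axis Test is then applied only to the surviving pairs.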

    Virtual Reality Aided Mobile C-arm Positioning for Image-Guided Surgery

    Image-guided surgery (IGS) is a minimally invasive procedure based on the pre-operative volume in conjunction with intra-operative X-ray images, which are commonly captured by mobile C-arms for the confirmation of surgical outcomes. Although some commercial navigation systems are currently employed, one critical issue of such systems is that they neglect the radiation exposure to the patient and surgeons. In practice, when one surgical stage is finished, several X-ray images have to be acquired repeatedly by the mobile C-arm to obtain the desired image, and excessive radiation exposure may increase the risk of complications. Therefore, it is necessary to develop a positioning system for mobile C-arms that achieves one-time imaging and avoids the additional radiation exposure. In this dissertation, a mobile C-arm positioning system is proposed with the aid of virtual reality (VR). The surface model of the patient is reconstructed by a camera mounted on the mobile C-arm. A novel registration method is proposed to align this model and the pre-operative volume based on a tracker, so that surgeons can visualize the hidden anatomy directly from the outside view and determine a reference pose of the C-arm. Considering the congested operating room, the C-arm is modeled as a manipulator with a movable base to maneuver the image intensifier to the desired pose. In the registration procedure above, intensity-based 2D/3D registration is used to transform the pre-operative volume into the coordinate system of the tracker. Although it provides high accuracy, its small capture range and sensitivity to the initial guess hinder clinical use. To address this problem, a robust and fast initialization method is proposed, based on automatic tracking-based initialization and multi-resolution estimation in the frequency domain. This hardware-software integrated approach provides near-optimal transformation parameters for intensity-based registration.
To determine the pose of the mobile C-arm, high-quality visualization is necessary to locate the pathology in the hidden anatomy. A novel dimensionality reduction method based on sparse representation is proposed for the design of multi-dimensional transfer functions in direct volume rendering. It not only achieves performance similar to conventional methods, but can also handle large data sets.
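The dissertation's frequency-domain initialization is not spelled out in the abstract; a common building block for translation estimation in the frequency domain is phase correlation, sketched here with NumPy as an illustrative assumption (not necessarily the exact method used):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer circular shift (dy, dx) such that
    a ≈ np.roll(b, (dy, dx), axis=(0, 1)), via the normalized
    cross-power spectrum of the two images."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                       # keep phase only
    corr = np.fft.ifft2(F).real                  # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the image size wrap around to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Because the estimate is global and insensitive to the starting point, such a step can hand a coarse transform to an intensity-based registration whose own capture range is small.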

    Towards Real-time Remote Processing of Laparoscopic Video

    Laparoscopic surgery is a minimally invasive technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform procedures. However, the benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic system is the daVinci-si robotic surgical vision system. Its video streams generate approximately 360 megabytes of data per second, demonstrating a trend toward increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this large stream of data in real time on a bedside PC (a single- or dual-node setup) may be challenging, and a high-performance computing (HPC) environment is not typically available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second (fps), each 11.9 MB (1080p) video frame must be processed by a server and returned within the time that frame is displayed, i.e. 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. We have implemented and compared the performance of compression, segmentation and registration algorithms on Clemson's Palmetto supercomputer using dual Nvidia graphics processing units (GPUs) per node and the compute unified device architecture (CUDA) programming model. We developed three separate applications that run simultaneously: video acquisition, image processing, and video display. The image processing application allows several algorithms to run simultaneously on different cluster nodes and transfers images through the message passing interface (MPI).
Our segmentation and registration algorithms achieved acceleration factors of around 2 and 8 times, respectively. To achieve a higher frame rate, we also resized images and reduced the overall processing time. As a result, using a high-speed network to access computing clusters with GPUs to run these algorithms in parallel will improve surgical procedures by providing real-time processing of medical images and laparoscopic data.
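The numbers in the abstract imply a tight per-frame budget; a quick back-of-envelope check (the 11.9 MB/frame and 30 fps figures are from the text, the rest is arithmetic):

```python
# Back-of-envelope budget for remote processing of the daVinci-si stream.
frame_mb = 11.9            # one 1080p video frame, in megabytes (from the text)
fps = 30                   # display rate (from the text)
deadline_s = 1 / fps       # each frame must go out, be processed, and return
                           # before the next frame is displayed (~33 ms)

stream_mbit_s = frame_mb * fps * 8   # sustained link bandwidth, in megabits/s
print(f"per-frame deadline : {deadline_s * 1000:.1f} ms")
print(f"required bandwidth : {stream_mbit_s:.0f} Mbit/s")
```

The roughly 2.9 Gbit/s sustained requirement is why the paper's pipeline overlaps acquisition, processing, and display in separate concurrent applications rather than processing frames serially.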

    Recent Developments and Future Challenges in Medical Mixed Reality

    As AR technology matures, we have seen many applications emerge in entertainment, education and training. However, the use of AR is not yet common in medical practice, despite the great potential of this technology to help not only learning and training in medicine, but also in assisting diagnosis and surgical guidance. In this paper, we present recent trends in the use of AR across all medical specialties and identify challenges that must be overcome to narrow the gap between academic research and practical use of AR in medicine. A database of 1403 relevant research papers published over the last two decades has been reviewed using a novel research trend analysis method based on a text mining algorithm. We semantically identified 10 topics, covering a variety of technologies and applications, based on the unbiased and impersonal clustering results from the Latent Dirichlet Allocation (LDA) model, and analysed the trend of each topic from 1995 to 2015. The statistical results reveal a taxonomy that best describes the development of medical AR research during these two decades, and the trend analysis provides a higher-level view of how the taxonomy has changed and where the focus is heading. Finally, based on these results, we provide an insightful discussion of the current limitations, challenges and future directions in the field. Our objective is to help researchers focus on the application areas in medical AR that are most needed, as well as to provide medical practitioners with the latest technology advancements.
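The trend-analysis step (tracking each topic's share of the literature per year, once LDA has assigned papers to topics) can be sketched with the standard library; the toy data and function name below are illustrative, not the paper's:

```python
from collections import Counter, defaultdict

def topic_trends(assignments):
    """assignments: iterable of (year, topic) pairs, one per paper.

    Returns {year: {topic: share}}, where share is the fraction of that
    year's papers assigned to the topic. Plotting shares over time gives
    the per-topic trend curves."""
    per_year = defaultdict(Counter)
    for year, topic in assignments:
        per_year[year][topic] += 1
    return {year: {topic: n / sum(counts.values())
                   for topic, n in counts.items()}
            for year, counts in per_year.items()}
```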

    Robust fetoscopic mosaicking from deep learned flow fields

    PURPOSE: Fetoscopic laser photocoagulation is a minimally invasive procedure to treat twin-to-twin transfusion syndrome during pregnancy by stopping irregular blood flow in the placenta. Building an image mosaic of the placenta and its network of vessels could assist surgeons in navigating the challenging fetoscopic environment during the procedure. METHODOLOGY: We propose a fetoscopic mosaicking approach that combines deep learning-based optical flow with robust estimation to filter out inconsistent motions caused by floating particles and specularities. While the current state of the art for fetoscopic mosaicking relies on clearly visible vessels for registration, our approach overcomes this limitation by considering the motion of all consistent pixels within consecutive frames. We also overcome the challenges of applying off-the-shelf optical flow to fetoscopic mosaicking through the use of robust estimation and local refinement. RESULTS: We compare our proposed method against the state-of-the-art vessel-based and optical flow-based image registration methods, as well as robust estimation alternatives. We also compare our proposed pipeline using different optical flow and robust estimation alternatives. CONCLUSIONS: Our analysis shows that our method outperforms both the vessel-based state of the art and Lucas-Kanade (LK) optical flow, notably when vessels are either poorly visible or too thin to be reliably identified. Our approach is thus able to build consistent placental vessel mosaics in challenging cases where currently available alternatives fail.
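The robust-estimation idea — discarding flow vectors that disagree with the dominant motion before fitting the frame-to-frame transform — can be illustrated with a minimal NumPy sketch (the paper's actual pipeline fits richer motion models and refines locally; this shows only the filtering step, with hypothetical names):

```python
import numpy as np

def robust_translation(flow, thresh=1.0):
    """Filter inconsistent flow vectors (e.g. floating particles and
    specular highlights) with a robust median fit, then refit on inliers.

    flow: (N, 2) array of per-pixel displacement vectors.
    Returns the refined translation estimate and the inlier mask."""
    med = np.median(flow, axis=0)                          # robust initial fit
    inliers = np.linalg.norm(flow - med, axis=1) < thresh  # consistent motion only
    return flow[inliers].mean(axis=0), inliers
```

The same reject-then-refit pattern extends to homography fitting (e.g. via RANSAC) when the inter-frame motion is more than a translation.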

    Ultrasound-Augmented Laparoscopy

    Laparoscopic surgery is perhaps the most common minimally invasive procedure for many diseases in the abdomen. Since the laparoscopic camera provides only a surface view of the internal organs, in many procedures surgeons use laparoscopic ultrasound (LUS) to visualize deep-seated surgical targets. Conventionally, the 2D LUS image is shown on a display spatially separate from the one that displays the laparoscopic video. Reasoning about the geometry of hidden targets therefore requires mentally solving the spatial alignment and resolving the modality differences, which is cognitively very challenging. Moreover, the mental representation of hidden targets acquired through such cognitive mediation may be error-prone and cause incorrect actions to be performed. To remedy this, advanced visualization strategies are required in which the US information is visualized in the context of the laparoscopic video. To this end, efficient computational methods are required to accurately align the US image coordinate system with the camera-centric coordinate system, and to render the registered image information in the context of the camera such that surgeons perceive the geometry of hidden targets accurately. In this thesis, such a visualization pipeline is described. A novel method to register US images with a camera-centric coordinate system is detailed, with an experimental investigation into its accuracy bounds. An improved method to blend US information with the surface view is also presented, with an experimental investigation into the accuracy of perception of target locations in space.
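The coordinate alignment described above is, at its core, a chain of rigid transforms mapping points from the US image frame through the tracker into the camera frame. A minimal homogeneous-matrix sketch (the frame names and calibration values are hypothetical, for illustration only):

```python
import numpy as np

def rigid(R=np.eye(3), t=(0.0, 0.0, 0.0)):
    """Build a 4x4 homogeneous rigid transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# p_cam = T_cam_tracker @ T_tracker_us @ p_us
T_tracker_us = rigid(t=(10, 0, 0))    # US image -> tracker (hypothetical calibration)
T_cam_tracker = rigid(t=(0, 5, 0))    # tracker -> camera   (hypothetical calibration)

p_us = np.array([1.0, 2.0, 3.0, 1.0])  # homogeneous point in the US image frame
p_cam = T_cam_tracker @ T_tracker_us @ p_us
```

In practice T_tracker_us comes from probe calibration and T_cam_tracker from tracking the camera, and the accuracy of the blended view is bounded by the errors accumulated along this chain.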