
    Autofluorescence lifetime augmented reality as a means for real-time robotic surgery guidance in human patients

    Due to the loss of tactile feedback, the assessment of tumor margins during robotic surgery is based only on visual inspection, which is neither sufficiently sensitive nor specific. Here we demonstrate time-resolved fluorescence spectroscopy (TRFS) as a novel technique to complement the visual inspection of oral cancers during transoral robotic surgery (TORS), in real time and without the need for exogenous contrast agents. TRFS enables identification of cancerous tissue by its distinct autofluorescence signature, which is associated with alterations in tissue structure and biochemical profile. A prototype TRFS instrument was integrated with the da Vinci surgical robot, and the combined system was validated in swine and in human patients. Label-free, real-time assessment and visualization of tissue biochemical features during a robotic procedure, as demonstrated here, has the potential to improve intraoperative decision making not only during TORS but also during other robotic procedures, without modification of conventional clinical protocols.
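
    The lifetime contrast that TRFS exploits comes from fitting the measured fluorescence decay and comparing the recovered lifetime across tissue types. As a rough illustration of that step, the Python sketch below fits a single-exponential decay model to a simulated decay curve to recover the lifetime; the simulated signal, noise level, and single-exponential assumption are illustrative only, not the instrument's actual deconvolution method.

```python
# Minimal sketch of fluorescence lifetime estimation, assuming a
# single-exponential decay model I(t) = A * exp(-t / tau). Real TRFS
# instruments typically use multi-exponential or Laguerre deconvolution;
# this is an illustration, not the authors' method.
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, amplitude, tau):
    """Single-exponential fluorescence decay model."""
    return amplitude * np.exp(-t / tau)

# Hypothetical measured decay: 5 ns lifetime, sampled every 0.1 ns, with noise.
t = np.arange(0.0, 25.0, 0.1)  # time axis in nanoseconds
rng = np.random.default_rng(seed=0)
measured = single_exp(t, 1.0, 5.0) + rng.normal(0.0, 0.01, t.size)

# Fit the model; the recovered lifetime is the tissue-discriminating feature.
(amplitude, tau), _ = curve_fit(single_exp, t, measured, p0=(1.0, 1.0))
print(f"estimated lifetime: {tau:.2f} ns")
```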

    Recent trends, technical concepts and components of computer-assisted orthopedic surgery systems: A comprehensive review

    Computer-assisted orthopedic surgery (CAOS) systems have become one of the most important and challenging classes of systems in clinical orthopedics, as they enable precise treatment of musculoskeletal diseases using modern clinical navigation systems and surgical tools. This paper presents a comprehensive review of recent trends in, and the possibilities of, CAOS systems. Surgical planning systems fall into three types: systems based on volumetric images (computed tomography (CT), magnetic resonance imaging (MRI), or ultrasound), systems that use 2D or 3D fluoroscopic images, and systems that use kinetic information about the joints together with morphological information about the target bones. The review focuses on three fundamental aspects of CAOS systems: their essential components, the types of CAOS systems, and the mechanical tools they use. We also outline the possibilities for using ultrasound computer-assisted orthopedic surgery (UCAOS) systems as an alternative to conventionally used CAOS systems.
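
    A step common to all three classes of planning system is registering preoperative image coordinates to intraoperative patient coordinates, which the navigation system then tracks. The sketch below shows the standard SVD-based solution to this rigid point-registration problem from matched fiducial points; it is a textbook illustration, not code from any particular CAOS product.

```python
# Textbook SVD solution to rigid point-based registration: find R, t that
# map fiducial points from image space to patient (tracker) space. This is
# the orthogonal Procrustes / Kabsch method, not code from a CAOS product.
import numpy as np

def rigid_register(image_pts, patient_pts):
    """Return rotation R and translation t with patient ~= R @ image + t."""
    src_c = image_pts.mean(axis=0)
    dst_c = patient_pts.mean(axis=0)
    H = (image_pts - src_c).T @ (patient_pts - dst_c)  # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Demo: four fiducials displaced by a known translation (units: mm).
pts = np.random.default_rng(42).uniform(-50, 50, size=(4, 3))
R, t = rigid_register(pts, pts + np.array([10.0, -5.0, 2.0]))
print(np.round(t, 3))  # recovers the [10, -5, 2] mm offset
```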

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), most devices remain focused on improving end-effector dexterity and precision and on improving access to minimally invasive surgery. This paper provides a systematic review of the different types of state-of-the-art surgical robotic platforms and identifies areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to support complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, in pose estimation, and in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, their computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results on robot end-effector collisions and reduced occlusion remains promising within the scope of our research, supporting the case made here for the surgical clearance of ever-expanding AR technology in the future.
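
    To make the tool-to-organ collision problem mentioned above concrete, the sketch below frames it as a proximity query: flag a collision whenever the tracked tool tip comes within a safety margin of a point cloud sampled from the organ surface. The geometry, margin, and point-cloud representation are illustrative assumptions; the systems reviewed use considerably more elaborate collision models.

```python
# Tool-to-organ collision detection framed as a proximity query: flag a
# collision when the tracked tool tip is within a safety margin of a point
# cloud sampled from the organ surface. Geometry and margin are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def check_collision(organ_tree, tool_tip_mm, margin_mm=2.0):
    """Return (colliding, distance_mm) for the tool tip vs. the organ."""
    distance, _ = organ_tree.query(tool_tip_mm)
    return distance < margin_mm, distance

# Hypothetical organ surface: 5000 points on a 20 mm radius sphere.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(5000, 3))
surface = 20.0 * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
organ_tree = cKDTree(surface)  # build once per organ model, query per frame

hit, d = check_collision(organ_tree, np.array([0.0, 0.0, 19.0]))
print(f"collision: {hit}, nearest surface point: {d:.2f} mm")
```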

    Optical and hyperspectral image analysis for image-guided surgery


    The Essential Role of Open Data and Software for the Future of Ultrasound-Based Neuronavigation

    With the recent developments in machine learning and modern graphics processing units (GPUs), there is a marked shift in the way intra-operative ultrasound (iUS) images can be processed and presented during surgery. Real-time processing of images to highlight important anatomical structures, combined with in-situ display, has the potential to greatly facilitate the acquisition and interpretation of iUS images when guiding an operation. To take full advantage of the recent advances in machine learning, large amounts of high-quality annotated training data are necessary to develop and validate the algorithms. To ensure efficient collection of a sufficient number of patient images and the external validity of the models, training data should be collected at several centers by different neurosurgeons and stored in a standard format directly compatible with the most commonly used machine learning toolkits and libraries. In this paper, we argue that such an effort to collect and organize large-scale multi-center datasets should be based on common open-source software and databases. We first describe the development of existing open-source ultrasound-based neuronavigation systems and how these systems have contributed to enhanced neurosurgical guidance over the last 15 years. We review the impact of the large number of projects worldwide that have benefited from the publicly available datasets “Brain Images of Tumors for Evaluation” (BITE) and “Retrospective evaluation of Cerebral Tumors” (RESECT), which include MR and US data from brain tumor cases. We also describe the need for continuous data collection and how this effort can be organized through the use of a well-adapted and user-friendly open-source software platform that integrates both continually improved guidance and automated data collection functionalities.
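
    As one illustration of the “standard format directly compatible with machine learning toolkits” argued for above, the sketch below loads ultrasound volumes stored as NIfTI files straight into a PyTorch Dataset. The directory layout and file naming are hypothetical, and readability of the volumes with nibabel is an assumption, not a statement about how BITE or RESECT are actually packaged.

```python
# Sketch of a "standard format" training pipeline: intra-operative ultrasound
# volumes stored as NIfTI files and served to a PyTorch model. The directory
# layout and file names are hypothetical; readability with nibabel is assumed.
import glob
import nibabel as nib
import numpy as np
import torch
from torch.utils.data import Dataset

class UltrasoundVolumes(Dataset):
    """Serves intensity-normalized iUS volumes from a folder of NIfTI files."""

    def __init__(self, data_dir):
        self.paths = sorted(glob.glob(f"{data_dir}/*.nii.gz"))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        volume = nib.load(self.paths[idx]).get_fdata(dtype=np.float32)
        volume = (volume - volume.mean()) / (volume.std() + 1e-8)
        return torch.from_numpy(volume).unsqueeze(0)  # add a channel axis

# Usage with a hypothetical path:
#   loader = torch.utils.data.DataLoader(UltrasoundVolumes("data/ius"), batch_size=1)
```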

    Patient-Specific Implants in Musculoskeletal (Orthopedic) Surgery

    Most treatments in medicine are patient specific, aren’t they? So why should we bother with individualizing implants if we adapt our therapy to patients anyway? Looking at the neighboring field of oncologic treatment, you would not question the fact that the individualization of tumor therapy with personalized antibodies has made that field thrive, in terms of both patient survival and positive responses to alternatives to conventional treatments. Given the latest cutting-edge developments in orthopedic surgery and biotechnology, including new imaging techniques and the 3D printing of bone substitutes and implants, we have an armamentarium available to stimulate the race for innovation in medicine. This Special Issue of the Journal of Personalized Medicine gathers relevant new and maturing techniques already in clinical practice. Examples include developments in revision arthroplasty and tumor (pelvic replacement) surgery to recreate individual defects, individualized implants for primary arthroplasty to establish physiological joint kinematics, and personalized implants in fracture treatment, to name but a few.

    Trustworthy and Intelligent COVID-19 Diagnostic IoMT through XR and Deep-Learning-Based Clinic Data Access

    This article presents a novel extended reality (XR) and deep-learning-based Internet of Medical Things (IoMT) solution for COVID-19 telemedicine diagnostics, which systematically combines virtual/augmented reality (VR/AR) remote surgical plan/rehearse hardware, customized 5G cloud computing, and deep learning algorithms to provide real-time clues for COVID-19 treatment schemes. Compared to existing perception therapy techniques, the new technique significantly improves performance and security. The system collected 25 types of clinical data from 347 COVID-19-positive and 2,270 COVID-19-negative patients in the Red Zone via 5G transmission. A novel auxiliary classifier generative adversarial network (ACGAN)-based intelligent prediction algorithm was then used to train the COVID-19 prediction model. Furthermore, the Copycat network was employed for model stealing and attacks on the IoMT to improve its security performance. To simplify the user interface and achieve an excellent user experience, the Red Zone’s guiding images were combined with the Green Zone’s view through AR navigation clues delivered over 5G. The XR surgical plan/rehearse framework was designed to include all requisite COVID-19 surgical details, with a guaranteed real-time response. The accuracy, recall, F1-score, and area under the ROC curve (AUC) of the new IoMT were 0.92, 0.98, 0.95, and 0.98, respectively, outperforming existing perception techniques by a clear margin in accuracy. Model stealing also performed well, with the Copycat AUC of 0.90 only slightly lower than that of the original model. This study suggests a new framework for COVID-19 diagnostic integration and opens new research on integrating XR and deep learning for IoMT implementation.
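
    The distinguishing feature of an ACGAN is a discriminator with two heads: one judging real versus fake and one predicting the class label. The PyTorch sketch below shows that structure at its smallest, for a 25-feature clinical record classified as COVID-19 positive or negative; the layer sizes and feature dimension are illustrative assumptions, not the paper’s architecture.

```python
# Smallest useful picture of an ACGAN: a class-conditioned generator plus a
# discriminator with two heads, one judging real vs. fake and one predicting
# the class (COVID-19 positive/negative) from a 25-feature clinical record.
# Layer sizes and the feature dimension are assumptions, not the paper's.
import torch
import torch.nn as nn

NUM_FEATURES, NUM_CLASSES, LATENT_DIM = 25, 2, 16

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, LATENT_DIM)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_FEATURES),
        )

    def forward(self, noise, labels):
        return self.net(noise * self.embed(labels))  # class-conditioned noise

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(NUM_FEATURES, 64), nn.LeakyReLU(0.2))
        self.real_fake = nn.Linear(64, 1)           # adversarial head
        self.classify = nn.Linear(64, NUM_CLASSES)  # auxiliary class head

    def forward(self, x):
        h = self.trunk(x)
        return torch.sigmoid(self.real_fake(h)), self.classify(h)

# Shape check with a batch of 8 synthetic records.
z = torch.randn(8, LATENT_DIM)
labels = torch.randint(0, NUM_CLASSES, (8,))
validity, class_logits = Discriminator()(Generator()(z, labels))
```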

    Development and Validation of a Hybrid Virtual/Physical Nuss Procedure Surgical Trainer

    With the continuous advancement and adoption of minimally invasive surgery, proficiency with the nontrivial surgical skills involved is becoming a greater concern. Consequently, surgical simulation has been increasingly embraced for training and skill transfer. Some systems provide haptic feedback within a high-fidelity, anatomically correct virtual environment, whereas others use manikins, synthetic components, or box trainers to mimic the primary components of a corresponding procedure. For some minimally invasive procedures, however, surgical simulation development is still suboptimal or otherwise embryonic. This is true for the Nuss procedure, a minimally invasive surgery for correcting pectus excavatum (PE), a congenital chest wall deformity. This work aims to address this gap by exploring the challenges of developing both a purely virtual and a purely physical simulation platform for the Nuss procedure and their implications in a training context. It then describes the development of a hybrid mixed-reality system that integrates virtual and physical constituents, together with an augmented haptic interface, to reproduce the primary steps of the Nuss procedure and satisfy clinically relevant prerequisites for its training platform. Finally, this work presents a user study investigating the system’s face, content, and construct validity to establish its faithfulness as a training platform.
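
    A minimal way to picture the haptic side of such a trainer is penalty-based force rendering: when the virtual instrument penetrates tissue, push back along the surface normal with a spring force proportional to penetration depth. The sketch below implements that textbook model; the stiffness value and plane geometry are illustrative assumptions, not the trainer’s actual haptic model.

```python
# Penalty-based haptic force rendering: when the virtual instrument
# penetrates a tissue plane, push back along the surface normal with a
# spring force proportional to penetration depth (Hooke's law). The
# stiffness and plane geometry are illustrative assumptions.
import numpy as np

def penalty_force(tool_pos_m, surface_point_m, surface_normal, stiffness=300.0):
    """Return the feedback force in newtons; zero when there is no contact."""
    depth = np.dot(surface_point_m - tool_pos_m, surface_normal)
    if depth <= 0.0:  # tool tip is outside the tissue
        return np.zeros(3)
    return stiffness * depth * surface_normal  # F = k * depth along the normal

# Tool tip 2 mm below a horizontal tissue plane at z = 0 (normal points up).
force = penalty_force(np.array([0.0, 0.0, -0.002]),
                      np.array([0.0, 0.0, 0.0]),
                      np.array([0.0, 0.0, 1.0]))
print(force)  # [0, 0, 0.6] N, pushing the tool back out of the tissue
```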