    Augmented Reality in Kidney Cancer

    Augmented reality (AR) is the concept of a digitally created perception that enhances components of the real world to allow better engagement with it. Within healthcare, there has been a recent expansion of AR solutions, especially in the field of surgery. Traditional renal cancer surgery has largely been replaced by minimally invasive laparoscopic (or robotic) partial nephrectomy. This has meant the loss of certain intra-operative experiences, such as haptic feedback, which AR can help compensate for with enhanced visual and patient-specific feedback. The kidney is a dynamic organ, and current AR development has revolved around specific surgical stages such as safe arterial clamping and perfecting tumour margins. This chapter discusses the current state of AR technology in these areas, with particular attention to image registration, organ tracking, tissue deformation and live imaging. The chapter then discusses limitations of AR, such as inattentional blindness and depth perception, and offers potential future ideas and solutions, including AR headsets and 3D-printed renal models (with the possibility of remote surgical intervention). AR points to a very positive outcome for the future of truly minimally invasive renal surgery. However, current AR needs validation, cost evaluation and thorough planning before it can be safely integrated into everyday surgical practice.

    Role and Utility of Mixed Reality Technology in Laparoscopic Partial Nephrectomy: Outcomes of a Prospective RCT Using an Indigenously Developed Software

    OBJECTIVE: To develop software for mixed reality (MR) anatomical model creation and to study its intraoperative clinical utility in facilitating laparoscopic partial nephrectomy (LPN). MATERIALS AND METHODS: After institutional review board approval, 47 patients scheduled for LPN were prospectively randomized into two groups: the control group (24 patients) underwent the operation with intraoperative ultrasound (US) control, and the experimental group (23 patients) with HoloLens 2 smart glasses (Microsoft, Seattle, WA, USA). Our team developed an open-source software package called “HLOIA,” which allowed the creation and intraoperative use of an MR anatomical model of the kidney with its vascular pedicle and tumor. The study period extended from June 2020 to February 2021, during which demographic, perioperative, and pathological data were collected for all qualifying patients. The objective was to assess the utility of the MR model during LPN through a 5-point Likert scale questionnaire completed by the surgeon immediately after LPN. Patient characteristics were compared using the chi-square test for categorical variables and Student's t-test or the Mann–Whitney test for continuous variables. RESULTS: Comparison of the variables between the groups revealed statistically significant differences only in the time for renal pedicle exposure and the time from renal pedicle exposure to detection of the tumor localization (p < 0.001), both in favor of the experimental group. The surgeon's impression of the utility of the MR model, assessed by the proposed questionnaire, showed high scores on all statements. CONCLUSIONS: The developed open-source software “HLOIA” allowed the operating urologist to create the mixed reality anatomical model, which, when used with smart glasses, improved the time for renal pedicle exposure and the time for renal tumor identification without compromising safety.
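    The between-group comparisons described in this abstract (chi-square for categorical variables, Student's t-test or Mann–Whitney for continuous ones) can be sketched with standard `scipy.stats` calls. The numbers below are hypothetical placeholders, not the trial's data:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical 2x2 contingency table for a categorical outcome
    # (rows = control / MR group, cols = event / no event).
    table = np.array([[4, 20],
                      [3, 20]])
    chi2, p_cat, dof, _ = stats.chi2_contingency(table)

    # Hypothetical continuous variable (e.g. renal pedicle exposure time, minutes).
    control = np.array([18.2, 21.5, 19.9, 22.0, 20.3, 17.8])
    mr_group = np.array([12.1, 13.4, 11.8, 14.0, 12.7, 13.1])

    # Student's t-test if normality holds, Mann-Whitney U test otherwise.
    t_stat, p_t = stats.ttest_ind(control, mr_group)
    u_stat, p_u = stats.mannwhitneyu(control, mr_group)

    print(f"chi-square p={p_cat:.3f}, t-test p={p_t:.5f}, Mann-Whitney p={p_u:.4f}")
    ```

    In practice, the choice between the t-test and Mann–Whitney would be made per variable after checking distributional assumptions, as the abstract implies.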

    Image-Fusion for Biopsy, Intervention, and Surgical Navigation in Urology


    The Challenge of Augmented Reality in Surgery

    Imaging has revolutionized surgery over the last 50 years. Diagnostic imaging is a key tool in the decision to perform surgery during disease management; intraoperative imaging is one of the primary drivers of minimally invasive surgery (MIS); and postoperative imaging enables effective follow-up and patient monitoring. Notably, however, there is still relatively little interchange of information or imaging-modality fusion between these different stages of the clinical pathway. This book chapter provides a critique of existing augmented reality (AR) methods and application studies described in the literature, using relevant examples. The aim is not to provide a comprehensive review, but rather to indicate the clinical areas in which AR has been proposed, to begin to explain the lack of clinical systems, and to provide some clear guidelines to those intending to pursue research in this area.

    Surgical Strategy for the Management of Renal Cell Carcinoma with Inferior Vena Cava Tumor Thrombus

    The hallmark of renal cell carcinoma is its biological tendency to invade the renal vein and/or inferior vena cava (IVC), which occurs in 4–10% of patients. Radical nephrectomy (RN) with tumor thrombectomy is the standard approach for treating such challenging cases. Besides tumor thrombus height, several factors can determine the surgical strategy, including the effect of targeted molecular therapy (TMT), invasion of the IVC wall, venous occlusion, establishment of collateral circulation, IVC thromboembolism, and primary tumor location. The surgical strategy for patients with retrohepatic vena cava tumor thrombi depends on the upper extent of the tumor thrombus; in addition, the first porta hepatis and the hepatic veins are important anatomical boundaries. Based on previous studies, the effect of pre-surgical TMT is limited. The safety of IVC venography, an imaging modality that can show congestion around the tumor thrombus and the collateral circulation, has considerably improved. IVC interruption plays an important role in tumor thrombectomy for patients with invasion of the venous walls, complete occlusion of the vena cava, or the presence of a distal thrombus. Further retrospective and prospective studies are needed to provide a more powerful reference and basis for clinical practice.

    Advancements in Laparoscopic Partial Nephrectomy: Expanding the Feasibility of Nephron-Sparing

    Partial nephrectomy (PN) offers oncologic outcomes equivalent to those of radical nephrectomy (RN), with greater preservation of renal function and lower risk of chronic kidney disease and cardiovascular disease. Laparoscopic PN (LPN) remains underutilized, likely because it is a technically challenging operation with higher rates of perioperative complications than open PN and laparoscopic RN. A review of the latest PN literature demonstrates that recent advancements in laparoscopic approaches, imaging modalities, ischemia-mitigating strategies, renorrhaphy techniques, and hemostatic agents will likely allow greater utilization of LPN and expand its use to increasingly complex tumors.

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both to enhance the surgeon's navigation capabilities by seeing beyond exposed tissue surfaces and to provide intelligent control of robot-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
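    For a calibrated, rectified stereo laparoscope, the simplest passive reconstruction maps pixel disparity to depth via Z = f·B/d and back-projects each pixel through the pinhole model. The sketch below illustrates only this core triangulation step, with made-up camera parameters; it does not reproduce any particular method from the review:

    ```python
    import numpy as np

    def disparity_to_points(disparity, f_px, baseline_mm, cx, cy):
        """Back-project a rectified stereo disparity map to 3D points (mm).

        disparity   : HxW array of left-right pixel disparities (d > 0)
        f_px        : focal length in pixels (assuming fx == fy)
        baseline_mm : stereo baseline in millimetres
        cx, cy      : principal point of the left camera (pixels)
        """
        h, w = disparity.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Depth from disparity: Z = f * B / d; invalid (d <= 0) pixels get Z = 0.
        z = np.where(disparity > 0,
                     f_px * baseline_mm / np.maximum(disparity, 1e-6), 0.0)
        # Pinhole back-projection of each pixel to camera coordinates.
        x = (u - cx) * z / f_px
        y = (v - cy) * z / f_px
        return np.stack([x, y, z], axis=-1)

    # Hypothetical laparoscope: 4 mm baseline, 700 px focal length.
    disp = np.full((4, 4), 14.0)   # uniform 14 px disparity
    pts = disparity_to_points(disp, f_px=700.0, baseline_mm=4.0, cx=2.0, cy=2.0)
    print(pts[0, 0])               # depth = 700 * 4 / 14 = 200 mm
    ```

    Real systems add sub-pixel disparity estimation, outlier rejection, and temporal tracking to cope with the deforming, specular tissue surfaces the paper discusses.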

    A deep learning framework for real-time 3D model registration in robot-assisted laparoscopic surgery

    Introduction: This study presents a deep learning framework to determine, in real time, the position and rotation of a target organ from an endoscopic video. These inferred data are used to overlay the 3D model of the patient's organ on its real counterpart, and the resulting augmented video stream is sent back to the surgeon as support during robot-assisted laparoscopic procedures. Methods: The framework first performs semantic segmentation; two techniques, based on convolutional neural networks and motion analysis, are then used to infer the rotation. Results: The segmentation shows high accuracy, with a mean IoU score greater than 80% in all tests. Rotation performance varies with the surgical procedure. Discussion: Although the methodology's precision varies across testing scenarios, this work is a first step towards the adoption of deep learning and augmented reality to generalise the automatic registration process.
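    The mean IoU (intersection-over-union, or Jaccard index) score used here to report segmentation accuracy compares a predicted binary mask against the ground truth. A minimal sketch with toy masks, not the study's data:

    ```python
    import numpy as np

    def iou(pred, gt):
        """Intersection-over-Union between two binary masks."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return inter / union if union else 1.0  # two empty masks match perfectly

    # Toy masks standing in for an organ segmentation and its ground truth.
    gt = np.zeros((8, 8), dtype=bool)
    gt[2:6, 2:6] = True                  # 16-pixel square
    pred = np.zeros((8, 8), dtype=bool)
    pred[3:7, 2:6] = True                # same square shifted down one row

    print(f"IoU = {iou(pred, gt):.3f}")  # intersection 12, union 20 -> 0.600
    ```

    A "mean IoU greater than 80%" then corresponds to averaging this per-frame (or per-class) score over the test set, with each frame scoring well above the 0.6 of this deliberately misaligned example.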