
    Intraoperative Navigation Systems for Image-Guided Surgery

    Recent technological advancements in medical imaging equipment have dramatically improved image accuracy, now providing useful information previously unavailable to clinicians. In the surgical context, intraoperative imaging is of crucial value to the success of the operation. Many nontrivial scientific and technical problems need to be addressed in order to efficiently exploit the different information sources available in advanced operating rooms today. In particular, it is necessary to provide: (i) accurate tracking of surgical instruments, (ii) real-time matching of images from different modalities, and (iii) reliable guidance toward the surgical target. All of these requisites must be satisfied to realize effective intraoperative navigation systems for image-guided surgery. Various solutions have been proposed and successfully tested in the field of image navigation systems over the last ten years; nevertheless, several problems regarding the precision, usability and capabilities of existing systems still arise in most applications. Identifying and solving these issues represents an urgent scientific challenge. This thesis investigates the current state of the art in the field of intraoperative navigation systems, focusing in particular on the challenges related to efficient and effective use of ultrasound imaging during surgery. The main contributions of this thesis to the state of the art are: (i) techniques for automatic motion compensation and therapy monitoring applied to a novel ultrasound-guided surgical robotic platform in the context of abdominal tumor thermoablation; and (ii) novel image-fusion-based navigation systems for ultrasound-guided neurosurgery in the context of brain tumor resection, highlighting their applicability as off-line surgical training instruments.
The proposed systems, which were designed and developed in the framework of two international research projects, have been tested in real or simulated surgical scenarios, showing promising results toward their application in clinical practice.
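
At their core, instrument tracking and multimodal image matching both reduce to estimating a rigid transform between corresponding 3D point sets. The following is an illustrative sketch of a least-squares rigid registration in the Arun/Kabsch style, not the thesis's actual pipeline; all names and values are made up:

```python
import numpy as np

def rigid_register(fixed, moving):
    """Least-squares rigid transform (R, t) with fixed ~ R @ moving + t,
    computed via SVD of the cross-covariance (Arun/Kabsch method).
    Both inputs are (N, 3) arrays of corresponding landmarks."""
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = cf - R @ cm
    return R, t

# Synthetic check: recover a known rotation and translation exactly.
rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 3))
theta = np.pi / 5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -2.0, 4.0])
R, t = rigid_register(pts @ R_true.T + t_true, pts)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

The same least-squares fit underlies both tracked-tool calibration and image-to-patient registration; real systems add outlier rejection and per-landmark weighting on top of it.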

    Changing the paradigm: endoscopic video analysis and information extraction for safer surgeries

    Background: Minimally invasive surgery creates two technological opportunities: (1) the development of better training and objective evaluation environments, and (2) the creation of image-guided surgical systems.

    Framework for a low-cost intra-operative image-guided neuronavigator including brain shift compensation

    In this paper we present a methodology to address the problem of brain tissue deformation referred to as 'brain shift'. This deformation occurs throughout a neurosurgical intervention and strongly degrades the accuracy of the neuronavigation systems used to date in clinical routine, which rely solely on pre-operative patient imaging to locate the surgical target, such as a tumour or a functional area. After a general description of the framework of our intra-operative image-guided system, we describe a procedure to generate patient-specific finite element meshes of the brain and propose a biomechanical model which can take into account tissue deformations and surgical procedures that modify the brain structure, such as tumour or tissue resection.
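
    The biomechanical model above carries the computed deformation on a patient-specific finite element mesh; inside each linear tetrahedral element, the displacement of an arbitrary point is obtained by barycentrically interpolating the nodal displacements. A minimal illustrative sketch of that interpolation step (not the authors' code; geometry and values are made up):

```python
import numpy as np

def warp_point(p, tet, node_disp):
    """Deform point p lying inside tetrahedron 'tet' (4x3 vertices) by
    linearly interpolating the nodal displacements 'node_disp' (4x3),
    exactly as a linear finite element does."""
    # Barycentric coordinates: solve [v1-v0, v2-v0, v3-v0] w = p - v0
    T = (tet[1:] - tet[0]).T
    w = np.linalg.solve(T, p - tet[0])
    bary = np.concatenate(([1.0 - w.sum()], w))
    return p + bary @ node_disp

tet = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
disp = np.array([[0., 0., -0.1]] * 4)   # uniform sag, e.g. under gravity
print(warp_point(np.array([0.25, 0.25, 0.25]), tet, disp))  # → [0.25 0.25 0.15]
```

In a full brain-shift pipeline the nodal displacements come from solving the biomechanical (e.g. linear elastic) system under intra-operative boundary conditions; the interpolation shown here is what then warps the pre-operative images and targets.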

    Evaluation of a portable image overlay projector for the visualisation of surgical navigation data: phantom studies

    Introduction: Presenting visual feedback for image-guided surgery on a monitor requires the surgeon to perform time-consuming comparisons and to divert sight and attention away from the patient. Deficiencies in previously developed augmented reality systems for image-guided surgery have, however, prevented the general acceptance of any one technique as a viable alternative to monitor displays. This work presents an evaluation of the feasibility and versatility of a novel augmented reality approach for the visualisation of surgical planning and navigation data. The approach, which utilises a portable image overlay device, was evaluated during integration into existing surgical navigation systems and during application within simulated navigated surgery scenarios. Methods: A range of anatomical models, surgical planning data and guidance information taken from liver surgery, cranio-maxillofacial surgery, orthopaedic surgery and biopsy were displayed on patient-specific phantoms, directly onto the patient's skin and onto cadaver tissue. The feasibility of employing the proposed augmented reality visualisation approach in each of the four tested clinical applications was qualitatively assessed for usability, visibility, workspace, line of sight and obtrusiveness. Results: The visualisation approach was found to assist in spatial understanding and reduced the need for sight diversion throughout the simulated surgical procedures. The approach enabled structures to be identified and targeted quickly and intuitively. All validated augmented reality scenes were easily visible and were implemented with minimal overhead. The device showed sufficient workspace for each of the presented applications, and the approach was minimally intrusive to the surgical scene.
Conclusion: The presented visualisation approach proved to be versatile and applicable to a range of image-guided surgery applications, overcoming many of the deficiencies of previously described AR approaches. The approach presents an initial step towards a widely accepted alternative to monitor displays for the visualisation of surgical navigation data.

    Image Guided Robotic Systems for Focal Ultrasound Based Surgical Applications


    Fluorescent image-guided surgery in breast cancer by intravenous application of a quenched fluorescence activity-based probe for cysteine cathepsins in a syngeneic mouse model

    PURPOSE: The reoperation rate for breast-conserving surgery is as high as 15-30% due to residual tumor remaining in the surgical cavity after surgery. In vivo tumor-targeted optical molecular imaging may serve as a red-flag technique to improve intraoperative surgical margin assessment and to reduce reoperation rates. Cysteine cathepsins are overexpressed in most solid tumor types, including breast cancer. We developed a cathepsin-targeted, quenched fluorescent activity-based probe, VGT-309, and evaluated whether it could be used for tumor detection and image-guided surgery in syngeneic tumor-bearing mice. METHODS: Binding specificity of the developed probe was evaluated in vitro. Next, fluorescent imaging in BALB/c mice bearing a murine breast tumor was performed at different time points after VGT-309 administration. Biodistribution of VGT-309 after 24 h in tumor-bearing mice was compared to control mice. Image-guided surgery was performed on tumors at multiple time points with different clinical fluorescent camera systems, followed by ex vivo analysis. RESULTS: The probe was specifically activated by cathepsins X, B/L, and S. Fluorescent imaging revealed an increased tumor-to-background contrast over time, up to 15.1 at 24 h post probe injection. In addition, VGT-309 delineated tumor tissue during image-guided surgery with different optical fluorescent imaging camera systems. CONCLUSION: These results indicate that optical fluorescent molecular imaging using the cathepsin-targeted probe, VGT-309, may improve intraoperative tumor detection, which could translate to more complete tumor resection when coupled with commercially available surgical tools and techniques.
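
    The tumor-to-background contrast figure reported above is, in essence, the mean fluorescence inside a tumor region of interest divided by the mean background signal. A minimal sketch of that ratio on made-up pixel values (the image and masks are illustrative, not study data):

```python
import numpy as np

def tumor_to_background(img, tumor_mask, bg_mask):
    """Mean fluorescence in the tumor ROI divided by mean background
    signal -- the contrast metric reported for activity-based probes."""
    return img[tumor_mask].mean() / img[bg_mask].mean()

# Toy 3x3 fluorescence image: bright tumor patch, dim background.
img = np.array([[100., 105., 10.],
                [ 98., 102., 11.],
                [  9.,  10., 12.]])
tumor = np.zeros_like(img, dtype=bool)
tumor[:2, :2] = True
bg = ~tumor
print(round(tumor_to_background(img, tumor, bg), 2))  # → 9.74
```

In practice the ROIs come from manual delineation or thresholding, and the ratio is tracked over time post-injection, as in the 15.1-at-24-h figure quoted above.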

    On uncertainty propagation in image-guided renal navigation: Exploring uncertainty reduction techniques through simulation and in vitro phantom evaluation

    Image-guided interventions (IGIs) entail the use of imaging to augment or replace direct vision during therapeutic interventions, with the overall goal of providing effective treatment in a less invasive manner, as an alternative to traditional open surgery, while reducing patient trauma and shortening post-procedure recovery time. IGIs rely on pre-operative images, surgical tracking and localization systems, and intra-operative images to provide correct views of the surgical scene. Pre-operative images are used to generate patient-specific anatomical models that are then registered to the patient using the surgical tracking system, and often complemented with real-time, intra-operative images. IGI systems are subject to uncertainty from several sources, including surgical instrument tracking / localization uncertainty, model-to-patient registration uncertainty, user-induced navigation uncertainty, as well as the uncertainty associated with the calibration of various surgical instruments and intra-operative imaging devices (e.g., a laparoscopic camera) instrumented with surgical tracking sensors. All of these uncertainties impact the overall targeting accuracy, which represents the error associated with the navigation of a surgical instrument to a specific target to be treated under image guidance provided by the IGI system. Therefore, understanding the overall uncertainty of an IGI system is paramount to the overall outcome of the intervention, as procedure success entails achieving certain accuracy tolerances specific to individual procedures. This work has focused on studying the navigation uncertainty, along with techniques to reduce uncertainty, for an IGI platform dedicated to image-guided renal interventions.
We constructed life-size replica patient-specific kidney models from pre-operative images using 3D printing and tissue-emulating materials, and conducted experiments to characterize the uncertainty of both optical and electromagnetic surgical tracking systems, the uncertainty associated with the virtual model-to-physical phantom registration, as well as the uncertainty associated with live augmented reality (AR) views of the surgical scene achieved by enhancing the pre-procedural model and tracked surgical instrument views with live video views acquired using a camera tracked in real time. To better understand the effects of the tracked instrument calibration, registration fiducial configuration, and tracked camera calibration on the overall navigation uncertainty, we conducted Monte Carlo simulations that enabled us to identify optimal configurations, which were subsequently validated experimentally using patient-specific phantoms in the laboratory. To mitigate the inherent accuracy limitations associated with the pre-procedural model-to-patient registration and their effect on the overall navigation, we also demonstrated the use of tracked video imaging to update the registration, enabling us to restore targeting accuracy to within its acceptable range. Lastly, we conducted several validation experiments using patient-specific kidney-emulating phantoms, with post-procedure CT imaging as reference ground truth, to assess the accuracy of AR-guided navigation in the context of in vitro renal interventions. This work helped answer key questions about uncertainty propagation in image-guided renal interventions and led to the development of key techniques and tools to help reduce and optimize the overall navigation / targeting uncertainty.
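
Monte Carlo studies of the kind described above typically work by perturbing the fiducial localizations with Gaussian noise, re-fitting the registration, and recording the error the perturbed transform induces at a target. The sketch below illustrates the idea, including the classic result that spreading fiducials more widely around the target lowers the expected target registration error (TRE); the geometry and noise values are made up, not taken from the study:

```python
import numpy as np

def rigid_fit(A, B):
    """Least-squares rigid transform (R, t) with B ~ R @ A + t (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def simulate_tre(fiducials, target, sigma_mm, trials=2000, seed=0):
    """Monte Carlo TRE estimate: perturb each fiducial with isotropic
    Gaussian localization noise of std sigma_mm, re-fit the rigid
    registration, and average the displacement induced at 'target'."""
    rng = np.random.default_rng(seed)
    errs = np.empty(trials)
    for i in range(trials):
        noisy = fiducials + rng.normal(scale=sigma_mm, size=fiducials.shape)
        R, t = rigid_fit(fiducials, noisy)
        errs[i] = np.linalg.norm(R @ target + t - target)
    return errs.mean()

# A target 50 mm away from the fiducial plane, with two fiducial spreads.
target = np.array([0.0, 0.0, 50.0])
tight = np.array([[10, 0, 0], [-10, 0, 0], [0, 10, 0], [0, -10, 0]], float)
wide = tight * 4
# Wider fiducial configurations yield a lower expected TRE at the target.
assert simulate_tre(wide, target, sigma_mm=1.0) < simulate_tre(tight, target, sigma_mm=1.0)
```

The same machinery extends naturally to the other uncertainty sources listed above (instrument calibration, camera calibration) by composing each perturbed transform in the navigation chain before measuring the target error.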

    Minimally invasive surgical video analysis: a powerful tool for surgical training and navigation

    Analysis of minimally invasive surgical videos is a powerful tool to drive new solutions for achieving reproducible training programs, objective and transparent assessment systems and navigation tools to assist surgeons and improve patient safety. This paper presents how video analysis contributes to the development of new cognitive and motor training and assessment programs as well as new paradigms for image-guided surgery

    Modular MRI Guided Device Development System: Development, Validation and Applications

    Since the first robotic surgical intervention was performed in 1985 using a PUMA industrial manipulator, development in the field of surgical robotics has been relatively fast paced, despite the tremendous costs involved in developing new robotic interventional devices. This is due to the clear advantages of augmenting a clinician's skill and dexterity with the precision and reliability of computer-controlled motion. A natural extension of robotic surgical intervention is the integration of image-guided interventions, which promises reduced trauma, procedure time and inaccuracy. Although magnetic resonance imaging (MRI) is one of the most effective imaging modalities for visualizing soft tissue structures within the body, MRI-guided surgical robotics has been frustrated by the high magnetic field in the MRI image space and the extreme sensitivity to electromagnetic interference. The primary contributions of this dissertation relate to enabling the use of direct, live MR imaging to guide and assist interventional procedures, in two focus areas: the creation of an integrated MRI-guided development platform and of a stereotactic neural intervention system. The integrated series of modules of the development platform represents a significant advancement in the practice of creating MRI-guided mechatronic devices, as well as in the understanding of design requirements for creating actuated devices that operate within a diagnostic MRI. This knowledge was gained through a systematic approach to understanding, isolating, characterizing, and circumventing the difficulties associated with developing MRI-guided interventional systems. These contributions have been validated at the levels of the individual modules, the total development system, and several deployed interventional devices. An overview of this work is presented with a summary of contributions and lessons learned along the way.