
    Simulation Approaches to X-ray C-Arm-based Interventions

    Mobile C-Arm systems have enabled interventional spine procedures, such as facet joint injections, to be performed minimally invasively under X-ray or fluoroscopy guidance. The downside of these procedures is the radiation exposure to which the patient and medical staff are subjected, which can vary greatly depending on the procedure as well as the skill and experience of the team. Standard training methods for these procedures involve the use of a physical C-Arm with real X-rays, training either on cadavers or through an apprenticeship-based program. Many guidance systems have been proposed in the literature that aim to reduce intraoperative radiation exposure by supplementing the X-ray images with digitally reconstructed radiographs (DRRs). These systems have shown promising results in the lab but have proven difficult to integrate into the clinical workflow due to costly equipment, safety protocols, and difficulties in maintaining patient registration. Another approach to reducing radiation exposure is to provide better hands-on training for C-Arm positioning through a pre-operative simulator. Such simulators have been proposed in the literature but still require access to a physical C-Arm or costly tracking equipment. With the goal of providing hands-on, accessible training for C-Arm positioning tasks, we have developed a miniature 3D-printed C-Arm simulator using accelerometer-based tracking. The system comprises a software application that interfaces with the accelerometers and provides a real-time DRR display based on the position of the C-Arm source. We conducted a user study, consisting of control and experimental groups, to evaluate the efficacy of the system as a training tool. The experimental group achieved significantly lower procedure time and higher positioning accuracy than the control group. The system was also evaluated positively for its use in medical education via a 5-point Likert scale questionnaire. C-Arm positioning is a highly visual, spatially demanding task, requiring the operator to map the 2D fluoroscopic image onto the 3D C-Arm and patient. Because it requires limited physical interaction, the task is well suited to training in Virtual Reality (VR), eliminating the need for a physical C-Arm. To this end, we extended the system presented in Chapter 2 to an entirely virtual approach. We implemented the system as a 3DSlicer module and conducted a pilot study for preliminary evaluation. The reception was overall positive, with users expressing enthusiasm towards training in VR, but also highlighting limitations and potential areas of improvement of the system.
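    As a rough, hypothetical sketch of the accelerometer-to-DRR pipeline described above (not the thesis implementation; the function names, angle conventions, and synthetic CT volume are assumptions made for illustration), the accelerometer's gravity vector can be mapped to C-Arm orbital and tilt angles, and a coarse parallel-projection DRR rendered by integrating the volume along the beam direction:

        # Illustrative sketch only; not the published system.
        import numpy as np
        from scipy.ndimage import rotate

        def accel_to_angles(ax, ay, az):
            """Convert raw accelerometer components (gravity vector) to
            orbital and tilt angles of the C-Arm, in degrees."""
            orbital = np.degrees(np.arctan2(ax, az))   # rotation around the patient axis
            tilt = np.degrees(np.arctan2(ay, az))      # cranial/caudal angulation
            return orbital, tilt

        def render_drr(ct_volume, orbital_deg, tilt_deg):
            """Very coarse DRR: rotate the CT volume to the C-Arm pose and
            integrate attenuation along the source-detector axis."""
            vol = rotate(ct_volume, orbital_deg, axes=(0, 2), reshape=False, order=1)
            vol = rotate(vol, tilt_deg, axes=(0, 1), reshape=False, order=1)
            drr = vol.sum(axis=0)                      # parallel-beam line integrals
            drr -= drr.min()
            return drr / max(drr.max(), 1e-6)          # normalise for display

        # Synthetic volume standing in for a patient CT.
        ct = np.random.rand(64, 64, 64).astype(np.float32)
        image = render_drr(ct, *accel_to_angles(0.1, -0.3, 0.95))

    A real implementation would use perspective ray casting from the source position and the calibrated geometry of the miniature C-Arm, but the same orientation-to-image mapping underlies the training display.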

    Intraoperative Navigation Systems for Image-Guided Surgery

    Recent technological advancements in medical imaging equipment have resulted in a dramatic improvement in image accuracy, now capable of providing useful information previously not available to clinicians. In the surgical context, intraoperative imaging is of crucial value for the success of the operation. Many nontrivial scientific and technical problems need to be addressed in order to efficiently exploit the different information sources available in advanced operating rooms today. In particular, it is necessary to provide: (i) accurate tracking of surgical instruments, (ii) real-time matching of images from different modalities, and (iii) reliable guidance toward the surgical target. Satisfying all of these requisites is needed to realize effective intraoperative navigation systems for image-guided surgery. Various solutions have been proposed and successfully tested in the field of image navigation systems over the last ten years; nevertheless, several problems still arise in most applications regarding the precision, usability, and capabilities of the existing systems. Identifying and solving these issues represents an urgent scientific challenge. This thesis investigates the current state of the art in the field of intraoperative navigation systems, focusing in particular on the challenges related to the efficient and effective use of ultrasound imaging during surgery. The main contributions of this thesis to the state of the art are: (i) techniques for automatic motion compensation and therapy monitoring applied to a novel ultrasound-guided surgical robotic platform in the context of abdominal tumor thermoablation; and (ii) novel image-fusion-based navigation systems for ultrasound-guided neurosurgery in the context of brain tumor resection, highlighting their applicability as off-line surgical training instruments. The proposed systems, which were designed and developed in the framework of two international research projects, have been tested in real or simulated surgical scenarios, showing promising results toward their application in clinical practice.
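    As a small, generic illustration of requisite (i), the point-based rigid registration below (the standard SVD method, not code from the thesis or its platforms) shows how tracked fiducials can be aligned with preoperative image coordinates:

        # Illustrative sketch: rigid point-based registration via SVD (Arun's method).
        import numpy as np

        def rigid_register(source_pts, target_pts):
            """Find rotation R and translation t minimising ||R @ source + t - target||."""
            src_c = source_pts - source_pts.mean(axis=0)
            tgt_c = target_pts - target_pts.mean(axis=0)
            U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                  # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = target_pts.mean(axis=0) - R @ source_pts.mean(axis=0)
            return R, t

        # Fiducial registration error on synthetic data.
        src = np.random.rand(6, 3)
        tgt = src + np.array([5.0, -2.0, 1.0])        # pure translation for the example
        R, t = rigid_register(src, tgt)
        fre = np.linalg.norm(src @ R.T + t - tgt, axis=1).mean()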

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as improved access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) whilst providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
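    As a purely illustrative aside on the tool-to-organ collision detection mentioned above (not drawn from any reviewed system; the spherical-tip approximation and safety margin are assumptions), a minimal clearance check against a sampled organ surface might look like this:

        # Illustrative sketch: proximity check between a spherical tool tip and an organ surface.
        import numpy as np

        def collision_risk(tip_pos, tip_radius, organ_points, margin=2.0):
            """Return (colliding, clearance) for a spherical tool tip against an
            organ surface sampled as an (N, 3) array of points, in millimetres."""
            distances = np.linalg.norm(organ_points - tip_pos, axis=1) - tip_radius
            clearance = float(distances.min())
            return clearance < margin, clearance

        organ = np.random.rand(1000, 3) * 100.0        # synthetic surface samples
        hit, clearance = collision_risk(np.array([50.0, 50.0, 50.0]), 3.0, organ)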

    Virtual reality surgery simulation: A survey on patient specific solution

    For surgeons, precise anatomical structure and its dynamics are important in surgical interaction, which is critical for generating an immersive experience in VR-based surgical training applications. At present, a standard therapeutic scheme may not be directly applicable to a specific patient, because diagnostic results are based on population averages, which yield only a rough solution. Patient-Specific Modeling (PSM), using patient-specific medical image data (e.g. CT, MRI, or ultrasound), can deliver a computational anatomical model. It offers surgeons the potential to practice the operative procedure for a particular patient, which can improve the accuracy of diagnosis and treatment, enhance the predictive ability of the VR simulation framework, and improve patient care. This paper presents a general review, based on the existing literature, of patient-specific surgical simulation covering data acquisition, medical image segmentation, computational mesh generation, and real-time soft-tissue simulation.
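    To make the final real-time soft-tissue simulation stage concrete, the sketch below performs one explicit-Euler step of a mass-spring model, the simplest deformable model used in such trainers; the constants and node layout are assumptions for illustration, not values from any surveyed system:

        # Illustrative sketch: one integration step of a mass-spring soft-tissue model.
        import numpy as np

        def mass_spring_step(pos, vel, springs, rest_len, k=50.0, mass=0.01,
                             damping=0.98, dt=1e-3, gravity=(0.0, -9.81, 0.0)):
            """Advance node positions/velocities one time step.
            springs: (M, 2) array of node index pairs; rest_len: (M,) rest lengths."""
            force = np.tile(np.asarray(gravity) * mass, (len(pos), 1))
            d = pos[springs[:, 1]] - pos[springs[:, 0]]
            length = np.linalg.norm(d, axis=1, keepdims=True)
            f = k * (length - rest_len[:, None]) * d / np.maximum(length, 1e-9)
            np.add.at(force, springs[:, 0], f)         # equal and opposite spring forces
            np.add.at(force, springs[:, 1], -f)
            vel = damping * (vel + dt * force / mass)
            return pos + dt * vel, vel

        # Two nodes joined by one spring, stretched beyond its rest length.
        pos = np.array([[0.0, 0.0, 0.0], [0.0, 1.2, 0.0]])
        vel = np.zeros_like(pos)
        pos, vel = mass_spring_step(pos, vel, np.array([[0, 1]]), np.array([1.0]))

    Patient-specific trainers typically replace this toy model with finite-element or position-based dynamics solvers built on the segmented, meshed anatomy described in the survey.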

    Radiological Society of North America (RSNA) 3D printing Special Interest Group (SIG): Guidelines for medical 3D printing and appropriateness for clinical scenarios

    This issue of the journal Cadernos de Estudos Sociais was being prepared when we were struck by the death of the sociologist Ernesto Laclau. His passing on 13 April 2014 took everyone by surprise, particularly the editor Joanildo Burity, who was his doctoral student at the University of Essex, England, and who had recently brought him to the Fundação Joaquim Nabuco for a lecture, allowing many to engage in dialogue with one of the great contemporary Latin American intellectuals. We therefore pay tribute to the Argentine sociologist by publishing a previously unpublished interview given during his visit to Recife in 2013, closing this issue with a special section on his career.

    The Art and Science of Abdominal Hernia

    The Art and Science of Abdominal Hernia encompasses a broad range of topics regarding abdominal wall hernia. Minute technical details are largely avoided, the main focus being on diagnosis and common surgical approaches. Chapters address clinical evaluation and diagnosis, different types of hernias including inguinal, femoral, and umbilical hernias, and the biological mechanisms of inguinal pain and its chronification.

    Augmented reality for computer assisted orthopaedic surgery

    In recent years, computer assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. The benefits of computer-assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, which has found improvements in clinical outcomes through increased control and precision over surgical actions. However, human-computer interaction in CAOS remains an evolving field, shaped by emerging display technologies including augmented reality (AR), a fused view of the real environment with virtual, computer-generated holograms. Interactions between clinicians and the patient-specific data generated during CAOS are limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. The work described in this thesis sought to explore the benefits of AR in CAOS through an integration between commercially available AR and CAOS systems, the creation of a novel AR-centric surgical workflow to support various tasks of computer-assisted knee arthroplasty, and three pre-clinical studies exploring the impact of the new AR workflow on both existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the (2D) user interface of an existing CAOS system onto a virtual AR screen and investigating any resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets with optical trackers and calculating a spatial transformation between the surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty. The reported workflow provided 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted cutting. Pre-clinical experimental validation on a commercial system (NAVIO®, Smith & Nephew) for these contributions demonstrates encouraging early-stage results, showing successful deployment of AR to CAOS systems and promising indications that AR can enhance the clinician's interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations, and future research opportunities.
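    As a hedged illustration of the coordinate-frame step described above (the calibration values here are invented placeholders, not results from the thesis), chaining homogeneous transforms lets a point reported in the optical tracker frame be rendered in the holographic frame:

        # Illustrative sketch: composing 4x4 homogeneous transforms between frames.
        import numpy as np

        def make_transform(R, t):
            """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
            T = np.eye(4)
            T[:3, :3] = R
            T[:3, 3] = t
            return T

        # Hypothetical calibration result: tracker frame expressed in the holographic frame.
        T_holo_tracker = make_transform(np.eye(3), np.array([0.10, -0.05, 0.30]))

        # A tracked tool tip reported by the optical tracker (metres, homogeneous coordinates).
        p_tracker = np.array([0.02, 0.00, 0.15, 1.0])

        # Position at which to render the corresponding hologram.
        p_holo = T_holo_tracker @ p_tracker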