
    Improving Radiotherapy Targeting for Cancer Treatment Through Space and Time

    Radiotherapy is a common medical treatment in which lethal doses of ionizing radiation are preferentially delivered to cancerous tumors. In external beam radiotherapy, radiation is delivered by a remote source that sits several feet from the patient's surface. Although great effort is taken to properly align the target with the path of the radiation beam, positional uncertainties and other errors can compromise targeting accuracy. Such errors can lead to a failure to treat the target and inflict significant toxicity on healthy tissues that are inadvertently exposed to high radiation doses. Tracking the movement of targeted anatomy between and during treatment fractions provides valuable localization information that allows these positional uncertainties to be reduced. Inter- and intra-fraction anatomical localization data not only allow for more accurate treatment setup, but also potentially allow for 1) retrospective treatment evaluation, 2) margin reduction and modification of the dose distribution to accommodate daily anatomical changes (called 'adaptive radiotherapy'), and 3) targeting interventions during treatment (for example, suspending radiation delivery while the target is outside the path of the beam). The research presented here investigates the use of inter- and intra-fraction localization technologies to improve radiotherapy through enhanced spatial and temporal accuracy. These technologies provide significant advancements in cancer treatment compared to standard clinical technologies. Furthermore, work is presented on the use of localization data acquired from these technologies in adaptive treatment planning, an investigational technique in which the distribution of planned dose is modified during the course of treatment based on biological and/or geometrical changes in the patient's anatomy.
The focus of this research is directed at abdominal sites, which have historically been central to the problem of motion management in radiation therapy.
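The gating intervention mentioned above, suspending delivery while the target is outside the beam path, can be illustrated with a minimal sketch. The window size, beam center, and breathing trace below are synthetic values chosen for illustration, not figures from the study:

```python
def beam_on(target_pos_mm, beam_center_mm=0.0, window_mm=3.0):
    """Deliver radiation only while the target lies within the gating window."""
    return abs(target_pos_mm - beam_center_mm) <= window_mm

# Hypothetical 1-D breathing trace of target positions (mm) over time.
trace = [0.5, 1.2, 2.9, 4.1, 5.0, 3.5, 1.0]
states = [beam_on(p) for p in trace]      # beam on/off decision per sample
duty_cycle = sum(states) / len(states)    # fraction of samples with the beam on
```

In a real system the trace would come from an intra-fraction localization device, and the duty cycle quantifies the delivery-time cost of gating.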

    Electromagnetic Tracking in Medicine - A Review of Technology, Validation, and Applications


    Personalized medicine in surgical treatment combining tracking systems, augmented reality and 3D printing

    International Mention in the doctoral degree. In the last twenty years, a new way of practicing medicine has focused on the problems and needs of each patient as an individual, thanks to significant advances in healthcare technology: so-called personalized medicine. In surgical treatments, personalization has been possible thanks to key technologies adapted to the specific anatomy of each patient and the needs of the physicians. Tracking systems, augmented reality (AR), three-dimensional (3D) printing, and artificial intelligence (AI) have previously supported this individualized medicine in many ways. However, their independent contributions show several limitations in terms of patient-to-image registration, lack of flexibility to adapt to the requirements of each case, long preoperative planning times, and navigation complexity. The main objective of this thesis is to increase patient personalization in surgical treatments by combining these technologies to bring surgical navigation to new complex cases: developing new patient registration methods, designing patient-specific tools, facilitating access to augmented reality for the medical community, and automating surgical workflows. In the first part of this dissertation, we present a novel framework for acral tumor resection combining intraoperative open-source navigation software, based on an optical tracking system, with desktop 3D printing. We used additive manufacturing to create a patient-specific mold that maintained the distal extremity in the same position during image-guided surgery as in the preoperative images. The feasibility of the proposed workflow was evaluated in two clinical cases (soft-tissue sarcomas in the hand and foot). We achieved an overall system accuracy of 1.88 mm, evaluated on the patient-specific 3D printed phantoms. Surgical navigation was feasible during both surgeries, allowing surgeons to verify the tumor resection margin.
Then, we propose an augmented reality navigation system that uses 3D printed surgical guides with a tracking pattern, enabling automatic patient-to-image registration in orthopedic oncology. This specific tool fits on the patient in only one pre-designed location, in this case on bone tissue. The solution has been developed as a software application running on Microsoft HoloLens. The workflow was validated on a 3D printed phantom replicating the anatomy of a patient presenting an extraosseous Ewing's sarcoma, and then tested during the actual surgical intervention. The results showed that the surgical guide with the reference marker can be placed precisely, with an accuracy of 2 mm and a visualization error lower than 3 mm. The application allowed physicians to visualize the skin, bone, tumor, and medical images overlaid on the phantom and patient. To enable the use of AR and 3D printing by inexperienced users without broad technical knowledge, we designed a step-by-step methodology. The proposed protocol describes how to develop an AR smartphone application that superimposes any patient-based 3D model onto a real-world environment using a 3D printed marker tracked by the smartphone camera. Our solution brings AR closer to the final clinical user, combining free and open-source software with an open-access protocol. The proposed guide is already helping to accelerate the adoption of these technologies by medical professionals and researchers. In the next section of the thesis, we show the benefits of combining these technologies during different stages of the surgical workflow in orthopedic oncology. We designed a novel AR-based smartphone application that can display the patient's anatomy and the tumor's location. A 3D printed reference marker, designed to fit in a unique position on the affected bone tissue, enables automatic registration.
The system has been evaluated in terms of visualization accuracy and usability during the whole surgical workflow on six realistic phantoms, achieving a visualization error below 3 mm. The AR system was tested in two clinical cases during surgical planning, patient communication, and surgical intervention. These results and the positive feedback obtained from surgeons and patients suggest that the combination of AR and 3D printing can improve efficacy, accuracy, and patients' experience. In the final section, two surgical navigation systems have been developed and evaluated to guide electrode placement in sacral neurostimulation (SNS) procedures, based on optical tracking and augmented reality. Our results show that both systems could minimize patient discomfort and improve surgical outcomes by reducing needle insertion time and the number of punctures. Additionally, we proposed a feasible clinical workflow for guiding SNS interventions with both navigation methodologies, including automatically creating sacral virtual 3D models for trajectory definition using artificial intelligence, and intraoperative patient-to-image registration. To conclude, in this thesis we have demonstrated that the combination of technologies such as tracking systems, augmented reality, 3D printing, and artificial intelligence overcomes many current limitations in surgical treatments. Our results encourage the medical community to combine these technologies to improve surgical workflows and outcomes in more clinical scenarios.
Doctoral Programme in Biomedical Science and Technology, Universidad Carlos III de Madrid. Chair: María Jesús Ledesma Carbayo. Secretary: María Arrate Muñoz Barrutia. Member: Csaba Pinte
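The automatic patient-to-image registration enabled by the fiducial marker ultimately reduces to computing a rigid transform from corresponding point sets. A common way to do this, shown here as a generic sketch rather than the thesis implementation, is the SVD-based least-squares (Kabsch) solution; the fiducial data below are synthetic:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: transform known fiducials, then recover the pose.
rng = np.random.default_rng(0)
src = rng.normal(size=(4, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([10.0, -5.0, 2.0])
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)
fre = np.linalg.norm(src @ R.T + t - dst, axis=1).mean()  # fiducial registration error
```

With noise-free correspondences the recovered pose is exact; in practice the residual (fiducial registration error) is reported alongside the clinically relevant target registration error.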

    Respiratory organ motion in interventional MRI: tracking, guiding and modeling

    Respiratory organ motion is one of the major challenges in interventional MRI, particularly in interventions with therapeutic ultrasound in the abdominal region. High-intensity focused ultrasound has found application in interventional MRI for the noninvasive treatment of different abnormalities. In order to guide surgical and treatment interventions, organ motion imaging and modeling are commonly required before treatment begins. Accurate tracking of organ motion during various interventional MRI procedures is a prerequisite for a successful outcome and safe therapy. In this thesis, an attempt has been made to develop focused ultrasound approaches that could be used clinically in the future for the treatment of abdominal organs such as the liver and the kidney. Two distinct methods are presented, together with their ex vivo and in vivo treatment results. In the first method, an MR-based pencil-beam navigator is used to track organ motion and provide the motion information for acoustic focal point steering, while in the second approach hybrid imaging, combining both ultrasound and magnetic resonance imaging, is used for advanced guidance capabilities. Organ motion modeling and four-dimensional imaging of organ motion are increasingly required before surgical interventions. However, due to current safety limitations and hardware restrictions, MR acquisition of a time-resolved sequence of volumetric images is not possible with both high temporal and high spatial resolution. A novel multislice acquisition scheme, based on a two-dimensional navigator instead of the commonly used pencil-beam navigator, was devised to acquire the data slices and the corresponding navigator simultaneously using the CAIPIRINHA parallel imaging method. The acquisition duration for four-dimensional dataset sampling is reduced compared to existing approaches, while image contrast and quality are improved.
Tracking respiratory organ motion is required in interventional procedures and during MR imaging of moving organs. An MR-based navigator is commonly used; however, it is usually associated with image artifacts such as signal voids. Spectrally selective navigators are useful in cases where the imaged organ is surrounded by adipose tissue, because they can provide an indirect measure of organ motion. A novel spectrally selective navigator based on a crossed-pair navigator has been developed. Experiments show the advantages of applying this novel navigator to volumetric imaging of the liver in vivo, where it was used to gate the gradient-recalled echo sequence.
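Navigator-based tracking generally maps a surrogate navigator signal to the organ displacement of interest. A minimal illustration, assuming a simple linear correlation model fitted by least squares; the paired measurements are synthetic, not data from this thesis:

```python
import numpy as np

# Hypothetical paired samples: navigator displacement (mm) vs. liver shift (mm).
nav = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
organ = np.array([0.1, 1.9, 4.2, 5.8, 8.1])

# Fit organ_shift ≈ a * nav + b, then predict the shift for a new navigator reading.
a, b = np.polyfit(nav, organ, 1)
predicted = a * 5.0 + b   # estimated organ shift for a 5 mm navigator reading
```

The fitted model can then drive slice repositioning or gating decisions during acquisition; richer models (e.g., with hysteresis) are used when a single linear term is insufficient.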

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), most devices remain focused on improving end-effector dexterity and precision and on improving access to minimally invasive surgery. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.

    Magnetic-Visual Sensor Fusion-based Dense 3D Reconstruction and Localization for Endoscopic Capsule Robots

    Reliable, real-time 3D reconstruction and localization is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots, an emerging, minimally invasive diagnostic and therapeutic technology for the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications, which combines magnetic and vision-based localization with non-rigid-deformation-based frame-to-model map fusion. The performance of the proposed method is demonstrated using four different ex vivo porcine stomach models. Across trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction error ranged from 1.58 to 2.17 cm.
Comment: submitted to IROS 201
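The reported metric, root mean square surface reconstruction error, can be computed from nearest-neighbour distances between reconstructed and ground-truth surface points. A brute-force sketch with synthetic points (not the authors' evaluation code; real surfaces would use a spatial index rather than an all-pairs distance matrix):

```python
import numpy as np

def rms_surface_error(reconstructed, ground_truth):
    """RMS of nearest-neighbour distances from reconstructed to ground-truth points."""
    # All-pairs Euclidean distances, shape (n_recon, n_truth).
    d = np.linalg.norm(reconstructed[:, None, :] - ground_truth[None, :, :], axis=2)
    nearest = d.min(axis=1)                    # closest ground-truth point per sample
    return float(np.sqrt((nearest ** 2).mean()))

# Tiny synthetic example: one point matches exactly, one is off by 0.1.
recon = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.0]])
truth = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
err = rms_surface_error(recon, truth)
```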

    On uncertainty propagation in image-guided renal navigation: Exploring uncertainty reduction techniques through simulation and in vitro phantom evaluation

    Image-guided interventions (IGIs) entail the use of imaging to augment or replace direct vision during therapeutic interventions, with the overall goal of providing effective treatment in a less invasive manner, as an alternative to traditional open surgery, while reducing patient trauma and shortening post-procedure recovery time. IGIs rely on pre-operative images, surgical tracking and localization systems, and intra-operative images to provide correct views of the surgical scene. Pre-operative images are used to generate patient-specific anatomical models that are then registered to the patient using the surgical tracking system, and often complemented with real-time, intra-operative images. IGI systems are subject to uncertainty from several sources, including surgical instrument tracking / localization uncertainty, model-to-patient registration uncertainty, user-induced navigation uncertainty, as well as the uncertainty associated with the calibration of various surgical instruments and intra-operative imaging devices (e.g., a laparoscopic camera) instrumented with surgical tracking sensors. All of these uncertainties impact the overall targeting accuracy, which represents the error associated with navigating a surgical instrument to a specific target to be treated under image guidance provided by the IGI system. Therefore, understanding the overall uncertainty of an IGI system is paramount to the outcome of the intervention, as procedure success entails achieving accuracy tolerances specific to individual procedures. This work has focused on studying navigation uncertainty, along with techniques to reduce it, for an IGI platform dedicated to image-guided renal interventions.
We constructed life-size, patient-specific replica kidney models from pre-operative images using 3D printing and tissue-emulating materials, and conducted experiments to characterize the uncertainty of both optical and electromagnetic surgical tracking systems, the uncertainty associated with the virtual model-to-physical phantom registration, as well as the uncertainty associated with live augmented reality (AR) views of the surgical scene achieved by enhancing the pre-procedural model and tracked surgical instrument views with live video views acquired using a camera tracked in real time. To better understand the effects of tracked instrument calibration, registration fiducial configuration, and tracked camera calibration on the overall navigation uncertainty, we conducted Monte Carlo simulations that enabled us to identify optimal configurations, which were subsequently validated experimentally using patient-specific phantoms in the laboratory. To mitigate the inherent accuracy limitations associated with the pre-procedural model-to-patient registration and their effect on the overall navigation, we also demonstrated the use of tracked video imaging to update the registration, enabling us to restore targeting accuracy to within its acceptable range. Lastly, we conducted several validation experiments using patient-specific kidney-emulating phantoms, with post-procedure CT imaging as the ground-truth reference, to assess the accuracy of AR-guided navigation in the context of in vitro renal interventions. This work helped answer key questions about uncertainty propagation in image-guided renal interventions and led to the development of key techniques and tools that help reduce the overall navigation / targeting uncertainty.
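The Monte Carlo approach described above can be sketched generically: perturb fiducial localizations with Gaussian noise, re-solve the rigid registration, and record the displacement induced at a target point (the target registration error, TRE). The fiducial layout, noise level, and target below are hypothetical, not the configurations studied in this work:

```python
import numpy as np

def fit_rigid(src, dst):
    # Least-squares rigid transform (Kabsch): maps src onto dst.
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) @ U.T
    return R, cd - R @ cs

def simulate_tre(fiducials, target, sigma_mm=0.5, n_trials=2000, seed=0):
    # Perturb fiducial localizations with isotropic Gaussian noise,
    # re-register, and record the resulting displacement at the target.
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_trials):
        noisy = fiducials + rng.normal(scale=sigma_mm, size=fiducials.shape)
        R, t = fit_rigid(fiducials, noisy)
        errs.append(np.linalg.norm(R @ target + t - target))
    return float(np.mean(errs))

# Hypothetical fiducial configuration (mm) and a target 30 mm away.
fids = np.array([[50.0, 0, 0], [-50.0, 0, 0], [0, 50.0, 0], [0, 0, 50.0]])
mean_tre = simulate_tre(fids, target=np.array([0.0, 0.0, 30.0]))
```

Sweeping the fiducial layout or noise level in such a simulation is one way to compare candidate configurations before validating them on phantoms.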

    Development of a Surgical Assistance System for Guiding Transcatheter Aortic Valve Implantation

    The development of image-guided interventional systems has grown rapidly in recent years. These new systems have become an essential part of modern minimally invasive surgical procedures, especially in cardiac surgery. Transcatheter aortic valve implantation (TAVI) is a recently developed surgical technique to treat severe aortic valve stenosis in elderly and high-risk patients. The placement of the stented aortic valve prosthesis is crucial and typically performed under live 2D fluoroscopy guidance. To assist the placement of the prosthesis during the surgical procedure, a new fluoroscopy-based TAVI assistance system has been developed. The developed assistance system integrates a 3D geometrical aortic mesh model and anatomical valve landmarks with live 2D fluoroscopic images. The 3D aortic mesh model and landmarks are reconstructed from an interventional angiographic and fluoroscopic C-arm CT system, and a target area for valve implantation is automatically estimated from these aortic mesh models. Based on a template-based tracking approach, the overlay of the visualized 3D aortic mesh model, landmarks, and target implantation area onto fluoroscopic images is updated by approximating the aortic root motion from the motion of a pigtail catheter without contrast agent. A rigid intensity-based registration method is also used to continuously track the aortic root motion in the presence of contrast agent. Moreover, the aortic valve prosthesis is tracked in fluoroscopic images to guide the surgeon in placing the prosthesis appropriately within the estimated target implantation area. An interactive graphical user interface for the surgeon was developed to initialize the system algorithms, control the visualization of the guidance results, and manually correct overlay errors if needed. Retrospective experiments were carried out on several patient datasets from the clinical routine of TAVI in a hybrid operating room.
The maximum displacement errors were small for both the dynamic overlay of aortic mesh models and the tracking of the prosthesis, and were within clinically accepted ranges. High success rates of the developed assistance system were obtained for all tested patient datasets. The results show that the developed surgical assistance system provides a helpful tool for the surgeon by automatically defining the desired placement position of the prosthesis during the TAVI procedure.
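Template-based tracking of a catheter in fluoroscopic frames commonly relies on normalized cross-correlation between a small template and each candidate image patch. A brute-force sketch with a synthetic frame and template, not the system's implementation:

```python
import numpy as np

def match_template(frame, template):
    """Return (row, col) of the best normalized cross-correlation match."""
    th, tw = template.shape
    tz = (template - template.mean()) / (template.std() + 1e-12)  # z-scored template
    best, best_pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            patch = frame[r:r + th, c:c + tw]
            pz = (patch - patch.mean()) / (patch.std() + 1e-12)   # z-scored patch
            score = (tz * pz).mean()           # normalized cross-correlation
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic fluoroscopy frame with a catheter-tip-like blob at row 5, column 8.
frame = np.zeros((20, 20))
template = np.array([[0, 1, 0], [1, 2, 1], [0, 1, 0]], float)
frame[5:8, 8:11] = template
pos = match_template(frame, template)
```

In practice an optimized routine such as OpenCV's `matchTemplate` replaces the explicit loops, and the search is restricted to a region around the previous frame's match.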