
    Assessment of spatiotemporal changes of pain and sensory perceptions using digital health technology


    Personalized medicine in surgical treatment combining tracking systems, augmented reality and 3D printing

    International Mention in the doctoral degree. In the last twenty years, a new way of practicing medicine has emerged that focuses on the problems and needs of each patient as an individual, enabled by significant advances in healthcare technology: so-called personalized medicine. In surgical treatments, personalization has been made possible by key technologies adapted to the specific anatomy of each patient and the needs of physicians. Tracking systems, augmented reality (AR), three-dimensional (3D) printing, and artificial intelligence (AI) have all supported this individualized medicine. However, their independent contributions show several limitations in terms of patient-to-image registration, lack of flexibility to adapt to the requirements of each case, long preoperative planning times, and navigation complexity. The main objective of this thesis is to increase patient personalization in surgical treatments by combining these technologies to bring surgical navigation to new, complex cases: developing new patient registration methods, designing patient-specific tools, facilitating access to augmented reality for the medical community, and automating surgical workflows. In the first part of this dissertation, we present a novel framework for acral tumor resection that combines intraoperative open-source navigation software, based on an optical tracking system, with desktop 3D printing. We used additive manufacturing to create a patient-specific mold that kept the distal extremity in the same position during image-guided surgery as in the preoperative images. The feasibility of the proposed workflow was evaluated in two clinical cases (soft-tissue sarcomas in the hand and foot). The system achieved an overall accuracy of 1.88 mm, evaluated on the patient-specific 3D printed phantoms. Surgical navigation was feasible during both surgeries, allowing surgeons to verify the tumor resection margin.
Then, we propose an augmented reality navigation system that uses 3D printed surgical guides with a tracking pattern to enable automatic patient-to-image registration in orthopedic oncology. This patient-specific tool fits on the patient in only one pre-designed location, in this case on bone tissue. The solution was developed as a software application running on Microsoft HoloLens. The workflow was validated on a 3D printed phantom replicating the anatomy of a patient presenting an extraosseous Ewing’s sarcoma, and then tested during the actual surgical intervention. The results showed that the surgical guide with the reference marker can be placed precisely, with an accuracy of 2 mm and a visualization error lower than 3 mm. The application allowed physicians to visualize the skin, bone, tumor, and medical images overlaid on the phantom and the patient. To enable the use of AR and 3D printing by inexperienced users without broad technical knowledge, we designed a step-by-step methodology. The proposed protocol describes how to develop an AR smartphone application that superimposes any patient-based 3D model onto a real-world environment using a 3D printed marker tracked by the smartphone camera. Our solution brings AR closer to the final clinical user by combining free and open-source software with an open-access protocol, and the proposed guide is already helping to accelerate the adoption of these technologies by medical professionals and researchers. In the next section of the thesis, we demonstrate the benefits of combining these technologies across different stages of the surgical workflow in orthopedic oncology. We designed a novel AR-based smartphone application that can display the patient’s anatomy and the tumor’s location. A 3D printed reference marker, designed to fit in a unique position on the affected bone tissue, enables automatic registration.
The system was evaluated in terms of visualization accuracy and usability during the whole surgical workflow on six realistic phantoms, achieving a visualization error below 3 mm. The AR system was also tested in two clinical cases during surgical planning, patient communication, and surgical intervention. These results, together with the positive feedback from surgeons and patients, suggest that the combination of AR and 3D printing can improve efficacy, accuracy, and patients’ experience. In the final section, two surgical navigation systems, based on optical tracking and augmented reality, were developed and evaluated to guide electrode placement in sacral neurostimulation (SNS) procedures. Our results show that both systems could minimize patient discomfort and improve surgical outcomes by reducing needle insertion time and the number of punctures. Additionally, we proposed a feasible clinical workflow for guiding SNS interventions with both navigation methodologies, including the automatic creation of sacral virtual 3D models for trajectory definition using artificial intelligence, and intraoperative patient-to-image registration. To conclude, in this thesis we have demonstrated that combining technologies such as tracking systems, augmented reality, 3D printing, and artificial intelligence overcomes many current limitations in surgical treatments. Our results encourage the medical community to combine these technologies to improve surgical workflows and outcomes in more clinical scenarios. Doctoral Program in Biomedical Science and Technology, Universidad Carlos III de Madrid. President: María Jesús Ledesma Carbayo. Secretary: María Arrate Muñoz Barrutia. Committee member: Csaba Pinte
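    The patient-to-image registration step that recurs throughout this thesis is commonly posed as a point-based rigid registration between corresponding fiducials. The abstract does not specify the algorithm used, so the following is only a minimal sketch of the standard least-squares (Kabsch/SVD) solution, with hypothetical fiducial coordinates, reporting the fiducial registration error (FRE) of the fit:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical fiducials: points in image space and the same points as
# measured by an optical tracker (a known rotation plus translation).
rng = np.random.default_rng(0)
image_pts = rng.uniform(0.0, 100.0, size=(6, 3))
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([10.0, -5.0, 2.0])
tracker_pts = image_pts @ R_true.T + t_true

R, t = rigid_register(image_pts, tracker_pts)
residuals = image_pts @ R.T + t - tracker_pts
fre = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
print(f"FRE = {fre:.6f} mm")  # essentially zero for noise-free fiducials
```

    In practice tracker measurements carry noise, and reported accuracy figures such as the 1.88 mm above are typically evaluated at target points not used for the fit.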

    Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications

    With the rapid development of science and technology, health and environmental problems have become among the most significant challenges facing humanity. At the interdisciplinary frontier of information science, computer technology, electronic engineering, and biomedical engineering, modern engineering methods are applied to explore the early diagnosis, treatment, and rehabilitation of diseases such as cancer. This paper reviews computer-assisted minimally invasive surgical navigation, multimodal medical big data, methodology, and clinical applications. Starting from the concept of minimally invasive surgical navigation, it introduces preoperative and intraoperative multimodal medical imaging methods for medical big data; describes the core workflow of advanced minimally invasive surgical navigation, including computational anatomical models, intraoperative real-time navigation schemes, 3D visualization methods, and interactive software techniques; and summarizes the clinical applications of the various minimally invasive surgical methods. It also discusses the strengths and weaknesses of surgical navigation technologies in clinical use worldwide and analyzes the latest technical methods in the field. On this basis, it identifies development trends in minimally invasive surgery toward digitalization, personalization, precision, integrated diagnosis and treatment, robotization, and high intelligence. 【Abstract】Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation. X.L. acknowledges funding from the Fundamental Research Funds for the Central Universities. T.M.P. acknowledges funding from the Canada Foundation for Innovation, the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, and a grant from Intuitive Surgical Inc.
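    A core mechanism behind the navigation systems surveyed here is the chaining of rigid transforms between coordinate frames (patient images, external tracker, endoscope). The review gives no numbers, so the 4x4 transforms below are purely hypothetical, illustrating only the frame-composition idea:

```python
import numpy as np

def compose(T_ab, T_bc):
    """Chain homogeneous transforms: the result maps frame-c coordinates into frame a."""
    return T_ab @ T_bc

def transform(T, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    return (T @ np.append(p, 1.0))[:3]

# Hypothetical frames: CT <- tracker (registration) and tracker <- scope (pose).
T_ct_tracker = np.eye(4)
T_ct_tracker[:3, :3] = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]  # 90 deg about z
T_ct_tracker[:3, 3] = [5.0, 0.0, 0.0]

T_tracker_scope = np.eye(4)
T_tracker_scope[:3, 3] = [0.0, 2.0, 0.0]

# The scope tip (origin of the scope frame) expressed in CT coordinates:
T_ct_scope = compose(T_ct_tracker, T_tracker_scope)
tip_in_ct = transform(T_ct_scope, np.zeros(3))
print(tip_in_ct)  # [3. 0. 0.]
```

    Real systems insert further links into this chain (e.g., camera-to-marker calibrations), but the composition rule is the same.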

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, the surgical instruments, and the patient. Suboptimal interaction with patient data and the difficulty of mastering 3D anatomy from ill-posed 2D interventional images are central concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have notable drawbacks. Augmented reality (AR) has been introduced into operating rooms over the last decade; however, in image-guided interventions it has often been considered only a visualization device layered on traditional workflows. Consequently, the technology has not yet reached the maturity required to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We showed how exemplary workflows can be redefined via AR by taking full advantage of head-mounted displays that are co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies. The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches to improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out effective interventions with reduced complications.
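    The frustum-based reasoning above rests on the pinhole model that projects 3D anatomy into a 2D X-ray or camera view. A minimal sketch follows; the intrinsic parameters are hypothetical, not values from the dissertation:

```python
import numpy as np

def project(K, T_cam_world, p_world):
    """Project a 3D world point to pixel coordinates with a pinhole model."""
    p_h = np.append(p_world, 1.0)
    p_cam = (T_cam_world @ p_h)[:3]   # world frame -> camera/detector frame
    u, v, w = K @ p_cam               # homogeneous image coordinates
    return np.array([u / w, v / w])

# Hypothetical intrinsics (focal lengths and principal point, in pixels)
# for an X-ray source/detector pair or a head-mounted display camera.
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
T = np.eye(4)  # camera placed at the world origin for simplicity
pixel = project(K, T, np.array([0.1, -0.05, 2.0]))
print(pixel)  # [370. 215.]
```

    Moving spatial information between frustums amounts to inverting this projection along a ray in one device's frame and re-projecting through another device's `K` and pose.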

    Enabling technologies for MRI guided interventional procedures

    This dissertation addresses topics related to developing interventional assistant devices for Magnetic Resonance Imaging (MRI). MRI can provide high-quality 3D visualization of target anatomy and surrounding tissue, but these benefits cannot be readily harnessed for interventional procedures due to difficulties associated with the use of high-field (1.5 T or greater) MRI. Discussed are potential solutions to the inability to use conventional mechatronics and to the confined physical space in the scanner bore. This work describes the development of two apparently dissimilar systems that represent different approaches to the same surgical problem: coupling information and action to perform percutaneous (through the skin) needle placement with MR imaging. The first system takes MR images and projects them, along with a surgical plan, directly onto the interventional site, thus providing in-situ imaging. With anatomical images and the corresponding plan visible in the appropriate pose, the clinician can use this information to perform the surgical action. My primary research effort has focused on a robotic assistant system that overcomes the difficulties inherent to MR-guided procedures and promises safe and reliable intra-prostatic needle placement inside closed high-field MRI scanners. The robot is a servo-pneumatically operated automatic needle guide that effectively guides needles under real-time MR imaging. This thesis describes the development of the robotic system, including requirements, workspace analysis, mechanism design and optimization, and evaluation of MR compatibility. Further, a generally applicable MR-compatible robot controller is developed, the pneumatic control system is implemented and evaluated, and the system is deployed in pre-clinical trials. The dissertation concludes with future work and lessons learned from this endeavor.
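    At the planning level, needle guidance of this kind reduces to computing an insertion axis and depth from a skin entry point to the target. A minimal geometric sketch with hypothetical coordinates (this is not the robot's actual control code):

```python
import numpy as np

def needle_plan(entry, target):
    """Unit insertion direction, depth (mm), and tilt from the z axis (degrees)."""
    v = np.asarray(target, dtype=float) - np.asarray(entry, dtype=float)
    depth = float(np.linalg.norm(v))
    direction = v / depth
    angle = float(np.degrees(np.arccos(direction[2])))
    return direction, depth, angle

# Hypothetical entry and target points in scanner coordinates, millimeters.
direction, depth, angle = needle_plan([0.0, 0.0, 0.0], [30.0, 0.0, 40.0])
print(f"depth = {depth} mm, tilt = {angle:.2f} deg")  # depth = 50.0 mm, tilt = 36.87 deg
```

    A needle guide like the one described then only needs to orient its bore along `direction` at `entry`; insertion depth is monitored under real-time imaging.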

    Intraoperative Planning and Execution of Arbitrary Orthopedic Interventions Using Handheld Robotics and Augmented Reality

    The focus of this work is a generic, intraoperative, image-free planning and execution application for arbitrary orthopedic interventions using a novel handheld robotic device and optical see-through augmented reality (AR) glasses. This medical CAD application enables the surgeon to plan the intervention intraoperatively, directly on the patient’s bone. The glasses and all other instruments are accurately calibrated using new techniques. Several interventions demonstrate the effectiveness of this approach.

    Proceedings XXI Congresso SIAMOC 2021

    XXI Annual Congress of SIAMOC, held remotely on 30 September and 1 October 2021. As per tradition, the congress aims to be an occasion for enrichment and mutual exchange, from both a scientific and a human point of view. It covers the classic topics of movement analysis, such as the development and application of methods for studying movement in the clinical context, as well as highly topical themes such as telerehabilitation and telemonitoring.

    DESIGN PRINCIPLES FOR APP-BASED HEALTHCARE INTERVENTIONS: A MIXED METHOD APPROACH

    Despite the ubiquity of mobile health applications (apps), their practical use and success have been questionable. Design principles (DP) can affect chronic-health app user satisfaction and have been studied as a means of ensuring favorable app usage. However, there is no consensual definition of DP in the preceding literature, which has a technical rather than an end-user-centric focus and lacks a rigorous theoretical basis. Moreover, different levels of DP application can lead to differential user satisfaction as influenced by the user's contextual environment, warranting a quantitative assessment. Accordingly, the overarching question addressed here is which DP for the self-management of chronic conditions contribute to better user satisfaction outcomes. The research focuses on Multiple Sclerosis (MS) as a representative condition. This research uses a mixed-methods design, with a qualitative approach for DP identification and a quantitative approach for studying the DP-satisfaction relationship. DP identification is achieved through: 1) an in-depth review of foundational theory for greater validity; 2) a Systematic Literature Review (SLR) for DP themes grounded in theory; and 3) manually coded user reviews of MS apps. The theoretical underpinnings of the empirical approach are established through a composite theoretical lens based on technologically, behaviorally, and cognitively oriented frameworks. The DP extracted from theory, the SLR, and manual coding are found to be largely consistent with each other, namely ‘Communication with Clinicians’, ‘Compatibility’, ‘Education’, ‘Notifications’, ‘Tracking’, ‘Social Support’, ‘Ease of Use’, ‘Technical Support’, ‘Usefulness’, ‘Privacy and Security’, and ‘Quality’. An ordinal logistic regression analysis is conducted to understand the relationship between DP and user satisfaction outcomes based on the manually coded DP scores of the user reviews. All DP have a significant impact on user satisfaction.
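    The ordinal logistic (proportional-odds) analysis described can be sketched as follows. The data are simulated purely for illustration; they are not the study's coded reviews, and the single predictor stands in for one DP score driving a three-level satisfaction rating:

```python
import numpy as np
from scipy.optimize import minimize

def fit_ordinal(X, y, n_cats):
    """Proportional-odds ordinal logistic regression via maximum likelihood.
    Thresholds are parameterized as a base value plus positive increments
    so they stay ordered during optimization."""
    n, p = X.shape
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    def nll(params):
        beta = params[:p]
        theta = np.cumsum(np.concatenate([params[p:p + 1], np.exp(params[p + 1:])]))
        eta = X @ beta
        cum = sigmoid(theta[None, :] - eta[:, None])          # P(y <= k)
        cum = np.hstack([np.zeros((n, 1)), cum, np.ones((n, 1))])
        probs = cum[np.arange(n), y + 1] - cum[np.arange(n), y]
        return -np.sum(np.log(np.clip(probs, 1e-12, None)))

    res = minimize(nll, np.zeros(p + n_cats - 1), method="BFGS")
    return res.x[:p]

# Simulated data: one DP score (0-1) and a latent-variable rating in {0, 1, 2}.
rng = np.random.default_rng(1)
score = rng.uniform(0.0, 1.0, 300)
latent = 4.0 * score + rng.logistic(0.0, 1.0, 300)
rating = np.digitize(latent, [1.5, 3.0])
beta = fit_ordinal(score[:, None], rating, n_cats=3)
print(f"estimated effect of DP score: {beta[0]:.2f}")  # positive (true effect is 4)
```

    A significantly positive coefficient corresponds to the study's finding that higher DP scores shift reviews toward higher satisfaction levels.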
From a theoretical perspective, the research improves our understanding of key design principles for the self-management of chronic conditions such as MS and of the impact of such principles on user satisfaction. From a practical perspective, the findings provide guidance for the user-requirement elicitation process, potentially leading to the development of more successful, sustainable, and responsive healthcare interventions.

    SURGICAL NAVIGATION AND AUGMENTED REALITY FOR MARGINS CONTROL IN HEAD AND NECK CANCER

    Head and neck malignancies comprise a set of lesions with diverse pathological, epidemiological, and prognostic characteristics. For a considerable proportion of these diseases, surgery aimed at complete removal of the tumor is the key element of treatment, even when treatment also includes other modalities such as radiotherapy and systemic therapy. The quality of the ablative surgical act is therefore essential to guarantee the patient the greatest chance of cure. In oncologic surgery, the quality of ablations is measured through analysis of the status of the resection margins. Beyond serving as a surrogate for the quality of surgical resection, margin status has notable clinical and prognostic implications. Indeed, involvement of the resection margins by the neoplasm invariably represents an unfavorable prognostic factor and implies the need to intensify postsurgical treatments (e.g., indicating adjuvant chemoradiotherapy), resulting in greater toxicity for the patient. The proportion of resections with positive margins (i.e., involved by the neoplasm) in the head and neck is among the highest in oncologic surgery. This is the context of the doctoral project whose results this thesis reports. The two technologies whose utility was analyzed in terms of optimizing resection margin status are surgical navigation with three-dimensional rendering and augmented reality based on video projection of images. The experiments were carried out partly at the University of Brescia, partly at the Padua University Hospital, and partly at the University Health Network (Toronto, Ontario, Canada).
The results of the experiments included in this work demonstrate that the use of surgical navigation with three-dimensional rendering in ablative head and neck oncologic procedures is associated with a significant advantage in terms of reducing the frequency of positive margins. By contrast, in the preclinical experimentation performed, video projection-based augmented reality techniques were not associated with advantages sufficient to consider this technology for clinical translation. Head and neck malignancies are a heterogeneous group of tumors. Surgery represents the mainstay of treatment for the large majority of head and neck cancers, with ablation aimed at removing the tumor completely. Radiotherapy and systemic therapy also have a substantial role in the multidisciplinary management of head and neck cancers. The quality of surgical ablation is intimately related to margin status evaluated at a microscopic level. Indeed, margin involvement has a remarkably negative effect on patient prognosis and mandates escalation of postoperative treatment by adding concomitant chemotherapy to radiotherapy, accordingly increasing the toxicity of the overall treatment. The rate of margin involvement in the head and neck is among the highest in the entire field of surgical oncology. In this context, the present PhD project was aimed at testing the utility of two technologies, namely surgical navigation with 3-dimensional rendering and pico projector-based augmented reality, in decreasing the rate of involved margins during oncologic surgical ablations in the craniofacial area. Experiments were performed at the University of Brescia, the University of Padua, and the University Health Network (Toronto, Ontario, Canada).
The research activities completed in the context of this PhD course demonstrated that surgical navigation with 3-dimensional rendering confers higher quality on oncologic ablations in the head and neck, irrespective of whether the surgical technique is open or endoscopic. The benefits deriving from this implementation come with no relevant drawbacks from a logistical or practical standpoint, nor were major adverse events observed. Thus, implementation of this technology into standard care is the logical next step. However, whether a genuine prognostic advantage exists will require longer and larger studies to be formally addressed. On the other hand, pico projector-based augmented reality showed insufficient advantages to encourage translation into the clinical setting. Although a clear practical advantage was observed from projecting osteotomy lines onto the surgical field, no substantial benefits were measured when comparing this technology with surgical navigation with 3-dimensional rendering. While recognizing the potential value of this technology from an educational standpoint, its performance in the preclinical setting in terms of surgical margin optimization does not favor clinical translation for this specific aim.

    Ultrasound Guidance in Perioperative Care
