
    Tactile Sensing System for Lung Tumour Localization during Minimally Invasive Surgery

    Video-assisted thoracoscopic surgery (VATS) is becoming a prevalent method for lung cancer treatment. However, VATS suffers from the inability to accurately relay haptic information to the surgeon, often making tumour localization difficult. This limitation was addressed by the design of a tactile sensing system (TSS) consisting of a probe with a tactile sensor and interfacing visualization software. In this thesis, TSS performance was tested to determine the feasibility of implementing the system in VATS. This was accomplished through a series of ex vivo experiments: the tactile sensor was calibrated, the visualization software was modified to present haptic information visually to the user, and TSS performance was compared against human and robot palpation methods and conventional VATS instruments. It was concluded that the device offers the possibility of restoring to the surgeon the haptic information lost during surgery, thereby mitigating one of the current limitations of VATS.

    Improving Radiotherapy Targeting for Cancer Treatment Through Space and Time

    Radiotherapy is a common medical treatment in which lethal doses of ionizing radiation are preferentially delivered to cancerous tumors. In external beam radiotherapy, radiation is delivered by a remote source which sits several feet from the patient's surface. Although great effort is taken in properly aligning the target to the path of the radiation beam, positional uncertainties and other errors can compromise targeting accuracy. Such errors can lead to a failure in treating the target and inflict significant toxicity on healthy tissues which are inadvertently exposed to high radiation doses. Tracking the movement of targeted anatomy between and during treatment fractions provides valuable localization information that allows for the reduction of these positional uncertainties. Inter- and intra-fraction anatomical localization data not only allow for more accurate treatment setup, but also potentially allow for 1) retrospective treatment evaluation, 2) margin reduction and modification of the dose distribution to accommodate daily anatomical changes (called 'adaptive radiotherapy'), and 3) targeting interventions during treatment (for example, suspending radiation delivery while the target is outside the path of the beam). The research presented here investigates the use of inter- and intra-fraction localization technologies to improve radiotherapy through enhanced spatial and temporal accuracy. These technologies provide significant advancements in cancer treatment compared to standard clinical technologies. Furthermore, work is presented on the use of localization data acquired from these technologies in adaptive treatment planning, an investigational technique in which the distribution of planned dose is modified during the course of treatment based on biological and/or geometrical changes of the patient's anatomy. The focus of this research is directed at abdominal sites, which have historically been central to the problem of motion management in radiation therapy.
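    The intra-fraction gating intervention mentioned above (suspending radiation delivery while the target is outside the path of the beam) reduces to a threshold check on the tracked target's displacement. A minimal sketch follows; the 3 mm gating margin, function names, and position trace are illustrative assumptions, not values from the thesis:

```python
# Sketch of intra-fraction beam gating: hold the beam whenever the
# tracked target drifts beyond a gating margin. The margin (3 mm) and
# position samples are illustrative assumptions.
import math

def displacement_mm(pos, ref):
    """Euclidean distance (mm) between tracked and reference positions."""
    return math.dist(pos, ref)

def beam_on(pos, ref, gating_margin_mm=3.0):
    """Deliver radiation only while the target is within the margin."""
    return displacement_mm(pos, ref) <= gating_margin_mm

reference = (0.0, 0.0, 0.0)
trace = [(0.5, 0.2, 0.1), (2.1, 1.8, 0.4), (4.0, 2.5, 1.0)]  # mm
gate = [beam_on(p, reference) for p in trace]
print(gate)  # [True, True, False] -- last sample exceeds the 3 mm margin
```

    In a real system the same decision would be driven by a continuous tracking signal (e.g. implanted fiducials or surface monitoring) rather than discrete samples.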

    IMAGE ANALYSIS FOR SPINE SURGERY: DATA-DRIVEN DETECTION OF SPINE INSTRUMENTATION & AUTOMATIC ANALYSIS OF GLOBAL SPINAL ALIGNMENT

    Spine surgery is a therapeutic modality for treatment of spine disorders, including spinal deformity, degeneration, and trauma. Such procedures benefit from accurate localization of surgical targets, precise delivery of instrumentation, and reliable validation of surgical objectives – for example, confirming that the surgical implants are delivered as planned and desired changes to the global spinal alignment (GSA) are achieved. Recent advances in surgical navigation have helped to improve the accuracy and precision of spine surgery, including intraoperative imaging integrated with real-time tracking and surgical robotics. This thesis aims to develop two methods for improved image-guided surgery using image analytic techniques. The first provides a means for automatic detection of pedicle screws in intraoperative radiographs – for example, to streamline intraoperative assessment of implant placement. The algorithm achieves a precision and recall of 0.89 and 0.91, respectively, with localization accuracy within ~10 mm. The second develops two algorithms for automatic assessment of GSA in computed tomography (CT) or cone-beam CT (CBCT) images, providing a means to quantify changes in spinal curvature and reduce the variability in GSA measurement associated with manual methods. The algorithms demonstrate GSA estimates with 93.8% of measurements within a 95% confidence interval of manually defined truth. Such methods support the goals of safe, effective spine surgery and provide a means for more quantitative intraoperative quality assurance. In turn, the ability to quantitatively assess instrument placement and changes in GSA could represent important elements of retrospective analysis of large image datasets, improved clinical decision support, and improved patient outcomes.
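    The detection metrics reported above (precision 0.89, recall 0.91) follow the standard definitions over true positives, false positives, and false negatives. A minimal sketch, with detection counts that are illustrative assumptions chosen only to reproduce similar values:

```python
# Precision/recall from detection counts. The counts are illustrative,
# chosen only to yield values close to those reported (0.89, 0.91);
# they are not the thesis's actual confusion counts.
def precision_recall(tp, fp, fn):
    """Standard detection metrics over confusion counts."""
    precision = tp / (tp + fp)   # fraction of detections that are correct
    recall = tp / (tp + fn)      # fraction of true screws that are found
    return precision, recall

p, r = precision_recall(tp=89, fp=11, fn=9)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.89 recall=0.91
```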

    CT Scanning

    Since its introduction in 1972, X-ray computed tomography (CT) has evolved into an essential diagnostic imaging tool for a continually increasing variety of clinical applications. The goal of this book was not simply to summarize currently available CT imaging techniques but also to provide clinical perspectives, advances in hybrid technologies, new applications beyond medicine, and an outlook on future developments. Major experts in this growing field contributed to this book, which is geared to radiologists, orthopedic surgeons, engineers, and clinical and basic researchers. We believe that CT scanning is an effective and essential tool in treatment planning, basic understanding of physiology, and tackling the ever-increasing challenge of diagnosis in our society.

    3D MODELLING AND RAPID PROTOTYPING FOR CARDIOVASCULAR SURGICAL PLANNING – TWO CASE STUDIES

    In recent years, cardiovascular diagnosis, surgical planning, and intervention have benefited from 3D modelling and rapid prototyping techniques. The starting data for the whole process are medical images, in particular, but not exclusively, computed tomography (CT) or multi-slice CT (MCT) and magnetic resonance imaging (MRI). In these images, regions of interest, i.e. heart chambers, valves, aorta, coronary vessels, etc., are segmented and converted into 3D models, which can finally be converted into physical replicas through a 3D printing procedure. In this work, an overview of modern approaches for automatic and semi-automatic segmentation of medical images for 3D surface model generation is provided. The issue of accuracy checking of surface models is also addressed, together with the critical aspects of converting digital models into physical replicas through 3D printing techniques. A patient-specific 3D modelling and printing procedure (Figure 1) for surgical planning in cases of complex heart disease was developed. The procedure was applied to two case studies for which MCT scans of the chest were available. In the article, a detailed description of the implemented patient-specific modelling procedure is provided, along with a general discussion on the potential and future developments of personalized 3D modelling and printing for surgical planning and surgeons' practice.
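    The segmentation step of the pipeline described above, converting image intensities into a binary region of interest, can in its simplest thresholding form be sketched as follows. The Hounsfield window and the toy slice are illustrative assumptions; real workflows refine such masks interactively before surface extraction (e.g. marching cubes) and STL export for printing:

```python
# Minimal intensity-threshold segmentation of one CT slice (values in
# Hounsfield units). The window used here is an illustrative assumption,
# not a clinically validated setting; real pipelines refine this mask
# before extracting a printable surface mesh.
slice_hu = [
    [-1000, -1000,  -50,  -60],
    [-1000,   200,  250,  -70],
    [  -80,   240,  260, -900],
    [  -90,  -100, -950, -1000],
]

def segment(slice_hu, lo=150, hi=500):
    """Binary mask: 1 where the voxel intensity falls inside the window."""
    return [[1 if lo <= v <= hi else 0 for v in row] for row in slice_hu]

mask = segment(slice_hu)
for row in mask:
    print(row)  # central 2x2 block of bright voxels is selected
```

    Stacking such per-slice masks across the volume yields the voxel region that surface-extraction algorithms turn into the triangulated model sent to the printer.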

    Deep Learning-based Patient Re-identification Is able to Exploit the Biometric Nature of Medical Chest X-ray Data

    With the rise and ever-increasing potential of deep learning techniques in recent years, publicly available medical datasets became a key factor enabling reproducible development of diagnostic algorithms in the medical domain. Medical data contains sensitive patient-related information and is therefore usually anonymized by removing patient identifiers, e.g., patient names, before publication. To the best of our knowledge, we are the first to show that a well-trained deep learning system is able to recover the patient identity from chest X-ray data. We demonstrate this using the publicly available large-scale ChestX-ray14 dataset, a collection of 112,120 frontal-view chest X-ray images from 30,805 unique patients. Our verification system is able to identify whether two frontal chest X-ray images are from the same person with an AUC of 0.9940 and a classification accuracy of 95.55%. We further highlight that the proposed system is able to reveal the same person even ten and more years after the initial scan. When pursuing a retrieval approach, we observe an mAP@R of 0.9748 and a precision@1 of 0.9963. Furthermore, we achieve an AUC of up to 0.9870 and a precision@1 of up to 0.9444 when evaluating our trained networks on external datasets such as CheXpert and the COVID-19 Image Data Collection. Based on this high identification rate, a potential attacker may leak patient-related information and additionally cross-reference images to obtain more information. Thus, there is a great risk of sensitive content falling into unauthorized hands or being disseminated against the will of the concerned patients. Especially during the COVID-19 pandemic, numerous chest X-ray datasets have been published to advance research. Therefore, such data may be vulnerable to potential attacks by deep learning-based re-identification algorithms.
    Comment: Published in Scientific Reports
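    The verification task underlying the reported AUC, deciding whether two images show the same patient, reduces to thresholding a similarity score between learned embeddings. A sketch under stated assumptions: the 2-D embeddings and the cosine-similarity choice are toy illustrations, and AUC is computed with the rank (Mann-Whitney) formulation rather than any code from the paper:

```python
# Verification sketch: cosine similarity between embedding pairs, plus
# AUC via the Mann-Whitney statistic (fraction of same-patient scores
# ranked above different-patient scores). Embeddings here are toy data.
import math

def cosine(a, b):
    """Cosine similarity between two 2-D embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def auc(pos_scores, neg_scores):
    """AUC as the probability a positive pair outscores a negative pair."""
    wins = sum(p > n for p in pos_scores for n in neg_scores)
    ties = sum(p == n for p in pos_scores for n in neg_scores)
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

same = [cosine((1.0, 0.1), (0.9, 0.2)), cosine((0.2, 1.0), (0.3, 0.9))]
diff = [cosine((1.0, 0.1), (0.1, 1.0)), cosine((0.9, 0.2), (0.2, 1.0))]
print(auc(same, diff))  # 1.0 -- toy pairs are perfectly separated
```

    With real embeddings the positive and negative score distributions overlap, and the AUC (0.9940 in the paper) measures how little they do.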