
    Uncertainty-aware Visualization in Medical Imaging - A Survey

    Medical imaging (image acquisition, image transformation, and image visualization) is a standard tool that clinicians use to make diagnoses, plan surgeries, or educate students. Each of these steps is affected by uncertainty, which can strongly influence the decision-making process of clinicians. Visualization can help in understanding and communicating these uncertainties. In this manuscript, we aim to summarize the current state of the art in uncertainty-aware visualization in medical imaging. Our report is organized around the steps involved in medical imaging as well as its applications. Requirements are formulated to examine the considered approaches. In addition, this manuscript shows which approaches can be combined to form uncertainty-aware medical imaging pipelines. Based on our analysis, we point to open problems in uncertainty-aware medical imaging.
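    As a minimal illustration of the kind of technique this survey covers, the sketch below overlays a per-pixel uncertainty map on a grayscale slice as a semi-transparent colormap. The slice, the uncertainty values, and the blending weight are synthetic assumptions chosen for illustration; the survey does not prescribe this particular rendering.

    ```python
    # Minimal sketch: overlay a per-pixel uncertainty map on a grayscale slice.
    # The slice and uncertainty values are synthetic placeholders.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    slice_img = rng.normal(0.5, 0.1, size=(128, 128))       # stand-in for an image slice
    uncertainty = rng.uniform(0.0, 1.0, size=(128, 128))     # stand-in for per-pixel uncertainty

    fig, ax = plt.subplots()
    ax.imshow(slice_img, cmap="gray")                             # anatomy in grayscale
    overlay = ax.imshow(uncertainty, cmap="inferno", alpha=0.4)   # uncertainty as transparent colour
    fig.colorbar(overlay, ax=ax, label="uncertainty (a.u.)")
    ax.set_axis_off()
    plt.show()
    ```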

    Enabling Technologies for Co-Robotic Translational Ultrasound and Photoacoustic Imaging

    Among many medical imaging modalities, medical ultrasound offers the unique advantages of being non-ionizing, real-time, and non-invasive. With its safety, ease of use, and cost-effectiveness, ultrasound imaging has been used in a wide variety of diagnostic applications. Photoacoustic imaging is a hybrid imaging modality merging light and ultrasound, and it reveals tissue metabolism and molecular distribution with the use of endogenous and exogenous contrast agents. With the emergence of photoacoustic imaging, ultrasound and photoacoustic imaging can comprehensively depict not only anatomical but also functional information of biological tissue. To broaden the impact of translational ultrasound and photoacoustic imaging, this dissertation focuses on the development of enabling technologies and the exploration of associated applications. The goals of these technologies are: (1) Enabling Technologies for Translational Photoacoustic Imaging. We investigated the potential of maximizing access to translational photoacoustic imaging using a clinical ultrasound scanner and a low-cost light source, instead of the widely used customized data acquisition systems and expensive high-power lasers. (2) Co-robotic Ultrasound and Photoacoustic Imaging. We introduced a co-robotic paradigm to make ultrasound/photoacoustic imaging more comprehensive and capable of imaging deeper with higher resolution and a wider field of view. (3) Advancements in Translational Photoacoustic Imaging. We explored the new use of translational photoacoustic imaging for molecular-based cancer detection and the sensing of neurotransmitter activity in the brain. Together, these parts explore the feasibility of co-robotic translational ultrasound and photoacoustic imaging.
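    For context on how photoacoustic channel data are commonly turned into an image, the sketch below implements a plain one-way delay-and-sum reconstruction for a linear array. The array geometry, sound speed, sampling rate, and synthetic channel data are illustrative assumptions, not the processing chain used in the dissertation.

    ```python
    # Minimal sketch: one-way delay-and-sum reconstruction of photoacoustic data
    # from a linear array. Geometry, sound speed, and data are illustrative.
    import numpy as np

    c = 1540.0                 # assumed speed of sound [m/s]
    fs = 40e6                  # assumed sampling rate [Hz]
    n_elem, n_samp = 64, 1024
    pitch = 0.3e-3             # element spacing [m]
    elem_x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch

    rng = np.random.default_rng(0)
    channel_data = rng.normal(size=(n_elem, n_samp))   # stand-in for recorded RF data

    xs = np.linspace(-5e-3, 5e-3, 101)                 # lateral image grid [m]
    zs = np.linspace(5e-3, 25e-3, 201)                 # depth image grid [m]
    image = np.zeros((zs.size, xs.size))

    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            # one-way distance from each element to the pixel (photoacoustic case)
            dist = np.hypot(elem_x - x, z)
            idx = np.round(dist / c * fs).astype(int)
            valid = idx < n_samp
            image[iz, ix] = channel_data[np.arange(n_elem)[valid], idx[valid]].sum()
    ```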

    Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives

    Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis due to its merits of providing non-invasive, radiation-free, and real-time images. However, free-hand US examinations are highly operator-dependent. Robotic US Systems (RUSS) aim to overcome this shortcoming by offering reproducibility, improved dexterity, and intelligent, anatomy- and disease-aware imaging. In addition to enhancing diagnostic outcomes, RUSS also holds the potential to provide medical interventions for populations suffering from a shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. Regarding teleoperated RUSS, we summarize their technical developments and clinical evaluations. This survey then focuses on the review of recent work on autonomous robotic US imaging. We demonstrate that machine learning and artificial intelligence provide the key techniques that enable intelligent, patient- and process-specific, motion- and deformation-aware robotic image acquisition. We also show that research on artificial intelligence for autonomous RUSS has directed the research community toward understanding and modeling expert sonographers' semantic reasoning and action. Here, we call this process the recovery of the "language of sonography". This side result of research on autonomous robotic US acquisition could be considered as valuable and essential as the progress made in the robotic US examination itself. This article will provide both engineers and clinicians with a comprehensive understanding of RUSS by surveying the underlying techniques. Comment: Accepted by Medical Image Analysis.
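    One recurring building block in autonomous robotic US scanning is regulation of the probe's contact force against the skin. The sketch below shows a generic proportional force controller acting along the probe axis against a toy spring-like tissue model; the gains, target force, and stiffness are illustrative assumptions and are not taken from the surveyed systems.

    ```python
    # Minimal sketch: proportional contact-force regulation along the probe axis,
    # simulated against a toy linear-stiffness tissue model. Gains, target force,
    # and stiffness are illustrative assumptions.
    target_force = 5.0      # desired contact force [N]
    kp = 0.002              # proportional gain [m/s per N]
    dt = 0.01               # control period [s]
    stiffness = 1000.0      # simulated tissue stiffness [N/m]

    probe_z = -0.001        # probe position along its axis [m]; negative = indentation
    for _ in range(500):
        measured_force = max(0.0, -probe_z) * stiffness    # toy contact model
        error = target_force - measured_force
        probe_z -= kp * error * dt                         # push in further if force is too low
    print(f"steady-state force ~ {max(0.0, -probe_z) * stiffness:.2f} N")
    ```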

    Augmented Image-Guidance for Transcatheter Aortic Valve Implantation

    The introduction of transcatheter aortic valve implantation (TAVI), an innovative stent-based technique for delivery of a bioprosthetic valve, has resulted in a paradigm shift in treatment options for elderly patients with aortic stenosis. While there have been major advancements in valve design and access routes, TAVI still relies largely on single-plane fluoroscopy for intraoperative navigation and guidance, which provides only gross imaging of anatomical structures. Inadequate imaging leading to suboptimal valve positioning contributes to many of the early complications experienced by TAVI patients, including valve embolism, coronary ostia obstruction, paravalvular leak, heart block, and secondary nephrotoxicity from contrast use. A potential method of providing improved image guidance for TAVI is to combine the information derived from intra-operative fluoroscopy and transesophageal echocardiography (TEE) with pre-operative CT data. This would allow the 3D anatomy of the aortic root to be visualized along with real-time information about valve and prosthesis motion. The combined information can be visualized as a 'merged' image, where the different imaging modalities are overlaid upon each other, or as an 'augmented' image, where the location of key target features identified on one image is displayed on a different imaging modality. This research develops image registration techniques to bring fluoroscopy, TEE, and CT models into a common coordinate frame with an image processing workflow that is compatible with the TAVI procedure. The techniques are designed to be fast enough to allow for real-time image fusion and visualization during the procedure, with an intra-procedural set-up requiring only a few minutes. TEE-to-fluoroscopy registration was achieved using a single-perspective TEE probe pose estimation technique. The alignment of CT and TEE images was achieved using custom-designed algorithms to extract aortic root contours from XPlane TEE images and match the shape of these contours to a CT-derived surface model. Registration accuracy was assessed on porcine and human images by identifying targets (such as guidewires or coronary ostia) on the different imaging modalities and measuring the correspondence of these targets after registration. The merged images demonstrated good visual alignment of aortic root structures, and quantitative assessment measured errors of less than 1.5 mm for TEE-fluoroscopy registration and less than 6 mm for CT-TEE registration. These results suggest that the image processing techniques presented have potential for development into a clinical tool to guide TAVI. Such a tool could potentially reduce TAVI complications, reducing morbidity and mortality and allowing for a safer procedure.
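    The accuracy figures above are the kind of target-based error that can be computed once corresponding landmarks are identified in both modalities. The sketch below evaluates a target registration error (TRE) after applying an estimated rigid transform; the points and the transform are synthetic placeholders, not data from this work.

    ```python
    # Minimal sketch: target registration error (TRE) after a rigid registration.
    # The target points and the 4x4 transform are synthetic placeholders.
    import numpy as np

    def apply_rigid(T, pts):
        """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
        pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
        return (T @ pts_h.T).T[:, :3]

    rng = np.random.default_rng(0)
    targets_moving = rng.uniform(-20.0, 20.0, size=(5, 3))            # targets in the moving modality [mm]
    T_est = np.eye(4); T_est[:3, 3] = [1.0, -0.5, 0.3]                # estimated registration
    targets_fixed = targets_moving + rng.normal(0.0, 0.8, (5, 3))     # corresponding targets in the fixed modality

    tre = np.linalg.norm(apply_rigid(T_est, targets_moving) - targets_fixed, axis=1)
    print(f"mean TRE = {tre.mean():.2f} mm, max TRE = {tre.max():.2f} mm")
    ```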

    KinImmerse: Macromolecular VR for NMR ensembles

    Background: In molecular applications, virtual reality (VR) and immersive virtual environments have generally been used and valued for the visual and interactive experience – to enhance intuition and communicate excitement – rather than as part of the actual research process. In contrast, this work develops a software infrastructure for research use and illustrates such use on a specific case. Methods: The Syzygy open-source toolkit for VR software was used to write the KinImmerse program, which translates the molecular capabilities of the kinemage graphics format into software for display and manipulation in the DiVE (Duke immersive Virtual Environment) or other VR systems. KinImmerse is supported by the flexible display construction and editing features in the KiNG kinemage viewer, and it implements new forms of user interaction in the DiVE. Results: In addition to molecular visualizations and navigation, KinImmerse provides a set of research tools for manipulation, identification, co-centering of multiple models, free-form 3D annotation, and output of results. The molecular research test case analyzes the local neighborhood around an individual atom within an ensemble of nuclear magnetic resonance (NMR) models, enabling immersive visual comparison of the local conformation with the local NMR experimental data, including target curves for residual dipolar couplings (RDCs). Conclusion: The promise of KinImmerse for production-level molecular research in the DiVE is shown by the locally co-centered RDC visualization developed there, which gave new insights now being pursued in wider data analysis.
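    The co-centering operation mentioned above amounts to translating every model in the ensemble so that a chosen atom coincides across models. The sketch below shows that operation on synthetic coordinates; it is an illustration of the idea, not KinImmerse or kinemage code.

    ```python
    # Minimal sketch: co-center an NMR ensemble on one chosen atom, i.e. translate
    # each model so that atom sits at the origin. Coordinates are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n_models, n_atoms = 10, 50
    ensemble = rng.normal(scale=5.0, size=(n_models, n_atoms, 3))    # [model, atom, xyz]

    focus_atom = 17                                                  # arbitrary atom index
    centered = ensemble - ensemble[:, focus_atom:focus_atom + 1, :]  # per-model translation

    # after co-centering, the focus atom is at the origin in every model
    assert np.allclose(centered[:, focus_atom, :], 0.0)
    ```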

    Mobile and Low-cost Hardware Integration in Neurosurgical Image-Guidance

    It is estimated that 13.8 million patients per year require neurosurgical interventions worldwide, be it for cerebrovascular disease, stroke, tumour resection, or epilepsy treatment, among others. These procedures involve navigating through and around complex anatomy in an organ where damage to eloquent healthy tissue must be minimized. Neurosurgery thus has very specific constraints compared to most other domains of surgical care. These constraints have made neurosurgery particularly suitable for integrating new technologies. Any new method that has the potential to improve surgical outcomes is worth pursuing, as it can not only save and prolong patients' lives, but also increase the quality of life post-treatment. In this thesis, novel neurosurgical image-guidance methods are developed using currently available, low-cost, off-the-shelf components. In particular, a mobile device (e.g. a smartphone or tablet) is integrated into a neuronavigation framework to explore new augmented reality visualization paradigms and novel, intuitive interaction methods. The developed tools aim at improving image guidance using augmented reality to improve intuitiveness and ease of use. Further, we use gestures on the mobile device to increase interactivity with the neuronavigation system, providing solutions to the accuracy loss and brain shift that occur during surgery. Lastly, we explore the effectiveness and accuracy of low-cost hardware components (i.e. tracking systems and ultrasound) that could replace the high-cost hardware currently integrated into commercial image-guided neurosurgery systems. The results of our work show the feasibility of using mobile devices to improve neurosurgical processes. Augmented reality enables surgeons to focus on the surgical field while receiving intuitive guidance information. Mobile devices also allow for easy interaction with the neuronavigation system, enabling surgeons to directly interact with systems in the operating room to improve accuracy and streamline procedures. Lastly, our results show that low-cost components can be integrated into a neurosurgical guidance system at a fraction of the cost, with a negligible impact on accuracy. The developed methods have the potential to improve surgical workflows, as well as democratize access to higher-quality care worldwide.
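    An augmented-reality overlay of this kind ultimately reduces to projecting tracked 3D anatomy into the mobile device's camera image. The sketch below shows a standard pinhole projection; the intrinsics, camera pose, and contour points are illustrative assumptions rather than values from the described system.

    ```python
    # Minimal sketch: project tracked 3D anatomy points into a mobile-device camera
    # image for an augmented-reality overlay. Intrinsics and pose are illustrative.
    import numpy as np

    K = np.array([[800.0,   0.0, 320.0],     # assumed camera intrinsics [px]
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    R = np.eye(3)                             # camera orientation in patient space
    t = np.array([0.0, 0.0, 300.0])           # camera 300 mm in front of the anatomy

    points_patient = np.array([[10.0, -5.0, 0.0],    # e.g. tumour contour points [mm]
                               [12.0, -4.0, 2.0]])

    cam = (R @ points_patient.T).T + t        # patient -> camera coordinates
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]               # perspective divide -> pixel coordinates
    print(uv)
    ```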

    Image Guided Robots for Urology

    This dissertation addresses the development of medical image-guided robots and their applications in urology. Image-guided robots integrate medical image information with robotic precision to assist the planning and execution of image-guided interventions. Robots guided by two different imaging modalities, ultrasound and MRI, were developed. Ultrasound image-guided robots manipulate an ultrasound probe and a needle guide that are calibrated with respect to the robot for image-guided targeting. A method for calibration was developed and verified through image-guided targeting experiments. Robotic manipulation of the calibrated probe allows acquisition of image slices at precise locations, which can be combined to generate a 3D ultrasound image. Software for 3D ultrasound image acquisition, processing, and segmentation was developed as part of the image-guided robot system. The feasibility of several image-guided intervention procedures using the ultrasound image-guided robot system was tested. The robot was used in a clinical trial of intraoperative transrectal ultrasound (TRUS)-guided prostatectomy. The accuracy of TRUS-guided prostate biopsy using the robot was evaluated in a comparative study versus classic manual operation of the probe. Robot-controlled palpation and image processing methods were developed for ultrasound elastography imaging of the prostate. An ultrasound-to-CT image fusion method using the robot as a common reference was developed for percutaneous access to the kidney. MRI-guided robots were developed for transrectal and transperineal prostate biopsy. Extensive in-vitro tests were performed to ensure MRI compatibility and image-guided accuracy of the robots. The transrectal robot was evaluated in an animal study, and the transperineal robot is undergoing a clinical trial. The collection of methods and algorithms presented in this dissertation can contribute to the development of image-guided robots that may provide less invasive and more precise interventions in urology, interventional radiology, and other fields.
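    Image-guided targeting with a calibrated probe comes down to chaining transforms: a pixel is mapped into the image frame, through the probe calibration, and through the robot kinematics into the robot base frame. The sketch below shows that composition with placeholder transforms and pixel scaling; none of the numeric values come from the dissertation.

    ```python
    # Minimal sketch: map an ultrasound pixel into the robot base frame by chaining
    # the probe-to-image calibration with the robot pose. Values are placeholders.
    import numpy as np

    def make_T(R, t):
        """Build a 4x4 homogeneous transform from a rotation and translation."""
        T = np.eye(4); T[:3, :3] = R; T[:3, 3] = t
        return T

    T_base_probe = make_T(np.eye(3), [100.0, 50.0, 200.0])   # robot kinematics [mm]
    T_probe_image = make_T(np.eye(3), [0.0, 0.0, 10.0])      # probe-to-image calibration [mm]

    scale = np.array([0.2, 0.2])                             # pixel size [mm/px]
    pixel = np.array([320, 240])                             # target picked in the image
    p_image = np.array([*(pixel * scale), 0.0, 1.0])         # homogeneous image-frame point

    p_base = T_base_probe @ T_probe_image @ p_image          # target in robot base frame
    print(p_base[:3])
    ```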

    New Technology and Techniques for Needle-Based Magnetic Resonance Image-Guided Prostate Focal Therapy

    The most common diagnosis of prostate cancer is localized disease, and unfortunately the optimal treatment for these men is not yet certain. Magnetic resonance imaging (MRI)-guided focal laser ablation (FLA) therapy is a promising potential treatment option for select men with localized prostate cancer, and may result in fewer side effects than whole-gland therapies while still achieving oncologic control. The objective of this thesis was to develop methods of accurately guiding needles to the prostate within the bore of a clinical MRI scanner for MRI-guided FLA therapy. To achieve this goal, a mechatronic needle guidance system was developed. The system enables precise targeting of prostate tumours through angulated trajectories and insertion of needles with the patient in the bore of a clinical MRI scanner. After confirming sufficient accuracy in phantoms and good MRI compatibility, the system was used to guide needles for MRI-guided FLA therapy in eight patients. Results from this case series demonstrated an improvement in needle guidance time and ease of needle delivery compared to conventional approaches. Methods of more reliable treatment planning were sought, leading to the development of a systematic treatment planning method and Monte Carlo simulations of needle placement uncertainty. The result was an estimate of the maximum size of focal target that can be confidently ablated using the mechatronic needle guidance system, leading to better guidelines for patient eligibility. These results also quantified the benefit that could be gained with improved techniques for needle guidance.
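    To illustrate the Monte Carlo idea described above, the sketch below samples random needle tip placement errors and estimates how often a spherical focal target would still be fully covered by the ablation zone. The error standard deviation, ablation radius, spherical geometry, and full-coverage criterion are illustrative assumptions, not the thesis' actual simulation parameters.

    ```python
    # Minimal sketch: Monte Carlo estimate of how needle placement uncertainty limits
    # the focal-target size that can be confidently ablated. All parameters are
    # illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials = 100_000
    sigma = 2.0          # assumed needle tip placement error std dev [mm]
    ablation_r = 7.0     # assumed radius of the ablation zone around the needle [mm]

    tip_error = rng.normal(0.0, sigma, size=(n_trials, 3))   # 3D placement error
    miss = np.linalg.norm(tip_error, axis=1)

    for target_r in (2.0, 4.0, 6.0):
        # a spherical target of radius target_r is fully covered if the ablation
        # sphere, centred on the misplaced tip, still contains it: miss <= R - r
        covered = miss <= (ablation_r - target_r)
        print(f"target radius {target_r:.0f} mm: fully covered in {covered.mean():.1%} of trials")
    ```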