68 research outputs found

    Image-Based Tip Force Estimation on Steerable Intracardiac Catheters Using Learning-Based Methods

    Minimally invasive surgery has become the most common approach to treating cardiovascular disease, yet the absence of haptic (tactile) feedback and force information presented to surgeons is hypothesized to be a limiting factor. Ablation catheters with a force sensor integrated at the tip are costly and prone to noise complications. In this thesis, two sensor-less methods are proposed to estimate the force at an intracardiac catheter’s tip. Estimating the tip force is important because insufficient force during ablation may result in incomplete treatment, while excessive force can damage the heart chamber; moreover, adding a sensor to an intracardiac catheter complicates its structure. The thesis comprises two sensor-less approaches: (1) learning-based force estimation for intracardiac ablation catheters, and (2) a deep-learning force-estimator system for intracardiac catheters. The first method estimates catheter-tissue contact force by learning the deflected shape of the catheter’s tip section from its image: a regression model is developed with tip-curvature coefficients and knob actuation as predictor variables. This learning-based approach produced force predictions in close agreement with experimental contact-force measurements. The second approach applies deep learning to estimate contact forces directly from images of the catheter tip. A convolutional neural network extracts the catheter’s deflection from the input images and translates it into the corresponding forces; a ResNet architecture is used to perform the regression. The model estimates catheter-tissue contact force from input images without any hand-crafted feature extraction or pre-processing, and can therefore estimate the force regardless of tip displacement and deflection shape.
The evaluation results show that the proposed method elicits a robust model from the specified data set and approximates the force with good accuracy.
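The abstract does not give the regression details; a minimal sketch of the first approach, assuming a simple linear relation between two hypothetical tip-curvature coefficients, a knob-actuation value, and the contact force, fitted by ordinary least squares on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the thesis data set: each sample holds two
# tip-curvature coefficients and one knob-actuation value (all hypothetical).
n = 200
X = rng.uniform(0.0, 1.0, size=(n, 3))          # columns: [c1, c2, knob]
true_w = np.array([0.8, 0.3, 1.5])              # assumed linear relation
force = X @ true_w + rng.normal(0.0, 0.01, n)   # contact force + sensor noise

# Fit the regression model by ordinary least squares (with an intercept term).
A = np.hstack([X, np.ones((n, 1))])
w, *_ = np.linalg.lstsq(A, force, rcond=None)

# Predict the tip force for a new deflected shape.
x_new = np.array([0.5, 0.2, 0.7, 1.0])          # [c1, c2, knob, bias]
f_hat = float(x_new @ w)
print(f"estimated tip force: {f_hat:.3f} N")
```

The thesis likely uses a richer model of the deflected tip shape; this sketch only illustrates the predictor-variable structure (curvature coefficients plus knob actuation) described in the abstract.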

    Augmented Reality Ultrasound Guidance in Anesthesiology

    Real-time ultrasound has become a mainstay in many image-guided interventions and is increasingly popular in several percutaneous procedures in anesthesiology. One of the main constraints of ultrasound-guided needle interventions is identifying and distinguishing the needle tip from the needle shaft in the image. Augmented reality (AR) environments have been employed to address challenges surrounding surgical tool visualization, navigation, and positioning in many image-guided interventions. The motivation behind this work was to explore the feasibility and utility of such visualization techniques in anesthesiology to address some of the specific limitations of ultrasound-guided needle interventions. This thesis brings together the goals, guidelines, and best development practices of functional AR ultrasound image guidance (AR-UIG) systems, examines the general structure of such systems suitable for applications in anesthesiology, and provides a series of recommendations for their development. The main components of such systems, including ultrasound calibration and system interface design, as well as applications of AR-UIG systems to quantitative skill assessment, were also examined. The effects of ultrasound image reconstruction techniques, as well as phantom material and geometry, on ultrasound calibration were investigated. Ultrasound calibration error was reduced by 10% with synthetic transmit aperture imaging compared with B-mode ultrasound. Phantom properties were shown to have a significant effect on calibration error, and that effect varies with the ultrasound beamforming technique. This finding has the potential to change how calibration phantoms are designed for a given ultrasound imaging technique. The performance of an AR-UIG system tailored to central line insertions was evaluated in novice and expert user studies.
While the system outperformed ultrasound-only guidance with novice users, it did not significantly affect the performance of experienced operators. Although the users’ extensive experience with ultrasound may have affected the results, certain aspects of the AR-UIG system contributed to the lackluster outcomes, which were analyzed via a thorough critique of the design decisions. The application of an AR-UIG system to quantitative skill assessment was investigated, and the first quantitative analysis of needle-tip localization error in ultrasound in a simulated central line procedure, performed by experienced operators, is presented. Most participants did not closely follow the needle tip in ultrasound, resulting in 42% unsuccessful needle placements and a 33% complication rate. Compared to successful trials, unsuccessful procedures featured a significantly greater (p = 0.04) needle-tip to image-plane distance. Professional experience with ultrasound does not necessarily lead to expert-level performance; along with deliberate practice, quantitative skill assessment may reinforce clinical best practices in ultrasound-guided needle insertions. Based on the development guidelines, an AR-UIG system was developed to address the challenges of ultrasound-guided epidural injections. For improved needle positioning, this system integrated an A-mode ultrasound signal obtained from a transducer housed at the tip of the needle. Improved needle navigation was achieved via enhanced visualization of the needle in an AR environment incorporating both B-mode and A-mode ultrasound data. The technical feasibility of the AR-UIG system was evaluated in a preliminary user study. The results suggested that the AR-UIG system has the potential to outperform ultrasound-only guidance.
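The needle-tip to image-plane distance used as a skill metric above is, geometrically, a point-to-plane distance. A minimal sketch, assuming the tracker reports the tip position and the ultrasound image plane in point-normal form (all coordinates and function names here are hypothetical):

```python
import numpy as np

def tip_to_plane_distance(tip, plane_point, plane_normal):
    """Perpendicular distance from a tracked needle tip to the ultrasound
    image plane, in the same units as the inputs (e.g. millimetres)."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)                      # unit normal
    return abs(np.dot(np.asarray(tip, dtype=float) - plane_point, n))

# Image plane through the origin with normal along z (hypothetical frame):
d = tip_to_plane_distance(tip=[3.0, -1.0, 4.0],
                          plane_point=[0.0, 0.0, 0.0],
                          plane_normal=[0.0, 0.0, 2.0])
print(d)  # 4.0 -> the tip is 4 mm out of plane
```

A larger distance means the operator advanced the needle while its tip was out of the imaged plane, which is consistent with the unsuccessful trials reported above.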

    A Design Thinking Framework for Human-Centric Explainable Artificial Intelligence in Time-Critical Systems

    Artificial Intelligence (AI) has seen a surge in popularity as increased computing power has made it more viable and useful. The increasing complexity of AI, however, can lead to difficulty in understanding or interpreting the results of AI procedures, which can in turn lead to incorrect predictions, classifications, or analyses of outcomes. The result of these problems can be over-reliance on AI, under-reliance on AI, or simply confusion about what the results mean. Additionally, the complexity of AI models can obscure the algorithmic, data, and design biases to which all models are subject, which may exacerbate negative outcomes, particularly with respect to minority populations. Explainable AI (XAI) aims to mitigate these problems by providing information on the intent, performance, and reasoning process of the AI. Where time or cognitive resources are limited, the burden of additional information can negatively impact performance. Ensuring XAI information is intuitive and relevant allows the user to quickly calibrate their trust in the AI, in turn improving trust in suggested task alternatives, reducing workload, and improving task performance. This study details a structured approach to the development of XAI in time-critical systems based on a design thinking framework that preserves the agile, fast-iterative approach characteristic of design thinking and augments it with practical tools and guides. The framework establishes a focus on a shared situational perspective and a deep understanding of both users and the AI in the empathy phase, provides a model with seven XAI levels and corresponding solution themes, and defines objective, physiological metrics for concurrent assessment of trust and workload.