2 research outputs found

    The Impact of Performance Expectancy, Workload, Risk, and Satisfaction on Trust in ChatGPT: Cross-sectional Survey Analysis

    This study investigated how perceived workload, satisfaction, performance expectancy, and risk-benefit perception influenced users' trust in Chat Generative Pre-Trained Transformer (ChatGPT). We aimed to understand the nuances of user engagement and provide insights to improve future design and adoption strategies for similar technologies. A semi-structured, web-based survey was conducted among adults in the United States who actively use ChatGPT at least once a month. The survey ran from February 22 through March 24, 2023. We used structural equation modeling to understand the relationships among the constructs of perceived workload, satisfaction, performance expectancy, risk-benefit, and trust. The analysis of 607 survey responses revealed a significant negative relationship between perceived workload and user satisfaction, a negative but nonsignificant relationship between perceived workload and trust, and a positive relationship between user satisfaction and trust. Trust also increased with performance expectancy. In contrast, the relationship between the benefit-to-risk ratio of using ChatGPT and trust was nonsignificant. The findings underscore the importance of user-friendly design and functionality in AI-based applications to reduce workload and enhance user satisfaction, thereby increasing user trust. Future research should further explore the relationship between the benefit-to-risk ratio and trust in the context of AI chatbots.
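    The reported path structure (workload negatively predicting satisfaction, satisfaction positively predicting trust) can be sketched in miniature. The study fitted a full structural equation model; the sketch below is a simplified stand-in that estimates each path with ordinary least squares on synthetic data, so the coefficient values, noise model, and variable names are assumptions, not the study's estimates.

    ```python
    import random
    import statistics

    def ols_slope(x, y):
        """Slope of the least-squares line y ~ x."""
        mx, my = statistics.fmean(x), statistics.fmean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        var = sum((a - mx) ** 2 for a in x)
        return cov / var

    random.seed(0)
    n = 607  # sample size reported in the study

    # Synthetic constructs wired in the direction the abstract reports:
    # higher workload -> lower satisfaction; higher satisfaction -> higher trust.
    workload = [random.gauss(0, 1) for _ in range(n)]
    satisfaction = [-0.4 * w + random.gauss(0, 1) for w in workload]
    trust = [0.5 * s + random.gauss(0, 1) for s in satisfaction]

    print(round(ols_slope(workload, satisfaction), 2))  # near -0.4 (negative path)
    print(round(ols_slope(satisfaction, trust), 2))     # near 0.5 (positive path)
    ```

    A full SEM would estimate both paths jointly with latent constructs and fit indices; the per-path OLS here only illustrates the sign pattern.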

    Exploring the Effects of Task Priority on Attention Allocation and Trust Towards Imperfect Automation: A Flight Simulator Study

    The present study examined the effects of task priority and task load on attention allocation and automation trust in a multitasking flight simulator platform. Previous research demonstrated that participants made fewer fixations on, and reported lower trust in, the automation supporting a secondary monitoring task when load on the primary tracking task was higher (e.g., Karpinsky et al., 2018). Those results suggested that participants perceived the automated system's behavior less accurately because less attention was allocated to monitoring it, leading to decreased trust. One potential explanation is that the elevated task load led participants to prioritize the tracking task over monitoring the automation. The current study employed a 2 x 2 mixed design crossing task difficulty (low vs. high) with task priority (equal vs. tracking priority). Participants performed a central tracking task, a system monitoring task, and a fuel management task, with the system monitoring task assisted by an imperfect automated system. Participants were instructed either to prioritize the central tracking task over the other two tasks or to maximize performance on all tasks. Additionally, participants received feedback on their tracking performance anchored to their baseline performance. The data indicated that participants rated performance-based trust lower in the multitasking environment when all tasks were equally prioritized, supporting the notion that task priority modulates the effect of task load.