1,197 research outputs found

    Intrusiveness, Trust and Argumentation: Using Automated Negotiation to Inhibit the Transmission of Disruptive Information

    No full text
    The question of how to promote the growth and diffusion of information has been extensively addressed by a wide research community. A common assumption underpinning most studies is that the information to be transmitted is useful and of high quality. In this paper, we endorse a complementary perspective. We investigate how the growth and diffusion of high-quality information can be managed and maximized by preventing, dampening and minimizing the diffusion of low-quality, unwanted information. To this end, we focus on the conflict between pervasive computing environments and the joint activities undertaken in parallel local social contexts. When technologies for distributed activities (e.g. mobile technology) develop, both artifacts and services that enable people to participate in non-local contexts are likely to intrude on local situations. As a mechanism for minimizing the intrusion of the technology, we develop a computational model of argumentation-based negotiation among autonomous agents. A key role in the model is played by trust: which arguments are used and how they are evaluated depend on how trustworthy the agents judge one another. To gain insight into the implications of the model, we conduct a number of virtual experiments. The results enable us to explore how intrusiveness is affected by trust, the negotiation network and the agents' ability to conduct argumentation.
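
    As a rough illustration of the mechanism described above (not the authors' actual model), the following Python sketch shows trust-weighted argument evaluation between two agents: a receiver accepts an argument to suppress an intrusive message only if the sender's argument strength, scaled by how much the receiver trusts the sender, clears a threshold. All class names, parameter values, and thresholds are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class Argument:
            claim: str        # e.g. "defer this notification"
            strength: float   # sender's own assessment of the argument, in [0, 1]

        @dataclass
        class Agent:
            name: str
            trust: dict       # trust in other agents, values in [0, 1]
            threshold: float  # minimum weighted strength needed to accept an argument

            def evaluates_positively(self, sender: "Agent", arg: Argument) -> bool:
                # Trust scales how much weight the receiver gives the sender's argument.
                weighted = self.trust.get(sender.name, 0.5) * arg.strength
                return weighted >= self.threshold

        # Toy negotiation: the "local" agent argues against transmitting an intrusive message.
        local = Agent("local", trust={"remote": 0.8}, threshold=0.5)
        remote = Agent("remote", trust={"local": 0.3}, threshold=0.5)

        arg = Argument(claim="defer this notification until the meeting ends", strength=0.7)
        if remote.evaluates_positively(local, arg):
            print("remote defers the message (argument accepted)")
        else:
            print("remote transmits anyway (argument rejected)")

    With low trust in the local agent, the argument is rejected; raising that trust value flips the outcome, which is the kind of dependence the virtual experiments presumably explore at network scale.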

    Facing the Artificial: Understanding Affinity, Trustworthiness, and Preference for More Realistic Digital Humans

    Get PDF
    In recent years, companies have been developing more realistic looking human faces for digital, virtual agents controlled by artificial intelligence (AI). But how do users feel about interacting with such virtual agents? We used a controlled lab experiment to examine users’ perceived trustworthiness, affinity, and preference towards a real human travel agent appearing via video (i.e., Skype) as well as in the form of a very human-realistic avatar; half of the participants were (deceptively) told the avatar was a virtual agent controlled by AI while the other half were told the avatar was controlled by the same human travel agent. Results show that participants rated the video human agent as more trustworthy, had more affinity for him, and preferred him to both avatar versions. Users who believed the avatar was a virtual agent controlled by AI reported the same level of affinity, trustworthiness, and preference towards the agent as those who believed it was controlled by a human. Thus, use of a realistic digital avatar lowered affinity, trustworthiness, and preference, but how the avatar was controlled (by human or machine) had no effect. The conclusion is that improved visual fidelity alone makes a significant positive difference and that users are not averse to advanced AI simulating human presence; some may even be anticipating such an advanced technology.

    My Actions Speak Louder Than Your Words: When User Behavior Predicts Their Beliefs about Agents' Attributes

    Full text link
    An implicit expectation of asking users to rate agents, such as an AI decision-aid, is that they will use only relevant information -- ask them about an agent's benevolence, and they should consider whether or not it was kind. Behavioral science, however, suggests that people sometimes use irrelevant information. We identify an instance of this phenomenon, where users who experienced better outcomes in a human-agent interaction -- outcomes that were the result of their own behavior -- systematically rated the agent as having better abilities, being more benevolent, and exhibiting greater integrity in a post hoc assessment than users who experienced worse outcomes with the same agent. Our analyses suggest the need to augment models so that they account for such biased perceptions, as well as mechanisms so that agents can detect and even actively work to correct this and similar user biases.
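
    The "augmentation of models" the authors call for is left abstract here; as one illustrative and entirely hypothetical reading, the sketch below regresses post hoc ratings on the user's own outcome so that the outcome-driven component can be estimated and subtracted. The data, effect sizes, and variable names are invented and are not the paper's analysis.

        import numpy as np

        # Hypothetical example data: each row is one user.
        # outcome: how well the interaction went for the user (0..1),
        #          largely driven by the user's own behavior.
        # rating:  the user's post hoc benevolence rating of the agent (1..7).
        rng = np.random.default_rng(0)
        n = 200
        outcome = rng.uniform(0, 1, n)
        true_benevolence = 4.0                       # the agent itself never changes
        rating = true_benevolence + 2.0 * (outcome - 0.5) + rng.normal(0, 0.5, n)

        # Fit rating ~ intercept + outcome to estimate how much of the rating
        # is explained by the user's own outcome rather than by the agent.
        X = np.column_stack([np.ones(n), outcome])
        coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
        intercept, outcome_effect = coef

        # A debiased rating subtracts the estimated outcome contribution.
        debiased = rating - outcome_effect * (outcome - outcome.mean())

        print(f"estimated outcome effect on ratings: {outcome_effect:.2f}")
        print(f"mean raw rating: {rating.mean():.2f}, mean debiased rating: {debiased.mean():.2f}")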

    User trust here and now but not necessarily there and then - A Design Perspective on Appropriate Trust in Automated Vehicles (AVs)

    Get PDF
    Automation may carry out functions previously conducted only by humans. In the past, interaction with automation was primarily designed for, and used by, users with special training (for example pilots in aviation or operators in the process industry), but as automation has developed and matured, it has also become available to users with no additional training, such as users of automated vehicles (AVs). However, before we can reap the benefits of AV use, users must first trust the vehicles. According to earlier studies on trust in automation (TiA), user trust is a precondition for the use of automated systems, not only because it is essential to user acceptance but also because it is a prerequisite for a good user experience. Furthermore, user trust should be appropriate in relation to the actual performance of the AV, that is, calibrated to the capabilities and limitations of the AV; otherwise it may lead to misuse or disuse of the AV.
The issue of how to design for appropriate user trust was approached from a user-centred design perspective based on earlier TiA theories and was addressed in four user studies using mixed-method research designs. The four studies involved three types of AVs: an automated car, an automated public transport bus, and an automated delivery bot for last-mile deliveries (LMD) of parcels. The users ranged from ordinary car drivers and bus drivers to public transport commuters and logistics personnel.
The findings show that user trust in the AVs was primarily affected by information relating to the performance of the AV: factors such as how predictable, reliable and capable the AV was perceived to be when conducting a task, how appropriate the behaviour of the AV was perceived to be for the task, and whether or not the user understood why the AV behaved as it did. Secondly, contextual aspects also influenced user trust in AVs. These primarily related to the users’ perception of risk and of task difficulty: user trust was affected by the perceived risk to oneself, but also by the possible risks the AV could impose on others, e.g. other road users. Perceived task difficulty influenced user trust when a task was perceived as (too) easy, when the user could not judge the trustworthiness of the AV, or when the AV increased the task difficulty for the user, thus adding to negative outcomes. Therefore, AV-related trust factors and contextual aspects are important to consider when designing for appropriate user trust in different types of AVs operating in different domains.
However, a more in-depth cross-study analysis and subsequent synthesis showed that, when designing for appropriate user trust, these factors and aspects should be considered but should not be the focus. They are effects, that is, the user’s interpretation of information originating from the behaviour of the AV in a particular context, which in turn is the consequence of the following design variables and the interplay between them: (I) Who, i.e. the AV itself; (II) What the AV does; (III) by What Means the AV does it; (IV) When the AV does it; (V) Why the AV does it; and (VI) Where the AV does it. Furthermore, user trust was found to be affected by the interdependency between (II) What the AV does and (VI) Where the AV does it; these were always assessed together by the user, in turn affecting user trust. From these findings a tentative Framework of Trust Analysis & Design was developed. The framework can be used as a ‘tool-for-thought’ and accounts for the activity conducted by the AV, the context, and their interdependence, which ultimately affect user trust.

    Factors That Enhance Consumer Trust in Human-Computer Interaction: An Examination of Interface Factors and Moderating Influences

    Get PDF
    The Internet coupled with agent technology presents a unique setting to examine consumer trust. Since the Internet is a relatively new, technically complex environment where human-computer interaction (HCI) is the basic communication modality, there is a greater perception of risk facing consumers and hence a greater need for trust. In this dissertation, the notion of consumer trust was revisited and conceptually redefined from an integrative perspective. A critical test of trust theory revealed its cognitive (i.e., competence, information credibility), affective (i.e., benevolence), and intentional (i.e., trusting intention) constructs. The theoretical relationships among these trust constructs were confirmed through confirmatory factor analysis and structural equation modeling. The primary purpose of this dissertation was to investigate antecedent and moderating factors affecting consumer trust in HCI. The dissertation focused on interface-based antecedents of trust in the agent-assisted shopping context, aiming to discover potential interface strategies as a means of enhancing consumer trust in the computer agent. The effects of certain interface design factors, including face human-likeness, script social presence, information richness, and the price increase associated with an upgrade recommendation by the computer agent, were examined for their usefulness in enhancing the affective and cognitive bases of consumer trust. In addition, the role of individual difference factors and situational factors in moderating the relationship between specific types of computer interfaces and consumer trust perceptions was examined. Two experiments were conducted employing a computer agent, Agent John, which was created using Macromedia Authorware. The results of the two experiments showed that certain interface factors, including face and script, could affect the affective trust perception. Information richness did not enhance consumers’ cognitive trust perceptions; instead, the percentage of price increase associated with Agent John’s upgrade recommendation affected individuals’ cognitive trust perceptions. Interestingly, the moderating influence of consumer personality (especially feminine orientation) on trust perceptions was significant. The consequences of enhanced consumer trust included increased conversion behavior, satisfaction and retention, and, to a lesser extent, self-disclosure behavior. Finally, theoretical and managerial implications as well as future research directions were discussed.
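
    As a rough, purely illustrative stand-in for the confirmatory factor analysis and structural equation modeling described above (not the dissertation's actual analysis), the sketch below builds composite scores for the four trust constructs from hypothetical item ratings and inspects their correlations; all item names and data are invented.

        import numpy as np
        import pandas as pd

        # Hypothetical item-level ratings (1-7 Likert) for the four trust constructs
        # named in the abstract; three items per construct, 150 simulated respondents.
        rng = np.random.default_rng(1)
        n = 150
        base = rng.normal(4, 1, n)  # shared trusting tendency so constructs correlate

        def items(prefix):
            return {f"{prefix}{i}": np.clip(base + rng.normal(0, 0.8, n), 1, 7) for i in range(1, 4)}

        df = pd.DataFrame({**items("competence"), **items("credibility"),
                           **items("benevolence"), **items("intention")})

        # Composite score per construct = mean of its items (a crude stand-in for
        # the latent factors estimated by CFA/SEM in the dissertation).
        composites = pd.DataFrame({
            c: df[[f"{c}{i}" for i in range(1, 4)]].mean(axis=1)
            for c in ["competence", "credibility", "benevolence", "intention"]
        })

        # Inter-construct correlations approximate the "theoretical relationships
        # among trust constructs" that the dissertation tests formally.
        print(composites.corr().round(2))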

    Exploring the Efficacy of Social Trust Repair in Human-Automation Interactions

    Get PDF
    Trust is a critical component of both human-automation and human-human interactions. Interface manipulations, such as visual anthropomorphism and machine politeness, have been used to affect trust in automation. However, these design strategies have primarily been used to facilitate initial trust formation and have not been examined as means of actively repairing trust that has been violated by a system failure. Previous research has shown that trust in another party can be effectively repaired after a violation using various strategies, but there is little evidence substantiating such strategies in a human-automation context. The current study examined the effectiveness of trust repair strategies, derived from human-human or human-organizational contexts, in human-automation interaction. During a taxi dispatching task, participants interacted with imperfect automation that either denied or apologized for committing competence- or integrity-based failures. Participants performed two experimental blocks (one for each failure type) and, after each block, reported subjective trust in the automation. Consistent with the interpersonal literature, our analysis revealed that automation apologies more successfully repaired trust following competence-based failures than integrity-based failures. However, user trust in automation did not differ significantly depending on whether the automation denied committing competence- or integrity-based failures. These findings provide important insight into the unique ways in which humans interact with machines.

    Human, Hybrid, or Machine? Exploring the Trustworthiness of Voice-Based Assistants

    Get PDF
    This study investigates how people assess the trustworthiness of perceptually hybrid communicative technologies such as voice-based assistants (VBAs). VBAs are often perceived as hybrids between human and machine, which challenges previously distinct definitions of human and machine trustworthiness. Thus, this study explores how the two trustworthiness models can be combined into a hybrid trustworthiness model, which model (human, hybrid, or machine) is most applicable for examining VBA trustworthiness, and whether this differs between respondents with different levels of prior experience with VBAs. Results from two surveys revealed that, overall, the human model exhibited the best model fit; however, the hybrid model also showed acceptable fit as prior experience increased. Findings are discussed in light of the ongoing discourse on establishing adequate measures for HMC research.

    Using Noninvasive Brain Measurement to Explore the Psychological Effects of Computer Malfunctions on Users during Human-Computer Interactions

    Full text link
    In today’s technologically driven world, there is a need to better understand the ways that common computer malfunctions affect computer users. These malfunctions may have measurable influences on users’ cognitive, emotional, and behavioral responses. An experiment was conducted in which participants performed a series of web search tasks while wearing functional near-infrared spectroscopy (fNIRS) and galvanic skin response sensors. Two computer malfunctions were introduced during the sessions that had the potential to influence correlates of user trust and suspicion. Surveys were given after each session to measure users’ perceived emotional state, cognitive load, and perceived trust. Results suggest that fNIRS can be used to measure the different cognitive and emotional responses associated with computer malfunctions. These cognitive and emotional changes were correlated with users’ self-reported levels of suspicion and trust, and in turn they suggest future work that further explores the capability of fNIRS for measuring user experience during human-computer interactions.
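
    To make concrete the kind of analysis such a study implies (the paper's actual processing pipeline is not described in the abstract), here is a minimal, hypothetical sketch that correlates per-participant physiological features with self-reported trust; the feature names, units, and data are invented.

        import numpy as np
        from scipy.stats import pearsonr

        # Hypothetical per-participant features: mean oxygenated-hemoglobin change
        # from the fNIRS channels during the malfunction period, mean galvanic skin
        # response (GSR) amplitude, and the post-session self-reported trust score.
        rng = np.random.default_rng(42)
        n_participants = 30
        fnirs_hbo = rng.normal(0.2, 0.05, n_participants)      # arbitrary units
        gsr_amplitude = rng.normal(1.0, 0.3, n_participants)   # microsiemens
        trust_score = 5 - 4 * fnirs_hbo + rng.normal(0, 0.3, n_participants)

        # Correlate each physiological feature with self-reported trust, as one
        # simple way to check whether the sensors track the subjective measure.
        for name, feature in [("fNIRS HbO", fnirs_hbo), ("GSR amplitude", gsr_amplitude)]:
            r, p = pearsonr(feature, trust_score)
            print(f"{name} vs. trust: r = {r:.2f}, p = {p:.3f}")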

    Exploring social metacognition

    Get PDF
    This thesis explores two questions: does the way individuals seek advice produce echo chamber-like networks; and is the well-established phenomenon of egocentric discounting explicable as a rational process? Both parts are presented within a framework of advice as information transfer; the implications for wider interpretations of advice are discussed in the conclusion. Both parts are investigated with a mixture of computational simulations and behavioural experiments. For the first question, behavioural experiments implementing a Judge-Advisor System with a perceptual decision-making task and a date estimation task are used to characterise people’s propensity to use agreement as a signal of advice quality in the absence of feedback. These experiments provide moderate evidence that people do use agreement in this way, and that experience of agreement in the absence of feedback increases their trust in advisors. Agent-based computational simulations take the results of the behavioural experiments and simulate their effects on trust ratings between agents. The simulations indicate that including the kind of heterogeneity seen among the participants in the behavioural experiments slows down the formation of echo chambers and limits the extent of polarisation. In the second part, I argue that egocentric discounting deviates from a normative model of advice-taking because it is a rational response to concerns that always accompany advice: that the advice might be deliberately misleading, lazily researched, or misunderstood. Evolutionary computational simulations of advice-taking illustrate that when any of these circumstances might be true, egocentric discounting emerges as an adaptive response. Behavioural experiments using a date estimation task within a Judge-Advisor System test whether people respond adaptively to alterations in the circumstances explored in the evolutionary simulations. These experiments show that people respond flexibly to changes in the probability that their advisor will attempt to mislead them. Experiments exploring whether people respond flexibly to information about an advisor’s confidence calibration were inconclusive. A web-book version of this thesis is available at https://mjaquiery.github.io/oxforddown/. Its RMarkdown source code is available at https://github.com/mjaquiery/oxforddown
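
    As a rough illustration of the two mechanisms the thesis studies, the sketch below combines a simple Judge-Advisor update (with a weight on advice below 0.5 standing in for egocentric discounting) with an agreement-based trust update in the absence of feedback. The parameter values, thresholds, and update rule are invented for illustration and are not the thesis's actual models or simulations.

        import numpy as np

        def take_advice(own_estimate, advice, weight_on_advice=0.3):
            # weight_on_advice < 0.5 means the judge favours their own estimate,
            # i.e. egocentric discounting.
            return (1 - weight_on_advice) * own_estimate + weight_on_advice * advice

        def update_trust(trust, own_estimate, advice, agreement_threshold=5.0, step=0.05):
            # Without feedback, advice close to one's own estimate is treated as a
            # signal of advisor quality: agreement raises trust, disagreement lowers it.
            agrees = abs(advice - own_estimate) <= agreement_threshold
            return float(np.clip(trust + (step if agrees else -step), 0.0, 1.0))

        # Example: a date-estimation style task where the true answer is never revealed.
        trust = 0.5
        own = 1955.0  # the judge's own estimate of a historical date
        for advice in [1953.0, 1980.0, 1956.0]:
            final = take_advice(own, advice)
            trust = update_trust(trust, own, advice)
            print(f"advice={advice:.0f} -> final estimate={final:.1f}, trust in advisor={trust:.2f}")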

    Can Trust be Trusted in Cybersecurity?

    Get PDF
    Human compliance in cybersecurity continues to be a persistent problem for organizations. This research-in-progress advances theoretical understanding of the negative effects of trust formed between individuals and the cybersecurity function (i.e., those responsible for protection), the cybersecurity system (i.e., the protective technologies), and the organization (i.e., those verifying the cybersecurity department), effects that lead to suboptimal compliance behaviors. In contrast to the current information security literature, which focuses on how organizations can induce compliance, this study begins to provide insight into the degradation of compliance within organizations and how to combat it. An integrated model is conceptualized using theories of trust and attention. This model provides the theoretical foundation for studying the role of dark side trust in the context of cybersecurity and suggests initial mechanisms to reduce it. Additionally, by developing this conceptualization of dark side trust and the accompanying model, this study contributes to the general study of trust in information systems research beyond the domain of cybersecurity.