
    Deceptive Language by Innocent and Guilty Criminal Suspects: The Influence of Dominance, Question, and Guilt on Interview Responses

    Matthew L. Jensen is an assistant professor in the Price College of Business and a researcher in the Center for Applied Social Research at the University of Oklahoma. His primary research interests are deception and credibility in online and face-to-face interaction. Recent publications have dealt with computer-aided deception detection and establishing credibility online.

    The effect of conversational agent skill on user behavior during deception

    Conversational agents (CAs) are an integral component of many personal and business interactions. Many recent advancements in CA technology have attempted to make these interactions more natural and human-like. However, it is currently unclear how human-like traits in a CA affect the way users respond to questions from the CA. In some applications where CAs may be used, detecting deception is important. Design elements that make CA interactions more human-like may induce undesired strategic behaviors from human deceivers to mask their deception. To better understand this interaction, this research investigates the effect of a CA's conversational skill—that is, its ability to mimic human conversation—on behavioral indicators of deception. Our results show that cues of deception vary depending on CA conversational skill, and that increased conversational skill leads users to engage in strategic behaviors that are detrimental to deception detection. This finding suggests that for applications in which it is desirable to detect when individuals are lying, the pursuit of more human-like interactions may be counterproductive.

    Facilitating Natural Conversational Agent Interactions: Lessons from a Deception Experiment

    This study reports the results of a laboratory experiment exploring interactions between humans and a conversational agent. Using the ChatScript language, we created a chat bot that asked participants to describe a series of images. The two objectives of this study were (1) to analyze the impact of dynamic responses on participants' perceptions of the conversational agent, and (2) to explore behavioral changes in interactions with the chat bot (i.e., response latency and pauses) when participants engaged in deception. We discovered that a chat bot that provides adaptive responses based on the participant's input dramatically increases the perceived humanness and engagement of the conversational agent. Deceivers interacting with a dynamic chat bot exhibited consistent response latencies and pause lengths, while deceivers interacting with a static chat bot exhibited longer response latencies and pause lengths. These results offer new insights into social interactions with computer agents during truthful and deceptive interactions.
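    The behavioral measures this abstract mentions (response latency and pause length) can be derived from a timestamped chat transcript. A minimal sketch, assuming a simple (timestamp, speaker) event log; the function name and log format are illustrative, since the study's actual instrumentation is not described here:

    ```python
    from statistics import mean

    def response_metrics(events):
        """Compute mean response latency and mean pause length from a
        chat transcript. `events` is a list of (timestamp_seconds, speaker)
        tuples, where speaker is "bot" or "user". Hypothetical helper --
        not the study's actual code."""
        latencies = []   # time from a bot prompt to the user's first reply
        pauses = []      # gaps between consecutive user messages in one turn
        last_bot = None
        last_user = None
        for t, speaker in events:
            if speaker == "bot":
                last_bot = t
                last_user = None
            else:
                if last_bot is not None and last_user is None:
                    latencies.append(t - last_bot)   # first reply after prompt
                elif last_user is not None:
                    pauses.append(t - last_user)     # intra-turn pause
                last_user = t
        return (mean(latencies) if latencies else 0.0,
                mean(pauses) if pauses else 0.0)

    # Example: two bot prompts, three user messages.
    log = [(0.0, "bot"), (2.5, "user"), (4.0, "user"), (10.0, "bot"), (13.5, "user")]
    latency, pause = response_metrics(log)  # latency 3.0 s, pause 1.5 s
    ```

    Under this scheme, the study's comparison would reduce to contrasting these per-participant means between the dynamic-bot and static-bot conditions.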

    Online Disinformation and the Psychological Bases of Prejudice and Political Conservatism

    It is widely believed that the impact of fake news, internet rumors, hoaxes, deceptive memes, and the like is spilling into the physical world from the virtual world. In fact, social media has played a significant role in the origination and spread of such deceptive communication, as social media users often lack awareness of the intentional manipulation of online content and are easily tricked into believing unverifiable content. In an increasingly polarized world where social media and the internet have pushed people to live inside “echo chambers” and “filter bubbles,” people are consciously and unconsciously exposed only to content that reinforces their confirmation bias. In such a scenario, people agree only with content that aligns with their preexisting beliefs and disagree with, or label as “fake,” content that opposes their worldview. This paper proposes to study the psychological differences that cause people to either agree or disagree with such prejudiced and ideologically oriented online disinformation.

    Theory of Robot Communication: II. Befriending a Robot over Time

    Building on theories of Computer-Mediated Communication (CMC), Human-Robot Interaction, and Media Psychology (i.e., the Theory of Affective Bonding), the current paper proposes an explanation of how, over time, people experience the mediated or simulated aspects of the interaction with a social robot. In two simultaneously running loops, a more reflective process is balanced with a more affective process. If human interference is detected behind the machine, Robot-Mediated Communication commences, which basically follows CMC assumptions; if human interference remains undetected, Human-Robot Communication comes into play, treating the robot as an autonomous social actor. The more emotionally aroused a robot user is, the more likely they are to develop an affective relationship with what is actually a machine. The main contribution of this paper is an integration of Computer-Mediated Communication, Human-Robot Communication, and Media Psychology, outlining a full-blown theory of robot communication connected to friendship formation, accounting for communicative features, modes of processing, and psychophysiology.
    Comment: Hoorn, J. F. (2018). Theory of robot communication: II. Befriending a robot over time. arXiv:cs, 2502572(v1), 1-2.

    Automated deception detection of 911 call transcripts


    Disharmony and Matchless: Interpersonal Deception Theory in Online Dating

    In recent years, computer-mediated communication has not only become extremely popular but has also begun to hold an important function in daily social interactions. This qualitative study investigates the communication phenomenon of deception as it occurs in the online dating environment. The research focused on four questions: (1) About what characteristics are online daters deceptive? (2) What motivation do online daters have for deceiving others in the online dating environment? (3) What perceptions do online daters have about other daters' deceit towards them in the online dating environment? (4) How does deception affect the romantic relationships formed in the online dating environment? Data were collected through an online survey consisting of 15 open-ended questions. A total of 52 participants were included in the study, ranging in age from 21 to 37. The study found that the majority of online daters consider themselves and others to be mostly honest in their online self-presentations. Those online daters who did use deception were motivated by the desire to attract members of the opposite sex and to project a positive self-image. Daters were also willing to overlook deception in others if they viewed the dishonesty as a slight exaggeration or as a characteristic of little value to the dater. Despite the deception that does occur, participants still believe that the online dating environment is capable of fostering successful romantic relationships.

    Deception Detection in a Computer-Mediated Environment: Gender, Trust, and Training Issues

    The Department of Defense is increasingly relying on computer-mediated communications to conduct business. This reliance introduces an amplified vulnerability to strategic information manipulation, or deception. This research draws on the communication and deception literature to develop a conceptual model proposing relationships between deception detection ability in a computer-mediated environment, gender, trust, and training. An experiment was conducted with 119 communications personnel to test the proposed hypotheses. No relationship between gender or trust and deception detection accuracy was found. Partial support was found for the hypothesis that training improves deception detection accuracy. The most significant finding was that individuals' deception detection abilities deteriorate in lean media environments. The results showed significant differences in deception detection abilities across media types, indicating lower accuracy rates in lean media environments (i.e., audio and text). This suggests that deception detection is more difficult when the deceptive message is presented in a lean medium, such as a text-only online chat, than when it is delivered in a richer medium. Future research should be conducted to further explore this finding.