7,257 research outputs found
Deceptive Language by Innocent and Guilty Criminal Suspects: The Influence of Dominance, Question, and Guilt on Interview Responses
Matthew L. Jensen is an assistant professor in the Price College of Business and a researcher in the Center for Applied Social Research at the University of Oklahoma. His primary research interests are deception and credibility in online and face-to-face interaction. Recent publications have dealt with computer-aided deception detection and establishing credibility online.
The effect of conversational agent skill on user behavior during deception
Conversational agents (CAs) are an integral component of many personal and business interactions. Many recent advancements in CA technology have attempted to make these interactions more natural and human-like. However, it is currently unclear how human-like traits in a CA impact the way users respond to questions from the CA. In some applications where CAs may be used, detecting deception is important. Design elements that make CA interactions more human-like may induce undesired strategic behaviors from human deceivers to mask their deception. To better understand this interaction, this research investigates the effect of conversational skill (that is, the ability of the CA to mimic human conversation) on behavioral indicators of deception. Our results show that cues of deception vary depending on CA conversational skill, and that increased conversational skill leads to users engaging in strategic behaviors that are detrimental to deception detection. This finding suggests that for applications in which it is desirable to detect when individuals are lying, the pursuit of more human-like interactions may be counterproductive.
Untangling a Web of Lies: Exploring Automated Detection of Deception in Computer-Mediated Communication
Safeguarding organizations against opportunism and severe deception in computer-mediated communication (CMC) presents a major challenge to CIOs and IT managers. New insights into linguistic cues of deception derive from the speech acts innate to CMC. Applying automated text analysis to archival email exchanges in a CMC system as part of a reward program, we assess the ability of word use (micro-level), message development (macro-level), and intertextual exchange cues (meta-level) to detect severe deception by business partners. We empirically assess the predictive ability of our framework using an ordinal multilevel regression model. Results indicate that deceivers minimize the use of referencing and self-deprecation but include more superfluous descriptions and flattery. Deceitful channel partners also over-structure their arguments and rapidly mimic the linguistic style of the account manager across dyadic e-mail exchanges. Thanks to its diagnostic value, the proposed framework can support firms' decision-making and guide compliance monitoring system development.
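Word-use (micro-level) cues such as the flattery and self-reference rates described in this abstract can be sketched as simple lexicon-based rates over tokenized messages. This is a minimal illustration only: the cue lexica below are hypothetical placeholders, not the word lists used in the study.

```python
import re
from collections import Counter

# Hypothetical cue lexica for illustration; the study's actual
# word lists are not reproduced here.
SELF_REFERENCES = {"i", "me", "my", "mine", "myself"}
FLATTERY = {"great", "wonderful", "excellent", "fantastic", "amazing"}

def micro_cues(message: str) -> dict:
    """Compute word-use (micro-level) cue rates for one message."""
    tokens = re.findall(r"[a-z']+", message.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)  # avoid division by zero on empty input
    return {
        "self_reference_rate": sum(counts[w] for w in SELF_REFERENCES) / total,
        "flattery_rate": sum(counts[w] for w in FLATTERY) / total,
        "word_count": len(tokens),
    }

cues = micro_cues("I think your proposal is great, truly excellent work.")
```

Rates like these, computed per message and per partner, would then serve as predictors in a model such as the ordinal multilevel regression the abstract mentions.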
Facilitating Natural Conversational Agent Interactions: Lessons from a Deception Experiment
This study reports the results of a laboratory experiment exploring interactions between humans and a conversational agent. Using the ChatScript language, we created a chat bot that asked participants to describe a series of images. The two objectives of this study were (1) to analyze the impact of dynamic responses on participants' perceptions of the conversational agent, and (2) to explore behavioral changes in interactions with the chat bot (i.e., response latency and pauses) when participants engaged in deception. We discovered that a chat bot that provides adaptive responses based on the participant's input dramatically increases the perceived humanness and engagement of the conversational agent. Deceivers interacting with a dynamic chat bot exhibited consistent response latencies and pause lengths, while deceivers with a static chat bot exhibited longer response latencies and pause lengths. These results give new insights on social interactions with computer agents during truthful and deceptive interactions.
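The response-latency measure described here could be computed from timestamped conversation turns along the following lines. This is a sketch under assumed conventions: the `Turn` structure and the example timestamps are illustrative, not taken from the experiment.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # "bot" or "user"
    start: float   # seconds since session start
    end: float

def response_latencies(turns):
    """Latency = time from the end of a bot prompt to the start of the
    user's reply. Assumes turns are in chronological order."""
    latencies = []
    for prev, cur in zip(turns, turns[1:]):
        if prev.speaker == "bot" and cur.speaker == "user":
            latencies.append(cur.start - prev.end)
    return latencies

turns = [
    Turn("bot", 0.0, 2.0), Turn("user", 3.5, 8.0),
    Turn("bot", 8.2, 10.0), Turn("user", 12.0, 15.0),
]
lat = response_latencies(turns)  # [1.5, 2.0]
```

Within-turn pause lengths would need finer-grained keystroke or audio timestamps, but the same end-to-start differencing applies.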
Online Disinformation and the Psychological Bases of Prejudice and Political Conservatism
It is widely believed that the impact of fake news, internet rumors, hoaxes, deceptive memes, etc. is spilling into the physical world from the virtual world. In fact, social media has had a significant role in the origination and spread of such deceptive communication, as social media users often lack awareness of the intentional manipulation of online content and are easily tricked into believing unverifiable content. In an increasingly polarized world where social media and the internet have pushed people to live inside "echo chambers" and "filter bubbles," people consciously and unconsciously are exposed only to content that reinforces their confirmation bias. In such a scenario, people only agree with content that aligns with their preexisting beliefs and disagree with, or label as "fake," content that is opposed to their worldview. This paper proposes to study the psychological differences that cause people to either agree or disagree with such prejudiced and ideologically oriented online disinformation.
Theory of Robot Communication: II. Befriending a Robot over Time
In building on theories of Computer-Mediated Communication (CMC), Human-Robot Interaction, and Media Psychology (i.e., the Theory of Affective Bonding), the current paper proposes an explanation of how, over time, people experience the mediated or simulated aspects of the interaction with a social robot. In two simultaneously running loops, a more reflective process is balanced against a more affective process. If human interference is detected behind the machine, Robot-Mediated Communication commences, which basically follows CMC assumptions; if human interference remains undetected, Human-Robot Communication comes into play, treating the robot as an autonomous social actor. The more emotionally aroused a robot user is, the more likely they are to develop an affective relationship with what is actually a machine. The main contribution of this paper is an integration of Computer-Mediated Communication, Human-Robot Communication, and Media Psychology, outlining a full-blown theory of robot communication connected to friendship formation, accounting for communicative features, modes of processing, and psychophysiology.
Comment: Hoorn, J. F. (2018). Theory of robot communication: II. Befriending a robot over time. arXiv:cs, 2502572(v1), 1-2
Disharmony and Matchless: Interpersonal Deception Theory in Online Dating
In recent years, computer-mediated communication has not only become extremely popular but has also begun to hold an important function in daily social interactions. This qualitative study investigates the communication phenomenon of deception as it occurs in the online dating environment. The research study focused on four questions: (1) About what characteristics are online daters deceptive? (2) What motivation do online daters have for their deception of others in the online dating environment? (3) What perceptions do online daters have about other daters' deceit towards them in the online dating environment? (4) How does deception affect the romantic relationships formed in the online dating environment? Data were collected through an online survey tool with 15 open-ended questions. A total of 52 participants were included in the study, ranging in age from 21 to 37. The results of the study found that the majority of online daters consider themselves and others to be mostly honest in their online self-presentations. Those online daters who did use deception were motivated to do so by the longing to attract members of the opposite sex and project a positive self-image. Daters were also willing to overlook deception in others if they viewed the dishonesty as a slight exaggeration or a characteristic of little value to the dater. Despite the deception that does occur, participants still believe that the online dating environment is capable of developing successful romantic relationships.
Deception Detection in a Computer-Mediated Environment: Gender, Trust, and Training Issues
The Department of Defense is increasingly relying on computer-mediated communications to conduct business. This reliance introduces an amplified vulnerability to strategic information manipulation, or deception. This research draws on communication and deception literature to develop a conceptual model proposing relationships between deception detection abilities in a computer-mediated environment, gender, trust, and training. An experiment was conducted with 119 communications personnel to test the proposed hypotheses. No relationship between gender or trust and deception detection accuracy was found. Partial support was found showing that training improves deception detection accuracy. The most significant finding was that individuals' deception detection abilities deteriorate in lean media environments. The results showed significant differences in deception detection abilities across media types, indicating lower accuracy rates in the lean media environments (i.e., audio and text). This suggests that deception detection is more difficult when the deceptive message is presented in a lean medium, such as a text-only online chat, than when delivered in a richer medium. Future research should be conducted to further explore this finding.
- …