3,608 research outputs found
Pragmatic and Cultural Considerations for Deception Detection in Asian Languages
In hopes of sparking a discussion, I argue for much-needed research on automated deception detection in Asian languages. The task of discerning truthful texts from deceptive ones is challenging, but a logical sequel to opinion mining. I suggest that applied computational linguists pursue broader interdisciplinary research on cultural differences and pragmatic use of language in Asian cultures, before turning to detection methods based on a primarily Western (English-centric) worldview. Deception is fundamentally human, but how do various cultures interpret and judge deceptive behavior?
Leader Member Exchange: An Interactive Framework to Uncover a Deceptive Insider as Revealed by Human Sensors
This study intends to provide a theoretical ground that conceptualizes the prospect of detecting insider threats based on leader-member exchange. This framework specifically corresponds to two propositions raised by Ho, Kaarst-Brown et al. [42]. Team members that are geographically co-located or dispersed are analogized as human sensors in social networks with the ability to collectively "react" to deception, even when the act of deception itself is not obvious to any one member. Close interactive relationships are the key to afford a network of human sensors an opportunity to formulate baseline knowledge of a deceptive insider. The research hypothesizes that groups unknowingly impacted by a deceptive leader are likely to use certain language-action cues when interacting with each other after a leader violates group trust
Truth and Deception at the Rhetorical Structure Level
This paper furthers the development of methods to distinguish truth from deception in textual data. We use rhetorical structure theory (RST) as the analytic framework to identify systematic differences between deceptive and truthful stories in terms of their coherence and structure. A sample of 36 elicited personal stories, self-ranked as truthful or deceptive, is manually analyzed by assigning RST discourse relations among each story's constituent parts. A vector space model (VSM) assesses each story's position in multidimensional RST space with respect to its distance from truthful and deceptive centers as measures of the story's level of deception and truthfulness. Ten human judges evaluate independently whether each story is deceptive and assign their confidence levels (360 evaluations total), producing measures of the expected human ability to recognize deception. As a robustness check, a test sample of 18 truthful stories (with 180 additional evaluations) is used to determine the reliability of our RST-VSM method in determining deception. The contribution is in demonstrating discourse structure analysis as a significant method for automated deception detection and an effective complement to lexicosemantic analysis. The potential is in developing novel discourse-based tools to alert information users to potential deception in computer-mediated texts
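The VSM step described above can be sketched as distance-to-centroid scoring. This is a minimal illustration, not the paper's implementation: the relation counts and the three-dimensional feature space are invented for the example, while real stories would be vectors of counts over the full RST relation inventory.

```python
import numpy as np

# Hypothetical per-story counts of three RST relations (e.g. Elaboration,
# Evidence, Contrast) for labeled training stories. Purely illustrative data.
truthful = np.array([[4, 2, 1], [5, 1, 0], [3, 2, 2]], dtype=float)
deceptive = np.array([[1, 0, 3], [0, 1, 4], [2, 0, 3]], dtype=float)

# Centers of the truthful and deceptive clusters in RST space.
truth_center = truthful.mean(axis=0)
decep_center = deceptive.mean(axis=0)

def deception_score(story):
    """Distance to the truthful center minus distance to the deceptive
    center: positive suggests deception, negative suggests truthfulness."""
    return (np.linalg.norm(story - truth_center)
            - np.linalg.norm(story - decep_center))

print(deception_score(np.array([0.0, 1.0, 4.0])))  # positive: deception-like
print(deception_score(np.array([4.0, 2.0, 1.0])))  # negative: truth-like
```

The sign of the score acts as the classifier, and its magnitude gives a graded measure of how strongly a story's discourse structure resembles either center.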
Recommended from our members
Untangling a Web of Lies: Exploring Automated Detection of Deception in Computer-Mediated Communication
Safeguarding organizations against opportunism and severe deception in computer-mediated communication (CMC) presents a major challenge to CIOs and IT managers. New insights into linguistic cues of deception derive from the speech acts innate to CMC. Applying automated text analysis to archival email exchanges in a CMC system as part of a reward program, we assess the ability of word use (micro-level), message development (macro-level), and intertextual exchange cues (meta-level) to detect severe deception by business partners. We empirically assess the predictive ability of our framework using an ordinal multilevel regression model. Results indicate that deceivers minimize the use of referencing and self-deprecation but include more superfluous descriptions and flattery. Deceitful channel partners also over-structure their arguments and rapidly mimic the linguistic style of the account manager across dyadic e-mail exchanges. Thanks to its diagnostic value, the proposed framework can support firms' decision-making and guide compliance monitoring system development
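One meta-level cue named above, mimicry of the counterpart's linguistic style, is commonly operationalized as function-word style matching. The sketch below is an assumption-laden toy version: the word categories and the similarity formula (1 minus the normalized rate difference, averaged over categories) are a generic style-matching measure, not the study's actual feature set.

```python
# Illustrative function-word categories; a real system would use a much
# larger, validated lexicon.
FUNCTION_WORDS = {
    "pronouns": {"i", "you", "we", "it"},
    "articles": {"a", "an", "the"},
    "prepositions": {"in", "of", "to", "with"},
}

def category_rates(text):
    """Proportion of words falling in each function-word category."""
    words = text.lower().split()
    return {cat: sum(w in vocab for w in words) / max(len(words), 1)
            for cat, vocab in FUNCTION_WORDS.items()}

def style_matching(text_a, text_b):
    """Mean per-category similarity, 1 - |a-b|/(a+b); 1.0 = identical style."""
    ra, rb = category_rates(text_a), category_rates(text_b)
    sims = [1 - abs(ra[c] - rb[c]) / (ra[c] + rb[c] or 1)
            for c in FUNCTION_WORDS]
    return sum(sims) / len(sims)
```

Tracking this score across successive messages in a dyadic exchange would show the "rapid mimicry" pattern as an unusually fast rise toward 1.0.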
Establishing a Foundation for Automated Human Credibility Screening
Automated human credibility screening is an emerging research area that has potential for high impact in fields as diverse as homeland security and accounting fraud detection. Systems that conduct interviews and make credibility judgments, which could bring objectivity, improved accuracy, and greater reliability to credibility assessment practices, need to be built. This study establishes a foundation for developing automated systems for human credibility screening
- …