2,483 research outputs found

    Constructing Robust Emotional State-based Feature with a Novel Voting Scheme for Multi-modal Deception Detection in Videos

    Full text link
    Deception detection is an important task that has become a hot research topic due to its potential applications. It can be applied in many areas, from national security (e.g., airport security, jurisprudence, and law enforcement) to real-life applications (e.g., business and computer vision). However, several critical problems remain and deserve further investigation. One of the most significant challenges in deception detection is data scarcity. To date, only one multi-modal benchmark open dataset for human deception detection has been released; it contains 121 video clips (61 deceptive and 60 truthful). Such a small amount of data is insufficient to train deep neural network-based methods, so existing models often suffer from overfitting and low generalization ability. Moreover, the ground-truth data contain frames that are unusable for various reasons, a problem most of the literature has overlooked. Therefore, in this paper, we first design a series of data preprocessing methods to address these problems. We then propose a multi-modal deception detection framework that constructs our novel emotional state-based feature and uses the open toolkit openSMILE to extract features from the audio modality. We also design a voting scheme to combine the emotional state information obtained from the visual and audio modalities. Finally, we derive the novel emotional state transformation feature with our self-designed algorithms. In the experiments, we conduct a critical analysis and comparison of the proposed methods against state-of-the-art multi-modal deception detection methods. The results show that overall multi-modal deception detection performance improves significantly, with accuracy rising from 87.77% to 92.78% and ROC-AUC from 0.9221 to 0.9265. Comment: 8 pages, for AAAI23 publication
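    The abstract does not spell out the voting rule, so the following is only a minimal sketch of how emotion predictions from the visual and audio streams might be combined by weighted voting; the emotion labels, weights, and function names are assumptions for illustration, not the authors' algorithm.

    from collections import Counter

    def vote_emotional_state(visual_labels, audio_labels,
                             visual_weight=1.0, audio_weight=1.0):
        """Combine per-segment emotion predictions from two modalities
        by weighted voting and return the winning emotional state."""
        tally = Counter()
        for label in visual_labels:
            tally[label] += visual_weight
        for label in audio_labels:
            tally[label] += audio_weight
        return tally.most_common(1)[0][0]

    # Example: the visual model mostly predicts fear, the audio model mostly neutral.
    visual = ["fear", "fear", "surprise", "fear"]
    audio = ["neutral", "fear", "neutral"]
    print(vote_emotional_state(visual, audio))  # -> "fear"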

    Recent Trends in Deep Learning Based Personality Detection

    Full text link
    Recently, the automatic prediction of personality traits has received a lot of attention. In particular, personality trait prediction from multimodal data has emerged as a hot topic within affective computing. In this paper, we review significant machine learning models that have been employed for personality detection, with an emphasis on deep learning-based methods. This review provides an overview of the most popular approaches to automated personality detection, the available computational datasets, industrial applications, and state-of-the-art machine learning models for personality detection, with a specific focus on multimodal approaches. Personality detection is a broad and diverse topic: this survey focuses only on computational approaches and leaves out psychological studies of personality detection.

    Automated Deception Detection from Videos: Using End-to-End Learning Based High-Level Features and Classification Approaches

    Full text link
    Deception detection is an interdisciplinary field attracting researchers from psychology, criminology, computer science, and economics. We propose a multimodal approach combining deep learning and discriminative models for automated deception detection. Using video modalities, we employ convolutional end-to-end learning to analyze gaze, head pose, and facial expressions, achieving promising results compared with state-of-the-art methods. Due to limited training data, we also utilize discriminative models for deception detection. Although sequence-to-class approaches are explored, discriminative models outperform them because of data scarcity. Our approach is evaluated on five datasets, including a new Rolling-Dice Experiment motivated by economic factors. Results indicate that facial expressions outperform gaze and head pose, and that combining modalities with feature selection enhances detection performance. Differences in expressed features across datasets emphasize the importance of scenario-specific training data and the influence of context on deceptive behavior. Cross-dataset experiments reinforce these findings. Despite the challenges posed by low-stake datasets, including the Rolling-Dice Experiment, deception detection performance exceeds chance levels. Our proposed multimodal approach and comprehensive evaluation shed light on the potential of automating deception detection from video modalities, opening avenues for future research. Comment: 29 pages, 17 figures (19 if counting subfigures)
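    As a rough illustration of the kind of feature-level fusion with feature selection described above (not the paper's actual pipeline), one could concatenate per-clip features from the three visual channels, keep the most discriminative ones, and train a discriminative classifier; all array shapes and parameter values below are assumptions.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 120  # hypothetical number of clips
    # Stand-ins for per-clip feature vectors from the three visual channels.
    gaze = rng.normal(size=(n, 16))
    head_pose = rng.normal(size=(n, 16))
    face = rng.normal(size=(n, 64))
    y = rng.integers(0, 2, size=n)  # 1 = deceptive, 0 = truthful

    # Early fusion: concatenate modality features, keep the top-k most
    # discriminative ones, then fit a discriminative classifier.
    X = np.hstack([gaze, head_pose, face])
    clf = make_pipeline(SelectKBest(f_classif, k=32), SVC(kernel="rbf"))
    print(cross_val_score(clf, X, y, cv=5).mean())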

    Pragmatic and Cultural Considerations for Deception Detection in Asian Languages

    Get PDF
    In hopes of sparking a discussion, I argue for much-needed research on automated deception detection in Asian languages. The task of discerning truthful texts from deceptive ones is challenging, but it is a logical sequel to opinion mining. I suggest that applied computational linguists pursue broader interdisciplinary research on cultural differences and the pragmatic use of language in Asian cultures before turning to detection methods based on a primarily Western (English-centric) worldview. Deception is fundamentally human, but how do various cultures interpret and judge deceptive behavior?

    Veracity Roadmap: Is Big Data Objective, Truthful and Credible?

    Get PDF
    This paper argues that big data can possess different characteristics, which affect its quality. Depending on its origin, the data processing technologies, and the methodologies used for data collection and scientific discovery, big data can have biases, ambiguities, and inaccuracies that need to be identified and accounted for to reduce inference errors and improve the accuracy of generated insights. Big data veracity is now recognized as a necessary property for its utilization, complementing the three previously established quality dimensions (volume, variety, and velocity), yet there has been little discussion of the concept of veracity thus far. This paper provides a roadmap for theoretical and empirical definitions of veracity along with its practical implications. We explore veracity across three main dimensions: (1) objectivity/subjectivity, (2) truthfulness/deception, and (3) credibility/implausibility, and propose to operationalize each of these dimensions with existing or potential computational tools, relevant particularly to textual data analytics. We combine the measures of the veracity dimensions into one composite index: the big data veracity index. This newly developed index provides a useful way of assessing systematic variations in big data quality across datasets with textual information. The paper contributes to big data research by categorizing the range of existing tools that measure the suggested dimensions, and to Library and Information Science (LIS) by proposing to account for the heterogeneity of diverse big data and to identify the information quality dimensions important for each big data type.
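    The abstract does not give the exact aggregation rule, so the following is a minimal sketch, assuming each dimension is scored on [0, 1] and the composite index is a weighted mean of the three dimension scores; the weights and function name are illustrative.

    def veracity_index(objectivity, truthfulness, credibility,
                       weights=(1 / 3, 1 / 3, 1 / 3)):
        """Combine three veracity dimension scores, each assumed to lie in
        [0, 1], into a single composite index via a weighted mean."""
        scores = (objectivity, truthfulness, credibility)
        return sum(w * s for w, s in zip(weights, scores))

    # Example: a text collection scored 0.8 on objectivity, 0.6 on
    # truthfulness, and 0.7 on credibility.
    print(round(veracity_index(0.8, 0.6, 0.7), 3))  # -> 0.7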

    Assessing the Credibility of Cyber Adversaries

    Get PDF
    Online communications are ever increasing, and we are constantly faced with the challenge of deciding whether online information is credible. Assessing the credibility of others was once the work solely of intelligence agencies. In the current era of disinformation and misinformation, understanding what we are reading and to whom we are paying attention is essential for making considered, informed, and accurate decisions; it has become everyone's business. This paper employs a literature review to examine the empirical evidence on online credibility, trust, deception, and fraud detection, consolidating this information to understand adversary online credibility: how do we know that the person with whom we are conversing is who they say they are? Based on this review, we propose a model that examines information characteristics as well as user and interaction characteristics to best inform an assessment of online credibility. Limitations and future opportunities are highlighted.

    Audiovisual integration of emotional signals from others' social interactions

    Get PDF
    Audiovisual perception of emotions has typically been examined using displays of a solitary character (e.g., the face and voice and/or body and sound of one actor). However, in real life humans often face more complex multisensory social situations involving more than one person. Here we ask whether the audiovisual facilitation in emotion recognition previously found in simpler social situations extends to more complex and ecologically valid situations. Stimuli consisting of the biological motion and voices of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips and were asked to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task while trying to ignore either the visual or the auditory information. The findings of both experiments indicate that when the reliability of the auditory cue was decreased, participants weighted the visual cue more heavily in their emotional judgments, which in turn translated into increased emotion recognition accuracy in the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity.
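    The reported shift toward the visual cue when the auditory cue is degraded is the signature of reliability-weighted cue integration; the sketch below shows generic inverse-variance weighting (not the study's analysis), in which the noisier cue simply receives less weight.

    def combine_cues(visual_estimate, visual_var, auditory_estimate, auditory_var):
        """Reliability-weighted (inverse-variance) combination of two cues:
        the noisier cue receives less weight in the combined estimate."""
        w_v = 1.0 / visual_var
        w_a = 1.0 / auditory_var
        combined = (w_v * visual_estimate + w_a * auditory_estimate) / (w_v + w_a)
        combined_var = 1.0 / (w_v + w_a)
        return combined, combined_var

    # Example: degrading the auditory cue (larger variance) pulls the
    # combined estimate toward the visual estimate.
    print(combine_cues(visual_estimate=0.9, visual_var=0.1,
                       auditory_estimate=0.2, auditory_var=0.8))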

    Language Use Of Successful Liars

    Full text link
    Little research has been done to determine whether the cues to deception studied by academia and delivered to law enforcement agencies are equally useful for detecting both skilled and unskilled liars. This study investigated the effects of deceptive skill on six linguistic variables, including parts of speech and emotional affect. Data were gathered from transcripts of a deceptive group communication task conducted in an online synchronous chat environment. An analysis of the transcripts confirmed that liars can be distinguished from truth-tellers and revealed that skill is also a factor affecting language patterns. Analyzed with a mixed-model ANOVA, first-person pronouns, second-person pronouns, and conjunctions all showed a main effect of role, distinguishing liars from truth-tellers. Furthermore, skilled liars were found to use fewer words, fewer first- and second-person pronouns, and fewer conjunctions in synchronous chat.
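    The study used a mixed-model ANOVA; a linear mixed model is one way to approximate that kind of analysis in Python. The sketch below uses simulated counts, and all column names, effect sizes, and the random-effect structure are assumptions for illustration only.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 80  # hypothetical number of speakers
    df = pd.DataFrame({
        "role": rng.choice(["liar", "truth_teller"], size=n),
        "skill": rng.choice(["skilled", "unskilled"], size=n),
        "group_id": rng.integers(1, 11, size=n),  # chat group as grouping factor
    })
    # Simulated first-person-pronoun counts, lower for skilled liars.
    base = 10 - 3 * ((df["role"] == "liar") & (df["skill"] == "skilled"))
    df["first_person"] = rng.poisson(base)

    # Role and skill as fixed effects, chat group as a random intercept.
    model = smf.mixedlm("first_person ~ role * skill", data=df, groups=df["group_id"])
    print(model.fit().summary())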

    Novel Applications of Response Time-Based Memory Detection

    Full text link