
    Intelligent Deception Detection through Machine Based Interviewing

    In this paper, an automatic deception detection system is demonstrated that analyses participants' deception-risk scores from non-verbal behaviour captured during an interview conducted by an Avatar. The system is built on a configuration of artificial neural networks, which detect facial objects and extract non-verbal behaviour in the form of micro gestures over short periods of time. A set of empirical experiments was conducted based on a typical airport-security scenario of packing a suitcase. Data were collected from 30 participants, each assigned to either a truthful or a deceptive scenario and interviewed by a machine-based border-guard Avatar. Promising results were achieved using raw, unprocessed data on unoptimised classifier neural networks. These indicate that a machine-based interviewing technique can elicit non-verbal interviewee behaviour that allows an automatic system to detect deception.
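    The overall shape of such a pipeline, as the abstract describes it, is per-frame micro-gesture detections aggregated over short time windows into a risk score. Below is a minimal sketch of only that aggregation step; the window size, scoring rule, and all names are illustrative assumptions, not the paper's method.

    ```python
    # Hedged sketch: per-frame non-verbal "micro gesture" activations
    # (hypothetically produced upstream by a neural network) are averaged
    # over fixed-size windows into window-level deception-risk scores.
    # Window size and data are illustrative, not from the paper.

    def window_risk(frame_scores, window=30):
        """Mean gesture activation per fixed-size, non-overlapping window."""
        return [sum(frame_scores[i:i + window]) / window
                for i in range(0, len(frame_scores) - window + 1, window)]

    # Illustrative input: a calm segment followed by an agitated one.
    frames = [0.1] * 30 + [0.8] * 30
    print(window_risk(frames))  # two window-level risk scores
    ```

    A real system would feed these window-level scores into a classifier rather than thresholding them directly.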

    Deception Detection in Videos

    We present a system for covert automated deception detection in real-life courtroom trial videos. We study the importance of different modalities like vision, audio and text for this task. On the vision side, our system uses classifiers trained on low-level video features which predict human micro-expressions. We show that predictions of high-level micro-expressions can be used as features for deception prediction. Surprisingly, IDT (Improved Dense Trajectory) features, which have been widely used for action recognition, are also very good at predicting deception in videos. We fuse the scores of classifiers trained on IDT features and high-level micro-expressions to improve performance. MFCC (Mel-frequency Cepstral Coefficients) features from the audio domain also provide a significant boost in performance, while information from transcripts is not very beneficial for our system. Using various classifiers, our automated system obtains an AUC of 0.877 (10-fold cross-validation) when evaluated on subjects which were not part of the training set. Even though state-of-the-art methods use human annotations of micro-expressions for deception detection, our fully automated approach outperforms them by 5%. When combined with human annotations of micro-expressions, our AUC improves to 0.922. We also present results of a user study to analyze how well average humans perform on this task, which modalities they use for deception detection, and how they perform if only one modality is accessible. Our project page can be found at https://doubaibai.github.io/DARE/. Published at AAAI 2018.
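    The late score fusion the abstract mentions can be sketched as averaging per-video scores from two classifiers and evaluating the fused scores with AUC. This is a minimal illustration under assumed toy data; the classifier names, scores, and labels are invented, and the paper's actual fusion and evaluation protocol may differ.

    ```python
    # Hedged sketch of late score fusion: average per-video deception scores
    # from two hypothetical classifiers (IDT-based and micro-expression-based),
    # then evaluate the fused scores with AUC. All numbers are illustrative.

    def auc(scores, labels):
        """Area under the ROC curve via the rank-sum (Mann-Whitney) formula."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Illustrative per-video deception scores from two modalities.
    idt_scores   = [0.9, 0.4, 0.7, 0.2, 0.6]
    micro_scores = [0.8, 0.3, 0.9, 0.4, 0.5]
    labels       = [1,   0,   1,   0,   1]   # 1 = deceptive, 0 = truthful

    fused = [(a + b) / 2 for a, b in zip(idt_scores, micro_scores)]
    print(auc(fused, labels))
    ```

    Averaging is the simplest late-fusion rule; weighted sums or a second-stage classifier over the scores are common alternatives.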

    Lying takes time: a meta-analysis on reaction time measures of deception

    Lie detection techniques are frequently used, but most of them have been criticized for the lack of empirical support for their predictive validity and presumed underlying mechanisms. This situation has led to increased efforts to unravel the cognitive mechanisms underlying deception and to develop a comprehensive theory of deception. A cognitive approach to deception has reinvigorated interest in reaction time (RT) measures to differentiate lies from truths and to investigate whether lying is more cognitively demanding than truth telling. Here, we provide the results of a meta-analysis of 114 studies (n = 3307) using computerized RT paradigms to assess the cognitive cost of lying. Results revealed a large standardized RT difference, even after correction for publication bias (d = 1.049; 95% CI [0.930; 1.169]), with large heterogeneity amongst effect sizes. Moderator analyses revealed that the RT deception effect was smaller, yet still large, in studies in which participants received instructions to avoid detection. The autobiographical Implicit Association Test produced smaller effects than the Concealed Information Test, the Sheffield Lie Test, and the Differentiation of Deception paradigm. An additional meta-analysis (17 studies, n = 348) showed that, like other deception measures, RT deception measures are susceptible to countermeasures. Whereas our meta-analysis corroborates current cognitive approaches to deception, the observed heterogeneity calls for further research on the boundary conditions of the cognitive cost of deception. RT-based measures of deception may have potential in applied settings, but countermeasures remain an important challenge.
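    The reported effect, d = 1.049 with a 95% CI, is a standardized mean difference. As a worked illustration of how such a value arises, the sketch below computes Cohen's d for a lie-vs-truth RT contrast and a normal-approximation confidence interval; all input numbers and sample sizes are invented, not the meta-analysis data, and the meta-analysis itself pools many such study-level estimates rather than computing one.

    ```python
    import math

    # Hedged sketch: Cohen's d for a single (hypothetical) study comparing
    # lie vs. truth reaction times, plus an approximate 95% CI using the
    # standard large-sample SE formula for d with two independent groups.

    def cohens_d(mean_lie, mean_truth, sd_pooled):
        """Standardized mean difference (positive = lying is slower)."""
        return (mean_lie - mean_truth) / sd_pooled

    def ci95(d, n1, n2):
        """Normal-approximation 95% CI for d (independent groups)."""
        se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
        return d - 1.96 * se, d + 1.96 * se

    # Invented study-level numbers (RTs in milliseconds).
    d = cohens_d(mean_lie=980.0, mean_truth=850.0, sd_pooled=130.0)
    lo, hi = ci95(d, n1=40, n2=40)
    print(round(d, 3), round(lo, 3), round(hi, 3))
    ```

    A random-effects meta-analysis would then weight many such d values by their inverse variances to obtain a pooled estimate like the one reported.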