Intelligent Deception Detection through Machine Based Interviewing
In this paper an automatic deception detection system is demonstrated, which analyses participants' deception risk scores from non-verbal behaviour captured during an interview conducted by an Avatar. The system is built on a configuration of artificial neural networks, which are used to detect facial objects and extract non-verbal behaviour in the form of micro gestures over short periods of time. A set of empirical experiments was conducted based on a typical airport security scenario of packing a suitcase. Data were collected from 30 participants taking part in either a truthful or a deceptive scenario while being interviewed by a machine-based border guard Avatar. Promising results were achieved using raw unprocessed data on un-optimized classifier neural networks. These indicate that a machine-based interviewing technique can elicit non-verbal interviewee behaviour which allows an automatic system to detect deception.
Deception in Spoken Dialogue: Classification and Individual Differences
Automatic deception detection is an important problem with far-reaching implications in many areas, including law enforcement, military and intelligence agencies, social services, and politics. Despite extensive efforts to develop automated deception detection technologies, there have been few objective successes. This is likely due to the many challenges involved, including the lack of large, cleanly recorded corpora; the difficulty of acquiring ground truth labels; and major differences in incentives for lying in the laboratory vs. lying in real life. Another well-recognized issue is that there are individual and cultural differences in deception production and detection, although little has been done to identify them. Human performance at deception detection is at the level of chance, making it an uncommon problem where machines can potentially outperform humans.
This thesis addresses these challenges associated with research of deceptive speech. We created the Columbia X-Cultural Deception (CXD) Corpus, a large-scale collection of deceptive and non-deceptive dialogues between native speakers of Standard American English and Mandarin Chinese. This corpus enabled a comprehensive study of deceptive speech on a large scale.
In the first part of the thesis, we introduce the CXD corpus and present an empirical analysis of acoustic-prosodic and linguistic cues to deception. We also describe machine learning classification experiments to automatically identify deceptive speech using those features. Our best classifier achieves classification accuracy of almost 70%, well above human performance.
The second part of this thesis addresses individual differences in deceptive speech. We present a comprehensive analysis of individual differences in verbal cues to deception, and several methods for leveraging these speaker differences to improve automatic deception classification. We identify many differences in cues to deception across gender, native language, and personality. Our comparison of approaches for leveraging these differences shows that speaker-dependent features that capture a speaker's deviation from their natural speaking style can improve deception classification performance. We also develop neural network models that accurately model speaker-specific patterns of deceptive speech.
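The speaker-dependent normalization described above, in which features capture a speaker's deviation from their natural speaking style, can be sketched as follows. The data layout (a list of speaker/feature-vector pairs) is an illustrative assumption, not the thesis's actual pipeline or the CXD corpus format.

```python
from collections import defaultdict

def speaker_deviation_features(samples):
    """Replace each utterance's raw acoustic-prosodic features with the
    deviation from that speaker's own mean (their "natural speaking style").
    `samples` is a list of (speaker_id, feature_vector) pairs -- a
    hypothetical layout used here only for illustration."""
    sums = {}
    counts = defaultdict(int)
    for spk, feats in samples:
        if spk not in sums:
            sums[spk] = [0.0] * len(feats)
        sums[spk] = [s + f for s, f in zip(sums[spk], feats)]
        counts[spk] += 1
    # Per-speaker mean of each feature dimension.
    means = {spk: [s / counts[spk] for s in sums[spk]] for spk in sums}
    # Deviation features: raw value minus the speaker's own baseline.
    return [(spk, [f - m for f, m in zip(feats, means[spk])])
            for spk, feats in samples]
```

The resulting deviation features can then be fed to any standard classifier in place of (or alongside) the raw features.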
The contributions of this work add substantially to our scientific understanding of deceptive speech, and have practical implications for human practitioners and automatic deception detection.
Deception Detection in Videos
We present a system for covert automated deception detection in real-life courtroom trial videos. We study the importance of different modalities, namely vision, audio, and text, for this task. On the vision side, our system uses classifiers trained on low-level video features which predict human micro-expressions. We show that predictions of high-level micro-expressions can be used as features for deception prediction. Surprisingly, IDT (Improved Dense Trajectory) features, which have been widely used for action recognition, are also very good at predicting deception in videos. We fuse the scores of classifiers trained on IDT features and high-level micro-expressions to improve performance. MFCC (Mel-frequency Cepstral Coefficient) features from the audio domain also provide a significant boost in performance, while information from transcripts is not very beneficial for our system. Using various classifiers, our automated system obtains an AUC of 0.877 (10-fold cross-validation) when evaluated on subjects who were not part of the training set. Even though state-of-the-art methods use human annotations of micro-expressions for deception detection, our fully automated approach outperforms them by 5%. When combined with human annotations of micro-expressions, our AUC improves to 0.922. We also present the results of a user study analyzing how well average humans perform on this task, which modalities they use for deception detection, and how they perform if only one modality is accessible. Our project page can be found at https://doubaibai.github.io/DARE/.
Comment: AAAI 2018, project page: https://doubaibai.github.io/DARE
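The score-level fusion the abstract describes, combining per-modality classifier scores before computing AUC, can be sketched as below. The uniform weights and the example scores are illustrative assumptions, not the paper's actual values.

```python
def fuse_scores(modality_scores, weights=None):
    """Late fusion: weighted average of per-modality classifier scores
    (e.g. an IDT-based, a micro-expression, and an MFCC classifier).
    `modality_scores` is a list of per-modality score lists, one score
    per sample; equal weights are used unless others are given."""
    n = len(modality_scores)
    weights = weights or [1.0 / n] * n
    return [sum(w * s for w, s in zip(weights, sample))
            for sample in zip(*modality_scores)]

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U)
    statistic: the fraction of (positive, negative) pairs the positive
    sample outscores, counting ties as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

With fused scores in hand, a 10-fold cross-validation loop would simply apply `auc` to the held-out fold's labels and fused scores.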
The truth-telling motor cortex: Response competition in M1 discloses deceptive behaviour
Neural circuits associated with response conflict are active during deception. Here we use transcranial magnetic stimulation to examine for the first time whether competing responses in primary motor cortex can be used to detect lies. Participants used their little finger or thumb to respond either truthfully or deceitfully regarding facial familiarity. Motor-evoked potentials (MEPs) from muscles associated with both digits tracked the development of each motor plan. When preparing to deceive, the MEP of the non-responding digit (i.e. the plan corresponding to the truth) exceeds the MEP of the responding digit (i.e. the lie), whereas a mirror-reversed pattern occurs when telling the truth. This give-away response conflict interacts with the time of stimulation during a speeded reaction period. Lies can even activate digit-specific cortical representations when only verbal responses are made. Our findings support neurobiological models which blend cognitive decision-making with motor programming, and suggest a novel index for discriminating between honest and intentionally false facial recognition.
Lying takes time: a meta-analysis on reaction time measures of deception
Lie detection techniques are frequently used, but most of them have been criticized for the lack of empirical support for their predictive validity and presumed underlying mechanisms. This situation has led to increased efforts to unravel the cognitive mechanisms underlying deception and to develop a comprehensive theory of deception. A cognitive approach to deception has reinvigorated interest in reaction time (RT) measures to differentiate lies from truths and to investigate whether lying is more cognitively demanding than truth telling. Here, we provide the results of a meta-analysis of 114 studies (n = 3307) using computerized RT paradigms to assess the cognitive cost of lying. Results revealed a large standardized RT difference, even after correction for publication bias (d = 1.049; 95% CI [0.930; 1.169]), with large heterogeneity amongst effect sizes. Moderator analyses revealed that the RT deception effect was smaller, yet still large, in studies in which participants received instructions to avoid detection. The autobiographical Implicit Association Test produced smaller effects than the Concealed Information Test, the Sheffield Lie Test, and the Differentiation of Deception paradigm. An additional meta-analysis (17 studies, n = 348) showed that, like other deception measures, RT deception measures are susceptible to countermeasures. Whereas our meta-analysis corroborates current cognitive approaches to deception, the observed heterogeneity calls for further research on the boundary conditions of the cognitive cost of deception. RT-based measures of deception may have potential in applied settings, but countermeasures remain an important challenge.
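A standardized RT difference such as the reported d = 1.049 is a pooled-standard-deviation effect size (Cohen's d). A minimal sketch of the computation follows; the lie/truth reaction times in the usage note are made up for illustration, not data from the meta-analysis.

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference between two groups using the pooled
    standard deviation -- the form of effect size used for lie-vs-truth
    RT comparisons. Assumes each group has at least two observations."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (Bessel-corrected).
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

For example, `cohens_d(lie_rts, truth_rts)` with lie RTs consistently above truth RTs yields a large positive d, mirroring the "lying takes time" effect.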