248 research outputs found
Deception/Truthful Prediction Based on Facial Feature and Machine Learning Analysis
Automatic deception detection refers to the investigative practices used to determine whether a person is telling the truth or lying. It has been studied extensively as it can be useful in many real-life scenarios in health, justice, and security systems. Many psychological studies of deception detection have been reported. Polygraph testing is a currently popular technique for detecting deception, but it requires human intervention and training. In recent times, many machine learning based approaches have been applied to detect deception. Various modalities, such as thermal imaging, brain activity mapping, acoustic analysis, eye tracking, facial micro-expression processing, and linguistic analysis, are used to detect deception. Machine learning techniques based on facial feature analysis look like a promising path for automatic deception detection: they work without human intervention and may give better results because they are not affected by race or ethnicity. Moreover, one can run a covert operation to find deceit using facial video recording, and a covert operation may capture the real personality of deceptive persons. By combining various facial features such as facial emotion, facial micro-expressions, eye blink rate, pupil size, and facial Action Units, we can achieve better accuracy in deception detection.
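The feature-combination idea in the abstract above can be sketched as simple concatenation of per-video feature vectors followed by a standard classifier. This is a minimal illustration, not the paper's method: all feature names, dimensions, and the random data are hypothetical stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-video features (dimensions are illustrative, data is random):
# 7 emotion probabilities, 17 Action Unit intensities, and two scalar cues
# (eye-blink rate and mean pupil size).
n_videos = 200
emotion = rng.random((n_videos, 7))
action_units = rng.random((n_videos, 17))
blink_rate = rng.random((n_videos, 1))
pupil_size = rng.random((n_videos, 1))

# Fuse the cues by concatenating them into one feature vector per video.
features = np.hstack([emotion, action_units, blink_rate, pupil_size])
labels = rng.integers(0, 2, n_videos)  # 1 = deceptive, 0 = truthful (synthetic)

# Any off-the-shelf classifier can consume the fused vector.
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(features.shape)  # (200, 26)
```

In practice each block of columns would come from a separate extractor (an emotion classifier, an AU detector, an eye tracker), and the fusion step is what lets the classifier exploit complementary cues.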
Machine Learning-based Lie Detector applied to a Novel Annotated Game Dataset
Lie detection is considered a concern for everyone in their day-to-day life
given its impact on human interactions. Thus, people normally pay attention to
both what their interlocutors are saying and also to their visual appearances,
including faces, to try to find any signs that indicate whether the person is
telling the truth or not. While automatic lie detection may help us
understand these lying characteristics, current systems are still fairly
limited, partly due to the lack of adequate datasets to evaluate their performance
in realistic scenarios. In this work, we have collected an annotated dataset of
facial images, comprising both 2D and 3D information of several participants
during a card game that encourages players to lie. Using our collected dataset,
we evaluated several types of machine learning-based lie detectors in
generalization, person-specific, and cross-domain experiments. Our results
show that models based on deep learning achieve the best accuracy, reaching up
to 57% for the generalization task and 63% when dealing with a single
participant. Finally, we also highlight the limitations of deep learning-based
lie detectors when dealing with cross-domain lie detection tasks.
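The distinction the abstract draws between generalization and person-specific evaluation comes down to whether test participants appear in the training set. A common way to enforce the generalization protocol is a subject-wise split, sketched here on synthetic data (the clip counts and feature dimensions are made up for illustration):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(1)

# Synthetic stand-in: 60 clips from 6 participants (10 clips each).
X = rng.random((60, 8))
y = rng.integers(0, 2, 60)
subjects = np.repeat(np.arange(6), 10)

# Generalization protocol: each fold tests only on participants whose clips
# never appeared in training, so no subject overlaps between the two sets.
n_folds = 0
for train_idx, test_idx in GroupKFold(n_splits=3).split(X, y, groups=subjects):
    assert set(subjects[train_idx]).isdisjoint(set(subjects[test_idx]))
    n_folds += 1
print(n_folds)  # 3
```

A person-specific protocol would instead split one participant's own clips into train and test portions, which is why it typically yields higher accuracy than the generalization task.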
Objective Classes for Micro-Facial Expression Recognition
Micro-expressions are brief spontaneous facial expressions that appear on a
face when a person conceals an emotion, making them different from normal facial
expressions in subtlety and duration. Currently, emotion classes within the
CASME II dataset are based on Action Units and self-reports, creating conflicts
during machine learning training. We will show that classifying expressions
using Action Units, instead of predicted emotion, removes the potential bias of
human reporting. The proposed classes are tested using LBP-TOP, HOOF and HOG 3D
feature descriptors. The experiments are evaluated on two benchmark FACS coded
datasets: CASME II and SAMM. The best result achieves 86.35% accuracy when
classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the
result of the state-of-the-art 5-class emotional-based classification in CASME
II. Results indicate that classification based on Action Units provides an
objective method to improve micro-expression recognition.
Comment: 11 pages, 4 figures and 5 tables. This paper will be submitted for
journal review.
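The HOG 3D descriptor mentioned above summarizes a spatio-temporal volume by histograms of gradient orientations. The following is a heavily simplified sketch of that idea (it bins only spatial orientation, whereas the real descriptor quantizes 3-D gradient directions over a cell grid; the clip here is random):

```python
import numpy as np

def simple_hog3d(volume, n_bins=8):
    """Much-simplified spatio-temporal gradient histogram, illustrating the
    idea behind HOG 3D (not the full polyhedral binning of the real descriptor).
    volume: array of shape (T, H, W), e.g. a grayscale micro-expression clip."""
    gt, gy, gx = np.gradient(volume.astype(float))   # temporal, vertical, horizontal
    magnitude = np.sqrt(gx**2 + gy**2 + gt**2)
    # Bin spatial orientation only, weighted by full 3-D gradient magnitude.
    orientation = np.arctan2(gy, gx)                 # range [-pi, pi]
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    hist, _ = np.histogram(orientation, bins=bins, weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist       # L1-normalized descriptor

clip = np.random.default_rng(2).random((16, 32, 32))  # synthetic 16-frame clip
descriptor = simple_hog3d(clip)
print(descriptor.shape)  # (8,)
```

Descriptors like this (and LBP-TOP, HOOF) are what get fed to the classifier when testing the proposed Action-Unit-based classes.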
Review on Emotion Recognition Databases
Over the past few decades, human-computer interaction has become more important in our daily lives, and research has developed in many directions: memory research, depression detection, behavioural deficiency detection, lie detection, (hidden) emotion recognition, etc. Because of that, the number of generic emotion and face databases, and of those tailored to specific needs, has grown immensely. Thus, a comprehensive yet compact guide is needed to help researchers find the most suitable database and understand what types of databases already exist. In this paper, different elicitation methods are discussed, and the databases are primarily organized into neat and informative tables based on their format.
Audio-Visual Deception Detection: DOLOS Dataset and Parameter-Efficient Crossmodal Learning
Deception detection in conversations is a challenging yet important task,
having pivotal applications in many fields such as credibility assessment in
business, multimedia anti-fraud, and customs security. Despite this, deception
detection research is hindered by the lack of high-quality deception datasets,
as well as the difficulties of learning multimodal features effectively. To
address this issue, we introduce DOLOS (the name "DOLOS" comes from Greek
mythology), the largest gameshow deception detection dataset with rich
deceptive conversations. DOLOS includes 1,675 video clips featuring 213
subjects, and it has been labeled with audio-visual feature annotations. We
provide train-test, duration, and gender protocols to investigate the impact of
different factors. We benchmark our dataset on previously proposed deception
detection approaches. To further improve the performance by fine-tuning fewer
parameters, we propose Parameter-Efficient Crossmodal Learning (PECL), where a
Uniform Temporal Adapter (UT-Adapter) explores temporal attention in
transformer-based architectures, and a crossmodal fusion module, Plug-in
Audio-Visual Fusion (PAVF), combines crossmodal information from audio-visual
features. Based on the rich fine-grained audio-visual annotations on DOLOS, we
also exploit multi-task learning to enhance performance by concurrently
predicting deception and audio-visual features. Experimental results
demonstrate the desired quality of the DOLOS dataset and the effectiveness of
the PECL. The DOLOS dataset and the source codes are available at
https://github.com/NMS05/Audio-Visual-Deception-Detection-DOLOS-Dataset-and-Parameter-Efficient-Crossmodal-Learning/tree/main.
Comment: 11 pages, 6 figures.
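The abstract does not spell out the UT-Adapter's internals, but parameter-efficient adapters of this family typically follow a common pattern: a small bottleneck module (down-projection, nonlinearity, up-projection, residual connection) inserted into a frozen transformer so that only the adapter weights are trained. A generic numpy sketch of that pattern, not the paper's exact architecture:

```python
import numpy as np

class BottleneckAdapter:
    """Generic bottleneck adapter, the pattern behind parameter-efficient
    modules like the UT-Adapter (NOT the paper's exact architecture)."""
    def __init__(self, dim, bottleneck, seed=0):
        rng = np.random.default_rng(seed)
        self.w_down = rng.normal(0.0, 0.02, (dim, bottleneck))
        # Zero-init up-projection: the adapter starts as an identity mapping,
        # so inserting it does not disturb the frozen backbone at step 0.
        self.w_up = np.zeros((bottleneck, dim))

    def __call__(self, x):
        # x: (seq_len, dim) features from a frozen backbone layer.
        h = np.maximum(x @ self.w_down, 0.0)  # down-project + ReLU
        return x + h @ self.w_up              # up-project + residual connection

features = np.random.default_rng(3).random((10, 64))  # e.g. 10 tokens, dim 64
adapter = BottleneckAdapter(dim=64, bottleneck=8)
out = adapter(features)
print(out.shape)  # (10, 64)
```

Because only `w_down` and `w_up` are updated during fine-tuning, the trainable parameter count is a small fraction of the backbone's, which is the "fine-tuning fewer parameters" goal the abstract describes.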