Audio-Visual Deception Detection: DOLOS Dataset and Parameter-Efficient Crossmodal Learning
Deception detection in conversations is a challenging yet important task,
having pivotal applications in many fields such as credibility assessment in
business, multimedia anti-fraud, and customs security. Despite this, deception
detection research is hindered by the lack of high-quality deception datasets,
as well as the difficulties of learning multimodal features effectively. To
address this issue, we introduce DOLOS\footnote{The name ``DOLOS'' comes from
Greek mythology.}, the largest gameshow deception detection dataset with rich
deceptive conversations. DOLOS includes 1,675 video clips featuring 213
subjects, and it has been labeled with audio-visual feature annotations. We
provide train-test, duration, and gender protocols to investigate the impact of
different factors. We benchmark our dataset on previously proposed deception
detection approaches. To further improve performance while fine-tuning fewer
parameters, we propose Parameter-Efficient Crossmodal Learning (PECL), in which a
Uniform Temporal Adapter (UT-Adapter) explores temporal attention in
transformer-based architectures, and a crossmodal fusion module, Plug-in
Audio-Visual Fusion (PAVF), combines crossmodal information from audio-visual
features. Based on the rich fine-grained audio-visual annotations on DOLOS, we
also exploit multi-task learning to enhance performance by concurrently
predicting deception and audio-visual features. Experimental results
demonstrate the quality of the DOLOS dataset and the effectiveness of PECL. The
DOLOS dataset and source code are available at
https://github.com/NMS05/Audio-Visual-Deception-Detection-DOLOS-Dataset-and-Parameter-Efficient-Crossmodal-Learning/tree/main.
Comment: 11 pages, 6 figures
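The abstract does not spell out the UT-Adapter's internals, but the general adapter idea it builds on (a small trainable bottleneck inserted into a frozen backbone, so only few parameters are fine-tuned) can be illustrated with a generic NumPy sketch. All layer sizes, names, and the zero-initialization choice below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

class BottleneckAdapter:
    """Generic residual bottleneck adapter: a small trainable module
    added to a frozen transformer layer. During fine-tuning, only the
    adapter's down/up projections would be updated."""

    def __init__(self, dim=768, bottleneck=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w_down = rng.normal(0, 0.02, (dim, bottleneck))
        self.w_up = np.zeros((bottleneck, dim))  # zero-init: adapter starts as identity

    def __call__(self, x):
        # x: (tokens, dim) hidden states from the frozen backbone
        h = np.maximum(x @ self.w_down, 0.0)  # down-project + ReLU
        return x + h @ self.w_up              # residual connection

adapter = BottleneckAdapter()
x = np.ones((4, 768))
out = adapter(x)
print(out.shape)            # (4, 768)
print(np.allclose(out, x))  # True: zero-init up-projection preserves the input
```

The zero-initialized up-projection is a common adapter trick: the frozen model's behavior is unchanged at the start of fine-tuning, and the adapter gradually learns a residual correction.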
Detection of Deception in a Virtual World
This work explores the role of multimodal cues in detection of deception in a virtual world, an online community of World of Warcraft players. Case studies from a five-year ethnography are presented in three categories: small-scale deception in text, deception by avoidance, and large-scale deception in game-external modes. Each case study is analyzed in terms of how the affordances of the medium enabled or hampered deception, as well as how the members of the community ultimately detected the deception. The ramifications of deception on the community are discussed, as well as the need for researchers to have deep community knowledge when attempting to understand the role of deception in a complex society. Finally, recommendations are given for assessment of behavior in virtual worlds and the unique considerations that investigators must give to the rules and procedures of online communities.
Can lies be faked? Comparing low-stakes and high-stakes deception video datasets from a Machine Learning perspective
Despite the great impact of lies in human societies and a meager 54% human
accuracy for Deception Detection (DD), Machine Learning systems that perform
automated DD are still not viable for proper application in real-life settings
due to data scarcity. Few publicly available DD datasets exist and the creation
of new datasets is hindered by the conceptual distinction between low-stakes
and high-stakes lies. Theoretically, the two kinds of lies are so distinct that
a dataset of one kind could not be used for applications for the other kind.
Even though it is easier to acquire data on low-stakes deception since it can
be simulated (faked) in controlled settings, such lies do not carry the same
significance as genuine high-stakes lies, which are much harder to obtain yet
are the ones of practical interest for automated DD systems. To investigate
whether this distinction holds true from a practical perspective, we design
several experiments comparing a high-stakes and a low-stakes DD dataset,
evaluating both with a deep learning classifier that works exclusively from
video data. In our experiments, a network trained on low-stakes lies was more
accurate at classifying high-stakes deception than low-stakes deception,
although using low-stakes lies as an augmentation strategy for the high-stakes
dataset decreased its accuracy.
Comment: 11 pages, 3 figures
On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection
Humans are the final decision makers in critical tasks that involve ethical
and legal concerns, ranging from recidivism prediction, to medical diagnosis,
to fighting against fake news. Although machine learning models can sometimes
achieve impressive performance in these tasks, these tasks are not amenable to
full automation. To realize the potential of machine learning for improving
human decisions, it is important to understand how assistance from machine
learning models affects human performance and human agency.
In this paper, we use deception detection as a testbed and investigate how we
can harness explanations and predictions of machine learning models to improve
human performance while retaining human agency. We propose a spectrum between
full human agency and full automation, and develop varying levels of machine
assistance along the spectrum that gradually increase the influence of machine
predictions. We find that without showing predicted labels, explanations alone
slightly improve human performance in the end task. In comparison, human
performance is greatly improved by showing predicted labels (>20% relative
improvement) and can be further improved by explicitly suggesting strong
machine performance. Interestingly, when predicted labels are shown,
explanations of machine predictions induce a similar level of accuracy as an
explicit statement of strong machine performance. Our results demonstrate a
tradeoff between human performance and human agency and show that explanations
of machine predictions can moderate this tradeoff.
Comment: 17 pages, 19 figures, in Proceedings of ACM FAT* 2019, dataset & demo
available at https://deception.machineintheloop.co
CIMTDetect: A Community Infused Matrix-Tensor Coupled Factorization Based Method for Fake News Detection
Detecting whether a news article is fake or genuine is a crucial task in
today's digital world where it's easy to create and spread a misleading news
article. This is especially true of news stories shared on social media since
they don't undergo the stringent journalistic checks associated with
mainstream media. Given the inherent human tendency to share information with their
social connections at a mouse-click, fake news articles masquerading as real
ones tend to spread widely and virally. The presence of echo chambers (people
sharing the same beliefs) in social networks only adds to this problem of
wide-spread existence of fake news on social media. In this paper, we tackle
the problem of fake news detection from social media by exploiting the very
presence of echo chambers that exist within the social network of users to
obtain an efficient and informative latent representation of the news article.
By modeling the echo-chambers as closely-connected communities within the
social network, we represent a news article as a 3-mode tensor and propose a
tensor factorization based method to
encode the news article in a latent embedding space preserving the community
structure. We also propose an extension of the above method, which jointly
models the community and content information of the news article through a
coupled matrix-tensor factorization framework. We empirically demonstrate the
efficacy of our method for the task of Fake News Detection over two real-world
datasets. Further, we validate the generalization of the resulting embeddings
over two other auxiliary tasks, namely: \textbf{1)} News Cohort Analysis and
\textbf{2)} Collaborative News Recommendation. Our proposed method outperforms
appropriate baselines for both tasks, establishing its generalization.
Comment: Presented at ASONAM'1
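The 3-mode tensor factorization at the core of this approach can be illustrated with a plain rank-R CP decomposition fitted by alternating least squares. The tensor dimensions are arbitrary, the mode semantics are omitted (the abstract's structure tuple was stripped in extraction), and the coupled matrix term of the actual method is left out for brevity; this is a sketch of the underlying decomposition, not the paper's algorithm.

```python
import numpy as np

def cp_als(X, rank=4, iters=50, seed=0):
    """Rank-`rank` CP decomposition X ~ sum_r a_r (outer) b_r (outer) c_r,
    fitted by alternating least squares (ALS)."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(I, rank))
    B = rng.normal(size=(J, rank))
    C = rng.normal(size=(K, rank))
    for _ in range(iters):
        # each factor solves a linear least-squares problem with the others fixed
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# synthetic 3-mode interaction tensor of true rank 2 (dimensions are arbitrary)
rng = np.random.default_rng(1)
T = np.einsum('ir,jr,kr->ijk', rng.normal(size=(5, 2)),
              rng.normal(size=(6, 2)), rng.normal(size=(7, 2)))
A, B, C = cp_als(T, rank=2)
rec = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(T - rec) / np.linalg.norm(T))  # relative reconstruction error
```

In a coupled matrix-tensor setting, the ALS subproblem for the shared mode would additionally include the matrix's least-squares term, so the factor jointly fits both data sources.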
Telling the difference between deceiving and truth telling: An experiment in a public space
The behavioral experiment presented in this paper investigated deception tasks (both concealment and lying) undertaken in a public space. The degree of risk of deception detection and the demands of self-regulation when deceiving were manipulated. The results showed a significant interaction effect between veracity and risk of deception detection for the body movement "hand(s) in pocket(s)": its incidence increased from truth telling to deceiving when the risk of detection was higher, and decreased from truth telling to deceiving when the risk was lower. A higher risk of deception detection also magnified the overall negative and controlled impression displayed by both deceivers and truth tellers, compared with the lower-risk condition. We also discuss the possible effects of risk of deception detection and depletion of self-regulation on deception behavior. Further studies, and the connection between this work and the computer vision and multimodal interaction research communities, are also discussed.