8 research outputs found

    Deception Detection in Group Video Conversations using Dynamic Interaction Networks

    Full text link
    Detecting groups of people who are jointly deceptive in video conversations is crucial in settings such as meetings, sales pitches, and negotiations. Past work on deception in videos focuses on detecting a single deceiver and uses only facial or visual features. In this paper, we propose the concept of Face-to-Face Dynamic Interaction Networks (FFDINs) to model the interpersonal interactions within a group of people. The use of FFDINs enables us to leverage network relations in detecting group deception in video conversations for the first time. We use a dataset of 185 videos from a deception-based game called Resistance. We first characterize the behavior of individuals, pairs, and groups of deceptive participants and compare them to non-deceptive participants. Our analysis reveals that pairs of deceivers tend to avoid mutual interaction and focus their attention on non-deceivers. In contrast, non-deceivers interact with everyone equally. We propose Negative Dynamic Interaction Networks (NDINs) to capture the notion of missing interactions. We create the DeceptionRank algorithm to detect deceivers from NDINs extracted from videos that are just one minute long. We show that our method outperforms recent state-of-the-art computer vision, graph embedding, and ensemble methods by at least 20.9% AUROC in identifying deception from videos.
    Comment: The paper was published at ICWSM 2021. Dataset link: https://snap.stanford.edu/data/comm-f2f-Resistance.htm
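
    The abstract describes DeceptionRank only at a high level, but the idea of propagating suspicion over missing interactions can be illustrated with a PageRank-style iteration on a negative-interaction matrix. The sketch below is a toy under stated assumptions (the attention matrix, the 1 - attention weighting, and all names are hypothetical), not the published algorithm:

```python
import numpy as np

def deception_scores(attention, damping=0.85, iters=100, tol=1e-9):
    """PageRank-style scoring over a (hypothetical) negative interaction matrix.

    attention[i, j] is the fraction of time participant i attends to j.
    The "negative" matrix emphasizes missing interaction, loosely echoing
    the NDIN idea; this is an illustrative toy, not the paper's method.
    """
    n = attention.shape[0]
    negative = 1.0 - attention          # how strongly i *avoids* j (assumption)
    np.fill_diagonal(negative, 0.0)
    col_sums = negative.sum(axis=0, keepdims=True)
    col_sums[col_sums == 0] = 1.0
    M = negative / col_sums             # column-stochastic propagation matrix
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = (1 - damping) / n + damping * (M @ scores)
        if np.abs(new - scores).sum() < tol:
            return new
        scores = new
    return scores

# Toy 4-person meeting in which participants 0 and 1 avoid each other.
att = np.array([[0.00, 0.05, 0.50, 0.45],
                [0.05, 0.00, 0.50, 0.45],
                [0.30, 0.30, 0.00, 0.40],
                [0.35, 0.35, 0.30, 0.00]])
print(deception_scores(att))   # higher score = more avoidance-like pattern
```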

    ๊ฐœ์ธ ์‚ฌํšŒ๋ง ๋„คํŠธ์›Œํฌ ๋ถ„์„ ๊ธฐ๋ฐ˜ ์˜จ๋ผ์ธ ์‚ฌํšŒ ๊ณต๊ฒฉ์ž ํƒ์ง€

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ(๋ฐ•์‚ฌ)--์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› :๊ณต๊ณผ๋Œ€ํ•™ ์ปดํ“จํ„ฐ๊ณตํ•™๋ถ€,2020. 2. ๊น€์ข…๊ถŒ.In the last decade we have witnessed the explosive growth of online social networking services (SNSs) such as Facebook, Twitter, Weibo and LinkedIn. While SNSs provide diverse benefits โ€“ for example, fostering inter-personal relationships, community formations and news propagation, they also attracted uninvited nuiance. Spammers abuse SNSs as vehicles to spread spams rapidly and widely. Spams, unsolicited or inappropriate messages, significantly impair the credibility and reliability of services. Therefore, detecting spammers has become an urgent and critical issue in SNSs. This paper deals with spamming in Twitter and Weibo. Instead of spreading annoying messages to the public, a spammer follows (subscribes to) normal users, and followed a normal user. Sometimes a spammer makes link farm to increase target accounts explicit influence. Based on the assumption that the online relationships of spammers are different from those of normal users, I proposed classification schemes that detect online social attackers including spammers. I firstly focused on ego-network social relations and devised two features, structural features based on Triad Significance Profile (TSP) and relational semantic features based on hierarchical homophily in an ego-network. Experiments on real Twitter and Weibo datasets demonstrated that the proposed approach is very practical. The proposed features are scalable because instead of analyzing the whole network, they inspect user-centered ego-networks. My performance study showed that proposed methods yield significantly better performance than prior scheme in terms of true positives and false positives.์ตœ๊ทผ ์šฐ๋ฆฌ๋Š” Facebook, Twitter, Weibo, LinkedIn ๋“ฑ์˜ ๋‹ค์–‘ํ•œ ์‚ฌํšŒ ๊ด€๊ณ„๋ง ์„œ๋น„์Šค๊ฐ€ ํญ๋ฐœ์ ์œผ๋กœ ์„ฑ์žฅํ•˜๋Š” ํ˜„์ƒ์„ ๋ชฉ๊ฒฉํ•˜์˜€๋‹ค. ํ•˜์ง€๋งŒ ์‚ฌํšŒ ๊ด€๊ณ„๋ง ์„œ๋น„์Šค๊ฐ€ ๊ฐœ์ธ๊ณผ ๊ฐœ์ธ๊ฐ„์˜ ๊ด€๊ณ„ ๋ฐ ์ปค๋ฎค๋‹ˆํ‹ฐ ํ˜•์„ฑ๊ณผ ๋‰ด์Šค ์ „ํŒŒ ๋“ฑ์˜ ์—ฌ๋Ÿฌ ์ด์ ์„ ์ œ๊ณตํ•ด ์ฃผ๊ณ  ์žˆ๋Š”๋ฐ ๋ฐ˜ํ•ด ๋ฐ˜๊ฐ‘์ง€ ์•Š์€ ํ˜„์ƒ ์—ญ์‹œ ๋ฐœ์ƒํ•˜๊ณ  ์žˆ๋‹ค. ์ŠคํŒจ๋จธ๋“ค์€ ์‚ฌํšŒ ๊ด€๊ณ„๋ง ์„œ๋น„์Šค๋ฅผ ๋™๋ ฅ ์‚ผ์•„ ์ŠคํŒธ์„ ๋งค์šฐ ๋น ๋ฅด๊ณ  ๋„“๊ฒŒ ์ „ํŒŒํ•˜๋Š” ์‹์œผ๋กœ ์•…์šฉํ•˜๊ณ  ์žˆ๋‹ค. ์ŠคํŒธ์€ ์ˆ˜์‹ ์ž๊ฐ€ ์›์น˜ ์•Š๋Š” ๋ฉ”์‹œ์ง€๋“ค์„ ์ผ์ปฝ๋Š”๋ฐ ์ด๋Š” ์„œ๋น„์Šค์˜ ์‹ ๋ขฐ๋„์™€ ์•ˆ์ •์„ฑ์„ ํฌ๊ฒŒ ์†์ƒ์‹œํ‚จ๋‹ค. ๋”ฐ๋ผ์„œ, ์ŠคํŒจ๋จธ๋ฅผ ํƒ์ง€ํ•˜๋Š” ๊ฒƒ์ด ํ˜„์žฌ ์†Œ์…œ ๋ฏธ๋””์–ด์—์„œ ๋งค์šฐ ๊ธด๊ธ‰ํ•˜๊ณ  ์ค‘์š”ํ•œ ๋ฌธ์ œ๊ฐ€ ๋˜์—ˆ๋‹ค. ์ด ๋…ผ๋ฌธ์€ ๋Œ€ํ‘œ์ ์ธ ์‚ฌํšŒ ๊ด€๊ณ„๋ง ์„œ๋น„์Šค๋“ค ์ค‘ Twitter์™€ Weibo์—์„œ ๋ฐœ์ƒํ•˜๋Š” ์ŠคํŒจ๋ฐ์„ ๋‹ค๋ฃจ๊ณ  ์žˆ๋‹ค. ์ด๋Ÿฌํ•œ ์œ ํ˜•์˜ ์ŠคํŒจ๋ฐ๋“ค์€ ๋ถˆํŠน์ • ๋‹ค์ˆ˜์—๊ฒŒ ๋ฉ”์‹œ์ง€๋ฅผ ์ „ํŒŒํ•˜๋Š” ๋Œ€์‹ ์—, ๋งŽ์€ ์ผ๋ฐ˜ ์‚ฌ์šฉ์ž๋“ค์„ 'ํŒ”๋กœ์šฐ(๊ตฌ๋…)'ํ•˜๊ณ  ์ด๋“ค๋กœ๋ถ€ํ„ฐ '๋งž ํŒ”๋กœ์ž‰(๋งž ๊ตฌ๋…)'์„ ์ด๋Œ์–ด ๋‚ด๋Š” ๊ฒƒ์„ ๋ชฉ์ ์œผ๋กœ ํ•˜๊ธฐ๋„ ํ•œ๋‹ค. ๋•Œ๋กœ๋Š” link farm์„ ์ด์šฉํ•ด ํŠน์ • ๊ณ„์ •์˜ ํŒ”๋กœ์›Œ ์ˆ˜๋ฅผ ๋†’์ด๊ณ  ๋ช…์‹œ์  ์˜ํ–ฅ๋ ฅ์„ ์ฆ๊ฐ€์‹œํ‚ค๊ธฐ๋„ ํ•œ๋‹ค. ์ŠคํŒจ๋จธ์˜ ์˜จ๋ผ์ธ ๊ด€๊ณ„๋ง์ด ์ผ๋ฐ˜ ์‚ฌ์šฉ์ž์˜ ์˜จ๋ผ์ธ ์‚ฌํšŒ๋ง๊ณผ ๋‹ค๋ฅผ ๊ฒƒ์ด๋ผ๋Š” ๊ฐ€์ • ํ•˜์—, ๋‚˜๋Š” ์ŠคํŒจ๋จธ๋“ค์„ ํฌํ•จํ•œ ์ผ๋ฐ˜์ ์ธ ์˜จ๋ผ์ธ ์‚ฌํšŒ๋ง ๊ณต๊ฒฉ์ž๋“ค์„ ํƒ์ง€ํ•˜๋Š” ๋ถ„๋ฅ˜ ๋ฐฉ๋ฒ•์„ ์ œ์‹œํ•œ๋‹ค. ๋‚˜๋Š” ๋จผ์ € ๊ฐœ์ธ ์‚ฌํšŒ๋ง ๋‚ด ์‚ฌํšŒ ๊ด€๊ณ„์— ์ฃผ๋ชฉํ•˜๊ณ  ๋‘ ๊ฐ€์ง€ ์ข…๋ฅ˜์˜ ๋ถ„๋ฅ˜ ํŠน์„ฑ์„ ์ œ์•ˆํ•˜์˜€๋‹ค. 
์ด๋“ค์€ ๊ฐœ์ธ ์‚ฌํšŒ๋ง์˜ Triad Significance Profile (TSP)์— ๊ธฐ๋ฐ˜ํ•œ ๊ตฌ์กฐ์  ํŠน์„ฑ๊ณผ Hierarchical homophily์— ๊ธฐ๋ฐ˜ํ•œ ๊ด€๊ณ„ ์˜๋ฏธ์  ํŠน์„ฑ์ด๋‹ค. ์‹ค์ œ Twitter์™€ Weibo ๋ฐ์ดํ„ฐ์…‹์— ๋Œ€ํ•œ ์‹คํ—˜ ๊ฒฐ๊ณผ๋Š” ์ œ์•ˆํ•œ ๋ฐฉ๋ฒ•์ด ๋งค์šฐ ์‹ค์šฉ์ ์ด๋ผ๋Š” ๊ฒƒ์„ ๋ณด์—ฌ์ค€๋‹ค. ์ œ์•ˆํ•œ ํŠน์„ฑ๋“ค์€ ์ „์ฒด ๋„คํŠธ์›Œํฌ๋ฅผ ๋ถ„์„ํ•˜์ง€ ์•Š์•„๋„ ๊ฐœ์ธ ์‚ฌํšŒ๋ง๋งŒ ๋ถ„์„ํ•˜๋ฉด ๋˜๊ธฐ ๋•Œ๋ฌธ์— scalableํ•˜๊ฒŒ ์ธก์ •๋  ์ˆ˜ ์žˆ๋‹ค. ๋‚˜์˜ ์„ฑ๋Šฅ ๋ถ„์„ ๊ฒฐ๊ณผ๋Š” ์ œ์•ˆํ•œ ๊ธฐ๋ฒ•์ด ๊ธฐ์กด ๋ฐฉ๋ฒ•์— ๋น„ํ•ด true positive์™€ false positive ์ธก๋ฉด์—์„œ ์šฐ์ˆ˜ํ•˜๋‹ค๋Š” ๊ฒƒ์„ ๋ณด์—ฌ์ค€๋‹ค.1 Introduction 1 2 Related Work 6 2.1 OSN Spammer Detection Approaches 6 2.1.1 Contents-based Approach 6 2.1.2 Social Network-based Approach 7 2.1.3 Subnetwork-based Approach 8 2.1.4 Behavior-based Approach 9 2.2 Link Spam Detection 10 2.3 Data mining schemes for Spammer Detection 10 2.4 Sybil Detection 12 3 Triad Significance Profile Analysis 14 3.1 Motivation 14 3.2 Twitter Dataset 18 3.3 Indegree and Outdegree of Dataset 20 3.4 Twitter spammer Detection with TSP 22 3.5 TSP-Filtering 27 3.6 Performance Evaluation of TSP-Filtering 29 4 Hierarchical Homophily Analysis 33 4.1 Motivation 33 4.2 Hierarchical Homophily in OSN 37 4.2.1 Basic Analysis of Datasets 39 4.2.2 Status gap distribution and Assortativity 44 4.2.3 Hierarchical gap distribution 49 4.3 Performance Evaluation of HH-Filtering 53 5 Overall Performance Evaluation 58 6 Conclusion 63 Bibliography 65Docto

    Crowd and AI Powered Manipulation: Characterization and Detection

    Get PDF
    User reviews are ubiquitous. They power online review aggregators that influence our daily decisions, from which products to purchase (e.g., Amazon) and movies to watch (e.g., Netflix, HBO, Hulu) to which restaurants to patronize (e.g., Yelp) and hotels to book (e.g., TripAdvisor, Airbnb). In addition, policy makers rely on online commenting platforms like Regulations.gov and FCC.gov as a means for citizens to voice their opinions about public policy issues. However, showcasing the opinions of fellow users has a dark side, as these reviews and comments are vulnerable to manipulation. And as advances in AI continue, fake reviews generated by AI agents rather than by users pose even more scalable and dangerous manipulation attacks. These attacks on online discourse can sway ratings of products, manipulate opinions and the perceived support of key issues, and degrade our trust in online platforms. Previous efforts have mainly focused on highly visible anomalous behaviors captured by statistical modeling or clustering algorithms. While detection of such anomalous behaviors helps to improve the reliability of online interactions, it misses subtle and difficult-to-detect behaviors. This research investigates two major research thrusts centered around manipulation strategies. In the first thrust, we study crowd-based manipulation strategies wherein crowds of paid workers organize to spread fake reviews. In the second thrust, we explore AI-based manipulation strategies, where crowd workers are replaced by scalable and potentially undetectable generative models of fake reviews. In particular, one of the key aspects of this work is to address the research gap in previous efforts for anomaly detection where ground truth data is missing (and hence, evaluation can be challenging). In addition, this work studies the capabilities and impact of model-based attacks as the next generation of online threats. We propose inter-related methods for collecting evidence of these attacks and create new countermeasures for defending against them. The performance of the proposed methods is compared against other state-of-the-art approaches in the literature. We find that although crowd campaigns do not show obviously anomalous behavior, they can be detected given a careful formulation of their behaviors. And although model-generated fake reviews may appear on the surface to be legitimate, we find that they do not completely mimic the underlying distribution of human-written reviews, so we can leverage this signal to detect them.
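
    To illustrate the idea that model-generated reviews leave a detectable distributional footprint, here is a minimal, hypothetical character n-gram classifier. It is not the detector developed in this work; the toy data, labels, and feature choices are all assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data (placeholders): 0 = human-written, 1 = machine-generated.
reviews = [
    "Great product, arrived on time and works exactly as described.",
    "Loved this place; the staff were friendly and the pasta was amazing.",
    "The product is good. The product is very good. I like the product.",
    "This restaurant is a restaurant with good food and good food.",
]
labels = [0, 0, 1, 1]

# Character n-gram TF-IDF captures low-level distributional quirks;
# a real detector would use far richer features and much more data.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(reviews, labels)
print(detector.predict_proba(["good food good food, the food is good food"])[:, 1])
```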

    Adaptive Spammer Detection with Sparse Group Modeling

    No full text
    Social spammers disseminate unsolicited information on social media sites, which negatively impacts social networking systems. To detect social spammers, traditional methods leverage social network structures to identify the behavioral patterns hidden in their social interactions. They focus on accounts that are affiliated with groups comprising known spammers. However, since different parties keep emerging to generate various spammers, these spammers may form different kinds of groups, and some may even detach from the flock. Therefore, it is challenging for existing methods to find the optimal group structure that captures different spammers simultaneously. Employing different approaches for specific spammers is time-consuming, and it also lacks the adaptivity to deal with emerging spammers. In this work, we propose a group modeling framework that adaptively characterizes the social interactions of spammers. In particular, we integrate content information into the group modeling process. The proposed framework exploits this additional content information when selecting groups and individuals that are likely to be involved in spamming activities. To alleviate the intensive computational cost, we formulate the problem as a sparse learning task that can be solved efficiently. Experimental results on real-world datasets show that the proposed method outperforms state-of-the-art approaches.
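
    The abstract does not give the exact objective, but a generic sparse group lasso solved by proximal gradient descent conveys the flavor of selecting both whole candidate groups and individual accounts. Everything below (regularization weights, group assignments, toy data) is an illustrative assumption rather than the paper's formulation:

```python
import numpy as np

def sparse_group_lasso(X, y, groups, lam_group=0.1, lam_l1=0.05, iters=500):
    """Proximal-gradient sketch of a sparse group lasso regression.

    The group penalty can zero out whole candidate groups while the L1
    penalty prunes individual members; both weights here are assumptions.
    """
    n, p = X.shape
    groups = np.asarray(groups)
    w = np.zeros(p)
    lr = n / (np.linalg.norm(X, 2) ** 2)      # 1 / Lipschitz constant of the loss
    for _ in range(iters):
        v = w - lr * (X.T @ (X @ w - y) / n)  # gradient step on the squared loss
        # Element-wise soft-thresholding (L1 part) ...
        v = np.sign(v) * np.maximum(np.abs(v) - lr * lam_l1, 0.0)
        # ... then group-wise soft-thresholding (L2-norm part).
        for g in np.unique(groups):
            idx = np.where(groups == g)[0]
            norm = np.linalg.norm(v[idx])
            scale = lr * lam_group * np.sqrt(len(idx))
            v[idx] *= max(0.0, 1.0 - scale / (norm + 1e-12))
        w = v
    return w

# Toy example: three groups of three features, only group 0 truly active.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))
y = X @ np.array([1.5, -2.0, 1.0, 0, 0, 0, 0, 0, 0]) + 0.1 * rng.normal(size=200)
print(np.round(sparse_group_lasso(X, y, [0, 0, 0, 1, 1, 1, 2, 2, 2]), 2))
```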