
    Reverse Intervention for Dealing with Malicious Information in Online Social Networks

    Get PDF
    Malicious information is often hidden in the massive data flow of online social networks. In the "We Media" era, if the system is left closed without intervention, malicious information may quickly spread to the entire network, causing severe economic and political losses. This paper adopts a reverse intervention strategy from the perspective of topology control, so that the spread of malicious information can be suppressed at minimum cost. As the information spreads, social networks often present a community structure, and multiple promoters of malicious information may appear. This paper therefore adopts a divide-and-conquer strategy and proposes an intervention algorithm based on subgraph partitioning, in which we search for influential nodes to block or through which to release clarifications. The algorithm consists of two main phases. First, a subgraph partitioning method based on community structure is given to quickly extract the community structure of the information dissemination network. Second, a node blocking and clarification publishing algorithm based on the Jordan center is proposed for the obtained subgraphs. Experiments show that the proposed algorithm can effectively suppress the spread of malicious information with low time complexity compared with the benchmark algorithms.
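
    A minimal sketch of the Jordan-center idea described above, assuming networkx and using greedy modularity communities as a stand-in for the paper's subgraph partitioning step (the exact partitioning, blocking, and clarification rules are not reproduced here):

    import networkx as nx

    def jordan_center(subgraph):
        """Node(s) of minimum eccentricity, i.e. smallest worst-case distance."""
        ecc = nx.eccentricity(subgraph)
        best = min(ecc.values())
        return [v for v, e in ecc.items() if e == best]

    def intervention_targets(G):
        """Pick Jordan-center candidates per community as nodes to block or to
        use as seeds for publishing clarifications."""
        communities = nx.algorithms.community.greedy_modularity_communities(G)
        targets = {}
        for i, nodes in enumerate(communities):
            sub = G.subgraph(nodes)
            if sub.number_of_nodes() > 0 and nx.is_connected(sub):
                targets[i] = jordan_center(sub)
        return targets

    # Example on a small benchmark graph
    print(intervention_targets(nx.karate_club_graph()))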

    Your most telling friends: Propagating latent ideological features on Twitter using neighborhood coherence

    Full text link
    Multidimensional scaling in networks allows for the discovery of latent information about their structure by embedding nodes in a feature space. Ideological scaling for users in social networks such as Twitter is an example, and similar settings include diverse applications in other networks, media platforms, or e-commerce. A growing literature of ideology scaling methods in social networks restricts the scaling procedure to nodes that make the feature space interpretable: on Twitter, it is common to consider the sub-network of parliamentarians and their followers. This allows inferred latent features to be interpreted as indices of ideology-related concepts by inspecting the positions of members of parliament. While effective in inferring meaningful features, this approach is generally restricted to these sub-networks, limiting interesting applications such as country-wide measurement of polarization and its evolution. We propose two methods to propagate ideological features beyond these sub-networks: one based on homophily (linked users have similar ideology), and the other on structural similarity (nodes with similar neighborhoods have similar ideologies). In our methods, we leverage the concept of neighborhood ideological coherence as a parameter for propagation. Using Twitter data, we produce an ideological scaling for 370K users and analyze the two families of propagation methods on a population of 6.5M users. We find that, when coherence is considered, the ideology of a user is better estimated from users with similar neighborhoods than from their immediate neighbors. Comment: 8 pages, 2020 ASONAM Conference
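
    A hedged sketch of the two propagation families described above (not the authors' code): here `graph` is an adjacency dict of neighbor sets and `ideology` maps already-scaled users to scores; both names and the coherence proxy are illustrative assumptions.

    from statistics import mean, pstdev

    def homophily_estimate(user, graph, ideology):
        """Homophily: average the known scores of the user's immediate neighbors."""
        scores = [ideology[v] for v in graph[user] if v in ideology]
        return mean(scores) if scores else None

    def neighborhood_coherence(user, graph, ideology):
        """Crude coherence proxy: low spread among neighbor scores -> high coherence."""
        scores = [ideology[v] for v in graph[user] if v in ideology]
        return 1.0 / (1.0 + pstdev(scores)) if len(scores) > 1 else 0.0

    def structural_estimate(user, graph, ideology):
        """Structural similarity: borrow the score of the already-scaled user whose
        neighborhood overlaps most with the target's (Jaccard index)."""
        nu = set(graph[user])
        best, best_sim = None, 0.0
        for v in ideology:
            nv = set(graph[v])
            union = nu | nv
            sim = len(nu & nv) / len(union) if union else 0.0
            if sim > best_sim:
                best, best_sim = v, sim
        return ideology[best] if best is not None else None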

    Diverse Misinformation: Impacts of Human Biases on Detection of Deepfakes on Networks

    Full text link
    Social media platforms often assume that users can self-correct against misinformation. However, social media users are not equally susceptible to all misinformation, as their biases influence what types of misinformation might thrive and who might be at risk. We call "diverse misinformation" the complex relationship between human biases and the demographics represented in misinformation. To investigate how users' biases impact their susceptibility and their ability to correct each other, we analyze the classification of deepfakes as a type of diverse misinformation. We chose deepfakes as a case study for three reasons: 1) their classification as misinformation is more objective; 2) we can control the demographics of the personas presented; 3) deepfakes are a real-world concern with associated harms that must be better understood. Our paper presents an observational survey (N=2,016) in which participants are exposed to videos and asked questions about their attributes, not knowing that some might be deepfakes. Our analysis investigates the extent to which different users are duped and which perceived demographics of deepfake personas tend to mislead. We find that accuracy varies by demographics, and participants are generally better at classifying videos that match them. We extrapolate from these results to understand the potential population-level impacts of these biases using a mathematical model of the interplay between diverse misinformation and crowd correction. Our model suggests that diverse contacts might provide "herd correction", where friends can protect each other. Altogether, human biases and the attributes of misinformation matter greatly, but having a diverse social group may help reduce susceptibility to misinformation. Comment: Supplementary appendix available upon request for the time being
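
    A toy illustration of the "herd correction" intuition above, not the authors' mathematical model: the accuracy values, the correction threshold, and the data structures (`friends`, `demographic`, `persona_demo`) are made-up assumptions.

    import random

    def simulate(friends, demographic, persona_demo,
                 match_acc=0.8, mismatch_acc=0.6, threshold=0.5, seed=0):
        """Fraction of users who end up correct after one round of crowd correction."""
        rng = random.Random(seed)
        # Each user classifies the video alone, with higher accuracy when the
        # deepfake persona matches their own demographic.
        correct = {u: rng.random() < (match_acc if demographic[u] == persona_demo
                                      else mismatch_acc)
                   for u in friends}
        # Crowd correction: a duped user flips if most of their friends got it right.
        corrected = {}
        for u, ok in correct.items():
            nbrs = friends[u]
            frac_right = sum(correct[v] for v in nbrs) / len(nbrs) if nbrs else 0.0
            corrected[u] = ok or frac_right > threshold
        return sum(corrected.values()) / len(corrected)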