
    How to Burst the Bubble in Social Networks?

    The filter bubble has been considered a serious risk to democracy and freedom of information on the internet and social media. This phenomenon can restrict users' access to information sources outside their comfort zone and increase the risk of polarisation of opinions on different topics. This in-progress paper explains our plan for conducting prescriptive research aimed at decreasing the chance of filter bubble formation on social networks. The paper identifies a gap in the literature, namely the lack of prescriptive work that considers both human and technology perspectives. To address this research gap, a design perspective has been selected, covering two different bodies of theory as kernel theories. The paper explains the relevance of these theories, some of the preliminary requirements derived from them, and the future steps in this research. These future steps include the various phases of developing an Information Systems Design Theory and our strategy for evaluating the effectiveness of the developed theory.

    Neural Based Statement Classification for Biased Language

    Biased language commonly occurs around controversial topics, stirring disagreement between the parties involved in a discussion. This is because the understanding and use of language, and of particular phrases, is cohesive within groups, but such cohesiveness does not hold across groups. In collaborative environments or environments where impartial language is desired (e.g. Wikipedia, news media), statements and the language therein should represent the involved parties equally and be neutrally phrased. Biased language is introduced through the presence of inflammatory words or phrases, or statements that may be incorrect or one-sided, thus violating such consensus. In this work, we focus on the specific case of phrasing bias, which may be introduced through inflammatory words or phrases in a statement. For this purpose, we propose an approach that relies on recurrent neural networks to capture the inter-dependencies between the words in a phrase that introduce bias. We perform a thorough experimental evaluation, where we show the advantages of a neural approach over competitors that rely on word lexicons and other hand-crafted features in detecting biased language. We are able to distinguish biased statements with a precision of P=0.92, significantly outperforming baseline models with an improvement of over 30%. Finally, we release the largest corpus of statements annotated for biased language. Comment: The Twelfth ACM International Conference on Web Search and Data Mining, February 11–15, 2019, Melbourne, VIC, Australia.
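
    A minimal sketch of the kind of recurrent model this abstract describes is given below: a bidirectional GRU over word embeddings that scores a statement as biased or neutral. The layer sizes, single-layer GRU, and toy input are illustrative assumptions, not the configuration reported in the paper.

        # Hypothetical sketch: bidirectional GRU statement classifier for phrasing bias.
        # Hyperparameters and vocabulary handling are illustrative, not the paper's setup.
        import torch
        import torch.nn as nn

        class BiasClassifier(nn.Module):
            def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
                self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
                self.out = nn.Linear(2 * hidden_dim, 1)  # binary: biased vs. neutral

            def forward(self, token_ids):
                # token_ids: (batch, seq_len) integer word indices
                embedded = self.embed(token_ids)
                _, hidden = self.rnn(embedded)                 # hidden: (2, batch, hidden_dim)
                sentence = torch.cat([hidden[0], hidden[1]], dim=-1)
                return torch.sigmoid(self.out(sentence)).squeeze(-1)

        model = BiasClassifier(vocab_size=20000)
        scores = model(torch.randint(1, 20000, (4, 25)))  # 4 statements, 25 tokens each
        print(scores.shape)  # torch.Size([4])

    In practice the statement-level representation could also be built with attention over the recurrent states, which would make the words driving a bias prediction easier to inspect.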

    Burst the Filter Bubble: Towards an Integrated Tool

    The formation of filter bubbles is a known risk to democracy and can bring negative consequences such as polarisation of society, users' tendency towards extremist viewpoints, and the proliferation of fake news. Previous studies, including prescriptive ones, have focused on limited aspects of filter bubbles. The current study aims to propose a model for an integrated tool that assists users in avoiding filter bubbles in social networks. To this end, a systematic literature review has been conducted and 571 papers in six top-ranked scientific databases have been identified. After excluding irrelevant studies and closely examining the remaining papers, a classification of the research is proposed. This classification is then used to derive an overall architecture for an integrated tool that synthesises all previous studies and proposes new features for avoiding filter bubbles. The study explains the components and features of the proposed architecture and describes their focus on content and agents.

    A Dynamic Embedding Model of the Media Landscape

    Information about world events is disseminated through a wide variety of news channels, each with specific considerations in the choice of their reporting. Although the multiplicity of these outlets should ensure a variety of viewpoints, recent reports suggest that the rising concentration of media ownership may void this assumption. This observation motivates the study of the impact of ownership on the global media landscape and its influence on the coverage viewers actually receive. To this end, the selection of reported events has been shown to be informative about the high-level structure of the news ecosystem. However, existing methods only provide a static view into an inherently dynamic system, yielding underperforming statistical models and hindering our understanding of the media landscape as a whole. In this work, we present a dynamic embedding method that learns to capture the decision process of individual news sources in their selection of reported events, while also enabling the systematic detection of large-scale transformations in the media landscape over prolonged periods of time. In an experiment covering over 580M real-world event mentions, we show our approach outperforms static embedding methods in predictive terms. We demonstrate the potential of the method for news monitoring applications and investigative journalism by shedding light on important changes in programming induced by mergers and acquisitions, policy changes, or network-wide content diffusion. These findings offer evidence of strong content convergence trends inside large broadcasting groups, influencing the news ecosystem in a time of increasing media ownership concentration.
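
    To make the idea of a dynamic embedding concrete, the sketch below gives one simple formulation under stated assumptions: each source keeps one vector per time slice, events share a static embedding, a dot product scores whether a source reports an event, and a drift penalty ties consecutive slices together. This is an illustrative stand-in, not the model proposed in the paper.

        # Hypothetical sketch of a dynamic source embedding with a drift penalty.
        import torch
        import torch.nn as nn

        class DynamicSourceEmbedding(nn.Module):
            def __init__(self, n_sources, n_events, n_slices, dim=32, smooth=1.0):
                super().__init__()
                # one embedding per source and per time slice
                self.sources = nn.Parameter(0.01 * torch.randn(n_slices, n_sources, dim))
                self.events = nn.Embedding(n_events, dim)  # static event embeddings
                self.smooth = smooth

            def loss(self, t, source_ids, event_ids, labels):
                # labels[i] = 1.0 if source_ids[i] mentioned event_ids[i] in slice t, else 0.0
                s = self.sources[t, source_ids]            # (batch, dim)
                e = self.events(event_ids)                 # (batch, dim)
                fit = nn.functional.binary_cross_entropy_with_logits((s * e).sum(-1), labels)
                drift = (self.sources[1:] - self.sources[:-1]).pow(2).mean()
                return fit + self.smooth * drift

        model = DynamicSourceEmbedding(n_sources=5, n_events=100, n_slices=3)
        loss = model.loss(1, torch.tensor([0, 2]), torch.tensor([7, 42]), torch.tensor([1.0, 0.0]))

    Tracking the distance between two sources' vectors across slices then gives one simple signal of content convergence inside a broadcasting group.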

    Discovering Dense Correlated Subgraphs in Dynamic Networks

    Given a dynamic network, where edges appear and disappear over time, we are interested in finding sets of edges that have similar temporal behavior and form a dense subgraph. Formally, we define the problem as the enumeration of the maximal subgraphs that satisfy specific density and similarity thresholds. To measure the similarity of the temporal behavior, we use the correlation between the binary time series that represent the activity of the edges. For the density, we study two variants based on the average degree. For these problem variants we enumerate the maximal subgraphs and compute a compact subset of subgraphs that have limited overlap. We propose an approximate algorithm that scales well with the size of the network while achieving high accuracy. We evaluate our framework on both real and synthetic datasets. The results on the synthetic data demonstrate the high accuracy of the approximation and show the scalability of the framework. Comment: Full version of the paper included in the proceedings of the PAKDD 2021 conference.
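
    As a concrete reading of the problem definition, the sketch below checks whether a candidate edge set meets a similarity threshold (pairwise correlation between the binary activity series of its edges) and an average-degree density threshold. The thresholds and the brute-force check are illustrative; the paper's enumeration and approximation algorithms are not reproduced here.

        # Hypothetical sketch of the two feasibility checks implied by the problem definition.
        import numpy as np

        def min_pairwise_correlation(activity):
            # activity: dict {edge: binary activity vector over the network snapshots}
            series = np.array(list(activity.values()), dtype=float)
            corr = np.corrcoef(series)
            return corr[np.triu_indices(len(series), k=1)].min()

        def average_degree(edges):
            nodes = {v for e in edges for v in e}
            return 2.0 * len(edges) / len(nodes)

        def is_valid_subgraph(edges, activity, min_corr=0.5, min_density=2.0):
            sub = {e: activity[e] for e in edges}
            return (min_pairwise_correlation(sub) >= min_corr
                    and average_degree(edges) >= min_density)

        activity = {("a", "b"): np.array([1, 0, 1, 1, 0]),
                    ("b", "c"): np.array([1, 0, 1, 1, 0]),
                    ("a", "c"): np.array([1, 0, 1, 0, 0])}
        print(is_valid_subgraph(list(activity), activity))  # True for this toy triangle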

    Maximizing the Diversity of Exposure in a Social Network

    Social-media platforms have created new ways for citizens to stay informed and participate in public debates. However, to enable a healthy environment for information sharing, social deliberation, and opinion formation, citizens need to be exposed to sufficiently diverse viewpoints that challenge their assumptions, instead of being trapped inside filter bubbles. In this paper, we take a step in this direction and propose a novel approach to maximize the diversity of exposure in a social network. We formulate the problem in the context of information propagation, as a task of recommending a small number of news articles to selected users. We propose a realistic setting where we take into account content and user leanings, and the probability of further sharing an article. This setting allows us to capture the balance between maximizing the spread of information and ensuring the exposure of users to diverse viewpoints. The resulting problem can be cast as maximizing a monotone and submodular function subject to a matroid constraint on the allocation of articles to users. It is a challenging generalization of the influence maximization problem. Yet, we are able to devise scalable approximation algorithms by introducing a novel extension to the notion of random reverse-reachable sets. We experimentally demonstrate the efficiency and scalability of our algorithm on several real-world datasets.
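
    The allocation step can be illustrated with the standard greedy rule for maximizing a monotone submodular function under a partition matroid (each user receives at most k articles). The diversity function below is a simple viewpoint-coverage stand-in for the paper's propagation-based objective, and the reverse-reachable-set machinery is omitted; all names and values are illustrative.

        # Hypothetical sketch: greedy article-to-user allocation under a per-user budget
        # (a partition matroid), with viewpoint-coverage gain as the submodular objective.
        def greedy_allocation(users, articles, viewpoints, k=1):
            # viewpoints: dict {(user, article): set of viewpoint labels that the
            #             assignment would expose the user to}
            chosen, exposed = [], {u: set() for u in users}
            per_user = {u: 0 for u in users}

            def gain(u, a):
                return len(viewpoints.get((u, a), set()) - exposed[u])

            while True:
                candidates = [(gain(u, a), u, a) for u in users for a in articles
                              if per_user[u] < k and (u, a) not in chosen]
                if not candidates:
                    break
                g, u, a = max(candidates)
                if g == 0:
                    break
                chosen.append((u, a))
                exposed[u] |= viewpoints.get((u, a), set())
                per_user[u] += 1
            return chosen

        vp = {("u1", "x"): {"left"}, ("u1", "y"): {"right"},
              ("u2", "x"): {"left", "center"}}
        print(greedy_allocation(["u1", "u2"], ["x", "y"], vp, k=1))

    For monotone submodular objectives, this greedy rule is known to give a 1/2-approximation under a matroid constraint; the paper instead develops scalable algorithms tailored to the information-propagation setting.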