7 research outputs found

    Fake news identification on Twitter with hybrid CNN and RNN models

    The problem associated with the propagation of fake news continues to grow at an alarming scale. This trend has generated much interest across politics, academia, and industry alike. We propose a framework that detects and classifies fake news messages from Twitter posts using a hybrid of convolutional neural network and long short-term memory (LSTM) recurrent neural network models. The proposed deep learning approach achieves 82% accuracy. Our approach intuitively identifies relevant features associated with fake news stories without prior knowledge of the domain.
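    The hybrid architecture the abstract describes can be sketched in miniature as a 1-D convolution over token embeddings feeding an LSTM cell, followed by a sigmoid classifier. This is a minimal illustrative sketch, not the authors' implementation: all dimensions, parameter names, and weights below are assumptions (the abstract does not publish the architecture), and the weights are random rather than trained.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes; the paper does not specify its architecture.
    vocab, embed_dim, seq_len = 100, 8, 12
    n_filters, kernel, hidden = 4, 3, 6

    # Randomly initialised parameters stand in for trained weights.
    E = rng.normal(size=(vocab, embed_dim))                # embedding table
    Wc = rng.normal(size=(n_filters, kernel, embed_dim))   # conv filters
    # LSTM gate weights: input is conv features (n_filters) + hidden state.
    Wg = rng.normal(size=(4, hidden, n_filters + hidden)) * 0.1
    wo = rng.normal(size=hidden)                           # output layer

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def predict(token_ids):
        """Return a P(fake)-style score for one tokenised tweet."""
        x = E[token_ids]                                   # (seq_len, embed_dim)
        # 1-D convolution + ReLU: extracts local n-gram features.
        feats = np.array([
            [np.maximum(0.0, np.sum(Wc[f] * x[t:t + kernel]))
             for f in range(n_filters)]
            for t in range(len(token_ids) - kernel + 1)
        ])
        # LSTM over the convolved feature sequence.
        h = np.zeros(hidden)
        c = np.zeros(hidden)
        for v in feats:
            z = np.concatenate([v, h])
            i, f, o = (sigmoid(Wg[k] @ z) for k in range(3))
            g = np.tanh(Wg[3] @ z)
            c = f * c + i * g
            h = o * np.tanh(c)
        return sigmoid(wo @ h)

    tweet = rng.integers(0, vocab, size=seq_len)
    p = predict(tweet)
    ```

    The design point the convolution-then-recurrence ordering illustrates: the CNN captures local phrase patterns, while the LSTM models how those patterns unfold across the message.
    
    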

    Spread of Misinformation Online: Simulation Impact of Social Media Newsgroups

    Academic research shows increased reliance by online users on social media as a main source of news and information. Researchers have found that young users are particularly inclined to believe what they read on social media without adequate verification of the information. There has been some research into the spread of misinformation and the identification of key variables for developing simulations of the process. Current literature on combating misinformation focuses on individuals and neglects social newsgroups, key players in the dissemination of information online. Using benchmark variables and values from the literature, the authors simulated the process using BioLayout, a big-data modeling tool. The results show that social newsgroups have a significant impact both on the explosion of misinformation and on combating it. The outcome has helped to better understand and visualize how misinformation travels through the space of social media.
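    The role of newsgroups as amplifiers can be illustrated with a toy independent-cascade simulation: the same peer network is run with and without a highly connected "newsgroup" hub. This is a hedged sketch under assumed parameters, not the authors' BioLayout model; the network shape, spread probability, and step count are all illustrative.

    ```python
    import random

    random.seed(1)

    def spread(edges, n_nodes, seed_node, p=0.3, steps=5):
        """Independent-cascade spread: each newly informed node tries once
        to pass the item to each neighbour with probability p."""
        adj = {i: [] for i in range(n_nodes)}
        for a, b in edges:
            adj[a].append(b)
            adj[b].append(a)
        informed = {seed_node}
        frontier = [seed_node]
        for _ in range(steps):
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in informed and random.random() < p:
                        informed.add(v)
                        nxt.append(v)
            frontier = nxt
        return len(informed)

    # Sparse peer-to-peer network: a ring of 20 users.
    ring = [(i, (i + 1) % 20) for i in range(20)]
    # Same ring plus a "newsgroup" hub (node 20) connected to every user.
    hub = ring + [(20, i) for i in range(20)]

    runs = 200
    avg_ring = sum(spread(ring, 20, 0) for _ in range(runs)) / runs
    avg_hub = sum(spread(hub, 21, 0) for _ in range(runs)) / runs
    ```

    Averaged over many runs, the hub network reaches far more nodes than the plain ring, which mirrors the abstract's finding that newsgroups significantly amplify (and, symmetrically, could help counter) the spread of misinformation.
    
    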

    Autonomy and the social dilemma of online manipulative behavior

    Persuasive online technologies were initially designed and used to gain insights into the online behavior of individuals in order to personalize advertising campaigns, in an effort to influence people and convince them to buy certain products. Recently, however, these technologies have blurred the lines and morphed into technologies that covertly and gradually manipulate people toward a goal predetermined by the algorithm, disregarding the decision-making rights of the individual. This may lead people to make decisions that do not align with their personal values and beliefs, and rob them of their autonomy, an ethical principle in the absence of which the application of these technologies may be unethical. However, not all persuasive technologies are necessarily manipulative, which requires careful consideration of several elements to determine whether a technology is manipulative and, ultimately, whether its application is ethical. In this article, we analyze the ethical principle of autonomy and unpack the underlying elements of this principle that must be considered to determine whether the application of a technology is ethical in the context of it being persuasive or manipulative.