16 research outputs found

    'No, auntie, that's false': Female baby boomers develop critical skills to confront fake news with guidance from relatives

    Full text link
    The spread of fake news has been increasing, which gives rise to a special interest in the development of identification and coping skills among news consumers so that they can filter out misleading information. Studies suggest that older people share more fake news on social media, yet there is scarce literature that analyses how baby boomers behave in the face of fake news. The purpose of this study is to examine how female baby boomers deal with fake news on Facebook and the resources available to them for learning how to identify and handle dubious information. A qualitative study and thematic analysis were conducted using information obtained from interviews with female baby boomers. Four themes emerge from the analysis, revealing that participants recognise that they can identify fake news but may not always be able to do so due to limitations in their understanding of an issue or uncertainty about its source. The findings show that participants empirically develop critical identification and filtering skills with the assistance of close family members. Comment: 14 pages, 1 table

    Security techniques for intelligent spam sensing and anomaly detection in online social platforms

    Get PDF
    Copyright © 2020 Institute of Advanced Engineering and Science. All rights reserved. The recent advances in communication and mobile technologies have made it easier for most people worldwide to access and share information. Among the most powerful information-spreading platforms are Online Social Networks (OSNs), which allow Internet-connected users to share different kinds of information such as instant messages, tweets, photos, and videos. In addition, many governmental and private institutions use OSNs such as Twitter for official announcements. Consequently, there is a tremendous need to provide the required level of security for OSN users. However, there are many challenges due to the different protocols and the variety of mobile apps used to access OSNs. Traditional security techniques therefore fail to provide the needed security and privacy, and more intelligence is required. Computational intelligence adds high-speed computation, fault tolerance, adaptability, and error resilience when used to ensure security in OSN apps. This research provides a comprehensive survey of related work and investigates the application of artificial neural networks to intrusion detection systems and spam filtering for OSNs. In addition, we use the concept of social graphs and weighted cliques to detect suspicious behavior of certain online groups and to prevent further planned actions, such as cyber/terrorist attacks, before they happen.
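The weighted-clique idea mentioned above can be illustrated with a minimal sketch: treat users as graph nodes, interaction strengths as edge weights, and flag fully connected groups whose average weight is unusually high. The threshold, weights, and group size below are illustrative assumptions, not values from the paper.

```python
from itertools import combinations

def suspicious_groups(edges, size=3, min_avg_weight=0.8):
    """Flag fully connected groups of `size` nodes whose average
    interaction weight meets `min_avg_weight` (illustrative values)."""
    weight = {frozenset(e[:2]): e[2] for e in edges}
    nodes = sorted({n for e in edges for n in e[:2]})
    flagged = []
    for group in combinations(nodes, size):
        pairs = [frozenset(p) for p in combinations(group, 2)]
        if all(p in weight for p in pairs):  # group forms a clique
            avg = sum(weight[p] for p in pairs) / len(pairs)
            if avg >= min_avg_weight:
                flagged.append(list(group))
    return flagged

# Toy data: a tightly connected trio plus casual contacts.
edges = [("a", "b", 0.9), ("b", "c", 0.95), ("a", "c", 0.85),
         ("c", "d", 0.1), ("d", "e", 0.2)]
print(suspicious_groups(edges))  # [['a', 'b', 'c']]
```

A production system would of course use maximal-clique enumeration on a real interaction graph rather than brute-force enumeration of fixed-size groups.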

    Towards Responsible Media Recommendation

    Get PDF
    Reading or viewing recommendations are a common feature on modern media sites. The recommendations shown to consumers are nowadays often determined automatically by AI algorithms, typically with the goal of helping consumers discover relevant content more easily. However, the highlighting or filtering of information that comes with such recommendations may lead to undesired effects on consumers or even society, for example, when an algorithm leads to the creation of filter bubbles or amplifies the spread of misinformation. These well-documented phenomena create a need for improved mechanisms for responsible media recommendation, which avoid such negative effects of recommender systems. In this research note, we review the threats and challenges that may result from the use of automated media recommendation technology, and we outline possible steps to mitigate such undesired societal effects in the future.
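One commonly discussed mitigation for filter bubbles is to re-rank recommendations for topical diversity instead of pure relevance. The greedy penalty-based re-ranker below is a hedged illustration of that general idea; the items, topics, scores, and penalty value are invented for the example and do not come from the research note.

```python
def diversify(candidates, k=3, penalty=0.5):
    """candidates: list of (item_id, topic, relevance).
    Greedily pick k items, penalizing topics already selected."""
    chosen, seen_topics = [], set()
    pool = list(candidates)
    while pool and len(chosen) < k:
        # Effective score: relevance minus a penalty for repeated topics.
        best = max(pool, key=lambda c: c[2] - (penalty if c[1] in seen_topics else 0))
        chosen.append(best[0])
        seen_topics.add(best[1])
        pool.remove(best)
    return chosen

items = [("a1", "politics", 0.9), ("a2", "politics", 0.85),
         ("a3", "sports", 0.6), ("a4", "culture", 0.5)]
print(diversify(items))  # ['a1', 'a3', 'a4'] rather than three politics items
```

Without the penalty, the top three by relevance would be two politics articles plus one other; the re-ranking trades a little relevance for a more varied feed.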

    Fighting disinformation with artificial intelligence: fundamentals, advances and challenges

    Get PDF
    The Internet and social media have revolutionised the way news is distributed and consumed. However, the constant flow of massive amounts of content has made it difficult to discern between truth and falsehood, especially on online platforms plagued with malicious actors who create and spread harmful stories. Debunking disinformation is costly, which has put artificial intelligence (AI) and, more specifically, machine learning (ML) in the spotlight as a solution to this problem. This work reviews recent literature on AI and ML techniques to combat disinformation, ranging from automatic classification to feature extraction, as well as their role in creating realistic synthetic content. We conclude that ML advances have been mainly focused on automatic classification and scarcely adopted outside research labs due to their dependence on limited-scope datasets. Therefore, research efforts should be redirected towards developing AI-based systems that are reliable and trustworthy in supporting humans in early disinformation detection instead of fully automated solutions. Funding: European Commission, project Iberifier (Iberian Digital Media Research and Fact-Checking Hub), call CEF-TC-2020-2 (European Digital Media Observatory), grant number 2020-EU-IA-025.
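The "automatic classification" family of techniques surveyed above can be sketched with a minimal bag-of-words Naive Bayes classifier. This is a toy illustration of the approach, not the paper's method; the training texts and labels are invented placeholders, and real systems use far richer features and datasets.

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label). Returns word counts per label,
    total word counts per label, and document counts per label."""
    counts, totals, labels = {}, Counter(), Counter()
    for text, label in docs:
        labels[label] += 1
        bag = counts.setdefault(label, Counter())
        for word in text.lower().split():
            bag[word] += 1
            totals[label] += 1
    return counts, totals, labels

def classify(text, counts, totals, labels):
    """Pick the label with the highest log-posterior under Naive Bayes."""
    vocab = {w for bag in counts.values() for w in bag}
    best, best_lp = None, -math.inf
    for label in labels:
        lp = math.log(labels[label] / sum(labels.values()))  # prior
        for word in text.lower().split():
            # Laplace smoothing handles words unseen for a label.
            lp += math.log((counts[label][word] + 1) / (totals[label] + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented toy corpus for illustration only.
docs = [
    ("miracle cure doctors hate this secret", "fake"),
    ("shocking secret they refuse to reveal", "fake"),
    ("government publishes official inflation report", "real"),
    ("study peer reviewed journal reports findings", "real"),
]
model = train(docs)
print(classify("shocking miracle secret revealed", *model))  # fake
```

The paper's conclusion applies directly to sketches like this: a classifier trained on a narrow dataset rarely transfers to real-world disinformation, which is why the authors argue for human-in-the-loop systems over fully automated ones.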