328 research outputs found

    FAKE NEWS DETECTION ABOUT SARS-COV-2 PANDEMIC USING NEURAL NETWORKS AND DETECTION ALGORITHMS

    Fake news has an extremely high impact on society, spreading quickly and easily through social media, TV, the internet, the press, and other means of communication. False news about the new coronavirus is blocked by the authorities, according to the decree establishing a state of emergency. The misinformation of the population and the placement of fake news are two inevitable consequences in times of crisis, amplified by two other elements that feed each other: fear of the illness, which can cause deaths, and uncertainty or lack of information on how to manage the crisis and what it involves. The need to stop the spread of fake news is paramount, and this paper proposes to distinguish truthful information from false information during the COVID-19 pandemic through a supervised learning method. This approach implies a model for distinguishing false messages in the online environment, based on Machine Learning algorithms, which can reach an accuracy of over 95%
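    As an illustration of the kind of supervised learning pipeline this abstract describes, a minimal text classifier can be sketched with scikit-learn. The toy headlines, labels, and the TF-IDF plus logistic regression choice below are assumptions for the sketch, not the paper's actual model or data:

    ```python
    # Minimal sketch of a supervised fake-news classifier (illustrative only:
    # the toy headlines and the TF-IDF + logistic regression choice are
    # assumptions, not the paper's actual model or data).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny hypothetical training set: 1 = fake, 0 = truthful.
    headlines = [
        "miracle cure eliminates coronavirus in one day",
        "drinking hot water kills the virus instantly",
        "secret lab video proves virus is a hoax",
        "health ministry reports daily case statistics",
        "who publishes updated vaccination guidance",
        "hospitals expand intensive care capacity",
    ]
    labels = [1, 1, 1, 0, 0, 0]

    # TF-IDF features feed a logistic regression classifier.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(headlines, labels)

    # Predict on unseen text; output is a 0/1 label per input.
    preds = model.predict(["new miracle cure kills virus",
                           "ministry reports updated statistics"])
    print(list(preds))
    ```

    In practice such models are trained on thousands of labelled articles and evaluated on held-out data; the reported 95%+ accuracy refers to that setting, not to a toy example like this one.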

    The Web of False Information: Rumors, Fake News, Hoaxes, Clickbait, and Various Other Shenanigans

    A new era of Information Warfare has arrived. Various actors, including state-sponsored ones, are weaponizing information on Online Social Networks to run false information campaigns with targeted manipulation of public opinion on specific topics. These false information campaigns can have dire consequences for the public: mutating their opinions and actions, especially with respect to critical world events like major elections. Evidently, the problem of false information on the Web is a crucial one, and needs increased public awareness, as well as immediate attention from law enforcement agencies, public institutions, and in particular, the research community. In this paper, we take a step in this direction by providing a typology of the Web's false information ecosystem, comprising various types of false information, actors, and their motives. We report a comprehensive overview of existing research on the false information ecosystem by identifying several lines of work: 1) how the public perceives false information; 2) understanding the propagation of false information; 3) detecting and containing false information on the Web; and 4) false information on the political stage. In this work, we pay particular attention to political false information as: 1) it can have dire consequences for the community (e.g., when election results are mutated) and 2) previous work shows that this type of false information propagates faster and further when compared to other types of false information. Finally, for each of these lines of work, we report several future research directions that can help us better understand and mitigate the emerging problem of false information dissemination on the Web

    Evidence of personality traits on phishing attack menace among selected university undergraduates in Nigeria

    Access ease, mobility, portability, and improved speed have continued to ease the adoption of computing devices, while also proliferating phishing attacks. These, in turn, have created mixed feelings, with increased adoption accompanied by a decline in users’ trust in these devices. The study recruited 480 students, who were exposed to socially-engineered attack directives. Attacks were designed to retrieve personal data and entice participants to access compromised links. We sought to determine the risks of cybercrimes among undergraduates in selected Nigerian universities, observe students’ responses, and explore their attitudes before/after each attack. Participants were primed to remain vigilant to all forms of scams as we sought to investigate the influence of gender, students’ status, age, and perceived safety on susceptibility to phishing. Results show that contrary to public beliefs, age, status, and gender were not among the factors associated with the scam susceptibility and vulnerability rates of the participants. However, the study reports decreased user trust levels in the adoption of these new, mobile computing devices
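    A null result like this (no association between demographics and susceptibility) is typically checked with a test of independence. A minimal sketch with SciPy follows; the contingency table below is invented for illustration and is not the study's data:

    ```python
    # Chi-square test of independence between gender and phishing susceptibility.
    # The counts below are hypothetical, chosen only to total 480 participants;
    # they are not the study's data.
    from scipy.stats import chi2_contingency

    #                 fell for scam   resisted
    observed = [[34, 206],            # male participants
                [30, 210]]            # female participants

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2={chi2:.3f}, p={p:.3f}, dof={dof}")
    # A large p-value (p > 0.05) would mirror the study's finding that gender
    # is not associated with susceptibility.
    ```

    The same test can be repeated for age bands or student status against the fell-for/resisted outcome; failing to reject independence for each factor corresponds to the paper's reported null result.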

    Social media, political polarization, and political disinformation: a review of the scientific literature

    The following report is intended to provide an overview of the current state of the literature on the relationship between social media; political polarization; and political “disinformation,” a term used to encompass a wide range of types of information about politics found online, including “fake news,” rumors, deliberately factually incorrect information, inadvertently factually incorrect information, politically slanted information, and “hyperpartisan” news. The review of the literature is provided in six separate sections, each of which can be read individually but that cumulatively are intended to provide an overview of what is known — and unknown — about the relationship between social media, political polarization, and disinformation. The report concludes by identifying key gaps in our understanding of these phenomena and the data that are needed to address them

    Combating Fake News on Social Media: A Framework, Review, and Future Opportunities

    Social media platforms facilitate the sharing of a vast magnitude of information in split seconds among users. However, some false information is also widely spread, generally referred to as “fake news”. This can have major negative impacts on individuals and societies. Unfortunately, people are often not able to correctly distinguish fake news from the truth. Therefore, there is an urgent need to find effective mechanisms to fight fake news on social media. To this end, this paper adapts the Straub Model of Security Action Cycle to the context of combating fake news on social media. It uses the adapted framework to classify the vast literature on fake news into action cycle phases (i.e., deterrence, prevention, detection, and mitigation/remedy). Based on a systematic and inter-disciplinary review of the relevant literature, we analyze the status and challenges in each stage of combating fake news, followed by introducing future research directions. These efforts allow the development of a holistic view of the research frontier on fighting fake news online. We conclude that this is a multidisciplinary issue; and as such, a collaborative effort from different fields is needed to effectively address this problem

    MARA and public user characteristics in response to phishing emails

    “Social Engineering” refers to the attacks that deceive, persuade and influence an individual to provide information or perform an action that will benefit the attackers. Fraudulent and deceptive individuals use social engineering traps and tactics through Social Networking Sites (SNSs) and electronic communication forms to trick users into obeying them, accepting threats, and falling victim to various silent crimes such as phishing, clickjacking, malware installation, sexual abuse, financial abuse, identity theft and physical crime. Although computers can enhance our work activities, e.g., through greater efficiency in document production and ease of communication, reliance on these benefits has been undermined by the introduction of social engineering threats. Phishing email results in significant losses, estimated at billions of dollars, to organisations and individual users every year. According to the 2019 statistics report from retruster.com, the average financial cost of a data breach is 3.8 million dollars, with 90% of it coming from phishing attacks on user accounts. To reduce users’ vulnerability to phishing emails, we need first to understand the users’ detection behaviour. Many research studies focus only on whether participants respond to phishing or not. A widely held view that we endorse is that this continuing challenge of email is not wholly technical in nature and thereby cannot be entirely resolved through technical measures. Instead, we have here a socio-technical problem whose resolution requires attention to both technical issues and end-users’ specific attitudes and behavioural characteristics. Using a sequential exploratory mixed method approach, qualitative grounded theory is used to explore and generate an in-depth understanding of what and why the phishing characteristics influence email users to judge the attacker as credible. Quantitative experiments are used to relate participants’ characteristics with their behaviour.
The study was carefully designed to ensure that valid data could be collected without harm to participants, and with University Ethics Committee approval. The research output is a new model to explain the impact of users’ characteristics on their detection behaviour. The model was tested through two study groups, namely Public and MARA. In addition, the final model was tested using structural equation modelling (SEM). This showed that the proposed model explains 17% and 39%, respectively, of the variance in Public and MARA participants’ tendency to respond to phishing emails. The results also explained which, and to what extent, phishing characteristics influence users’ judgement of sender credibility
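    The “variance explained” figures (17% and 39%) are R²-style statistics from the SEM fit. As a simplified, hypothetical illustration of what “the model explains X% of the variance” means (ordinary least squares on synthetic data, not a reproduction of the thesis's SEM), consider:

    ```python
    # Illustration of "variance explained" (R^2) using ordinary least squares
    # on synthetic data. A simplified stand-in for the SEM analysis in the
    # thesis, not a reproduction of it.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    # Hypothetical predictor (e.g. a trust score) and a noisy response
    # (e.g. tendency to respond to phishing emails).
    x = rng.normal(size=n)
    y = 0.5 * x + rng.normal(scale=1.0, size=n)

    # Fit y = a*x + b by least squares and compute R^2.
    a, b = np.polyfit(x, y, 1)
    residuals = y - (a * x + b)
    r2 = 1 - residuals.var() / y.var()
    print(f"R^2 = {r2:.2f}")  # fraction of variance in y explained by x
    ```

    An R² of 0.17 or 0.39 means the model's predictors account for 17% or 39% of the variability in the outcome, leaving most of the variance unexplained, which is common in behavioural studies.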