1,360 research outputs found

    Defending Elections Against Malicious Spread of Misinformation

    The integrity of democratic elections depends on voters' access to accurate information. However, modern media environments, which are dominated by social media, provide malicious actors with unprecedented ability to manipulate elections via misinformation, such as fake news. We study a zero-sum game between an attacker, who attempts to subvert an election by propagating a fake news story or other misinformation over a set of advertising channels, and a defender who attempts to limit the attacker's impact. Computing an equilibrium in this game is challenging, as even the pure strategy sets of the players are exponential. Nevertheless, we give provable polynomial-time approximation algorithms for computing the defender's minimax optimal strategy across a range of settings, encompassing different population structures as well as models of the information available to each player. Experimental results confirm that our algorithms provide near-optimal defender strategies and showcase variations in the difficulty of defending elections depending on the resources and knowledge available to the defender. (Comment: full version of a paper accepted to AAAI 2019.)
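
    As a hedged illustration of the minimax computation this abstract refers to: for a toy zero-sum game whose payoff matrix fits in memory, the defender's optimal mixed strategy is the solution of a small linear program. The 3x3 payoff matrix below is hypothetical, invented for illustration; in the paper's actual game the players' pure strategy sets are exponential, which is precisely why the authors need specialized approximation algorithms rather than this direct LP.

```python
# Minimax strategy for a toy zero-sum game via linear programming.
# payoff[i, j] is the attacker's payoff when the defender plays pure
# strategy i and the attacker plays pure strategy j (made-up numbers).
import numpy as np
from scipy.optimize import linprog

payoff = np.array([
    [0.2, 0.8, 0.6],
    [0.7, 0.1, 0.5],
    [0.4, 0.6, 0.3],
])
m, n = payoff.shape

# Decision variables z = (x_1, ..., x_m, v): minimize the game value v
# subject to (payoff^T x)_j <= v for every attacker action j,
# sum_i x_i = 1, and x >= 0.
c = np.zeros(m + 1)
c[-1] = 1.0
A_ub = np.hstack([payoff.T, -np.ones((n, 1))])    # payoff^T x - v <= 0
b_ub = np.zeros(n)
A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)  # the x_i sum to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]         # v may be negative

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print("defender mixed strategy:", np.round(res.x[:m], 3))
print("game value (best attacker response):", round(res.x[-1], 3))
```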

    Artificial Intelligence Crime: An Overview of Malicious Use and Abuse of AI

    The capabilities of Artificial Intelligence (AI) are evolving rapidly and affect almost all sectors of society. AI has been increasingly integrated into criminal and harmful activities, expanding existing vulnerabilities and introducing new threats. This article reviews the relevant literature, reports, and representative incidents to construct a typology of the malicious use and abuse of systems with AI capabilities. The main objective is to clarify the types of activities and their corresponding risks. Our starting point is to identify the vulnerabilities of AI models and outline how malicious actors can abuse them. Subsequently, we explore AI-enabled and AI-enhanced attacks. While we present a comprehensive overview, we do not aim for a conclusive and exhaustive classification; rather, we provide an overview of the risks posed by enhanced AI applications that contributes to the growing body of knowledge on the issue. Specifically, we suggest four types of malicious abuse of AI (integrity attacks, unintended AI outcomes, algorithmic trading, membership inference attacks) and four types of malicious use of AI (social engineering, misinformation/fake news, hacking, autonomous weapon systems). Mapping these threats enables more informed reflection on the governance strategies, policies, and activities that can be developed or improved to minimize risks and avoid harmful consequences. Enhanced collaboration among governments, industries, and civil society actors is vital to increase preparedness and resilience against the malicious use and abuse of AI.

    Defending Democracy: Taking Stock of the Global Fight Against Digital Repression, Disinformation, and Election Insecurity

    Amidst the regular drumbeat of reports about Russian attempts to undermine U.S. democratic institutions, from Twitter bots to cyber-attacks on Congressional candidates, it is easy to forget that the problem of election security is not isolated to the United States and extends far beyond safeguarding insecure voting machines. Consider Australia, which has long been grappling with repeated Chinese attempts to interfere with its political system. Yet Australia has taken a distinct approach to protecting its democratic institutions, including reclassifying its political parties as “critical infrastructure,” a step that the U.S. government has yet to take despite repeated breaches at both the Democratic and Republican National Committees. This Article analyzes the Australian approach to protecting its democratic institutions from Chinese influence operations and compares it to the U.S. response to Russian efforts. It then discusses how other cyber powers, including the European Union, have taken on the fight against digital repression and disinformation, and compares these practices to the particular vulnerabilities of Small Pacific Island Nations. Such a comparative study is vital to help build resilience, and trust, in democratic systems on both sides of the Pacific. We argue that a multifaceted approach is needed to build more resilient and sustainable democratic systems, one that encompasses both targeted reforms focusing on election infrastructure security, such as requiring paper ballots and risk-limiting audits, and deeper structural interventions to limit the spread of misinformation and combat digital repression.

    Managing the Misinformation Marketplace: The First Amendment and the Fight Against Fake News

    In recent years, fake news has overtaken the internet. Fake news publishers are able to disseminate false stories widely and cheaply on social media websites, amassing millions of likes, comments, and shares, with some fake news even “trending” on certain platforms. The ease with which a publisher can create and spread falsehoods has led to a marketplace of misinformation unprecedented in size and power. People’s vulnerability to fake news means that they are far less likely to receive accurate political information and are therefore unable to make informed decisions when voting. Because a democratic system relies on an informed populace to determine how it should act, fake news presents a unique threat to U.S. democracy. Although fake news threatens democratic institutions, First Amendment protections for false speech present a significant obstacle for regulatory remedies. This Note explores the ways these speech protections interfere with the government’s ability to protect political discourse—the process that enables it to function effectively—and proposes that the government regulate journalists to ensure that people can rely on legitimate news media to receive accurate information.

    Next-Generation Technology and Electoral Democracy: Understanding the Changing Environment

    Democracies around the world are facing growing threats to their electoral systems in the digital age. Foreign interference in the form of dis- and misinformation has already influenced the results of democratic elections and altered the course of history. This special report, the result of a research project conducted in partnership with the Konrad-Adenauer-Stiftung (KAS) Canada, examines these cyberthreats from a Canadian and German perspective. Both Canada and Germany share common goals centred around protecting human rights, democracy and the rule of law, and international peace and security. Using case studies from experts in fields such as computer science, law and public policy, the special report offers recommendations to guide policy makers and stakeholders on how to protect elections from next-generation technologies and the threats they pose to democracy.

    Still Waters Run Deep(fakes): The Rising Concerns of “Deepfake” Technology and Its Influence on Democracy and the First Amendment

    This Note explores how deepfake technology can disrupt democracy and influence elections by exploiting the protections given to political speech under the First Amendment. Part II describes deepfakes in greater detail and identifies the wide range of uses for deepfake technology. Part III reflects on how the federal government and states are attempting to regulate deepfakes, mainly to protect individuals from pornographic exploitation and election tampering. Finally, Part IV discusses in detail how the First Amendment creates constitutional barriers to regulating deepfakes.

    Living With Lies

    Disinformation and misinformation have a significant impact on societies and political systems in Latin America and the Caribbean. This issue examines the challenges of combatting fake news and how this process has evolved during the COVID-19 pandemic. Guest editor Ricardo Trotti, Executive Director of the Inter American Press Association, invites contributing authors to analyze issues that arise from misleading health news, disinformation surrounding political and electoral processes, and the need for public education to ensure accurate and sustainable media.

    Arming the public with artificial intelligence to counter social bots

    The increased relevance of social media in our daily life has been accompanied by efforts to manipulate online conversations and opinions. Deceptive social bots -- automated or semi-automated accounts designed to impersonate humans -- have been successfully exploited for these kinds of abuse. Researchers have responded by developing AI tools to arm the public in the fight against social bots. Here we review the literature on different types of bots, their impact, and detection methods. We use the case study of Botometer, a popular bot detection tool developed at Indiana University, to illustrate how people interact with AI countermeasures. A user experience survey suggests that bot detection has become an integral part of the social media experience for many users. However, barriers in interpreting the output of AI tools can lead to fundamental misunderstandings. The arms race between machine learning methods used to develop sophisticated bots and effective countermeasures makes it necessary to continually update the training data and features of detection tools. We again use the Botometer case to illustrate both algorithmic and interpretability improvements of bot scores, designed to meet user expectations. We conclude by discussing how future AI developments may affect the fight between malicious bots and the public. (Comment: published in Human Behavior and Emerging Technologies.)
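
    Since this abstract centers on machine-learning bot detection, here is a minimal sketch of the supervised approach such tools take, assuming a labeled set of accounts. Every feature name and all data below are synthetic stand-ins invented for illustration; real detectors like Botometer rely on far richer feature sets and, as the abstract notes, must be retrained as bot behavior evolves.

```python
# Illustrative supervised bot detection: train a classifier on a few
# account-level features and report a continuous bot score, mirroring how
# tools like Botometer expose a score rather than a hard bot/human label.
# All features and labels here are synthetic, not real social media data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-account features: posting rate, follower/friend ratio,
# fraction of posts that are retweets, and account age in days.
X = np.column_stack([
    rng.exponential(20.0, n),    # posts_per_day
    rng.exponential(1.0, n),     # follower_friend_ratio
    rng.uniform(0.0, 1.0, n),    # retweet_fraction
    rng.uniform(1.0, 3000.0, n), # account_age_days
])
# Synthetic ground truth: "bots" here simply post and retweet heavily.
y = ((X[:, 0] > 30) & (X[:, 2] > 0.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# predict_proba yields a score in [0, 1]; thresholding is left to the user,
# which is exactly where the interpretability barriers noted above arise.
scores = clf.predict_proba(X_te)[:, 1]
print("held-out ROC AUC:", round(roc_auc_score(y_te, scores), 3))
```

    Retraining such a pipeline on fresh labeled accounts as bots evolve is the maintenance step the abstract's "arms race" refers to.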

    Digital Civic Participation and Misinformation during the 2020 Taiwanese Presidential Election

    From fact-checking chatbots to community-maintained misinformation databases, Taiwan has emerged as a critical case study for citizen participation in politics online. Due to Taiwan’s geopolitical history with China, the 2020 Taiwanese Presidential Election brought fierce levels of online engagement led by citizens from both sides of the strait. In this article, we study misinformation and digital participation on three platforms, namely Line, Twitter, and Taiwan’s Professional Technology Temple (PTT, Taiwan’s equivalent of Reddit). Each of these platforms presents a different facet of the election. Results reveal that the greatest level of disagreement occurs in discussion about incumbent president Tsai. Chinese users demonstrate emergent coordination and selective discussion around topics like China, Hong Kong, and President Tsai, whereas topics like COVID-19 are avoided. We discover an imbalance in the political presence of Tsai on Twitter, which suggests partisan practices in disinformation regulation. The cases of Taiwan and China point toward a growing trend in which regular citizens, enabled by new media, can both exacerbate and hinder the flow of misinformation. The study highlights an overlooked aspect of misinformation studies beyond the veracity of information itself: the clash of ideologies, practices, and cultural histories that matters to democratic ideals.