
    Arming the public with artificial intelligence to counter social bots

    The increased relevance of social media in our daily life has been accompanied by efforts to manipulate online conversations and opinions. Deceptive social bots -- automated or semi-automated accounts designed to impersonate humans -- have been successfully exploited for these kinds of abuse. Researchers have responded by developing AI tools to arm the public in the fight against social bots. Here we review the literature on different types of bots, their impact, and detection methods. We use the case study of Botometer, a popular bot detection tool developed at Indiana University, to illustrate how people interact with AI countermeasures. A user experience survey suggests that bot detection has become an integral part of the social media experience for many users. However, barriers in interpreting the output of AI tools can lead to fundamental misunderstandings. The arms race between machine learning methods to develop sophisticated bots and effective countermeasures makes it necessary to update the training data and features of detection tools. We again use the Botometer case to illustrate both algorithmic and interpretability improvements of bot scores, designed to meet user expectations. We conclude by discussing how future AI developments may affect the fight between malicious bots and the public. Comment: Published in Human Behavior and Emerging Technologies.
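    The abstract describes bot detection as supervised classification whose training data and features must be kept up to date. A minimal sketch of that idea follows, assuming a random-forest classifier over account-level features; the feature names, thresholds, and toy data here are illustrative assumptions, not Botometer's actual feature set or model.

    ```python
    # Hypothetical sketch of a supervised bot-score classifier in the spirit of
    # tools like Botometer. Features and data are illustrative, not the real ones.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(42)
    n = 200

    # Toy account features: [followers/friends ratio, posts per day, account age in days]
    humans = np.column_stack([
        rng.normal(1.0, 0.3, n),    # balanced follower ratio
        rng.normal(5, 2, n),        # moderate activity
        rng.normal(1500, 400, n),   # older accounts
    ])
    bots = np.column_stack([
        rng.normal(0.1, 0.05, n),   # few followers relative to friends
        rng.normal(80, 20, n),      # very high posting rate
        rng.normal(60, 30, n),      # recently created accounts
    ])
    X = np.vstack([humans, bots])
    y = np.array([0] * n + [1] * n)  # 0 = human, 1 = bot

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # A "bot score" is the predicted probability of the bot class for an account.
    suspect = np.array([[0.08, 90.0, 30.0]])   # bot-like feature profile
    bot_score = clf.predict_proba(suspect)[0, 1]
    print(f"bot score: {bot_score:.2f}")
    ```

    The arms race mentioned in the abstract corresponds to periodically refitting `clf` on newly labeled accounts and revising the feature set as bot behavior evolves.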

    Dismiss: an approach for sociotechnical analysis of digital misinformation

    Advisor: Dr. Roberto Pereira. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 28/08/2023. Includes references. Area of concentration: Computer Science.
    Abstract: This thesis addresses the challenge of understanding and dealing with digital misinformation as a sociotechnical phenomenon, meaning that it involves both aspects of the technologies used for communication and the human/social context in which misinformation occurs. The results of our systematic literature review showed that designers of interventions for mitigating misinformation face difficulties in dealing with the sociotechnical nature of the phenomenon. They tend to employ disciplinary approaches focused on the technical aspects of misinformation and often address the phenomenon in a fragmented manner. These difficulties can lead designers to overlook relevant aspects for understanding the phenomenon and result in potentially harmful solutions, such as censorship or invasive warnings. In this regard, this thesis investigates means to support designers in comprehending the phenomenon from a sociotechnical perspective, helping to characterize cases of digital misinformation and aiding in a comprehensive understanding of the issues. As a solution, this thesis presents Dismiss - an Approach for Sociotechnical Analysis of Digital Misinformation.
    Dismiss is grounded in Organizational Semiotics and comprises the Conceptual Model of the Digital Misinformation Lifecycle, artifacts, and supporting materials that underpin the sociotechnical analysis of misinformation. The approach serves as an epistemic tool designed to facilitate users' reflection on the circumstances in which misinformation occurs, assisting in understanding the origins and consequences of digital misinformation. Dismiss was constructively evaluated throughout its development, utilizing focus group methods (11 meetings), small-scale studies (7 cases), and workshops for the sociotechnical analysis of digital misinformation cases with representatives of the target audience (3 workshops). The results from the focus groups and small-scale studies informed the refinement of the approach, its structure, components, and application methods. The workshop results indicate the perceived utility of the approach in supporting the understanding of misinformation as a sociotechnical phenomenon. The results also highlighted aspects that can be improved in Dismiss, such as the number of steps, artifact explanations, and the density of supporting materials, providing insights for enhancement.

    Deception strategies and threats for online discussions

    Communication plays a major role in social systems. Effective communication, which requires transmitting messages between individuals without disruption or noise, can be a powerful tool to deliver intended impact. Language and style of content can be leveraged to deceive and manipulate recipients. These deception and persuasion strategies can be applied to exert power and amass capital in politics and business. In this work, we provide a modest review of how such deception and persuasion strategies were applied to different communication channels over time. We provide examples of campaigns that occurred over the last century, together with their corresponding dissemination media. In the Internet age, we enjoy access to a vast amount of information and the ability to communicate without borders. However, malicious actors work toward abusing online systems to disseminate disinformation, disrupt communication, and manipulate individuals with automated tools such as social bots. It is important to study traditional practices of persuasion in order to investigate modern procedures and tools. We provide a discussion of current threats against society while drawing parallels with historical practices and recent research on systems of detection and prevention.

    State protection against opinion bots ("Meinungsroboter"): (constitutional) legal considerations on a state duty to protect political opinion formation in social networks from the influence of opinion bots

    The state's (regulatory) responsibility, for instance with regard to social network algorithms, has long been debated. But what if third parties exploit the networks for political agitation, with numerous (semi-)automated user accounts attempting to influence information diffusion and communication? Is the state then also called upon as guarantor of political opinion formation? This work attempts to answer this primarily constitutional question, taking into account foundations from social psychology and communication science and drawing on fundamental-rights duties of protection. It derives a corresponding abstract responsibility from the protected interests of the fundamental communication rights and examines whether the state, in particular through the Medienstaatsvertrag (Interstate Media Treaty), fulfills this responsibility in a (constitutionally) convincing manner.