5 research outputs found

    BoostNet: Bootstrapping detection of socialbots, and a case study from Guatemala

    We present a method to reconstruct networks of socialbots given minimal input. We then use Kernel Density Estimates of Botometer scores from 47,000 social networking accounts to find clusters of automated accounts, discovering over 5,000 socialbots. This statistical, data-driven approach allows thresholds for socialbot detection to be inferred, as illustrated in a case study we present from Guatemala. Comment: 7 pages, 4 figures.
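The KDE-based threshold idea can be sketched as follows. This is not the authors' code: the data here are synthetic and the specific libraries (`scipy.stats.gaussian_kde`, `scipy.signal.argrelmin`) are assumptions; the point is only that a local minimum of the estimated density between a "human" mode and a "bot" mode yields a data-driven detection threshold.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelmin

rng = np.random.default_rng(0)
# Synthetic Botometer-like scores in [0, 1]: a large human cluster
# around 0.2 and a smaller automated cluster around 0.8.
scores = np.clip(np.concatenate([
    rng.normal(0.2, 0.08, 900),
    rng.normal(0.8, 0.06, 100),
]), 0, 1)

# Fit a kernel density estimate and evaluate it on a fine grid.
kde = gaussian_kde(scores)
grid = np.linspace(0, 1, 512)
density = kde(grid)

# Local minima of the density separate the clusters; take the deepest
# one as the inferred detection threshold.
minima = argrelmin(density)[0]
threshold = grid[minima[np.argmin(density[minima])]]
n_bots = int((scores >= threshold).sum())
```

Accounts scoring at or above `threshold` would be flagged as likely automated; on real data one would inspect the density plot to confirm the clusters are genuinely separated before trusting the cut-off.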

    Human Supremacy as Posthuman Risk

    Human supremacy is the widely held view that human interests ought to be privileged over other interests as a matter of public policy. Posthumanism is a historical and cultural situation characterized by a critical reevaluation of anthropocentrist theory and practice. This paper draws on Rosi Braidotti’s critical posthumanism and the critique of ideal theory in Charles Mills and Serene Khader to address the use of human supremacist rhetoric in AI ethics and policy discussions, particularly in the work of Joanna Bryson. This analysis leads to identifying a set of risks posed by human supremacist policy in a posthuman context, specifically involving the classification of agents by type.

    A governance framework for algorithmic accountability and transparency

    Algorithmic systems are increasingly being used as part of decision-making processes in both the public and private sectors, with potentially significant consequences for individuals, organisations and societies as a whole. Algorithmic systems in this context refer to the combination of algorithms, data and the interface process that together determine the outcomes that affect end users. Many types of decisions can be made faster and more efficiently using algorithms. A significant factor in the adoption of algorithmic systems for decision-making is their capacity to process large amounts of varied data sets (i.e. big data), which can be paired with machine learning methods in order to infer statistical models directly from the data. The same properties of scale, complexity and autonomous model inference, however, are linked to increasing concerns that many of these systems are opaque to the people affected by their use and lack clear explanations for the decisions they make. This lack of transparency risks undermining meaningful scrutiny and accountability, which is a significant concern when these systems are applied as part of decision-making processes that can have a considerable impact on people's human rights (e.g. critical safety decisions in autonomous vehicles; allocation of health and social service resources, etc.). This study develops policy options for the governance of algorithmic transparency and accountability, based on an analysis of the social, technical and regulatory challenges posed by algorithmic systems. Based on a review and analysis of existing proposals for governance of algorithmic systems, a set of four policy options is proposed, each of which addresses a different aspect of algorithmic transparency and accountability: 1. awareness raising: education, watchdogs and whistleblowers; 2. accountability in public-sector use of algorithmic decision-making; 3. regulatory oversight and legal liability; and 4. global coordination for algorithmic governance.

    Staatlicher Schutz vor Meinungsrobotern: (verfassungs-)rechtliche Überlegungen zu einer staatlichen Schutzpflicht vor Einflüssen von Meinungsrobotern auf die politische Willensbildung in sozialen Netzwerken

    The state's (regulatory) responsibility regarding, among other things, social network algorithms has long been debated. But what if the networks themselves are exploited for political agitation by third parties, with numerous (semi-)automated user accounts attempting to influence information diffusion and communication? Is the state then also called upon here as guarantor of political opinion formation? This work attempts to answer this primarily constitutional-law question, taking into account foundations from social psychology and communication science and drawing on the fundamental-rights duties of protection. It derives a corresponding abstract responsibility from the interests protected by the communication-related fundamental rights and examines whether the state, in particular through the Interstate Media Treaty (Medienstaatsvertrag), fulfils this responsibility in a (constitutionally) convincing manner.