
    Society-in-the-Loop: Programming the Algorithmic Social Contract

    Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug, and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems, and monitoring compliance with the agreement. In short, 'SITL = HITL + Social Contract.' (In press, Ethics and Information Technology.)
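
    The formula is easiest to see as a control loop. Below is a minimal sketch, in Python, of how 'SITL = HITL + Social Contract' might be operationalized; the negotiation rule, thresholds, and all function names are illustrative assumptions rather than the article's own formalism. A HITL override handles individual decisions, while the negotiated contract is audited at the aggregate level.

        # Hypothetical sketch of 'SITL = HITL + Social Contract'; every name
        # and threshold here is an illustrative assumption.

        def negotiate_contract(proposals):
            """Toy negotiation: each stakeholder proposes a maximum acceptable
            error rate, and the contract adopts the strictest one."""
            return {"max_error_rate": min(proposals.values())}

        def algorithmic_decision(case):
            """Stand-in for the governed algorithm (e.g., a classifier)."""
            return case["score"] > 0.5

        def human_in_the_loop(decision, case):
            """HITL: a human operator can override individual decisions."""
            if case.get("flagged_for_review"):
                return case["human_verdict"]
            return decision

        def monitor_compliance(errors, total, contract):
            """The SITL addition: ongoing auditing of the agreed values."""
            return errors / max(total, 1) <= contract["max_error_rate"]

        contract = negotiate_contract({"users": 0.10, "regulator": 0.05, "platform": 0.15})
        cases = [
            {"score": 0.9, "label": True},
            {"score": 0.4, "label": True, "flagged_for_review": True, "human_verdict": True},
        ]
        errors = sum(human_in_the_loop(algorithmic_decision(c), c) != c["label"] for c in cases)
        print("contract upheld:", monitor_compliance(errors, len(cases), contract))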

    Recommender systems and their ethical challenges

    This article presents the first systematic analysis of the ethical challenges posed by recommender systems, conducted through a literature review. The article identifies six areas of concern and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: current user-centred approaches do not consider the interests of a variety of other stakeholders (as opposed to just the receivers of a recommendation) in assessing the ethical impacts of a recommender system.
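
    To make the identified gap concrete, here is a minimal sketch of scoring a single recommendation against several stakeholders rather than the receiver alone; the stakeholder list, utility values, and weights are illustrative assumptions, not the article's taxonomy.

        # Hypothetical multi-stakeholder scoring of one recommendation;
        # stakeholders, utilities, and weights are illustrative assumptions.

        def multi_stakeholder_score(utilities, weights):
            """Weighted sum of per-stakeholder utilities in [0, 1]. A purely
            user-centred evaluation would keep only the 'receiver' term."""
            return sum(weights[s] * utilities[s] for s in weights)

        utilities = {
            "receiver": 0.9,  # relevance to the user receiving the recommendation
            "provider": 0.2,  # exposure for the item's creator
            "society": 0.4,   # e.g., diversity of viewpoints surfaced
        }
        weights = {"receiver": 0.5, "provider": 0.25, "society": 0.25}
        print(multi_stakeholder_score(utilities, weights))  # 0.6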

    Platforms, the First Amendment and Online Speech: Regulating the Filters

    In recent years, online platforms have given rise to multiple discussions about what their role is, what their role should be, and whether they should be regulated. The complex nature of these private entities makes it very challenging to place them in a single descriptive category with existing rules. In today’s information environment, social media platforms have become a platform press by providing hosting as well as navigation and delivery of public expression, much of which is done through machine learning algorithms. This article argues that there is a subset of algorithms that social media platforms use to filter public expression, which can be regulated without constitutional objections. A distinction is drawn between algorithms that curate speech for hosting purposes and those that curate for navigation purposes, and it is argued that content navigation algorithms, because of their function, deserve separate constitutional treatment. By analyzing the platforms’ functions independently from one another, this paper constructs a doctrinal and normative framework that can be used to navigate some of the complexity. The First Amendment makes it problematic to interfere with how platforms decide what to host, because algorithms that implement content moderation policies perform functions analogous to an editorial role when deciding whether content should be censored or allowed on the platform. Content navigation algorithms, on the other hand, do not face the same doctrinal challenges; they operate outside of public discourse as mere information conduits and are thus not subject to core First Amendment doctrine. Their function is to facilitate the flow of information to an audience, which in turn participates in public discourse; if they have any constitutional status, it is derived from the value they provide to their audience as a delivery mechanism for information. This article asserts that we should regulate content navigation algorithms to a limited extent. They undermine the notion of autonomous choice in the selection and consumption of content, and their role in today’s information environment is not aligned with a functioning marketplace of ideas and the prerequisites for citizens in a democratic society to perform their civic duties. The paper concludes that any regulation directed at content navigation algorithms should be subject to a lower standard of scrutiny, similar to the standard for commercial speech.

    Artificial intelligence and UK national security: Policy considerations

    RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security. The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, the use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure that the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.
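
    As a toy illustration of the capability described (and only that; the datasets, field names, and records below are invented), the following sketch joins two small datasets on shared identifiers to surface indirect connections between people:

        # Toy illustration of cross-dataset link discovery; the datasets,
        # field names, and records are invented for the example.

        from collections import defaultdict

        travel = [{"person": "A", "flight": "F1"}, {"person": "B", "flight": "F1"}]
        finance = [{"person": "B", "account": "X"}, {"person": "C", "account": "X"}]

        links = defaultdict(set)
        for row in travel:
            links[("flight", row["flight"])].add(row["person"])
        for row in finance:
            links[("account", row["account"])].add(row["person"])

        # Any shared flight or account connecting more than one person is
        # surfaced as a candidate lead for a human analyst to assess.
        for key, people in links.items():
            if len(people) > 1:
                print(key, sorted(people))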

    Automation in Moderation

    This Article assesses recent efforts to encourage online platforms to use automated means to prevent the dissemination of unlawful online content before it is ever seen or distributed. As lawmakers in Europe and around the world closely scrutinize platforms’ “content moderation” practices, automation and artificial intelligence appear increasingly attractive options for ridding the Internet of many kinds of harmful online content, including defamation, copyright infringement, and terrorist speech. Proponents of these initiatives suggest that requiring platforms to screen user content using automation will promote healthier online discourse and will aid efforts to limit Big Tech’s power. In fact, however, the regulations that incentivize platforms to use automation in content moderation come with unappreciated costs for civil liberties and unexpected benefits for platforms. The new automation techniques exacerbate existing risks to free speech and user privacy and create ripe new sources of information for surveillance, aggravating threats to free expression, associational rights, religious freedoms, and equality. Automation also worsens transparency and accountability deficits. Far from curtailing private power, the new regulations endorse and expand platform authority to police online speech, with little in the way of oversight and few countervailing checks. New regulations of online intermediaries should therefore incorporate checks on the use of automation to avoid exacerbating these dynamics. Carefully drawn transparency obligations, algorithmic accountability mechanisms, and procedural safeguards can help to ameliorate the effects of these regulations on users and competition.
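
    For readers unfamiliar with how such screening operates, a minimal sketch follows; the classifier, threshold, and log format are hypothetical stand-ins, not any platform's actual pipeline, but they show how the transparency records the Article calls for could attach to each automated decision:

        # Hypothetical pre-publication screening pipeline with an audit log;
        # the classifier, threshold, and log format are invented stand-ins.

        import json

        def toy_classifier(text):
            """Stand-in for an ML model scoring how likely content is unlawful."""
            return 0.9 if "forbidden" in text.lower() else 0.1

        BLOCK_THRESHOLD = 0.8
        audit_log = []

        def screen(post):
            score = toy_classifier(post)
            decision = "blocked" if score >= BLOCK_THRESHOLD else "published"
            # Recording every automated decision is the kind of transparency
            # obligation that makes later accountability review possible.
            audit_log.append({"post": post, "score": score, "decision": decision})
            return decision

        print(screen("hello world"))          # published
        print(screen("some forbidden text"))  # blocked
        print(json.dumps(audit_log, indent=2))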

    Algorithms and Fundamental Rights: The Case of Automated Online Filters

    The information that we see on the internet is increasingly tailored by automated ranking and filtering algorithms used by online platforms, which significantly interfere with the exercise of fundamental rights online, particularly the freedom of expression and information. The EU’s regulation of the internet prohibits general monitoring obligations. The paper first analyses the CJEU’s case law, which has long resisted attempts to require internet intermediaries to use automated software filters to remove infringing user uploads. This is followed by an analysis of Article 17 of the Directive on Copyright in the Digital Single Market, which effectively requires online platforms to use automated filtering to ensure the unavailability of unauthorized copyrighted content. The Commission’s guidance and the Advocate General’s opinion in the annulment action are discussed. The conclusion is that the regulation of the filtering algorithms themselves will be necessary to prevent private censorship and protect fundamental rights online.
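
    A minimal sketch of the kind of filtering duty at issue, assuming a naive exact-hash match (deployed systems use perceptual fingerprinting instead, precisely because exact hashes miss transformed copies):

        # Naive exact-hash upload filter; the hashing scheme and names are
        # illustrative assumptions, not the Directive's prescribed design.

        import hashlib

        REFERENCE_HASHES = {hashlib.sha256(b"protected work").hexdigest()}

        def filter_upload(data: bytes) -> str:
            digest = hashlib.sha256(data).hexdigest()
            # A match with a registered work makes the upload unavailable
            # automatically, with no judgment about quotation, parody, etc.
            return "unavailable" if digest in REFERENCE_HASHES else "available"

        print(filter_upload(b"protected work"))    # unavailable
        print(filter_upload(b"original content"))  # available

    The sketch also shows why the concern is structural: whatever sits in the reference set is made unavailable automatically, without context-sensitive judgment about lawful uses.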

    Countering Personalized Speech

    Social media platforms use personalization algorithms to make content curation decisions for each end user. These personalized recommendation decisions are essentially speech conveying a platform's predictions on content relevance for each end user. Yet they are causing some of the worst problems on the internet. First, they facilitate the precipitous spread of mis- and disinformation by exploiting the very same biases and insecurities that drive end user engagement with such content. Second, they exacerbate social media addiction and related mental health harms by leveraging users' affective needs to drive engagement to greater and greater heights. Lastly, they erode end user privacy and autonomy as both sources and incentives for data collection. As with any harmful speech, the solution is often counterspeech. Free speech jurisprudence considers counterspeech the most speech-protective weapon to combat false or harmful speech. Thus, to combat problematic recommendation decisions, social media platforms, policymakers, and other stakeholders should embolden end users to use counterspeech to reduce the harmful effects of platform personalization. One way to implement this solution is through end user personalization inputs. These inputs reflect end user expression about a platform's recommendation decisions. However, industry-standard personalization inputs are failing to provide effective countermeasures against problematic recommendation decisions. On most, if not all, major social media platforms, the existing inputs confer only limited ex post control over the platform's recommendation decisions. In order for end user personalization to achieve the promise of counterspeech, I make several proposals along key regulatory modalities, including revising the architecture of personalization inputs to confer robust ex ante capabilities that filter by content type and characteristics.
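
    The architectural proposal can be illustrated with a toy feed ranker; the content-type labels, scores, and field names are invented for illustration. An ex ante input removes a declared content type before ranking, rather than reacting to items after they are served:

        # Toy feed ranker with an ex ante personalization input; the content
        # types, scores, and field names are invented for illustration.

        candidates = [
            {"id": 1, "type": "news", "score": 0.90},
            {"id": 2, "type": "rage_bait", "score": 0.95},
            {"id": 3, "type": "sports", "score": 0.70},
        ]

        # The user declares, up front, content types the ranker may not serve,
        # instead of clicking "not interested" on items after the fact.
        user_exclusions = {"rage_bait"}

        feed = sorted(
            (c for c in candidates if c["type"] not in user_exclusions),
            key=lambda c: c["score"],
            reverse=True,
        )
        print([c["id"] for c in feed])  # [1, 3]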