5 research outputs found

    Design Recommendations for Safer Election Campaigning Online

    No full text
    The internet is a place where the political opinion of voters is increasingly formed on platforms and their user-generated content worldwide. It is a sphere in which the rights to freedom of expression and information and to free and fair elections are core human rights safeguards for our democracy. Securing this process is not an easy task, as examples like the 2016 US election, the Brexit campaign or the events of 6 January 2021 illustrate. The European Union has taken regulatory action to secure the digital manifestations of elections by issuing legislation such as the General Data Protection Regulation (GDPR), the Artificial Intelligence Act (AIA), the Digital Services Act (DSA) and the Proposal for a Regulation of the European Parliament and of the Council on the transparency and targeting of political advertising. The aim is to make platforms more transparent regarding the algorithms that decide on recommendations or ad pricing, and to standardize content moderation to a certain degree. Platforms, on the other hand, use their Terms of Service (ToS) to implement their Community Standards – a selection of law-like clauses allowing for the deletion or blocking of content – and thereby set quasi-norms to safeguard democracy online. The ToS used by very large online platforms (VLOPs) under Art. 25 DSA, however, do not include granular clauses for European campaigning. More recent design solutions on platforms include advertising repositories and warning labels attached to problematic content to better inform the public. However, the moderation of content addressing the heart of democracy and the democratic process per se is crucial for the status of human rights in Europe. The first decision taken on a piece of content – whether it should be uploaded to the platform or not – is usually automated and controlled by machine learning algorithms. The system in place selects the pieces of content that will be decided upon by a human in the next process step.
The moderation of political speech, however, is not solely text-based: it includes subtle sarcasm, emojis and visual content, which poses a further obstacle to moderation in an electoral context. This article therefore asks how to better safeguard the right to free and fair elections and the rights to freedom of expression and information in online campaigning and elections, in adherence to recent European legislation such as the GDPR, the DSA, the AIA and the proposal on the transparency and targeting of political advertising. The article answers this question by taking a closer look at the data published by platforms in their transparency reports. Furthermore, the ToS and Community Standards are analyzed and compared. The process and architecture of content moderation for the selected online platforms are described and modelled according to the publicly available information. Only by taking a more concrete look at content moderation design and practice can better solutions for the digital future of democracy be crafted.
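The first automated decision step described above can be sketched as a simple two-threshold pipeline. The following is a minimal illustration, not any platform's actual system; the classifier, thresholds and flagged terms are hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "keep", "remove", or "human_review"
    score: float  # classifier confidence that the content violates the ToS

def classify(text: str) -> float:
    """Stand-in for the platform's ML classifier (hypothetical).
    Returns a violation probability in [0, 1] based on flagged terms."""
    flagged_terms = {"spam", "scam"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)

def moderate(text: str, remove_at: float = 0.9, review_at: float = 0.5) -> Decision:
    """First automated decision step: clear cases are decided by the
    model, borderline cases are routed to a human moderator."""
    score = classify(text)
    if score >= remove_at:
        return Decision("remove", score)
    if score >= review_at:
        return Decision("human_review", score)
    return Decision("keep", score)
```

In practice the classifier would be a trained model and the thresholds would be tuned per policy area; the point is only that clear-cut cases are decided automatically, while borderline content is routed to the human review queue described in the abstract.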

    Delete or not to Delete: Methodological Reflections on Content Moderation

    No full text
    Content moderation protects human rights such as freedom of speech, as well as the right to impart and seek information. Online platforms implement rules to moderate content through their Terms of Service (ToS), which provide the legal grounds to delete content. Content moderation is an example of a socio-technical process. The architecture includes a layer that classifies content according to the ToS, followed by human moderation for selected pieces of content. New regulatory approaches, such as the Digital Services Act (DSA) or the Artificial Intelligence Act (AIA), demand more transparency and explainability for moderation systems and the decisions taken. This article therefore answers questions about the socio-technical sphere of human moderation:
    • How is certainty about content moderation decisions perceived within the moderation process?
    • How does the measurement of time affect content moderators’ work?
    • How much context is needed to take a content moderation decision?
    A sample of 1,600 pieces of content was coded according to international and national law, as well as the Community Standards developed by Meta, mimicking a content moderation scenario that includes a lex specialis for content moderation – the German Network Enforcement Act (NetzDG).

    Tough Decisions? Supporting System Classification According to the AI Act

    No full text
    The AI Act represents a significant legislative effort by the European Union to govern the use of AI systems according to different risk-related classes, linking varying degrees of compliance obligations to a system's classification. However, it is often critiqued for the lack of general public comprehension and for the difficulty of classifying AI systems into the corresponding risk classes. To mitigate these shortcomings, we propose a decision-tree-based framework aimed at increasing robustness, legal compliance and classification clarity under the Regulation. Quantitative evaluation shows that our framework is especially useful to individuals without a legal background, allowing them to considerably improve the accuracy and significantly reduce the time of case classification.
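A decision-tree framework of this kind can be illustrated with the AI Act's risk tiers. The sketch below is a drastically simplified stand-in for the framework described in the abstract: the function, its parameters and the three yes/no tests are illustrative assumptions, not the Regulation's full criteria.

```python
def classify_ai_system(practice_prohibited: bool,
                       annex_iii_use_case: bool,
                       interacts_with_humans: bool) -> str:
    """Simplified decision tree over the AI Act's risk tiers
    (illustrative only; the Regulation's actual tests are more nuanced).

    - prohibited practices (Art. 5)                    -> "prohibited"
    - Annex III high-risk use cases                    -> "high-risk"
    - systems interacting with people (transparency
      duties, e.g. chatbots)                           -> "limited risk"
    - everything else                                  -> "minimal risk"
    """
    if practice_prohibited:
        return "prohibited"
    if annex_iii_use_case:
        return "high-risk"
    if interacts_with_humans:
        return "limited risk"
    return "minimal risk"
```

The appeal of such a tree for non-lawyers is that each node asks one concrete question, so a classification can be reached (and audited) step by step rather than by reading the Regulation as a whole.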

    The 2021 German Federal Election on Social Media: Analysing Electoral Risks Created by Twitter and Facebook

    No full text
    Safeguarding democratic elections is hard. Social media plays a vital role in the discourse around elections and during electoral campaigns. The following article analyses the ‘systemic electoral risks’ created by Twitter and Facebook and the mitigation strategies employed by the platforms, based on the European Commission's 2020 proposal for the new Digital Services Act (DSA). It focuses on the roles of Twitter and Facebook during the German federal elections that took place on 26 September 2021. We analysed three systemic electoral risk categories – 1) the dissemination of illegal content, 2) negative effects on electoral rights, and 3) the influence of disinformation – and developed systematic categories for this purpose. In conclusion, we discuss how to respond to these challenges as well as avenues for future research.

    Increasing Fairness in Targeted Advertising. The Risk of Gender Stereotyping by Job Ad Algorithms

    No full text
    Who gets to see what on the internet? And who decides why? These are among the most crucial questions regarding online communication spaces – and they especially apply to job advertising online. Targeted advertising on online platforms offers advertisers the chance to deliver ads to carefully selected audiences. Yet optimizing job ads for relevance also carries risks – from problematic gender stereotyping to potential algorithmic discrimination. The winter 2021 Clinic Increasing Fairness in Targeted Advertising: The Risk of Gender Stereotyping by Job Ad Algorithms examined the ethical implications of targeted advertising, with a view to developing feasible, fairness-oriented solutions. The virtual Clinic brought together twelve fellows from six continents and eight disciplines. During two intense weeks in February 2021, they participated in an interdisciplinary solution-oriented process facilitated by a project team at the Alexander von Humboldt Institute for Internet and Society. The fellows also had the chance to learn from and engage with a number of leading experts on targeted advertising, who joined the Clinic for thought-provoking spark sessions. The objective of the Clinic was to produce actionable outputs that contribute to improving fairness in targeted job advertising. To this end, the fellows developed three sets of guidelines – this resulting document – that cover the whole targeted advertising spectrum. While the guidelines provide concrete recommendations for platform companies and online advertisers, they may also be of interest to policymakers.