6 research outputs found

    I Am Dissolving into Categories and Labels - Agency Affordances for Embedding and Practicing Digital Sovereignty

    Get PDF
    While the notion of digital sovereignty is loaded with a multitude of meanings referring to various actors, values and contexts, this paper is interested in how to actualize individual digital sovereignty. We do so by introducing the concept of agency affordances, which we see as a precondition for achieving digital sovereignty. We understand this notion as the ability to exercise power to, as autonomy and agency for (digital) self-sovereignty, and as power over the infrastructural sovereignty of the privately owned automated decision-making (ADM) systems of digital media platforms. Building our characterization of digital sovereignty on an empirical inquiry into individuals' requirements for agency, our analysis shows that digital sovereignty consists of two distinct but interrelated elements: data sovereignty and algorithmic sovereignty. Enabling practicable digital sovereignty through agency affordances, however, will require going beyond just the technical and extending towards the wider societal (infra)structures. We outline some initial steps on how to achieve that.

    Will social media save the world? : The role of social media in (dis)abling social movements

    No full text
    This paper critically discusses the role of social media in the rise and fall of social movements. It goes beyond the techno-deterministic and optimistic view of social media and offers a different insight into how the very structure and architecture of social media platforms direct and influence the visibility, and thus the “online success”, of social movements. It also discusses the role of the platforms' Terms of Service and use policies. Finally, it outlines the dependency of social movements' success on both offline and online factors and events.

    HOW TO BE ALGORITHMICALLY GOVERNED LIKE THAT: DATA- AND ALGORITHMIC AGENCY FROM USER PERSPECTIVE

    Get PDF
    If users are being governed by algorithms, and companies and regulators are proposing ways for governing algorithms, with this paper we would like to discuss and propose a third type of governance: one where users have agency, control and governing power(s) over algorithmic systems and their outputs. Our main research question is: how do we enable users to actively govern algorithms, instead of passively being governed by them? And what do users need in order to be algorithmically governed in such a way that will enable more agency, autonomy and control when interacting with AI systems and their outputs? Instead of gathering insights in an abstract way, to answer this question we opted for a guided and supportive process where participants were able to reflect on the process and to formulate and elaborate their insights, thoughts, needs and requirements based on their lived experience, i.e., after a real interaction with these algorithmic systems. We conducted participatory technographic research with 47 participants, through a multi-stage process consisting of a survey, Subject Access Requests (Article 15 of the General Data Protection Regulation), purposeful interaction with the transparency tools of seven chosen platforms, and extensive structured research diaries. A quali-quantitative analysis of the insights enabled us to formulate the participants' requirements for data and algorithmic agency in a way that will enable their agency, control and autonomy. These requirements are translatable and implementable at the user-interaction level, via technology design and through regulatory mechanisms.

    Toward a Solid Acceptance of the Decentralized Web of Personal Data: Societal and Technological Convergence

    No full text
    Giving individuals more control of their personal data.

    Increasing Fairness in Targeted Advertising. The Risk of Gender Stereotyping by Job Ad Algorithms

    No full text
    Who gets to see what on the internet? And who decides why? These are among the most crucial questions regarding online communication spaces – and they especially apply to job advertising online. Targeted advertising on online platforms offers advertisers the chance to deliver ads to carefully selected audiences. Yet, optimizing job ads for relevance also carries risks – from problematic gender stereotyping to potential algorithmic discrimination. The winter 2021 Clinic Increasing Fairness in Targeted Advertising: The Risk of Gender Stereotyping by Job Ad Algorithms examined the ethical implications of targeted advertising, with a view to developing feasible, fairness-oriented solutions. The virtual Clinic brought together twelve fellows from six continents and eight disciplines. During two intense weeks in February 2021, they participated in an interdisciplinary solution-oriented process facilitated by a project team at the Alexander von Humboldt Institute for Internet and Society. The fellows also had the chance to learn from and engage with a number of leading experts on targeted advertising, who joined the Clinic for thought-provoking spark sessions. The objective of the Clinic was to produce actionable outputs that contribute to improving fairness in targeted job advertising. To this end, the fellows developed three sets of guidelines – this resulting document – that cover the whole targeted advertising spectrum. While the guidelines provide concrete recommendations for platform companies and online advertisers, they may also be of interest to policymakers
