4 research outputs found

    Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest

    Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse today. The opaque nature of the algorithms these platforms use to curate content raises societal questions. Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes. However, existing auditing methods face fundamental limitations because they function independently of the platforms. Concerns about potential harm have prompted proposed legislation in both the U.S. and the E.U. that would mandate a new form of auditing in which vetted external researchers get privileged access to social media platforms. Unfortunately, to date there have been no concrete technical proposals for such auditing, because auditing at scale risks disclosing users' private data and platforms' proprietary algorithms. We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation. Our first contribution is to enumerate the challenges that existing auditing methods face in implementing these policies at scale. Second, we argue that limited, privileged access to relevance estimators is the key to enabling generalizable platform-supported auditing by external researchers. Third, we show that platform-supported auditing need not risk user privacy or disclosure of platforms' business interests, by proposing an auditing framework that protects against these risks. For a particular fairness metric, we show that ensuring privacy imposes only a small constant-factor increase (6.34x as an upper bound, and 4x for typical parameters) in the number of samples required for accurate auditing. Our technical contributions, combined with ongoing legal and policy efforts, can enable public oversight of how social media platforms affect individuals and society by moving past the privacy-vs-transparency hurdle.
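    The practical meaning of the constant-factor result above can be sketched with simple arithmetic: whatever sample size a non-private audit needs, the private audit needs at most 6.34x as many samples (about 4x for typical parameters). The baseline sample-size formula below is a hypothetical Hoeffding-style bound used purely for illustration; only the 6.34x and 4x factors come from the abstract.

    ```python
    import math

    def baseline_samples(err: float, delta: float) -> int:
        """Hypothetical Hoeffding-style bound: samples needed to estimate a
        bounded fairness metric within +/- err with failure probability delta.
        (Illustrative only; not the paper's actual bound.)"""
        return math.ceil(math.log(2 / delta) / (2 * err ** 2))

    # Overhead factors reported in the abstract for a particular fairness metric.
    PRIVACY_OVERHEAD_UPPER = 6.34    # worst-case upper bound
    PRIVACY_OVERHEAD_TYPICAL = 4.0   # typical parameters

    n = baseline_samples(0.05, 0.01)
    print(f"non-private audit: {n} samples")
    print(f"private audit (typical): {math.ceil(PRIVACY_OVERHEAD_TYPICAL * n)} samples")
    print(f"private audit (worst case): {math.ceil(PRIVACY_OVERHEAD_UPPER * n)} samples")
    ```

    The point of the example is that the overhead is a fixed multiplier, not a function of scale: doubling the audit's precision target changes the baseline, but the private audit stays within the same constant factor of it.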

    Increasing Fairness in Targeted Advertising. The Risk of Gender Stereotyping by Job Ad Algorithms

    No full text
    Who gets to see what on the internet? And who decides why? These are among the most crucial questions regarding online communication spaces – and they especially apply to job advertising online. Targeted advertising on online platforms offers advertisers the chance to deliver ads to carefully selected audiences. Yet optimizing job ads for relevance also carries risks – from problematic gender stereotyping to potential algorithmic discrimination. The winter 2021 Clinic "Increasing Fairness in Targeted Advertising: The Risk of Gender Stereotyping by Job Ad Algorithms" examined the ethical implications of targeted advertising, with a view to developing feasible, fairness-oriented solutions. The virtual Clinic brought together twelve fellows from six continents and eight disciplines. During two intense weeks in February 2021, they participated in an interdisciplinary, solution-oriented process facilitated by a project team at the Alexander von Humboldt Institute for Internet and Society. The fellows also had the chance to learn from and engage with a number of leading experts on targeted advertising, who joined the Clinic for thought-provoking spark sessions. The objective of the Clinic was to produce actionable outputs that contribute to improving fairness in targeted job advertising. To this end, the fellows developed three sets of guidelines – this resulting document – that cover the whole targeted advertising spectrum. While the guidelines provide concrete recommendations for platform companies and online advertisers, they may also be of interest to policymakers.
