    Fairness and Bias in Algorithmic Hiring

    Employers are adopting algorithmic hiring technology throughout the recruitment pipeline. Algorithmic fairness is especially applicable in this domain due to its high stakes and structural inequalities. Unfortunately, most work in this space provides partial treatment, often constrained by two competing narratives: an optimistic one focused on replacing biased recruiter decisions, and a pessimistic one pointing to the automation of discrimination. Whether, and more importantly what types of, algorithmic hiring can be less biased and more beneficial to society than low-tech alternatives remains unanswered, to the detriment of trustworthiness. This multidisciplinary survey caters to practitioners and researchers with balanced and integrated coverage of systems, biases, measures, mitigation strategies, datasets, and legal aspects of algorithmic hiring and fairness. Our work supports a contextualized understanding and governance of this technology by highlighting current opportunities and limitations and by providing recommendations for future work, to ensure shared benefits for all stakeholders.
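    One of the measures within the survey's scope can be made concrete: the adverse impact ratio, which divides the selection rate of a protected group by that of a reference group. Under the US EEOC "four-fifths" rule of thumb, a ratio below 0.8 is commonly read as a sign of adverse impact in hiring. The Python below is a minimal sketch of this standard measure, not code from the survey itself; the function name and example data are hypothetical.

    # Minimal sketch of the adverse impact ratio, a standard hiring-fairness
    # measure. Function name and example data are hypothetical illustrations.
    def adverse_impact_ratio(decisions, groups, protected, reference):
        """Selection rate of `protected` divided by selection rate of `reference`."""
        def selection_rate(label):
            # 0/1 hiring decisions for candidates belonging to `label`
            outcomes = [d for d, g in zip(decisions, groups) if g == label]
            return sum(outcomes) / len(outcomes)
        return selection_rate(protected) / selection_rate(reference)

    # Hypothetical data: 2 of 10 group-"a" candidates selected vs. 5 of 10 in "b".
    decisions = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0] + [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]
    groups = ["a"] * 10 + ["b"] * 10
    print(adverse_impact_ratio(decisions, groups, "a", "b"))  # 0.4, below the 0.8 threshold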

    New Data Security Requirements and the Proceduralization of Mass Surveillance Law after the European Data Retention Case

    This paper discusses the regulation of mass metadata surveillance in Europe through the lens of the landmark judgment in which the Court of Justice of the European Union struck down the Data Retention Directive. The controversial directive obliged telecom and Internet access providers in Europe to retain metadata on all their customers for intelligence and law enforcement purposes, for a period of up to two years. In the ruling, the Court declared the directive in violation of the human rights to privacy and data protection. The Court also confirmed that the mere collection of metadata interferes with the human right to privacy. In addition, the Court developed three new criteria for assessing the level of data security required from a human rights perspective: security measures should take into account (1) the risk of unlawful access to the data, (2) the data's quantity, and (3) the data's sensitivity. While organizations that campaigned against the directive have welcomed the ruling, we warn of the risk of proceduralization of mass surveillance law. The Court did not fully condemn mass surveillance that relies on metadata, but left open the possibility of such surveillance if policymakers lay down sufficient procedural safeguards. Such proceduralization brings systemic risks for human rights. Government agencies, with ample resources, can design complicated systems of procedural oversight for mass surveillance and then claim that the surveillance is lawful, even if it affects millions of innocent people.

    Does everyone have a price? Understanding people’s attitude towards online and offline price discrimination

    Online stores can present a different price to each customer. Such algorithmic personalised pricing can lead to advanced forms of price discrimination based on the characteristics and behaviour of individual consumers. We conducted two consumer surveys among representative samples of the Dutch population (N=1233 and N=1202) to analyse consumer attitudes towards a range of examples of price discrimination and dynamic pricing. A vast majority finds online price discrimination unfair and unacceptable, and thinks it should be banned. However, some pricing strategies that companies have used for decades are almost equally unpopular. We analyse the results to better understand why people dislike many types of price discrimination.

    Short paper: initial recommendations for the design of privacy management tools for smartphones

    The continuing rise in the popularity of smartphones has led to an accompanying rise in users' exposure to privacy threats, such as the unintended leakage of personal information from apps. To improve transparency and users' ability to control data leakage, the design of privacy-enhancing tools that reduce the burden of informed privacy decisions should be grounded in users' tacit needs and preferences. To this end, the present study explores users' perceptions of and concerns about privacy, as well as their expectations. Initial recommendations include: (1) taking into account users' preference for preserving the functionality of their apps; and informing users about both (2) the real benefits and practical feasibility of privacy management tools and (3) apps' suspected data-collection behaviours, in a way that matches users' real concerns and values.