
    The GDPR’s Rules on Data Breaches: Analysing Their Rationales and Effects

    The General Data Protection Regulation (GDPR) requires an organisation that suffers a data breach to notify the competent Data Protection Authority. The organisation must also inform the relevant individuals when a data breach threatens their rights and freedoms. This paper focuses on the following question: given the goals of the GDPR's data breach notification obligation, what are its strengths and weaknesses? We identify six goals of, or rationales for, the GDPR's data breach notification obligation, and we assess the obligation in the light of those goals. We refer to insights from information security and economics, and present them in a reader-friendly way for lawyers. Our main conclusion is that the GDPR's data breach rules are likely to contribute to those goals. For instance, the data breach notification obligation can nudge organisations towards better security; such an obligation enables regulators to perform their duties; and such an obligation improves transparency and accountability. However, the paper also warns that we should not have unrealistic expectations of people's ability to protect their interests after a data breach notice. Likewise, we should not have high expectations of people switching to other service providers after receiving a data breach notification. Lastly, the paper calls for Data Protection Authorities to publish more information about reported data breaches. Such information can help with the analysis of security threats.

    Open Data, Privacy and Fair Information Principles: Towards a Balancing Framework

    Open data are held to contribute to a wide variety of social and political goals, including strengthening transparency, public participation and democratic accountability, promoting economic growth and innovation, and enabling greater public sector efficiency and cost savings. However, releasing government data that contain personal information may threaten privacy and related rights and interests. In this paper we ask how these privacy interests can be respected without unduly hampering the benefits of disclosing public sector information. We propose a balancing framework to help public authorities address this question in different contexts. The framework takes into account different levels of privacy risk for different types of data. It also separates decisions about access and re-use, and highlights a range of different disclosure routes. A circumstance catalogue lists factors that might be considered when assessing whether, under which conditions, and how a dataset can be released. While open data remain an important route for the publication of government information, we conclude that they are not the only route, and that there must be clear and robust public interest arguments to justify the disclosure of personal information as open data.

    Fairness and Bias in Algorithmic Hiring

    Employers are adopting algorithmic hiring technology throughout the recruitment pipeline. Algorithmic fairness is especially relevant in this domain because of its high stakes and structural inequalities. Unfortunately, most work in this space offers only partial treatment, often constrained by two competing narratives: an optimistic one focused on replacing biased recruiter decisions, and a pessimistic one pointing to the automation of discrimination. Whether, and more importantly what types of, algorithmic hiring can be less biased and more beneficial to society than low-tech alternatives currently remains unanswered, to the detriment of trustworthiness. This multidisciplinary survey caters to practitioners and researchers with a balanced and integrated coverage of systems, biases, measures, mitigation strategies, datasets, and legal aspects of algorithmic hiring and fairness. Our work supports a contextualized understanding and governance of this technology by highlighting current opportunities and limitations, and by providing recommendations for future work to ensure shared benefits for all stakeholders.
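
    The fairness "measures" the survey covers can be made concrete with a small example. The sketch below is ours, not the paper's: it computes two common group-fairness measures, the demographic parity difference and the disparate impact ratio, on invented screening outcomes (all data and names are hypothetical).

        # Illustrative sketch (not from the survey): two common group-fairness
        # measures computed on hypothetical hiring decisions. All data is invented.

        def selection_rate(decisions):
            """Fraction of candidates who received a positive decision (1)."""
            return sum(decisions) / len(decisions)

        # Hypothetical screening outcomes (1 = advanced to interview, 0 = rejected),
        # split by a protected attribute such as gender.
        group_a = [1, 0, 1, 1, 0, 1, 0, 1]
        group_b = [0, 0, 1, 0, 1, 0, 0, 0]

        # Demographic parity difference: the gap in selection rates between
        # groups; 0 means both groups are selected at the same rate.
        dp_diff = selection_rate(group_a) - selection_rate(group_b)
        print(f"Demographic parity difference: {dp_diff:.2f}")  # 0.38

        # Disparate impact ratio, as in the US "four-fifths rule": adverse
        # impact is flagged when one group's selection rate falls below 80%
        # of the other's.
        di_ratio = selection_rate(group_b) / selection_rate(group_a)
        print(f"Disparate impact ratio: {di_ratio:.2f} (flagged if < 0.80)")  # 0.40

    Whether such aggregate measures capture the harms the survey discusses is itself contested; they are shown here only to anchor the terminology.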

    New Data Security Requirements and the Proceduralization of Mass Surveillance Law after the European Data Retention Case

    This paper discusses the regulation of mass metadata surveillance in Europe through the lens of the landmark judgment in which the Court of Justice of the European Union struck down the Data Retention Directive. The controversial directive obliged telecom and Internet access providers in Europe to retain metadata of all their customers for intelligence and law enforcement purposes, for a period of up to two years. In the ruling, the Court declared the directive in violation of the human rights to privacy and data protection. The Court also confirmed that the mere collection of metadata interferes with the human right to privacy. In addition, the Court developed three new criteria for assessing the level of data security required from a human rights perspective: security measures should take into account the risk of unlawful access to data, and the data's quantity and sensitivity. While organizations that campaigned against the directive have welcomed the ruling, we warn of the risk of proceduralization of mass surveillance law. The Court did not fully condemn mass surveillance that relies on metadata, but left open the possibility of such surveillance if policymakers lay down sufficient procedural safeguards. Such proceduralization brings systematic risks for human rights. Government agencies, with ample resources, can design complicated systems of procedural oversight for mass surveillance, and then claim that the surveillance is lawful even if it affects millions of innocent people.

    Informed Consent: We Can Do Better to Defend Privacy

    Informed consent as a means to protect privacy is flawed, especially when considering the privacy problems of behavioral targeting. Policymakers should pay more attention to a combined approach that both protects and empowers individuals.

    Protected Grounds and the System of Non-Discrimination Law in the Context of Algorithmic Decision-Making and Artificial Intelligence

    Algorithmic decision-making and similar types of artificial intelligence (AI) may lead to improvements in all sectors of society but can also have discriminatory effects. While current non-discrimination law offers people some protection, algorithmic decision-making presents lawmakers and law enforcement with several challenges. For instance, algorithms can generate new categories of people based on seemingly innocuous characteristics, such as web browser preference or apartment number, or more complicated categories combining many data points. Such new types of differentiation could evade non-discrimination law, as browser type and house number are not protected characteristics, but the differentiation could still be unfair, for instance if it reinforces social inequality. This paper attempts to determine which system of non-discrimination law would best apply to algorithmic decision-making, considering that algorithms can differentiate on the basis of characteristics that do not correlate with protected grounds of discrimination such as ethnicity or gender. The paper analyses the current loopholes in the protection offered by non-discrimination law and explores the best way for lawmakers to approach algorithmic differentiation. While we focus on Europe, this paper's concentration on concept and theory rather than specific application should prove useful for scholars and policymakers from other regions as they encounter similar problems with algorithmic decision-making.
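
    The paper's observation that algorithms can differentiate on seemingly innocuous characteristics can be illustrated with a toy scorer. The sketch below is ours and entirely hypothetical (invented data, invented features): it keys only on browser choice and apartment number, neither of which is a protected ground, yet its decisions can still track social inequality if those features happen to correlate with it.

        # Illustrative sketch (ours, not the paper's): a toy credit scorer that
        # differentiates on non-protected characteristics. All data is invented.

        # Past applicants: (browser, apartment_number, repaid_loan)
        history = [
            ("chrome", 12, 1), ("chrome", 3, 1), ("firefox", 101, 0),
            ("firefox", 98, 0), ("chrome", 7, 1), ("firefox", 110, 0),
        ]

        def repayment_rate(match):
            """Share of past applicants satisfying `match` who repaid."""
            rows = [r for r in history if match(r)]
            return sum(r[2] for r in rows) / len(rows) if rows else 0.0

        def score(browser, apartment):
            # The rule uses only browser choice and whether the apartment
            # number is high -- neither is a protected characteristic, but
            # high numbers may correlate with large social-housing blocks.
            by_browser = repayment_rate(lambda r: r[0] == browser)
            by_block = repayment_rate(lambda r: (r[1] > 50) == (apartment > 50))
            return (by_browser + by_block) / 2

        print(score("chrome", 5))     # 1.0: favoured without citing any protected ground
        print(score("firefox", 104))  # 0.0: disadvantaged the same way

    Because no protected characteristic appears anywhere in the rule, differentiation of this kind can slip past current non-discrimination law while still being unfair, which is exactly the loophole the paper analyses.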