
    Predictive profiling and its legal limits: Effectiveness gone forever

    We examine predictive group profiling in the Big Data context as an instrument of governmental control and regulation. We first define profiling by drawing some useful distinctions (section 6.1). We then discuss examples of predictive group profiling from policing (such as parole prediction methods taken from the US) and combatting fraud (the iCOV and SyRI systems in the Netherlands) (section 6.2). Three potential risks of profiling – the negative impact on privacy; social sorting and discrimination; and opaque decision-making – are discussed in section 6.3. We then turn to the legal framework. Is profiling by governmental agencies adequately framed? Are existing legal checks and balances sufficient to safeguard civil liberties? We discuss the relationship between profiling and the right to privacy (section 6.4) and between profiling and the prohibition of discrimination (section 6.5). The jurisprudence on the right to privacy clearly sets limits to the use of automated and predictive profiling. Profiling and data screening that interfere indiscriminately with the privacy of large parts of the population are disproportionate. Applications need some link to concrete facts to be legitimate. An additional role is played by the prohibition of discrimination, which requires strengthening through the development of audit tools and discrimination-aware algorithms. We then discuss current safeguards in Dutch administrative, criminal procedure and data protection law (section 6.6), and observe a trend of weakening safeguards at the very moment when they should be applied with even more rigor. In our conclusion, we point to the tension between profiling and legal safeguards. These safeguards remain important and need to be overhauled to make them effective again.
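
    The call for audit tools and discrimination-aware algorithms can be made concrete with a small audit metric. The sketch below — with hypothetical data and function names, not taken from the chapter itself — computes the disparate impact ratio of a profiling system's flagging decisions across a protected attribute, in the spirit of the "four-fifths rule" used in discrimination auditing.

```python
# A minimal sketch of an audit tool for group profiling: the ratio of
# the lowest to the highest per-group selection rate. Data and the
# 0.8 benchmark are illustrative assumptions.
from collections import defaultdict

def disparate_impact_ratio(records, group_key, flagged_key):
    """Return (min rate / max rate, per-group rates). Values below 0.8
    are conventionally treated as evidence of adverse impact."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        total[rec[group_key]] += 1
        flagged[rec[group_key]] += bool(rec[flagged_key])
    rates = {g: flagged[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical screening outcomes from a fraud-detection profile.
decisions = [
    {"group": "A", "flagged": True},  {"group": "A", "flagged": False},
    {"group": "A", "flagged": False}, {"group": "A", "flagged": False},
    {"group": "B", "flagged": True},  {"group": "B", "flagged": True},
    {"group": "B", "flagged": False}, {"group": "B", "flagged": False},
]

ratio, rates = disparate_impact_ratio(decisions, "group", "flagged")
print(rates)           # {'A': 0.25, 'B': 0.5}
print(f"{ratio:.2f}")  # 0.50 -- well below the 0.8 benchmark
```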

    Examined Lives: Informational Privacy and the Subject as Object

    In the United States, proposals for informational privacy have proved enormously controversial. On a political level, such proposals threaten powerful data processing interests. On a theoretical level, data processors and other data privacy opponents argue that imposing restrictions on the collection, use, and exchange of personal data would ignore established understandings of property, limit individual freedom of choice, violate principles of rational information use, and infringe data processors' freedom of speech. In this article, Professor Julie Cohen explores these theoretical challenges to informational privacy protection. She concludes that categorical arguments from property, choice, truth, and speech lack weight, and mask fundamentally political choices about the allocation of power over information, cost, and opportunity. Each debate, although couched in a rhetoric of individual liberty, effectively reduces individuals to objects of choices and trades made by others. Professor Cohen argues, instead, that the debate about data privacy protection should be grounded in an appreciation of the conditions necessary for individuals to develop and exercise autonomy in fact, and that meaningful autonomy requires a degree of freedom from monitoring, scrutiny, and categorization by others. The article concludes by calling for the design of both legal and technological tools for strong data privacy protection.

    The Difference Prevention Makes: Regulating Preventive Justice

    Since the terrorist attacks of September 11, 2001, the United States and many other countries have adopted a "paradigm of prevention," employing a range of measures in an attempt to prevent future terrorist attacks. This includes the use of pretextual charges for preventive detention, the expansion of criminal liability to prohibit conduct that precedes terrorism, and the expansion of surveillance at home and abroad. Politicians and government officials often speak of prevention as if it is an unqualified good. Everyone wants to prevent the next terrorist attack, after all. And many preventive initiatives, especially where they are not coercive and do not intrude on liberty, are welcome. But the move to a "preventive justice" model also creates potential for significant abuse. These risks suggest that we should be cautious about adopting preventive approaches, especially where they involve coercion. In part I of this essay, I articulate why preventive coercion is a problem. I respond, in particular, to a recent essay by Fred Schauer, "The Ubiquity of Prevention," which argued that "it is a mistake to assume that preventive justice is a problem in itself [because] preventive justice is all around us, and it is hard to imagine a functioning society that could avoid it." In part II, I outline the formal constitutional and other constraints that are implicated by preventive measures in the United States, and I demonstrate that these constraints play a relatively small role in the actual operation of preventive measures. In part III, I maintain that informal constraints may actually play a more significant operational role in checking the abuses of prevention.

    Quantum surveillance and 'shared secrets'. A biometric step too far? CEPS Liberty and Security in Europe, July 2010

    It is no longer sensible to regard biometrics as having neutral socio-economic, legal and political impacts. Newer-generation biometrics are fluid and include behavioural and emotional data that can be combined with other data. Therefore, a range of issues needs to be reviewed in light of the increasing privatisation of ‘security’ that escapes effective, democratic parliamentary and regulatory control and oversight at national, international and EU levels, argues Juliet Lodge, Professor and co-Director of the Jean Monnet European Centre of Excellence at the University of Leeds, UK.

    Algorithmic Jim Crow

    This Article contends that current immigration- and security-related vetting protocols risk promulgating an algorithmically driven form of Jim Crow. Under the “separate but equal” discrimination of a historic Jim Crow regime, state laws required mandatory separation and discrimination on the front end, while purportedly establishing equality on the back end. In contrast, an Algorithmic Jim Crow regime allows for “equal but separate” discrimination. Under Algorithmic Jim Crow, equal vetting and database screening of all citizens and noncitizens will make it appear that fairness and equality principles are preserved on the front end. Algorithmic Jim Crow, however, will enable discrimination on the back end in the form of designing, interpreting, and acting upon vetting and screening systems in ways that result in a disparate impact.
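
    The "equal but separate" mechanism can be illustrated numerically: a single vetting rule applied identically to everyone can still burden groups very differently on the back end. The sketch below uses invented scores and group labels purely for illustration; it is not drawn from the Article.

```python
# A minimal sketch: one uniform vetting threshold (equal on the front
# end) producing sharply different false positive rates by group
# (disparate impact on the back end). All numbers are hypothetical.

def false_positive_rate(scores, is_threat, threshold):
    """Share of non-threats flagged at a given uniform threshold."""
    innocent = [(s >= threshold) for s, t in zip(scores, is_threat) if not t]
    return sum(innocent) / len(innocent)

# Hypothetical vetting scores; nobody in either group is a threat.
group_a_scores = [0.2, 0.3, 0.4, 0.5]
group_b_scores = [0.5, 0.6, 0.7, 0.8]
threshold = 0.55  # the same rule for everyone

fpr_a = false_positive_rate(group_a_scores, [False] * 4, threshold)
fpr_b = false_positive_rate(group_b_scores, [False] * 4, threshold)
print(fpr_a, fpr_b)  # 0.0 vs 0.75: equal rule, unequal burden
```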

    Designing the Health-related Internet of Things: Ethical Principles and Guidelines

    The conjunction of wireless computing, ubiquitous Internet access, and the miniaturisation of sensors has opened the door for technological applications that can monitor health and well-being outside of formal healthcare systems. The health-related Internet of Things (H-IoT) increasingly plays a key role in health management by providing real-time tele-monitoring of patients, testing of treatments, actuation of medical devices, and fitness and well-being monitoring. Given its numerous applications and proposed benefits, adoption by medical and social care institutions and consumers may be rapid. However, a host of ethical concerns are also raised that must be addressed. The inherent sensitivity of health-related data being generated and the latent risks of Internet-enabled devices pose serious challenges. Users, already in a vulnerable position as patients, face a seemingly impossible task to retain control over their data due to the scale, scope and complexity of systems that create, aggregate, and analyse personal health data. In response, the H-IoT must be designed to be technologically robust and scientifically reliable, while also remaining ethically responsible, trustworthy, and respectful of user rights and interests. To assist developers of the H-IoT, this paper describes nine principles and nine guidelines for the ethical design of H-IoT devices and data protocols.
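
    One way such design principles become concrete is at the data-protocol level. The sketch below is a minimal illustration under stated assumptions — hypothetical names and deliberately naive key handling, not the paper's guidelines — of pseudonymising the device identifier with a keyed hash and transmitting summary statistics rather than raw sensor readings, so that less sensitive data leaves the device.

```python
# A minimal sketch of a privacy-respecting H-IoT telemetry payload:
# keyed-hash pseudonym plus aggregates instead of raw readings.
# A real deployment needs proper key management and rotation.
import hmac, hashlib, json, statistics

SECRET_KEY = b"replace-with-managed-key"  # hypothetical placeholder

def pseudonymise(device_id: str) -> str:
    """Keyed hash so the backend never sees the raw identifier."""
    return hmac.new(SECRET_KEY, device_id.encode(), hashlib.sha256).hexdigest()

def summarise(readings: list[float]) -> dict:
    """Send summary statistics instead of the raw time series."""
    return {"mean": statistics.mean(readings),
            "max": max(readings),
            "n": len(readings)}

payload = {"subject": pseudonymise("wearable-0042"),
           "heart_rate": summarise([62, 64, 71, 68, 90])}
print(json.dumps(payload))  # only a pseudonym and aggregates leave the device
```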

    Eavesdropping Whilst You're Shopping: Balancing Personalisation and Privacy in Connected Retail Spaces

    Physical retailers, who once led the way in tracking with loyalty cards and `reverse appends', now lag behind online competitors. Yet we might be seeing these tables turn, as many increasingly deploy technologies ranging from simple sensors to advanced emotion-detection systems, even enabling them to tailor prices and shopping experiences on a per-customer basis. Here, we examine these in-store tracking technologies in the retail context, and evaluate them from both technical and regulatory standpoints. We first introduce the relevant technologies in context, before considering privacy impacts, the current remedies individuals might seek through technology and the law, and those remedies' limitations. To illustrate challenging tensions in this space we consider the feasibility of technical and legal approaches to both a) the recent `Go' store concept from Amazon, which requires fine-grained, multi-modal tracking to function as a shop, and b) current challenges in opting in or out of increasingly pervasive passive Wi-Fi tracking. The `Go' store presents significant challenges: its legality in Europe is significantly unclear, and unilateral technical measures to avoid biometric tracking are likely ineffective. In the case of MAC addresses, we see a difficult-to-reconcile clash between privacy-as-confidentiality and privacy-as-control, and suggest a technical framework which might help balance the two. Significant challenges exist when seeking to balance personalisation with privacy, and researchers must work together, including across the boundaries of preferred privacy definitions, to come up with solutions that draw on both technology and legal frameworks to provide effective and proportionate protection. Retailers, simultaneously, must ensure that their tracking is not just legal, but worthy of the trust of concerned data subjects.

    Comment: 10 pages, 1 figure. Proceedings of the PETRAS/IoTUK/IET Living in the Internet of Things Conference, London, United Kingdom, 28-29 March 2018
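
    The tension between privacy-as-confidentiality and privacy-as-control in passive Wi-Fi tracking can be sketched in code. The example below is an assumption-laden illustration, not the authors' proposed framework: it counts unique devices from probe-request MAC addresses using a daily-rotating salted hash, so footfall can be measured without retaining identifiers that are linkable across days.

```python
# A minimal sketch of footfall counting without storing raw MACs:
# hash each observed MAC with a salt that rotates daily. The salt
# scheme and function names are illustrative assumptions.
import hashlib
from datetime import date

def daily_pseudonym(mac: str, secret: bytes) -> str:
    """Salted hash tied to the current day: one device collapses to one
    token within a day, but tokens cannot be linked across days."""
    salt = secret + date.today().isoformat().encode()
    return hashlib.sha256(salt + mac.lower().encode()).hexdigest()[:16]

seen: set[str] = set()

def record_probe(mac: str, secret: bytes = b"rotate-me") -> int:
    """Ingest one observed MAC; return the day's unique-device count."""
    seen.add(daily_pseudonym(mac, secret))
    return len(seen)

record_probe("AA:BB:CC:DD:EE:01")
record_probe("AA:BB:CC:DD:EE:01")         # repeat visit, not double-counted
print(record_probe("AA:BB:CC:DD:EE:02"))  # 2
# Caveat: modern phones randomise probe MACs, which limits this
# technique and feeds the opt-in/opt-out tension discussed above.
```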

    Surveillance, big data and democracy: lessons for Australia from the US and UK

    In the era of big data, where people find themselves surveilled in ever more finely granulated aspects of their lives, and where the data profiles built from an accumulation of data gathered about themselves and others are used to predict as well as shape their behaviours, the question of privacy protection arises constantly. In this article we interrogate whether the discourse of privacy is sufficient to address this new paradigm of information flow and control. What we confront in this area is a set of practices concerning the collection, aggregation, sharing, interrogation and use of data on a scale that crosses private and public boundaries, jurisdictional boundaries and, importantly, the boundaries between reality and simulation. The consequences of these practices are emerging as sometimes useful and sometimes damaging to governments, citizens and commercial organisations. Understanding how to regulate this sphere of activity to address the harms, to create an infrastructure of accountability, and to bring more transparency to the practices mentioned is a challenge of some complexity. Privacy frameworks may not provide the solutions or protections ultimately being sought. This article is concerned with data-gathering and surveillance practices by business and government, and the implications for individual privacy in the face of widespread collection and use of big data. We first outline the practices around data and the issues that arise from them. We then consider how courts in the United Kingdom (‘UK’) and the United States (‘US’) are attempting to frame these issues using current legal frameworks, and finish by considering the Australian context. Notably, the discourse around privacy protection differs significantly across these jurisdictions, encompassing elements of constitutional rights and freedoms, specific legislative schemes, data protection, anti-terrorist and criminal laws, tort and equity. This lack of a common understanding of what is or should be encompassed within privacy makes it a very fragile creature indeed. On the basis of this exploration, we conclude that current laws are ill-equipped to deal with the multifaceted threats to individual privacy posed by governments, corporations and our own need to participate in the information society.

    Online Personal Data Processing and EU Data Protection Reform. CEPS Task Force Report, April 2013

    This report sheds light on the fundamental questions and underlying tensions between current policy objectives, compliance strategies and global trends in online personal data processing, assessing the existing and future framework in terms of effective regulation and public policy. Based on the discussions among the members of the CEPS Digital Forum and independent research carried out by the rapporteurs, policy conclusions are derived with the aim of making EU data protection policy more fit for purpose in today’s online technological context. This report constructively engages with the EU data protection framework, but does not provide a textual analysis of the EU data protection reform proposal as such.

    Risk assessment tools in criminal justice and forensic psychiatry: The need for better data

    Violence risk assessment tools are increasingly used within criminal justice and forensic psychiatry; however, there is little relevant, reliable and unbiased data regarding their predictive accuracy. We argue that such data are needed to (i) prevent excessive reliance on risk assessment scores, (ii) allow matching of different risk assessment tools to different contexts of application, (iii) protect against problematic forms of discrimination and stigmatisation, and (iv) ensure that contentious demographic variables are not prematurely removed from risk assessment tools.
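
    What "relevant, reliable and unbiased data" on predictive accuracy might look like can be sketched concretely: validation outcomes evaluated per subgroup rather than only in aggregate. The example below uses hypothetical scores and outcomes with a rank-based AUC estimate; it is an illustration, not the authors' method.

```python
# A minimal sketch: estimate a risk tool's discrimination (AUC)
# separately for each subgroup, so reliance on scores can be matched
# to demonstrated accuracy. All data points are hypothetical.
from itertools import product

def auc(scores, outcomes):
    """Probability that a reoffender outscores a non-reoffender
    (ties count half): a rank-based AUC estimate."""
    pos = [s for s, y in zip(scores, outcomes) if y]
    neg = [s for s, y in zip(scores, outcomes) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Hypothetical validation data: (risk score, reoffended, subgroup)
data = [(0.9, True, "A"), (0.7, False, "A"), (0.6, True, "A"), (0.2, False, "A"),
        (0.8, True, "B"), (0.8, False, "B"), (0.5, True, "B"), (0.4, False, "B")]

for g in ("A", "B"):
    sub = [(s, y) for s, y, grp in data if grp == g]
    print(g, auc([s for s, _ in sub], [y for _, y in sub]))
# A: 0.75, B: 0.625 -- diverging subgroup accuracy would caution
# against uniform reliance on the tool across contexts.
```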