
    Surveillance, big data and democracy: lessons for Australia from the US and UK

    This article argues that current laws are ill-equipped to deal with the multifaceted threats to individual privacy by governments, corporations and our own need to participate in the information society. Introduction In the era of big data, where people find themselves surveilled in ever more finely granulated aspects of their lives, and where the data profiles built from an accumulation of data gathered about themselves and others are used to predict as well as shape their behaviours, the question of privacy protection arises constantly. In this article we interrogate whether the discourse of privacy is sufficient to address this new paradigm of information flow and control. What we confront in this area is a set of practices concerning the collection, aggregation, sharing, interrogation and uses of data on a scale that crosses private and public boundaries, jurisdictional boundaries, and importantly, the boundaries between reality and simulation. The consequences of these practices are emerging as sometimes useful and sometimes damaging to governments, citizens and commercial organisations. Understanding how to regulate this sphere of activity to address the harms, to create an infrastructure of accountability, and to bring more transparency to the practices mentioned, is a challenge of some complexity. Using privacy frameworks may not provide the solutions or protections that ultimately are being sought. This article is concerned with data gathering and surveillance practices, by business and government, and the implications for individual privacy in the face of widespread collection and use of big data. We will firstly outline the practices around data and the issues that arise from such practices. We then consider how courts in the United Kingdom (‘UK’) and the United States (‘US’) are attempting to frame these issues using current legal frameworks, and finish by considering the Australian context. 
Notably, the discourse around privacy protection differs significantly across these jurisdictions, encompassing elements of constitutional rights and freedoms, specific legislative schemes, data protection, anti-terrorist and criminal laws, tort and equity. This lack of a common understanding of what is or should be encompassed within privacy makes it a very fragile creature indeed. On the basis of the exploration of these issues, we conclude that current laws are ill-equipped to deal with the multifaceted threats to individual privacy by governments, corporations and our own need to participate in the information society.

    Algorithmic Jim Crow

    This Article contends that current immigration- and security-related vetting protocols risk promulgating an algorithmically driven form of Jim Crow. Under the “separate but equal” discrimination of a historic Jim Crow regime, state laws required mandatory separation and discrimination on the front end, while purportedly establishing equality on the back end. In contrast, an Algorithmic Jim Crow regime allows for “equal but separate” discrimination. Under Algorithmic Jim Crow, equal vetting and database screening of all citizens and noncitizens will make it appear that fairness and equality principles are preserved on the front end. Algorithmic Jim Crow, however, will enable discrimination on the back end in the form of designing, interpreting, and acting upon vetting and screening systems in ways that result in a disparate impact.

    After Snowden: Regulating Technology-Aided Surveillance in the Digital Age

    Imagine a state that compels its citizens to inform it at all times of where they are, who they are with, what they are doing, who they are talking to, how they spend their time and money, and even what they are interested in. None of us would want to live there. Human rights groups would condemn the state for denying the most basic elements of human dignity and freedom. Student groups would call for boycotts to show solidarity. We would pity the offending state's citizens for their inability to enjoy the rights and privileges we know to be essential to a liberal democracy. The reality, of course, is that this is our state, with one minor wrinkle. The United States does not directly compel us to share all of the above intimate information with it. Instead, it relies on private sector companies to collect it all, and then it takes it from them at will. We consent to share all of this private information with the companies that connect us to the intensely hyperlinked world in which we now live through our smart phones, tablets, and personal computers. Our cell phones constantly apprise the phone company of where we are, as well as with whom we are talking or texting. When we send emails, we share the addressing information, subject line, and content with the internet service provider. When we search the web or read something online, we reveal our interests to the company that runs the search engine. When we purchase anything with a credit card, we pass on that information to the credit card company. In short, we share virtually everything about our lives, much of it intensely personal, with some private company. It is recorded in an easily collected, stored, and analyzed digital form. We do so consensually, at least in theory, because we could choose to live without using the forms of communication that dominate modern existence. But to do so would require cutting oneself off from most of the world as well. That is a high price for privacy.

    The Forgotten Right "to Be Secure"

    Surveillance methods in the United States operate under the general principle that “use precedes regulation.” While the general principle of “use precedes regulation” is widely understood, its societal costs have yet to be fully realized. In the period between “initial use” and “regulation,” government actors can utilize harmful investigative techniques with relative impunity. Assuming a given technique is ultimately subjected to regulation, its preregulation uses are practically exempted from any such regulation due to qualified immunity (for the actor and municipality) and the exclusionary rule’s good faith exception (for any resulting evidence). This expectation of impunity invites strategic government actors to make frequent and arbitrary uses of harmful investigative techniques during preregulation periods. Regulatory delays tend to run long (often a decade or more) and are attributable in no small part to the stalling methods of law enforcement (through assertions of privilege, deceptive funding requests, and strategic sequencing of criminal investigations). While the societal costs of regulatory delay are high, rising, and difficult to control, the conventional efforts to shorten regulatory delays (through expedited legislation and broader rules of Article III standing) have proved ineffective. This Article introduces an alternative method to control the costs of regulatory delay: locating rights to be “protected” and “free from fear” in the “to be secure” text of the Fourth Amendment. Courts and most commentators interpret the Fourth Amendment to safeguard a mere right to be “spared” unreasonable searches and seizures. A study of the “to be secure” text, however, suggests that the Amendment can be read more broadly: to guarantee a right to be “protected” against unreasonable searches and seizures, and possibly a right to be “free from fear” against such government action. 
Support for these broad readings of “to be secure” is found in the original meaning of “secure,” the Amendment’s structure, and founding-era discourse regarding searches and seizures. The rights to be “protected” and “free from fear” can be adequately safeguarded by a judicially-created rule against government “adoption” of an investigative method that constitutes an unregulated and unreasonable search or seizure. The upshot of this Fourth Amendment rule against “adoption” is earlier standing to challenge the constitutionality of concealed investigative techniques. Earlier access to courts invites earlier judicial review.

    Privacy & law enforcement


    The Poverty of Posner's Pragmatism: Balancing Away Liberty After 9/11

    This review of Richard Posner's Not a Suicide Pact: The Constitution in a Time of National Emergency argues that Posner's particular brand of pragmatic utilitarianism is particularly ill-suited to constitutional interpretation, as it seems to negate the very idea of precommitment that is so essential to constitutionalism. Instead, Posner treats the Constitution as little more than an invitation to pragmatic policy judgment, and then employs that judgment through speculative cost-benefit balancing to find constitutionally unobjectionable most of what the Bush Administration has done thus far in the war on terror, including coercive interrogation, incommunicado detention, warrantless wiretapping, and ethnic profiling. Indeed, Posner's Constitution would permit the Administration to go much further than it has; among other things, he defends indefinite preventive detention, banning Islamic extremist rhetoric, mass wiretapping of the entire nation, and making it a crime for newspapers to publish classified information. All of this is permissible, Posner argues, because unless the Constitution bend[s] in the face of threats to our national security, it will break. Ironically, Posner reaches these results with a constitutional theory more in keeping with Chief Justice Earl Warren than Justice Antonin Scalia. Eschewing popular conservative attacks on judicial activism, Posner argues that given the open-ended character of many of the Constitution's most important terms, it is not objectionable, but inevitable, that constitutional law is judge-made. He dismisses the constitutional theories of textualism and originalism favored by many conservative judges and scholars as canards. But having rejected textualism and originalism, Posner proceeds unwittingly to offer a book-length demonstration of what textualists and originalists most fear from constitutional theorists who emphasize the document's open-ended and evolving character. 
In Posner's approach, the Constitution loses almost any sense of a binding precommitment, and is reduced to a cover for judges to impose their own subjective value judgments on others. The review first discusses Posner's analysis of several specific security-liberty issues, in order to illustrate how his method works in concrete scenarios. I then turn to the broader implications his theory has for constitutional law, which in my view are quite dangerous.

    Unlocking the “Virtual Cage” of Wildlife Surveillance

    The electronic surveillance of wildlife has grown more extensive than ever. For instance, thousands of wolves wear collars transmitting signals to wildlife biologists. Some collars inject wolves with tranquilizers that allow for their immediate capture if they stray outside of the boundaries set by anthropocentric management policies. Hunters have intercepted the signals from surveillance collars and have used this information to track and slaughter the animals. While the ostensible reason for the surveillance programs is to facilitate the peaceful coexistence of humanity and wildlife, the reality is less benign: an outdoor version of Bentham’s Panopticon. This Article reconceptualizes the enterprise of wildlife surveillance. Without suggesting that animals have standing to assert constitutional rights, the Article posits a public interest in protecting the privacy of wildlife. The very notion of wildness implies privacy. The law already protects the bodily integrity of animals to some degree, and a protected zone of privacy is penumbral to this core protection, much the same way that human privacy emanates from narrower guarantees against government intrusion. Policy implications follow that are akin to the rules under the Fourth Amendment limiting the government’s encroachment on human privacy. Just as the police cannot install a wiretap without demonstrating a particularized investigative need for which all less intrusive methods would be insufficient, so too should surveillance of wildlife necessitate a specific showing of urgency. A detached, neutral authority should review all applications for electronic monitoring of wildlife. Violations of the rules should result in substantial sanctions. The Article concludes by considering, and refuting, foreseeable objections to heightened requirements for the surveillance of wildlife.

    Catalyzing Privacy Law

    The United States famously lacks a comprehensive federal data privacy law. In the past year, however, over half the states have proposed broad privacy bills or have established task forces to propose possible privacy legislation. Meanwhile, congressional committees are holding hearings on multiple privacy bills. What is catalyzing this legislative momentum? Some believe that Europe’s General Data Protection Regulation (GDPR), which came into force in 2018, is the driving factor. But with the California Consumer Privacy Act (CCPA), which took effect in January 2020, California has emerged as an alternate contender in the race to set the new standard for privacy. Our close comparison of the GDPR and California’s privacy law reveals that the California law is not GDPR-lite: it retains a fundamentally American approach to information privacy. Reviewing the literature on regulatory competition, we argue that California, not Brussels, is catalyzing privacy law across the United States. And what is happening is not a simple story of powerful state actors. It is more accurately characterized as the result of individual networked norm entrepreneurs, influenced and even empowered by data globalization. Our study helps explain the puzzle of why Europe’s data privacy approach failed to spur US legislation for over two decades. Finally, our study answers critical questions of practical interest to individuals (who will protect my privacy?) and to businesses (whose rules should I follow?).

    Stand in the Place Where Data Live: Data Breaches as Article III Injuries

    Every day, another hacker gains unauthorized access to information, be it credit card data from grocery stores or fingerprint records from federal databases. Bad actors who orchestrate these data breaches, if they can be found, face clear criminal liability. Still, a hacker’s conviction may not be satisfying to victims whose data was accessed, and so victims may seek proper redress through lawsuits against compromised organizations. In those lawsuits, plaintiff-victims allege promising theories, including that the compromised organization negligently caused the data breach or broke an implied contract to protect customers’ personal information. However, many federal courts treat a data breach as essentially harmless, holding that data breach plaintiff-victims do not necessarily suffer cognizable legal injuries. In practice, this means that the plaintiffs do not have Article III standing, and courts do not reach merits determinations of fault. Instead, a data breach to these courts is only harmful to the extent that it leads to a subsequent injury, like identity theft or fraud. Therefore, data breach victims must suffer even more harm before they can bring a lawsuit. Other courts under this framework do nonetheless find that data breach plaintiff-victims have standing. However, even those courts still wrongfully check whether the plaintiffs suffered future identity theft, fraud, or other harm. Those courts simply find that such subsequent harm is readily apparent. This Note offers a proper approach to standing in data breach lawsuits. I argue that the moment a victim’s data is exposed without their authorization, they suffer a cognizable common law injury, regardless of whether that data exposure actually causes subsequent harm. Rather than thinking of data breaches as a means to future data misuse, courts should think of data breaches as injurious in and of themselves.