
    Legal Archetypes and Metadata Collection

    In discussions of state surveillance, the values of privacy and security are often set against one another, and people often ask whether privacy is more important than national security. I will argue that in one sense privacy is more important than national security. Just what “more important” means is its own question, though, so I will be more precise. I will argue that national security rationales cannot by themselves justify some kinds of encroachments on individual privacy (including some kinds that the United States has conducted). Specifically, I turn my attention to a recent, well publicized, and recently amended statute (section 215 of the USA Patriot Act), a surveillance program based on that statute (the National Security Agency’s bulk metadata collection program), and a recent change to that statute that addresses some of the public controversy surrounding the surveillance program (the USA Freedom Act). That process (a statute enabling surveillance, a program abiding by that statute, a public controversy, and a change in the law) looks like a paradigm case of law working as it should; but I am not so sure. While the program was plausibly legal, I will argue that it was morally and legally unjustifiable. Specifically, I will argue that the interpretations of section 215 that supported the program violate what Jeremy Waldron calls “legal archetypes,” and that changes to the law illustrate one of the central features of legal archetypes and of their violation. The paper proceeds as follows: I begin in Part 1 by setting out what I call the “basic argument” in favor of surveillance programs. This is strictly a moral argument about the conditions under which surveillance in the service of national security can be justified. In Part 2, I turn to section 215 and the bulk metadata surveillance program based on that section. I will argue that the program was plausibly legal, though based on an aggressive, envelope-pushing interpretation of the statute. I conclude Part 2 by describing the USA Freedom Act, which amends section 215 in important ways. In Part 3, I change tack. Rather than offering an argument for the conditions under which surveillance is justified (as in Part 1), I use the discussion of the legal interpretations underlying the metadata program to describe a key ambiguity in the basic argument and to explain a distinct concern about the program: specifically, that it undermines a legal archetype. Moreover, while the USA Freedom Act does not violate legal archetypes, and hence meets a condition for justifiability, it helps illustrate why the bulk metadata program did violate archetypes.

    Routing in anonymous networks as a means to prevent traffic analysis

    Traditionally, traffic analysis has been used to measure and keep track of a network's condition: congestion, networking hardware failures, and so on. However, largely due to commercial interests such as targeted advertising, traffic analysis techniques can also be used to identify and track a single user's movements within the Internet. To counteract this perceived breach of privacy and anonymity, several countermeasures have been developed over time, e.g. proxies that obfuscate the true source of traffic, making it harder for others to pinpoint a user's location. Another approach has been the development of so-called anonymous overlay networks: application-level virtual networks running on top of the physical IP network. The core concept is that, by way of encryption and obfuscation of traffic patterns, the users of such anonymous networks gain anonymity and protection against traffic analysis techniques. In this master's thesis we look at how message forwarding and packet routing in IP networks function, and how this is exploited by different analysis techniques to single out a visitor to a website, or simply someone whose message is forwarded through a network device used for traffic analysis. We then discuss some examples of anonymous overlay networks, how well they protect their users from traffic analysis, and how their respective models hold up against traffic analysis attacks by a malicious entity. Finally, we present a case study of the Tor network's popularity, conducted by running a Tor relay node and gathering information on how much data the relay transmits and where that traffic originates. CCS concepts: Security and privacy ~ Privacy protections; Networks ~ Overlay and other logical network structures; Information systems ~ Traffic analysis
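
    The layered-encryption idea behind such overlay networks can be sketched in a few lines. The following Python fragment is an illustration only: XOR one-time pads stand in for the per-hop ciphers a real network such as Tor would use, circuit setup and routing headers are omitted, and the key table exists in one process purely for the simulation. The point it demonstrates is that each relay peels exactly one layer, so no single relay sees both the sender's message and the full path.

        import secrets

        def xor(data: bytes, key: bytes) -> bytes:
            return bytes(a ^ b for a, b in zip(data, key))

        relays = ["relay_A", "relay_B", "relay_C"]   # hypothetical three-hop circuit
        message = b"GET /index.html"
        keys = {r: secrets.token_bytes(len(message)) for r in relays}

        # The sender wraps the message in one layer per relay, innermost layer first.
        onion = message
        for relay in reversed(relays):
            onion = xor(onion, keys[relay])

        # Each relay removes exactly one layer; only the exit recovers the payload.
        for relay in relays:
            onion = xor(onion, keys[relay])
            print(relay, "peeled one layer")

        assert onion == message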

    Privacy protocols

    Security protocols enable secure communication over insecure channels. Privacy protocols enable private interactions over secure channels. Security protocols set up secure channels using cryptographic primitives. Privacy protocols set up private channels using secure channels. But just as some security protocols can be broken without breaking the underlying cryptography, some privacy protocols can be broken without breaking the underlying security. Such privacy attacks have been used to leverage e-commerce against targeted advertising from the outset; but their depth and scope became apparent only with the overwhelming advent of influence campaigns in politics. The blurred boundaries between privacy protocols and privacy attacks present a new challenge for protocol analysis. Covert channels turn out to be concealed not only below overt channels but also above them: subversions and the level-below attacks are supplemented by sublimations and the level-above attacks.
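
    The claim that privacy can be broken without breaking the underlying security admits a compact illustration. The sketch below is my own example, not taken from the paper; the page names, sizes, and cipher overhead are invented. It shows a passive observer identifying which resource was fetched over an encrypted channel purely from ciphertext length, since encryption protects content but not size.

        # Pages a user might fetch, with (invented) plaintext sizes in bytes.
        pages = {
            "/home": 1200,
            "/medical-results": 4831,
            "/checkout": 2210,
        }

        AEAD_OVERHEAD = 28  # stand-in for a real cipher's constant size overhead

        def ciphertext_len(plaintext_len: int) -> int:
            return plaintext_len + AEAD_OVERHEAD

        # The observer sees only the encrypted size on the wire ...
        observed = ciphertext_len(pages["/medical-results"])

        # ... and matches it against the known sizes of candidate pages.
        guess = min(pages, key=lambda p: abs(ciphertext_len(pages[p]) - observed))
        print(guess)  # "/medical-results", recovered without touching the crypto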

    Student Privacy in Learning Analytics: An Information Ethics Perspective

    In recent years, educational institutions have started using the tools of commercial data analytics in higher education. By gathering information about students as they navigate campus information systems, learning analytics “uses analytic techniques to help target instructional, curricular, and support resources” to examine student learning behaviors and change students’ learning environments. As a result, the information educators and educational institutions have at their disposal is no longer demarcated by course content and assessments, and old boundaries between information used for assessment and information about how students live and work are blurring. Our goal in this paper is to provide a systematic discussion of the ways in which privacy and learning analytics conflict and to provide a framework for understanding those conflicts. We argue that there are five crucial issues about student privacy that we must address in order to ensure that whatever the laudable goals and gains of learning analytics, they are commensurate with respecting students’ privacy and associated rights, including (but not limited to) autonomy interests. First, we argue that we must distinguish among different entities with respect to whom students have, or lack, privacy. Second, we argue that we need clear criteria for what information may justifiably be collected in the name of learning analytics. Third, we need to address whether purported consequences of learning analytics (e.g., better learning outcomes) are justified and what the distributions of those consequences are. Fourth, we argue that regardless of how robust the benefits of learning analytics turn out to be, students have important autonomy interests in how information about them is collected. Finally, we argue that it is an open question whether the goods that justify higher education are advanced by learning analytics, or whether collection of information actually runs counter to those goods.

    Big Brother is Listening to You: Digital Eavesdropping in the Advertising Industry

    In the Digital Age, information is more accessible than ever. Unfortunately, that accessibility has come at the expense of privacy. Now, more and more personal information is in the hands of corporations and governments, for uses not known to the average consumer. Although these entities have long been able to keep tabs on individuals, with the advent of virtual assistants and “always-listening” technologies, the ease by which a third party may extract information from a consumer has only increased. The stark reality is that lawmakers have left the American public behind. While other countries have enacted consumer privacy protections, the United States has no satisfactory legal framework in place to curb data collection by greedy businesses or to regulate how those companies may use and protect consumer data. This Article contemplates one use of that data: digital advertising. Inspired by stories of suspiciously well-targeted advertisements appearing on social media websites, this Article additionally questions whether companies have been honest about their collection of audio data. To address the potential harms consumers may suffer as a result of this deficient privacy protection, this Article proposes a framework wherein companies must acquire users’ consent and the government must ensure that businesses do not use consumer information for harmful purposes.

    Privacy as a Public Good

    Privacy is commonly studied as a private good: my personal data is mine to protect and control, and yours is yours. This conception of privacy misses an important component of the policy problem. An individual who is careless with data exposes not only extensive information about herself, but about others as well. The negative externalities imposed on nonconsenting outsiders by such carelessness can be productively studied in terms of welfare economics. If all relevant individuals maximize private benefit, and expect all other relevant individuals to do the same, neoclassical economic theory predicts that society will achieve a suboptimal level of privacy. This prediction holds even if all individuals cherish privacy with the same intensity. As the theoretical literature would have it, the struggle for privacy is destined to become a tragedy. But according to the experimental public-goods literature, there is hope. Like in real life, people in experiments cooperate in groups at rates well above those predicted by neoclassical theory. Groups can be aided in their struggle to produce public goods by institutions, such as communication, framing, or sanction. With these institutions, communities can manage public goods without heavy-handed government intervention. Legal scholarship has not fully engaged this problem in these terms. In this Article, we explain why privacy has aspects of a public good, and we draw lessons from both the theoretical and the empirical literature on public goods to inform the policy discourse on privacy.
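
    The underprovision result the authors invoke can be made concrete with the standard linear public-goods game; the numbers below are illustrative assumptions, not drawn from the Article. When the per-capita return on a contribution is below one, free riding dominates full contribution, even though universal contribution leaves everyone better off than universal defection.

        n, endowment, multiplier = 4, 10.0, 1.6   # per-capita return 1.6/4 = 0.4 < 1

        def payoff(own: float, others_total: float) -> float:
            # Keep what you did not contribute, plus an equal share of the
            # multiplied common pool.
            pool = multiplier * (own + others_total)
            return (endowment - own) + pool / n

        all_contribute = payoff(endowment, endowment * (n - 1))  # 16.0 each
        free_ride = payoff(0.0, endowment * (n - 1))             # 22.0 for the defector
        all_defect = payoff(0.0, 0.0)                            # 10.0 each

        # Defection dominates (22 > 16), yet universal defection (10) is worse
        # than universal contribution (16): privacy is underprovided.
        print(all_contribute, free_ride, all_defect)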

    Catalyzing Privacy Law

    The United States famously lacks a comprehensive federal data privacy law. In the past year, however, over half the states have proposed broad privacy bills or have established task forces to propose possible privacy legislation. Meanwhile, congressional committees are holding hearings on multiple privacy bills. What is catalyzing this legislative momentum? Some believe that Europe’s General Data Protection Regulation (GDPR), which came into force in 2018, is the driving factor. But with the California Consumer Privacy Act (CCPA), which took effect in January 2020, California has emerged as an alternate contender in the race to set the new standard for privacy. Our close comparison of the GDPR and California’s privacy law reveals that the California law is not GDPR-lite: it retains a fundamentally American approach to information privacy. Reviewing the literature on regulatory competition, we argue that California, not Brussels, is catalyzing privacy law across the United States. And what is happening is not a simple story of powerful state actors. It is more accurately characterized as the result of individual networked norm entrepreneurs, influenced and even empowered by data globalization. Our study helps explain the puzzle of why Europe’s data privacy approach failed to spur US legislation for over two decades. Finally, our study answers critical questions of practical interest to individuals (who will protect my privacy?) and to businesses (whose rules should I follow?).

    The Internet of Things Connectivity Binge: What are the Implications?

    Despite wide concern about cyberattacks, outages and privacy violations, most experts believe the Internet of Things will continue to expand successfully over the next few years, tying machines to machines and linking people to valuable resources, services and opportunities.

    Privacy, Public Goods, and the Tragedy of the Trust Commons: A Response to Professors Fairfield and Engel

    User trust is an essential resource for the information economy. Without it, users would not provide their personal information and digital businesses could not operate. Digital companies do not protect this trust sufficiently. Instead, many take advantage of it for short-term gain. They act in ways that, over time, will undermine user trust. In so doing, they act against their own best interest. This Article shows that companies behave this way because they face a tragedy of the commons. When a company takes advantage of user trust for profit, it appropriates the full benefit of this action. However, it shares the cost with all other companies that rely on the wellspring of user trust. Each company, acting rationally, has an incentive to appropriate as much of the trust resource as it can. That is why such companies collect, analyze, and “monetize” our personal information in such an unrestrained way. This behavior poses a longer term risk. User trust is like a fishery. It can withstand a certain level of exploitation and renew itself. But over-exploitation can cause it to collapse. Were digital companies collectively to undermine user trust this would not only hurt the users, it would damage the companies themselves. This Article explores commons-management theory for potential solutions to this impending tragedy of the trust commons.
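
    The fishery analogy can be sketched as a simple renewable-resource model; the growth rate, harvest rates, and horizon below are illustrative assumptions, not drawn from the Article. Trust renews logistically each period and firms harvest a fraction of it. While the harvest rate stays below the maximum renewal rate the stock settles at a sustainable level; above it, the stock collapses.

        def remaining_trust(harvest_rate: float, periods: int = 100) -> float:
            trust, growth, capacity = 1.0, 0.3, 1.0
            for _ in range(periods):
                trust += growth * trust * (1 - trust / capacity)  # renewal
                trust -= harvest_rate * trust                     # exploitation
            return trust

        print(remaining_trust(0.1))  # sustainable: settles near 0.67
        print(remaining_trust(0.5))  # over-exploitation: collapses toward 0.0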