
    Fourth Amendment Fiduciaries

    Fourth Amendment law is sorely in need of reform. To paraphrase Justice Sotomayor’s concurrence in United States v. Jones, the idea that people have no expectation of privacy in information voluntarily shared with third parties—the foundation of the widely reviled “third-party doctrine”—makes little sense in the digital age. In truth, however, it is not just the third-party doctrine that needs retooling today. It is the Fourth Amendment’s general approach to the problem of “shared information.” Under existing law, if A shares information with B, A runs the risk of “misplaced trust”—the risk that B will disclose the information to law enforcement. Although the misplaced trust rule makes sense as a default, it comes under strain in cases where A and B have no relationship of trust and the only reason that A shares information with B is to obtain a socially valuable (and practically indispensable) service. In such cases, I argue that the doctrine should treat B as an “information fiduciary” and analyze B’s cooperation with law enforcement—whether voluntary or compelled—as a Fourth Amendment search. The argument develops in three parts. Part I demonstrates that the Court has already identified two settings—if only implicitly—where fiduciary-style protections are necessary to safeguard constitutional privacy: medical care and hotels. When A is a patient and B is a doctor, and, likewise, when A is a guest and B is a hotel manager, the Court has been reluctant to apply the “misplaced trust” rule. Rightly so: the principle is mismatched to the underlying relationship. From there, Part II fleshes out the normative argument. Put simply, we do not “trust” information fiduciaries, in the everyday sense, at all. So it makes little sense—normatively, or even semantically—to speak of trust being “misplaced.” Rather, the information is held for the benefit of the sharing party, and its use should be constrained by implied duties of care and loyalty. Finally, Part III lays the groundwork for determining who counts as a “Fourth Amendment fiduciary.” The Article concludes by exploring various practical metrics that courts might adopt to answer this question.

    Magic Words

    Broadly speaking, this Article has two goals. The first is to demonstrate the prominence of functionalism in the interpretive practices of the Supreme Court. Reading a case like NFIB, it would be easy to conclude that the tension between labels and function reflects a deep rift in our legal order. On reflection, though, the rift turns out to be something of a mirage. While judicial opinions do occasionally employ the rhetoric of label-formalism, we are all functionalists at heart. The Article’s second goal is to explore two exceptions to this norm. One is a faux exception—an exception to functionalism that actually reinforces its primacy. The second is a genuine exception, though very possibly a lamentable one. The faux exception is the use of clear statement rules. In some domains, the Court has held that drafters—be they legislative bodies drafting statutes, or private parties drafting contracts—must use precise language when directing outcomes of an especially momentous or disruptive nature. By imposing this requirement, clear statement rules tether interpretation to labels: they disable courts from looking beyond the words that drafters use. Clear statement rules are thus designed to shut down the interpretive enterprise. And in that sense, although clear statement rules call for label-formalism, they actually underscore the primacy of functionalism. The existence of clear statement rules—that they are necessary in the first instance—suggests that when judges are left to their own devices, they focus on function, not labels. The second exception to functionalism—a real one, though not necessarily a wise one—is a specific doctrinal setting: race equality jurisprudence. There, the focus is often on labels, not function, because the labels in question—racial categories—are understood to work freestanding harm. When confronting race equality cases, the Court does ascribe magical power to labels, but it is a destructive kind of magical power: laws that employ racial labels are ipso facto suspect, no matter their operation or underlying purpose. Drawing on Reva Siegel’s work, I argue that the Court’s aversion to racial labels is divisible into two conceptually distinct views. From one view—the color-blindness view—all race-conscious lawmaking is suspect, and the presence of racial labels is troubling simply because it evinces race-consciousness. From the other view—the anti-balkanization view—racial labels are intrinsically problematic. The Constitution does not necessarily frown on laws that pursue race-related objectives, but it does frown on the use of racial labels to further those objectives. The Article closes on a normative note. I argue that the anti-balkanization view, by transforming racial labels into a source of taboo, clashes with functionalist interpretation. If the anti-balkanization view can be reconciled with our practices, it is because racial labels are genuinely exceptional—because, in light of our history, they really do have negative magic power. I conclude by expressing skepticism about this proposition.

    Big Data Policing and the Redistribution of Anxiety

    By equipping police with data, what are we trying to accomplish? Certain answers ring familiar. For one thing, we are trying to make criminal justice decisions, plagued as they often are by inaccuracy and bias, more refined. For another, we are trying to boost the efficiency of governance institutions—police departments, prosecutors’ offices, municipal courts—that operate under the pall of scarcity. For the moment, I want to put answers like these to one side; not because they are wrong, but because they seem like only part of the story. Another goal of big data policing, in addition to those just described, is to produce a social order—a surveillance society—in which people constantly monitor and curate the data-trails they leave behind in everyday life. The idea of self-monitoring in response to surveillance is not new. Data intensifies and extends this dynamic; it does not create the dynamic ex nihilo. But the fact remains: in both scale and scope, data surveillance today lacks meaningful precedent. We are fast approaching a world in which virtually everything one does at t1—every movement one makes in public, every bond one forges on social media, every transaction one participates in—will be recorded and archived, becoming a potential foundation for adverse treatment at t2.

    The Constitutional Limits of Private Surveillance


    Ballership


    Aggregate Stare Decisis

    The fate of stare decisis hangs in the wind. Different factions of the Supreme Court are now engaged in open debate—echoing decades of scholarship—about the doctrine’s role in our constitutional system. Broadly speaking, two camps have emerged. The first embraces the orthodox view that stare decisis should reflect “neutral principles” that run orthogonal to a case’s merits; otherwise, it will be incapable of keeping the law stable over time. The second argues that insulating stare decisis from the underlying merits has always been a conceptual mistake. Instead, the doctrine should focus more explicitly on the merits—by diagnosing the magnitude of past error and allowing “egregiously wrong” decisions to be dismantled without constraint. This Article develops a compromise approach: an “aggregate voting rule,” requiring the combined vote across both courts—the one that crafted the holding at t1 and the one scrutinizing it at t2—to total a majority. In other words, the durability of past decisions should depend on the amount of support they were originally able to command. This would capture the main appeal of the reform position—the idea that stare decisis should not preclude the correction of significant missteps—but also retain the core of stability that defines the orthodox view. Under the latter, the ideal of respect for precedent drives the doctrine’s content. Under an aggregate voting rule, the same ideal would express itself, instead, in the doctrine’s mechanical structure—freeing judges to focus on the merits, without abandoning the (non-merits) values that have long animated stare decisis. This would facilitate the airing out of disagreement and the forward motion of law, while also encouraging judges to locate avenues of doctrinal compromise.

    Plausible Cause: Explanatory Standards in the Age of Powerful Machines

    The Fourth Amendment’s probable cause requirement is not about numbers or statistics. It is about requiring the police to account for their decisions. For a theory of wrongdoing to satisfy probable cause—and warrant a search or seizure—it must be plausible. The police must be able to explain why the observed facts invite an inference of wrongdoing, and judges must have an opportunity to scrutinize that explanation. Until recently, the explanatory aspect of Fourth Amendment suspicion—plausible cause—has been uncontroversial, and central to the Supreme Court’s jurisprudence, for a simple reason: explanations have served, in practice, as a guarantor of statistical likelihood. In other words, forcing police to articulate theories of wrongdoing is the means by which courts have traditionally ensured that (roughly) the right persons, houses, papers, and effects are targeted for intrusion. Going forward, however, technological change promises to disrupt the harmony between explanatory standards and statistical accuracy. Powerful machines enable a previously impossible combination: accurate predictions unaccompanied by explanations. As that change takes hold, we will need to think carefully about why explanation-giving matters. When judges assess the sufficiency of explanations offered by police (and other officials), what are they doing? If the answer comes back to error-reduction—if the point of judicial oversight is simply to maximize the overall number of accurate decisions—machines could theoretically do the job as well as, if not better than, humans. But if the answer involves normative goals beyond error-reduction, automated tools—no matter their power—will remain, at best, partial substitutes for judicial scrutiny. This Article defends the latter view. I argue that statistical accuracy, though important, is not the crux of explanation-giving. Rather, explanatory standards—like probable cause—hold officials accountable to a plurality of sometimes-conflicting constitutional and rule-of-law values that, in our legal system, bound the scope of legitimate authority. Error-reduction is one such value. But there are many others, and sometimes the values work at cross purposes. When judges assess explanations, they navigate a space of value-pluralism: they identify which values are at stake in a given decisional environment and ask, where necessary, if those values have been properly balanced. Unexplained decisions render this process impossible and, in so doing, hobble the judicial role. Ultimately, that role has less to do with analytic power than practiced wisdom. A common argument against replacing judges, and other human experts, with intelligent machines is that machines are not (yet) intelligent enough to take up the mantle. In the age of powerful algorithms, however, this turns out to be a weak—and temporally limited—claim. The better argument, I suggest in closing, is that judging is not solely, or even primarily, about intelligence. It is about prudence.