
    Conjunction and Aggregation

    This Article begins with the puzzle of why the law avoids the issue of conjunctive probability. Mathematically inclined observers might, for example, employ the product rule, multiplying the probabilities associated with several events or requirements in order to assess a combined likelihood, but judges and lawyers seem otherwise inclined. Courts and statutes might be explicit about the manner in which multiple requirements should be combined, but they are not. Thus, it is often unclear whether a factfinder should assess if condition A was more likely than not to be present - and then go on to see whether condition B satisfied this standard - or whether the factfinder's task is to ascertain if both A and B can together, or at once, satisfy the standard. A mathematically inclined judge or jury that thought a tort defendant .6 likely to have been negligent and .7 likely to have caused plaintiff's harm might conclude that plaintiff had failed to satisfy the preponderance of the evidence standard because the chance of both requirements being met is surely less than either alone and, indeed, less than .5. Yet, the law often instructs the jury to find the defendant liable, or is strangely ambiguous in its instructions. Legal practice seems at odds with scientific logic, or at least with probabilistic reasoning. I will refer to this puzzle as the math-law divide. Although this divide is encountered frequently in law, its puzzling character is unfamiliar to most lawyers and (even) legal scholars, and it is missed entirely by most litigants and judges. This Article seeks to explain or rationalize law's suppression of the product rule, or indeed any explicit alternative strategy for dealing with the conjunction issue. Part I discusses in greater detail the nature of the math-law divide and a number of traditional reactions to the puzzle. The Article then advances the idea that the process of aggregating multiple jurors' assessments hides valuable information. First, Part II.B posits that the Condorcet Jury Theorem indicates that agreement among multiple jurors might raise our level of confidence in a particular determination beyond what the jurors themselves individually report. Second, Part II.C urges that a supermajority's mean or median voter is likely to have a different assessment from that gained from the marginal juror. As such, a supermajority (or unanimity) rule may take the place of the product rule where there are multiple requirements for liability or guilt. An attempt to extract this inframarginal information more directly would likely generate strategic behavior problems in juries. Part III extends this analysis to panels of judges, for whom outcome voting may (somewhat similarly) substitute for the product rule.
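
    A quick arithmetical sketch of the abstract's example, assuming (purely for illustration) that the negligence and causation findings are probabilistically independent:

    ```python
    # Product-rule sketch using the numbers from the abstract above; the
    # independence of the two elements is an assumption for illustration.
    p_negligence = 0.6  # juror's credence that the defendant was negligent
    p_causation = 0.7   # juror's credence that the negligence caused the harm

    p_both = p_negligence * p_causation  # product rule for the conjunction
    print(round(p_both, 2))  # 0.42: each element clears the 0.5 preponderance
                             # threshold, yet their conjunction falls below it
    ```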

    For Judicial Majoritarianism


    Burdens of Persuasion in Civil Cases: Algorithms v. Explanations

    The conjunction paradox has fascinated generations of scholars, primarily because it brings into focus the apparent incompatibility of equally well accepted conventions. On the one hand, trials should be structured to reduce the total number, or optimize the allocation, of errors. On the other hand, burdens of persuasion are allocated to elements by the standard jury instruction rather than to a case as a whole. Because an error in finding any element of the plaintiff's cause of action to be true will result in an error if liability is found, errors on the overall case accumulate with errors on discrete issues. This, in turn, means that errors will neither be minimized nor optimized (except possibly randomly). Thus, the conventional view concerning the purpose of trial is inconsistent with the conventional view concerning the allocation of burdens of persuasion. Two recent efforts to resolve this conflict are examined in this article. Dean Saul Levmore has argued that the paradox is eliminated or reduced considerably because of either the implications of the Condorcet Jury Theorem or the implications of supermajority voting rules. Professor Alex Stein has constructed a micro-economic explanation of negligence that is also offered as resolving the paradox. Neither succeeds, and both fail for analogous reasons. First, each makes a series of ad hoc adjustments to the supposedly formal arguments that are out of place in formal reasoning. The result is that neither argument is, in fact, formal; both arguments thus implicitly reject the very formalisms they are supposedly employing in their explanations. Second, both articles mismodel the system of litigation they are trying to explain in an effort to close the gap between their supposedly formal models and the reality of the legal system; and when the necessary corrections are made to their respective models of litigation, neither formal argument maps onto the reality of trials, leaving the original problem untouched and unexplained. These two efforts thus closely resemble the earlier attempt to give a Bayesian explanation of trials and juridical proof, which failed because the formal requirements of subjective Bayesianism could not be aligned with the reality of modern trials. We also explore the reasons for this consistent misuse of formal arguments in the evidentiary context. Rationality requires, at a minimum, sensitivity to the intellectual tools brought to a task, of which algorithmic theoretical accounts are only one of many. Another, somewhat neglected in legal scholarship, is substantive explanations of legal questions that take into account the surrounding legal landscape. As we show, although the theoretical efforts to domesticate the conjunction paradox fail, a substantive explanation of it can be given that demonstrates the small likelihood of perverse consequences flowing from it. The article thus adds to the growing literature concerning the nature of legal theorizing by demonstrating yet another area where legal theorizing in one of its modern conventional manifestations (involving the search for the algorithmic argument that purportedly explains or justifies an area of law) has been ineffectual, whereas explanations that are informed by the substantive contours of the relevant legal field have considerable promise.
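
    A minimal sketch of the error-accumulation point, under the simplifying assumptions (not from the article) that errors on elements are independent and equally likely:

    ```python
    # If the factfinder errs on each element independently, the chance of an
    # error somewhere in the case grows with the number of elements; the 0.1
    # per-element error rate here is a hypothetical illustration.
    per_element_error = 0.1

    for n_elements in (1, 2, 3, 4):
        p_any_error = 1 - (1 - per_element_error) ** n_elements
        print(n_elements, round(p_any_error, 3))
    # 1 0.1 / 2 0.19 / 3 0.271 / 4 0.344 -- errors accumulate across elements
    ```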

    A Bayesian Model of Voting in Juries

    We take a game-theoretic approach to the analysis of juries by modelling voting as a game of incomplete information. Rather than the usual assumption of two possible signals (one indicating guilt, the other innocence), we allow jurors to perceive a full spectrum of signals. Given any voting rule requiring a fixed fraction of votes to convict, we characterize the unique symmetric equilibrium of the game, and we consider the possibility of asymmetric equilibria: we give a condition under which no asymmetric equilibria exist and show that, without it, asymmetric equilibria may exist. We offer a condition under which unanimity rule exhibits a bias toward convicting the innocent, regardless of the size of the jury, and we exhibit an example showing this bias can be reversed. And we prove a "jury theorem" for our general model: as the size of the jury increases, the probability of a mistaken judgment goes to zero for every voting rule except unanimity rule; for unanimity rule, we give a condition under which the probability of a mistake is bounded strictly above zero, and we show that, without this condition, the probability of a mistake may go to zero.
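
    The asymptotic claim can be illustrated with a sincere-voting Monte Carlo simplification; the paper itself analyzes strategic equilibrium voting, which this sketch deliberately omits, and the jury sizes, signal accuracy, and thresholds below are assumptions for illustration:

    ```python
    # Sincere-voting simplification (not the paper's equilibrium model): each
    # juror votes her own noisy signal. Under majority rule the estimated
    # probability of a mistaken judgment shrinks as the jury grows; under
    # unanimity it stays bounded away from zero.
    import random

    def p_mistake(n_jurors, threshold, p_signal=0.6, trials=20_000):
        """Estimate P(convict the innocent or acquit the guilty)."""
        mistakes = 0
        for _ in range(trials):
            guilty = random.random() < 0.5  # state of the world
            votes = sum((random.random() < p_signal) == guilty
                        for _ in range(n_jurors))  # votes to convict
            convict = votes >= threshold(n_jurors)
            mistakes += convict != guilty
        return mistakes / trials

    majority = lambda n: n // 2 + 1
    unanimity = lambda n: n

    for n in (5, 11, 25, 51):
        print(n, round(p_mistake(n, majority), 3),
              round(p_mistake(n, unanimity), 3))
    ```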

    Countersupermajoritarianism

    How should the Constitution change? In Originalism and the Good Constitution, John McGinnis and Michael Rappaport argue that it ought to change in only one way: through the formal mechanisms set out in the Constitution's own Article V. This is so, they claim, because provisions adopted by supermajority vote are more likely to be substantively good. The original Constitution was ratified in just that way, they say, and subsequent changes should be implemented similarly. McGinnis and Rappaport also contend that this substantive goodness is preserved best by a mode of originalist interpretation. In this Review, we press two main arguments. First, we contend that McGinnis and Rappaport's core thesis sidesteps critical problems with elevated voting rules. We also explain how, at a crucial point in the book (concerning Reconstruction), the authors trade their commitments to supermajoritarianism and formalism away. Second, we broaden the analysis and suggest that constitutional change can and should occur not just through formal amendment, but also by means of social movements, political mobilizations, media campaigns, legislative agendas, regulatory movement, and much more. Changing the Constitution has always been a variegated process that engages the citizenry through many institutions, by way of many voting thresholds, and using many modes of argument. And that variety helps to make the Constitution good.

    Supermajority Politics: Equilibrium Range, Policy Diversity, Utilitarian Welfare, and Political Compromise

    The standard Bowen model of political competition with single-peaked preferences (Bowen, 1943) predicts party convergence to the median voter's ideal policy, with the number of equilibrium policies not exceeding two. This result assumes majority rule and a unidimensional policy space. We extend this model to static and dynamic political economies where the voting rule is a supermajority rule and the policy space is totally ordered. Voters' strategic behavior is captured by the core in static environments and by the largest consistent set in dynamic environments. In these settings, we determine the exact number of equilibria and show that it is an increasing correspondence of the supermajority's size. This result has implications for the depth of policy diversity across structurally identical supermajoritarian political economies. We also examine equilibrium effects of supermajority rules on utilitarian welfare and political compromise under uncertainty.
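
    The static core construction can be illustrated by brute force on a unidimensional policy space; the ideal points, grid, and Euclidean (single-peaked) preferences below are illustrative assumptions rather than the paper's model:

    ```python
    # Brute-force sketch of the q-supermajority core: a policy x is in the
    # core iff no alternative y is strictly preferred to x by at least q of
    # the n voters. Ideal points and grid are hypothetical.
    import numpy as np

    ideal_points = np.array([0.1, 0.25, 0.4, 0.5, 0.6, 0.8, 0.95])
    grid = np.linspace(0.0, 1.0, 401)

    def in_core(x, q):
        for y in grid:
            prefer_y = np.sum(np.abs(ideal_points - y) < np.abs(ideal_points - x))
            if prefer_y >= q:  # y beats x under the q-rule
                return False
        return True

    for q in (4, 5, 6, 7):  # simple majority of 7 up to unanimity
        core = [x for x in grid if in_core(x, q)]
        print(q, round(min(core), 3), round(max(core), 3))
    # The core is the median alone under majority rule and widens as q grows,
    # echoing the claim that equilibria increase with the supermajority's size.
    ```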

    The Parliament of the Experts

    In the administrative state, how should expert opinions be aggregated and used? If a panel of experts is unanimous on a question of fact, causation, or prediction, can an administrative agency rationally disagree, and on what grounds? If experts are split into a majority view and a minority view, must the agency follow the majority? Should reviewing courts limit agency discretion to select among the conflicting views of experts, or to depart from expert consensus? I argue that voting by expert panels is likely, on average, to be epistemically superior to the substantive judgment of agency heads in determining questions of fact, causation, or prediction. Nose counting of expert panels should generally be an acceptable basis for decision under the arbitrary and capricious or substantial evidence tests. Moreover, agencies should be obliged to follow the (super)majority view of an expert panel, even if the agency's own judgment is to the contrary, unless the agency can give an epistemically valid second-order reason for rejecting the panel majority's view.
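
    The epistemic claim here tracks the Condorcet Jury Theorem: if each expert is independently correct with probability above one half, the panel majority is correct with higher probability, and increasingly so as the panel grows. A sketch with assumed competence figures:

    ```python
    # Exact binomial computation of the chance that a strict majority of an
    # odd-sized panel is correct; the 0.7 individual competence is an
    # assumption for illustration.
    from math import comb

    def majority_correct(n_experts, p):
        """P(strict majority of n_experts is correct), n_experts odd."""
        k_needed = n_experts // 2 + 1
        return sum(comb(n_experts, k) * p**k * (1 - p) ** (n_experts - k)
                   for k in range(k_needed, n_experts + 1))

    for n in (1, 3, 5, 9, 15):
        print(n, round(majority_correct(n, 0.7), 4))
    # 1 0.7 / 3 0.784 / 5 0.8369 / 9 0.9012 / 15 0.95 -- reliability rises
    ```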

    Five to Four: Why Do Bare Majorities Rule on Courts?
