
    What We Don't Know About Class Actions but Hope to Know Soon

    Legislation that would alter class action practice in the federal courts has been pending in Congress. Nearly a decade's worth of U.S. Supreme Court cases have restricted the scope and ease of use of the class action device. Class action critics argue that class litigation is a "racket" that fails to compensate plaintiffs and instead enriches plaintiffs' lawyers at the expense of legitimate business practices. On the other hand, defenders of class actions decry the legislative and judicial forces aligned against them, warning that trends in class action law will eviscerate the practical rights held by consumers and workers. In short, there is considerable controversy over whether class actions are an economic menace or a boon to the little guys. We have two purposes in this brief Article. First, we wish to focus continuing attention on the need for more empirical information about the actual functioning of the federal class action system. Second, we wish to share our current efforts to use a one-of-a-kind collection of docket reports, originally harvested from Public Access to Court Electronic Records (PACER), to fill the empirical gap. Presentation of empirical findings resulting from this effort awaits a future article. However, this Article includes suggestions as to how the federal judiciary and the Administrative Office of the United States Courts ("AO") could improve data management and data reporting so as to make information about federal class actions more accessible to scholars and others interested in how the class action device operates in practice and what reforms, if any, would be advisable.

    Does more for the poor mean less for the poor? The politics of tagging

    Proposals aimed at improving the welfare of the poor often include indicator targeting, in which non-income characteristics (such as race, gender, or land ownership) that are correlated with income are used to target limited funds to groups likely to include a concentration of the poor. Previous work shows that efficient use of a fixed budget for poverty reduction requires such targeting, either because agents' income cannot be observed or to reduce distortionary incentives arising from redistributive interventions. In spite of this, the authors question the political viability of targeting. After constructing a model that is essentially an extension of Akerlof's 1978 model of "tagging," they derive three main results: 1) Akerlof's result continues to hold: ignoring political considerations, not only is targeting desirable, but recipients of the targeted transfer receive a greater total transfer than they would if targeting were not possible. 2) A classical social-choice analysis, in which agents vote simultaneously about the level of taxation and the degree of targeting, shows that positive levels of targeted transfers will not exist in equilibrium (an unsurprising finding, given Plott's 1968 theorem). It also shows that a voting equilibrium often will exist with no targeting but with non-zero taxation and redistribution. 3) In a game in which the policymaker chooses the degree of targeting while voters choose the level of taxation, the redistributive efficiency gains from tagging may well fail to outweigh the resulting reduction in funds available for redistribution. These results may be extended readily to account for altruistic agents.
The authors stress that even when these results hold, the alternative to targeted transfers, a universally received lump-sum grant financed through a proportional tax, will nonetheless be supported politically and will be quite progressive relative to the pretransfer income distribution.
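
    The fixed-budget comparison at the heart of the targeting debate can be illustrated with a toy calculation (all numbers below are hypothetical and this is not the authors' model): targeting on a poverty-correlated tag closes more of the poverty gap than a universal lump-sum grant, which is precisely the efficiency gain the political game may dissipate.

```python
# Toy comparison (hypothetical numbers, not the authors' model): a fixed
# budget spent on a tag-targeted transfer vs. a universal lump-sum grant.
poverty_line = 6.0
budget = 8.0
incomes = [2, 3, 4, 10, 12, 15, 20, 25]                        # pre-transfer incomes
tagged = [True, True, True, True, False, False, False, False]  # tag correlated with poverty

def poverty_gap(ys):
    """Total income shortfall below the poverty line."""
    return sum(max(poverty_line - y, 0) for y in ys)

# Targeted transfer: split the budget equally among tagged agents only.
n_tagged = sum(tagged)
targeted = [y + (budget / n_tagged if t else 0.0) for y, t in zip(incomes, tagged)]

# Universal alternative: the same budget as an equal lump-sum grant to everyone.
universal = [y + budget / len(incomes) for y in incomes]

print(poverty_gap(incomes), poverty_gap(targeted), poverty_gap(universal))
```

    On these numbers the targeted scheme cuts the poverty gap from 9 to 3 while the universal grant only reaches 6; the paper's question is whether voters would sustain the tax level that funds the targeted version.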

    Enabling high confidence detections of gravitational-wave bursts

    With the advanced LIGO and Virgo detectors taking observations, the detection of gravitational waves is expected within the next few years. Extracting astrophysical information from gravitational wave detections is a well-posed problem and thoroughly studied when detailed models for the waveforms are available. However, one motivation for the field of gravitational wave astronomy is the potential for new discoveries. Recognizing and characterizing unanticipated signals requires data analysis techniques which do not depend on theoretical predictions for the gravitational waveform. Past searches for short-duration, un-modeled gravitational wave signals have been hampered by transient noise artifacts, or "glitches," in the detectors. In some cases, even high signal-to-noise simulated astrophysical signals have proven difficult to distinguish from glitches, so that essentially any plausible signal could be detected with at most 2-3σ confidence. We have put forth the BayesWave algorithm to differentiate between generic gravitational wave transients and glitches, and to provide robust waveform reconstruction and characterization of the astrophysical signals. Here we study BayesWave's capabilities for rejecting glitches while assigning high confidence to detection candidates through analytic approximations to the Bayesian evidence. Analytic results are tested with numerical experiments by adding simulated gravitational wave transient signals to LIGO data collected between 2009 and 2010, and are found to be in good agreement. Comment: 15 pages, 6 figures, submitted to PR
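
    The core intuition behind separating signals from glitches, namely that an astrophysical signal appears coherently in multiple detectors while a glitch does not, can be sketched with a toy cross-correlation. This illustrates only the coherence idea, not the BayesWave algorithm itself, which performs model comparison via Bayesian evidence; the waveform and noise model below are hypothetical.

```python
import math
import random

random.seed(0)
n = 512

def noise():
    """White Gaussian detector noise (toy stand-in for real LIGO noise)."""
    return [random.gauss(0.0, 1.0) for _ in range(n)]

# Hypothetical transient waveform: a damped sinusoid.
wave = [5.0 * math.exp(-t / 50.0) * math.sin(0.3 * t) for t in range(n)]

def cross_corr(a, b):
    """Normalized cross-correlation between two data streams."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den

# "Signal": the same waveform buried in both detectors' noise (coherent).
s1 = [w + e for w, e in zip(wave, noise())]
s2 = [w + e for w, e in zip(wave, noise())]

# "Glitch": the transient appears in one detector only (incoherent).
g1 = [w + e for w, e in zip(wave, noise())]
g2 = noise()

print(cross_corr(s1, s2), cross_corr(g1, g2))
```

    The coherent pair shows a clearly elevated cross-correlation while the glitch pair correlates at the noise level, which is the kind of discriminating information a signal-versus-glitch model comparison can exploit.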

    The Reduced Form of Litigation Models and the Plaintiff's Win Rate

    In this paper I introduce what I call the reduced form approach to studying the plaintiff's win rate in litigation selection models. A reduced form comprises a joint distribution of plaintiff's and defendant's beliefs concerning the probability that the plaintiff would win in the event a dispute were litigated; a conditional win rate function that tells us the actual probability of a plaintiff win in the event of litigation, given the parties' subjective beliefs; and a litigation rule that provides the probability that a case will be litigated given the two parties' beliefs. I show how models with very different-looking structure can be understood in common reduced form terms, and I then use the reduced form to prove several general results. First, a generalized version of the Priest-Klein model can be used to represent any other model's reduced form, even though the Priest-Klein model uses the Landes-Posner-Gould (LPG) litigation rule while some other models do not. Second, Shavell's famous any-win-rate result holds generally, even in models with party belief distributions that are both highly accurate and identical across plaintiffs and defendants. Third, there are only limited conditions under which the LPG litigation rule can be rejected empirically; this result undermines the case against the LPG rule's admittedly non-optimizing approach to modeling litigation selection. Finally, I use the reduced form approach to clarify how selection effects complicate the use of data on the plaintiff's win rate to measure changes in legal rules. The result, I suggest, is that recent work by Klerman & Lee advocating the use of such data is unduly optimistic.
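
    The selection logic that a reduced form captures can be illustrated with a small Monte Carlo in the spirit of the Priest-Klein setup (all parameter values below are hypothetical, and the litigation rule is a stylized LPG-style threshold, not any specific model from the paper): cases are litigated only when the parties genuinely disagree, which concentrates litigation near the decision standard and pulls the litigated win rate away from the unconditional one.

```python
import math
import random

random.seed(1)

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

standard = 1.0    # plaintiff wins at trial iff true case quality exceeds this
noise_sd = 0.3    # accuracy of each party's estimate of case quality
threshold = 0.2   # LPG-style rule: litigate only if the plaintiff's predicted
                  # win probability exceeds the defendant's by this margin

n = 200_000
wins_all = litigated = wins_lit = 0
for _ in range(n):
    quality = random.gauss(0.0, 1.0)          # true merits of the dispute
    win = quality > standard
    wins_all += win
    # Each side turns a noisy estimate of quality into a predicted win probability.
    p_pred = phi((quality + random.gauss(0.0, noise_sd) - standard) / noise_sd)
    d_pred = phi((quality + random.gauss(0.0, noise_sd) - standard) / noise_sd)
    if p_pred - d_pred > threshold:           # litigate; otherwise the case settles
        litigated += 1
        wins_lit += win

print(wins_all / n, wins_lit / litigated)
```

    Unconditionally, plaintiffs here win only about 16% of disputes, but among litigated cases, which require real disagreement and hence quality near the standard, the win rate moves markedly toward one-half. This is the selection effect that complicates using win-rate data to measure changes in legal rules.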

    Rethinking Summary Judgment Empirics: The Life of the Parties


    Locking the Doors to Discovery? Assessing the Effects of Twombly and Iqbal on Access to Discovery

    Many observers believe the Supreme Court's Twombly and Iqbal opinions have curtailed access to civil justice. But previous empirical studies looking only at Rule 12(b)(6) grant rates have failed to capture the full effect of these cases because they have not accounted for party selection: changes in party behavior that can be expected following changes in pleading standards. In this Note, I show how party selection can be expected to undermine the empirical usefulness of simple grant-rate comparisons. I then use a conceptual model of party behavior that allows me to derive an adjusted measure of Twombly/Iqbal's impact and show how to estimate a lower bound on this measure using data from recent studies by the Federal Judicial Center. My empirical results suggest that, depending on the nature of the suit in question, Twombly and Iqbal have negatively affected plaintiffs in at least 15% to 21% of cases that faced Rule 12(b)(6) motions in the post-Iqbal data window. Again depending on the nature of the suit, these figures represent between one-fourth and two-fifths of the cases that fail to reach discovery on at least some claims in the post-Iqbal data window.
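
    Why raw grant-rate comparisons can understate a stricter pleading standard is simple arithmetic once party selection is in the picture. The following numbers are entirely hypothetical (not the FJC figures or the Note's adjusted measure); they only illustrate the masking mechanism.

```python
# Hypothetical numbers, not drawn from the FJC studies: a toy illustration of
# how party selection can mask a stricter pleading standard in raw grant rates.

# Pre-Twombly: 100 weak and 100 strong complaints face 12(b)(6) motions;
# weak complaints are dismissed at 60%, strong ones at 20%.
pre_grant_rate = (100 * 0.60 + 100 * 0.20) / (100 + 100)

# Post-Iqbal: weak complaints are now dismissed at 90%, and, anticipating this,
# half of the would-be weak filers do not file at all (selection out).
post_grant_rate = (50 * 0.90 + 100 * 0.20) / (50 + 100)

print(pre_grant_rate, post_grant_rate)
```

    The observed grant rate barely moves (0.40 to about 0.43), yet fifty weak plaintiffs were deterred outright and the remainder face a 90% rather than 60% dismissal rate, which is why an adjusted, selection-aware measure is needed rather than a raw grant-rate comparison.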

    Can the Dark Arts of the Dismal Science Shed Light on the Empirical Reality of Civil Procedure?

    Litigation involves human beings, who are likely to be motivated to pursue their interests as they understand them. Empirical civil procedure researchers must take this fact seriously if we are to adequately characterize the effects of policy changes. To make this point concrete, I first step outside the realm of civil procedure and illustrate the importance of accounting for human agency in empirical research. I use the canonical problem of demand estimation in economics to show how what I call the "urn approach" to empirical work fails to uncover important empirical relationships by disregarding behavioral aspects of human action. I then show how these concerns permeate a prominent empirical issue in contemporary civil procedure debates: the changes in pleading policy wrought by Bell Atlantic Corp. v. Twombly and Ashcroft v. Iqbal. Revisiting my own earlier work, I embed the question of how changes in the pleading standard will affect case outcomes in a broad behavioral framework that takes parties' agency seriously. In the process, I address recent critiques, both of the very idea of using behavioral frameworks to understand civil litigation policy changes, and of certain aspects of my use of real-world litigation data collected by the Federal Judicial Center. As I show, these criticisms are straightforwardly refuted on the merits. The alternative to taking seriously the behavioral context created by the civil justice system, which is what has occurred so far in too much of the debate over Twombly and Iqbal, is, as one critic of early 20th-century empirical research by legal scholars once put it, "a mindless amassing of statistics without reference to any guiding theory whatsoever." To do better, we will need to take behavior seriously in studying civil litigation.

    Bootstrap-Based Improvements for Inference with Clustered Errors

    Microeconometrics researchers have increasingly realized the essential need to account for any within-group dependence in estimating standard errors of regression parameter estimates. The typical preferred solution is to calculate cluster-robust or sandwich standard errors that permit quite general heteroskedasticity and within-cluster error correlation, but presume that the number of clusters is large. In applications with few (5-30) clusters, standard asymptotic tests can over-reject considerably. We investigate more accurate inference using cluster bootstrap-t procedures that provide an asymptotic refinement. These procedures are evaluated using Monte Carlo simulations, including the much-cited differences-in-differences example of Bertrand, Duflo and Mullainathan (2004). In situations where standard methods lead to rejection rates of ten percent or more for tests of nominal size 0.05, our methods can reduce this to five percent. In principle a pairs cluster bootstrap should work well, but in practice a wild cluster bootstrap performs better.
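
    A minimal sketch of the wild cluster bootstrap-t procedure may help fix ideas: simulated data with few clusters, a single regressor, Rademacher weights applied at the cluster level, and the null imposed when resampling. This is a bare-bones illustration under those assumptions, not the authors' own code, and the cluster-robust variance below is the simple sandwich for the slope with the intercept partialled out.

```python
import random
import statistics

random.seed(0)

def slope_and_cluster_se(x, y, cl):
    """OLS slope of y on x (with intercept) and its cluster-robust SE."""
    xbar = statistics.fmean(x)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * yi for xi, yi in zip(x, y)) / sxx
    a = statistics.fmean(y) - b * xbar
    resid = [yi - a - b * xi for xi, yi in zip(x, y)]
    # Sandwich "meat": sum scores within each cluster first, then square.
    scores = {}
    for xi, ui, g in zip(x, resid, cl):
        scores[g] = scores.get(g, 0.0) + (xi - xbar) * ui
    se = sum(s * s for s in scores.values()) ** 0.5 / sxx
    return b, se

# Simulated data: few clusters, within-cluster correlation, true slope = 0.
G, m = 8, 30
cl = [g for g in range(G) for _ in range(m)]
shock_x = [random.gauss(0, 1) for _ in range(G)]   # cluster component of x
shock_y = [random.gauss(0, 1) for _ in range(G)]   # cluster component of the error
x = [shock_x[g] + random.gauss(0, 1) for g in cl]
y = [shock_y[g] + random.gauss(0, 1) for g in cl]

b, se = slope_and_cluster_se(x, y, cl)
t_stat = b / se

# Wild cluster bootstrap-t, imposing the null slope = 0: fit the restricted
# model (intercept only), then flip each cluster's residuals by a single
# Rademacher weight and recompute the t-statistic on the pseudo-data.
a0 = statistics.fmean(y)
u0 = [yi - a0 for yi in y]
t_boot = []
for _ in range(999):
    w = [random.choice((-1.0, 1.0)) for _ in range(G)]
    y_star = [a0 + w[g] * ui for g, ui in zip(cl, u0)]
    b_s, se_s = slope_and_cluster_se(x, y_star, cl)
    t_boot.append(b_s / se_s)

p_value = sum(abs(tb) >= abs(t_stat) for tb in t_boot) / len(t_boot)
print(f"t = {t_stat:.2f}, wild cluster bootstrap-t p = {p_value:.3f}")
```

    With only eight clusters, comparing t to standard normal critical values would tend to over-reject; comparing it instead to the bootstrap distribution of the studentized statistic supplies the asymptotic refinement the paper evaluates.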

    The Triangle of Law and the Role of Evidence in Class Action Litigation


    Can We Learn Anything About Pleading Changes from Existing Data?

    In light of the gateway role that the pleading standard can play in our civil litigation system, measuring the empirical effects of pleading policy changes embodied in the Supreme Court's controversial Twombly and Iqbal cases is important. In my earlier paper, Locking the Doors to Discovery, I argued that in doing so, special care is required in formulating the object of empirical study. Taking party behavior seriously, as Locking the Doors does, leads to empirical results suggesting that Twombly and Iqbal have had substantial effects among cases that face Rule 12(b)(6) motions post-Iqbal. This paper responds to potentially important critiques of my empirical implementation made by the FJC's Joe Cecil and Professor David Engstrom. An additional contribution of the present paper is to elucidate some important challenges for empirical work in civil procedure. First, researchers should carefully consider which covariates belong in statistical models, while also taking care in assessing the empirical importance of controlling for covariates. Second, data collection protocols should be designed with behavioral assumptions in mind. But third, researchers should not let the perfect be the enemy of the good: even data protocols that are less than perfectly designed may be broadly useful.