14 research outputs found

    Statistical Precedent: Allocating Judicial Attention

    Get PDF
    The U.S. Courts of Appeals were once admired for their wealth of judicial attention and for their generosity in distributing it. At least by legend, almost all cases were afforded what William Richman and William Reynolds have termed the “Learned Hand Treatment.” Guided by Judge Learned Hand’s commandment that “[t]hou shalt not ration justice,” a panel of three judges would read the briefs, hear oral argument, deliberate at length, and prepare multiple drafts of an opinion. Once finished, the judges would publish their opinion, binding themselves and their colleagues in accordance with the common-law tradition. The final opinion would be circulated to and read by every judge in the circuit, giving nonpanel judges an opportunity to provide feedback or evaluate a decision for en banc review. And on top of this extensive attention was a reasonable chance for yet more, as the Supreme Court reviewed approximately 3% of the circuit courts’ decisions. But darker days were ahead. A caseload explosion greatly diminished the courts’ reservoir of judicial attention. Between 1960 and 2010, the courts’ caseload increased by 1,436%. The courts responded to this precipitous rise in workload with a series of moves to reduce the time and effort that judges spent on each case. They employed an army of staff attorneys to help decide cases and draft opinions, increased the number of law clerks from one to three or four per judge, and curtailed the availability of oral argument such that in 2017, it was provided in less than 20% of cases... The shortage of attention threatens to undermine the courts’ ability to decide cases correctly and develop the law coherently. Without the time to carefully consider each case, circuit court judges—traditionally serving as the main source of error correction in the federal courts—will inevitably make more errors of their own

    European shrinking rural areas: Key messages for a refreshed long-term vision.

    Get PDF
    The paper begins with a discussion of the concept of 'shrinking' and its origins outside the realm of rural development. Building on this, the paper shows the distribution of shrinking rural areas across Europe. Using both the project's literature review and findings from its eight case studies, the socio-economic processes that drive demographic decline in rural areas are then described. A brief account of the evolution of EU interventions to alleviate the effects of shrinking, and some remarks about the current policy/governance landscape follow. We conclude by considering how a better understanding of the problem and process of shrinking may lead to more effective interventions, within the context of a refreshed long-term vision for Rural Europe. The latter needs to fully acknowledge the expanding repertoire of opportunities confronting rural areas as COVID-19-driven changes in working behaviour and the geography of economic activity accelerate and fulfil previously incremental shifts in technology and markets

    Measuring How Much Judges Matter for Case Outcomes

    No full text
    A large empirical literature examines how judges' traits affect how cases get resolved. This literature has led many to conclude that judges matter for case outcomes. But how much do they matter? Existing empirical findings understate the true extent of judicial influence over case outcomes since standard estimation techniques hide some disagreement among judges. We devise a novel machine learning method to reveal additional sources of disagreement. Applying this method to the Ninth Circuit, we estimate that at least 28% of cases could be decided differently based solely on the panel they were assigned to
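    The abstract does not spell out the method, so the following is only a toy sketch of the general idea, not the authors' approach: fit a model that predicts case outcomes from case features and panel composition, then redraw panels counterfactually and count how often the predicted outcome changes. All data here are simulated and every name is an assumption.

```python
# Toy sketch (simulated data, hypothetical setup): how often would a case's
# predicted outcome change under a different randomly drawn three-judge panel?
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_cases, n_judges = 2000, 30
judge_lean = rng.normal(0, 1, n_judges)          # latent judge tendencies

# Each case gets 3 judges; the outcome depends on case merits plus panel lean.
cases = pd.DataFrame({"merit": rng.normal(0, 1, n_cases)})
panels = np.array([rng.choice(n_judges, 3, replace=False) for _ in range(n_cases)])
panel_lean = judge_lean[panels].mean(axis=1)
cases["reversed"] = (cases["merit"] + panel_lean + rng.normal(0, 1, n_cases) > 0).astype(int)

X = np.column_stack([cases[["merit"]].values, panel_lean])
model = GradientBoostingClassifier().fit(X, cases["reversed"])
baseline = model.predict(X)

# Counterfactual panels: redraw the panel many times and record whether the
# predicted outcome ever differs from the prediction under the actual panel.
flips = np.zeros(n_cases)
for _ in range(50):
    cf_panels = np.array([rng.choice(n_judges, 3, replace=False) for _ in range(n_cases)])
    cf_lean = judge_lean[cf_panels].mean(axis=1)
    cf_pred = model.predict(np.column_stack([cases[["merit"]].values, cf_lean]))
    flips += (cf_pred != baseline)

print(f"Share of cases whose predicted outcome flips under some panel draw: {(flips > 0).mean():.2f}")
```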

    Political Appointments and Outcomes in Federal District Courts

    No full text
    Using an original data set of around 70,000 civil rights cases heard by nearly 200 judges, we study the effect of presidential appointments to federal district courts. We provide the first causal estimates of whether lawsuits end differently depending on their assignment to either a Democratic or a Republican appointed judge. We show Republican appointees cause fewer settlements and more dismissals, favoring defendants by around 5 percentage points. We estimate a similarly sized effect for a sample of civil rights appeals heard in the Ninth Circuit, raising questions about the conventional wisdom that politics matters more at higher levels of the judicial hierarchy. We also find that the effect in district courts has increased over time. For cases filed during the Obama presidency, Republican appointees caused pro-defendant outcomes in 7.4% more cases than Democratic appointees. Our results suggest that district courts are an important—although neglected—subject of research for political scientists
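    The design described above rests on the random assignment of cases to judges within a court. As a rough, hypothetical sketch (not the authors' code), a regression of a pro-defendant outcome indicator on the appointing party, with court and filing-year fixed effects, recovers the causal gap; the file and column names below are assumptions.

```python
# Hedged sketch of a random-assignment design like the one described above.
# The data file and column names ("pro_defendant", "republican_appointee",
# "court", "year", "judge_id") are hypothetical, not the authors' data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("civil_rights_cases.csv")  # hypothetical case-level data

# Because cases are randomly assigned to judges within a court, comparing
# outcomes by judge party within court and filing year has a causal reading.
fit = smf.ols(
    "pro_defendant ~ republican_appointee + C(court) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["judge_id"]})

# The coefficient is the change in the probability of a pro-defendant outcome
# from drawing a Republican appointee; multiplied by 100 it is in percentage points.
print(fit.params["republican_appointee"])
```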

    An Algorithmic Assessment of Parole Decisions

    No full text
    Objectives: Parole is an important mechanism for alleviating the extraordinary social and financial costs of mass incarceration. Yet parole boards can also present a major obstacle, denying parole to low-risk inmates who could safely be released from prison. We evaluate a major parole institution, the New York State Parole Board, quantifying the costs of suboptimal decision-making. Methods: Using ensemble machine learning, we predict any arrest and any violent felony arrest within three years to generate criminal risk predictions for individuals released on parole in New York from 2012–2015. We quantify the social welfare loss of the Board’s suboptimal decisions by rank ordering inmates by their predicted risk and estimating the crime rates that could be observed with counterfactual risk-based release decisions. We also estimate the release rates that could be achieved holding arrest rates constant. We attend to the “selective labels” problem in several ways, including by testing the validity of the algorithm for individuals who were denied parole but later released after the expiration of their sentence. Results: We conservatively estimate that the Board could have more than doubled the release rate without increasing the total or violent felony arrest rate, and that they could have achieved these gains while simultaneously eliminating racial disparities in release rates. Conclusions: This study demonstrates the use of algorithms for evaluating criminal justice decision-making. Our analyses suggest that many low-risk individuals are being unnecessarily incarcerated, highlighting the need for major parole reform
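    As a rough illustration of the counterfactual release exercise described in the abstract (assumed data and column names, and glossing over the selective-labels corrections the authors apply), one can rank candidates by predicted risk and ask how many could be released without exceeding the arrest rate observed under the Board's actual decisions.

```python
# Minimal sketch (hypothetical data and column names) of a risk-ranked
# counterfactual release exercise; not the paper's actual pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

df = pd.read_csv("parole_candidates.csv")                  # hypothetical
features = ["age", "prior_arrests", "offense_severity"]    # hypothetical

# Risk model trained on released individuals with observed outcomes
# (this simple version ignores the selective-labels problem).
released = df[df["released"] == 1]
risk_model = GradientBoostingClassifier().fit(
    released[features], released["rearrested_3yr"]
)
df["risk"] = risk_model.predict_proba(df[features])[:, 1]

# Observed benchmarks: the Board's release rate and arrest rate among releasees.
observed_release_rate = df["released"].mean()
observed_arrest_rate = released["rearrested_3yr"].mean()

# Counterfactual: release from lowest to highest predicted risk until the
# expected arrest rate among releasees reaches the observed benchmark.
ranked = df.sort_values("risk")
expected_arrest_rate = ranked["risk"].expanding().mean()
n_release = int((expected_arrest_rate <= observed_arrest_rate).sum())

print(f"Observed release rate: {observed_release_rate:.2f}")
print(f"Risk-based release rate at the same expected arrest rate: {n_release / len(df):.2f}")
```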

    Big Data, Machine Learning, and the Credibility Revolution in Empirical Legal Studies

    No full text
    The so-called credibility revolution changed empirical research (see Angrist and Pischke 2010). Before the revolution, researchers frequently relied on attempts to statistically model the world to make causal inferences from observational data. They would control for confounders, make functional form assumptions about the relationships between variables, and read regression coefficients on variables of interest as causal estimates. In essence, they would rely heavily on ex post statistical analysis to make causal inferences. The revolution centered around the idea that the only way to truly account for possible sources of bias is to remove the influence of all confounders ex ante through better research design. Thus, since the revolution, researchers have attempted to design studies around sources of random or as-if random variation, either with experiments or what have become known as “quasi-experimental” designs. This credibility revolution has increasingly brought quantitative researchers into agreement that, in the words of Donald Rubin, “design trumps analysis” (Rubin 2008).

    However, the research landscape has changed dramatically in recent years. We are now in an era of “big data.” At the same time as the internet vastly expanded the number of available data sources, sophisticated computational resources became widely accessible. This has opened up a whole new frontier for social scientists and empirical legal scholars: textual data. Indeed, most of the information we have about law, politics, and society is contained in texts of one kind or another, almost all of which are now digitized and available online. For example, in the 1990s, federal courts began to adopt online case records management—known as CM/ECF—where attorneys, clerks, and judges file and access documents related to each case. Using the federal government’s PACER database (available at pacer.gov), researchers (both academic and professional) can now easily access the dockets and filings for each case that is filed in a federal court. LexisNexis, Westlaw, and other companies have further improved access by providing raw text versions of a wide range of legal documents, along with expert-coded metadata to help researchers more easily find what they are looking for. And yet, despite the potential of these newly available resources, the sheer volume presents challenges for researchers. A core problem is how to draw substantively important inferences from a mountain of often unstructured digitized text. To deal with this challenge, researchers are turning their attention back toward the tools of statistical analysis. As many of the essays in this volume demonstrate, there is now a surging interest among researchers in one particularly powerful tool of statistical analysis: machine learning.

    This chapter addresses the place of machine learning in a post–“credibility revolution” landscape. We begin with an overview of machine learning and then make four main points. First, design still trumps analysis. The lessons of the credibility revolution should not be forgotten in the excitement around machine learning; machine learning does nothing to address the problem of omitted variable bias. Nonetheless, machine learning can improve a researcher’s data analysis. Indeed, with growing concerns about the reliability of even design-based research, perhaps we should be aiming for triangulation rather than design purism. Further, for some questions, we do not have the luxury of waiting for a strong design, and we need a best approximation of an answer in the meantime. Second, even design-committed researchers should not ignore machine learning: it can be used in service of design-based studies to make causal estimates less variable, less biased, and more attuned to heterogeneity. Third, there are important policy-relevant prediction problems for which machine learning is particularly valuable (e.g., predicting recidivism in the criminal justice system). Yet even with research questions centered around prediction, a focus on design is still essential. As with causal inference, researchers cannot simply rely on statistical models but must also carefully consider threats to the validity of predictions. We briefly review some of these threats: GIGO (“garbage in, garbage out”), selective labels, and Campbell’s law. Fourth, the predictive power of machine learning can be leveraged for descriptive research. Where possible, we illustrate these points using examples drawn from real-world research
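    To illustrate the second point, here is a hedged example (not drawn from the chapter itself) of machine learning in service of a design-based study: in a randomized design, cross-fitted ML predictions of the outcome from pre-treatment covariates can be subtracted out, reducing the variance of the treatment-effect estimate without biasing it. The simulated data and model choices below are assumptions.

```python
# Hedged illustration: ML covariate adjustment in a randomized design.
# Cross-fitted predictions of the outcome from covariates are subtracted out;
# because treatment is randomly assigned, the adjusted difference in means
# still targets the true effect but with lower variance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n = 4000
X = rng.normal(size=(n, 5))                  # pre-treatment covariates
D = rng.integers(0, 2, n)                    # randomly assigned treatment
Y = 0.3 * D + X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=n)

# Cross-fitted predictions of Y from X (treatment ignored in the nuisance model).
m_hat = np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train], Y[train])
    m_hat[test] = model.predict(X[test])

naive = Y[D == 1].mean() - Y[D == 0].mean()
resid = Y - m_hat
adjusted = resid[D == 1].mean() - resid[D == 0].mean()
print(f"Difference in means: {naive:.3f}, ML-adjusted estimate: {adjusted:.3f}")
```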

    Entertainment As Crime Prevention: Evidence from Chicago Sports Games.

    No full text
    The concern that mass media may be responsible for aggressive and criminal behavior is widespread. Comparatively little consideration has been given to its potential diversionary function. This paper contributes to the emerging body of literature on entertainment as a determinant of crime by analyzing Chicago by-the-minute crime reports during major sporting events. Sports provide an exogenous infusion of TV diversion that we leverage to test the effect of entertainment on crime. Because the scheduling of a sporting event should be random with respect to crime within a given month, day of the week, and time, we use month-time-day-of-week fixed effects to estimate the effect of the sporting events on crime. We compare crime reports by the half hour when Chicago’s NFL, NBA, or MLB teams are playing to crime reports at the same time, day, and month when the teams are not playing. We conduct the same analysis for the Super Bowl, NBA Finals, and MLB World Series. The Super Bowl generates the most dramatic declines: total crime reports decrease by approximately 25 percent (roughly 60 fewer crimes). The decline is partially offset by an increase in crime before the game, most notably in drug and prostitution reports, and an uptick in reports of violent crime immediately after the game. Crime during Chicago Bears Monday night football games is roughly 15 percent lower (30 fewer crimes) than during the same time on non-game nights. Our results show similar but smaller effects for NBA and MLB games. Except for the Super Bowl, we find little evidence for temporal crime displacement before or after the games. In general, we find substantial declines during games across crime types – property, violent, drug, and other – with the largest reductions for drug crime. We believe fewer potential offenders on the streets largely explain the declines in crime
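    As an illustration only (not the authors' code), the fixed-effects comparison described above can be sketched as a regression of half-hour crime counts on a game indicator with month by day-of-week by time-of-day fixed effects; the data file and column names are hypothetical.

```python
# Hedged sketch of the identification strategy described above: compare
# half-hour crime counts during games to the same month / day-of-week / time
# slots without a game, via fixed effects. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("chicago_half_hour_crime.csv")  # hypothetical half-hour panel

# One observation per half-hour window: crime count, game indicator, and the
# calendar cells used as fixed effects.
fit = smf.ols(
    "crime_count ~ game_on + C(month):C(day_of_week):C(half_hour)",
    data=panel,
).fit(cov_type="HC1")

baseline = panel.loc[panel["game_on"] == 0, "crime_count"].mean()
effect = fit.params["game_on"]
print(f"Change of {effect:.1f} crimes per half hour "
      f"({100 * effect / baseline:.0f}% relative to non-game periods)")
```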

    Machines Finding Injustice

    Get PDF
    With rising caseloads, review systems are increasingly taxed, stymieing traditional methods of case screening. We propose an automated solution: predictive models of legal decisions can be used to identify and focus review resources on outlier decisions—those decisions that are most likely the product of biases, ideological extremism, unusual moods, and carelessness, and thus most at odds with a court’s considered, collective judgment. By using algorithms to find and focus human attention on likely injustices, adjudication systems can largely sidestep the most serious objections to the use of algorithms in the law: that algorithms can embed racial biases, deprive parties of due process, impair transparency, and lead to “technological–legal lock-in.”
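    A minimal sketch of the screening idea, under assumed data and column names (the paper does not publish this code): model the court's typical decision from case characteristics, then queue for human review the decisions that deviate most from that collective pattern.

```python
# Hedged sketch (hypothetical data and column names) of outlier-based screening:
# flag decisions whose actual outcome is most surprising given similar cases.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

cases = pd.read_csv("decided_cases.csv")            # hypothetical
features = ["claim_type", "circuit", "counseled"]   # hypothetical
X = pd.get_dummies(cases[features])
y = cases["granted"]

# Out-of-sample predicted probability that relief is granted in each case.
p_grant = cross_val_predict(
    RandomForestClassifier(n_estimators=300, random_state=0),
    X, y, cv=5, method="predict_proba",
)[:, 1]

# Outlier score: how far the actual decision departs from the model's
# prediction, i.e., how atypical the outcome is for a case like this one.
cases["surprise"] = y * (1 - p_grant) + (1 - y) * p_grant
review_queue = cases.sort_values("surprise", ascending=False).head(100)
print(review_queue[["case_id", "surprise"]].head(10))
```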