
    Criminal History Enhancements Sourcebook

    Criminal history scores make up one of the two most significant determinants of the punishment an offender receives in a sentencing guidelines jurisdiction. While prior convictions are taken into account by all U.S. sentencing systems, sentencing guidelines make the role of prior crimes more explicit by specifying the counting rules and by indicating the effect of prior convictions on sentence severity. Yet, once established, criminal history scoring formulas go largely unexamined. Moreover, there is great diversity across state and federal jurisdictions in the ways that an offender's criminal record is considered by courts at sentencing. This Sourcebook brings together for the first time information on criminal history enhancements in all existing U.S. sentencing guidelines systems. Building on this base, the Sourcebook examines major variations in the approaches taken by these systems, and identifies the underlying sentencing policy issues raised by such enhancements. The Sourcebook contains the following elements: a summary of criminal history enhancements in all guidelines jurisdictions; an analysis of the critical dimensions of an offender's previous convictions; a discussion of the policy options available to commissions considering amendments to their criminal history enhancements; and a bibliography of key readings on the role of prior convictions at sentencing.

    Erasing the Bias Against Using Artificial Intelligence to Predict Future Criminality: Algorithms are Color Blind and Never Tire

    Many problems in the criminal justice system would be solved if we could accurately determine which offenders would commit offenses in the future. The likelihood that a person will commit a crime in the future is the single most important consideration that influences sentencing outcomes. It is relevant to the objectives of community protection, specific deterrence, and rehabilitation. The risk of future offending is also a cardinal consideration in bail and probation decisions. Empirical evidence establishes that judges are poor predictors of future offending—their decisions are barely more accurate than the toss of a coin. This undermines the efficacy and integrity of the criminal justice system. Modern artificial intelligence systems are much more accurate in determining if a defendant will commit future crimes. Yet, the move towards using artificial intelligence in the criminal justice system is slowing because of increasing concerns regarding the lack of transparency of algorithms and claims that the algorithms are embedded with biased and racist sentiments. Criticisms have also been leveled at the reliability of algorithmic determinations. In this Article, we undertake an examination of the desirability of using algorithms to predict future offending and in the process analyze the innate resistance that humans have towards deferring decisions of this nature to computers. It emerges that most people have an irrational distrust of computer decision-making. This phenomenon is termed "algorithmic aversion." We provide a number of recommendations regarding the steps that are necessary to surmount algorithmic aversion and lay the groundwork for the development of fairer and more efficient sentencing, bail, and probation systems.

    Punishing Risk

    Actuarial recidivism risk assessments (statistical predictions of the likelihood of future criminal behavior) drive a number of core criminal justice decisions, including where to police, whom to release on bail, and how to manage correctional institutions. Recently, this predictive approach to criminal justice entered a new arena: sentencing. Actuarial sentencing has quickly gained a number of prominent supporters and is being implemented across the country. This enthusiasm is understandable. Its proponents promise that actuarial data will refine sentencing decisions, increase rehabilitation, and reduce reliance on incarceration. Yet, in the rush to embrace actuarial sentencing, scholars and policy makers have overlooked a crucial point: actuarial risk assessment tools are not intended for use at sentencing. In fact, their creators explicitly warn that these tools were not designed to aid decisions about the length of a sentence or whether to incarcerate someone. Nevertheless, that is precisely how those who endorse actuarial sentencing, including the American Law Institute in the recently revised Model Penal Code for Sentencing, suggest they should be used. Actuarial sentencing is, in short, an unintended, off-label application of actuarial risk information. This Article reexamines the promises of actuarial sentencing in light of this observation and argues that it may cause a number of equally unintended and detrimental consequences. Specifically, it contends that this practice distorts, rather than refines, sentencing decisions. Moreover, it may increase reliance on incarceration and it may do so for reasons that undermine the fairness and integrity of the criminal justice system.

    Artificial Intelligence in Criminal Justice Settings: Where should be the limits of Artificial Intelligence in legal decision-making? Should an AI device make a decision about human justice?

    The application of Artificial Intelligence (AI) systems to high-stakes decision making is currently under debate. In the Criminal Justice System, it can provide great benefits as well as aggravate systematic biases and introduce unprecedented ones. Hence, should artificial devices be involved in the decision-making process? And if the answer is affirmative, where should the limits of that involvement lie? To answer these questions, this dissertation examines two popular risk assessment tools currently in use in the United States, LS and COMPAS, to discuss the differences between a traditional instrument and an actuarial instrument that relies on computerized algorithms. The latter is then analyzed in relation to the Fairness, Accountability, Transparency, and Ethics (FATE) perspective, which should be implemented in any technology involving AI. Although the future of AI is uncertain, the many unknowns surrounding such innovative methods demand further research on how to make the best use of the several opportunities they bring.

    Equal Protection Under Algorithms: A New Statistical and Legal Framework

    In this Article, we provide a new statistical and legal framework to understand the legality and fairness of predictive algorithms under the Equal Protection Clause. We begin by reviewing the main legal concerns regarding the use of protected characteristics such as race and the correlates of protected characteristics such as criminal history. The use of race and nonrace correlates in predictive algorithms generates direct and proxy effects of race, respectively, that can lead to racial disparities that many view as unwarranted and discriminatory. These effects have led to the mainstream legal consensus that the use of race and nonrace correlates in predictive algorithms is both problematic and potentially unconstitutional under the Equal Protection Clause. This mainstream position is also reflected in practice, with all commonly used predictive algorithms excluding race and many excluding nonrace correlates such as employment and education. Next, we challenge the mainstream legal position that the use of a protected characteristic always violates the Equal Protection Clause. We develop a statistical framework that formalizes exactly how the direct and proxy effects of race can lead to algorithmic predictions that disadvantage minorities relative to nonminorities. While an overly formalistic solution requires exclusion of race and all potential nonrace correlates, we show that this type of algorithm is unlikely to work in practice because nearly all algorithmic inputs are correlated with race. We then show that there are two simple statistical solutions that can eliminate the direct and proxy effects of race, and which are implementable even when all inputs are correlated with race. 
We argue that our proposed algorithms uphold the principles of the equal protection doctrine because they ensure that individuals are not treated differently on the basis of membership in a protected class, in stark contrast to commonly used algorithms that unfairly disadvantage minorities despite the exclusion of race. We conclude by empirically testing our proposed algorithms in the context of the New York City pretrial system. We show that nearly all commonly used algorithms violate certain principles underlying the Equal Protection Clause by including variables that are correlated with race, generating substantial proxy effects that unfairly disadvantage Black individuals relative to white individuals. Both of our proposed algorithms substantially reduce the number of Black defendants detained compared to commonly used algorithms by eliminating these proxy effects. These findings suggest a fundamental rethinking of the equal protection doctrine as it applies to predictive algorithms and the folly of relying on commonly used algorithms.
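One family of statistical fixes for proxy effects can be sketched as follows. This is a minimal illustration on synthetic data, not the Article's actual method or dataset: the variable names, coefficients, and the binary race indicator are all hypothetical. The idea is to estimate the model with race included, so that coefficients on nonrace inputs are not contaminated by omitted-variable (proxy) bias, and then score every individual at the same race value, neutralizing the direct effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
race = rng.integers(0, 2, n)                 # hypothetical binary indicator
x = 0.8 * race + rng.normal(size=n)          # nonrace input correlated with race
y = (0.5 * x + 0.3 * race + rng.normal(size=n) > 0).astype(float)

# Step 1: estimate a linear probability model WITH race included, so the
# coefficient on x does not absorb race's effect (no proxy contamination).
X = np.column_stack([np.ones(n), x, race])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 2: score everyone at the same race value (here, the sample mean),
# so two individuals with identical x always receive identical scores.
X_neutral = np.column_stack([np.ones(n), x, np.full(n, race.mean())])
scores = X_neutral @ beta
```

Under this construction the score varies only with the nonrace input, while that input's coefficient was purged of proxy bias during estimation.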

    Effective and Ethical Measures of Predicting Criminal Offenders' Risk of Recidivism and Treatment Needs on Risk-Need Assessments

    Research has shown that the prison population and recidivism rate of criminal offenders have continued to rise over the last thirty years (Coll, Stewart, Juhnke, Thobro, & Haas, 2009). In response, professionals are implementing techniques, such as risk-need assessments, to assist in lowering recidivism. These assessments are empirical tools that professionals use when interviewing offenders to identify their risk of recidivism (Barber-Roja & Rotter, 2015). Previous research has been focused on assessments' predictive accuracy, but there is less data on professionals' perceptions regarding which measures are most effective (Labrecque, Smith, Lovins, & Latessa, 2014). Studies have shown that corrections professionals and treatment providers have interpreted assessment results differently (Marlowe, 2012). In the current study, a quantitative survey with some qualitative elements was used to examine the following questions: 1) What aspects of risk-need assessments do different criminal justice professionals find important to effectively examine offenders' risk of recidivism and treatment needs, and 2) How do professional values relate to offenders' assessment results? Findings have shown that among the 51 respondents, a majority of the sample found risk-need assessments to be effective, as well as useful for treatment purposes. However, significant differences emerged between the occupational groups in the areas of ethical domains and strengths. Results indicate the need for policies to be created to ensure that professionals performing assessments possess qualifying criteria. Implications for social work practice are explored in the context of this paper.

    Risk, Need, and Racial Inequality: A Machine Learning Analysis of Rearrest in Juvenile Drug Treatment Courts and Traditional Juvenile Courts

    Juvenile justice system involvement has many impacts on the lives of youth. This often includes negative outcomes for youth who receive highly punitive treatment rather than more rehabilitative approaches. One approach to reforming the juvenile justice system to be rehabilitative is the use of diversion options, such as Juvenile Drug Treatment Courts (JDTCs). JDTCs are intended to offer more personalized interventions for youth based on their risk and need factors as compared to Traditional Juvenile Court (TJC) settings. To better understand the complex interactions of tailored programming and individual factors for justice-involved youth, an integrated theoretical approach, including the Risk-Need-Responsivity framework and Disproportionate Minority Contact, was used to frame the current study. This study applied machine learning analysis techniques (random forests and logistic regression models) to a rigorous, longitudinal secondary dataset of youth in JDTCs and TJCs to determine which risk and protective factors were most important in predicting rearrest up to 1 year following court intake. The sample included 415 youth from JDTCs and TJCs in 10 jurisdictions across the US. Results revealed that both random forest and logistic regression models performed similarly for each court type as well as the combined sample, and that models were most accurate for the JDTC sample and least accurate for the TJC sample. Highly influential risk factors associated with higher likelihood of having at least one rearrest during the study period included higher scores on the family ineffectiveness scale, social risk scale, and crime and violence screener. Alternatively, highly influential protective factors associated with higher likelihood of not having any rearrests during the study period included not having an assessed risk level assigned to youth and being of Hispanic ethnicity.
Race and previous juvenile justice system involvement were not important features in preliminary models and therefore were excluded from final models. Implications for future research, data-driven decision-making practices, and the ethics surrounding the use of machine learning models for juvenile justice-involved youth are discussed.
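The modeling comparison described above can be sketched as follows. This is a hedged illustration on synthetic data: the study's actual scales, dataset, and court-type splits are not reproduced, and the variable names and coefficients are stand-ins. It shows the basic workflow of comparing a random forest against a logistic regression on the same features and inspecting which risk scales the forest finds most influential.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 415  # sample size reported in the abstract
# Synthetic stand-ins for three of the study's risk scales
family_ineffectiveness = rng.normal(size=n)
social_risk = rng.normal(size=n)
crime_violence_screen = rng.normal(size=n)
X = np.column_stack([family_ineffectiveness, social_risk, crime_violence_screen])
# Illustrative outcome: rearrest more likely at higher scale scores
p = 1 / (1 + np.exp(-(0.8 * family_ineffectiveness + 0.5 * social_risk)))
y = rng.binomial(1, p)

# Compare the two model families via cross-validated AUC
rf = RandomForestClassifier(n_estimators=200, random_state=0)
lr = LogisticRegression(max_iter=1000)
rf_auc = cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean()
lr_auc = cross_val_score(lr, X, y, cv=5, scoring="roc_auc").mean()

# Feature importances from the fitted forest (larger = more influential)
rf.fit(X, y)
importances = rf.feature_importances_
```

On real data, the study's finding that both model families perform similarly would show up here as comparable `rf_auc` and `lr_auc` values, with the importance vector identifying which scales drive prediction.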