
    Predicting Precedent: A Psycholinguistic Artificial Intelligence in the Supreme Court

    Since the proliferation of analytic methodologies and ‘big data’ in the 1980s, there have been multiple studies claiming to offer consistent predictions of Supreme Court behavior. Political scientists focus on analyzing the ideology of judges, with prediction accuracy as high as 70%. Institutionalists, such as Kaufmann (2019), seek to predict verdicts through a thorough, qualitative analysis of rules and structures, with predictive accuracy as high as 75%. We argue that a psycholinguistic model utilizing machine learning (SCOTUS_AI) can best predict Court outcomes. Extracting sentiment features from parsed briefs through the Linguistic Inquiry and Word Count (LIWC) tool, our results indicate that SCOTUS_AI (AUC = .8087; Top-K = .9144) outperforms traditional analysis in both class-controlled accuracy and the range of possible, specific outcomes. Moreover, unlike traditional models, SCOTUS_AI can also predict the procedural outcome of a case, one-hot encoded by remand (AUC = .76). Our findings support a psycholinguistic paradigm of case analysis, suggesting that the framing of arguments is a relatively strong predictor of case results. Finally, we cast predictions for the Supreme Court docket, demonstrating that SCOTUS_AI can be practically deployed in the field for individual cases.
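    The paper's exact pipeline is not public, and LIWC itself is proprietary, but the approach it describes, turning brief text into psycholinguistic category proportions and feeding them to a classifier, can be sketched. In the minimal sketch below, a toy lexicon stands in for LIWC's categories, and the corpus, labels, and logistic-regression classifier are all illustrative assumptions.

```python
# Minimal sketch of a psycholinguistic feature pipeline in the spirit of
# SCOTUS_AI. LIWC is proprietary, so a toy lexicon stands in for its category
# counts; lexicon, corpus, and classifier are illustrative assumptions.
from sklearn.linear_model import LogisticRegression

CATEGORIES = {
    "certainty": {"clearly", "must", "undoubtedly", "always"},
    "tentative": {"perhaps", "may", "appears", "arguably"},
    "negemo":    {"harm", "violation", "unfair", "wrong"},
}

def liwc_style_features(text):
    """Share of tokens falling in each psycholinguistic category."""
    tokens = text.lower().split()
    n = max(len(tokens), 1)
    return [sum(t in lex for t in tokens) / n for lex in CATEGORIES.values()]

# Toy briefs labelled 1 if the petitioner prevailed, 0 otherwise (invented).
briefs = [
    "the precedent clearly must control and the violation was plainly wrong",
    "perhaps the statute may apply although its reach appears arguably narrow",
    "respondent always acted lawfully and petitioner suffered no harm",
    "it may be that the record arguably supports an unfair reading",
]
outcomes = [1, 0, 1, 0]

X = [liwc_style_features(b) for b in briefs]
clf = LogisticRegression().fit(X, outcomes)

# Predicted probability that the petitioner prevails in an unseen brief.
new_brief = "the lower court clearly erred and the harm is undoubtedly real"
print(clf.predict_proba([liwc_style_features(new_brief)])[0, 1])
```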

    Using attention methods to predict judicial outcomes

    Legal Judgment Prediction is one of the most acclaimed fields in the combined area of NLP, AI, and Law. By legal prediction we mean an intelligent system capable of predicting specific judicial characteristics, such as the judicial outcome or the judicial class of a specific case. In this research, we used AI classifiers to predict judicial outcomes in the Brazilian legal system. For this purpose, we developed a text crawler to extract data from the official Brazilian electronic legal systems. These texts formed a dataset of second-degree murder and active corruption cases. We applied different classifiers, such as Support Vector Machines and Neural Networks, to predict judicial outcomes by analyzing textual features from the dataset. Our research showed that Regression Trees, Gated Recurrent Units, and Hierarchical Attention Networks presented higher metrics for different subsets. As a final goal, we explored the weights of one of the algorithms, the Hierarchical Attention Network, to find a sample of the most important words used to absolve or convict defendants.
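    The interpretability step the authors describe, reading attention weights off a trained Hierarchical Attention Network to find the words that drive absolution or conviction, can be sketched with a single word-level attention layer. The PyTorch sketch below uses random vectors in place of GRU hidden states; the dimensions, vocabulary, and weights are illustrative assumptions, not the paper's trained model.

```python
# Minimal sketch of word-level attention, the mechanism inspected in a
# Hierarchical Attention Network. Random embeddings stand in for GRU
# outputs; a real HAN stacks this layer over both words and sentences.
import torch
import torch.nn as nn

class WordAttention(nn.Module):
    def __init__(self, hidden_dim=64):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)         # u = tanh(W h + b)
        self.context = nn.Parameter(torch.randn(hidden_dim))  # word context vector

    def forward(self, hidden_states):            # (seq_len, hidden_dim)
        u = torch.tanh(self.proj(hidden_states))
        scores = u @ self.context                 # similarity to context vector
        alpha = torch.softmax(scores, dim=0)      # attention weight per word
        summary = (alpha.unsqueeze(-1) * hidden_states).sum(dim=0)
        return summary, alpha

words = ["defendant", "confessed", "under", "duress", "yesterday"]
hidden = torch.randn(len(words), 64)              # stand-in for GRU hidden states
_, alpha = WordAttention()(hidden)

# Rank words by attention weight, as done when auditing the trained model.
for w, a in sorted(zip(words, alpha.tolist()), key=lambda p: -p[1]):
    print(f"{w:10s} {a:.3f}")
```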

    Erasing the Bias Against Using Artificial Intelligence to Predict Future Criminality: Algorithms are Color Blind and Never Tire

    Many problems in the criminal justice system would be solved if we could accurately determine which offenders would commit offenses in the future. The likelihood that a person will commit a crime in the future is the single most important consideration that influences sentencing outcomes. It is relevant to the objectives of community protection, specific deterrence, and rehabilitation. The risk of future offending is also a cardinal consideration in bail and probation decisions. Empirical evidence establishes that judges are poor predictors of future offending—their decisions are barely more accurate than the toss of a coin. This undermines the efficacy and integrity of the criminal justice system. Modern artificial intelligence systems are much more accurate in determining whether a defendant will commit future crimes. Yet the move towards using artificial intelligence in the criminal justice system is slowing because of increasing concerns regarding the lack of transparency of algorithms and claims that the algorithms are embedded with biased and racist sentiments. Criticisms have also been leveled at the reliability of algorithmic determinations. In this Article, we examine the desirability of using algorithms to predict future offending and, in the process, analyze the innate resistance that humans have towards deferring decisions of this nature to computers. It emerges that most people have an irrational distrust of computer decision-making. This phenomenon is termed “algorithmic aversion.” We provide a number of recommendations regarding the steps necessary to surmount algorithmic aversion and lay the groundwork for the development of fairer and more efficient sentencing, bail, and probation systems.

    A deep learning framework for contingent liabilities risk management : predicting Brazilian labor court decisions

    Estimating the likely outcome of a litigation process is crucial for many organizations. A specific application is “contingent liabilities,” which refers to liabilities that may or may not occur depending on the result of a pending litigation process (lawsuit). The traditional methodology for estimating this likelihood relies on a lawyer’s opinion, a qualitative assessment grounded in professional experience. This dissertation presents a mathematical modeling framework based on a Deep Learning architecture that estimates the probability of the outcome of a litigation process (accepted or not accepted), with a particular application to contingent liabilities. Unlike the traditional method, the framework offers a degree of confidence, describing how likely an event is in terms of probability, and delivers results in seconds. Besides the primary outcome, it returns a sample of the cases most similar to the estimated lawsuit, which can support litigation strategies. We tested our framework on two litigation databases: (1) the European Court of Human Rights (ECHR) and (2) the Brazilian 4th Regional Labor Court (4TRT). It achieved, to our knowledge, the best published performance (precision = 0.906) on the ECHR database, a widely used collection of litigation processes, and it is the first work to apply this methodology to a Brazilian labor court. Results show that the framework is a suitable alternative to the traditional method of estimating the outcome of a pending litigation performed by lawyers. Finally, we validated our results with experts, who confirmed the framework’s promising possibilities. We therefore encourage academics to continue developing research on mathematical modeling in the legal area, an emerging topic with a promising future, and practitioners to adopt tools such as the one developed in our work, as they provide substantial advantages in accuracy and speed over conventional methods.
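    The similar-case retrieval the framework offers alongside its prediction can be illustrated with a nearest-neighbor search over vectorized case texts. The sketch below uses TF-IDF and cosine similarity as stand-ins; the dissertation's framework learns its representations with a deep network, and the toy cases are invented for illustration.

```python
# Minimal sketch of the "most similar cases" component: retrieve past
# lawsuits closest to a new one in a vector space. TF-IDF and cosine
# similarity are illustrative stand-ins for the learned representations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

past_cases = [
    "unpaid overtime claim dismissed for lack of evidence",
    "overtime and rest-break claim accepted, employer fined",
    "wrongful dismissal claim accepted with reinstatement",
]

vec = TfidfVectorizer()
X = vec.fit_transform(past_cases)
index = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)

new_case = "employee seeks unpaid overtime compensation"
dist, idx = index.kneighbors(vec.transform([new_case]))
for d, i in zip(dist[0], idx[0]):
    print(f"similarity={1 - d:.2f}  {past_cases[i]}")
```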

    Split Decisions: Practical Machine Learning for Empirical Legal Scholarship

    Multivariable regression may be the most prevalent and useful task in social science. Empirical legal studies rely heavily on the ordinary least squares method. Conventional regression methods have attained credibility in court, but by no means do they dictate legal outcomes. Using the iconic Boston housing study as a source of price data, this Article introduces machine-learning regression methods. Although decision trees and forest ensembles lack the overt interpretability of linear regression, these methods reduce the opacity of black-box techniques by scoring the relative importance of dataset features. This Article also addresses the theoretical tradeoff between bias and variance, as well as the importance of training, cross-validation, and reserving a holdout dataset for testing.
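    The workflow the Article walks through, reserving a holdout set, cross-validating a forest ensemble, and scoring feature importance, maps directly onto a few lines of scikit-learn. Since the Boston housing study has been removed from recent scikit-learn releases, the sketch below substitutes the California housing data as an assumption; everything else follows the described steps.

```python
# Minimal sketch of the Article's workflow: hold out a test set, cross-
# validate a forest ensemble on housing prices, and score feature
# importance. California housing stands in for the withdrawn Boston data.
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, cross_val_score

data = fetch_california_housing()
X_train, X_hold, y_train, y_hold = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)  # reserve a holdout

model = RandomForestRegressor(n_estimators=100, random_state=0)
print("Cross-validated R^2:", cross_val_score(model, X_train, y_train, cv=5).mean())

model.fit(X_train, y_train)
print("Holdout R^2:", model.score(X_hold, y_hold))

# Relative importance of each dataset feature: the forest's partial answer
# to the overt interpretability of linear-regression coefficients.
for name, imp in sorted(zip(data.feature_names, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:10s} {imp:.3f}")
```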

    Fair Use and Machine Learning

    There would be a beaten path to the maker of software that could reliably state whether a use of a copyrighted work was protected as fair use. But applying machine learning to fair use faces considerable hurdles. Fair use has generated hundreds of reported cases, but machine learning works best with examples in far greater numbers. More examples may become available: from mining the decision-making of websites, from having humans label fair use examples just as they label images to teach self-driving cars, and from using machine learning itself to generate examples. Beyond the number of examples, the form of the data is more abstract than the concrete examples on which machine learning has succeeded, such as computer vision and viewing recommendations, and even machine translation, where the operative unit was the sentence, not a concept distributed across a document. But techniques presently in use do find patterns in data to build more abstract features, and then apply the same process to those features to build still more abstract ones. It may be that such automated processes can provide the necessary conceptual blocks. In addition, tools drawn from knowledge engineering (ironically, the branch of artificial intelligence that of late has been eclipsed by machine learning) may extract concepts from such data as judicial opinions. Such tools would include new methods of knowledge representation and automated tagging. If the data questions are overcome, machine learning provides intriguing possibilities, but it also faces challenges from the nature of fair use law. Artificial neural networks have shown formidable performance in classification. Classifying fair use examples raises a number of questions. Fair use law is often considered contradictory, vague, and unpredictable; in computer science terminology, the data is “noisy.” That inconsistency could flummox artificial neural networks, or the networks could disclose consistencies that have eluded commentators. Other algorithms such as nearest neighbor and support vectors could likewise both use and test legal reasoning by analogy. Another approach, decision trees, may be simpler in some respects, but could work on smaller data sets (addressing one of the data issues above) and provide something that machine learning often lacks: transparency. Decision trees disclose their decision-making process, whereas neural networks, especially deep learning, are opaque black boxes. Finally, unsupervised machine learning could be used to explore fair use case law for patterns, whether consistent structures in its jurisprudence or biases that have played an undisclosed role. Any patterns found, however, should be treated as possibilities, pending testing by other means.
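    The transparency contrast the abstract draws can be made concrete: a decision tree trained on the four statutory fair-use factors will print its entire decision path, where a neural network would not. In the sketch below, the factor encoding, toy cases, and labels are invented for illustration and are not drawn from any real fair-use dataset.

```python
# Minimal sketch of the transparency argument: a decision tree over the four
# statutory fair-use factors discloses its full decision path via
# export_text. Factor encoding and labels are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features per case: [transformative (0/1), amount_used (fraction of work),
#                     market_harm (0/1), commercial (0/1)]
cases = [
    [1, 0.1, 0, 0],  # parody quoting briefly        -> fair
    [1, 0.3, 0, 1],  # commercial but transformative -> fair
    [0, 0.9, 1, 1],  # wholesale commercial copying  -> unfair
    [0, 0.5, 1, 0],  # substitutes for the original  -> unfair
]
labels = [1, 1, 0, 0]  # 1 = fair use

tree = DecisionTreeClassifier(max_depth=2).fit(cases, labels)
print(export_text(tree, feature_names=[
    "transformative", "amount_used", "market_harm", "commercial"]))
```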

    On the path to AI

    This open access book explores machine learning and its impact on how we make sense of the world. It does so by bringing together two ‘revolutions’ in a surprising analogy: the revolution of machine learning, which has placed computing on the path to artificial intelligence, and the revolution in thinking about the law that was spurred by Oliver Wendell Holmes Jr in the last two decades of the 19th century. Holmes reconceived law as prophecy based on experience, prefiguring the buzzwords of the machine learning age—prediction based on datasets. On the path to AI introduces readers to the key concepts of machine learning, discusses the potential applications and limitations of predictions generated by machines using data, and informs current debates amongst scholars, lawyers, and policy makers on how it should be used and regulated wisely. Technologists will also find useful lessons learned from the last 120 years of legal grappling with accountability, explainability, and biased data.