Learning to Predict Charges for Criminal Cases with Legal Basis
The charge prediction task is to determine appropriate charges for a given
case, which is helpful for legal assistant systems where the user input is fact
description. We argue that relevant law articles play an important role in this
task, and therefore propose an attention-based neural network method to jointly
model the charge prediction task and the relevant article extraction task in a
unified framework. The experimental results show that, besides providing legal
basis, the relevant articles can also clearly improve the charge prediction
results, and our full model can effectively predict appropriate charges for
cases with different expression styles.
Comment: 10 pages, accepted by EMNLP 201
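The joint-modeling idea in the abstract above can be illustrated with a toy sketch. Everything here is a hypothetical stand-in, not the paper's actual architecture: a shared fact encoding attends over candidate law-article encodings, and two softmax heads share that representation, one extracting relevant articles and one predicting the charge (in the real model, both heads are trained jointly).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical dimensions: 16-dim fact encoding, 5 candidate law
# articles, 4 charge classes. Random vectors stand in for parameters
# and encodings a trained model would produce.
d, n_articles, n_charges = 16, 5, 4
fact_vec = rng.normal(size=d)                    # encoded fact description
article_vecs = rng.normal(size=(n_articles, d))  # encoded law articles

# Attention: score each article against the facts, then build a
# fact-aware article summary (the "legal basis" signal).
att = softmax(article_vecs @ fact_vec)
article_summary = att @ article_vecs

# Two heads share the representation: one extracts relevant articles,
# the other predicts the charge from facts plus the article summary.
W_article = rng.normal(size=(d, n_articles))
W_charge = rng.normal(size=(2 * d, n_charges))
article_probs = softmax(fact_vec @ W_article)
charge_probs = softmax(np.concatenate([fact_vec, article_summary]) @ W_charge)

print(article_probs.round(3), charge_probs.round(3))
```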
The limits of human predictions of recidivism.
Dressel and Farid recently found that laypeople were as accurate as statistical algorithms in predicting whether a defendant would reoffend, casting doubt on the value of risk assessment tools in the criminal justice system. We report the results of a replication and extension of Dressel and Farid's experiment. Under conditions similar to the original study, we found nearly identical results, with humans and algorithms performing comparably. However, algorithms beat humans in the three other datasets we examined. The performance gap between humans and algorithms was particularly pronounced when, in a departure from the original study, participants were not provided with immediate feedback on the accuracy of their responses. Algorithms also outperformed humans when the information provided for predictions included an enriched (versus restricted) set of risk factors. These results suggest that algorithms can outperform human predictions of recidivism in ecologically valid settings.
ALJP: An Arabic Legal Judgment Prediction in Personal Status Cases Using Machine Learning Models
Legal Judgment Prediction (LJP) aims to predict judgment outcomes from a
case description. Several researchers have developed techniques to assist
potential clients by predicting the likely outcome in the legal profession.
However, none of the proposed techniques has been implemented for Arabic, and
only a few attempts exist for English, Chinese, and Hindi. In this paper, we
develop a system that utilizes deep learning (DL) and natural language
processing (NLP) techniques to predict the judgment outcome from Arabic case
scripts, especially in cases of custody and annulment of marriage. This system
will assist judges and attorneys in improving their work and time efficiency
while reducing sentencing disparity. In addition, it will help litigants,
lawyers, and law students analyze the probable outcomes of any given case
before trial. We apply different machine learning and deep learning models,
namely Support Vector Machine (SVM), Logistic Regression (LR), Long Short-Term
Memory (LSTM), and Bidirectional Long Short-Term Memory (BiLSTM), with
representation techniques such as TF-IDF and word2vec, on the developed
dataset. Experimental results demonstrate that, compared with the five
baseline methods, the SVM model with word2vec and the LR model with TF-IDF
achieve the highest accuracies of 88% and 78% in predicting the judgment on
custody and annulment-of-marriage cases, respectively. Furthermore, the LR and
SVM models with word2vec and the BiLSTM model with TF-IDF achieve the highest
accuracies of 88% and 69% in predicting the probability of outcomes on custody
and annulment-of-marriage cases, respectively.
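The TF-IDF plus linear-classifier pipeline described above can be sketched with scikit-learn. The toy "case scripts" and labels below are invented English placeholders, not the paper's Arabic dataset, and the pipeline is a generic illustration rather than the authors' exact configuration.

```python
# Minimal TF-IDF + Logistic Regression sketch of the ALJP-style pipeline.
# The documents and labels are toy placeholders, not real case data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

cases = [
    "mother requests custody of the child",
    "father requests custody after divorce",
    "wife requests annulment of the marriage contract",
    "petition to annul the marriage for cause",
]
labels = ["custody", "custody", "annulment", "annulment"]

# Vectorize the case text with TF-IDF, then fit a linear classifier;
# swapping in LinearSVC would give the SVM variant from the abstract.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(cases, labels)

pred = model.predict(["grandmother requests custody of the minor"])[0]
print(pred)
```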
Exploiting Contrastive Learning and Numerical Evidence for Confusing Legal Judgment Prediction
Given the fact description text of a legal case, legal judgment prediction
(LJP) aims to predict the case's charge, law article and penalty term. A core
problem of LJP is how to distinguish confusing legal cases, where only subtle
text differences exist. Previous studies fail to distinguish different
classification errors with a standard cross-entropy classification loss, and
ignore the numbers in the fact description for predicting the term of penalty.
To tackle these issues, in this work, we first propose a MoCo-based supervised
contrastive learning method to learn distinguishable representations, and explore the
best strategy to construct positive example pairs to benefit all three subtasks
of LJP simultaneously. Second, in order to exploit the numbers in legal cases
for predicting the penalty terms of certain cases, we further enhance the
representation of the fact description with extracted crime amounts which are
encoded by a pre-trained numeracy model. Extensive experiments on public
benchmarks show that the proposed method achieves new state-of-the-art results,
especially on confusing legal cases. Ablation studies also demonstrate the
effectiveness of each component.
Comment: Accepted to Findings of EMNLP 202
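Supervised contrastive learning, the core ingredient of the abstract above, can be sketched with a generic loss of the Khosla et al. form: same-label embeddings are pulled together and different-label embeddings pushed apart. This is a plain in-batch sketch, not the paper's exact MoCo-based variant (no momentum encoder or queue), and all numbers are toy inputs.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Generic supervised contrastive loss over a batch of embeddings z:
    for each anchor i, average -log softmax similarity over its same-label
    positives, with all other batch items in the denominator."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize
    sim = z @ z.T / tau                               # scaled cosine sims
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        mask = np.arange(n) != i
        denom = np.exp(sim[i][mask]).sum()
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:          # anchors without positives are skipped
            continue
        loss -= np.mean([np.log(np.exp(sim[i][p]) / denom) for p in positives])
        count += 1
    return loss / count

rng = np.random.default_rng(0)
z = rng.normal(size=(6, 8))        # 6 toy embeddings, 3 classes
labels = [0, 0, 1, 1, 2, 2]
print(supcon_loss(z, labels))
```

Lower loss means same-label pairs dominate the similarity structure, which is what makes confusing classes with subtle text differences separable.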
An Enquiry Meet for the Case: Decision Theory, Presumptions, and Evidentiary Burdens in Formulating Antitrust Legal Standards
Presumptions have an important role in antitrust jurisprudence. This article suggests that a careful formulation of the relevant presumptions and associated evidentiary rebuttal burdens can provide the “enquiry meet for the case” across a large array of narrow categories of conduct confronted in antitrust to create a type of “meta” rule of reason. The article begins this project by using decision theory to analyze the types and properties of antitrust presumptions and evidentiary rebuttal burdens and the relationship between them. Depending on the category of conduct and market structure conditions, antitrust presumptions lie along a continuum from conclusive (irrebuttable) anticompetitive, to rebuttable anticompetitive, to competitively neutral, and on to rebuttable procompetitive and conclusive (irrebuttable) procompetitive presumptions. A key source of these presumptions is the likely competitive effects inferred from market conditions. Other sources are policy-based -- deterrence policy concerns and overarching policies involving the goals and premises of antitrust jurisprudence. Rebuttal evidence can either undermine the facts on which the presumptions are based or can provide other evidence to offset the competitive effects likely implied by the presumption. The evidentiary burden to rebut a presumption depends on the strength of the presumption and the availability and reliability of further case-specific evidence. These twin determinants can be combined and understood through the lens of Bayesian decision theory to explain how “the quality of proof required should vary with the circumstances.” The stronger the presumption and less reliable the case-specific evidence in signaling whether the conduct is anticompetitive versus procompetitive, the more difficult it will be for the disfavored party to satisfy the evidentiary burden to rebut the presumption. 
The evidentiary rebuttal burden generally is a burden of production, but it can also involve the burden of persuasion, as with the original Philadelphia National Bank structural presumption or typical procompetitive presumptions. If a presumption is rebutted with sufficient offsetting evidence to avoid an initial judgment, the presumption generally continues to carry some weakened weight in the post-rebuttal phase of the decision process. That is, a thumb remains on the scale. However, if the presumption is undermined, it is discredited and carries no weight in the post-rebuttal decision process. The article uses this methodology to analyze various antitrust presumptions. It also analyzes the burden-shifting rule of reason and suggests that its elements should not be rigidly sequenced in the decision process. The article also begins the project of reviewing, revising, and refining existing antitrust presumptions, with proposed revisions and refinements in a number of areas. The article invites other commentators to join the project by criticizing these proposals and suggesting others. These presumptions could then be applied by appellate courts and relied upon by lower courts, litigants, and business planners.
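The abstract's Bayesian point, that a stronger presumption demands a heavier rebuttal burden, can be made concrete with Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio. The function names and numbers below are illustrative assumptions, not drawn from the article.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes in odds form: posterior odds that conduct is anticompetitive
    = prior odds (the presumption's strength) x likelihood ratio supplied
    by the case-specific rebuttal evidence."""
    return prior_odds * likelihood_ratio

def lr_needed_to_rebut(prior_odds, threshold_odds=1.0):
    """Likelihood ratio the disfavored party's evidence must achieve to
    push the posterior down to a decision threshold (default: even odds).
    Hypothetical helper for illustration only."""
    return threshold_odds / prior_odds

# A weak anticompetitive presumption (2:1) vs a strong one (9:1):
weak, strong = 2.0, 9.0
print(lr_needed_to_rebut(weak))    # evidence must favor procompetitive 2:1
print(lr_needed_to_rebut(strong))  # evidence must favor procompetitive 9:1
```

The stronger presumption requires a far smaller likelihood ratio (evidence much more strongly favoring the procompetitive explanation) before the posterior reaches even odds, which is the sense in which "the quality of proof required should vary with the circumstances."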