    Teaching Law and Digital Age Legal Practice with an AI and Law Seminar

    This article provides a guide and examples for using a seminar on Artificial Intelligence (AI) and Law to teach lessons about legal reasoning and about legal practice in the digital age. Artificial Intelligence and Law is a subfield of AI/computer science research that focuses on computationally modeling legal reasoning. In at least a few law schools, the AI and Law seminar has regularly taught students fundamental issues about law and legal reasoning by focusing them on the problems these issues pose for scientists attempting to computationally model legal reasoning. AI and Law researchers have designed programs to reason with legal rules, apply legal precedents, predict case outcomes, argue like a legal advocate, and visualize legal arguments. The article illustrates some of the pedagogically important lessons learned in the process. As the technology of legal practice catches up with the aspirations of AI and Law researchers, the AI and Law seminar can play a new role in legal education. With advances in such areas as e-discovery, legal information retrieval (IR), and semantic processing of web-based information for electronic contracting, the chances are increasing that, in their legal practices, law students will use, and even depend on, systems that employ AI techniques. As explained in the article, an AI and Law seminar invites students to think about processes of legal reasoning and legal practice and about how those processes employ information. It teaches how the new digital document technologies work, what they can and cannot do, how to measure performance, how to evaluate claims about the technologies, and how to be savvy consumers and users of the technologies.
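
    The seminar's lesson on measuring performance can be made concrete with standard retrieval metrics. The sketch below is illustrative only and is not drawn from the article: it computes precision, recall, and F1 for a hypothetical e-discovery or legal IR exercise, using made-up document IDs.

```python
# Illustrative sketch, not from the article: precision/recall/F1 for a
# hypothetical e-discovery or legal information retrieval exercise.

def retrieval_metrics(retrieved: set, relevant: set) -> dict:
    """Compare documents a system retrieved against a gold-standard relevant set."""
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical document IDs for illustration only.
print(retrieval_metrics({"doc1", "doc2", "doc3"}, {"doc2", "doc3", "doc4", "doc5"}))
```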

    Intention and Context Elicitation with Large Language Models in the Legal Aid Intake Process

    Large Language Models (LLMs) and chatbots show significant promise in streamlining the legal intake process. This advancement can greatly reduce the workload and costs for legal aid organizations, improving availability and making legal assistance accessible to a broader audience. However, a key challenge with current LLMs is their tendency to overconfidently deliver an immediate 'best guess' to a client's question based on the output distribution learned over the training data. This approach often overlooks the client's actual intentions or the specifics of their legal situation. As a result, clients may not realize the importance of providing essential additional context or expressing their underlying intentions, which are crucial for their legal cases. Traditionally, logic-based decision trees have been used to automate intake for specific access-to-justice issues, such as immigration and eviction, but those solutions lack scalability. We demonstrate a proof of concept using LLMs to elicit and infer clients' underlying intentions and specific legal circumstances through free-form, language-based interactions. We also propose future research directions that use supervised fine-tuning or offline reinforcement learning to automatically incorporate intention and context elicitation in chatbots without explicit prompting.
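
    One way such elicitation can be prompted is sketched below. This is an illustration, not the authors' system: the system prompt wording and the llm() placeholder are assumptions, with the placeholder standing in for any chat-completion client.

```python
# Minimal sketch (illustrative, not the authors' system) of prompting an LLM to
# elicit missing context and intentions before offering any guidance.

ELICITATION_PROMPT = (
    "You are a legal-aid intake assistant. Do not give an immediate answer. "
    "Ask one clarifying question at a time about the client's goals, "
    "jurisdiction, deadlines, and relevant facts. Only summarize possible next "
    "steps once you have enough context, and recommend consulting an attorney."
)

def llm(messages):
    """Placeholder for a chat-completion API call; wire up a real client here."""
    raise NotImplementedError

def intake_session():
    messages = [{"role": "system", "content": ELICITATION_PROMPT}]
    while True:
        client_turn = input("Client: ")
        if client_turn.lower() in {"quit", "exit"}:
            break
        messages.append({"role": "user", "content": client_turn})
        reply = llm(messages)  # model asks a clarifying question or summarizes
        messages.append({"role": "assistant", "content": reply})
        print("Assistant:", reply)
```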

    Using Large Language Models to Support Thematic Analysis in Empirical Legal Studies

    Thematic analysis and other variants of inductive coding are widely used qualitative analytic methods within empirical legal studies (ELS). We propose a novel framework facilitating effective collaboration of a legal expert with a large language model (LLM) for generating initial codes (phase 2 of thematic analysis), searching for themes (phase 3), and classifying the data in terms of the themes (to kick-start phase 4). We employed the framework for an analysis of a dataset (n=785) of facts descriptions from criminal court opinions regarding thefts. The goal of the analysis was to discover classes of typical thefts. Our results show that the LLM, namely OpenAI's GPT-4, generated reasonable initial codes and was capable of improving the quality of the codes based on expert feedback. They also suggest that the model performed well in zero-shot classification of facts descriptions in terms of the themes. Finally, the themes autonomously discovered by the LLM appear to map fairly well to the themes arrived at by legal experts. These findings can be leveraged by legal researchers to guide their decisions in integrating LLMs into their thematic analyses, as well as into other inductive coding projects.
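
    The zero-shot classification step can be illustrated with a short sketch. The theme labels and prompt wording below are hypothetical, and llm() stands in for a call to a model such as GPT-4; this is not the authors' actual prompt.

```python
# Illustrative sketch of zero-shot theme classification; the themes and prompt
# are hypothetical, and llm() is a placeholder for a chat-completion call.

THEMES = ["shoplifting", "pickpocketing", "burglary of a dwelling", "vehicle theft"]

def classify_facts(facts: str, llm) -> str:
    prompt = (
        "You are assisting a thematic analysis of theft cases.\n"
        f"Themes: {', '.join(THEMES)}\n"
        "Assign exactly one theme to the following facts description and reply "
        "with the theme name only.\n\n"
        f"Facts: {facts}"
    )
    label = llm(prompt).strip()
    return label if label in THEMES else "unclassified"  # guard against off-list answers
```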

    Hierarchical a Fortiori Reasoning with Dimensions

    In recent years, a model of a fortiori argumentation, developed to describe legal reasoning based on precedent, has been successfully applied in the field of artificial intelligence to improve the interpretability of data-driven decision systems. In order to make this model more broadly applicable for this purpose, work has been done to expand the knowledge representation on the basis of which it functions, as the original model accommodates only binary propositional information. In particular, two separate expansions of the original model emerged: one that accounts for non-binary input information, and a second that accommodates hierarchically structured reasoning. In the present work we unify these expansions into a single model, incorporating both dimensional and hierarchical information.
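
    The dimensional half of the model can be pictured with a simplified a fortiori check: a precedent decided for one side forces the same outcome in a focus case that is at least as favourable to that side on every dimension. The sketch below is a simplified reading for illustration, not the paper's hierarchical formalism, and the dimensions are invented.

```python
# Simplified sketch (not the paper's full model) of the a fortiori constraint
# with dimensions, using numeric dimension values for illustration.

# direction is +1 if higher values favour the side that won the precedent,
# -1 if lower values do; these dimensions are hypothetical.
DIRECTIONS = {"value_of_goods": +1, "security_measures": -1}

def at_least_as_strong(focus, precedent, directions):
    """True if the focus case is at least as favourable to the precedent's winner
    on every dimension, so the precedent forces the same outcome a fortiori."""
    return all(
        direction * (focus[dim] - precedent[dim]) >= 0
        for dim, direction in directions.items()
    )

precedent = {"value_of_goods": 500, "security_measures": 2}  # decided, say, for the plaintiff
focus = {"value_of_goods": 800, "security_measures": 1}
print(at_least_as_strong(focus, precedent, DIRECTIONS))  # True: outcome is forced
```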

    Applying CBR to manage argumentation in MAS

    The application of argumentation theories and techniques in multi-agent systems has become a prolific area of research. Argumentation allows agents to harmonise two types of disagreement situations: internal, when the acquisition of new information (e.g., about the environment or about other agents) produces inconsistencies in the agent's mental state; and external, when agents that hold different positions about a topic engage in a discussion. The focus of this paper is on the latter type of disagreement situation. In those settings, agents must be able to generate, select, and send arguments to other agents, which will in turn evaluate them. An efficient way for agents to manage these argumentation abilities is case-based reasoning, which has been successfully applied to argumentation from its earliest beginnings. This reasoning methodology also allows agents to learn from their experiences and, therefore, to improve their argumentation skills. This paper analyses the advantages of applying case-based reasoning to manage arguments in multi-agent system dialogues, identifies open issues, and proposes new ideas to tackle them.
    This work was partially supported by CONSOLIDER-INGENIO 2010 under grant CSD2007-00022 and by the Spanish government and FEDER funds under CICYT projects TIN2005-03395 and TIN2006-14630-C0301.
    Heras Barberá, S. M.; Julian Inglada, V. J.; Botti Navarro, V. J. (2010). Applying CBR to manage argumentation in MAS. International Journal of Reasoning-based Intelligent Systems, 2(2), 110-117. https://doi.org/10.1504/IJRIS.2010.034906
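
    A toy sketch of what case-based argument selection might look like appears below; it is an illustration, not the authors' framework. An agent retrieves the past argumentation case most similar to the current dispute and reuses the argument that succeeded there.

```python
# Toy sketch (illustrative, not the authors' framework) of case-based argument
# selection in a multi-agent dialogue.

from dataclasses import dataclass

@dataclass
class ArgCase:
    features: dict      # description of the past dispute (feature -> value in [0, 1])
    argument: str       # argument that was put forward
    succeeded: bool     # whether it persuaded the other agents

def similarity(a: dict, b: dict) -> float:
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(1.0 - abs(a[k] - b[k]) for k in shared) / len(shared)

def select_argument(current: dict, case_base: list):
    """Return the argument from the most similar successful past case, if any."""
    successful = [c for c in case_base if c.succeeded]
    if not successful:
        return None
    best = max(successful, key=lambda c: similarity(current, c.features))
    return best.argument
```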

    Case-Based Reasoning Systems: From Automation to Decision-Aiding and Stimulation

    Over the past decade, case-based reasoning (CBR) has emerged as a major research area within artificial intelligence due to both its widespread usage by humans and its appeal as a methodology for building intelligent systems. Conventional CBR systems have largely been designed as automated problem-solvers that produce a solution to a given problem by adapting the solution to a similar, previously solved problem. Such systems have had limited success in real-world applications. More recently, there has been a search for new paradigms and directions for increasing the utility of CBR systems for decision support. This paper focuses on the synergism between the research areas of CBR and decision support systems (DSSs). A conceptual framework for DSSs is presented and used to develop a taxonomy of three different types of CBR systems: 1) conventional, 2) decision-aiding, and 3) stimulative. The major characteristics of each type of CBR system are explained, with a particular focus on decision-aiding and stimulative CBR systems. The research implications of the evolution in the design of CBR systems from automation toward decision-aiding and stimulation are also explored.
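
    The contrast between conventional and decision-aiding CBR can be sketched in a few lines. This is a rough illustration under assumed interfaces, not the paper's framework: the conventional step commits to one automatically adapted answer, while the decision-aiding step retrieves several similar cases and leaves the judgment to the human decision maker.

```python
# Rough sketch (not from the paper) contrasting conventional and decision-aiding CBR.

def retrieve(problem, case_base, sim, k=1):
    """Return the k past cases most similar to the new problem."""
    return sorted(case_base, key=lambda c: sim(problem, c["problem"]), reverse=True)[:k]

def conventional_cbr(problem, case_base, sim, adapt):
    best = retrieve(problem, case_base, sim, k=1)[0]
    return adapt(best["solution"], problem)        # system commits to a single answer

def decision_aiding_cbr(problem, case_base, sim, k=3):
    return retrieve(problem, case_base, sim, k=k)  # user weighs the retrieved cases
```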