
    A Computational Framework for Formalizing Rules and Managing Changes in Normative Systems

    Legal texts are typically written in natural language. However, a legal text written in a formal language has the advantage of being amenable to automation, at least partially. Such a translation is not easy, and the matter is further complicated by the fact that the law changes over time: once a legal text originally written in natural language has been formalized, its changes must be tracked. This thesis proposes original developments on these subjects. In order to formalize a legal document, we provide a pipeline for the translation of a legal text from natural to formal language, and we apply it to the case of natural resources contracts. In general, adjectives play an important role in a text and help to characterize it; for this reason, we developed a logical system for reasoning with gradable adjectives. Regarding norm change, we provide an ontology to represent change in a normative system, some basic mechanisms by which an agent may acquire new norms, and a study of the problem of revising a defeasible theory by changing only its facts. A further contribution of this thesis is a general framework for revision that includes the previous points as special cases.
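
    A minimal sketch, in Python, of the fact-revision idea mentioned above; the rule encoding and the contract example are invented here, not taken from the thesis. A defeasible rule fires when its premises hold and none of its exceptions do, so revising the facts alone changes which conclusions survive.

        # Sketch only: a defeasible theory as facts plus rules with
        # exceptions; the vocabulary below is hypothetical.
        def conclusions(facts, rules):
            """One-step defeasible consequence: a rule fires when its
            premises are among the facts and none of its exceptions are."""
            derived = set(facts)
            for premises, exceptions, head in rules:
                if premises <= facts and not (exceptions & facts):
                    derived.add(head)
            return derived

        # Hypothetical norm: contracts normally bind, unless revoked.
        rules = [({"contract"}, {"revoked"}, "binding")]

        print(conclusions({"contract"}, rules))             # {'contract', 'binding'}
        print(conclusions({"contract", "revoked"}, rules))  # {'contract', 'revoked'}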

    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology for the design and engineering of ethical reasoners, normative theories, and deontic logics, termed LogiKEy, is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations, and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the LogiKEy designer of ethical intelligent agents in flexibly experimenting with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements of these off-the-shelf provers translate directly into better reasoning performance in LogiKEy. Case studies in which the LogiKEy framework and methodology have been applied and tested give evidence that HOL's undecidability often does not hinder efficient experimentation.
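
    The embedding idea can be illustrated outside of a proof assistant. In the sketch below (plain Python, with an invented accessibility relation and proposition; LogiKEy itself works with HOL provers such as those for Isabelle/HOL), a deontic proposition becomes a predicate on worlds, and obligation quantifies over the ideal alternatives of a world, which is the shape a shallow semantical embedding gives it in HOL.

        # Illustrative only: possible-worlds reading of obligation.
        worlds = {"w0", "w1", "w2"}
        ideal = {"w0": {"w1", "w2"}}   # ideal alternatives of w0 (hypothetical)

        def O(phi):
            """Obligation: phi holds in every ideal alternative."""
            return lambda w: all(phi(v) for v in ideal.get(w, set()))

        # Hypothetical domain fact: damages are paid in both ideal worlds.
        pay = lambda w: w in {"w1", "w2"}

        print(O(pay)("w0"))   # True: payment is obligatory at w0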

    An End-to-End Pipeline from Law Text to Logical Formulas

    We propose a pipeline for converting natural English law texts into logical formulas via a series of structural representations. The texts are first parsed using a formal grammar derived from lightweight annotations. An intermediate representation called assembly logic is then used for logical interpretation and supports translations to different back-end logics and visualisations. The approach, while rule-based and explainable, is also robust: it can deliver useful results from day one, yet allows subsequent refinements and variations.
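
    As a toy illustration of the pipeline's spirit (the paper's grammar and assembly logic are more elaborate; the sentence pattern and formula syntax below are invented), one controlled-English sentence shape can be parsed into an intermediate record and then rendered as a back-end formula:

        import re

        # Hypothetical controlled fragment: "If <cond>, then <subj> must <act>."
        PATTERN = re.compile(r"If (?P<cond>.+), then (?P<subj>\w+) must (?P<act>.+)\.")

        def to_intermediate(sentence):
            m = PATTERN.match(sentence)
            if not m:
                raise ValueError("sentence not in the controlled fragment")
            return {"condition": m["cond"], "subject": m["subj"], "action": m["act"]}

        def to_formula(ir):
            cond = ir["condition"].replace(" ", "_")
            act = ir["action"].replace(" ", "_")
            return f"{cond} -> O({ir['subject']}, {act})"

        ir = to_intermediate("If a breach occurs, then Lessee must notify the lessor.")
        print(to_formula(ir))   # a_breach_occurs -> O(Lessee, notify_the_lessor)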

    Logic-based Technologies for Intelligent Systems: State of the Art and Perspectives

    Together with the disruptive development of modern sub-symbolic approaches to artificial intelligence (AI), symbolic approaches to classical AI are regaining momentum, as more and more researchers exploit their potential to make AI more comprehensible, explainable, and therefore trustworthy. Since logic-based approaches lie at the core of symbolic AI, summarizing their state of the art is of paramount importance, now more than ever, in order to identify the trends, benefits, key features, gaps, and limitations of the techniques proposed so far, as well as to identify promising research perspectives. Along this line, this paper provides an overview of logic-based approaches and technologies by sketching their evolution and pointing out their main application areas. Future perspectives for the exploitation of logic-based technologies are discussed as well, in order to identify those research fields that deserve more attention, considering both the areas that already exploit logic-based approaches and those that are more likely to adopt them in the future.

    A Roadmap for Self-Evolving Communities

    Self-organisation and self-evolution are evident in physics, chemistry, biology, and human societies. Despite the existing literature on the topic, we believe self-organisation and self-evolution are still missing from the IT tools (whether online or offline) we are building and using. In the last decade, human interactions have been moving more and more towards social media. The time we spend interacting with others in virtual communities and networks is tremendous. Yet the tools supporting those interactions remain rigid. This position paper argues the need for self-evolving software-enabled communities and proposes a roadmap for achieving the required self-evolution. The proposal is based on building norm-based communities, where community interactions are regulated by norms and community members are free to discuss and modify their community's norms. The evolution of a community is then dictated by the evolution of its norms.
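
    One way to read the proposal concretely (all names below are invented for illustration, not taken from the paper) is a community whose norms are ordinary data that its own members can vote to change:

        # Sketch: a norm-regulated community that can amend its own norms.
        class Community:
            def __init__(self, members, norms):
                self.members = set(members)
                self.norms = set(norms)

            def propose(self, proposer, norm, votes_for):
                """Adopt a norm when a strict majority of members supports it."""
                assert proposer in self.members
                if len(votes_for & self.members) > len(self.members) / 2:
                    self.norms.add(norm)

        c = Community({"ann", "bo", "cy"}, {"no spam"})
        c.propose("ann", "cite sources", votes_for={"ann", "bo"})
        print(c.norms)   # {'no spam', 'cite sources'}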

    Working on the Argument Pipeline: Through Flow Issues between Natural Language Argument, Instantiated Arguments, and Argumentation Frameworks

    In many domains of public discourse, such as arguments about public policy, there is an abundance of knowledge to store, query, and reason with. To use this knowledge, we must address two key general problems: first, the knowledge acquisition bottleneck between the forms in which knowledge is usually expressed, e.g., natural language, and the forms that can be automatically processed; second, reasoning with the uncertainties and inconsistencies of the knowledge. Given such complexities, it is labour- and knowledge-intensive to conduct policy consultations, where participants contribute statements to the policy discourse. Yet, from such a consultation, we want to derive policy positions, where each position is a set of consistent statements, but where positions may be mutually inconsistent. To address these problems and support policy-making consultations, we consider recent automated techniques in natural language processing, instantiating arguments, and reasoning with the arguments in argumentation frameworks. We discuss application and “bridge” issues between these techniques, outlining a pipeline of technologies whereby expressions in a controlled natural language are parsed and translated into a logic (a literals-and-rules knowledge base), from which we generate instantiated arguments and their relationships using a logic-based formalism (an argument knowledge base); this is then input to an implemented argumentation framework that calculates extensions of arguments (an argument extensions knowledge base); and finally, we extract consistent sets of expressions (policy positions). The paper reports progress towards reasoning with web-based, distributed, collaborative, incomplete, and inconsistent knowledge bases expressed in natural language.
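
    The last formal step of such a pipeline is well defined even in miniature. The sketch below (illustrative Python, not the paper's implementation; the example arguments are invented) computes the grounded extension of an abstract argumentation framework in Dung's sense, by iterating the characteristic function from the empty set:

        def grounded(args, attacks):
            """attacks: set of (attacker, target) pairs."""
            attackers = {a: {x for x, y in attacks if y == a} for a in args}

            def defended(a, S):
                # every attacker of a is counter-attacked by some member of S
                return all(any((s, b) in attacks for s in S) for b in attackers[a])

            S = set()
            while True:
                nxt = {a for a in args if defended(a, S)}
                if nxt == S:
                    return S
                S = nxt

        af_args = {"policy", "objection", "rebuttal"}
        af_attacks = {("objection", "policy"), ("rebuttal", "objection")}
        print(grounded(af_args, af_attacks))   # {'rebuttal', 'policy'} (order may vary)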

    JURI SAYS: An Automatic Judgement Prediction System for the European Court of Human Rights

    In this paper we present the web platform JURI SAYS, which automatically predicts decisions of the European Court of Human Rights based on communicated cases, which are published by the court early in the proceedings and are often available many years before the final decision is made. Our system therefore predicts future judgements of the court. The platform is available at jurisays.com and shows the predictions alongside the actual decisions of the court. It is automatically updated every month with predictions for the new cases. Additionally, the system highlights the sentences and paragraphs that are most important for the prediction (i.e., violation vs. no violation of human rights).
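
    The paper's model is not reproduced here, but the highlighting idea is easy to mimic: with a linear bag-of-words classifier, the contribution of each sentence to the "violation" decision can be read off directly. A sketch using scikit-learn, with toy data and invented labels:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        cases = ["applicant detained without review",       # toy training data
                 "complaint settled amicably by parties"]
        labels = [1, 0]                                      # 1 = violation

        vec = TfidfVectorizer()
        clf = LogisticRegression().fit(vec.fit_transform(cases), labels)

        def sentence_scores(case_text):
            """Rank a case's sentences by their pull towards 'violation'."""
            sents = [s.strip() for s in case_text.split(".") if s.strip()]
            scores = clf.decision_function(vec.transform(sents))
            return sorted(zip(scores, sents), reverse=True)

        print(sentence_scores("The applicant was detained. The parties met."))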

    Analysing accident reports using structured and formal methods

    Formal methods are proposed as a means to improve accident reports, such as the report into the 1996 fire in the Channel Tunnel between the UK and France. The size and complexity of accident reports create difficulties for formal methods, which traditionally suffer from problems of scalability and poor readability. This thesis demonstrates that features of an engineering-style formal modelling process, particularly the structuring of activity and the management of information, reduce the impact of these problems and improve the accuracy of formal models of accident reports. The thesis also contributes a detailed analysis of the methodological requirements for constructing accident report models. Structured, methodical construction and mathematical analysis of the models elicit significant problems in the content and argumentation of the reports; once elicited, these problems can be addressed. The thesis demonstrates the benefits and limitations of taking a wider scope in the modelling process than is commonly adopted for formal accident analysis. We present a deontic action logic as a language for constructing models of accident reports. Deontic action models offer a novel view of a report, one which highlights both the expected and the actual behaviour it describes and facilitates examination of the conflict between the two. The thesis contributes an objective analysis of the utility of both deontic and action logic operators for modelling accident reports. A tool is also presented that executes a subset of the logic, including these deontic and action logic operators.
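
    The contrast at the heart of the thesis, expected versus actual behaviour, can be shown in miniature (the clause vocabulary below is invented; the thesis's logic is far richer). Deontic clauses describe what should have happened, an event log records what did, and their disagreement flags problems in the report:

        # Sketch: obligations/prohibitions vs. recorded actions.
        norms = {("close_doors", "obliged"),
                 ("open_valve", "forbidden")}

        actual = {"open_valve", "sound_alarm"}   # actions the report records

        def conflicts(norms, actual):
            out = []
            for action, status in norms:
                if status == "obliged" and action not in actual:
                    out.append(f"obligation violated: {action} never performed")
                if status == "forbidden" and action in actual:
                    out.append(f"prohibition violated: {action} performed")
            return out

        for c in conflicts(norms, actual):
            print(c)
        # obligation violated: close_doors never performed
        # prohibition violated: open_valve performed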