244 research outputs found

    Modelling legal knowledge for GDPR compliance checking

    In the last fifteen years, Semantic Web technologies have been successfully applied to the legal domain. By combining these techniques and theoretical methods, we propose an integrated framework for modelling legal documents and legal knowledge to support legal reasoning, in particular compliance checking. This paper presents a proof of concept applied to the GDPR domain, with the aim of detecting infringements of mandatory privacy norms, or of preventing possible violations, using BPMN and the Regorous engine.
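The abstract above describes checking business processes against legal norms. As a hedged, greatly simplified sketch (not the paper's actual Regorous rules or BPMN models), a compliance check can be pictured as scanning a process trace for a GDPR-style obligation, here the invented rule that consent must be obtained before personal data is processed:

```python
# Toy compliance check (illustrative only; task names are invented):
# verify that "obtain_consent" precedes any "process_personal_data" task.

def check_consent_obligation(trace):
    """Return the tasks in `trace` that violate the consent-first rule."""
    consent_given = False
    violations = []
    for task in trace:
        if task == "obtain_consent":
            consent_given = True
        elif task == "process_personal_data" and not consent_given:
            violations.append(task)
    return violations

compliant = ["obtain_consent", "process_personal_data"]
non_compliant = ["process_personal_data", "obtain_consent"]
```

A real engine such as Regorous evaluates defeasible deontic rules over process models rather than a linear trace, but the shape of the check is the same: norms on one side, process behaviour on the other.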

    Akoma Ntoso: Flexibility and Customization to Meet Different Legal Traditions

    We present different techniques to manage the customization of the Akoma Ntoso XSD, an OASIS XML vocabulary for legal documents, using native elements, generic elements, modules, or tools.
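One of the customization mechanisms mentioned above is Akoma Ntoso's generic elements: when a legal tradition needs a structure with no dedicated tag, a generic container carrying a local `name` attribute can be used without modifying the XSD. The namespace below follows the OASIS schema; the document content itself is invented for illustration:

```python
# Sketch of Akoma Ntoso's generic-element mechanism: an <hcontainer>
# with a custom name stands in for a tradition-specific structure.
import xml.etree.ElementTree as ET

AKN_NS = "http://docs.oasis-open.org/legaldocml/ns/akn/3.0"
ET.register_namespace("", AKN_NS)

# Invented example: a "rubric" container with a heading.
custom = ET.Element(f"{{{AKN_NS}}}hcontainer", {"name": "rubric"})
heading = ET.SubElement(custom, f"{{{AKN_NS}}}heading")
heading.text = "Transitional provisions"

xml_text = ET.tostring(custom, encoding="unicode")
```

The design point is that validity against the shared XSD is preserved while the `name` attribute carries the tradition-specific semantics.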

    Legal Knowledge Extraction for Knowledge Graph Based Question-Answering

    This paper presents Open Knowledge Extraction (OKE) tools combined with natural-language analysis of the sentence in order to enrich the semantics of the legal knowledge extracted from legal texts. In particular, the use case concerns international private law, with specific regard to the Rome I Regulation EC 593/2008, the Rome II Regulation EC 864/2007, and the Brussels I bis Regulation EU 1215/2012. A Knowledge Graph (KG) is built using OKE and Natural Language Processing (NLP) methods jointly with the main ontology design patterns defined for the legal domain (e.g., event, time, role, agent, right, obligation, jurisdiction). Using critical questions identified by legal experts in the domain, we have built a question-answering tool capable of supporting information retrieval and answering these queries. The system should help the legal expert retrieve the relevant legal information connected with topics, concepts, entities, and normative references, in order to support his/her searching activities.
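A minimal sketch of the knowledge-graph idea, with invented triples (the paper's actual graph is built by OKE/NLP pipelines over the Rome and Brussels regulations): provisions become subject-predicate-object triples, and a question is answered by matching a pattern against them.

```python
# Toy triple store and query; the triples are illustrative, not
# extracted from the real regulations.

triples = [
    ("Rome I Regulation", "governs", "contractual obligations"),
    ("Rome II Regulation", "governs", "non-contractual obligations"),
    ("Brussels I bis Regulation", "governs", "jurisdiction"),
]

def ask(predicate, obj):
    """Return every subject s with a triple (s, predicate, obj)."""
    return [s for s, p, o in triples if p == predicate and o == obj]
```

In the paper's setting the same lookup would be an ontology-aware SPARQL-style query rather than a list comprehension, but the retrieval pattern is the same.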

    Making Things Explainable vs Explaining: Requirements and Challenges Under the GDPR

    Abstract. The European Union (EU), through the High-Level Expert Group on Artificial Intelligence (AI-HLEG) and the General Data Protection Regulation (GDPR), has recently posed an interesting challenge to the eXplainable AI (XAI) community by demanding a more user-centred approach to explaining Automated Decision-Making systems (ADMs). Looking at the relevant literature, XAI is currently focused on producing explainable software and explanations that generally follow an approach we could term One-Size-Fits-All, which is unable to meet the requirement of centring on user needs. One cause of this limitation is the belief that making things explainable is, alone, enough to yield pragmatic explanations. Thus, insisting on a clear separation between explainability (something that can be explained) and explanations, we point to explanatorY AI (YAI) as an alternative and more powerful approach to meeting the AI-HLEG challenge. YAI builds on XAI with the goal of collecting and organizing explainable information, articulating it into what we call user-centred explanatory discourses. Through the use of explanatory discourses/narratives, we recast the problem of generating explanations for ADMs as the identification of an appropriate path over an explanatory space, allowing explainees to explore it interactively and produce the explanation best suited to their needs.
    Sovrano, Francesco; Vitali, Fabio; Palmirani, Monica
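The "path over an explanatory space" framing can be pictured as graph search: explainable information forms a graph, and an explanation is a path from the user's question to the level of detail they need. Everything below (node names, graph shape) is invented for illustration:

```python
# Toy explanatory space as an adjacency list; finding an explanation is
# finding a shortest path via breadth-first search.
from collections import deque

space = {
    "why was my loan denied?": ["decision factors"],
    "decision factors": ["income threshold", "credit history"],
    "credit history": ["scoring model"],
    "income threshold": [],
    "scoring model": [],
}

def explanation_path(start, goal):
    """Breadth-first search for the shortest explanatory path."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in space.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

The user-centred aspect in the paper is precisely that different explainees follow different paths interactively, rather than all receiving the same one.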

    Metrics, Explainability and the European AI Act Proposal

    On 21 April 2021, the European Commission proposed the first legal framework on Artificial Intelligence (AI) to address the risks posed by this emerging method of computation. The Commission proposed a Regulation known as the AI Act. The proposed AI Act considers not only machine learning, but also expert systems and statistical models long in place. Under the proposed AI Act, new obligations are set to ensure transparency, lawfulness, and fairness. Their goal is to establish mechanisms that ensure quality at launch and throughout the whole life cycle of AI-based systems, thus ensuring legal certainty that encourages innovation and investment in AI systems while preserving fundamental rights and values. A standardisation process is ongoing: several entities (e.g., ISO) and scholars are discussing how to design systems that are compliant with the forthcoming Act, and explainability metrics play a significant role. Specifically, the AI Act sets some new minimum requirements of explicability (transparency and explainability) for the AI systems labelled as “high-risk” and listed in Annex III. These requirements include a plethora of technical explanations capable of covering the right amount of information, in a meaningful way. This paper aims to investigate how such technical explanations can be deemed to meet the minimum requirements set by the law and expected by society. To answer this question, we propose an analysis of the AI Act, aiming to understand (1) what specific explicability obligations are set and who shall comply with them, and (2) whether any metric for measuring the degree of compliance of such explanatory documentation could be designed. Moreover, by envisaging the legal (or ethical) requirements that such a metric should possess, we discuss how to implement them in a practical way.
    More precisely, drawing inspiration from recent advancements in the theory of explanations, our analysis proposes that metrics to measure the kind of explainability endorsed by the proposed AI Act shall be risk-focused, model-agnostic, goal-aware, intelligible, and accessible. Therefore, we discuss the extent to which these requirements are met by the metrics currently under discussion.
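As a loose, invented illustration (not the authors' metric), one model-agnostic way to score explanatory documentation is to measure how many archetypal user questions the text addresses; the list of archetypes can be tailored to the audience, which is goal-aware in the abstract's sense. The cue lists and substring matching below are deliberately naive:

```python
# Toy coverage metric: fraction of question archetypes for which the
# documentation contains at least one (crude, substring-matched) cue.

QUESTION_CUES = {
    "why": ["because", "due to", "reason"],
    "how": ["step", "procedure", "process"],
    "what-if": ["if ", "unless", "would"],
}

def coverage_score(doc_text):
    """Fraction of question archetypes with at least one cue present."""
    text = doc_text.lower()
    hits = sum(
        any(cue in text for cue in cues) for cues in QUESTION_CUES.values()
    )
    return hits / len(QUESTION_CUES)
```

A metric meeting the abstract's five requirements would of course need far more than keyword cues, but it shows what "a measurable property of the documentation, independent of the model" means in practice.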

    Online Publication of Court Decisions in Europe

    Although nowadays most courts publish decisions on the internet, substantial differences exist between European countries regarding such publication. These differences pertain not only to the extent to which judgments are published and anonymised, but also to their metadata, searchability, and reusability. This article, written by Marc van Opijnen, Ginevra Peruginelli, Eleni Kefali and Monica Palmirani, contains a synthesis of a comprehensive comparative study on the publication of court decisions within all Member States of the European Union. Specific attention is paid to the legal and policy frameworks governing case-law publication, actual practices, data-protection issues, Open Data policies, and the state of play regarding the implementation of the European Case Law Identifier.
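The European Case Law Identifier mentioned above has a fixed five-part syntax, `ECLI:country:court:year:ordinal`, so it can be parsed with a simple split. The identifier used below is illustrative:

```python
# Parse a European Case Law Identifier into its five components.

def parse_ecli(ecli):
    parts = ecli.split(":")
    if len(parts) != 5 or parts[0] != "ECLI":
        raise ValueError(f"not a well-formed ECLI: {ecli!r}")
    _, country, court, year, ordinal = parts
    return {"country": country, "court": court,
            "year": year, "ordinal": ordinal}

record = parse_ecli("ECLI:NL:HR:2013:BZ0745")
```

A uniform, parseable identifier is exactly what makes cross-border searchability and reuse of case law feasible, which is the study's concern.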

    DaPIS: an Ontology-Based Data Protection Icon Set

    Privacy policies are known to be impenetrable and lengthy texts that are hardly read and poorly understood. This is why the General Data Protection Regulation (GDPR) introduces provisions to enhance information transparency, including icons as visual means to clarify data practices. However, research on the creation and evaluation of graphical symbols for the communication of legal concepts, which are generally abstract and unfamiliar to laypeople, is still in its infancy. Moreover, detailed visual representations can support users’ comprehension of the underlying concepts, but at the expense of simplicity and usability. This Chapter describes a methodology for the creation and evaluation of DaPIS, a machine-readable Data Protection Icon Set that was designed following human-centered methods drawn from the emerging discipline of Legal Design. Participatory design methods have ensured that the perspectives of legal experts, designers, and other relevant stakeholders are combined in a fruitful dialogue, while user studies have empirically determined the strengths and weaknesses of the icon set as a communicative means for the legal sphere. Inputs from other disciplines were also fundamental: canonical principles drawn from aesthetics, ergonomics, and semiotics were included in the methodology. Moreover, DaPIS is modeled on PrOnto, an ontology of the GDPR, thus offering a comprehensive solution for the Semantic Web. In combination with the description of a privacy policy in the legal XML standard Akoma Ntoso, such an approach makes the icons machine-readable and automatically retrievable. Icons can thus serve as information markers in lengthy privacy statements and support efficient navigation of the document. In this way, different representations of legal information can be mapped and connected to enhance its comprehensibility: the lawyer-readable, the machine-readable, and the human-readable layers.
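The machine-readable side of the icon set can be sketched as follows: if policy clauses are annotated with ontology concepts (plain strings below stand in for PrOnto classes), icons can be attached automatically and used as navigation markers. All concept names, icon identifiers, and clauses here are invented for illustration:

```python
# Invented mapping from ontology concepts to icon identifiers, applied
# to annotated privacy-policy clauses.

ICON_FOR_CONCEPT = {
    "DataProcessing": "icon-processing",
    "DataTransfer": "icon-transfer",
    "RightToErasure": "icon-erasure",
}

policy = [
    {"clause": "We process your data to provide the service.",
     "concept": "DataProcessing"},
    {"clause": "You may ask us to delete your data.",
     "concept": "RightToErasure"},
]

annotated = [
    {**c, "icon": ICON_FOR_CONCEPT.get(c["concept"])} for c in policy
]
```

In the chapter's actual pipeline the annotations live inside an Akoma Ntoso document and the concepts come from PrOnto, but the retrieval step reduces to this kind of lookup.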

    Variants of temporal defeasible logics for modelling norm modifications

    This paper proposes some variants of Temporal Defeasible Logic (TDL) for reasoning about normative modifications. These variants make it possible to differentiate cases in which, for example, a modification at some time changes legal rules but their conclusions persist afterwards, from cases in which their conclusions are blocked as well.
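The persistence-versus-blocking distinction drawn above can be illustrated with a toy model (invented, and far simpler than TDL, which handles defeasibility and rule interaction): a rule repealed at time `t` may either leave its earlier conclusions in force afterwards, or have them blocked too.

```python
# Toy temporal check: is a rule's conclusion in force at `time`?
# The rule derives its conclusion at `derived_at` and is repealed after
# `rule_active_until`; `persistent` says whether conclusions survive
# the repeal.

def conclusion_holds(time, rule_active_until, derived_at, persistent):
    if time < derived_at:
        return False          # conclusion not yet derived
    if time <= rule_active_until:
        return True           # rule still in force
    return persistent         # after repeal: persist or be blocked
```

The paper's TDL variants encode this choice in the logic itself, so that different modification types (abrogation, annulment, etc.) yield the appropriate behaviour.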