129 research outputs found

    Los modelos de semántica de marcos para la representación del conocimiento jurídico en el Derecho Comparado: el caso de la responsabilidad del Estado

    This article offers an in-depth analysis, and proposes a representation, of the legal knowledge underlying the concept of State responsibility from a multilingual and comparative-law perspective. To this end, it recommends enriching semantic frames (hereinafter, frames) with the semantic types of the FrameNet system, with the double purpose of serving as an interlingual representation of legal knowledge and of formalising the causes of lexical and conceptual mismatches between legal systems. The article studies the principle of State responsibility in the Spanish, English, French and Italian models and shows how a more detailed description of legal knowledge, through the linking of the frame elements (hereinafter designated by the acronym FE) of the frames with the semantic types [±sentient], makes it feasible not only to use these as an interlingual representation, but also to explain the divergences/convergences of the various approaches to the concept of State responsibility, rooted in sociocultural contexts of different traditions. The proposal demonstrates the advantages of this formalisation as a model for explaining the dynamic process of divergence/convergence in the case law of the Court of Justice of the European Union (referred to hereinafter by the acronym CJEU).
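    The pairing of frame elements with the [±sentient] semantic type can be pictured with a small data-structure sketch (hypothetical Python, not the authors' actual formalism; the frame name, FE names, and typings are illustrative):

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FrameElement:
        name: str
        sentient: bool  # True = [+sentient], False = [-sentient]

    @dataclass
    class Frame:
        name: str
        elements: list

    # Illustrative contrast: one model treats the liable party as the State
    # itself ([-sentient]), another as an individual official ([+sentient]).
    liability_a = Frame("State_responsibility", [
        FrameElement("Wrongdoer", sentient=False),
        FrameElement("Injured_party", sentient=True),
    ])
    liability_b = Frame("State_responsibility", [
        FrameElement("Wrongdoer", sentient=True),
        FrameElement("Injured_party", sentient=True),
    ])

    def divergent_elements(a: Frame, b: Frame) -> list:
        """Return FE names whose [±sentient] typing differs between two models."""
        typing_a = {fe.name: fe.sentient for fe in a.elements}
        return [fe.name for fe in b.elements
                if fe.name in typing_a and typing_a[fe.name] != fe.sentient]

    print(divergent_elements(liability_a, liability_b))  # ['Wrongdoer']
    ```

    On this view, comparing two legal systems reduces to diffing the semantic-type annotations of their shared frames, which is what makes the divergences explicit.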

    FinnFN 1.0: The Finnish frame semantic database

    The article describes the process of creating a Finnish-language FrameNet, or FinnFN, based on the original English-language FrameNet hosted at the International Computer Science Institute in Berkeley, California. We outline the goals and results of the FinnFN project, and especially the creation of the FinnFrame corpus. The main aim of the project was to test the universal applicability of frame semantics by annotating real Finnish text using the same frames and annotation conventions as in the original Berkeley FrameNet project. From Finnish newspaper corpora, 40,721 sentences were automatically retrieved and manually annotated as example sentences evoking certain frames; these became the FinnFrame corpus. Applying the Berkeley FrameNet annotation conventions to Finnish required some modifications due to Finnish morphology, and a convention for annotating individual morphemes within words was introduced for phenomena such as compounding, comparatives and case endings. Various questions about cultural salience across the two languages arose during the project, but problematic situations occurred in only a few examples, which we also discuss in the article. The article shows that, barring a few minor instances, the universality hypothesis of frames is largely confirmed for languages as different as Finnish and English. (Peer reviewed)
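    The morpheme-level convention can be pictured as character-offset spans inside a single token (a hypothetical sketch, not the project's actual tooling; the labels are illustrative):

    ```python
    # In a Finnish compound, a frame-evoking element or FE can be a morpheme
    # inside one word, so annotations use character offsets, not token indices.
    sentence = "kirjakauppa"  # 'bookshop': kirja 'book' + kauppa 'shop'
    annotations = [
        {"span": (0, 5), "label": "FE"},       # kirja, an FE inside the word
        {"span": (5, 11), "label": "Target"},  # kauppa, the frame-evoking part
    ]

    def spans_to_text(sentence, annotations):
        """Recover the annotated morphemes from their offsets."""
        return [sentence[a["span"][0]:a["span"][1]] for a in annotations]

    print(spans_to_text(sentence, annotations))  # ['kirja', 'kauppa']
    ```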

    Design of a Controlled Language for Critical Infrastructures Protection

    We describe a project for the construction of a controlled language for critical infrastructures protection (CIP). The project originates from the need to coordinate and categorise communications on CIP at the European level. These communications can be physically represented by official documents, reports on incidents, informal communications and plain e-mail. We explore the application of traditional library-science tools for the construction of controlled languages in order to achieve this goal. Our starting point is an analogous effort from the 1960s in the field of nuclear science, known as the Euratom Thesaurus. (JRC.G.6 – Security technology assessment)

    Modular norm models: practical representation and analysis of contractual rights and obligations

    Compliance analysis requires legal counsel, which is generally unavailable in many software projects. Analysis of legal text using logic-based models can help developers understand the requirements for the development and use of software-intensive systems throughout their lifecycle. We outline a practical modeling process for norms in legally binding agreements that include contractual rights and obligations. A computational norm model analyzes available rights and required duties based on the satisfiability of situations (states of affairs) in a given scenario. Our method enables modular norm-model extraction, representation, and reasoning. For norm extraction, using the theory of frame semantics, we construct two foundational norm templates for linguistic guidance. These templates correspond to Hohfeld's concepts of claim-right and its jural correlative, duty. Each template instantiation results in a norm model, encapsulated in a modular unit we call a super-situation, which corresponds to an atomic fragment of law. For hierarchical modularity, super-situations contain a primary norm that participates in relationships with other norm models. A norm's compliance value is logically derived from its related situations and propagated to the norm's containing super-situation, which in turn participates in other super-situations. This modularity allows on-demand incremental modeling and reasoning using simpler model primitives than previous approaches. While we demonstrate the usefulness of our norm models through empirical studies with contractual statements in the open-source software and privacy domains, their grounding in theories of law and linguistics allows wide applicability.
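    The derivation-and-propagation idea can be sketched in a few lines (hypothetical Python, assuming a simplified three-valued compliance scheme; the actual method's primitives are richer):

    ```python
    from enum import Enum

    class Compliance(Enum):
        COMPLIED = "complied"
        VIOLATED = "violated"
        NOT_APPLICABLE = "n/a"

    def duty_compliance(antecedent_holds: bool, duty_fulfilled: bool) -> Compliance:
        """Hohfeldian duty template: if the triggering situation holds,
        the required duty must be fulfilled; otherwise the norm does not apply."""
        if not antecedent_holds:
            return Compliance.NOT_APPLICABLE
        return Compliance.COMPLIED if duty_fulfilled else Compliance.VIOLATED

    def super_situation_ok(norm_values) -> bool:
        """A super-situation is compliant when no contained norm is violated;
        its value can then feed into an enclosing super-situation."""
        return all(v is not Compliance.VIOLATED for v in norm_values)

    # E.g. a duty that was triggered and fulfilled, plus one that never applied:
    values = [duty_compliance(True, True), duty_compliance(False, False)]
    print(super_situation_ok(values))  # True
    ```

    The modularity in the abstract corresponds to `super_situation_ok` consuming the outputs of smaller units, so models can be grown incrementally.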

    Data sensitivity detection in chat interactions for privacy protection

    In recent years, there has been exponential growth in the use of virtual spaces, including dialogue systems, that handle personal information. The concept of personal privacy is discussed and contested in the literature, whereas in the technological field it directly influences the perceived reliability of an information system (privacy 'as trust'). This work aims to protect the right to privacy over personal data (GDPR, 2018) and to avoid the loss of sensitive content by exploring the sensitive information detection (SID) task. It is grounded on the following research questions: (RQ1) What does sensitive data mean, and how can a personal sensitive information domain be defined? (RQ2) How can a state-of-the-art model for SID be created? (RQ3) How should the model be evaluated? RQ1 theoretically investigates the concept of privacy and the ontological state-of-the-art representation of personal information. The Data Privacy Vocabulary (DPV) is the taxonomic resource taken as the authoritative reference for the definition of the knowledge domain. Concerning RQ2, we investigate two approaches to classifying sensitive data: the first, bottom-up, explores automatic learning methods based on transformer networks; the second, top-down, proposes logical-symbolic methods with the construction of PrivaFrame, a knowledge graph of compositional frames representing personal data categories. Both approaches are tested. For the evaluation (RQ3), we create SPeDaC, a sentence-level labeled resource. It can be used as a benchmark or for training in the SID task, filling the gap left by the absence of a shared resource in this field. While the approach based on artificial neural networks confirms the validity of the direction adopted in the most recent studies on SID, the logical-symbolic approach emerges as the preferred way to classify fine-grained personal data categories, thanks to the semantically grounded, tailored modeling it allows. At the same time, the results highlight the strong potential of hybrid architectures for solving automatic tasks.
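    The top-down, frame-style route can be caricatured as matching sentences against lexical units attached to personal-data categories (a minimal hypothetical sketch; the category names loosely echo DPV-style labels and the lexicon is invented for illustration):

    ```python
    # Illustrative lexicon: each sensitive category lists trigger phrases.
    SENSITIVE_LEXICON = {
        "Health": {"diagnosis", "illness", "medication"},
        "Financial": {"salary", "iban", "credit card"},
    }

    def detect_sensitive(sentence: str) -> list:
        """Return the personal-data categories whose lexical units occur."""
        text = sentence.lower()
        return sorted(cat for cat, lexicon in SENSITIVE_LEXICON.items()
                      if any(lu in text for lu in lexicon))

    print(detect_sensitive("My medication for the illness changed"))  # ['Health']
    ```

    A real frame-based classifier would of course use compositional frames and semantic parsing rather than substring matching, but the interface (sentence in, fine-grained categories out) is the same.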

    Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical Explanations

    Traditional symbolic reasoning engines, while attractive for their precision and explicability, have a few major drawbacks: the use of brittle inference procedures that rely on exact matching (unification) of logical terms, an inability to deal with uncertainty, and the need for a precompiled rule base of knowledge (the "knowledge acquisition" problem). To address these issues, we devise a novel logical reasoner called Braid that supports probabilistic rules and uses the notions of custom unification functions and dynamic rule generation to overcome the brittle-matching and knowledge-gap problems prevalent in traditional reasoners. In this paper, we describe the reasoning algorithms used in Braid and their implementation in a distributed task-based framework that builds proof/explanation graphs for an input query. We use a simple QA example from a children's story to motivate Braid's design and explain how the various components work together to produce a coherent logical explanation. Finally, we evaluate Braid on the ROC Story Cloze test and achieve close to state-of-the-art results while providing frame-based explanations. (Comment: Accepted at AAAI-202)
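    The custom-unification idea can be illustrated with a toy soft unifier (a hypothetical sketch, not Braid's actual algorithm: terms unify when a user-supplied similarity function scores them above a threshold, and the match carries a confidence instead of being all-or-nothing):

    ```python
    def soft_unify(goal, fact, similar, threshold=0.7):
        """Unify two flat tuples; '?x'-style symbols are variables.
        Returns (bindings, confidence) on success, None on failure."""
        if len(goal) != len(fact):
            return None
        bindings, confidence = {}, 1.0
        for g, f in zip(goal, fact):
            if isinstance(g, str) and g.startswith("?"):  # variable: bind it
                bindings[g] = f
            else:
                score = similar(g, f)                     # soft constant match
                if score < threshold:
                    return None
                confidence *= score
        return bindings, confidence

    # Toy similarity: exact match scores 1.0, listed near-synonyms 0.8.
    SYNONYMS = {("happy", "glad"), ("glad", "happy")}
    sim = lambda a, b: 1.0 if a == b else (0.8 if (a, b) in SYNONYMS else 0.0)

    print(soft_unify(("feels", "?who", "happy"), ("feels", "anna", "glad"), sim))
    # ({'?who': 'anna'}, 0.8)
    ```

    Replacing `sim` with, say, an embedding-based similarity is what lets a reasoner bridge gaps that exact unification cannot.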

    Abstract syntax as interlingua: Scaling up the grammatical framework from controlled languages to robust pipelines

    Abstract syntax is an interlingual representation used in compilers. Grammatical Framework (GF) applies the abstract-syntax idea to natural languages. The development of GF started in 1998, first as a tool for controlled-language implementations, where it has gained an established position in both academic and commercial projects. GF provides grammar resources for over 40 languages, enabling accurate generation and translation, as well as grammar-engineering tools and components for mobile and Web applications. On the research side, the focus in the last ten years has been on scaling GF up to wide-coverage language processing. The concept of abstract syntax offers a unified view of many other approaches: Universal Dependencies, WordNets, FrameNets, Construction Grammars, and Abstract Meaning Representations. This makes it possible for GF to utilize data from these other approaches and to build robust pipelines. In return, GF can contribute to data-driven approaches with methods to transfer resources from one language to others, to augment data by rule-based generation, to check the consistency of hand-annotated corpora, and to pipe analyses into high-precision semantic back ends. This article gives an overview of the use of abstract syntax as an interlingua through both established and emerging NLP applications involving GF.
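    The abstract-syntax-as-interlingua idea can be mocked up in a few lines: one abstract tree, several concrete linearisation rules. Real GF grammars are written in GF's own grammar language; this hypothetical Python sketch only mirrors the shape, with an invented two-entry lexicon:

    ```python
    LEXICON = {
        "eng": {"red": "red", "house": "house"},
        "fre": {"red": "rouge", "house": "maison"},
    }

    def linearize(tree, lang):
        """Turn an abstract tree into a string for one concrete language."""
        if isinstance(tree, str):
            return LEXICON[lang][tree]
        head, adj, noun = tree            # only the Mod constructor here
        a, n = linearize(adj, lang), linearize(noun, lang)
        # Adjective placement is a concrete-syntax decision, absent from the tree.
        return f"{a} {n}" if lang == "eng" else f"{n} {a}"

    tree = ("Mod", "red", "house")        # one tree, many linearisations
    print(linearize(tree, "eng"))  # red house
    print(linearize(tree, "fre"))  # maison rouge
    ```

    Parsing in one language and linearising in another through the shared tree is, in miniature, the translation pipeline the abstract describes.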