    Multi-agent blackboard architecture for supporting legal decision making

    Our research objective is to design a system to support legal decision-making using the multi-agent blackboard architecture. Agents represent experts that may apply various knowledge-processing algorithms and knowledge sources. The experts cooperate through a blackboard that stores facts about the current case. Knowledge is represented as a set of rules, and the inference process is based on bottom-up control (forward chaining). The goal of our system is to find rationales for arguments supporting different decisions in a given case, using precedents and statutory knowledge. The system also uses top-down knowledge from statutes and precedents to interactively query the user for additional facts when such facts could affect the judgment. The rationales for the various judgments are presented to the user, who may choose the most appropriate one. We present two example scenarios in Polish traffic law to illustrate the features of our system. Based on these results, we argue that the blackboard architecture provides an effective approach to modelling situations where a multitude of possibly conflicting factors must be taken into account in decision making. We briefly discuss two such scenarios: incorporating moral and ethical factors in decision making by autonomous systems (e.g. self-driving cars), and integrating eudaimonic (well-being) factors in modelling mobility patterns in a smart city.
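    As a minimal sketch of the mechanism described above, the following Python fragment shows a blackboard of facts on which rules fire by forward chaining, each firing recording a rationale. The rules, fact names and traffic-law flavour are invented for illustration; this is not the paper's actual rule base.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    premises: frozenset   # facts that must already be on the blackboard
    conclusion: str       # fact added to the blackboard when the rule fires
    rationale: str        # human-readable justification for the conclusion

# Hypothetical rules loosely inspired by a traffic-law scenario.
RULES = [
    Rule(frozenset({"exceeded_speed_limit"}), "traffic_offence",
         "Statute: exceeding the speed limit is an offence."),
    Rule(frozenset({"traffic_offence", "emergency_transport"}), "offence_excused",
         "Precedent: necessity can excuse a traffic offence."),
]

def forward_chain(blackboard):
    """Bottom-up control: fire rules until no new facts can be derived."""
    rationales = []
    changed = True
    while changed:
        changed = False
        for rule in RULES:
            if rule.premises <= blackboard and rule.conclusion not in blackboard:
                blackboard.add(rule.conclusion)
                rationales.append(rule.rationale)
                changed = True
    return rationales

for r in forward_chain({"exceeded_speed_limit", "emergency_transport"}):
    print(r)
```

    Running the sketch prints the statutory rationale followed by the excusing precedent, mirroring how the blackboard accumulates competing rationales for the user to weigh.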

    Modeling and Publishing Legislation and Case Law as Linked Open Data

    Governments publish legal information openly online, usually as human-readable PDF and HTML documents. However, to facilitate the search and understanding of legal information, intelligent services and intelligent data modeling are required. Moreover, as law becomes international, authorities have a growing need to enhance legal information exchange across national borders, which calls for standardized ways of presenting the material. This thesis examines how Linked Data technologies can be applied to modeling and publishing legislation and case law so that the publication serves a wide range of legal-information use cases. The work involved the development of RDF data models and data conversions, data enrichment, and the implementation of application programming interfaces and application prototypes. The end result is Semantic Finlex, a legal Linked Open Data service that hosts the central parts of Finnish legislation and the judgments of the Supreme Court and the Supreme Administrative Court, published using European standards for identifiers and metadata schemas.
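    To make the modeling approach concrete, here is a hedged sketch using the rdflib Python library to describe a statute in the style of the European Legislation Identifier (ELI) ontology, one of the European standards the abstract alludes to. The URI, title and property choices are illustrative assumptions and do not reproduce the actual Semantic Finlex data model.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

ELI = Namespace("http://data.europa.eu/eli/ontology#")

g = Graph()
g.bind("eli", ELI)

# Hypothetical identifier; real ELI URIs encode jurisdiction, year and number.
act = URIRef("http://example.org/eli/act/2016/123")
g.add((act, RDF.type, ELI.LegalResource))
g.add((act, ELI.title, Literal("Example Act", lang="en")))
g.add((act, ELI.date_document, Literal("2016-01-01")))

# Serialize the graph as Turtle, one of the standard Linked Data formats.
print(g.serialize(format="turtle"))
```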

    Reasoning with Data Flows and Policy Propagation Rules

    Data-oriented systems and applications are at the centre of current developments of the World Wide Web. In these scenarios, assessing which policies propagate from the licenses of data sources to the output of a given data-intensive system is an important problem. Both policies and data flows can be described with Semantic Web languages. Although it is possible to define Policy Propagation Rules (PPR) by associating policies with data flow steps, this activity results in a huge number of rules to be stored and managed. In a recent paper, we introduced strategies for reducing the size of a PPR knowledge base by using an ontology of the possible relations between data objects, the Datanode ontology, and applying the (A)AAAA methodology, a knowledge engineering approach that exploits Formal Concept Analysis (FCA). In this article, we investigate whether this reasoning is feasible and how it can be performed. For this purpose, we study the impact of compressing a rule base associated with an inference mechanism on the performance of the reasoning process. Moreover, we report on an extension of the (A)AAAA methodology that includes a coherency-check algorithm, which makes this reasoning possible. We show how this compression, in addition to being beneficial to the management of the knowledge base, also has a positive impact on the performance and resource requirements of the reasoning process for policy propagation.
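    The following Python sketch illustrates the compression idea under stated assumptions: policies are attached to abstract data-flow relations, and sub-relations inherit propagation behaviour from their parents (a stand-in for the Datanode ontology), so one rule covers a whole family of relations. The relation names, hierarchy and rule base are invented for illustration, not taken from the paper's knowledge base.

```python
# Hierarchy of data-flow relations (child -> parent), mimicking an ontology.
PARENT = {"hasCopy": "hasDerivation", "hasSelection": "hasDerivation"}

# Compressed rule base: a policy propagates through a relation and,
# implicitly, through all of its sub-relations.
PROPAGATES = {("attribution-required", "hasDerivation")}

def propagates(policy, relation):
    """Walk up the relation hierarchy until a rule matches or the root is hit."""
    while relation is not None:
        if (policy, relation) in PROPAGATES:
            return True
        relation = PARENT.get(relation)
    return False

# A one-step data flow: the output is a selection of the policed source.
flow = [("source", "hasSelection", "output")]
policies = {"source": {"attribution-required"}}

for src, rel, dst in flow:
    for p in policies.get(src, set()):
        if propagates(p, rel):
            policies.setdefault(dst, set()).add(p)

print(policies["output"])   # {'attribution-required'}
```

    One rule over the parent relation here replaces two rules over its children, which is the sense in which the rule base is compressed without changing what the reasoner concludes.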

    Proposal complexity and report allocation in the European Parliament

    Experience and loyalty have been identified as major explanations for why Members of the European Parliament (MEPs) are selected as committee rapporteurs in the European Parliament. Yet existing research implicitly assumes that these explanations operate in isolation from what the report is about. In this article, we hypothesize that the effects of experience and loyalty on MEPs' chances of becoming rapporteurs are conditioned by the complexity of the Commission's legislative proposal. We show that party group coordinators indeed allocate the most complex legislative tasks to highly experienced MEPs, but we cannot confirm such a conditional relationship for the effect of loyalty. Our study contributes to the literature on legislative organization in the European Parliament by highlighting the role of proposal complexity in the report allocation process.
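    The article's actual estimation strategy is not given here; as a hedged illustration, the Python/statsmodels sketch below shows how a conditional hypothesis of this kind is typically specified, with an interaction term letting the effect of experience vary with proposal complexity. The data are simulated and every variable name is an assumption.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "experience": rng.normal(size=n),   # MEP's legislative experience
    "loyalty": rng.normal(size=n),      # loyalty to the party group
    "complexity": rng.normal(size=n),   # complexity of the Commission proposal
})
# Simulate allocation where experience matters more on complex proposals.
linpred = -1 + 0.5 * df.experience + 0.3 * df.loyalty \
          + 0.6 * df.experience * df.complexity
df["rapporteur"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

# The experience:complexity interaction term carries the conditional effect.
model = smf.logit("rapporteur ~ experience * complexity + loyalty", data=df)
print(model.fit(disp=0).summary())
```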

    Representation Of Case Law For Argumentative Reasoning

    Modelling argumentation based on legal cases has been a central topic of AI and Law since its very beginnings. The current established view is that facts must be determined on the basis of evidence. Next, these facts must be used to ascribe legally significant predicates (factors and issues) to the case, on the basis of which the outcome can be established. This thesis aims to provide a method to encapsulate the knowledge of bodies of case law from various legal domains using a recent development in AI knowledge representation, Abstract Dialectical Frameworks (ADFs), as the central feature of the design method. Three legal domains in the US courts are used throughout the thesis: the domain of the Automobile Exception to the Fourth Amendment, which is freshly analysed in terms of factors in this thesis; the US Trade Secrets domain, analysed from well-known legal case-based reasoning systems (CATO and IBP); and the Wild Animals domain, analysed extensively in AI and Law. In this work, ADFs play a role akin to that of Entity-Relationship models in the design of database systems: they are used to design and implement programs intended to decide cases, described as sets of factors, according to a theory of a particular domain based on a set of precedent cases relating to that domain. The ADFs in this thesis are instantiated from different starting points: factor-based representation of oral dialogues and factor-based analysis of legal opinions. A legal dialogue representation model is defined for the US Supreme Court oral-hearing dialogues. The role of these hearings is to identify the components that can form the basis of an argument that will resolve the case. Dialogue moves used by participants are identified as the dialogue proceeds to assert and modify argument components in terms of issues, factors and facts, and to produce what are called Argument Component Trees (ACTs) for each participant in the dialogue, showing how these components relate to one another. The resulting trees can then be merged and used as input to decide the accepted components using an ADF. The model is illustrated using two landmark case studies in the Automobile Exception domain: California v. Carney and US v. Chadwick. A legal justification model is defined to capture knowledge in a legal domain and to provide justification and transparency of legal decisions. First, a legal domain ADF is instantiated from the factor hierarchies of CATO and IBP; the method is then applied to the other two legal domains. In each domain, the cases are expressed in terms of factors organised into an ADF, from which an executable program can be implemented in a straightforward way by taking advantage of the closeness of the ADF's acceptance conditions to components of an executable program. The proposed method is evaluated with respect to ease of implementation, the efficacy of the resulting program, ease of refinement, transparency of the reasoning, and transferability across legal domains. This evaluation suggests ways of improving the decisions by incorporating the case facts and by considering justification and reasoning using portions of precedents. The final result is ANGELIC (ADF for kNowledGe Encapsulation of Legal Information from Cases), a method for producing programs that decide cases with a high degree of accuracy in multiple domains.
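    A minimal sketch of the core ANGELIC idea follows: each non-leaf ADF node carries an acceptance condition over its children, so a domain theory translates almost directly into executable code. The toy nodes below are invented and do not reproduce any of the thesis's actual domain ADFs.

```python
# Acceptance conditions: each node maps to a boolean function over the set
# of currently accepted nodes (base-level factors plus derived nodes).
ADF = {
    "plaintiff_wins": lambda a: "possession" in a and "no_justification" not in a,
    "possession":     lambda a: "bodily_seized" in a or "mortally_wounded" in a,
}

def decide(base_factors):
    """Evaluate acceptance conditions bottom-up until a fixpoint is reached."""
    accepted = set(base_factors)
    changed = True
    while changed:
        changed = False
        for node, condition in ADF.items():
            if node not in accepted and condition(accepted):
                accepted.add(node)
                changed = True
    return accepted

# A case is described as a set of base-level factors. The negative test above
# is safe here because "no_justification" is a base factor the loop never adds.
print("plaintiff_wins" in decide({"mortally_wounded"}))   # True
```

    The closeness of the acceptance conditions to ordinary boolean expressions is what makes the translation from ADF to program straightforward.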

    Amicus Curiae before International Courts and Tribunals

    Amicus curiae participation in international courts has been growing steadily since the late 1990s despite a lack of clarity about the concept's nature, function and utility in international dispute settlement. Does amicus curiae infuse international judicial proceedings with alternative views, including the public interest in a case, as often advocated by NGOs? Does it increase the legitimacy and transparency of international dispute settlement, or the coherence of international law? Or is it an unhelpful impostor that impedes negotiated solutions and derails proceedings at the expense of the parties in order to advance its own agenda? By way of an empirical-comparative analysis of the laws and practices of the ICJ, the ITLOS, the ECtHR, the IACtHR, the ACtHPR, WTO panels and the Appellate Body, and investment arbitration, the dissertation examines the status quo of amicus curiae before international courts and tribunals to determine whether current amicus curiae practice adds value to international proceedings and international dispute settlement in general. The dissertation shows that there is no common concept of international amicus curiae, but that amici curiae before the international courts examined share a few characteristics. A proposed functional systematization highlights overlapping and diverging uses of the concept before international courts and helps scholars and practitioners assess its opportunities and limits. Analysis of the concept's current regulatory framework and its substantive effectiveness reveals a hesitation, in particular by courts with a strong adversarial tradition, to take into account the views of a non-party, despite the positive experience with the concept in regional human rights courts. The dissertation concludes that neither the expectations nor the concerns attached to amicus curiae participation in international proceedings have materialized. It argues that the concept can contribute to improved decisions and decision-making in international dispute settlement if regulated and used properly.

    Discurso Web, Modelos Teóricos e Sistemas de Argumentação: Implicações para a Tomada de Decisão

    The growing importance that organizations attach to decision-making in the web context requires more efficient mechanisms to be defined and implemented to support its activities. Social networks, as collaboration spaces, enable social actors to interact regardless of their location. An important aspect of virtual environments is that, one way or another, argumentative discussion takes place in them, and the web is therefore an excellent tool for supporting the representation, dissemination and retrieval of knowledge, as its ubiquity and openness can enhance argumentative expression. By capturing and analyzing web discourse, organizations can obtain relevant information reflecting opinions and viewpoints that may be useful for decision-making. To achieve this, and since words and phrases are always charged with meanings and intentions that can be defined in different ways, it is important to identify and understand the structure of the actors' social network as well as the communication, context, language, linguistics and content of web discourse. Argumentation focuses on dialogues that aim to increase or decrease the acceptance of a point of view in order to reach a conclusion through logical reasoning. In this context, we have seen significant growth in web-centric collaboration systems that act as argumentation facilitators. Within these systems, different viewpoints can be presented, challenged and evaluated, and collaborative decision-making is carried out through debates and negotiations among groups of individuals. Given this new paradigm, this study aims to consolidate existing knowledge on web discourse, argumentation and decision-making, theoretical models of argumentation, and computational argumentation systems for the web.
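    As one concrete example of the computation such an argumentation system performs, the sketch below computes the grounded extension of a Dung-style abstract argumentation framework, i.e. the sceptically accepted viewpoints given a set of arguments and attacks among them. The argument labels are invented for illustration.

```python
def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers have all been defeated."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:   # every attacker is defeated: accept
                accepted.add(a)
                changed = True
        # Everything attacked by an accepted argument becomes defeated.
        newly_defeated = {y for (x, y) in attacks if x in accepted} - defeated
        if newly_defeated:
            defeated |= newly_defeated
            changed = True
    return accepted

# b attacks a; c attacks b -> c defends a, so {a, c} are accepted.
print(grounded_extension({"a", "b", "c"}, {("b", "a"), ("c", "b")}))
```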

    Scalable Quality Assessment of Linked Data

    In a world where the information economy is booming, poor data quality can have adverse consequences, including social and economic problems such as loss of revenue. Furthermore, data-driven industries do not just rely on their own (proprietary) data silos, but also continuously aggregate data from different sources. This aggregated data may then be redistributed to “data lakes”. However, such data (including Linked Data) is not necessarily checked for quality prior to its use. Large volumes of data are exchanged between organisations in standard, interoperable formats and published as Linked Data to facilitate re-use. Some organisations, such as government institutions, go a step further and open their data; the Linked Open Data Cloud is a witness to this. However, as with data in data lakes, it is challenging to determine the quality of this heterogeneous data and, subsequently, to make this information explicit to data consumers. Despite the availability of a number of tools and frameworks for assessing Linked Data quality, current solutions do not offer a holistic approach that both enables the assessment of datasets and provides consumers with quality results they can use to find, compare and rank datasets by fitness for use. In this thesis we investigate methods to assess the quality of (possibly large) linked datasets so that data consumers can use the assessment results to find datasets that are fit for use, that is, to find the right dataset for the task at hand. The benefits of quality assessment are two-fold: (1) data consumers need not rely blindly on subjective measures to choose a dataset, but can base their choice on multiple factors such as the intrinsic structure of the dataset, fostering trust and reputation between publishers and consumers on more objective foundations; and (2) data publishers are encouraged to improve their datasets so that they are re-used more. Furthermore, our approach scales to large datasets. In this regard, we also look into improving the efficiency of quality metrics using various approximation techniques; the trade-off is that consumers do not get the exact quality value, but a close estimate that still provides the required guidance towards fitness for use. The central point of this thesis is not data quality improvement; nonetheless, we still need to understand what data quality means to the consumers who are searching for potential datasets. The thesis examines the challenges of detecting quality problems in linked datasets and presents quality results in a standardised, machine-readable and interoperable format that agents can interpret to help human consumers identify datasets fit for use. Our approach is consumer-centric: it aims (1) to make quality assessment as easy as possible, allowing stakeholders, possibly non-experts, to identify and easily define quality metrics and to initiate the assessment; and (2) to make the results (quality metadata and quality reports) easy for stakeholders to understand, or at least interoperable with other systems so as to facilitate a data quality pipeline. Finally, our framework is used to assess the quality of a number of heterogeneous (large) linked datasets, where each assessment returns a quality metadata graph that can be consumed by agents as Linked Data. In turn, these agents can intelligently interpret a dataset's quality with regard to multiple dimensions and observations, and thus provide further insight to consumers regarding its fitness for use.
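    A minimal sketch of the approximation idea, under the assumption that a simple ratio-style metric is being estimated: instead of scanning an entire (possibly huge) dataset, a uniform reservoir sample yields a close estimate of the metric at a fraction of the cost. The toy triple stream and the "typed-literal" metric are invented for illustration, not taken from the thesis.

```python
import random

def reservoir_sample(stream, k, seed=42):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            j = rng.randint(0, i)
            if j < k:
                sample[j] = item
    return sample

# Toy triple stream; in practice this would iterate over a large RDF dump.
# The object is a (kind, is_typed) pair; two thirds of literals are "typed".
triples = [("s%d" % i, "p", ("lit", i % 3 != 0)) for i in range(100_000)]

sample = reservoir_sample(iter(triples), k=1_000)
typed = sum(1 for (_, _, (kind, is_typed)) in sample if is_typed)
estimate = typed / len(sample)
print(f"estimated typed-literal ratio: {estimate:.3f} (true value: 0.667)")
```

    The estimate lands close to the true ratio after touching only one percent of the stream, which is the scalability trade-off the abstract describes: a near-exact value that is still good enough to guide fitness-for-use decisions.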