
    Logic-based Technologies for Intelligent Systems: State of the Art and Perspectives

    Together with the disruptive development of modern sub-symbolic approaches to artificial intelligence (AI), symbolic approaches to classical AI are re-gaining momentum, as more and more researchers exploit their potential to make AI more comprehensible, explainable, and therefore trustworthy. Since logic-based approaches lie at the core of symbolic AI, summarizing their state of the art is of paramount importance now more than ever, in order to identify trends, benefits, key features, gaps, and limitations of the techniques proposed so far, as well as to identify promising research perspectives. Along this line, this paper provides an overview of logic-based approaches and technologies by sketching their evolution and pointing out their main application areas. Future perspectives for the exploitation of logic-based technologies are discussed as well, in order to identify those research fields that deserve more attention, considering the areas that already exploit logic-based approaches as well as those that are more likely to adopt logic-based approaches in the future.
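    As a minimal sketch of the flavor of reasoning the survey covers (the rules and facts below are invented for illustration, not taken from the paper), the following Python snippet does forward chaining over Horn rules, with each derived atom recording the rule that produced it; this simple derivation trace is one reason logic-based approaches are regarded as comprehensible and explainable.

```python
# Illustrative forward chaining over Horn rules (facts and rules are invented).
# Each derived atom keeps the rule that produced it, giving an explanation trace.
rules = [
    ({"penguin"}, "bird"),        # body -> head
    ({"bird"}, "has_wings"),
]
facts = {"penguin"}

derived = dict.fromkeys(facts, "given")   # atom -> justification
changed = True
while changed:
    changed = False
    for body, head in rules:
        if body <= derived.keys() and head not in derived:
            derived[head] = f"rule {sorted(body)} -> {head}"
            changed = True

for atom, why in derived.items():
    print(f"{atom}: {why}")
# penguin: given
# bird: rule ['penguin'] -> bird
# has_wings: rule ['bird'] -> has_wings
```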

    Understanding and Evaluating Assurance Cases

    Assurance cases are a method for providing assurance for a system by giving an argument to justify a claim about the system, based on evidence about its design, development, and tested behavior. In comparison with assurance based on guidelines or standards (which essentially specify only the evidence to be produced), the chief novelty in assurance cases is provision of an explicit argument. In principle, this can allow assurance cases to be more finely tuned to the specific circumstances of the system, and more agile than guidelines in adapting to new techniques and applications. The first part of this report (Sections 1-4) provides an introduction to assurance cases. Although this material should be accessible to all those with an interest in these topics, the examples focus on software for airborne systems, traditionally assured using the DO-178C guidelines and its predecessors. A brief survey of some existing assurance cases is provided in Section 5. The second part (Section 6) considers the criteria, methods, and tools that may be used to evaluate whether an assurance case provides sufficient confidence that a particular system or service is fit for its intended use. An assurance case cannot provide unequivocal "proof" for its claim, so much of the discussion focuses on the interpretation of such less-than-definitive arguments, and on methods to counteract confirmation bias and other fallibilities in human reasoning.
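    The report describes assurance cases in terms of claims, arguments, and evidence but does not prescribe a data model; the Python sketch below is a hypothetical claim-argument-evidence structure, with invented names and fields, showing how support for a top-level claim can be traced through sub-claims down to evidence.

```python
# Hypothetical claim-argument-evidence (CAE) structure for an assurance case.
# Names and fields are illustrative, not taken from the report.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    description: str                       # e.g. a test report or review record


@dataclass
class Claim:
    statement: str                         # the property being claimed
    argument: str = ""                     # why sub-claims/evidence support it
    subclaims: List["Claim"] = field(default_factory=list)
    evidence: List[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A leaf claim needs evidence; an interior claim needs all sub-claims supported."""
        if not self.subclaims:
            return bool(self.evidence)
        return all(c.is_supported() for c in self.subclaims)


top = Claim(
    statement="The software performs its intended function in its operating environment",
    argument="Decompose by functional requirement",
    subclaims=[
        Claim("Requirement R1 is correctly implemented",
              evidence=[Evidence("Unit and integration test results for R1")]),
        Claim("Requirement R2 is correctly implemented"),   # no evidence yet
    ],
)
print(top.is_supported())   # False: R2 has neither sub-claims nor evidence
```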

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed in the past decades, with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
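    As a toy illustration of the problem setting (not the paper's operators): the Python sketch below deletes whole sentences from a small propositional knowledge base until an unwanted consequence is no longer entailed; pseudo-contractions and gentle repairs aim to do better by weakening sentences instead of deleting them. The knowledge base and the brute-force entailment check are invented for illustration.

```python
# Toy propositional setting (invented, not the paper's formalism): sentences are
# Boolean functions of a model, entailment is checked by enumerating models, and
# the "repair" crudely deletes whole sentences until the unwanted consequence no
# longer follows. Weakening sentences instead would preserve more information.
from itertools import product


def entails(kb, formula, atoms):
    """KB |= formula iff formula holds in every model satisfying all of KB."""
    for values in product([True, False], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if all(s(model) for s in kb) and not formula(model):
            return False
    return True


atoms = ["penguin", "bird", "flies"]
kb = [
    lambda m: not m["penguin"] or m["bird"],   # penguins are birds
    lambda m: not m["bird"] or m["flies"],     # birds fly
    lambda m: m["penguin"],                    # Tweety is a penguin
]
unwanted = lambda m: m["flies"]                # "Tweety flies" should not follow

repaired = list(kb)
for sentence in kb:
    if not entails(repaired, unwanted, atoms): # unwanted consequence gone?
        break
    repaired.remove(sentence)                  # delete a whole sentence

print(entails(kb, unwanted, atoms))            # True: original KB entails "flies"
print(entails(repaired, unwanted, atoms))      # False after deleting a sentence
```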

    How much of commonsense and legal reasoning is formalizable? A review of conceptual obstacles

    Fifty years of effort in artificial intelligence (AI) and the formalization of legal reasoning have produced both successes and failures. Considerable success in organizing and displaying evidence and its interrelationships has been accompanied by failure to achieve the original ambition of AI as applied to law: fully automated legal decision-making. The obstacles to formalizing legal reasoning have proved to be the same ones that make the formalization of commonsense reasoning so difficult, and are most evident where legal reasoning has to meld with the vast web of ordinary human knowledge of the world. Underlying many of the problems is the mismatch between the discreteness of symbol manipulation and the continuous nature of imprecise natural language, of degrees of similarity and analogy, and of probabilities.

    Implicatures and Discourse Structure

    One of the characteristic marks of Gricean implicatures in general, and scalar implicatures in particular, examples of which are given in (1), is that they are the result of a defeasible inference. (1a) John had some of the cookies. (1b) John had some of the cookies. In fact he had them all. (1a) invites the inference that John didn't have all the cookies, an inference that can be defeated by additional information, as in (1b). Scalar inferences like that in (1a) thus depend upon some sort of nonmonotonic reasoning over semantic contents. They share this characteristic of defeasibility with inferences that result in the presence of discourse relations that link discourse segments together into a discourse structure for a coherent text or dialogue; call these inferences discourse or D-inferences. I have studied these inferences about discourse structure, their effects on content, and how they are computed in the theory known as Segmented Discourse Representation Theory (SDRT). In this paper I investigate how the tools used to infer discourse relations apply to what Griceans and others call scalar or quantity implicatures. The benefits of this investigation are threefold: at the theoretical level, we have a unified and relatively simple framework for computing defeasible inferences of both the quantity and discourse-structure varieties; further, we can capture what's right about the intuitions of so-called "localist" views about scalar implicatures; finally, this framework permits us to investigate how D-inferences and scalar inferences might interact, in particular how discourse structure might trigger scalar inferences, thus explaining the variability (Chemla, 2008) or even non-existence of embedded implicatures noted recently (e.g., Geurts and Pouscoulous, 2009), and their occasional non-cancellability. The view of scalar inferences that emerges from this study is also rather different from the way both localists and Neo-Griceans conceive of them. Both localists and Neo-Griceans view implicatures as emerging from pragmatic reasoning processes that are strictly separated from the calculation of semantic values; where they differ is at what level the pragmatic implicatures are calculated. Localists take them to be calculated in parallel with semantic composition, whereas Neo-Griceans take them to have as input the complete semantic content of the assertion. My view is that scalar inferences depend on discourse structure and a larger view of semantic content in which semantics and pragmatics interact in a complex way to produce an interpretation of an utterance or a discourse.
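    A minimal sketch of the defeasibility pattern in (1a)/(1b), not an implementation of SDRT or of any particular scalar theory: the invented Python function below tentatively adds "not all" whenever "some" is asserted, and withholds that conclusion when the discourse also asserts "all".

```python
# Invented sketch of a defeasible "some -> not all" inference (not SDRT): the
# scalar conclusion is drawn only when nothing in the discourse defeats it.
def interpret(discourse):
    """Return the conclusions licensed by a list of utterances."""
    facts = set(discourse)
    conclusions = set(facts)
    for fact in facts:
        if fact.startswith("some(") and fact.replace("some(", "all(", 1) not in facts:
            # Defeasible scalar inference: "some" normally suggests "not all".
            conclusions.add("not " + fact.replace("some(", "all(", 1))
    return conclusions


print(interpret(["some(cookies)"]))
# {'some(cookies)', 'not all(cookies)'}  -- inference goes through, as in (1a)
print(interpret(["some(cookies)", "all(cookies)"]))
# {'some(cookies)', 'all(cookies)'}      -- inference defeated, as in (1b)
```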