
    A Brief History of Updates of Answer-Set Programs

    Funding Information: The authors would like to thank José Alferes, Martin Baláz, Federico Banti, Antonio Brogi, Martin Homola, Luís Moniz Pereira, Halina Przymusinska, Teodor C. Przymusinski, and Theresa Swift, with whom they worked on the topic of this paper over the years, as well as Ricardo Gonçalves and Matthias Knorr for valuable comments on an earlier draft of this paper. The authors would also like to thank the anonymous reviewers for their insightful comments and suggestions, which greatly helped us improve this paper. The authors were partially supported by Fundação para a Ciência e Tecnologia through projects FORGET (PTDC/CCI-INF/32219/2017) and RIVER (PTDC/CCI-COM/30952/2017), and strategic project NOVA LINCS (UIDB/04516/2020). Publisher Copyright: © The Author(s), 2022. Published by Cambridge University Press.

    Over the last couple of decades, considerable effort has been devoted to the problem of updating logic programs under the stable model semantics (a.k.a. answer-set programs) or, in other words, the problem of characterising the result of bringing a logic program up to date when the world it describes changes. Whereas the state-of-the-art approaches are guided by the same basic intuitions and aspirations as belief updates in the context of classical logic, they build upon fundamentally different principles and methods, which have prevented a unifying framework that could embrace both belief and rule updates. In this paper, we overview some of the main approaches and results related to answer-set programming updates, while pointing out some of the main challenges that research on this topic has faced.

    Incremental and Modular Context-sensitive Analysis

    Context-sensitive global analysis of large code bases can be expensive, which can make its use impractical during software development. However, there are many situations in which modifications are small and isolated within a few components, and it is desirable to reuse as much of the previous analysis results as possible. This has been achieved to date through incremental global analysis fixpoint algorithms that achieve cost reductions at fine levels of granularity, such as changes in program lines. However, these fine-grained techniques are not directly applicable to modular programs, nor are they designed to take advantage of modular structures. This paper describes, implements, and evaluates an algorithm that performs efficient context-sensitive analysis incrementally on modular partitions of programs. The experimental results show that the proposed modular algorithm achieves significant improvements, in both time and memory consumption, when compared to existing non-modular, fine-grain incremental analysis techniques. Furthermore, thanks to the proposed inter-modular propagation of analysis information, our algorithm also outperforms traditional modular analysis even when analyzing from scratch.

    Comment: 56 pages, 27 figures. To be published in Theory and Practice of Logic Programming (TPLP). v3 corresponds to the extended version of the ICLP 2018 Technical Communication, v4 is the revised version submitted to TPLP, and v5 (this one) is the final author version to be published in TPLP.
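    The core idea can be sketched as a worklist over the module dependency graph: a module is re-analysed only when the summaries it imports may have changed, and its importers are queued only if its own exported summary actually changed. The Prolog fragment below is a minimal illustration of that propagation scheme, not the algorithm of the paper; imports/2, summary/2 and analyse_module/3 are made-up placeholders (a single summary per module, with analyse_module/3 standing in for the intra-modular fixpoint).

        % Sketch of inter-modular incremental propagation (illustrative only).
        :- dynamic summary/2.                         % summary(Module, Summary)

        % imports(Importer, Imported): example module dependency graph.
        imports(main, lists).
        imports(main, strings).
        imports(strings, lists).

        % analyse_module(+M, +ImportedSummaries, -Summary): placeholder for the
        % context-sensitive fixpoint over a single module.
        analyse_module(M, ImportedSummaries, absint(M, ImportedSummaries)).

        % reanalyse(+Worklist): process the changed modules until quiescence.
        reanalyse([]).
        reanalyse([M|Ms]) :-
            findall(S, (imports(M, I), summary(I, S)), ImportedSummaries),
            analyse_module(M, ImportedSummaries, New),
            (   summary(M, New)                       % summary unchanged:
            ->  reanalyse(Ms)                         % nothing to propagate
            ;   retractall(summary(M, _)),
                assertz(summary(M, New)),
                findall(P, imports(P, M), Importers), % queue the modules that
                append(Ms, Importers, Ms1),           % depend on M
                reanalyse(Ms1)
            ).

        % ?- reanalyse([lists]).   % re-analyse lists and whatever
        %                          % transitively depends on it

    Termination of such a loop relies on the module analysis being deterministic and the abstract domain being finite, which is the usual setting for these fixpoint computations.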

    Efficient Groundness Analysis in Prolog

    Boolean functions can be used to express the groundness of, and trace grounding dependencies between, program variables in (constraint) logic programs. In this paper, a variety of issues pertaining to the efficient Prolog implementation of groundness analysis are investigated, focusing on the domain of definite Boolean functions, Def. The systematic design of the representation of an abstract domain is discussed in relation to its impact on the algorithmic complexity of the domain operations; the most frequently called operations should be the most lightweight. This methodology is applied to Def, resulting in a new representation, together with new algorithms for its domain operations utilising previously unexploited properties of Def -- for instance, quadratic-time entailment checking. The iteration strategy driving the analysis is also discussed, and a simple but very effective optimisation of induced magic is described. The analysis can be implemented straightforwardly in Prolog, and the use of a non-ground representation results in an efficient, scalable tool which does not require widening to be invoked, even on the largest benchmarks. An extensive experimental evaluation is given.

    Comment: 31 pages. To appear in Theory and Practice of Logic Programming.
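    A Def formula is a conjunction of definite clauses such as x <- y /\ z, read as "x is ground whenever y and z are ground", and entailment between such formulae reduces to propositional forward chaining: F entails a clause x <- B exactly when x is in the closure of B under the clauses of F. The toy Prolog sketch below illustrates that check on a simple ground representation (variables as atoms, a formula as a list of X-Body clauses); it is not the paper's non-ground representation, and the naive closure is only meant to convey the idea behind the quadratic-time entailment test.

        % entails(+F, +G): F entails G iff F entails every clause of G.
        entails(_, []).
        entails(F, [X-Body|Cs]) :-
            entails_clause(F, X, Body),
            entails(F, Cs).

        % F entails X <- Body iff X is in the closure of Body under F.
        entails_clause(F, X, Body) :-
            closure(F, Body, Closed),
            memberchk(X, Closed).

        % closure(+F, +Set0, -Set): forward chaining until no clause fires.
        closure(F, Set0, Set) :-
            (   member(X-Body, F),
                \+ memberchk(X, Set0),
                all_in(Body, Set0)
            ->  closure(F, [X|Set0], Set)
            ;   Set = Set0
            ).

        all_in([], _).
        all_in([X|Xs], Set) :- memberchk(X, Set), all_in(Xs, Set).

        % ?- entails([x-[y], y-[]], [x-[]]).   % true: y and (x <- y) entail x
        % ?- entails([x-[y]], [x-[]]).         % fails: (x <- y) alone does not entail x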

    Implementing Groundness Analysis with Definite Boolean Functions

    The domain of definite Boolean functions, Def, can be used to express the groundness of, and trace grounding dependencies between, program variables in (constraint) logic programs. In this paper, previously unexploited computational properties of Def are utilised to develop an efficient and succinct groundness analyser that can be coded in Prolog. In particular, entailment checking is used to prevent unnecessary least upper bound calculations. It is also demonstrated that join can be defined in terms of other operations, thereby eliminating code and removing the need for preprocessing formulae to a normal form. This saves space and time. Furthermore, the join can be adapted to straightforwardly implement the downward closure operator that arises in set sharing analyses. Experimental results indicate that the new Def implementation gives favourable results in comparison with BDD-based groundness analyses.
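    The role of the entailment check in the fixpoint loop is easy to illustrate: before joining a newly derived description into the stored one, test whether the stored description already covers it; if so, the more costly least upper bound, and any re-propagation it would trigger, can be skipped. The sketch below shows only that control flow and, to stay self-contained, uses a much simpler groundness domain than Def: a description is just the set of definitely ground variables, entailment is set inclusion and the lub is intersection. The predicate names are made up.

        % update(+Old, +New, -Stored, -Status): fold a new description into
        % the stored one, computing the lub only when it is really needed.
        update(Old, New, Old, unchanged) :-
            entails(New, Old), !.            % cheap test: New adds nothing
        update(Old, New, Lub, changed) :-
            lub(Old, New, Lub).

        % In this toy domain, New entails Old iff every variable that Old
        % records as ground is also ground according to New.
        entails(New, Old) :-
            all_in(Old, New).

        % The lub keeps the variables that are ground in both descriptions.
        lub(Old, New, Lub) :-
            inter(Old, New, Lub).

        all_in([], _).
        all_in([X|Xs], S) :- memberchk(X, S), all_in(Xs, S).

        inter([], _, []).
        inter([X|Xs], S, [X|Zs]) :- memberchk(X, S), !, inter(Xs, S, Zs).
        inter([_|Xs], S, Zs) :- inter(Xs, S, Zs).

        % ?- update([x], [x,y], D, St).   % D = [x], St = unchanged (lub skipped)
        % ?- update([x,y], [x], D, St).   % D = [x], St = changed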

    Distributed on-line safety monitor based on safety assessment model and multi-agent system

    On-line safety monitoring, i.e. the tasks of fault detection and diagnosis, alarm annunciation, and fault controlling, is essential in the operational phase of critical systems. Over the last 30 years, considerable work in this area has resulted in approaches that exploit models of the normal operational behaviour and failure of a system. Typically, these models incorporate on-line knowledge of the monitored system and enable qualitative and quantitative reasoning about the symptoms, causes and possible effects of faults. Recently, monitors that exploit knowledge derived from the application of off-line safety assessment techniques have been proposed. The motivation for that work has been the observation that, in current practice, vast amounts of knowledge derived from off-line safety assessments cease to be useful following the certification and deployment of a system. The concept is potentially very useful. However, the monitors that have been proposed so far are limited in their potential because they are monolithic and centralised, and therefore have limited applicability in systems that have a distributed nature and incorporate large numbers of components that interact collaboratively in dynamic cooperative structures. On the other hand, recent work on multi-agent systems shows that the distributed reasoning paradigm could cope with the nature of such systems.

    This thesis proposes a distributed on-line safety monitor which combines the benefits of using knowledge derived from off-line safety assessments with the benefits of the distributed reasoning of a multi-agent system. The monitor consists of a multi-agent system incorporating a number of Belief-Desire-Intention (BDI) agents which operate on a distributed monitoring model that contains reference knowledge derived from off-line safety assessments. Guided by the monitoring model, agents are hierarchically deployed to observe the operational conditions across various levels of the hierarchy of the monitored system and work collaboratively to integrate and deliver the safety monitoring tasks. These tasks include detection of parameter deviations, diagnosis of underlying causes, alarm annunciation and application of fault corrective measures. In order to avoid alarm avalanches and latent misleading alarms, the monitor optimises alarm annunciation by suppressing unimportant and false alarms, filtering spurious sensory measurements and incorporating helpful alarm information that is announced at the correct time. The thesis discusses the relevant literature, describes the structure and algorithms of the proposed monitor, and, through experiments, shows the benefits of the monitor, which range from increased composability, extensibility and flexibility of on-line safety monitoring to, ultimately, an effective and cost-effective monitor. The approach is evaluated in two case studies, and in the light of the results the thesis discusses both its limitations and its relative merits compared to earlier safety monitoring concepts.
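    The reference knowledge such a monitor evaluates is naturally expressed as rules over sensor readings. The Prolog fragment below is a self-contained toy illustration of two of the tasks described above (deviation detection and alarm suppression), not the monitoring model or the agent algorithms of the thesis; every sensor name, threshold, reading and causal link in it is made up.

        % expected_range(Sensor, Low, High) and reading(Sensor, Value): toy data.
        expected_range(coolant_temp, 20, 90).
        expected_range(coolant_flow, 10, 50).
        expected_range(oil_pressure, 30, 80).

        reading(coolant_temp, 97).
        reading(coolant_flow, 4).
        reading(oil_pressure, 55).

        % A reading far outside any plausible value is filtered as spurious,
        % standing in for the monitor's filtering of sensory measurements.
        spurious(Sensor, Value) :-
            expected_range(Sensor, _, High),
            Value > 10 * High.

        % A deviation is detected when a non-spurious reading leaves its range.
        deviation(Sensor, Value, high) :-
            reading(Sensor, Value),
            expected_range(Sensor, _, High),
            Value > High,
            \+ spurious(Sensor, Value).
        deviation(Sensor, Value, low) :-
            reading(Sensor, Value),
            expected_range(Sensor, Low, _),
            Value < Low,
            \+ spurious(Sensor, Value).

        % caused_by(Effect, Cause): the high coolant temperature is a known
        % consequence of low coolant flow, so only the root cause is announced.
        caused_by(coolant_temp, coolant_flow).

        announce(Sensor, Direction) :-
            deviation(Sensor, _, Direction),
            \+ ( caused_by(Sensor, Cause), deviation(Cause, _, _) ).

        % ?- announce(S, D).   % S = coolant_flow, D = low; the coolant_temp
        %                      % alarm is suppressed as a consequence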