
    Abstract verification and debugging of constraint logic programs

    The technique of Abstract Interpretation [13] has allowed the development of sophisticated program analyses which are provably correct and practical. The semantic approximations produced by such analyses have traditionally been applied to optimization during program compilation. However, novel and promising applications of semantic approximations have recently been proposed in the more general context of program verification and debugging [3], [10], [7].
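
    As a concrete picture of what a semantic approximation is, the sketch below interprets integer addition over the classic sign domain: a minimal, self-contained instance of abstract interpretation. The domain and operations are illustrative assumptions, not the analyses developed in the paper.

```python
# Minimal abstract interpretation over the sign domain {-, 0, +}.
# Abstract values are sets of signs; larger sets mean less precision.

TOP = frozenset({"-", "0", "+"})  # "could be anything"

def alpha(n: int) -> frozenset:
    """Abstraction: the sign of a concrete integer."""
    return frozenset({"-" if n < 0 else "0" if n == 0 else "+"})

def abs_add(a: frozenset, b: frozenset) -> frozenset:
    """Abstract addition: a sound over-approximation of + on signs."""
    out = set()
    for sa in a:
        for sb in b:
            if sa == "0":
                out.add(sb)
            elif sb == "0":
                out.add(sa)
            elif sa == sb:
                out.add(sa)
            else:
                out |= TOP  # "+" plus "-" could have any sign
    return frozenset(out)

# Precision where possible: positive + positive is certainly positive.
assert abs_add(alpha(3), alpha(5)) == frozenset({"+"})

# Soundness everywhere: the abstract result covers every concrete result.
for x in range(-5, 6):
    for y in range(-5, 6):
        assert alpha(x + y) <= abs_add(alpha(x), alpha(y))
```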

    Intensional Updates


    Conformance Checking Based on Multi-Perspective Declarative Process Models

    Process mining is a family of techniques that aim at analyzing business process execution data recorded in event logs. Conformance checking is a branch of this discipline embracing approaches for verifying whether the behavior of a process, as recorded in a log, is in line with some expected behavior provided in the form of a process model. The majority of these approaches require the input process model to be procedural (e.g., a Petri net). However, in turbulent environments, characterized by high variability, the process behavior is less stable and predictable. In these environments, procedural process models are less suitable for describing a business process. Declarative specifications, working under an open-world assumption, allow the modeler to express several possible execution paths as a compact set of constraints. Any process execution that does not contradict these constraints is allowed. One of the open challenges in the context of conformance checking with declarative models is the capability of supporting multi-perspective specifications. In this paper, we close this gap by providing a framework for conformance checking based on MP-Declare, a multi-perspective version of the declarative process modeling language Declare. The approach has been implemented in the process mining tool ProM and has been evaluated in three real-life case studies.
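
    As a concrete picture of what a multi-perspective constraint involves, the sketch below checks a single Declare "response" constraint ("every A must eventually be followed by a B") augmented with a data condition, a correlation condition, and a time window, in the spirit of MP-Declare. The event encoding and the example conditions are illustrative assumptions, not the ProM implementation.

```python
from dataclasses import dataclass

@dataclass
class Event:
    activity: str
    timestamp: float   # e.g., seconds since case start (time perspective)
    payload: dict      # event attributes (data perspective)

def check_response(trace, activation, target, act_cond, corr_cond, max_delay):
    """Classify activations of response(activation, target) as fulfilled or violated."""
    fulfilled, violated = [], []
    for i, e in enumerate(trace):
        if e.activity != activation or not act_cond(e):
            continue  # not an activation under the data condition
        # Look for a correlated target event later in the trace.
        ok = any(
            f.activity == target
            and corr_cond(e, f)                              # data correlation
            and 0 <= f.timestamp - e.timestamp <= max_delay  # time window
            for f in trace[i + 1:]
        )
        (fulfilled if ok else violated).append(e)
    return fulfilled, violated

# Usage: "every order over 100 must be checked within 24h by the same clerk".
trace = [
    Event("submit order", 0, {"amount": 250, "clerk": "ann"}),
    Event("check order", 3600, {"amount": 250, "clerk": "ann"}),
]
ok, bad = check_response(
    trace, "submit order", "check order",
    act_cond=lambda e: e.payload["amount"] > 100,
    corr_cond=lambda a, t: a.payload["clerk"] == t.payload["clerk"],
    max_delay=24 * 3600,
)
print(len(ok), "fulfilled,", len(bad), "violated")  # 1 fulfilled, 0 violated
```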

    Constraint Handling Rules with Binders, Patterns and Generic Quantification

    Constraint Handling Rules provide descriptions for constraint solvers. However, they fall short when those constraints specify some binding structure, like higher-rank types in a constraint-based type inference algorithm. In this paper, the term syntax of constraints is replaced by λ-tree syntax, in which binding is explicit, and a new generic quantifier ∇ is introduced, which is used to create new fresh constants. Comment: Paper presented at the 33rd International Conference on Logic Programming (ICLP 2017), Melbourne, Australia, August 28 to September 1, 2017. 16 pages, LaTeX, no PDF figures.
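
    The core idea can be pictured outside of CHR itself. The sketch below, a minimal Python illustration rather than the paper's system, encodes terms in λ-tree style with explicit binders and mints a fresh constant whenever a binder is opened, which is the essential behavior the ∇ quantifier provides; all names and the term encoding are assumptions made for illustration.

```python
import itertools

_fresh = itertools.count()

def fresh_constant(hint="c"):
    """Mint a constant guaranteed distinct from all others (the role of ∇)."""
    return f"{hint}#{next(_fresh)}"

# λ-tree terms: ("lam", var, body) | ("app", f, a) | str (variable/constant).
def substitute(term, var, repl):
    """Capture-avoiding substitution of repl for var."""
    if isinstance(term, str):
        return repl if term == var else term
    if term[0] == "lam":
        _, v, body = term
        return term if v == var else ("lam", v, substitute(body, var, repl))
    _, f, a = term
    return ("app", substitute(f, var, repl), substitute(a, var, repl))

def open_binder(term):
    """Descend under a λ by replacing the bound variable with a fresh constant."""
    assert term[0] == "lam"
    _, var, body = term
    c = fresh_constant(var)
    return c, substitute(body, var, c)

# Opening λx. f x twice yields two distinct constants, so constraints
# collected under different binders can never confuse their scopes.
t = ("lam", "x", ("app", "f", "x"))
print(open_binder(t))  # ('x#0', ('app', 'f', 'x#0'))
print(open_binder(t))  # ('x#1', ('app', 'f', 'x#1'))
```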

    BCAUS Project description and consideration of separation of data and control

    The commonly stated claims that data may be segregated from program control in generic expert system shells, and that such tools support straightforward knowledge representation, were examined. The ideal of separating data from program control in expert systems is difficult to realize for a variety of reasons. One approach to achieving this goal is to integrate hybrid collections of specialized shells and tools instead of producing custom systems built with a single all-purpose expert system tool. Aspects of these issues are examined in the context of a specific diagnostic expert system application, the Backup Control Mode Analysis and Utility System (BCAUS), being developed for the Gamma Ray Observatory (GRO) spacecraft. The project and the knowledge gained in working on the project are described.
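
    To make the data/control distinction concrete, here is a minimal sketch of a rule interpreter in which the diagnostic knowledge is plain data and a generic loop supplies all control. The rules and facts are invented for illustration and are not taken from the actual BCAUS system.

```python
# Data: declarative diagnostic rules, editable without touching the engine.
# (Hypothetical rules for illustration only.)
RULES = [
    {"if": {"bus_voltage": "low", "load": "nominal"}, "then": "suspect battery"},
    {"if": {"bus_voltage": "low", "load": "high"},    "then": "suspect short"},
]

def forward_chain(facts: dict, rules: list) -> set:
    """Control: a generic match-and-fire loop, independent of the domain."""
    conclusions = set()
    for rule in rules:
        if all(facts.get(k) == v for k, v in rule["if"].items()):
            conclusions.add(rule["then"])
    return conclusions

print(forward_chain({"bus_voltage": "low", "load": "high"}, RULES))
# {'suspect short'}
```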

    Practical Run-time Checking via Unobtrusive Property Caching

    The use of annotations, referred to as assertions or contracts, to describe program properties for which run-time tests are to be generated has become frequent in dynamic programming languages. However, the frameworks proposed to support such run-time testing generally incur high time and/or space overheads over standard program execution. We present an approach for reducing this overhead that is based on the use of memoization to cache intermediate results of check evaluation, avoiding repeated checking of previously verified properties. Compared to approaches that reduce checking frequency, our proposal has the advantage of being exhaustive (i.e., all tests are checked at all points) while still being much more efficient than standard run-time checking. Compared to the limited previous work on memoization, it performs the task without requiring modifications to data structure representation or checking code. While the approach is general and system-independent, we present it for concreteness in the context of the Ciao run-time checking framework, which allows us to provide an operational semantics with checks and caching. We also report on a prototype implementation and provide experimental results showing that using a relatively small cache leads to significant decreases in run-time checking overhead. Comment: 30 pages, 1 table, 170 figures; added appendix with plots; to appear in Theory and Practice of Logic Programming (TPLP), Proceedings of ICLP 201
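
    The caching idea can be pictured with a minimal sketch (in Python rather than Ciao, and not the paper's actual mechanism): memoize the verdict of a property check so that re-checking the same term at a later program point is a constant-time lookup. Keying on object identity assumes checked terms are not mutated between checks, as with immutable logic-programming terms; all names here are illustrative.

```python
def cached_check(prop, maxsize=1024):
    """Wrap a property checker with a small bounded memo table."""
    cache = {}  # id(term) -> bool
    def check(term):
        key = id(term)
        if key in cache:
            return cache[key]   # previously verified: O(1) lookup
        result = prop(term)     # first time: full traversal of the term
        if len(cache) < maxsize:
            cache[key] = result
        return result
    return check

# Property: "list of integers", an expensive deep check on large terms.
int_list = cached_check(lambda xs: all(isinstance(x, int) for x in xs))

data = list(range(100_000))
assert int_list(data)  # first check walks the whole list
assert int_list(data)  # repeated check is answered from the cache
```

    A real system must also invalidate cache entries when a term can change; the paper handles this inside the run-time checking framework, while this sketch simply assumes immutability.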

    Wittgenstein and the memory debate

    Original article can be found at: http://www.sciencedirect.com/science/journal/0732118X. Copyright Elsevier Ltd. DOI: 10.1016/j.newideapsych.2008.04.015. In this paper, I survey the impact on neuropsychology of Wittgenstein's elucidations of memory. Wittgenstein discredited the storage and imprint models of memory, dissolved the conceptual link between memory and mental images or representations and, upholding the context-sensitivity of memory, made room for a family resemblance concept of memory, where remembering can also amount to doing or saying something. While neuropsychology is still generally under the spell of archival and physiological notions of memory, Wittgenstein's reconceptions can be seen at work in its leading-edge practitioners. However, neuroscientists generally are finding memory difficult to demarcate from other cognitive and noncognitive processes, and I suggest this is largely due to their considering automatic responses as part of memory, termed nondeclarative or implicit memory. Taking my lead from Wittgenstein's On Certainty, I argue that there is only remembering where there is also some kind of mnemonic effort or attention, and therefore that so-called implicit memory is not memory at all, but a basic, noncognitive certainty. Peer reviewed.

    The INTERSPEECH 2013 computational paralinguistics challenge: social signals, conflict, emotion, autism

    The INTERSPEECH 2013 Computational Paralinguistics Challenge provides for the first time a unified test-bed for Social Signals such as laughter in speech. It further introduces conflict in group discussions as a new task, and picks up on autism and its manifestations in speech. Finally, emotion is revisited as a task, albeit with a broader range of twelve emotional states overall. In this paper, we describe these four Sub-Challenges, Challenge conditions, baselines, and a new feature set by the openSMILE toolkit, provided to the participants. Authors: Björn Schuller, Stefan Steidl, Anton Batliner, Alessandro Vinciarelli, Klaus Scherer, Fabien Ringeval, Mohamed Chetouani, Felix Weninger, Florian Eyben, Erik Marchi, Hugues Salamin, Anna Polychroniou, Fabio Valente, Samuel Kim