9 research outputs found

    Extending the Finite Domain Solver of GNU Prolog

    No full text
    This paper describes three significant extensions of the Finite Domain solver of GNU Prolog. First, the solver now supports negative integers. Second, the solver detects and prevents integer overflows from occurring. Third, the internal representation of sparse domains has been redesigned to overcome its previous limitations. A preliminary performance evaluation shows a limited slowdown factor with respect to the initial solver. This factor is largely counterbalanced by the new possibilities and the robustness of the solver. Furthermore, these results are preliminary and we propose some directions for limiting this overhead.
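
    As an illustration only (not code from the paper), a query the extended solver can now handle directly, using the standard GNU Prolog FD predicates fd_domain/3, fd_labeling/1 and the #=/#> constraints, might look like this:

        % Sketch: negative bounds in a finite domain, assuming the paper's
        % extended solver; the original solver only handled domains over
        % non-negative integers.
        solve(X, Y) :-
            fd_domain([X, Y], -10, 10),   % domain now spans negative integers
            X + Y #= -3,                  % constraint with a negative constant
            X #> Y,
            fd_labeling([X, Y]).

    A call such as ?- solve(X, Y). then enumerates solutions like X = -1, Y = -2.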

    Domain-specific languages in Prolog for declarative expert knowledge in rules and ontologies

    Get PDF
    Declarative if–then rules have proven very useful in many applications of expert systems. They can be managed in deductive databases and evaluated using the well-known forward-chaining approach. For domain experts, however, the syntax of rules becomes complicated quickly, and many different knowledge representation formalisms already exist. Expert knowledge is often acquired in story form using interviews. In this paper, we discuss its representation by defining domain-specific languages (DSLs) for declarative expert rules. They can be embedded in Prolog systems as internal DSLs using term expansion, and as external DSLs using definite clause grammars and quasi-quotations for more sophisticated syntaxes. Based on the declarative rules and the integration with the Prolog-based deductive database system DDbase, multiple rules acquired in practical case studies can be combined, compared, graphically analysed by domain experts, and evaluated, resulting in an extensible system for expert knowledge. As a result, the actual modeling DSL becomes executable; the declarative forward-chaining evaluation of deductive databases can be understood by the domain experts. Our DSL for rules can be further improved by integrating ontologies and rule annotations.
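
    To make the term-expansion mechanism concrete, here is a minimal internal-DSL sketch (an invented if/then syntax, not the paper's actual rule language; runs as-is in SWI-Prolog):

        % Minimal internal-DSL sketch using term expansion.
        :- op(1180, xfx, then).
        :- op(1160, fx,  if).

        % term_expansion/2 rewrites each DSL rule into an ordinary Prolog
        % clause at load time, so the DSL becomes directly executable.
        term_expansion((if Cond then Concl), (Concl :- Cond)).

        % An expert rule written in the DSL:
        if (bird(X), \+ penguin(X)) then flies(X).

        bird(tweety).

    After loading, ?- flies(tweety). succeeds, because the DSL rule was expanded into the clause flies(X) :- bird(X), \+ penguin(X).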

    Sort-based grouping and aggregation

    Full text link
    Database query processing requires algorithms for duplicate removal, grouping, and aggregation. Three algorithms exist: in-stream aggregation is most efficient by far but requires sorted input; sort-based aggregation relies on external merge sort; and hash aggregation relies on an in-memory hash table plus hash partitioning to temporary storage. Cost-based query optimization chooses which algorithm to use based on several factors including input and output sizes, the sort order of the input, and the need for sorted output. For example, hash-based aggregation is ideal for small output (e.g., TPC-H Query 1), whereas sorting the entire input and aggregating after sorting are preferable when both aggregation input and output are large and the output needs to be sorted for a subsequent operation such as a merge join. Unfortunately, the size information required for a sound choice is often inaccurate or unavailable during query optimization, leading to sub-optimal algorithm choices. To address this challenge, this paper introduces a new algorithm for sort-based duplicate removal, grouping, and aggregation. The new algorithm always performs at least as well as both traditional hash-based and traditional sort-based algorithms. It can serve as a system's only aggregation algorithm for unsorted inputs, thus preventing erroneous algorithm choices. Furthermore, the new algorithm produces sorted output that can speed up subsequent operations. Google's F1 Query uses the new algorithm in production workloads that aggregate petabytes of data every day.
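
    The contrast between in-stream and sort-based aggregation can be sketched in a few lines of Prolog (toy code with invented predicate names; real engines use external merge sort and operate on relations, not in-memory lists):

        % In-stream aggregation: a single pass, but requires key-sorted input.
        sorted_sum([], []).
        sorted_sum([K-V|Pairs], [K-Sum|Groups]) :-
            take_group(K, Pairs, Vs, Rest),
            sum_list([V|Vs], Sum),
            sorted_sum(Rest, Groups).

        take_group(K, [K-V|Pairs], [V|Vs], Rest) :-
            !, take_group(K, Pairs, Vs, Rest).
        take_group(_, Rest, [], Rest).

        % Sort-based aggregation: sort first (external merge sort in a real
        % engine), then aggregate in-stream.
        sort_aggregate(Pairs, Groups) :-
            msort(Pairs, Sorted),          % msort/2 keeps duplicates
            sorted_sum(Sorted, Groups).

    For example, ?- sort_aggregate([b-1, a-2, b-3], G). yields G = [a-2, b-4]. The new algorithm described above removes the need to choose between this strategy and hashing up front.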

    Abstract Diagnosis for tccp using a Linear Temporal Logic

    Full text link
    Automatic techniques for program verification usually suffer from the well-known state explosion problem. Most of the classical approaches are based on browsing the structure of some form of model (which represents the behavior of the program) to check if a given specification is valid. This implies that a part of the model has to be built, and sometimes the needed fragment is quite huge. In this work, we provide an alternative automatic decision method to check whether a given property, specified in a linear temporal logic, is valid w.r.t. a tccp program. Our proposal (based on abstract interpretation techniques) does not require building any model at all. Our results guarantee correctness but, as usual when using an abstract semantics, completeness is lost.
    Comini, M.; Titolo, L.; Villanueva García, A. (2014). Abstract Diagnosis for tccp using a Linear Temporal Logic. http://hdl.handle.net/10251/3569
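
    For illustration only (not a formula from the paper), a property of the kind the method checks could be the LTL response formula

        $\square(\mathit{request} \rightarrow \lozenge\,\mathit{answer})$

    stating that every request is eventually answered; the decision method evaluates such formulas against an abstraction of the program's semantics rather than against an explicitly built model.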

    An Abstract Interpretation Framework for Diagnosis and Verification of Timed Concurrent Constraint Languages

    Get PDF
    In this thesis, we propose a semantic framework for tccp based on abstract interpretation with the main purpose of formally verifying and debugging tccp programs. A key point for the efficacy of the resulting methodologies is the adequacy of the concrete semantics. Thus, in this thesis, much effort has been devoted to the development of a suitable small-step denotational semantics for the tccp language to start with. Our denotational semantics models precisely the small-step behavior of tccp and is suitable to be used within the abstract interpretation framework. Namely, it is defined in a compositional and bottom-up way, it is as condensed as possible (it does not contain redundant elements), and it is goal-independent (its calculus does not depend on the semantic evaluation of a specific initial agent). Another contribution of this thesis is the definition (by abstraction of our small-step denotational semantics) of a big-step denotational semantics that abstracts away from the information about the evolution of the state and keeps only the first and the last (if it exists) state. We show that this big-step semantics is essentially equivalent to the input-output semantics. In order to fulfill our goal of formally validating tccp programs, we build different approximations of our small-step denotational semantics by using standard abstract interpretation techniques. In this way we obtain debugging and verification tools which are correct by construction. More specifically, we propose two abstract semantics that are used to formally debug tccp programs. The first one approximates the information content of tccp behavioral traces, while the second one approximates our small-step semantics with temporal logic formulas. By applying abstract diagnosis with these abstract semantics we obtain two fully-automatic verification methods for tccp.
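
    As background (standard abstract interpretation, not notation taken from the thesis itself), the correctness-by-construction claim rests on relating the concrete domain $C$ and each abstract domain $A$ through a Galois connection:

        \[ \alpha : C \to A, \qquad \gamma : A \to C, \qquad \alpha(c) \sqsubseteq_A a \iff c \sqsubseteq_C \gamma(a) \]

    Any fixpoint computed over $A$ then soundly over-approximates the small-step semantics computed over $C$.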

    Maintaining the correctness of transactional memory programs

    Get PDF
    Dissertation submitted for the degree of Doctor in Informatics Engineering.
    This dissertation addresses the challenge of maintaining the correctness of transactional memory programs while improving their parallelism with small transactions and relaxed isolation levels. The efficiency of transactional memory systems depends directly on the level of parallelism, which in turn depends on the conflict rate. A high conflict rate between memory transactions can be addressed by reducing the scope of transactions, but this approach may make the application prone to the occurrence of atomicity violations. Another way to address this issue is to ignore some of the conflicts by using a relaxed isolation level, such as snapshot isolation, at the cost of introducing write-skew serialization anomalies that break the consistency guarantees provided by a stronger consistency property, such as opacity. In order to tackle the correctness issues raised by the atomicity violations and the write-skew anomalies, we propose two static analysis techniques: one based on a novel static analysis algorithm that works on a dependency graph of program variables and detects atomicity violations; and a second one based on a shape analysis technique supported by separation logic augmented with heap path expressions, a novel representation based on sequences of heap dereferences that certifies whether a transactional memory program executing under snapshot isolation is free from write-skew anomalies. The evaluation of the runtime execution of a transactional memory algorithm using snapshot isolation requires a framework that allows an efficient implementation of a multi-version algorithm and, at the same time, enables its comparison with other existing transactional memory algorithms. In the Java programming language there was no framework satisfying both these requirements. Hence, we extended an existing software transactional memory framework that already supported efficient implementations of some transactional memory algorithms, to also support the efficient implementation of multi-version algorithms. The key insight for this extension is the support for storing the transactional metadata adjacent to memory locations. We illustrate the benefits of our approach by analyzing its impact with both single- and multi-version transactional memory algorithms using several transactional workloads.
    Fundação para a Ciência e Tecnologia - PhD research grant SFRH/BD/41765/2007, and the research projects Synergy-VM (PTDC/EIA-EIA/113613/2009) and RepComp (PTDC/EIA-EIA/108963/2008).
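
    To make the write-skew anomaly concrete, consider the textbook example (not taken from the dissertation): an invariant $x + y \ge 0$ with initially $x = y = 50$, and two transactions with overlapping reads but disjoint writes:

        \[ T_1:\ \textbf{if } x + y \ge 100 \textbf{ then } x := x - 100 \qquad T_2:\ \textbf{if } x + y \ge 100 \textbf{ then } y := y - 100 \]

    Under snapshot isolation both transactions read the same snapshot, both guards pass, and since the write sets are disjoint both commit, leaving $x + y = -100$. No serial order of $T_1$ and $T_2$ produces this state; this is exactly the kind of anomaly whose absence the heap-path shape analysis certifies.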