    Type-Based Detection of XML Query-Update Independence

    This paper presents a novel static analysis technique to detect XML query-update independence in the presence of a schema. Rather than types, our system infers chains of types. Each chain represents a path that can be traversed on a valid document during query/update evaluation. The resulting independence analysis is precise, although it raises a challenging issue: recursive schemas may lead to inferring infinitely many chains. A sound and complete approximation technique ensuring a finite analysis in every case is presented, together with an efficient implementation performing the chain-based analysis in polynomial space and time.
    Comment: VLDB201
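
    The chain idea admits a small executable illustration. The sketch below is my own toy rendering, not the paper's inference system: schema types form a graph, a chain is a type path that evaluation can traverse, and recursion is kept finite by never traversing the same schema edge twice (one possible sound over-approximation). A query and an update are flagged independent when the chains they touch are disjoint.

        # Toy chain-based independence check; the schema, the queries, and the
        # edge-based cut-off for recursion are illustrative assumptions, not
        # the paper's actual approximation technique.
        SCHEMA = {
            "book":    ["title", "section"],
            "section": ["title", "para", "section"],   # recursive type
            "title":   [],
            "para":    [],
        }

        def chains(node, target, path=(), used=frozenset()):
            """Enumerate edge-distinct type chains from `node` ending at `target`."""
            path = path + (node,)
            if node == target:
                yield path
            for child in SCHEMA[node]:
                edge = (node, child)
                if edge not in used:                   # cut-off keeps chains finite
                    yield from chains(child, target, path, used | {edge})

        reads  = set(chains("book", "title"))   # chains a //title query traverses
        writes = set(chains("book", "para"))    # chains a //para update rewrites
        print("independent:", reads.isdisjoint(writes))   # -> independent: True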

    Incremental View Maintenance For Collection Programming

    In the context of incremental view maintenance (IVM), delta query derivation is an essential technique for speeding up the processing of large, dynamic datasets. The goal is to generate delta queries that, given a small change in the input, can update the materialized view more efficiently than recomputation. In this work we propose the first solution for the efficient incrementalization of positive nested relational calculus (NRC+) on bags (with integer multiplicities). More precisely, we model the cost of NRC+ operators and classify queries as efficiently incrementalizable if their delta has a strictly lower cost than full re-evaluation. We then identify IncNRC+, a large fragment of NRC+ that is efficiently incrementalizable, and we provide a semantics-preserving translation that takes any NRC+ query to a collection of IncNRC+ queries. Furthermore, we prove that incremental maintenance for NRC+ is within the complexity class NC0, and we showcase how recursive IVM, a technique that has provided significant speedups over traditional IVM in the case of flat queries [25], can also be applied to IncNRC+.
    Comment: 24 pages (12 pages plus appendix
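
    For intuition, here is the delta rule for a flat bag join, a sketch under my own simplifications (the paper's contribution is doing this for nested collections with a cost model): for a view V = R join S, a change dR to R induces the delta dR join S, which is cheaper to apply than re-evaluating V from scratch.

        # Flat-bag IVM sketch with integer multiplicities; illustrative only.
        from collections import Counter

        def join(r, s):
            """Bag equijoin on the key attribute; multiplicities multiply."""
            out = Counter()
            for (k1, a), m in r.items():
                for (k2, b), n in s.items():
                    if k1 == k2:
                        out[(k1, a, b)] += m * n
            return out

        R = Counter({(1, "x"): 2, (2, "y"): 1})
        S = Counter({(1, "u"): 1})
        V = join(R, S)                              # materialized view

        dR = Counter({(1, "x"): 1, (2, "y"): -1})   # update as a delta bag
        V.update(join(dR, S))                       # delta rule: d(R join S) = dR join S
        assert V == join(R + dR, S)                 # agrees with full recomputation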

    Change Mining in Adaptive Process Management Systems

    The widespread adoption of process-aware information systems has resulted in large volumes of computerized information about real-world processes. This data can be utilized for process performance analysis as well as for process improvement. In this context process mining offers promising perspectives. So far, existing mining techniques have been applied to operational processes, i.e., knowledge is extracted from execution logs (process discovery), or execution logs are compared with some a priori process model (conformance checking). However, execution logs constitute only one kind of data gathered during process enactment. In particular, adaptive processes provide additional information about process changes (e.g., ad-hoc changes to single process instances) which can be used to enable organizational learning. In this paper we present an approach for mining change logs in adaptive process management systems. The change process discovered through process mining provides an aggregated overview of all changes that have happened so far. This, in turn, can serve as a basis for all kinds of process improvement actions, e.g., it may trigger process redesign or better control mechanisms.
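
    As a concrete flavour of what mining a change log can look like (an illustrative sketch with invented change operations, not the paper's algorithm), one can aggregate the change logs of several adapted process instances into a directly-follows graph over change operations, a standard first step in process mining:

        # Aggregate per-instance change logs into a frequency-annotated
        # directly-follows graph of change operations (requires Python 3.10+).
        from collections import Counter
        from itertools import pairwise

        change_logs = [
            ["INSERT(labTest)", "MOVE(inform)", "DELETE(prepare)"],
            ["INSERT(labTest)", "DELETE(prepare)"],
            ["MOVE(inform)", "DELETE(prepare)"],
        ]

        dfg = Counter()                      # (change_a, change_b) -> frequency
        for log in change_logs:
            dfg.update(pairwise(log))

        for (a, b), freq in dfg.most_common():
            print(f"{a} -> {b}: {freq}")
        # The aggregated "change process" highlights recurring adaptations
        # that may be worth folding back into the process model.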

    Quantum Information Dynamics and Open World Science

    One of the fundamental insights of quantum mechanics is that complete knowledge of the state of a quantum system is not possible. Such incomplete knowledge of a physical system is the norm rather than the exception. This is becoming increasingly apparent as we apply scientific methods to increasingly complex situations. Empirically intensive disciplines in the biological, human, and geosciences all operate in situations where valid conclusions must be drawn, but deductive completeness is impossible. This paper argues that such situations are emerging examples of Open World Science. In this paradigm, scientific models are known to be acting with incomplete information. Open World models acknowledge their incompleteness, and respond positively when new information becomes available. Many methods for creating Open World models have been explored analytically in quantitative disciplines such as statistics, and the increasingly mature area of machine learning. This paper examines the role of quantum theory and quantum logic in the underpinnings of Open World models, examining the importance of structural features such as non-commutativity, degrees of similarity, induction, and the impact of observation. Quantum mechanics is not a problem around the edges of classical theory, but rather a secure bridgehead into the world of science to come.
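
    The non-commutativity the paper points to can be made concrete with a textbook two-line computation (my illustration, not the paper's model): incompatible observables do not commute, so the order in which questions are asked of a quantum system matters.

        # Pauli Z and X do not commute: the commutator [Z, X] is non-zero.
        import numpy as np

        Z = np.array([[1, 0], [0, -1]])   # spin-z observable
        X = np.array([[0, 1], [1, 0]])    # spin-x observable

        print(np.array_equal(Z @ X, X @ Z))   # False: order matters
        print(Z @ X - X @ Z)                  # [[0, 2], [-2, 0]]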

    06121 Abstracts Collection -- Atomicity: A Unifying Concept in Computer Science

    From 19.03.06 to 24.03.06, the Dagstuhl Seminar 06121 "Atomicity: A Unifying Concept in Computer Science" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    A technique for detecting wait-notify deadlocks in Java

    Deadlock analysis of object-oriented programs that dynamically create threads and objects is complex, because these programs may have an infinite number of states. In this thesis, I analyse the correctness of wait-notify patterns (e.g., deadlock freedom) using a newly introduced technique that consists of an analysis model: a basic concurrent language with a formal semantics. I detect deadlocks by associating a Petri net with each process of the input program. This model makes it possible to check whether a deadlock occurs by analysing the reachability tree. The technique presented is a basic step of a larger, more complete project, since in my work I only consider programs with one object.
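
    The following is a minimal sketch of the approach, under simplifications of mine (a 1-safe net with set markings rather than the thesis's actual encoding): a wait that can never be matched by a notify shows up as a reachable marking with no enabled transition.

        # Explore the reachability tree of a tiny Petri net and flag a
        # deadlock: a reachable, non-final marking with no enabled transition.
        from collections import deque

        TRANSITIONS = {
            # name: (places consumed, places produced)
            "t_wait":   ({"running"}, {"waiting"}),
            "t_notify": ({"waiting", "notifier"}, {"running"}),  # "notifier" is never produced
        }
        INITIAL = frozenset({"running"})
        FINAL   = frozenset({"done"})

        def enabled(marking):
            return [t for t, (pre, _) in TRANSITIONS.items() if pre <= marking]

        def has_deadlock(initial):
            seen, queue = {initial}, deque([initial])
            while queue:
                m = queue.popleft()
                ts = enabled(m)
                if not ts and m != FINAL:
                    return True                    # stuck in a non-final marking
                for t in ts:
                    pre, post = TRANSITIONS[t]
                    m2 = frozenset((m - pre) | post)
                    if m2 not in seen:
                        seen.add(m2)
                        queue.append(m2)
            return False

        print(has_deadlock(INITIAL))   # True: the waiting process is never notified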

    Rehearsal: A Configuration Verification Tool for Puppet

    Large-scale data centers and cloud computing have turned system configuration into a challenging problem. Several widely-publicized outages have been blamed not on software bugs, but on configuration bugs. To cope, thousands of organizations use system configuration languages to manage their computing infrastructure. Of these, Puppet is the most widely used, with thousands of paying customers and many more open-source users. The heart of Puppet is a domain-specific language that describes the state of a system. Puppet already performs some basic static checks, but they only prevent a narrow range of errors. Furthermore, testing is ineffective because many errors are triggered only under specific machine states that are difficult to predict and reproduce. With several examples, we show that a key problem with Puppet is that configurations can be non-deterministic. This paper presents Rehearsal, a verification tool for Puppet configurations. Rehearsal implements a sound, complete, and scalable determinacy analysis for Puppet. To develop it, we (1) present a formal semantics for Puppet, (2) use several analyses to shrink our models to a tractable size, and (3) frame determinism-checking as decidable formulas for an SMT solver. Rehearsal then leverages the determinacy analysis to check other important properties, such as idempotency. Finally, we apply Rehearsal to several real-world Puppet configurations.
    Comment: In proceedings of ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI) 201
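
    The determinacy question itself is easy to state executably. Below is a brute-force sketch under assumptions of mine (resources as pure state transformers, a dict as the filesystem); Rehearsal instead compiles Puppet to a formal model and discharges decidable formulas to an SMT solver, which is what makes the analysis scale:

        # A configuration is deterministic iff every application order of its
        # resources yields the same final state; "file" is a Puppet-like resource.
        from itertools import permutations

        def file(path, content):
            """Ensure `path` exists with `content` (a pure state transformer)."""
            return lambda fs: {**fs, path: content}

        def is_deterministic(resources, initial):
            finals = set()
            for order in permutations(resources):
                fs = initial
                for apply_resource in order:
                    fs = apply_resource(fs)
                finals.add(frozenset(fs.items()))
            return len(finals) == 1

        good = [file("/etc/a.conf", "x"), file("/etc/b.conf", "y")]
        bad  = [file("/etc/a.conf", "x"), file("/etc/a.conf", "z")]  # conflicting writes

        print(is_deterministic(good, {}))  # True: the resources commute
        print(is_deterministic(bad,  {}))  # False: outcome depends on order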

    Verifying Strong Eventual Consistency in Distributed Systems

    Data replication is used in distributed systems to maintain up-to-date copies of shared data across multiple computers in a network. However, despite decades of research, algorithms for achieving consistency in replicated systems are still poorly understood. Indeed, many published algorithms have later been shown to be incorrect, even some that were accompanied by supposed mechanised proofs of correctness. In this work, we focus on the correctness of Conflict-free Replicated Data Types (CRDTs), a class of algorithms that provides strong eventual consistency guarantees for replicated data. We develop a modular and reusable framework in the Isabelle/HOL interactive proof assistant for verifying the correctness of CRDT algorithms. We avoid the correctness issues that have dogged previous mechanised proofs in this area by including a network model in our formalisation, and by proving that our theorems hold under all possible network behaviours. Our axiomatic network model is a standard abstraction that accurately reflects the behaviour of real-world computer networks. Moreover, we identify an abstract convergence theorem, a property of order relations, which provides a formal definition of strong eventual consistency. We then obtain the first machine-checked correctness theorems for three concrete CRDTs: the Replicated Growable Array, the Observed-Remove Set, and an Increment-Decrement Counter. We find that our framework is highly reusable, developing proofs of correctness for the latter two CRDTs in a few hours and with relatively little CRDT-specific code.
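
    As a flavour of why the Increment-Decrement Counter enjoys strong eventual consistency (a minimal executable sketch of mine; the paper's artifact is an Isabelle/HOL formalisation, not Python): increments and decrements commute, so replicas that have applied the same set of operations agree regardless of delivery order.

        # Two replicas receive the same operations in different orders and converge.
        import random

        class Counter:
            def __init__(self):
                self.value = 0
            def apply(self, op):        # op is +1 or -1; all operations commute
                self.value += op

        ops = [+1, +1, -1, +1, -1]
        replica_a, replica_b = Counter(), Counter()

        for op in ops:                              # original delivery order
            replica_a.apply(op)
        for op in random.sample(ops, len(ops)):     # a shuffled delivery order
            replica_b.apply(op)

        assert replica_a.value == replica_b.value == 1   # converged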