
    Model checking Branching-Time Properties of Multi-Pushdown Systems is Hard

    We address the model checking problem for shared memory concurrent programs modeled as multi-pushdown systems (MPDSs). We consider here boolean programs with a finite number of threads and recursive procedures. It is well known that the model checking problem is undecidable for this class of programs. In this paper, we investigate the decidability and the complexity of this problem under the assumption of bounded context-switching, defined by Qadeer and Rehof, and of phase-boundedness, proposed by La Torre et al. The model checking of such systems against temporal logics, and in particular branching-time logics such as the modal μ-calculus or CTL, has received little attention. It is known that parity games, which are closely related to the modal μ-calculus, are decidable for the class of bounded-phase systems (and hence for bounded-context-switching systems as well), but with non-elementary complexity (Seth). A natural question is whether this high complexity is inevitable and what the ways around it are. This paper addresses these questions and, unfortunately and somewhat surprisingly, shows that branching-time model checking for MPDSs is inherently a hard problem with no easy solution. We show that solving parity games on MPDSs under the phase-bounding restriction is non-elementary. Our main result shows that model checking a k-context-bounded MPDS against a simple fragment of CTL, consisting of formulas whose temporal operators come from the set {EF, EX}, has a non-elementary lower bound.
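
    As a rough illustration of the bounded-context-switching restriction discussed above, the following Python sketch explores the configurations of a small multi-pushdown system while charging one unit of budget for every switch between stacks, and checks an EF-style reachability question. The transition rules, state names, and budget k are illustrative assumptions, not the paper's construction, and the search may not terminate when the target is unreachable, since the stacks are unbounded.

        from collections import deque

        # Transition rules of a toy MPDS (assumed for illustration):
        # (state, stack_index, top_symbol) -> list of (new_state, word pushed in place of top)
        RULES = {
            ("q0", 0, "A"): [("q0", "AA"), ("q1", "")],
            ("q1", 1, "B"): [("q1", "BB"), ("q2", "")],
        }

        def reachable(initial_state, initial_stacks, target_state, k):
            """Breadth-first search over configurations, allowing at most k context
            switches; a switch is charged when consecutive steps use different stacks."""
            start = (initial_state, tuple(tuple(s) for s in initial_stacks), None, 0)
            seen, queue = {start}, deque([start])
            while queue:
                state, stacks, last_stack, switches = queue.popleft()
                if state == target_state:              # a witness for "EF target_state"
                    return True
                for (src, i, top), successors in RULES.items():
                    if src != state or not stacks[i] or stacks[i][-1] != top:
                        continue
                    cost = switches + (last_stack is not None and last_stack != i)
                    if cost > k:                       # would exceed the context budget
                        continue
                    for new_state, push in successors:
                        new_stack = stacks[i][:-1] + tuple(push)
                        new_stacks = tuple(new_stack if j == i else s
                                           for j, s in enumerate(stacks))
                        cfg = (new_state, new_stacks, i, cost)
                        if cfg not in seen:
                            seen.add(cfg)
                            queue.append(cfg)
            return False

        print(reachable("q0", [["A"], ["B"]], "q2", k=2))   # True within 2 context switches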

    Programmability of Chemical Reaction Networks

    Motivated by the intriguing complexity of biochemical circuitry within individual cells, we study Stochastic Chemical Reaction Networks (SCRNs), a formal model that considers a set of chemical reactions acting on a finite number of molecules in a well-stirred solution according to standard chemical kinetics equations. SCRNs have been widely used for describing naturally occurring (bio)chemical systems, and with the advent of synthetic biology they have become a promising language for the design of artificial biochemical circuits. Our interest here is the computational power of SCRNs and how they relate to more conventional models of computation. We survey known connections and give new connections between SCRNs and Boolean Logic Circuits, Vector Addition Systems, Petri Nets, Gate Implementability, Primitive Recursive Functions, Register Machines, Fractran, and Turing Machines. A theme of these investigations is the thin line between decidable and undecidable questions about SCRN behavior.
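
    The abstract treats SCRNs as a formal model with standard stochastic kinetics; one concrete way to see such a network in action is Gillespie's stochastic simulation algorithm. The sketch below is a minimal, self-contained illustration; the two reactions, species names, and rate constants are assumptions invented for the example and do not come from the survey.

        import math
        import random

        # A toy SCRN (assumed for illustration):
        #   X + Y -> 2Y   (rate 1.0)      Y -> nothing   (rate 0.5)
        # Each reaction is (reactant counts, product counts, rate constant).
        REACTIONS = [
            ({"X": 1, "Y": 1}, {"Y": 2}, 1.0),
            ({"Y": 1},         {},       0.5),
        ]

        def propensity(reactants, rate, counts):
            """Mass-action propensity: rate times the number of ways to pick the reactants."""
            a = rate
            for species, n in reactants.items():
                a *= math.comb(counts.get(species, 0), n)
            return a

        def gillespie(counts, t_end):
            t = 0.0
            while t < t_end:
                props = [propensity(r, k, counts) for (r, _, k) in REACTIONS]
                total = sum(props)
                if total == 0:
                    break                       # no reaction can fire any more
                t += random.expovariate(total)  # exponential waiting time to the next event
                # pick a reaction with probability proportional to its propensity
                pick, acc = random.uniform(0, total), 0.0
                for (reactants, products, _), a in zip(REACTIONS, props):
                    acc += a
                    if pick <= acc:
                        for s, n in reactants.items():
                            counts[s] -= n
                        for s, n in products.items():
                            counts[s] = counts.get(s, 0) + n
                        break
            return counts

        print(gillespie({"X": 50, "Y": 5}, t_end=10.0))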

    Robustness against Relaxed Memory Models

    Sequential Consistency (SC) is the memory model traditionally assumed by programmers and verification tools for the analysis of multithreaded programs. SC guarantees that the instructions of each thread are executed atomically and in program order. Modern CPUs implement memory models that relax the SC guarantees: threads can execute instructions out of order, and stores to memory can be observed by different threads in different orders. As a result of these relaxations, multithreaded programs can show unexpected, potentially undesired behaviors when run on real hardware. The robustness problem asks whether a program has the same behaviors under SC and under a relaxed memory model. Behaviors are formalized in terms of happens-before relations, i.e. dataflow and control-flow relations between executed instructions. Programs that are robust against a memory model produce the same results under this memory model and under SC. This means they only need to be verified under SC, and the verification results carry over to the relaxed setting.

    Interestingly, robustness is a suitable correctness criterion not only for multithreaded programs, but also for parallel programs running on computer clusters. Parallel programs written in the Partitioned Global Address Space (PGAS) programming model, when executed on a cluster, consist of multiple processes, each running on its own cluster node. These processes can directly access each other's memories over the network, without explicit synchronization. Reorderings and delays introduced at the network level, just like the reorderings done by the CPUs, may result in unexpected behaviors that are hard to reproduce and fix.

    Our first contribution is a generic approach for solving robustness against relaxed memory models. The approach involves two steps: a combinatorial analysis, followed by an algorithmic development. The aim of the combinatorial analysis is to show that among the program computations violating robustness there is always a computation in a certain normal form, where reorderings are applied in a restricted way. In the algorithmic development we work out a decision procedure for checking whether a program has violating normal-form computations. Our second contribution is an application of the generic approach to widely implemented memory models, including Total Store Order (TSO) used in Intel x86 and Sun SPARC architectures, the memory model of the Power architecture, and the PGAS memory model. We reduce robustness against TSO to SC state reachability for a modified input program. Robustness against Power and PGAS is reduced to language emptiness for a novel class of automata, called multiheaded automata. The reductions lead to new decidability results. In particular, robustness is PSPACE-complete for all the considered memory models.
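
    The classic way to see the gap between SC and a store-buffer-based model such as TSO is the store-buffering litmus test. The Python sketch below enumerates interleavings of that test; the program, the brute-force enumeration, and the way a delayed store is modelled (simply letting a thread's store drift past its own later load) are illustrative assumptions and are not the thesis's actual reductions.

        from itertools import permutations

        # Thread 0:  x := 1 ; r0 := y        Thread 1:  y := 1 ; r1 := x
        THREADS = [
            [("store", "x"), ("load", "y", "r0")],
            [("store", "y"), ("load", "x", "r1")],
        ]

        def run(schedule, delayed=frozenset()):
            """Execute one interleaving. `delayed` names the threads whose store is
            postponed past their own later load (a store-buffer effect)."""
            prog = [list(reversed(t)) if i in delayed else list(t)
                    for i, t in enumerate(THREADS)]
            mem, regs, pc = {"x": 0, "y": 0}, {}, [0, 0]
            for tid in schedule:
                op = prog[tid][pc[tid]]
                pc[tid] += 1
                if op[0] == "store":
                    mem[op[1]] = 1
                else:
                    regs[op[2]] = mem[op[1]]
            return regs["r0"], regs["r1"]

        def outcomes(delayings):
            return {run(s, d)
                    for s in set(permutations([0, 0, 1, 1]))
                    for d in delayings}

        # Under SC the outcome r0 = r1 = 0 is impossible; with store buffering it appears.
        print((0, 0) in outcomes([frozenset()]))                                   # False
        print((0, 0) in outcomes([frozenset({0}), frozenset({1}), frozenset({0, 1})]))  # True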

    Rehabilitation of structures below the earth's surface using fibre reinforced polymer shell augers and fabrication of plastic compounder

    Many ageing highway bridges are in distress, requiring urgent repair or rehabilitation. About 10% of all highway bridges in the United States of America require rehabilitation (Houlihan, April 2015). Reconstructing these bridges requires large sums of money and time-consuming conventional rehabilitation schemes. Herein, a novel rehabilitation scheme using Fiber Reinforced Polymer (FRP) composites has been evaluated in the laboratory, as FRPs exhibit a high strength-to-weight ratio, high stiffness, and excellent corrosion resistance. Current methods to repair pile systems are limited by access issues, as the piles typically extend underwater and below the mudline. Traditional methods require cofferdams or other barrier systems to work in the dry, or divers to work in the wet. Access is also hindered because the piles support the existing bridge; i.e., there is a structure overhead and repairs have to be made around the existing pile while it supports the superstructure. Proper repairs typically require digging below the mudline, which complicates any traditional repair method.

    The repair method proposed herein seeks to solve both the access and excavation issues by combining an augering attachment with FRP formwork that can be installed around an existing pile. Twisting the FRP formwork engages the auger, which drives the forms below the mudline. This method can be applied from above the water, eliminating the need for barrier systems or divers. The auger attachment can be modified based on soil conditions. As the shell bores down, additional shells can be attached to the previously installed shell by an overlapping joint, and the shell is reinforced with an FRP composite wrap to prevent potential buckling under torsion and to minimize moisture ingress through the shell. Based on the compactness and/or hardness of the soil, FRP composites can be selected with the proper fiber volume fraction and orientation. In this study, field conditions have been replicated in the lab, and three different kinds of auger attachments to the FRP composite shell are tested with manual application of torque to understand the challenges in driving these shells below the mudline. The stresses developed in the shell, both by the applied torque and by the resistance offered by the soil, are measured using strain gages. The strain gage readings are evaluated against the strain limits of the FRP composite to understand the FRP-soil interaction and to arrive at a safe shell design.

    Plastic waste is another major issue at present: production is increasing rapidly every year, but recycling is not. The plastic waste that is not recycled is dumped in the oceans or landfilled, disturbing the ecological cycle. To reduce this ecological disturbance, the second part of this report discusses a compounding machine that converts plastic waste into a structurally useful product.
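
    Since the study evaluates torque-induced shell stresses against the strain limits of the FRP laminate, a back-of-envelope version of that check can be written down with the thin-walled closed-section torsion formula, tau = T / (2 * A_m * t). Every numeric value in the sketch below (torque, shell radius, wall thickness, shear modulus, allowable strain) is an assumed, illustrative figure and not data from the experiments.

        import math

        T           = 1_500.0   # applied torque, N*m (assumed)
        r_mean      = 0.30      # mean shell radius, m (assumed ~600 mm diameter shell)
        t           = 0.006     # wall thickness, m (assumed 6 mm laminate)
        G           = 4.0e9     # in-plane shear modulus of the laminate, Pa (assumed)
        gamma_limit = 0.004     # allowable shear strain for the laminate (assumed)

        A_m   = math.pi * r_mean ** 2    # area enclosed by the shell mid-surface
        tau   = T / (2.0 * A_m * t)      # shear stress from the thin-wall torsion formula, Pa
        gamma = tau / G                  # engineering shear strain

        print(f"shear stress  {tau / 1e6:6.2f} MPa")
        print(f"shear strain  {gamma:8.5f}  (assumed limit {gamma_limit})")
        print("within assumed strain limit" if gamma < gamma_limit
              else "exceeds assumed strain limit")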

    Formally Verified Space-Safety for Program Transformations

    Existing work on compilers has primarily concerned itself with preserving behavior, but programs have other facets besides their observable behavior. We expect the performance of our code to be preserved, or bettered, by the compiler, not made worse. Unfortunately, that's exactly what sometimes occurs in modern optimizing compilers. Poor representations or incorrect optimizations may preserve the correct behavior, but push the program into a different complexity class entirely. We've seen blowups like this occur in practice, and many transformations have pitfalls that can cause such issues. Even when a program is not dramatically worsened, it can use more resources than expected, causing issues in resource-constrained environments and increasing garbage-collection pauses. While several researchers have noticed potential issues, there has been a relative dearth of proofs of space-safety, and none at all concerning non-local optimizations. This work expands upon existing notions of space-safety, allowing them to be used to reason about long-running programs with both input and output, while ensuring that the program maintains some temporal locality of space costs. In addition, this work includes new proof techniques that can handle more dramatic shifts in program and heap structure than existing methods, as well as more frequent garbage collection. The results are formalized in Coq, including a proof of space-safety for lifting data up in scope, which increases sharing and saves duplicate work, but may also catastrophically increase space usage if done incorrectly.
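
    The trade-off mentioned at the end of the abstract, where lifting data up in scope increases sharing but can also increase space usage, can be illustrated outside the Coq development with a small Python example. The names and sizes below are made up; the point is only that hoisting a structure to an outer scope removes duplicate work while extending how long that structure stays reachable.

        def build_lookup(n=1_000_000):
            return {i: i * i for i in range(n)}   # some expensive, large structure

        # Before lifting: the lookup is rebuilt on every call (duplicate work), but it
        # becomes garbage as soon as the call returns, so the space cost stays local in time.
        def process_local(item):
            return build_lookup()[item]

        # After lifting: built once and shared across calls (no duplicate work), but it
        # now stays reachable for the whole run; done carelessly, the program's resident
        # size grows even when the lookup is needed only rarely.
        SHARED_LOOKUP = build_lookup()

        def process_lifted(item):
            return SHARED_LOOKUP[item]

        print(process_local(7), process_lifted(7))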

    Computer Aided Verification

    This open access two-volume set, LNCS 10980 and 10981, constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking, program analysis using polyhedra, synthesis, learning, runtime verification, hybrid and timed systems, tools, probabilistic systems, static analysis, theory and security, SAT, SMT and decision procedures, concurrency, and CPS, hardware, and industrial applications.