1,109 research outputs found

    Decision Procedure for Entailment of Symbolic Heaps with Arrays

    This paper gives a decision procedure for the validity of entailment of symbolic heaps in separation logic with Presburger arithmetic and arrays. The correctness of the decision procedure is proved under the condition that the sizes of arrays in the succedent are not existentially bound. This condition is independent of the condition proposed in the CADE-2017 paper by Brotherston et al.: neither implies the other. Several techniques for improving the efficiency of the decision procedure are also presented. The main idea is a novel translation of an entailment of symbolic heaps into a formula of Presburger arithmetic, which is then discharged by an external SMT solver. Experimental results from an implementation show that the decision procedure is efficient enough for practical use.
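    The core idea — translate the entailment into Presburger arithmetic and discharge it with an external solver — can be illustrated with a toy sketch. The entailment below is invented for illustration and is far simpler than the paper's translation; bounded brute-force enumeration stands in for the SMT solver.

```python
from itertools import product

def is_valid(formula, variables, bound=16):
    """Check formula(env) for every assignment in [0, bound)^n.
    A bounded stand-in for handing the formula to an SMT solver."""
    for values in product(range(bound), repeat=len(variables)):
        env = dict(zip(variables, values))
        if not formula(env):
            return False, env          # counterexample found
    return True, None

# An entailment-shaped validity check in Presburger arithmetic:
#   (x = y + 1  /\  y >= 0)   |=   x >= 1
antecedent = lambda e: e["x"] == e["y"] + 1 and e["y"] >= 0
succedent = lambda e: e["x"] >= 1
print(is_valid(lambda e: not antecedent(e) or succedent(e), ["x", "y"]))
# -> (True, None)
```

    A real procedure would of course enumerate nothing: the generated formula is handed to a complete Presburger/LIA back end.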

    Deciding Entailments in Inductive Separation Logic with Tree Automata

    Separation Logic (SL) with inductive definitions is a natural formalism for specifying complex recursive data structures, used in compositional verification of programs manipulating such structures. The key ingredient of any automated verification procedure based on SL is the decidability of the entailment problem. In this work, we reduce the entailment problem for a non-trivial subset of SL describing trees (and beyond) to the language inclusion problem for tree automata (TA). Our reduction provides tight complexity bounds and shows that entailment in our fragment is EXPTIME-complete. For practical purposes, we leverage recent advances in automata theory, such as inclusion checking for non-deterministic TA that avoids explicit determinization. We implemented our method and present promising preliminary experimental results.
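    Language inclusion for tree automata can be sketched compactly in the deterministic, complete case (a simplification of the paper's setting, which targets non-deterministic TA): L(A) ⊆ L(B) fails exactly when some bottom-up-reachable pair of states is accepting in A but rejecting in B. The automata below are invented examples.

```python
from itertools import product

def reachable_pairs(ta_a, ta_b, symbols):
    """Least fixpoint of state pairs (qA, qB) jointly reachable bottom-up.
    Both automata are deterministic and complete: delta maps
    (symbol, tuple_of_child_states) -> state."""
    pairs = set()
    changed = True
    while changed:
        changed = False
        for sym, arity in symbols:
            for kids in product(pairs, repeat=arity):
                qa = ta_a["delta"][(sym, tuple(k[0] for k in kids))]
                qb = ta_b["delta"][(sym, tuple(k[1] for k in kids))]
                if (qa, qb) not in pairs:
                    pairs.add((qa, qb))
                    changed = True
    return pairs

def included(ta_a, ta_b, symbols):
    """L(A) subseteq L(B) iff no reachable pair is (accepting, rejecting)."""
    return all(qb in ta_b["final"]
               for qa, qb in reachable_pairs(ta_a, ta_b, symbols)
               if qa in ta_a["final"])

SYMBOLS = [("a", 0), ("b", 0), ("f", 2)]
# A: trees containing no 'b' leaf.
A = {"delta": {("a", ()): "clean", ("b", ()): "dirty",
               **{("f", (x, y)): "clean" if x == y == "clean" else "dirty"
                  for x in ("clean", "dirty") for y in ("clean", "dirty")}},
     "final": {"clean"}}
# B: trees with an even number of 'b' leaves.
B = {"delta": {("a", ()): 0, ("b", ()): 1,
               **{("f", (x, y)): (x + y) % 2 for x in (0, 1) for y in (0, 1)}},
     "final": {0}}
print(included(A, B, SYMBOLS), included(B, A, SYMBOLS))  # True False
```

    The non-deterministic case handled in the paper is harder precisely because this product construction no longer suffices without (explicit or implicit) determinization.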

    Verifying linearizability on TSO architectures

    Linearizability is the standard correctness criterion for fine-grained, non-atomic concurrent algorithms, and a variety of methods for verifying linearizability have been developed. However, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we define linearizability on a weak memory model: the TSO (Total Store Order) memory model, which is implemented in the x86 multicore architecture. We also show how a simulation-based proof method can be adapted to verify linearizability for algorithms running on TSO architectures. We demonstrate our approach on a typical concurrent algorithm, spinlock, and prove it linearizable using our simulation-based approach. Previous approaches to proving linearizability on TSO architectures have required a modification of the algorithm's natural abstract specification. Our proof method is, to our knowledge, the first to prove correctness without the need for such a modification.
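    The TSO behaviour that makes such proofs delicate can be shown with the classic store-buffering litmus test. The sketch below is a hypothetical miniature model, not the paper's proof method: each thread's writes go into a private FIFO buffer that drains to shared memory at arbitrary points, and reads consult the thread's own buffer first. Under sequential consistency the outcome r0 = r1 = 0 is impossible; under TSO it is reachable.

```python
# Thread 0: x := 1; r0 := y        Thread 1: y := 1; r1 := x
PROGS = [[("w", "x", 1), ("r", "y", "r0")],
         [("w", "y", 1), ("r", "x", "r1")]]

def step_states(state):
    """Yield all successor states: per thread, either flush the oldest
    buffered store to memory, or execute the next instruction."""
    mem, bufs, pcs, regs = state
    for t in (0, 1):
        if bufs[t]:                                # flush oldest store
            (var, val), rest = bufs[t][0], bufs[t][1:]
            nb = list(bufs); nb[t] = rest
            yield (dict(mem, **{var: val}), tuple(nb), pcs, regs)
        if pcs[t] < len(PROGS[t]):                 # execute an instruction
            op = PROGS[t][pcs[t]]
            np = list(pcs); np[t] += 1
            if op[0] == "w":                       # write: buffer, don't commit
                nb = list(bufs); nb[t] = bufs[t] + ((op[1], op[2]),)
                yield (mem, tuple(nb), tuple(np), regs)
            else:                                  # read: own buffer, then memory
                hits = [v for (var, v) in bufs[t] if var == op[1]]
                val = hits[-1] if hits else mem[op[1]]
                yield (mem, bufs, tuple(np), dict(regs, **{op[2]: val}))

def outcomes():
    """Exhaustively explore all interleavings; collect final (r0, r1)."""
    init = ({"x": 0, "y": 0}, ((), ()), (0, 0), {})
    seen, todo, finals = set(), [init], set()
    while todo:
        s = todo.pop()
        key = (tuple(sorted(s[0].items())), s[1], s[2],
               tuple(sorted(s[3].items())))
        if key in seen:
            continue
        seen.add(key)
        if s[2] == (2, 2) and not s[1][0] and not s[1][1]:
            finals.add((s[3]["r0"], s[3]["r1"]))
        todo.extend(step_states(s))
    return finals

print(sorted(outcomes()))   # (0, 0) appears: both reads miss both writes
```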

    The Tree Width of Separation Logic with Recursive Definitions

    Separation Logic is a widely used formalism for describing dynamically allocated linked data structures, such as lists and trees. The decidability status of various fragments of the logic constitutes a long-standing open problem. Existing results report on techniques to decide satisfiability and validity of entailments for Separation Logics over lists (possibly with data). In this paper we establish a more general decidability result. We prove that satisfiability is decidable for any Separation Logic formula using rather general recursively defined predicates, and moreover that validity of entailments between such formulae is decidable. These predicates are general enough to define (doubly-)linked lists, trees, and structures more general than trees, such as trees whose leaves are chained in a list. The decidability proofs are by reduction to the decidability of Monadic Second-Order Logic on graphs of bounded tree width.

    Feeding the world: improving photosynthetic efficiency for sustainable crop production

    A number of recent studies have provided strong evidence that improving photosynthetic processes through genetic engineering offers an avenue to improve yield potential. The major focus of this review is improvement of the Calvin–Benson cycle and electron transport. Consideration is also given to how altering regulatory processes may provide an additional route to increasing photosynthetic efficiency. Here we summarize some of the recent successes achieved through genetic manipulation of photosynthesis, showing that, in both the glasshouse and the field, yield can be increased by >40%. These results clearly demonstrate the potential for increasing yield through improvements in photosynthesis. In the final section, we consider the need to stack improvements in photosynthetic traits with traits that target the yield gap, in order to provide robust germplasm for different crops across the globe.

    Internal Anisotropy of Collision Cascades

    We investigate the internal anisotropy of collision cascades arising from their branching structure. We show that the global fractal dimension cannot give an adequate description of the geometrical structure of cascades because it is insensitive to internal anisotropy. To give a more elaborate description, we introduce an angular correlation function that takes into account the direction of the local growth of the branches of the cascades. We demonstrate that the angular correlation function gives a quantitative description of the directionality and interrelation of branches. The power-law decay of the angular correlation is evidenced and characterized by an exponent and an angular correlation length different from the radius of gyration. We also demonstrate that the overlapping of subcascades has a strong effect on the angular correlation.
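    One plausible form of such an estimator — not the authors' exact definition — bins pairs of branch segments by spatial separation r and averages the cosine of the angle between their local growth directions; fully aligned branches give C(r) ≈ 1, uncorrelated directions give C(r) ≈ 0.

```python
import math

def angular_correlation(points, dirs, nbins=10, rmax=1.0):
    """C(r): mean cosine of the angle between the (unit) growth
    directions of segment pairs, binned by their separation r < rmax."""
    sums = [0.0] * nbins
    counts = [0] * nbins
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = [a - b for a, b in zip(points[i], points[j])]
            r = math.sqrt(sum(d * d for d in dx))
            if r >= rmax:
                continue
            b = int(nbins * r / rmax)
            # dot product of unit direction vectors = cos(angle)
            sums[b] += sum(u * v for u, v in zip(dirs[i], dirs[j]))
            counts[b] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

    Fitting the tail of C(r) to a power law would then yield the decay exponent and an angular correlation length, as discussed in the abstract.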

    Admit your weakness: Verifying correctness on TSO architectures

    “The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-319-15317-9_22”.
    Linearizability has become the standard correctness criterion for fine-grained, non-atomic concurrent algorithms; however, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we study the correctness of concurrent algorithms on a weak memory model: the TSO (Total Store Order) memory model, which is commonly implemented by multicore architectures. Here, linearizability is often too strict, and hence we prove a weaker criterion, quiescent consistency, instead. Like linearizability, quiescent consistency is compositional, making it an ideal correctness criterion in a component-based context. We demonstrate how to model a typical concurrent algorithm, seqlock, and prove it quiescent consistent using a simulation-based approach. Previous approaches to proving correctness on TSO architectures have been based on linearizability, which makes it necessary to modify the algorithm's high-level requirements. Our approach is, to our knowledge, the first to prove correctness without the need for such a modification.
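    For intuition, a textbook seqlock (not the paper's model) looks as follows: the writer bumps a sequence counter to an odd value before writing and to an even value after, and wait-free readers retry whenever a write overlapped their read. Note that in CPython the GIL supplies the atomicity these unfenced reads rely on; on real hardware, memory fences are needed, which is exactly where TSO subtleties enter.

```python
import threading

class SeqLock:
    """Minimal seqlock sketch: serialised writers, wait-free readers."""
    def __init__(self):
        self.seq = 0                     # odd while a write is in flight
        self.data = (0, 0)
        self._wlock = threading.Lock()   # serialise writers only

    def write(self, value):
        with self._wlock:
            self.seq += 1                # odd: write in progress
            self.data = value
            self.seq += 1                # even: write complete

    def read(self):
        while True:
            s = self.seq
            if s % 2:                    # writer active, retry
                continue
            value = self.data
            if self.seq == s:            # no write overlapped the read
                return value
```

    A reader can thus observe a torn `data` only transiently, and the retry loop discards any such observation — the property a quiescent-consistency proof must establish under TSO reorderings.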

    Foundations for decision problems in separation logic with general inductive predicates

    We establish foundational results on the computational complexity of deciding entailment in Separation Logic with general inductive predicates whose underlying base language allows for pure formulas, pointers, and existentially quantified variables. We show that entailment is in general undecidable, and ExpTime-hard in a fragment recently shown to be decidable by Iosif et al. Moreover, entailment in the base language is Π₂ᵖ-complete, and the upper bound holds even in the presence of list predicates. We additionally show that entailment in essentially any fragment of Separation Logic allowing for general inductive predicates is intractable, even when strong syntactic restrictions are imposed.

    Plasma membrane calcium ATPase (PMCA4): A housekeeper for RT-PCR relative quantification of polytopic membrane proteins

    BACKGROUND: Although relative quantification of real-time RT-PCR data can provide valuable information, one limitation remains the selection of an appropriate reference gene. No single gene has emerged as a universal reference gene, and much debate surrounds some of the more commonly used reference genes, such as glyceraldehyde-3-phosphate dehydrogenase (GAPDH). At present, no gene encoding a plasma membrane protein serves as a reference gene, and relative quantification of plasma membrane proteins is performed with genes encoding soluble proteins, which differ greatly in quantity and in targeting and trafficking from plasma membrane proteins. In this work, our aim was to identify a housekeeping gene, ideally one that codes for a plasma membrane protein, whose expression remains the same regardless of drug treatment and across a wide range of tissues, to be used for relative quantification of real-time RT-PCR data for ATP-binding cassette (ABC) plasma membrane transporters. RESULTS: In studies evaluating the expression levels of two commonly used reference genes coding for soluble proteins and two genes coding for membrane proteins, one plasma membrane protein, plasma membrane calcium-ATPase 4 (PMCA4), was comparable to the two reference genes already in use. In addition, PMCA4 expression shows little variation across eight drug-treated cell lines and was found to be superior to GAPDH and HPRT1, commonly used reference genes. Finally, we demonstrate the use of PMCA4 as a reference gene for normalizing ABC transporter expression in a drug-resistant lung carcinoma cell line. CONCLUSION: We have found that PMCA4 is a good housekeeping gene for normalization of gene expression for polytopic membrane proteins, including transporters and receptors.
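    Normalization against a reference gene such as PMCA4 is typically done with the standard 2^(−ΔΔCt) method. The Ct values below are hypothetical, purely to show the arithmetic; the method assumes roughly 100% amplification efficiency for both assays.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change by the 2^(-ddCt) method: target gene normalised to a
    reference gene (e.g. PMCA4) and to a calibrator (untreated) sample."""
    d_ct_sample = ct_target - ct_ref              # dCt in treated sample
    d_ct_calibrator = ct_target_cal - ct_ref_cal  # dCt in calibrator
    return 2 ** -(d_ct_sample - d_ct_calibrator)  # 2^(-ddCt)

# Hypothetical Ct values: an ABC transporter vs the PMCA4 reference,
# in a drug-treated sample vs an untreated calibrator.
fold = relative_expression(ct_target=24.0, ct_ref=20.0,
                           ct_target_cal=26.0, ct_ref_cal=20.0)
print(fold)   # -> 4.0 (four-fold up-regulation)
```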

    What Developers Want and Need from Program Analysis: An Empirical Study

    Program Analysis has been a rich and fruitful field of research for many decades, and countless high-quality program analysis tools have been produced by academia. Though there are some well-known examples of tools that have found their way into routine use by practitioners, a common challenge faced by researchers is knowing how to achieve broad and lasting adoption of their tools. In an effort to understand what makes a program analyzer most attractive to developers, we mounted a multi-method investigation at Microsoft. Through interviews and surveys of developers, as well as analysis of defect data, we provide insight and answers to four high-level research questions that can help researchers design program analyzers meeting the needs of software developers. First, we explore what barriers hinder the adoption of program analyzers, such as poorly expressed warning messages. Second, we shed light on what functionality developers want from analyzers, including the types of code issues they care about. Next, we examine what non-functional characteristics an analyzer should have to be widely used, how the analyzer should fit into the development process, and how its results should be reported. Finally, we investigate defects in one of Microsoft's flagship software services to understand what types of code issues are most important to minimize, potentially through program analysis.