
    Decision Procedure for Entailment of Symbolic Heaps with Arrays

    This paper gives a decision procedure for the validity of entailment of symbolic heaps in separation logic with Presburger arithmetic and arrays. The correctness of the decision procedure is proved under the condition that the sizes of arrays in the succedent are not existentially bound. This condition is independent of the condition proposed in the CADE-2017 paper by Brotherston et al., in the sense that neither implies the other. Several techniques for improving the efficiency of the decision procedure are also presented. The main idea of the decision procedure is a novel translation of an entailment of symbolic heaps into a formula in Presburger arithmetic, which is then discharged by an external SMT solver. The paper also gives experimental results from an implementation, showing that the decision procedure is efficient enough for practical use.
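    The translation idea can be illustrated with a toy sketch (not the paper's actual encoding): the separating conjunction of two array assertions arr(x, n) * arr(y, m) forces their address ranges to be disjoint, a condition expressible in Presburger arithmetic, so an entailment becomes a validity question over such formulas. Here a bounded brute-force check stands in for the external SMT solver:

```python
from itertools import product

def disjoint(x, n, y, m):
    # arr(x, n) * arr(y, m) is satisfiable iff the address ranges
    # [x, x+n) and [y, y+m) are disjoint -- a pure Presburger condition:
    #   x + n <= y  or  y + m <= x.
    return x + n <= y or y + m <= x

def entails(antecedent, succedent, bound=8):
    """Bounded validity check: every model of the antecedent over
    0..bound-1 must satisfy the succedent. A real implementation
    would discharge this symbolically with an SMT solver."""
    return all(succedent(*v) for v in product(range(bound), repeat=4)
               if antecedent(*v))

# arr(x,n) * arr(y,m) entails the disjunction of the two orderings,
# but not the stronger claim that the first array always comes first.
assert entails(disjoint, lambda x, n, y, m: x + n <= y or y + m <= x)
assert not entails(disjoint, lambda x, n, y, m: x + n <= y)
```

The bounded check is only a stand-in; the point of the Presburger translation is that an SMT solver decides such formulas completely, for all integer values.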

    Verifying linearizability on TSO architectures

    Linearizability is the standard correctness criterion for fine-grained, non-atomic concurrent algorithms, and a variety of methods for verifying linearizability have been developed. However, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we define linearizability on a weak memory model: the TSO (Total Store Order) memory model, which is implemented in the x86 multicore architecture. We also show how a simulation-based proof method can be adapted to verify linearizability for algorithms running on TSO architectures. We demonstrate our approach on a typical concurrent algorithm, spinlock, and prove it linearizable using our simulation-based approach. Previous approaches to proving linearizability on TSO architectures have required a modification to the algorithm's natural abstract specification. Our proof method is, to our knowledge, the first to prove correctness without the need for such a modification.
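    The TSO behaviour that makes such verification hard can be sketched in a few lines (a simplified illustration, not the paper's formalisation): each thread's writes go into a private FIFO store buffer and become globally visible only when the buffer drains, which permits the classic store-buffering litmus-test outcome that sequential consistency forbids:

```python
from collections import deque

class TSOMachine:
    """Minimal TSO model: each thread writes into a private FIFO store
    buffer; a read consults the thread's own buffer first (store
    forwarding), then shared memory. Buffers drain at flush points."""
    def __init__(self):
        self.mem = {"x": 0, "y": 0}
        self.buf = {0: deque(), 1: deque()}

    def write(self, tid, var, val):
        self.buf[tid].append((var, val))        # buffered, not yet visible

    def read(self, tid, var):
        for v, val in reversed(self.buf[tid]):  # own pending stores first
            if v == var:
                return val
        return self.mem[var]

    def flush(self, tid):
        while self.buf[tid]:
            var, val = self.buf[tid].popleft()
            self.mem[var] = val

m = TSOMachine()
# Store-buffering litmus test: thread 0 runs x := 1; read y, while
# thread 1 runs y := 1; read x. Under sequential consistency at least
# one read must see 1; under TSO both can see 0.
m.write(0, "x", 1)
m.write(1, "y", 1)
r0 = m.read(0, "y")   # thread 1's store is still in its buffer
r1 = m.read(1, "x")   # thread 0's store is still in its buffer
assert (r0, r1) == (0, 0)     # outcome forbidden by sequential consistency
m.flush(0); m.flush(1)
assert m.mem == {"x": 1, "y": 1}
```

It is precisely because such reorderings are observable that the abstract specification, or the proof method, must account for buffered stores.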

    Admit your weakness: Verifying correctness on TSO architectures

    Linearizability has become the standard correctness criterion for fine-grained, non-atomic concurrent algorithms; however, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we study the correctness of concurrent algorithms on a weak memory model: the TSO (Total Store Order) memory model, which is commonly implemented by multicore architectures. Here, linearizability is often too strict, and hence we prove a weaker criterion, quiescent consistency, instead. Like linearizability, quiescent consistency is compositional, making it an ideal correctness criterion in a component-based context. We demonstrate how to model a typical concurrent algorithm, seqlock, and prove it quiescent consistent using a simulation-based approach. Previous approaches to proving correctness on TSO architectures have been based on linearizability, which makes it necessary to modify the algorithm's high-level requirements. Our approach is, to our knowledge, the first to prove correctness without the need for such a modification.
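    For readers unfamiliar with the case study, here is a minimal single-threaded sketch of the seqlock idiom (an illustration of the algorithm, not the paper's model): the writer bumps a sequence counter before and after updating, and a reader retries whenever it observes an odd counter (write in progress) or a counter that changed during its read:

```python
class SeqLock:
    """Sketch of a seqlock protecting a pair of values."""
    def __init__(self):
        self.seq = 0
        self.data = (0, 0)

    def write(self, a, b):
        self.seq += 1          # odd: write in progress
        self.data = (a, b)
        self.seq += 1          # even again: write complete

    def read(self):
        while True:
            s1 = self.seq
            if s1 % 2:         # writer active; retry
                continue
            snapshot = self.data
            if self.seq == s1: # counter unchanged: snapshot is consistent
                return snapshot

lock = SeqLock()
lock.write(1, 2)
assert lock.read() == (1, 2)
lock.write(3, 4)
assert lock.read() == (3, 4)
```

Under TSO the writer's two counter increments and the data stores sit in a store buffer, which is exactly why a criterion weaker than linearizability is considered.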

    What Developers Want and Need from Program Analysis: An Empirical Study

    Program analysis has been a rich and fruitful field of research for many decades, and countless high-quality program analysis tools have been produced by academia. Though there are some well-known examples of tools that have found their way into routine use by practitioners, a common challenge faced by researchers is knowing how to achieve broad and lasting adoption of their tools. In an effort to understand what makes a program analyzer most attractive to developers, we mounted a multi-method investigation at Microsoft. Through interviews and surveys of developers, as well as analysis of defect data, we provide insight and answers to four high-level research questions that can help researchers design program analyzers meeting the needs of software developers. First, we explore what barriers hinder the adoption of program analyzers, such as poorly expressed warning messages. Second, we shed light on what functionality developers want from analyzers, including the types of code issues that developers care about. Next, we answer what non-functional characteristics an analyzer should have to be widely used, how the analyzer should fit into the development process, and how its results should be reported. Finally, we investigate defects in one of Microsoft's flagship software services, to understand what types of code issues are most important to minimize, potentially through program analysis.

    Markerless attenuation correction for carotid MRI surface receiver coils in combined PET/MR imaging.

    The purpose of the study was to evaluate the effect of attenuation of MR coils on quantitative carotid PET/MR exams. Additionally, an automated attenuation correction method for flexible carotid MR coils was developed and evaluated. The attenuation of the carotid coil was measured by imaging a uniform water phantom injected with 37 MBq of 18F-FDG in a combined PET/MR scanner for 24 min with and without the coil. In the same session, an ultra-short echo time (UTE) image of the coil on top of the phantom was acquired. Using a combination of rigid and non-rigid registration, a CT-based attenuation map was registered to the UTE image of the coil for attenuation and scatter correction. After phantom validation, the effect of the carotid coil attenuation and the attenuation correction method were evaluated in five subjects. Phantom studies indicated that the overall loss of PET counts due to the coil was 6.3%, with local region-of-interest (ROI) errors reaching up to 18.8%. Our registration method to correct for attenuation from the coil decreased the global error and local error (ROI) to 0.8% and 3.8%, respectively. The proposed registration method accurately captured the location and shape of the coil with a maximum spatial error of 2.6 mm. Quantitative analysis in human studies correlated with the phantom findings, but was dependent on the size of the ROI used in the analysis. MR coils result in significant error in PET quantification and thus attenuation correction is needed. The proposed strategy provides an operator-free method for attenuation and scatter correction for a flexible MRI carotid surface coil for routine clinical use.

    Foundations for decision problems in separation logic with general inductive predicates

    We establish foundational results on the computational complexity of deciding entailment in Separation Logic with general inductive predicates whose underlying base language allows for pure formulas, pointers and existentially quantified variables. We show that entailment is in general undecidable, and ExpTime-hard in a fragment recently shown to be decidable by Iosif et al. Moreover, entailment in the base language is Π^p_2-complete, and the upper bound holds even in the presence of list predicates. We additionally show that entailment in essentially any fragment of Separation Logic allowing for general inductive predicates is intractable even when strong syntactic restrictions are imposed.

    Blood–brain barrier impairment in patients living with hiv: Predictors and associated biomarkers

    Despite the substantial changes resulting from the introduction of combination antiretroviral therapy (cART), the prevalence of HIV-associated neurocognitive disorders (HAND) remains substantial. Blood–brain barrier impairment (BBBi) is a frequent feature in people living with HIV (PLWH) and it may persist despite effective antiretroviral treatment. A cross-sectional study was performed in PLWH who underwent lumbar puncture for clinical reasons or research protocols, and several cerebrospinal fluid biomarkers were studied. BBBi was defined as a cerebrospinal fluid-to-serum albumin ratio (CSAR) >6.5 (<40 years) or >8 (>40 years). We included 464 participants: 147 cART-naïve and 317 on cART. Male sex was prevalent in both groups (72.1% and 72.2%, respectively); median age was 44 (38–52) years in naïve and 49 (43–57) years in treated subjects. BBBi was observed in 35.4% of naïve and 22.7% of treated participants; the use of integrase inhibitors was associated with a lower prevalence (18.3% vs. 30.9%, p = 0.050). At multivariate binary logistic regression (including age and sex), nadir CD4 cell count (p = 0.034), presence of central nervous system (CNS) opportunistic infections (p = 0.024) and cerebrospinal fluid (CSF) HIV RNA (p = 0.002) in naïve participants, and male sex (p = 0.021), a history of CNS opportunistic infections (p = 0.001) and CSF HIV RNA (p = 0.034) in treated patients, were independently associated with BBBi. CSF cells and neopterin were significantly higher in participants with BBBi. BBBi was prevalent in naïve and treated PLWH and was associated with CSF HIV RNA and neopterin. Systemic control of viral replication seems to be essential for BBB integrity, while the influence of sex and treatment needs further study.

    Model checking for symbolic-heap separation logic with inductive predicates

    We investigate the model checking problem for symbolic-heap separation logic with user-defined inductive predicates, i.e., the problem of checking that a given stack-heap memory state satisfies a given formula in this language, as arises e.g. in software testing or runtime verification. First, we show that the problem is decidable; specifically, we present a bottom-up fixed point algorithm that decides the problem and runs in exponential time in the size of the problem instance. Second, we show that, while model checking for the full language is EXPTIME-complete, the problem becomes NP-complete or PTIME-solvable when we impose natural syntactic restrictions on the schemata defining the inductive predicates. We additionally present NP and PTIME algorithms for these restricted fragments. Finally, we report on the experimental performance of our procedures on a variety of specifications extracted from programs, exercising multiple combinations of syntactic restrictions.
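    The bottom-up fixed-point idea can be illustrated on the standard list-segment predicate (a toy sketch; the heap representation and the predicate ls are illustrative, not the paper's definitions): starting from the models of the base case, repeatedly close the set of satisfying (sub-heap, arguments) triples under one unfolding of the inductive rule until nothing new is added:

```python
def ls_models(heap):
    """Bottom-up fixed point: compute all triples (cells, a, b) such that
    the sub-heap restricted to `cells` satisfies ls(a, b), under the
    inductive definition
        ls(a, b) := (a = b and emp) | (a -> c * ls(c, b)).
    Exponential in the heap size, mirroring the general algorithm."""
    addrs = set(heap) | set(heap.values()) | {0}          # 0 plays nil
    sat = {(frozenset(), a, a) for a in addrs}            # base case: emp
    changed = True
    while changed:
        changed = False
        for a, c in heap.items():                         # one unfolding
            for cells, x, b in list(sat):
                if x == c and a not in cells:             # a -> c * ls(c, b)
                    m = (cells | {a}, a, b)
                    if m not in sat:
                        sat.add(m)
                        changed = True
    return sat

heap = {1: 2, 2: 3, 3: 0}          # the list 1 -> 2 -> 3 -> nil
sat = ls_models(heap)
# The whole heap satisfies ls(1, nil) ...
assert (frozenset({1, 2, 3}), 1, 0) in sat
# ... but not ls(2, nil), which would leave cell 1 unaccounted for.
assert (frozenset({1, 2, 3}), 2, 0) not in sat
```

Tracking sub-heaps explicitly is what makes the separating conjunction precise, and also what makes the naive algorithm exponential; the syntactic restrictions in the paper are what bring this cost down.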

    Compositional Satisfiability Solving in Separation Logic

    We introduce a novel decision procedure for the satisfiability problem in array separation logic combined with general inductively defined predicates and arithmetic. Our proposal differs from existing works by solving satisfiability through compositional reasoning. First, following Fermat's method of infinite descent, it infers for every inductive definition a "base" that precisely characterises its satisfiability. It then uses these bases to derive a base for any formula in which the inductive predicates occur. In particular, we identify an expressive decidable fragment for this compositional approach. We have implemented the proposal in a tool and evaluated it on challenging problems. The experimental results show that compositional satisfiability solving is efficient, and that our tool is effective and efficient compared with existing solvers.
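    The notion of a "base" can be illustrated on a toy one-parameter predicate (a simplified sketch with an explicit bound; the function names and the fixed-point loop are illustrative, not the paper's algorithm): the base is the set of parameter values for which the predicate is satisfiable, computed as a least fixed point from the base case and the recursive step:

```python
def satisfiable_sizes(base_case, unfold, bound=30):
    """Least-fixed-point computation of the parameter values n < bound
    for which a one-parameter inductive predicate is satisfiable:
    start from the base case and close under one unfolding step.
    (A symbolic procedure would return a closed-form arithmetic
    characterisation instead of an explicit bounded set.)"""
    sat = {n for n in range(bound) if base_case(n)}
    changed = True
    while changed:
        changed = False
        for n in range(bound):
            if n not in sat and any(m in sat for m in unfold(n)):
                sat.add(n)
                changed = True
    return sat

# even(n) := n = 0 | exists m. n = m + 2 and even(m)
even = satisfiable_sizes(lambda n: n == 0,
                         lambda n: [n - 2] if n >= 2 else [])
assert even == set(range(0, 30, 2))   # the base: n >= 0 and n even
```

Once each predicate carries such a base, the satisfiability of a compound formula can be decided from the bases of its parts, which is the compositional step the abstract describes.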

    Ecological equivalence: a realistic assumption for niche theory as a testable alternative to neutral theory

    Hubbell's 2001 neutral theory unifies biodiversity and biogeography by modelling steady-state distributions of species richness and abundances across spatio-temporal scales. Accurate predictions have issued from its core premise that all species have identical vital rates. Yet no ecologist believes that species are identical in reality. Here I explain this paradox in terms of the ecological equivalence that species must achieve at their coexistence equilibrium, defined by zero net fitness for all regardless of intrinsic differences between them. I show that the distinction of realised from intrinsic vital rates is crucial to evaluating community resilience. An analysis of competitive interactions reveals how zero-sum patterns of abundance emerge for species with contrasting life-history traits as for identical species. I develop a stochastic model to simulate community assembly from a random drift of invasions sustaining the dynamics of recruitment following deaths and extinctions. Species are allocated identical intrinsic vital rates for neutral dynamics, or random intrinsic vital rates and competitive abilities for niche dynamics, either on a continuous scale or between dominant-fugitive extremes. Resulting communities have steady-state distributions of the same type for more or less extremely differentiated species as for identical species. All produce negatively skewed log-normal distributions of species abundance, zero-sum relationships of total abundance to area, and Arrhenius relationships of species to area. Intrinsically identical species nevertheless support fewer total individuals, because their densities impact as strongly on each other as on themselves. Truly neutral communities have measurably lower abundance/area and higher species/abundance ratios. Neutral scenarios can be parameterized as null hypotheses for testing competitive release, which is a sure signal of niche dynamics. Ignoring the true strength of interactions between and within species risks a substantial misrepresentation of community resilience to habitat loss.
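    The neutral limit of the stochastic assembly model can be sketched in a few lines (a minimal illustration with assumed parameter values J, S and nu, not the author's actual simulation): a zero-sum community of fixed size in which each death is replaced either by an offspring of a random resident or, with some probability, by an immigrant from the species pool:

```python
import random

def neutral_drift(J=200, S=20, nu=0.02, steps=20000, seed=1):
    """Hubbell-style zero-sum neutral dynamics: a community of fixed
    size J; each death is replaced by the offspring of a uniformly
    random resident (all species have identical vital rates) or, with
    probability nu, by an immigrant drawn from a pool of S species."""
    rng = random.Random(seed)
    community = [rng.randrange(S) for _ in range(J)]
    for _ in range(steps):
        dead = rng.randrange(J)                  # one random death
        if rng.random() < nu:
            community[dead] = rng.randrange(S)   # immigration event
        else:
            community[dead] = community[rng.randrange(J)]  # local birth
    return community

community = neutral_drift()
abundances = sorted((community.count(s) for s in set(community)),
                    reverse=True)
assert sum(abundances) == 200    # zero-sum: total abundance is fixed
assert len(abundances) <= 20     # richness bounded by the species pool
```

Replacing the identical vital rates with randomly drawn rates and competitive abilities would give the niche variants the abstract compares against this neutral baseline.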