172 research outputs found

    Incompleteness of relational simulations in the blocking paradigm

    Refinement is the notion of development between formal specifications. For specifications given in a relational formalism, downward and upward simulations are the standard method to verify that a refinement holds; their usefulness is based upon their soundness and joint completeness. This is known to be true for total relational specifications and has been claimed to hold for partial relational specifications in both the non-blocking and blocking interpretations. In this paper we show that downward and upward simulations in the blocking interpretation, where domains are guards, are not jointly complete. This contradicts earlier claims in the literature. We illustrate this with an example (based on one recently constructed by Reeves and Streader) and then construct a proof to show why joint completeness fails in general. (C) 2010 Elsevier B.V. All rights reserved.
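The downward-simulation condition mentioned in the abstract can be sketched with relations encoded as Python sets of pairs. The operations, state spaces, and retrieve relation below are toy illustrations of ours, not examples from the paper:

```python
# Toy check of the downward-simulation correctness condition
#   retr ; c_op  ⊆  a_op ; retr
# for relations given as sets of (input, output) pairs.
# All names and relations here are illustrative assumptions.

def compose(r, s):
    """Relational composition r ; s."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def is_downward_simulation(a_op, c_op, retr):
    """Does retr witness a downward simulation between a_op and c_op?"""
    return compose(retr, c_op) <= compose(a_op, retr)

# Abstract op doubles a number; the concrete op works on string encodings.
a_op = {(1, 2), (2, 4)}
c_op = {("1", "2"), ("2", "4")}
retr = {(1, "1"), (2, "2"), (4, "4")}  # retrieve relation

print(is_downward_simulation(a_op, c_op, retr))  # True
```

The paper's point is that conditions of this shape (together with the upward variant) are sound but, in the blocking interpretation, not jointly complete: some valid refinements are witnessed by neither.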

    Activity-Independent Prespecification of Synaptic Partners in the Visual Map of Drosophila

    Specifying synaptic partners and regulating synaptic numbers are at least partly activity-dependent processes during visual map formation in all systems investigated to date [1–5]. In Drosophila, six photoreceptors that view the same point in visual space have to be sorted into synaptic modules called cartridges in order to form a visuotopically correct map [6, 7]. Synapse numbers per photoreceptor terminal and cartridge are both precisely regulated [8–10]. However, it is unknown whether an activity-dependent mechanism or a genetically encoded developmental program regulates synapse numbers. We performed a large-scale quantitative ultrastructural analysis of photoreceptor synapses in mutants affecting the generation of electrical potentials (norpA, trp;trpl), neurotransmitter release (hdc, syt), vesicle endocytosis (synj), the trafficking of specific guidance molecules during photoreceptor targeting (sec15), a specific guidance receptor required for visual map formation (Dlar), and 57 other novel synaptic mutants affecting 43 genes. Remarkably, in all these mutants, individual photoreceptors form the correct number of synapses per presynaptic terminal independently of cartridge composition. Hence, our data show that each photoreceptor forms a precise and constant number of afferent synapses independently of neuronal activity and partner accuracy. Our data suggest cell-autonomous control of synapse numbers as part of a developmental program of activity-independent steps that lead to a “hard-wired” visual map in the fly brain.

    Nondeterministic Relational Semantics of a while Program

    A relational semantics is a mapping of programs to relations. We consider that the input-output semantics of a program is given by a relation on its set of states; in a nondeterministic context, this relation is calculated by considering the worst behavior of the program (demonic relational semantics). In this paper, we concentrate on while loops. Calculating the relational abstraction (semantics) of a loop is difficult, but showing the correctness of a candidate abstraction is much easier. For functional programs, Mills has described a checking method known as the while statement verification rule. A programming theorem for iterative constructs is proposed, proved, and applied to an example. This theorem can be considered a generalization of the while statement verification rule to nondeterministic loops.
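The "checking a candidate abstraction" idea can be made concrete on a finite state space. The encoding below (demonic composition of relations, and a tiny deterministic loop `while x > 0: x := x - 1`) is our illustrative assumption, not the paper's formalism:

```python
# Sketch: verify that a candidate relation is a fixed point of the
# demonic semantic function of `while x > 0: x := x - 1` on {0,1,2,3}.
# Relations are sets of (state, state) pairs; names are ours.

def dom(r):
    return {a for (a, _) in r}

def compose(r, s):
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def demonic_compose(r, s):
    """r ; s, restricted to inputs all of whose r-successors lie in dom(s)."""
    ok = {a for a in dom(r) if all(b in dom(s) for (x, b) in r if x == a)}
    return {(a, c) for (a, c) in compose(r, s) if a in ok}

states = range(4)
body = {(x, x - 1) for x in states if x > 0}   # guard ∧ loop body
exit_ = {(x, x) for x in states if x == 0}     # negated guard as identity

def loop_step(w):
    """One unfolding of the loop: (body demonically composed with w) ∪ exit."""
    return demonic_compose(body, w) | exit_

candidate = {(x, 0) for x in states}           # claimed loop semantics
print(loop_step(candidate) == candidate)       # True: candidate is a fixed point
```

Checking the candidate this way is a finite analogue of the verification rule: one unfolding of the loop leaves the claimed semantics unchanged.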

    Probabilistic Semantics for RoboChart: A Weakest Completion Approach

    We outline a probabilistic denotational semantics for the RoboChart language, a diagrammatic, domain-specific notation for describing robotic controllers with their hardware platforms and operating environments. We do this using a powerful (but perhaps not so well known) semantic technique: He, Morgan, and McIver’s weakest completion semantics, which is based on Hoare and He’s Unifying Theories of Programming. In this approach, we do the following: (1) start with the standard semantics for a nondeterministic programming language; (2) propose a new probabilistic semantic domain; (3) propose a forgetful function from the probabilistic semantic domain to the standard semantic domain; (4) use the converse of the forgetful function to embed the standard semantic domain in the probabilistic semantic domain; (5) demonstrate that this embedding preserves program structure; (6) define the probabilistic choice operator. Weakest completion semantics guides the semantic definition of new languages by building on existing semantics and, in this case, tackling a notoriously thorny issue: the relationship between demonic and probabilistic choice. Consistency ensures that programming intuitions, development techniques, and proof methods can be carried over from the standard language to the probabilistic one. We largely follow He et al., our contribution being an explication of the technique with meticulous proofs suitable for mechanisation in Isabelle/UTP.
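Steps (3) and (4), a forgetful function and its converse used as an embedding, can be illustrated in a drastically simplified setting. The two-state space, the half-step discretisation of distributions, and all function names below are our assumptions, not RoboChart's actual semantic domains:

```python
# Simplified sketch: a standard demonic outcome is a set of states; a
# probabilistic outcome is a set of distributions. A forgetful map takes
# supports; its converse embeds a state set as all distributions supported
# inside it, and forgetting after embedding is the identity (a retraction).

from fractions import Fraction

STATES = ("s0", "s1")

def distributions():
    """All distributions over STATES with probabilities in halves."""
    return [dict(zip(STATES, (Fraction(k, 2), Fraction(2 - k, 2))))
            for k in range(3)]

def forget(dists):
    """Forgetful function: union of the supports of the distributions."""
    return {s for d in dists for s in STATES if d[s] > 0}

def embed(state_set):
    """Converse of forget: every distribution supported inside state_set."""
    return [d for d in distributions() if forget([d]) <= state_set]

for S in [{"s0"}, {"s1"}, {"s0", "s1"}]:
    assert forget(embed(S)) == S   # embedding then forgetting is the identity
print("retraction holds")
```

The paper's construction works over proper semantic domains rather than finite enumerations, but the retraction property checked here is the shape of the consistency claim.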

    Demonic fixed points

    We deal with a relational model for the demonic semantics of programs. The demonic semantics of a while loop is given as a fixed point of a function involving the demonic operators. This motivates us to investigate the fixed points of these functions. We give the expression of the greatest fixed point of the semantic function with respect to the demonic ordering (demonic inclusion). We prove that this greatest fixed point coincides with the least fixed point of the same function with respect to the usual ordering (angelic inclusion). This is followed by an example of application.
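A rough sense of "least fixed point with respect to angelic inclusion" can be had by iterating the demonic semantic function of a toy loop from the empty relation until it stabilises. The loop, state space, and encodings below are our illustrative assumptions, not the paper's:

```python
# Iterate the demonic semantic function of `while x > 0: x := x - 1`
# on states {0,1,2,3}, starting from the empty relation, until a fixed
# point is reached. Relations are sets of (state, state) pairs.

def dom(r):
    return {a for (a, _) in r}

def demonic_compose(r, s):
    """r ; s restricted to inputs all of whose r-successors lie in dom(s)."""
    ok = {a for a in dom(r) if all(b in dom(s) for (x, b) in r if x == a)}
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2 and a in ok}

states = range(4)
body = {(x, x - 1) for x in states if x > 0}
exit_ = {(x, x) for x in states if x == 0}

def f(w):
    """Semantic function of the loop: one demonic unfolding plus exit."""
    return demonic_compose(body, w) | exit_

w, nxt = set(), f(set())
while nxt != w:                 # iterate until f leaves w unchanged
    w, nxt = nxt, f(nxt)
print(sorted(w))                # [(0, 0), (1, 0), (2, 0), (3, 0)]
```

Here every state is related to the final state 0, which is the expected semantics; the paper's result is that this least fixed point (under ⊆) is also the greatest one under the demonic ordering.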

    Sensitivity analysis for causality in observational studies for regulatory science

    Recognizing the importance of real-world data (RWD) for regulatory purposes, the United States (US) Congress passed the 21st Century Cures Act, mandating the development of Food and Drug Administration (FDA) guidance on the regulatory use of real-world evidence. The Forum on the Integration of Observational and Randomized Data (FIORD) conducted a meeting bringing together various stakeholder groups to build consensus around best practices for the use of RWD to support regulatory science. Our companion paper describes in detail the context and discussion of the meeting, which includes a recommendation to use a causal roadmap for complete pre-specification of study designs using RWD. This article discusses one step of the roadmap: the specification of a procedure for sensitivity analysis, defined as a procedure for testing the robustness of substantive conclusions to violations of assumptions made in the causal roadmap. We include a worked-out example of a sensitivity analysis from an RWD study on the effectiveness of Nifurtimox in treating Chagas disease, as well as an overview of various methods available for sensitivity analysis in causal inference, emphasizing practical considerations on their use for regulatory purposes.
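One widely used sensitivity-analysis procedure in causal inference (not necessarily the one used in the paper) is the E-value of VanderWeele and Ding: the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both treatment and outcome to explain away an observed effect. The effect estimates below are made up for illustration:

```python
# E-value sensitivity analysis (VanderWeele & Ding). For an observed
# risk ratio RR >= 1, E = RR + sqrt(RR * (RR - 1)); protective effects
# are handled by inverting RR first. Numbers below are hypothetical.

from math import sqrt

def e_value(rr):
    """Minimum confounder strength needed to explain away risk ratio rr."""
    rr = max(rr, 1 / rr)          # symmetric treatment of protective effects
    return rr + sqrt(rr * (rr - 1))

observed_rr = 2.0                 # hypothetical point estimate
ci_limit = 1.3                    # hypothetical CI bound closer to the null

print(round(e_value(observed_rr), 2))  # 3.41
print(round(e_value(ci_limit), 2))     # 1.92
```

A large E-value supports robustness of the substantive conclusion; an E-value near 1 means weak unmeasured confounding could already explain the result.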

    Regulation of branching dynamics by axon-intrinsic asymmetries in Tyrosine Kinase Receptor signaling

    Axonal branching allows a neuron to connect to several targets, increasing neuronal circuit complexity. While axonal branching is well described, the mechanisms that control it remain largely unknown. We find that in the Drosophila CNS, branches develop through a process of excessive growth followed by pruning. In vivo high-resolution live imaging of developing brains, as well as loss- and gain-of-function experiments, shows that activation of Epidermal Growth Factor Receptor (EGFR) is necessary for branch dynamics and the final branching pattern. Live imaging also reveals that intrinsic asymmetry in EGFR localization regulates the balance between dynamic and static filopodia. Elimination of signaling asymmetry by either loss or gain of EGFR function results in reduced dynamics leading to excessive branch formation. In summary, we propose that the dynamic process of axon branch development is mediated by differential local distribution of signaling receptors.

    Thirty-seven years of relational Hoare logic: remarks on its principles and history

    Relational Hoare logics extend the applicability of modular, deductive verification to encompass important 2-run properties, including dependency requirements such as confidentiality and program relations such as equivalence or similarity between program versions. A considerable number of recent works introduce different relational Hoare logics without yet converging on a core set of proof rules. This paper looks back to little-known early work. This brings to light some principles that clarify and organize the rules, as well as suggesting a new rule and a new notion of completeness. (A version appears in the proceedings of ISoLA 2020.)

    Computer simulation of glioma growth and morphology

    Despite major advances in the study of glioma, the quantitative links between intra-tumor molecular/cellular properties, clinically observable properties such as morphology, and critical tumor behaviors such as growth and invasiveness remain unclear, hampering more effective coupling of tumor physical characteristics with implications for prognosis and therapy. Although molecular biology, histopathology, and radiological imaging are employed in this endeavor, studies are severely challenged by the multitude of different physical scales involved in tumor growth, i.e., from molecular nanoscale to cell microscale and finally to tissue centimeter scale. Consequently, it is often difficult to determine the underlying dynamics across dimensions. New techniques are needed to tackle these issues. Here, we address this multi-scalar problem by employing a novel predictive three-dimensional mathematical and computational model based on first-principle equations (conservation laws of physics) that describe mathematically the diffusion of cell substrates and other processes determining tumor mass growth and invasion. The model uses conserved variables to represent known determinants of glioma behavior, e.g., cell density and oxygen concentration, as well as biological functional relationships and parameters linking phenomena at different scales whose specific forms and values are hypothesized and calculated based on in vitro and in vivo experiments and from histopathology of tissue specimens from human gliomas. This model enables correlation of glioma morphology to tumor growth by quantifying interdependence of tumor mass on the microenvironment (e.g., hypoxia, tissue disruption) and on the cellular phenotypes (e.g., mitosis and apoptosis rates, cell adhesion strength). 
    Once the functional relationships between variables and the associated parameter values have been calibrated, e.g., from histopathology or intra-operative analysis, this model can be used for disease diagnosis/prognosis, hypothesis testing, and to guide surgery and therapy. In particular, this tool identifies and quantifies the effects of vascularization and other cell-scale glioma morphological characteristics as predictors of tumor-scale growth and invasion.
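The conservation-law style of modeling the abstract describes can be sketched, in drastically reduced form, with a one-dimensional Fisher-KPP reaction-diffusion equation, a common baseline for glioma cell-density growth and invasion. This is not the paper's multiscale model, and every parameter value below is illustrative:

```python
# Minimal sketch: du/dt = D * d2u/dx2 + rho * u * (1 - u), solved with
# explicit Euler finite differences. u is a normalized tumor cell density;
# D (diffusivity) models invasion, rho models proliferation. The grid,
# time step, and parameter values are illustrative assumptions.

N, dx, dt = 100, 0.1, 0.001
D, rho = 0.05, 1.0
u = [0.0] * N
u[N // 2] = 1.0                       # small initial tumor in the middle

for _ in range(5000):                 # integrate to t = 5
    lap = [0.0] * N
    for i in range(1, N - 1):         # second difference in the interior;
        lap[i] = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
    # ends left at 0: the front never reaches the boundary in this run
    u = [ui + dt * (D * li + rho * ui * (1 - ui)) for ui, li in zip(u, lap)]

print(f"peak density {max(u):.2f}, cells above 0.5: {sum(ui > 0.5 for ui in u)}")
```

The time step satisfies the explicit-scheme stability bound (D*dt/dx² = 0.005 « 0.5). The run shows the two behaviors the abstract links: logistic saturation of the core (growth) and outward motion of the density front (invasion).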

    On refinement of software architectures

    Although increasingly popular, software component techniques still lack suitable formal foundations on top of which rigorous methodologies for the description and analysis of software architectures could be built. This paper aims to contribute in this direction: building on previous work by the authors on coalgebraic semantics, it discusses component refinement at three different but interrelated levels: behavioural; syntactic, i.e., relative to component interfaces; and architectural. Software architectures are defined through component aggregation. On the other hand, such aggregations, no matter how large and complex they are, can also be dealt with as components themselves, which paves the way to a discipline of hierarchical design. In this context, a major contribution of this paper is the introduction of a set of rules for architectural refinement. Keywords: software component, software architecture, refinement, coalgebra. Fundação para a Ciência e a Tecnologia (FCT).
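The idea that an aggregation of components is again a component of the same shape, which is what enables hierarchical design, can be sketched with components as Mealy-style step functions. This toy construction is ours, not the paper's coalgebraic calculus:

```python
# Toy sketch: a component is a function (state, input) -> (output, next state).
# Pipeline aggregation of two components yields another component of the
# same shape, so aggregates can themselves be aggregated hierarchically.

def pipe(f, g):
    """Sequential aggregation: feed f's output into g; state is a pair."""
    def h(state, x):
        sf, sg = state
        y, sf2 = f(sf, x)
        z, sg2 = g(sg, y)
        return z, (sf2, sg2)
    return h

def doubler(s, x):            # stateless component: outputs 2*x
    return 2 * x, s

def accumulator(s, x):        # stateful component: running sum
    return s + x, s + x

comp = pipe(doubler, accumulator)   # an aggregate, usable as a component
state, outs = (None, 0), []
for x in [1, 2, 3]:
    y, state = comp(state, x)
    outs.append(y)
print(outs)                   # [2, 6, 12]
```

Because `pipe(doubler, accumulator)` has the same signature as its parts, it can in turn be piped into further components, which is the closure property behind hierarchical design.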