
    Identifying the consequences of dynamic treatment strategies: A decision-theoretic overview

    We consider the problem of learning about and comparing the consequences of dynamic treatment strategies on the basis of observational data. We formulate this within a probabilistic decision-theoretic framework. Our approach is compared with related work by Robins and others: in particular, we show how Robins's 'G-computation' algorithm arises naturally from this decision-theoretic perspective. Careful attention is paid to the mathematical and substantive conditions required to justify the use of this formula. These conditions revolve around a property we term stability, which relates the probabilistic behaviours of observational and interventional regimes. We show how an assumption of 'sequential randomization' (or 'no unmeasured confounders'), or an alternative assumption of 'sequential irrelevance', can be used to infer stability. Probabilistic influence diagrams are used to simplify manipulations, and their power and limitations are discussed. We compare our approach with alternative formulations based on causal DAGs or potential response models. We aim to show that formulating the problem of assessing dynamic treatment strategies as a problem of decision analysis brings clarity, simplicity and generality. Comment: 49 pages, 15 figures
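    A minimal statement of Robins's G-computation formula that the abstract refers to, in its standard discrete-time form (the notation below, with treatment history $\bar a_t = (a_1, \dots, a_t)$ and covariate history $\bar l_t = (l_1, \dots, l_t)$, is assumed here rather than taken from the paper):

    \[
      P(y \mid g) \;=\; \sum_{l_1, \dots, l_T} P\bigl(y \mid \bar a_T, \bar l_T\bigr) \prod_{t=1}^{T} P\bigl(l_t \mid \bar a_{t-1}, \bar l_{t-1}\bigr),
    \]

    where each $a_t$ is the treatment prescribed by the strategy $g$ given the covariate history observed so far, and every conditional distribution on the right-hand side is evaluated under the observational regime. The stability (or sequential randomization) condition discussed in the abstract is what justifies transferring these observational distributions to the interventional regime.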

    Requirements modelling and formal analysis using graph operations

    The increasing complexity of enterprise systems requires a more advanced analysis of the representation of expected services than is currently possible. Consequently, the specification stage, which can be facilitated by formal verification, becomes very important to the system life-cycle. This paper presents a formal modelling approach that may be used to better represent the reality of the system and to verify the expected or existing properties of the system, taking the environmental characteristics into account. To that end, we first propose a formalization process based upon property specification, and second we use Conceptual Graph operations to develop reasoning mechanisms for verifying requirements statements. The graphical visualization of this reasoning enables us to capture the system specifications correctly by making it easier to determine whether the desired properties hold. The approach is applied to the field of Enterprise modelling.
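    The abstract does not spell out which Conceptual Graph operations it relies on, but the core reasoning step in such approaches is projection: a requirement is considered verified if its graph maps into the enterprise-model graph, allowing concepts to be specialized along a type hierarchy. A minimal illustrative sketch, using a hypothetical triple encoding and type hierarchy rather than the paper's own formalism:

# Illustrative sketch only: conceptual graphs encoded as (concept, relation, concept)
# triples, with a small hypothetical type hierarchy standing in for a real support.

TYPE_HIERARCHY = {
    "Service": {"Service", "PaymentService"},  # a PaymentService is a kind of Service
    "Actor": {"Actor", "Customer"},            # a Customer is a kind of Actor
}

def specializes(general, specific):
    """True if `specific` equals `general` or is one of its subtypes."""
    return specific in TYPE_HIERARCHY.get(general, {general})

def projects(requirement, model, mapping=None):
    """Backtracking search for a projection of the requirement graph into the model.

    Each requirement triple must map onto a model triple with the same relation
    and with concepts that specialize the requirement's concepts; a concept that
    is already mapped must keep the same image (graph-morphism condition).
    """
    mapping = dict(mapping or {})
    triples = list(requirement)
    if not triples:
        return True
    (c1, rel, c2), rest = triples[0], triples[1:]
    for (m1, mrel, m2) in model:
        if mrel != rel or not specializes(c1, m1) or not specializes(c2, m2):
            continue
        if mapping.get(c1, m1) != m1 or mapping.get(c2, m2) != m2:
            continue
        if projects(rest, model, {**mapping, c1: m1, c2: m2}):
            return True
    return False

# Requirement "some Actor requests some Service" is checked against the model graph.
requirement = [("Actor", "requests", "Service")]
model = [("Customer", "requests", "PaymentService"),
         ("PaymentService", "invokes", "BillingSystem")]
print(projects(requirement, model))  # True: the requirement projects into the model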

    Nestedness in Networks: A Theoretical Model and Some Applications

    We develop a dynamic network formation model that can explain the observed nestedness in real-world networks. Links are formed on the basis of agents’ centrality and have an exponentially distributed lifetime. We use stochastic stability to identify the networks to which the network formation process converges and find that they are nested split graphs. We completely determine the topological properties of the stochastically stable networks and show that they match features exhibited by real-world networks. Using four different network datasets, we empirically test our model and show that it fits the observed networks well. Keywords: nestedness, Bonacich centrality, network formation, nested split graphs.
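    A nested split graph can be recognized directly from its neighbourhoods: for any two vertices, the neighbourhood of the lower-degree vertex (excluding the other vertex) is contained in that of the higher-degree one. A minimal sketch of that check, offered as an illustration rather than as part of the paper's empirical procedure:

# Illustrative sketch: check whether an undirected graph is nested in the
# neighbourhood-inclusion sense used for nested split graphs, i.e. for any two
# vertices i, j with deg(i) <= deg(j), N(i) \ {j} is contained in N(j) \ {i}.

from itertools import combinations

def is_nested(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    for i, j in combinations(adj, 2):
        lo, hi = (i, j) if len(adj[i]) <= len(adj[j]) else (j, i)
        if not (adj[lo] - {hi}) <= (adj[hi] - {lo}):
            return False
    return True

# A small nested split graph: vertex 1 is linked to everyone, vertex 2 also to 3.
nested = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
# A 4-cycle is not nested: the neighbourhoods of vertices 1 and 2 are incomparable.
cycle = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(is_nested(nested), is_nested(cycle))  # True False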

    Graph cluster randomization: network exposure to multiple universes

    A/B testing is a standard approach for evaluating the effect of online experiments; the goal is to estimate the `average treatment effect' of a new feature or condition by exposing a sample of the overall population to it. A drawback with A/B testing is that it is poorly suited for experiments involving social interference, when the treatment of individuals spills over to neighboring individuals along an underlying social network. In this work, we propose a novel methodology using graph clustering to analyze average treatment effects under social interference. To begin, we characterize graph-theoretic conditions under which individuals can be considered to be `network exposed' to an experiment. We then show how graph cluster randomization admits an efficient exact algorithm to compute the probabilities for each vertex being network exposed under several of these exposure conditions. Using these probabilities as inverse weights, a Horvitz-Thompson estimator can then provide an effect estimate that is unbiased, provided that the exposure model has been properly specified. Given an estimator that is unbiased, we focus on minimizing the variance. First, we develop simple sufficient conditions for the variance of the estimator to be asymptotically small in n, the size of the graph. However, for general randomization schemes, this variance can be lower bounded by an exponential function of the degrees of a graph. In contrast, we show that if a graph satisfies a restricted-growth condition on the growth rate of neighborhoods, then there exists a natural clustering algorithm, based on vertex neighborhoods, for which the variance of the estimator can be upper bounded by a linear function of the degrees. Thus we show that proper cluster randomization can lead to exponentially lower estimator variance when experimentally measuring average treatment effects under interference. Comment: 9 pages, 2 figures
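    A minimal sketch of the estimation pipeline the abstract describes, under assumptions the abstract does not fix: clusters are assigned to treatment independently with probability p, and "network exposed" is taken to mean full-neighbourhood exposure (a vertex and all of its neighbours receive the same condition):

# Sketch of graph cluster randomization with a Horvitz-Thompson estimator,
# under the assumptions stated above (independent Bernoulli(p) cluster
# assignment, full-neighbourhood exposure model).

import random

def neighbourhood_clusters(v, adj, cluster_of):
    """Distinct clusters touching the closed neighbourhood {v} + N(v)."""
    return {cluster_of[v]} | {cluster_of[u] for u in adj[v]}

def horvitz_thompson_ate(adj, cluster_of, treated, outcomes, p):
    """Inverse-probability-weighted ATE estimate under full-neighbourhood exposure.

    The exposure probabilities are exact under independent cluster assignment:
    a vertex is treatment-exposed with probability p**k and control-exposed with
    probability (1-p)**k, where k is the number of clusters in its neighbourhood.
    """
    total = 0.0
    for v in adj:
        clusters = neighbourhood_clusters(v, adj, cluster_of)
        k = len(clusters)
        if all(c in treated for c in clusters):        # network exposed to treatment
            total += outcomes[v] / p ** k
        elif all(c not in treated for c in clusters):  # network exposed to control
            total -= outcomes[v] / (1 - p) ** k
    return total / len(adj)

# Toy example: six vertices partitioned into two clusters of neighbouring vertices.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
cluster_of = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
p = 0.5
treated = {c for c in ("A", "B") if random.random() < p}
outcomes = {v: 1.0 + (cluster_of[v] in treated) for v in adj}  # hypothetical outcomes
print(horvitz_thompson_ate(adj, cluster_of, treated, outcomes, p))

    Because the exposure probability of a vertex depends only on how many distinct clusters its closed neighbourhood touches, clusterings built from vertex neighbourhoods keep the inverse weights, and hence the estimator variance, under control, which is the intuition behind the variance bounds stated in the abstract.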

    Morphing surfaces for the control of boundary layer transition

    A structure configured to modify its surface morphology between a smooth state and a rough state in response to an applied stress. In the demonstrated examples, a soft (PDMS) substrate is produced and pre-strained. A relatively stiff overlayer of a metal, such as chromium and gold, is applied to the substrate. When the pre-strained substrate is allowed to relax, the free surface of the stiff overlayer is forced to become distorted, yielding a free surface having a roughness of less than 1 millimeter. Repeated application and removal of the applied stress has been shown to yield reproducible changes in the morphology of the free surface. An application of such a morphing free surface is to control the boundary layer transition of an aerodynamic fluid flowing over the surface.