
    Estimating the causal effect of a time-varying treatment on time-to-event using structural nested failure time models

    In this paper we review an approach to estimating the causal effect of a time-varying treatment on the time to some event of interest. This approach is designed for the situation where the treatment may have been repeatedly adapted to patient characteristics, which themselves may also be time-dependent. In this situation the effect of the treatment cannot be estimated simply by conditioning on the patient characteristics, as these may themselves be indicators of the treatment effect. This so-called time-dependent confounding is typical in observational studies. We discuss a new class of failure time models, structural nested failure time models, which can be used to estimate the causal effect of a time-varying treatment, and we present methods for estimating and testing the parameters of these models.
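
    As an illustration of the model class (a simplified one-parameter, rank-preserving version, not necessarily the specification used in the paper), the treatment-free failure time can be written as

        U(\psi) \;=\; \int_0^{T} \exp\{\psi\, A(u)\}\,\mathrm{d}u ,

    where T is the observed failure time and A(u) is the treatment in effect at time u. g-Estimation then chooses the value of \psi for which U(\psi) is conditionally independent of the treatment given at each time, given the past treatment and covariate history \bar{A}(t^{-}) and \bar{L}(t).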

    Reducing Prawn-trawl Bycatch in Australia: An Overview and an Example from Queensland

    Prawn trawling occurs in most states of Australia in tropical, subtropical, and temperate waters. Bycatch occurs to some degree in all Australian trawl fisheries, and there is pressure to reduce the levels of trawl fishery bycatch. This paper gives a brief overview of the bycatch issues and technological solutions that have been evaluated or adopted in Australian prawn-trawl fisheries. Turtle excluder devices (TEDs) and bycatch reduction devices (BRDs) are the principal solutions to bycatch in Australian prawn-trawl fisheries. This paper focuses on a major prawn-trawl fishery of northeastern Australia, and the results of commercial use of TEDs and BRDs in the Queensland east coast trawl fishery are presented. New industry designs are described, and the status of TED and BRD adoption and regulation is summarized. The implementation of technological solutions to reduce fishery bycatch is generally assumed to assist prawn-trawl fisheries within Australia in achieving legislative requirements for minimal environmental impact and ecologically sustainable development.

    Stability of continuously pumped atom lasers

    A multimode model of a continuously pumped atom laser is shown to be unstable below a critical value of the scattering length. Above this critical value the atom laser reaches a steady state, and its stability increases with pumping; below it, no steady state is reached. The instability results from the competition between gain and loss for the excited states of the lasing mode, and it will determine a fundamental limit for the linewidth of an atom laser beam.

    Nested Markov Properties for Acyclic Directed Mixed Graphs

    Directed acyclic graph (DAG) models may be characterized in at least four different ways: via a factorization, the d-separation criterion, the moralization criterion, and the local Markov property. As pointed out by Robins (1986, 1999), Verma and Pearl (1990), and Tian and Pearl (2002b), marginals of DAG models also imply equality constraints that are not conditional independences; the well-known `Verma constraint' is an example. Constraints of this type have been used for testing edges (Shpitser et al., 2009) and for an efficient marginalization scheme via variable elimination (Shpitser et al., 2011). We show that equality constraints like the `Verma constraint' can be viewed as conditional independences in kernel objects obtained from joint distributions via a fixing operation that generalizes conditioning and marginalization. We use these constraints to define, via Markov properties and a factorization, a graphical model associated with acyclic directed mixed graphs (ADMGs). We show that marginal distributions of DAG models lie in this model, prove that a characterization of these constraints given by Tian and Pearl (2002b) yields an alternative definition of the model, and finally show that the fixing operation used to define the model gives a particularly simple characterization of identifiable causal effects in hidden variable graphical causal models.
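
    As a concrete illustration (not taken from the paper; the graph, binary variables, and numpy-based check are assumptions made for this sketch), the following Python snippet builds a distribution whose hidden-variable DAG induces the classic `Verma constraint' on its observed margin and verifies the constraint numerically:

        # Numerical check of the `Verma constraint' for the DAG
        # X1 -> X2 -> X3 -> X4 with a hidden variable U -> X2 and U -> X4:
        # sum_{x2} P(x2 | x1) P(x4 | x1, x2, x3) does not depend on x1,
        # even though this is not an ordinary conditional independence.
        import numpy as np

        rng = np.random.default_rng(0)

        def rand_cpt(*shape):
            """Random conditional probability table; the last axis sums to 1."""
            t = rng.random(shape)
            return t / t.sum(axis=-1, keepdims=True)

        # Binary variables; tables are indexed by parents first, child last.
        p_x1 = rand_cpt(2)            # P(X1)
        p_u = rand_cpt(2)             # P(U)
        p_x2 = rand_cpt(2, 2, 2)      # P(X2 | X1, U)
        p_x3 = rand_cpt(2, 2)         # P(X3 | X2)
        p_x4 = rand_cpt(2, 2, 2)      # P(X4 | X3, U)

        # Joint over the observed variables, marginalizing out U.
        joint = np.einsum('a,u,aub,bc,cud->abcd', p_x1, p_u, p_x2, p_x3, p_x4)

        p12 = joint.sum(axis=(2, 3))                                 # P(X1, X2)
        p_x2_given_x1 = p12 / p12.sum(axis=1, keepdims=True)         # P(X2 | X1)
        p_x4_given_hist = joint / joint.sum(axis=3, keepdims=True)   # P(X4 | X1, X2, X3)

        # The Verma functional, indexed by (x1, x3, x4).
        verma = np.einsum('ab,abcd->acd', p_x2_given_x1, p_x4_given_hist)

        # The slices for x1 = 0 and x1 = 1 agree up to floating-point error.
        print(np.max(np.abs(verma[0] - verma[1])))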

    Sparse Nested Markov models with Log-linear Parameters

    Hidden variables are ubiquitous in practical data analysis, so modeling marginal densities and doing inference with the resulting models is an important problem in statistics, machine learning, and causal inference. Recently, a new type of graphical model, called the nested Markov model, was developed to capture the equality constraints found in marginals of directed acyclic graph (DAG) models. Some of these constraints, such as the so-called `Verma constraint', strictly generalize conditional independence. To make modeling and inference with nested Markov models practical, it is necessary to limit the number of parameters in the model while still correctly capturing the constraints in the marginal of a DAG model. Placing such limits is similar in spirit to sparsity methods for undirected graphical models and for regression models. In this paper, we give a log-linear parameterization which allows sparse modeling with nested Markov models, and we illustrate the advantages of this parameterization with a simulation study. (Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, UAI 2013.)
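
    To illustrate the general idea of sparse log-linear modeling (a generic lasso-penalized log-linear fit on a small contingency table, not the nested Markov parameterization itself; the table, true parameters, and optimizer settings are made up for this sketch):

        # L1-penalized Poisson log-linear model for a 2x2x2 contingency table,
        # fitted by proximal gradient descent (ISTA); interaction terms are penalized.
        import itertools
        import numpy as np

        rng = np.random.default_rng(1)
        cells = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)

        def features(c):
            a, b, d = c
            return np.array([1.0, a, b, d, a * b, a * d, b * d, a * b * d])

        X = np.array([features(c) for c in cells])        # 8 cells x 8 log-linear terms
        true_beta = np.array([3.0, 0.5, -0.4, 0.3, 0.8, 0.0, 0.0, 0.0])   # sparse truth
        counts = rng.poisson(np.exp(X @ true_beta))

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        beta = np.zeros(X.shape[1])
        lr, lam = 2e-4, 20.0
        penalized = np.arange(4, 8)                       # penalize interaction terms only
        for _ in range(100_000):
            grad = X.T @ (np.exp(X @ beta) - counts)      # Poisson negative log-likelihood gradient
            beta -= lr * grad
            beta[penalized] = soft_threshold(beta[penalized], lr * lam)

        print(np.round(beta, 2))   # interaction terms with no true signal are driven to (or near) zero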

    Second look at the spread of epidemics on networks

    In an important paper, M.E.J. Newman claimed that a general network-based stochastic Susceptible-Infectious-Removed (SIR) epidemic model is isomorphic to a bond percolation model, where the bonds are the edges of the contact network and the bond occupation probability is equal to the marginal probability of transmission from an infected node to a susceptible neighbor. In this paper, we show that this isomorphism is incorrect and define a semi-directed random network, which we call the epidemic percolation network, that is exactly isomorphic to the SIR epidemic model in any finite population. In the limit of a large population, (i) the distribution of (self-limited) outbreak sizes is identical to the size distribution of (small) out-components, (ii) the epidemic threshold corresponds to the phase transition where a giant strongly connected component appears, (iii) the probability of a large epidemic is equal to the probability that an initial infection occurs in the giant in-component, and (iv) the relative final size of an epidemic is equal to the proportion of the network contained in the giant out-component. For the SIR model considered by Newman, we show that the epidemic percolation network predicts the same mean outbreak size below the epidemic threshold, the same epidemic threshold, and the same final size of an epidemic as the bond percolation model. However, the bond percolation model fails to predict the correct outbreak size distribution and probability of an epidemic when the infectious period distribution is nondegenerate. We confirm our findings by comparing predictions from epidemic percolation networks and bond percolation models with the results of simulations. In an appendix, we show that an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model.
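
    The construction described above can be sketched in a few lines of Python. The following is illustrative only; the contact network, the fixed per-contact hazard, and the exponentially distributed infectious periods are assumptions not taken from the abstract:

        # Build an epidemic percolation network from an undirected contact network:
        # each node draws its own infectious period, and each edge is occupied
        # independently in each direction with probability
        # 1 - exp(-beta * infectious period of the source node).
        import networkx as nx
        import numpy as np

        rng = np.random.default_rng(0)
        beta = 0.3                                            # per-contact transmission hazard

        contact = nx.erdos_renyi_graph(2000, 0.002, seed=0)   # undirected contact network
        epn = nx.DiGraph()
        epn.add_nodes_from(contact.nodes)

        tau = rng.exponential(1.0, size=contact.number_of_nodes())
        for i, j in contact.edges:
            if rng.random() < 1.0 - np.exp(-beta * tau[i]):
                epn.add_edge(i, j)
            if rng.random() < 1.0 - np.exp(-beta * tau[j]):
                epn.add_edge(j, i)

        # Outbreak size from a seed = size of its out-component in the percolation network;
        # a giant strongly connected component signals the supercritical (epidemic) regime.
        seed = 0
        print(len(nx.descendants(epn, seed)) + 1)
        print(max(len(c) for c in nx.strongly_connected_components(epn)))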

    Using longitudinal targeted maximum likelihood estimation in complex settings with dynamic interventions.

    Longitudinal targeted maximum likelihood estimation (LTMLE) has very rarely been used to estimate dynamic treatment effects in the context of time-dependent confounding affected by prior treatment when faced with long follow-up times, multiple time-varying confounders, and complex associational relationships simultaneously. Reasons for this include the potential computational burden, technical challenges, restricted modeling options for long follow-up times, and limited practical guidance in the literature. However, LTMLE has desirable asymptotic properties, i.e., it is doubly robust, and it can yield valid inference when used in conjunction with machine learning. It also has the advantage of easy-to-calculate analytic standard errors, in contrast to the g-formula, which requires bootstrapping. We use a topical and sophisticated question from HIV treatment research to show that LTMLE can be used successfully in complex realistic settings, and we compare results to competing estimators. Our example illustrates the following practical challenges common to many epidemiological studies: (1) long follow-up time (30 months); (2) gradually declining sample size; (3) limited support for some intervention rules of interest; (4) a high-dimensional set of potential adjustment variables, increasing both the need for and the challenge of integrating appropriate machine learning methods; and (5) consideration of collider bias. Our analyses, as well as simulations, shed new light on the application of LTMLE in complex and realistic settings: we show that (1) LTMLE can yield stable and good estimates, even when confronted with small samples and limited modeling options; (2) machine learning with a small set of simple learners (if more complex ones cannot be fitted) can outperform a single complex model tailored to incorporate prior clinical knowledge; and (3) performance can vary considerably depending on the interventions and their support in the data, so critical quality checks should accompany every LTMLE analysis. We provide guidance for the practical application of LTMLE.
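
    For orientation, the following Python sketch shows only the sequential-regression (iterated conditional expectation) stage on which LTMLE builds, for a two-interval setting under the 'always treat' rule; it omits the TMLE targeting/updating step and the machine-learning nuisance models, and the simulated data, variable names, and parametric models are illustrative assumptions:

        # Sequential g-formula (ICE) for E[Y] under A0 = A1 = 1 with two time points.
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LinearRegression, LogisticRegression

        rng = np.random.default_rng(42)
        n = 5000

        # Simulated data: baseline confounder L0, treatment A0,
        # time-varying confounder L1 (affected by A0), treatment A1, binary outcome Y.
        L0 = rng.normal(size=n)
        A0 = rng.binomial(1, 1 / (1 + np.exp(-L0)))
        L1 = 0.5 * L0 - 0.4 * A0 + rng.normal(size=n)
        A1 = rng.binomial(1, 1 / (1 + np.exp(-(0.3 * L0 + 0.8 * L1 - 0.2 * A0))))
        Y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + L0 + L1 - 0.6 * A0 - 0.6 * A1))))
        df = pd.DataFrame({'L0': L0, 'A0': A0, 'L1': L1, 'A1': A1, 'Y': Y})

        # Step 1: regress Y on the full history, then predict with A1 set to 1.
        m1 = LogisticRegression(max_iter=1000).fit(df[['L0', 'A0', 'L1', 'A1']], df['Y'])
        Qbar1 = m1.predict_proba(df.assign(A1=1)[['L0', 'A0', 'L1', 'A1']])[:, 1]

        # Step 2: regress that prediction on the earlier history, then predict with A0 set to 1.
        m0 = LinearRegression().fit(df[['L0', 'A0']], Qbar1)
        Qbar0 = m0.predict(df.assign(A0=1)[['L0', 'A0']])

        # Step 3: the estimate of E[Y] under 'always treat' is the mean of the final predictions.
        print(Qbar0.mean())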

    Computational Topology Techniques for Characterizing Time-Series Data

    Topological data analysis (TDA), while abstract, allows a characterization of time-series data obtained from nonlinear and complex dynamical systems. Though it is surprising that such an abstract measure of structure - counting pieces and holes - could be useful for real-world data, TDA lets us compare different systems, and even do membership testing or change-point detection. However, TDA is computationally expensive and involves a number of free parameters. This computational cost can be reduced by coarse-graining, using a construct called the witness complex. The parametric dependence gives rise to the concept of persistent homology: how shape changes with scale. The results allow us to distinguish time-series data from different systems - e.g., the same note played on different musical instruments. (Presented at the Sixteenth International Symposium on Intelligent Data Analysis, IDA 2017.)
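
    A common way to apply TDA to a time series is to first delay-embed it into a point cloud and then coarse-grain that cloud. The Python sketch below shows those two preprocessing steps; the synthetic sine signal, embedding dimension, delay, and number of landmarks are illustrative assumptions, not values from the paper:

        # Delay-coordinate embedding plus greedy 'maxmin' landmark selection,
        # the coarse-graining step on which a witness complex is built.
        import numpy as np

        def delay_embed(x, dim, tau):
            """Rows are (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

        def maxmin_landmarks(points, k):
            """Greedily pick k landmarks; each new one is the point farthest from those chosen."""
            landmarks = [0]
            dist = np.linalg.norm(points - points[0], axis=1)
            for _ in range(k - 1):
                nxt = int(np.argmax(dist))
                landmarks.append(nxt)
                dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
            return np.array(landmarks)

        t = np.linspace(0, 20 * np.pi, 2000)
        signal = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

        cloud = delay_embed(signal, dim=3, tau=8)       # point cloud in R^3
        lm = maxmin_landmarks(cloud, k=20)              # indices of 20 well-spread landmarks
        print(cloud.shape, lm[:5])
        # The witness complex (and its persistent homology) would then be computed on
        # these landmarks, with the remaining points acting as witnesses.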

    Pulsed pumping of a Bose-Einstein condensate

    In this work, we examine a system for coherent transfer of atoms into a Bose-Einstein condensate. We use two spatially separate Bose-Einstein condensates in different hyperfine ground states held in the same dc magnetic trap. By means of a pulsed transfer of atoms, we show a clear resonance in the timing of the transfer, in both temperature and number, from which we draw conclusions about the underlying physical process. The results are discussed in the context of the recently demonstrated pumped atom laser.

    Tactile Interactions with a Humanoid Robot: Novel Play Scenario Implementations with Children with Autism

    The work presented in this paper was part of our investigation in the ROBOSKIN project. The project has developed new robot capabilities based on the tactile feedback provided by novel robotic skin, with the aim of providing cognitive mechanisms that improve human-robot interaction. This article presents two novel tactile play scenarios developed for robot-assisted play for children with autism. The play scenarios were developed against specific educational and therapeutic objectives that were discussed with teachers and therapists, and these objectives were classified with reference to the ICF-CY, the International Classification of Functioning – version for Children and Youth. The article presents a detailed description of the play scenarios, together with case study examples of their implementation in HRI studies with children with autism and the humanoid robot KASPAR.