
    Estimating the causal effect of a time-varying treatment on time-to-event using structural nested failure time models

    In this paper, we review an approach to estimating the causal effect of a time-varying treatment on time to some event of interest. This approach is designed for the situation where the treatment may have been repeatedly adapted to patient characteristics, which themselves may also be time-dependent. In this situation the effect of the treatment cannot simply be estimated by conditioning on the patient characteristics, as these may themselves be indicators of the treatment effect. This so-called time-dependent confounding is typical in observational studies. We discuss a new class of failure time models, structural nested failure time models, which can be used to estimate the causal effect of a time-varying treatment, and present methods for estimating and testing the parameters of these models.
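
    To make the kind of model concrete, the simplest (rank-preserving) structural nested failure time model relates the observed failure time to the counterfactual failure time that would have been seen without treatment. The sketch below uses generic notation and is only illustrative, not the paper's exact parameterization.

    ```latex
    % T: observed failure time; A(t): treatment process; \bar{A}, \bar{L}:
    % treatment and covariate histories.  The parameter \psi encodes the effect.
    U(\psi) \;=\; \int_0^{T} \exp\{\psi\, A(t)\}\, dt
    % U(\psi_0) plays the role of the treatment-free failure time, so the true
    % \psi_0 is found by g-estimation: it is the value for which U(\psi) is
    % conditionally independent of current treatment given the observed past,
    A(t) \;\perp\; U(\psi_0) \;\big|\; \bar{A}(t^-),\, \bar{L}(t).
    ```

    Searching over candidate values of the parameter and testing this independence gives both point estimates and confidence intervals, which is essentially the estimation-and-testing machinery the abstract refers to.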

    Reducing Prawn-trawl Bycatch in Australia: An Overview and an Example from Queensland

    Prawn trawling occurs in most states of Australia in tropical, subtropical, and temperate waters. Bycatch occurs to some degree in all Australian trawl fisheries, and there is pressure to reduce the levels of trawl fishery bycatch. This paper gives a brief overview of the bycatch issues and technological solutions that have been evaluated or adopted in Australian prawn-trawl fisheries. Turtle excluder devices (TEDs) and bycatch reduction devices (BRDs) are the principal solutions to bycatch in Australian prawn-trawl fisheries. This paper focuses on a major prawn-trawl fishery of northeastern Australia, and the results of commercial use of TEDs and BRDs in the Queensland east coast trawl fishery are presented. New industry designs are described, and the status of TED and BRD adoption and regulation is summarized. The implementation of technological solutions to reduce fishery bycatch is generally assumed to assist prawn-trawl fisheries within Australia in achieving legislative requirements for minimal environmental impact and ecologically sustainable development.

    Nested Markov Properties for Acyclic Directed Mixed Graphs

    Directed acyclic graph (DAG) models may be characterized in at least four different ways: via a factorization, the d-separation criterion, the moralization criterion, and the local Markov property. As pointed out by Robins (1986, 1999), Verma and Pearl (1990), and Tian and Pearl (2002b), marginals of DAG models also imply equality constraints that are not conditional independences. The well-known "Verma constraint" is an example. Constraints of this type have been used for testing edges (Shpitser et al., 2009) and for an efficient marginalization scheme via variable elimination (Shpitser et al., 2011). We show that equality constraints like the Verma constraint can be viewed as conditional independences in kernel objects obtained from joint distributions via a fixing operation that generalizes conditioning and marginalization. We use these constraints to define, via Markov properties and a factorization, a graphical model associated with acyclic directed mixed graphs (ADMGs). We show that marginal distributions of DAG models lie in this model, prove that a characterization of these constraints given in Tian and Pearl (2002b) gives an alternative definition of the model, and finally show that the fixing operation we used to define the model can be used to give a particularly simple characterization of identifiable causal effects in hidden variable graphical causal models. Comment: 67 pages (not including appendix and references), 8 figures.
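
    For readers unfamiliar with the Verma constraint, the classic example (written here in generic notation, not necessarily the paper's) is an observed chain A -> B -> C -> D with a hidden common cause of B and D. Marginalizing the hidden variable produces no new conditional independence among the observed variables, yet an equality constraint remains, and it becomes an ordinary independence in the kernel obtained by fixing C.

    ```latex
    % Kernel obtained from p(a,b,c,d) by "fixing" C, i.e. dividing out the
    % propensity of C given its observed past:
    q(a, b, d \mid c) \;=\; \frac{p(a, b, c, d)}{p(c \mid a, b)}
                     \;=\; p(a)\, p(b \mid a)\, p(d \mid a, b, c).
    % In this kernel D is independent of A; in terms of the original
    % distribution this is the Verma equality constraint:
    \sum_b p(d \mid a, b, c)\, p(b \mid a)
        \quad \text{is a function of } (c, d) \text{ only.}
    ```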

    Computational Topology Techniques for Characterizing Time-Series Data

    Topological data analysis (TDA), while abstract, allows a characterization of time-series data obtained from nonlinear and complex dynamical systems. Though it is surprising that such an abstract measure of structure - counting pieces and holes - could be useful for real-world data, TDA lets us compare different systems, and even do membership testing or change-point detection. However, TDA is computationally expensive and involves a number of free parameters. This complexity can be obviated by coarse-graining, using a construct called the witness complex. The parametric dependence gives rise to the concept of persistent homology: how shape changes with scale. Its results allow us to distinguish time-series data from different systems - e.g., the same note played on different musical instruments. Comment: 12 pages, 6 figures, 1 table; The Sixteenth International Symposium on Intelligent Data Analysis (IDA 2017).
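
    As a rough illustration of the pipeline (delay-coordinate embedding of the time series followed by persistent homology), the sketch below uses a plain Rips filtration via the ripser package rather than the witness complex discussed above; the toy signal, embedding parameters, and library choice are assumptions for illustration only.

    ```python
    import numpy as np
    from ripser import ripser  # any persistent-homology library (e.g. GUDHI) would do


    def delay_embed(x, dim=3, tau=30):
        """Takens delay-coordinate embedding of a 1-D signal into R^dim."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])


    # Toy signal: a noisy sine traces out a loop in delay coordinates, so one
    # long-lived 1-dimensional feature (a hole) should dominate the H1 diagram.
    t = np.linspace(0, 8 * np.pi, 500)
    signal = np.sin(t) + 0.05 * np.random.randn(t.size)

    cloud = delay_embed(signal, dim=3, tau=30)
    diagrams = ripser(cloud, maxdim=1)["dgms"]   # [H0 diagram, H1 diagram]
    h1 = diagrams[1]
    print("most persistent loop lifetime:", (h1[:, 1] - h1[:, 0]).max())
    ```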

    Stability of continuously pumped atom lasers

    A multimode model of a continuously pumped atom laser is shown to be unstable below a critical value of the scattering length. Above the critical scattering length, the atom laser reaches a steady state, the stability of which increases with pumping. Below this limit the laser does not reach a steady state. This instability results from the competition between gain and loss for the excited states of the lasing mode. It will determine a fundamental limit for the linewidth of an atom laser beam. Comment: 4 pages.

    Using longitudinal targeted maximum likelihood estimation in complex settings with dynamic interventions.

    Longitudinal targeted maximum likelihood estimation (LTMLE) has very rarely been used to estimate dynamic treatment effects in the context of time-dependent confounding affected by prior treatment when faced with long follow-up times, multiple time-varying confounders, and complex associational relationships simultaneously. Reasons for this include the potential computational burden, technical challenges, restricted modeling options for long follow-up times, and limited practical guidance in the literature. However, LTMLE has desirable asymptotic properties, i.e., it is doubly robust, and can yield valid inference when used in conjunction with machine learning. It also has the advantage of easy-to-calculate analytic standard errors, in contrast to the g-formula, which requires bootstrapping. We use a topical and sophisticated question from HIV treatment research to show that LTMLE can be used successfully in complex realistic settings, and we compare results to competing estimators. Our example illustrates the following practical challenges common to many epidemiological studies: (1) long follow-up time (30 months); (2) gradually declining sample size; (3) limited support for some intervention rules of interest; (4) a high-dimensional set of potential adjustment variables, increasing both the need for and the challenge of integrating appropriate machine learning methods; and (5) consideration of collider bias. Our analyses, as well as simulations, shed new light on the application of LTMLE in complex and realistic settings: we show that (1) LTMLE can yield stable and good estimates, even when confronted with small samples and limited modeling options; (2) machine learning utilized with a small set of simple learners (if more complex ones cannot be fitted) can outperform a single, complex model tailored to incorporate prior clinical knowledge; and (3) performance can vary considerably depending on interventions and their support in the data, and therefore critical quality checks should accompany every LTMLE analysis. We provide guidance for the practical application of LTMLE.
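
    For orientation, the estimand LTMLE targets can be written as the iterated-conditional-expectation form of the longitudinal g-formula. The notation below is generic (time points t = 0, ..., K, treatments A_t, covariates L_t, outcome Y, dynamic rule d) and sketches the general machinery, not the paper's specific analysis.

    ```latex
    % Sequential-regression (iterated conditional expectation) g-formula:
    \bar{Q}_{K+1} = Y, \qquad
    \bar{Q}_t = \mathbb{E}\!\left[\,\bar{Q}_{t+1} \mid \bar{L}_t,\ \bar{A}_t = \bar{d}_t(\bar{L}_t)\right]
                \quad (t = K, \dots, 0), \qquad
    \psi_0 = \mathbb{E}\!\left[\bar{Q}_0\right].
    % LTMLE fits each regression \bar{Q}_t (possibly with machine learning) and
    % then updates it along a fluctuation whose "clever covariate" is the
    % cumulative inverse probability of having followed the rule,
    H_t \;=\; \prod_{s=0}^{t}
        \frac{\mathbb{1}\{A_s = d_s(\bar{L}_s)\}}{g_s\!\left(A_s \mid \bar{A}_{s-1}, \bar{L}_s\right)},
    % which is what yields double robustness and the influence-curve-based
    % (analytic) standard errors mentioned above.
    ```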

    A multibeam atom laser: coherent atom beam splitting from a single far detuned laser

    We report the experimental realisation of a multibeam atom laser. A single continuous atom laser is outcoupled from a Bose-Einstein condensate (BEC) via an optical Raman transition. The atom laser is subsequently split into up to five atomic beams with slightly different momenta, resulting in multiple, nearly co-propagating, coherent beams which could be of use in interferometric experiments. The splitting process itself is a novel realisation of Bragg diffraction, driven by each of the optical Raman laser beams independently. This presents a significantly simpler implementation of an atomic beam splitter, one of the main elements of coherent atom optics.

    Pseudorehearsal in value function approximation

    Catastrophic forgetting is of special importance in reinforcement learning, as the data distribution is generally non-stationary over time. We study and compare several pseudorehearsal approaches for Q-learning with function approximation in a pole-balancing task. We have found that pseudorehearsal seems to assist learning even in such very simple problems, given proper initialization of the rehearsal parameters.
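
    The core pseudorehearsal idea can be sketched in a few lines: label random "pseudo-items" with the current network's own outputs and keep rehearsing them while learning from new transitions, so the approximator is discouraged from overwriting what it already represents. The PyTorch sketch below assumes a small Q-network and arbitrary hyperparameters; it is not the authors' exact configuration.

    ```python
    import torch
    import torch.nn as nn

    STATE_DIM, N_ACTIONS, N_PSEUDO = 4, 2, 64   # pole-balancing-like dimensions

    q_net = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.Tanh(), nn.Linear(32, N_ACTIONS))
    optimizer = torch.optim.SGD(q_net.parameters(), lr=1e-2)

    # Pseudo-items: random inputs labelled with the *current* network's outputs.
    # Rehearsing them alongside new data discourages forgetting without storing
    # any real past experience.
    pseudo_states = torch.rand(N_PSEUDO, STATE_DIM) * 2 - 1
    with torch.no_grad():
        pseudo_targets = q_net(pseudo_states)


    def q_update(state, action, reward, next_state, gamma=0.99, rehearsal_weight=1.0):
        """One Q-learning step regularized by pseudorehearsal."""
        with torch.no_grad():
            td_target = reward + gamma * q_net(next_state).max()
        q_sa = q_net(state)[action]
        td_loss = (q_sa - td_target) ** 2
        rehearsal_loss = ((q_net(pseudo_states) - pseudo_targets) ** 2).mean()
        loss = td_loss + rehearsal_weight * rehearsal_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


    # Example usage with a dummy transition:
    q_update(torch.rand(STATE_DIM), action=0, reward=1.0, next_state=torch.rand(STATE_DIM))
    ```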

    Second look at the spread of epidemics on networks

    In an important paper, M.E.J. Newman claimed that a general network-based stochastic Susceptible-Infectious-Removed (SIR) epidemic model is isomorphic to a bond percolation model, where the bonds are the edges of the contact network and the bond occupation probability is equal to the marginal probability of transmission from an infected node to a susceptible neighbor. In this paper, we show that this isomorphism is incorrect and define a semi-directed random network we call the epidemic percolation network that is exactly isomorphic to the SIR epidemic model in any finite population. In the limit of a large population, (i) the distribution of (self-limited) outbreak sizes is identical to the size distribution of (small) out-components, (ii) the epidemic threshold corresponds to the phase transition where a giant strongly-connected component appears, (iii) the probability of a large epidemic is equal to the probability that an initial infection occurs in the giant in-component, and (iv) the relative final size of an epidemic is equal to the proportion of the network contained in the giant out-component. For the SIR model considered by Newman, we show that the epidemic percolation network predicts the same mean outbreak size below the epidemic threshold, the same epidemic threshold, and the same final size of an epidemic as the bond percolation model. However, the bond percolation model fails to predict the correct outbreak size distribution and probability of an epidemic when there is a nondegenerate infectious period distribution. We confirm our findings by comparing predictions from percolation networks and bond percolation models to the results of simulations. In an appendix, we show that an isomorphism to an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model. Comment: 29 pages, 5 figures.
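
    The construction of an epidemic percolation network can be sketched directly: draw an infectious period for every node, then keep each directed edge i -> j with the probability that i would transmit to j during that period (an edge kept in both directions plays the role of the undirected, "semi-directed" case described above). The networkx sketch below assumes a constant transmission rate and exponentially distributed infectious periods; these choices are illustrative, not the paper's.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    beta = 0.4                                                # per-contact transmission rate
    contacts = nx.erdos_renyi_graph(n=5000, p=4 / 5000, seed=1)  # toy contact network

    # Infectious period tau_i for every node; transmission i -> j happens with
    # probability 1 - exp(-beta * tau_i), conditional on tau_i.
    tau = rng.exponential(scale=1.0, size=contacts.number_of_nodes())

    epn = nx.DiGraph()
    epn.add_nodes_from(contacts.nodes())
    for i, j in contacts.edges():
        if rng.random() < 1 - np.exp(-beta * tau[i]):
            epn.add_edge(i, j)   # i would infect j if i were infected
        if rng.random() < 1 - np.exp(-beta * tau[j]):
            epn.add_edge(j, i)   # j would infect i if j were infected

    # The giant strongly connected component and its in-/out-components relate,
    # in a large population, to the epidemic threshold, the probability of a
    # large epidemic, and its relative final size.
    gscc = max(nx.strongly_connected_components(epn), key=len)
    print("largest strongly connected component:", len(gscc), "of", epn.number_of_nodes())
    ```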

    Urban encounters: juxtapositions of difference and the communicative interface of global cities

    This article explores the communicative interface of global cities, especially as it is shaped in the juxtapositions of difference in culturally diverse urban neighbourhoods. These urban zones present powerful examples, where different groups live cheek by jowl, in close proximity and in intimate interaction — desired or unavoidable. In these urban locations, the need to manage difference is synonymous with making them liveable and one's own. In seeking (and sometimes finding) a location in the city and a location in the world, urban dwellers shape their communication practices as forms of everyday, mundane and bottom-up tactics for the management of diversity. The article looks at three particular areas where cultural diversity and urban communication practices come together into meaningful political and cultural relations for a sustainable cosmopolitan life: citizenship, imagination and identity.