
    Second look at the spread of epidemics on networks

    In an important paper, M.E.J. Newman claimed that a general network-based stochastic Susceptible-Infectious-Removed (SIR) epidemic model is isomorphic to a bond percolation model, where the bonds are the edges of the contact network and the bond occupation probability is equal to the marginal probability of transmission from an infected node to a susceptible neighbor. In this paper, we show that this isomorphism is incorrect and define a semi-directed random network we call the epidemic percolation network that is exactly isomorphic to the SIR epidemic model in any finite population. In the limit of a large population, (i) the distribution of (self-limited) outbreak sizes is identical to the size distribution of (small) out-components, (ii) the epidemic threshold corresponds to the phase transition where a giant strongly-connected component appears, (iii) the probability of a large epidemic is equal to the probability that an initial infection occurs in the giant in-component, and (iv) the relative final size of an epidemic is equal to the proportion of the network contained in the giant out-component. For the SIR model considered by Newman, we show that the epidemic percolation network predicts the same mean outbreak size below the epidemic threshold, the same epidemic threshold, and the same final size of an epidemic as the bond percolation model. However, the bond percolation model fails to predict the correct outbreak size distribution and probability of an epidemic when there is a nondegenerate infectious period distribution. We confirm our findings by comparing predictions from epidemic percolation networks and bond percolation models to the results of simulations. In an appendix, we show that an isomorphism to an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model. Comment: 29 pages, 5 figures.
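
    A minimal simulation sketch of the distinction (all parameters hypothetical, using networkx): in the SIR model each infected node draws a single infectious period that correlates all of its potential transmissions, whereas bond percolation opens every edge independently with the same marginal probability. The two agree on the mean outbreak size below threshold but not on the outbreak-size distribution.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        beta = 1.0  # hypothetical transmission rate; infectious periods ~ Exp(1)

        def sir_outbreak(G, beta, rng):
            # one infectious period per node; all transmissions from a node
            # share it, which correlates that node's outgoing "bonds"
            T = rng.exponential(1.0, size=len(G))
            start = int(rng.integers(len(G)))
            infected, frontier = {start}, [start]
            while frontier:
                nxt = []
                for u in frontier:
                    p_u = 1.0 - np.exp(-beta * T[u])  # transmission prob. given T_u
                    for v in G.neighbors(u):
                        if v not in infected and rng.random() < p_u:
                            infected.add(v)
                            nxt.append(v)
                frontier = nxt
            return len(infected)

        def bond_outbreak(G, p, rng):
            # independent bond percolation with the same *marginal* probability
            H = nx.Graph()
            H.add_nodes_from(G)
            H.add_edges_from(e for e in G.edges if rng.random() < p)
            return len(nx.node_connected_component(H, int(rng.integers(len(G)))))

        G = nx.erdos_renyi_graph(2000, 3 / 2000, seed=1)
        p = beta / (1.0 + beta)  # E[1 - exp(-beta*T)] for T ~ Exp(1)
        sir_sizes = [sir_outbreak(G, beta, rng) for _ in range(500)]
        bond_sizes = [bond_outbreak(G, p, rng) for _ in range(500)]
        # means agree below threshold; the size distributions do not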

    Semiparametric Bivariate Quantile-Quantile Regression for Analyzing Semi-Competing Risks Data

    In this paper, we consider estimation of the effect of a randomized treatment on time to disease progression and death, possibly adjusting for high-dimensional baseline prognostic factors. We assume that patients may or may not have a specific type of disease progression prior to death and that those who have this endpoint are followed for their survival information. Progression and survival may also be censored due to loss to follow-up or study termination. We posit a semi-parametric bivariate quantile-quantile regression failure time model and show how to construct estimators of the regression parameters. The causal interpretation of the parameters depends on non-identifiable assumptions. We discuss two assumptions: the first applies to situations where it is reasonable to view disease progression as well defined after death, and the second applies to situations where such a view is unreasonable. We conduct a simulation study and analyze data from a randomized trial for the treatment of brain cancer.
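
    The paper's estimators are beyond a short sketch, but the semi-competing risks data structure it targets is straightforward to simulate (all parameter values below are hypothetical): death censors progression, progression does not censor death, and both may be censored by loss to follow-up.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        z = rng.integers(0, 2, size=n)                   # randomized treatment arm
        t_prog = rng.exponential(2.0 * np.exp(0.3 * z))  # latent progression time
        t_death = rng.exponential(4.0 * np.exp(0.2 * z)) # latent death time
        c = rng.exponential(6.0, size=n)                 # independent censoring

        y = np.minimum(t_death, c)                       # observed survival time
        d_death = t_death <= c                           # death indicator
        x = np.minimum(t_prog, y)                        # observed progression time
        d_prog = t_prog <= y                             # progression indicator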

    Quantifying structure in networks

    We investigate exponential families of random graph distributions as a framework for the systematic quantification of structure in networks. In this paper we restrict ourselves to undirected unlabeled graphs. For these graphs, the counts of subgraphs with no more than k links are sufficient statistics for the exponential families of graphs with interactions between at most k links. In this framework we investigate the dependencies between several observables commonly used to quantify structure in networks, such as the degree distribution and the clustering and assortativity coefficients. Comment: 17 pages, 3 figures.
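
    As a concrete illustration (a sketch using networkx on an arbitrary random graph), the counts of connected subgraphs with few links, and the observables derived from them, can be computed directly from the adjacency matrix:

        import numpy as np
        import networkx as nx

        G = nx.erdos_renyi_graph(200, 0.05, seed=0)
        A = nx.to_numpy_array(G)

        edges = A.sum() / 2                                      # 1-link subgraphs
        two_stars = sum(d * (d - 1) / 2 for _, d in G.degree())  # 2-link subgraphs
        triangles = np.trace(A @ A @ A) / 6                      # 3-link subgraphs

        # common structure observables are functions of these counts
        clustering = nx.transitivity(G)  # = 3 * triangles / two_stars
        assortativity = nx.degree_assortativity_coefficient(G)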

    Exponential Random Graph Modeling for Complex Brain Networks

    Exponential random graph models (ERGMs), also known as p* models, have been utilized extensively in the social science literature to study complex networks and how their global structure depends on underlying structural components. However, the literature on their use in biological networks (especially brain networks) has remained sparse. Descriptive models based on a specific feature of the graph (clustering coefficient, degree distribution, etc.) have dominated connectivity research in neuroscience. Corresponding generative models have been developed to reproduce one of these features. However, the complexity inherent in whole-brain network data necessitates the development and use of tools that allow the systematic exploration of several features simultaneously and of how they interact to form the global network architecture. ERGMs provide a statistically principled approach to assessing how a set of interacting local brain network features gives rise to the global structure. We illustrate the utility of ERGMs for modeling, analyzing, and simulating complex whole-brain networks with network data from normal subjects. We also provide a foundation for the selection of important local features through the implementation and assessment of three selection approaches: a traditional p-value-based backward selection approach, an information criterion approach (AIC), and a graphical goodness-of-fit (GOF) approach. The graphical GOF approach serves as the best method, given the scientific interest in capturing and reproducing the structure of fitted brain networks.
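
    Dedicated ERGM software is normally used for such fits; as a self-contained sketch, the maximum pseudo-likelihood estimate for a toy two-term model (edge and triangle terms, on a stand-in graph rather than real brain data) reduces to a logistic regression of each dyad on its change statistics:

        import numpy as np
        import networkx as nx
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import log_loss

        G = nx.karate_club_graph()  # stand-in for a thresholded brain network
        A = nx.to_numpy_array(G)
        n = len(A)

        X, y = [], []
        for i in range(n):
            for j in range(i + 1, n):
                X.append([A[i] @ A[j]])  # change in triangle count if (i,j) toggled
                y.append(A[i, j])
        X, y = np.array(X), np.array(y)

        # MPLE: the intercept plays the role of the edge term; large C ~ no penalty
        fit = LogisticRegression(C=1e6).fit(X, y)
        theta_edge, theta_tri = fit.intercept_[0], fit.coef_[0, 0]

        # AIC-style score, as in the information-criterion selection step
        loglik = -log_loss(y, fit.predict_proba(X), normalize=False)
        aic = 2 * 2 - 2 * loglik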

    Differentially Private Exponential Random Graphs

    We propose methods to release and analyze synthetic graphs in order to protect the privacy of individual relationships captured by a social network. The proposed techniques aim at fitting and estimating a wide class of exponential random graph models (ERGMs) in a differentially private manner, and thus offer rigorous privacy guarantees. More specifically, we use the randomized response mechanism to release networks under ε-edge differential privacy. To maintain utility for statistical inference, treating the original graph as missing, we propose a way to use likelihood-based inference and Markov chain Monte Carlo (MCMC) techniques to fit ERGMs to the produced synthetic networks. We demonstrate the usefulness of the proposed techniques on a real data example.
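
    A minimal sketch of the release step (the inference step is the harder part): under randomized response, each potential edge is kept with probability e^ε/(1+e^ε) and flipped otherwise, which satisfies ε-edge differential privacy because graphs differing in one edge produce any output with a likelihood ratio of at most e^ε.

        import numpy as np
        import networkx as nx

        def randomized_response(G, eps, rng):
            # release a synthetic graph under eps-edge differential privacy
            A = nx.to_numpy_array(G)
            iu = np.triu_indices(len(A), k=1)  # one bit per dyad
            bits = A[iu]
            flip = rng.random(bits.size) < 1.0 / (1.0 + np.exp(eps))
            bits = np.where(flip, 1 - bits, bits)
            B = np.zeros_like(A)
            B[iu] = bits
            return nx.from_numpy_array(B + B.T)

        G_priv = randomized_response(nx.karate_club_graph(), eps=1.0,
                                     rng=np.random.default_rng(0))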

    A slow gravity-compensated atom laser

    We report on a slow guided atom laser beam outcoupled from a Bose-Einstein condensate of 87Rb atoms in a hybrid trap. The acceleration of the atom laser beam can be controlled by compensating the gravitational acceleration, and we reach residual accelerations as low as 0.0027 g. The outcoupling mechanism allows for the production of a constant flux of 4.5×10^6 atoms per second, and due to transverse guiding we obtain an upper limit for the mean beam width of 4.6 μm. The transverse velocity spread is only 0.2 mm/s, and thus an upper limit for the beam quality parameter is M^2 = 2.5. We demonstrate the potential of the long interrogation times available with this atom laser beam by measuring the trap frequency in a single measurement. The small beam width together with the long evolution and interrogation time makes this atom laser beam a promising tool for continuous interferometric measurements. Comment: 7 pages, 8 figures, to be published in Applied Physics.
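
    As a rough consistency check on the quoted numbers, assuming the beam quality parameter takes the uncertainty-product form M^2 = 2 m σ_x σ_v / ħ with the quoted values treated as rms widths (an assumption, not stated in the abstract), the stated M^2 is reproduced:

        # reproduces the quoted upper limit M^2 = 2.5 from the quoted beam
        # width and velocity spread, assuming M^2 = 2 m sigma_x sigma_v / hbar
        hbar = 1.054571817e-34               # J s
        m_rb87 = 86.909 * 1.66053906660e-27  # kg, mass of 87Rb
        sigma_x = 4.6e-6                     # m, upper limit on mean beam width
        sigma_v = 0.2e-3                     # m/s, transverse velocity spread
        M2 = 2 * m_rb87 * sigma_x * sigma_v / hbar
        print(M2)                            # ~2.5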

    Memory-Efficient Incremental Learning Through Feature Adaptation

    We introduce an approach for incremental learning that, unlike most existing work, preserves feature descriptors of training images from previously learned classes instead of the images themselves. Keeping the much lower-dimensional feature embeddings of images reduces the memory footprint significantly. We assume that the model is updated incrementally for new classes as new data becomes available sequentially. This requires adapting the previously stored feature vectors to the updated feature space without access to the corresponding original training images. Feature adaptation is learned with a multi-layer perceptron, which is trained on feature pairs corresponding to the outputs of the original and updated networks on a training image. We validate experimentally that such a transformation generalizes well to the features of the previous set of classes and maps features to a discriminative subspace of the feature space. As a result, the classifier is optimized jointly over new and old classes without requiring old class images. Experimental results show that our method achieves state-of-the-art classification accuracy on incremental learning benchmarks while having at least an order of magnitude lower memory footprint than image-preserving strategies.
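
    A minimal sketch of the adaptation module (dimensions and hyperparameters hypothetical, in PyTorch): a small MLP is trained on feature pairs extracted by the old and updated backbones from the current task's images, then applied to the stored old-class features.

        import torch
        import torch.nn as nn

        d = 512  # hypothetical feature dimension
        adapter = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)
        mse = nn.MSELoss()

        def adapt_step(f_old, f_new):
            # f_old, f_new: features of the same images from the original and
            # updated networks; the adapter learns the old -> new mapping
            opt.zero_grad()
            loss = mse(adapter(f_old), f_new)
            loss.backward()
            opt.step()
            return loss.item()

        # afterwards, stored old-class features are mapped into the new space:
        # stored_new = adapter(stored_old).detach()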

    Atom laser coherence and its control via feedback

    We present a quantum-mechanical treatment of the coherence properties of a single-mode atom laser. Specifically, we focus on the quantum phase noise of the atomic field as expressed by the first-order coherence function, for which we derive analytical expressions in various regimes. The decay of this function is characterized by the coherence time, or its reciprocal, the linewidth. A crucial contributor to the linewidth is the collisional interaction of the atoms. We find four distinct regimes for the linewidth with increasing interaction strength. These range from the standard laser linewidth, through quadratic and linear regimes, to another constant regime due to quantum revivals of the coherence function. The laser output is only coherent (Bose degenerate) up to the linear regime. However, we show that application of a quantum nondemolition measurement and feedback scheme will increase, by many orders of magnitude, the range of interaction strengths for which it remains coherent. Comment: 15 pages, 6 figures, RevTeX.