1,430 research outputs found

    A process of rumor scotching on finite populations

    Rumor spreading is a ubiquitous phenomenon in social and technological networks. Traditional models assume that the rumor is propagated by pairwise interactions between spreaders and ignorants, and that spreaders can become stiflers only after contacting spreaders or stiflers. Here we propose a model that keeps these traditional assumptions but makes stiflers active: they try to scotch the rumor when they contact spreaders. An analytical treatment based on the theory of convergence of density-dependent Markov chains is developed to analyze how the final proportion of ignorants behaves asymptotically in a finite, homogeneously mixing population. We perform Monte Carlo simulations on random graphs and scale-free networks and verify that the results obtained for homogeneously mixing populations approximate the behavior on random graphs but are not suitable for scale-free networks. Furthermore, for the process on a heterogeneously mixing population, we obtain a set of differential equations that describes the time evolution of the probability that an individual is in each state. Our model can be applied to study systems in which informed agents try to stop the rumor propagation. In addition, our results can inform optimal information-dissemination strategies and approaches to control rumor propagation. Comment: 13 pages, 11 figures
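    The pairwise dynamics described in this abstract can be sketched as a small Monte Carlo simulation. The state labels, unit contact rates, and uniform random-pair contact scheme below are illustrative assumptions, not the paper's exact formulation:

```python
import random

def simulate_rumor(n=1000, seed=0):
    """Monte Carlo sketch of a rumor-scotching process in a
    homogeneously mixing population of n individuals.
    States: 'I' ignorant, 'S' spreader, 'R' stifler.
    Hypothetical rules (all contact rates set to 1):
      S contacts I            -> I becomes S   (spreading)
      S contacts S or R       -> S becomes R   (loses interest)
      R contacts S            -> S becomes R   (active scotching)
    Returns the final fraction of ignorants."""
    rng = random.Random(seed)
    state = ['I'] * (n - 1) + ['S']       # one initial spreader
    while 'S' in state:                   # run until the rumor dies out
        i, j = rng.sample(range(n), 2)    # random ordered pairwise contact
        a, b = state[i], state[j]
        if a == 'S' and b == 'I':
            state[j] = 'S'
        elif a == 'S' and b in ('S', 'R'):
            state[i] = 'R'
        elif a == 'R' and b == 'S':
            state[j] = 'R'
    return state.count('I') / n
```

    Compared with a classic Maki-Thompson process (no active scotching), one would expect active stiflers to leave a larger final fraction of ignorants, since the rumor is suppressed sooner.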

    Networks and the epidemiology of infectious disease

    The science of networks has revolutionised research into the dynamics of interacting elements. It could be argued that epidemiology in particular has embraced the potential of network theory more than any other discipline. Here we review the growing body of research concerning the spread of infectious diseases on networks, focusing on the interplay between network theory and epidemiology. The review is split into four main sections, which examine: the types of network relevant to epidemiology; the multitude of ways these networks can be characterised; the statistical methods that can be applied to infer the epidemiological parameters on a realised network; and finally simulation and analytical methods to determine epidemic dynamics on a given network. Given the breadth of areas covered and the ever-expanding number of publications, a comprehensive review of all work is impossible. Instead, we provide a personalised overview of the areas of network epidemiology that have seen the greatest progress in recent years or have the greatest potential to provide novel insights. As such, considerable importance is placed on analytical approaches and statistical methods, which are both rapidly expanding fields. Throughout this review we restrict our attention to epidemiological issues.

    Modeling and pricing cyber insurance: Idiosyncratic, systematic, and systemic risks

    The paper provides a comprehensive overview of modeling and pricing cyber insurance and includes clear and easily understandable explanations of the underlying mathematical concepts. We distinguish three main types of cyber risks: idiosyncratic, systematic, and systemic cyber risks. While for idiosyncratic and systematic cyber risks, classical actuarial and financial mathematics appear to be well suited, systemic cyber risks require more sophisticated approaches that capture both network and strategic interactions. In the context of pricing cyber insurance policies, issues of interdependence arise for both systematic and systemic cyber risks; classical actuarial valuation needs to be extended to include more complex methods, such as concepts of risk-neutral valuation and (set-valued) monetary risk measures.
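    As a toy illustration of the pricing distinction drawn above, the sketch below contrasts the classical expected-value premium principle with an empirical Value-at-Risk figure computed from simulated aggregate losses. The function name, loading factor, and confidence level are hypothetical choices for illustration, not quantities from the paper:

```python
import statistics

def premiums(losses, loading=0.2, var_level=0.95):
    """Toy comparison of two pricing rules: the expected-value
    principle (suited to independent, idiosyncratic risk) and an
    empirical Value-at-Risk quantile, a simple monetary risk measure
    that reacts to the heavy tails created by dependent
    systematic/systemic losses.
    losses: simulated aggregate losses per policy period."""
    exp_value = (1 + loading) * statistics.fmean(losses)
    var = sorted(losses)[int(var_level * len(losses))]  # empirical quantile
    return exp_value, var
```

    On a loss sample where 5% of periods produce a large loss and the rest none, the expected-value premium stays small while the 95% VaR jumps to the large loss, which is the qualitative gap the abstract points to.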

    Markovian and stochastic differential equation based approaches to computer virus propagation dynamics and some models for survival distributions

    This dissertation is divided into two parts. The first part explores probabilistic modeling of the propagation of computer 'malware' (generally referred to as 'viruses') across a network of computers, and investigates the modeling improvements achieved by introducing a random latency period during which an infected computer in the network is unable to infect others. In the second part, two approaches for modeling life distributions in univariate and bivariate setups are developed. In Part I, homogeneous and non-homogeneous stochastic susceptible-exposed-infectious-recovered (SEIR) models are explored for the propagation of computer viruses over the Internet, borrowing ideas from mathematical epidemiology. Large computer networks such as the Internet have become essential in today's technological societies and even critical to the financial viability of the national and global economy. However, the easy access and widespread use of the Internet make it a prime target for malicious activities, such as the introduction of computer viruses, which pose a major threat to large computer networks. Since an understanding of the underlying dynamics of their propagation is essential in efforts to control them, a fair amount of research attention has been devoted to modeling the propagation of computer viruses, starting from basic deterministic models with ordinary differential equations (ODEs) through stochastic models of increasing realism. In the spirit of exploring more realistic probability models that seek to explain the time-dependent transient behavior of computer virus propagation by exploiting the essentially stochastic nature of contacts and communications among computers, the present study introduces a new refinement in such efforts: the suitability and use of the stochastic SEIR model of mathematical epidemiology in the context of computer virus propagation.
    We adapt the stochastic SEIR model to the study of computer virus prevalence by incorporating the idea of a latent period during which a computer is in an 'exposed' state, in the sense that the computer is infected but cannot yet infect other computers until the latency is over. The transition parameters of the SEIR model are estimated using real computer virus data. We develop maximum likelihood (MLE) and Bayesian estimators for the SEIR model parameters and apply them to the 'Code Red' worm data. Since network structure can be an important factor in virus propagation, multi-group stochastic SEIR models for the spread of computer viruses in heterogeneous networks are explored next. For the multi-group stochastic SEIR model using a Markovian approach, the method of maximum likelihood estimation for the model parameters of interest is derived. The method of least squares is used to estimate the model parameters of interest in the multi-group stochastic SEIR-SDE model, based on stochastic differential equations. The models and methodologies are applied to the Code Red worm data. Simulations based on the different models proposed in this dissertation and on deterministic/stochastic models available in the literature are conducted and compared. Based on such comparisons, we conclude that (i) stochastic models using the SEIR framework appear to be superior to previous models of computer virus propagation, even up to the saturation level, and (ii) there is no appreciable difference between homogeneous and heterogeneous (multi-group) models. The 'no difference' finding may, of course, be influenced by the criterion used to assign computers in the overall network to different groups. In our study, the grouping of computers in the total network into subgroups, or clusters, was based on their geographical location only, since no other grouping criterion was available in the Code Red worm data.
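    A minimal Gillespie-style simulation of the stochastic SEIR dynamics described above might look as follows. The rate values are placeholders for illustration, not the estimates obtained from the Code Red worm data:

```python
import random

def gillespie_seir(n=1000, beta=0.5, sigma=0.2, gamma=0.1, i0=5, seed=0):
    """Gillespie-style simulation of a stochastic SEIR epidemic:
      S -> E  on contact with an infectious host (rate beta*S*I/n),
      E -> I  when the latent period ends       (rate sigma*E),
      I -> R  on recovery/patching              (rate gamma*I).
    Parameter values are illustrative placeholders.
    Returns the final (S, E, I, R) compartment counts."""
    rng = random.Random(seed)
    s, e, i, r = n - i0, 0, i0, 0
    while e + i > 0:                       # run until no latent/infectious hosts
        rates = (beta * s * i / n, sigma * e, gamma * i)
        u = rng.random() * sum(rates)      # pick the next event by its rate
        if u < rates[0]:
            s, e = s - 1, e + 1            # new exposure: infected but latent
        elif u < rates[0] + rates[1]:
            e, i = e - 1, i + 1            # latency ends, becomes infectious
        else:
            i, r = i - 1, r + 1            # removed from the infectious pool
    return s, e, i, r
```

    The latent (E) compartment is exactly the refinement the abstract emphasizes: with sigma small, newly infected machines sit idle for a while before spreading, which slows the early growth of the outbreak.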
    Part II covers two approaches for modeling life distributions in univariate and bivariate setups. In the univariate case, a new partial order based on the idea of 'star-shaped functions' is introduced and explored. In the bivariate context, a class of models for joint lifetime distributions that extends the idea of univariate proportional hazards to the bivariate case in a suitable way is proposed. The expectation-maximization (EM) method is used to estimate the model parameters of interest. For the purpose of illustration, the bivariate proportional hazards model and the method of parameter estimation are applied to two real data sets.

    A Review of Multi-Compartment Infectious Disease Models

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/156488/2/insr12402.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/156488/1/insr12402_am.pd

    VI Workshop on Computational Data Analysis and Numerical Methods: Book of Abstracts

    The VI Workshop on Computational Data Analysis and Numerical Methods (WCDANM) will be held on June 27-29, 2019, in the Department of Mathematics of the University of Beira Interior (UBI), Covilhã, Portugal. It is a unique opportunity to disseminate scientific research related to the areas of mathematics in general, with particular relevance to Computational Data Analysis and Numerical Methods in theoretical and/or practical fields, using new techniques and giving special emphasis to applications in Medicine, Biology, Biotechnology, Engineering, Industry, Environmental Sciences, Finance, Insurance, Management and Administration. The meeting will provide a forum for the discussion and debate of ideas of interest to the scientific community in general. New scientific collaborations among colleagues, namely in Masters and PhD projects, are expected. The event is open to the entire scientific community (with or without a communication/poster).

    Networking - A Statistical Physics Perspective

    Efficient networking has a substantial economic and societal impact in a broad range of areas, including transportation systems, wired and wireless communications and a range of Internet applications. As transportation and communication networks become increasingly complex, the ever-increasing demand for congestion control, higher traffic capacity, quality of service, robustness and reduced energy consumption requires new tools and methods to meet these conflicting requirements. The new methodology should serve to give a better understanding of the properties of networking systems at the macroscopic level, as well as to support the development of new principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches, as they have been developed specifically to deal with nonlinear large-scale systems. This paper aims at presenting an overview of tools and methods that have been developed within the statistical physics community and that can be readily applied to address the emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, and probabilistic inference, which have direct relevance to network routing, file and frequency distribution, the exploration of network structures and vulnerability, and various other practical networking applications. Comment: (Review article) 71 pages, 14 figures

    Network Infusion to Infer Information Sources in Networks

    Several models exist for diffusion of signals across biological, social, or engineered networks. However, the inverse problem of identifying the source of such propagated information appears more difficult even in the presence of multiple network snapshots, and especially in the single-snapshot case, given the many alternative, often similar, progressions of diffusion that may lead to the same observed snapshots. Mathematically, this problem can be addressed using a diffusion kernel that represents diffusion processes in a given network, but computing this kernel is computationally challenging in general. Here, we propose a path-based network diffusion kernel which considers edge-disjoint shortest paths among pairs of nodes in the network and can be computed efficiently for both homogeneous and heterogeneous continuous-time diffusion models. We use this network diffusion kernel to solve the inverse diffusion problem, which we term Network Infusion (NI), using both likelihood maximization and error minimization. The minimum-error NI algorithm is based on an asymmetric Hamming premetric function and can balance between false positive and false negative error types. We apply this framework to both single-source and multi-source diffusion, to both single-snapshot and multi-snapshot observations, and using both uninformative and informative prior probabilities for candidate source nodes. We also provide proofs that under a standard susceptible-infected diffusion model, (1) the maximum-likelihood NI is mean-field optimal for tree structures or sufficiently sparse Erdos-Renyi graphs, (2) the minimum-error algorithm is mean-field optimal for regular tree structures, and (3) for sufficiently distant sources, the multi-source solution is mean-field optimal in the regular tree structure. Moreover, we provide techniques to learn diffusion model parameters such as observation times.
    We apply NI to several synthetic networks and compare its performance to centrality-based and distance-based methods for Erdos-Renyi graphs, power-law networks, and symmetric and asymmetric grids. Moreover, we use NI in two real-world applications. First, we identify the news sources for 3,553 stories in the Digg social news network and validate our results against annotated information that was not provided to our algorithm. Second, we use NI to identify infusion hubs of human diseases, defined as gene candidates that can explain the connectivity pattern of disease-related genes in the human regulatory network. NI identifies infusion hubs of several human diseases, including T1D, Parkinson's, MS, SLE, psoriasis and schizophrenia. We show that the inferred infusion hubs are biologically relevant and often not identifiable using the raw p-values.
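    The asymmetric Hamming premetric underlying the minimum-error NI algorithm can be illustrated with a short sketch. The alpha-weighting convention below is an assumption for illustration; the paper's exact definition may differ:

```python
def asymmetric_hamming(predicted, observed, alpha=0.5):
    """Sketch of an asymmetric Hamming premetric between a predicted
    infection pattern and an observed snapshot (both sets of node ids).
    alpha in (0, 1) trades off the two error types: larger alpha
    penalizes false positives more, smaller alpha penalizes false
    negatives more. This weighting scheme is an illustrative
    assumption, not the paper's exact formula."""
    fp = len(predicted - observed)   # predicted infected, observed healthy
    fn = len(observed - predicted)   # observed infected, missed by prediction
    return alpha * fp + (1 - alpha) * fn
```

    Because the two error types are weighted separately, the score is not symmetric in its arguments unless alpha = 0.5, which is exactly the flexibility the minimum-error formulation exploits.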

    Identifying Infection Sources and Regions in Large Networks

    Identifying the infection sources in a network, including the index cases that introduce a contagious disease into a population network, the servers that inject a computer virus into a computer network, or the individuals who started a rumor in a social network, plays a critical role in limiting the damage caused by the infection through timely quarantine of the sources. We consider the problem of estimating the infection sources and the infection regions (subsets of nodes infected by each source) in a network, based only on knowledge of which nodes are infected and their connections, and when the number of sources is unknown a priori. We derive estimators for the infection sources and their infection regions based on approximations of the infection sequence counts. We prove that if there are at most two infection sources in a geometric tree, our estimator identifies the true source or sources with probability going to one as the number of infected nodes increases. When there are more than two infection sources, and when the maximum possible number of infection sources is known, we propose an algorithm with quadratic complexity to estimate the actual number and identities of the infection sources. Simulations on various kinds of networks, including tree networks, small-world networks and real-world power-grid networks, and tests on two real data sets are provided to verify the performance of our estimators.
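    As a point of comparison for estimators like the one above, a common single-source baseline (not the authors' infection-sequence-count estimator) is the distance center: the infected node whose total shortest-path distance to all other infected nodes, within the infected subgraph, is smallest:

```python
from collections import deque

def distance_center(adj, infected):
    """Baseline single-source estimator: return the infected node
    minimizing the sum of shortest-path distances to all other
    infected nodes, restricted to the infected subgraph.
    adj: dict mapping node -> set of neighbour nodes.
    infected: set of infected node ids."""
    def total_distance(src):
        dist = {src: 0}
        q = deque([src])
        while q:                          # BFS inside the infected subgraph
            u = q.popleft()
            for v in adj[u]:
                if v in infected and v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return sum(dist.values())
    return min(infected, key=total_distance)
```

    On a tree, this baseline tends to pick a node near the middle of the infected region, which is why tree networks are a natural benchmark for the more refined estimators the abstract develops.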