21,087 research outputs found

    STAND: A Spatio-Temporal Algorithm for Network Diffusion Simulation

    Information, ideas, and diseases, or more generally, contagions, spread over space and time through individual transmissions via social networks, as well as through external sources. A detailed picture of any diffusion process can be achieved only when both a good network structure and individual diffusion pathways are obtained. The advent of rich social media and locational data allows us to study and model this diffusion process in more detail than previously possible. Nevertheless, how information, ideas, or diseases propagate through a network as an overall process remains difficult to trace. This propagation is continuous over space and time, and individual transmissions occur at different rates via complex, latent connections. To tackle this challenge, in this research a probabilistic spatio-temporal algorithm for network diffusion (STAND) is developed based on a survival model. Both time and spatial distance are used as explanatory variables to simulate the diffusion process over two different network structures. The aim is to provide a more detailed measure of how different contagions are transmitted through various networks whose nodes are geographic places at a large scale.
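    The abstract does not give STAND's exact hazard specification, so the following is only a minimal sketch of the general idea: an exponential survival model in which the transmission hazard along a network edge decays with the spatial distance between the two places. The hazard form, the parameters beta and gamma, and the discrete-time update are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of a survival-model diffusion simulation on a spatial network.
# The hazard form and the parameters beta/gamma are illustrative assumptions,
# not the STAND specification.
import math
import random

def transmission_hazard(distance_km, beta=0.5, gamma=0.01):
    """Baseline hazard, scaled down with spatial distance between two places."""
    return beta * math.exp(-gamma * distance_km)

def simulate_diffusion(edges, distances, seed_nodes, t_max=30.0, dt=1.0):
    """Discrete-time approximation: each infected node transmits along an edge
    within a time step with probability 1 - exp(-hazard * dt) (exponential survival)."""
    infected = set(seed_nodes)
    infection_time = {n: 0.0 for n in seed_nodes}
    t = 0.0
    while t < t_max:
        t += dt
        newly = set()
        for u, v in edges:
            for src, dst in ((u, v), (v, u)):
                if src in infected and dst not in infected:
                    h = transmission_hazard(distances[(u, v)])
                    if random.random() < 1.0 - math.exp(-h * dt):
                        newly.add(dst)
                        infection_time[dst] = t
        infected |= newly
    return infection_time

# Toy example: three places on a line, diffusion seeded at place 0.
edges = [(0, 1), (1, 2)]
distances = {(0, 1): 50.0, (1, 2): 200.0}
print(simulate_diffusion(edges, distances, seed_nodes=[0]))
```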

    Reaction-diffusion spatial modeling of COVID-19: Greece and Andalusia as case examples

    We examine the spatial modeling of the outbreak of COVID-19 in two regions: the autonomous community of Andalusia in Spain and the mainland of Greece. We start with a 0D compartmental epidemiological model consisting of Susceptible, Exposed, Asymptomatic, (symptomatically) Infected, Hospitalized, Recovered, and Deceased populations. We emphasize the importance of the viral latent period and the key role of the asymptomatic population. We optimize model parameters for both regions by comparing predictions to the cumulative number of infections and the total number of deaths, minimizing the $\ell^2$ norm of the difference between predictions and observed data. We consider the sensitivity of model predictions to reasonable variations of model parameters and initial conditions, addressing issues of parameter identifiability. We model both pre-quarantine and post-quarantine evolution of the epidemic by a time-dependent change of the viral transmission rates that arises in response to containment measures. Subsequently, a spatially distributed version of the 0D model in the form of reaction-diffusion equations is developed. We consider that, after an initial localized seeding of the infection, its spread is governed by the diffusion (and 0D-model "reactions") of the asymptomatic and symptomatically infected populations, which decrease with the imposed restrictive measures. We inserted the maps of the two regions and imported population-density data into COMSOL, which was then used to solve the model PDEs numerically. Upon discussing how to adapt the 0D model to this spatial setting, we show that these models bear significant potential towards capturing both the well-mixed, 0D description and the spatial expansion of the pandemic in the two regions. Avenues of potential refinement of the model assumptions for future work are also explored.
    Comment: 28 pages, 16 figures, and 2 movies
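    The abstract names the compartments but not the equations or the fitted parameters, so the following is a minimal sketch of a SEAIHRD-type 0D model of the kind described: asymptomatic and symptomatic individuals both transmit, hospitalization feeds from the symptomatic class, and deaths occur from the hospitalized class. The flow structure and every rate constant below are illustrative assumptions, not the values optimized for Greece or Andalusia.

```python
# Sketch of a 0D SEAIHRD-type compartmental model as described in the abstract.
# The flow structure and all rate constants are illustrative assumptions,
# not the parameter values fitted for Greece or Andalusia.
import numpy as np
from scipy.integrate import solve_ivp

def seaihrd(t, y, beta_a, beta_i, sigma, p_sym, gamma_a, gamma_i, h, gamma_h, mu):
    S, E, A, I, H, R, D = y
    N = S + E + A + I + H + R + D
    new_exposed = (beta_a * A + beta_i * I) * S / N   # transmission by A and I
    dS = -new_exposed
    dE = new_exposed - sigma * E                      # mean latent period 1/sigma
    dA = (1 - p_sym) * sigma * E - gamma_a * A        # asymptomatic branch
    dI = p_sym * sigma * E - (gamma_i + h) * I        # symptomatic branch
    dH = h * I - (gamma_h + mu) * H                   # hospitalized
    dR = gamma_a * A + gamma_i * I + gamma_h * H
    dD = mu * H
    return [dS, dE, dA, dI, dH, dR, dD]

params = (0.4, 0.5, 1/5, 0.6, 1/7, 1/7, 0.05, 1/10, 0.02)  # illustrative only
y0 = [1e6 - 10, 10, 0, 0, 0, 0, 0]
sol = solve_ivp(seaihrd, (0, 120), y0, args=params, dense_output=True)
print("deaths at day 120:", sol.y[6, -1])
```

    A spatially distributed version would replace each compartment with a density field and add a Laplacian diffusion term for the asymptomatic and infected fields, which is the reaction-diffusion step the authors solve in COMSOL.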

    Querying Probabilistic Neighborhoods in Spatial Data Sets Efficiently

    In this paper we define the notion of a probabilistic neighborhood in spatial data: let a set $P$ of $n$ points in $\mathbb{R}^d$, a query point $q \in \mathbb{R}^d$, a distance metric $\operatorname{dist}$, and a monotonically decreasing function $f : \mathbb{R}^+ \rightarrow [0,1]$ be given. Then a point $p \in P$ belongs to the probabilistic neighborhood $N(q, f)$ of $q$ with respect to $f$ with probability $f(\operatorname{dist}(p,q))$. We envision applications in facility location, sensor networks, and other scenarios where a connection between two entities becomes less likely with increasing distance. A straightforward query algorithm would determine a probabilistic neighborhood in $\Theta(n \cdot d)$ time by probing each point in $P$. To answer the query in sublinear time for the planar case, we augment a quadtree suitably and design a corresponding query algorithm. Our theoretical analysis shows that -- for certain distributions of planar $P$ -- our algorithm answers a query in $O((|N(q,f)| + \sqrt{n})\log n)$ time with high probability (whp). This matches up to a logarithmic factor the cost induced by quadtree-based algorithms for deterministic queries and is asymptotically faster than the straightforward approach whenever $|N(q,f)| \in o(n / \log n)$. As practical proofs of concept we use two applications, one in the Euclidean and one in the hyperbolic plane. In particular, our results yield the first generator for random hyperbolic graphs with arbitrary temperatures in subquadratic time. Moreover, our experimental data show the usefulness of our algorithm even if the point distribution is unknown or not uniform: the running-time savings over the pairwise probing approach constitute at least one order of magnitude already for a modest number of points and queries.
    Comment: The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-44543-4_3
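    The straightforward $\Theta(n \cdot d)$ query described above is easy to state directly; the quadtree-accelerated version is the paper's contribution and is not reproduced here. A minimal sketch of the probing baseline, assuming Euclidean distance and an exponential-decay $f$ chosen purely for illustration:

```python
# Straightforward O(n*d) probabilistic-neighborhood query by probing each point,
# as described in the abstract; the quadtree-accelerated version is not shown.
# The decay function f below is an illustrative choice, not the paper's.
import math
import random

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def probabilistic_neighborhood(points, q, f):
    """Each point p joins N(q, f) independently with probability f(dist(p, q))."""
    return [p for p in points if random.random() < f(euclidean(p, q))]

# Example: f is monotonically decreasing and maps distances into [0, 1],
# so nearby points are more likely to be included than distant ones.
f = lambda d: math.exp(-d)
points = [(random.random(), random.random()) for _ in range(1000)]
print(len(probabilistic_neighborhood(points, (0.5, 0.5), f)))
```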

    A Simplified Cellular Automaton Model for City Traffic

    We systematically investigate the effect of blockage sites in a cellular automaton model for traffic flow. Different scheduling schemes for the blockage sites are considered. None of them returns a linear relationship between the fraction of "green" time and the throughput. We use this information for a fast implementation of traffic in Dallas.
    Comment: 12 pages, 18 figures. Submitted to Phys. Rev.
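    The abstract does not specify the update rule or the blockage scheduling schemes, so the following is a minimal sketch of a one-lane, v_max = 1 cellular automaton with a single signalized blockage site; the fixed-cycle light and all parameters are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of a one-lane CA (v_max = 1, rule-184-like) with a single
# signalized blockage site; the fixed cycle length and green fraction are
# illustrative assumptions, not the paper's scheduling schemes.
import random

def step(road, t, light_cell, cycle=20, green_fraction=0.5):
    """One parallel update: a car advances iff the next cell is empty and,
    when that next cell is the blockage site, only while the light is green."""
    L = len(road)
    green = (t % cycle) < green_fraction * cycle
    new_road = [0] * L
    for i in range(L):
        if road[i]:
            nxt = (i + 1) % L
            blocked = (nxt == light_cell) and not green
            if road[nxt] == 0 and not blocked:
                new_road[nxt] = 1          # move forward
            else:
                new_road[i] = 1            # stay put
    return new_road

# Measure throughput as the average number of moves per step per site.
road = [1 if random.random() < 0.3 else 0 for _ in range(100)]
flow = 0
for t in range(1000):
    prev = road
    road = step(road, t, light_cell=50)
    flow += sum(1 for i in range(len(prev)) if prev[i] and not road[i])
print("throughput:", flow / (1000 * len(road)))
```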

    Immunization strategies for epidemic processes in time-varying contact networks

    Spreading processes represent a very efficient tool to investigate the structural properties of networks and the relative importance of their constituents, and have been widely used to this aim in static networks. Here we consider simple disease spreading processes on empirical time-varying networks of contacts between individuals, and compare the effect of several immunization strategies on these processes. An immunization strategy is defined as the choice of a set of nodes (individuals) who can neither catch nor transmit the disease. This choice is performed according to a certain ranking of the nodes of the contact network. We consider various ranking strategies, focusing in particular on the role of the training window during which the nodes' properties are measured in the time-varying network: longer training windows correspond to a larger amount of information collected and could be expected to result in better performance of the immunization strategies. We find instead an unexpected saturation in the efficiency of strategies based on nodes' characteristics when the length of the training window is increased, showing that a limited amount of information on the contact patterns is sufficient to design efficient immunization strategies. This finding is balanced by the large variations of the contact patterns, which strongly alter the importance of nodes from one period to the next and therefore significantly limit the efficiency of any strategy based on an importance ranking of nodes. We also observe that strategies that include an element of randomness and are based on temporally local information do not perform as well, but are largely independent of the amount of information available.
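    As one concrete instance of the ranking-based strategies the abstract describes, the sketch below ranks nodes by contact activity during a training window, immunizes the top k, and then runs a simple SI process on the remaining contact sequence. The SI dynamics, the activity ranking, and all parameters are illustrative assumptions rather than the paper's exact protocol.

```python
# Sketch of a ranking-based immunization on a time-varying contact list:
# rank nodes by contact count in a training window, immunize the top k, then
# run a simple SI spreading process on the later contacts. The SI dynamics and
# all parameters are illustrative assumptions, not the paper's exact protocol.
import random
from collections import Counter

def immunize_top_k(contacts, t_train_end, k):
    """Contacts are (t, u, v) tuples; rank nodes by activity before t_train_end."""
    activity = Counter()
    for t, u, v in contacts:
        if t < t_train_end:
            activity[u] += 1
            activity[v] += 1
    return {node for node, _ in activity.most_common(k)}

def si_outbreak(contacts, t_start, seed, immune, p=0.2):
    """Run an SI process on contacts after t_start; immune nodes never take part."""
    infected = {seed} - immune
    for t, u, v in sorted(contacts):
        if t < t_start:
            continue
        for a, b in ((u, v), (v, u)):
            if a in infected and b not in infected and b not in immune:
                if random.random() < p:
                    infected.add(b)
    return len(infected)

# Toy contact sequence: (time, node, node) among 50 individuals.
contacts = [(t, random.randrange(50), random.randrange(50)) for t in range(500)]
immune = immunize_top_k(contacts, t_train_end=250, k=5)
print("final outbreak size:", si_outbreak(contacts, t_start=250, seed=0, immune=immune))
```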