
    Scaling Expected Force: Efficient Identification of Key Nodes in Network-based Epidemic Models

    Centrality measures are fundamental tools of network analysis as they highlight the key actors within a network. This study focuses on a recently proposed centrality measure, Expected Force (EF), and its use in identifying spreaders in network-based epidemic models. We found that EF effectively predicts the spreading power of nodes and identifies key nodes and immunization targets. However, its high computational cost makes it challenging to use on large networks. To overcome this limitation, we propose two scalable parallel algorithms for computing EF scores: the first is based on the original formulation, while the second follows a cluster-centric approach to improve efficiency and scalability. Our implementations significantly reduce computation time, allowing key nodes to be detected at large scales. Performance analysis on synthetic and real-world networks demonstrates that the GPU implementation of our algorithm can efficiently scale to networks with up to 44 million edges by exploiting modern parallel architectures, achieving speed-ups of up to 300x, and of 50x on average, compared to the simple parallel solution.
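
    For reference, Expected Force (Lawyer, 2015) scores a node by the entropy of the outward degrees of all infection clusters reachable after two transmission events. The serial sketch below only illustrates that definition on an unweighted graph; it is not the paper's parallel/GPU algorithm, and the enumeration details (e.g. handling of cluster multiplicities) are simplified assumptions.

```python
# Illustrative serial sketch of Expected Force (EF); the paper's contribution
# is a scalable parallel/GPU computation of this kind of score, not shown here.
import math
import networkx as nx

def expected_force(G, seed):
    cluster_degrees = []
    for first in G.neighbors(seed):                      # first transmission
        frontier = (set(G.neighbors(seed)) | set(G.neighbors(first))) - {seed, first}
        for second in frontier:                          # second transmission
            cluster = {seed, first, second}
            # cluster degree: edges leaving the infected cluster
            d = sum(1 for u in cluster for v in G.neighbors(u) if v not in cluster)
            if d > 0:
                cluster_degrees.append(d)
    if not cluster_degrees:
        return 0.0
    total = float(sum(cluster_degrees))
    # EF is the entropy of the normalised cluster degrees
    return -sum((d / total) * math.log(d / total) for d in cluster_degrees)

# Example: EF of node 0 in a small random graph
print(expected_force(nx.erdos_renyi_graph(100, 0.05, seed=1), 0))
```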

    Targeted Recovery as an Effective Strategy against Epidemic Spreading

    We propose a targeted intervention protocol where recovery is restricted to individuals with the fewest infected neighbours. Our recovery strategy is highly efficient on any kind of network, since epidemic outbreaks are minimal when compared to the baseline scenario of spontaneous recovery. In the case of spatially embedded networks, we find that an epidemic stays strongly spatially confined, with a characteristic length scale undergoing a random walk. We demonstrate numerically and analytically that these dynamics lead to an epidemic spot with a flat surface structure and a radius that grows linearly with the spreading rate. (6 pages, 5 figures)
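
    A minimal sketch of the recovery rule described above, assuming a discrete-time SIS-style process on a networkx graph: at each step, the node chosen to recover is the infected node with the fewest infected neighbours. The infection rate, update schedule and network are illustrative assumptions, not the authors' model.

```python
# Hedged sketch of targeted recovery: cure the infected node that currently
# has the fewest infected neighbours (parameters are illustrative only).
import random
import networkx as nx

def targeted_recovery_sis(G, beta=0.3, n_steps=200, seed_frac=0.05, rng=random):
    infected = set(rng.sample(list(G.nodes()), max(1, int(seed_frac * len(G)))))
    prevalence = []
    for _ in range(n_steps):
        # infection: each infected node infects each susceptible neighbour w.p. beta
        newly = {v for u in infected for v in G.neighbors(u)
                 if v not in infected and rng.random() < beta}
        infected |= newly
        if infected:
            # targeted recovery: the infected node with the fewest infected neighbours
            target = min(infected,
                         key=lambda u: sum(v in infected for v in G.neighbors(u)))
            infected.discard(target)
        prevalence.append(len(infected))
    return prevalence

# Example: prevalence over time on a small-world network
print(targeted_recovery_sis(nx.watts_strogatz_graph(500, 6, 0.1, seed=1))[-5:])
```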

    Application and support for high-performance simulation

    Editorial Comment. High-performance simulation that supports sophisticated simulation experimentation and optimization can require non-trivial amounts of computing power. Advanced distributed computing techniques and systems found in areas such as High Performance Computing (HPC), High Throughput Computing (HTC), grid computing, cloud computing and e-Infrastructures are needed to effectively provide the computing power required for the high-performance simulation of large and complex models. In simulation there has been a long tradition of translating and adopting advances in distributed computing, as shown by contributions from the parallel and distributed simulation community. This special issue brings together a contemporary collection of work showcasing original research in the advancement of simulation theory and practice with distributed computing. The special issue is divided into two parts. This first part focuses on research pertaining to high-performance simulation supporting a range of applications, including the study of epidemics, social networks, urban mobility, and real-time embedded and cyber-physical systems. Compared to other simulation techniques, agent-based modeling and simulation is relatively new; however, it is increasingly being used to study large-scale problems. Agent-based simulations present challenges for high-performance simulation as they can be complex and computationally demanding, and it is therefore not surprising that this special issue includes several articles on the high-performance simulation of such systems. Research Councils UK

    High-performance simulation and simulation methodologies

    Editorial Comment. The realization of high-performance simulation necessitates sophisticated simulation experimentation and optimization; this often requires non-trivial amounts of computing power. Distributed computing techniques and systems found in areas such as High Performance Computing (HPC), High Throughput Computing (HTC), e-infrastructures, grid and cloud computing can provide the required computing capacity for the execution of large and complex simulations. This extends the long tradition of adopting advances in distributed computing in simulation, as evidenced by contributions from the parallel and distributed simulation community. There has arguably been a recent acceleration of innovation in distributed computing tools and techniques, and this special issue presents an opportunity to showcase recent research that is assimilating these new advances in simulation. It brings together a contemporary collection of work showcasing original research in the advancement of simulation theory and practice with distributed computing. The special issue has two parts. The first part (published in the preceding issue of the journal) included seven studies in high-performance simulation supporting applications including the study of epidemics, social networks, urban mobility and real-time embedded and cyber-physical systems. This second part focuses on original research in high-performance simulation that supports a range of methods including DEVS, Petri nets and DES. Of the four papers for this issue, the manuscript by Bergero et al. (2013), which was submitted, reviewed and accepted for the special issue, was published in an earlier issue of SIMULATION as the author requested early publication. Research Councils UK

    The Effects of Spatio-Temporal Heterogeneities on the Emergence and Spread of Dengue Virus

    The dengue virus (DENV) remains a considerable global public health concern. The interactions between the virus, its mosquito vectors and the human host are complex and only partially understood. Dependencies of vector ecology on environmental attributes, such as temperature and rainfall, together with host population density, introduce strong spatio-temporal heterogeneities, resulting in irregular epidemic outbreaks and asynchronous oscillations in serotype prevalence. Human movements across different spatial scales have also been implicated as important drivers of dengue epidemiology across space and time, and further create the conditions for the geographic expansion of dengue into new habitats. Previously proposed transmission models often relied on strong, unrealistic assumptions regarding key epidemiological and ecological interactions to elucidate the effects of these spatio-temporal heterogeneities on the emergence, spread and persistence of dengue. Furthermore, the computational limitations of individual-based models have hindered the development of more detailed descriptions of the influence of vector ecology, environment and human mobility on dengue epidemiology. In order to address these shortcomings, the main aim of this thesis was to rigorously quantify the effects of ecological drivers on dengue epidemiology within a robust and computationally efficient framework. The individual-based model presented included an explicit spatial structure, vector and human movement, spatio-temporal heterogeneity in population densities, and climate effects. The flexibility of the framework allowed robust assessment of the implications of classical modelling assumptions on the basic reproduction number, R₀, demonstrating that traditional approaches grossly inflate R₀ estimates. The model's more realistic meta-population formulation was then exploited to elucidate the effects of ecological heterogeneities on dengue incidence, which showed that sufficient levels of community connectivity are required for the spread and persistence of dengue virus. By fitting the individual-based model to empirical data, the influence of climate on dengue was quantified, revealing the strong benefits that cross-sectional serological data could bring to more precisely inferring the ecological drivers of arboviral epidemiology. Overall, the findings presented here demonstrate the wide epidemiological landscape that ecological drivers induce, cautioning against generalising interpretations from one particular setting across wider spatial contexts. These findings will prove invaluable for the assessment of vector-borne control strategies, such as mosquito elimination or vaccination deployment programs.

    Could Cultures Determine the Course of Epidemics and Explain Waves of COVID-19?

    Coronavirus Disease (COVID-19), caused by the SARS-CoV-2 virus, is an infectious disease that quickly became a pandemic, spreading with different patterns in each country. Travel bans, lockdowns, social distancing, and non-essential business closures caused significant economic disruptions and stalled growth worldwide in the pandemic's first year. In almost every country, public health officials mandated and/or encouraged Nonpharmaceutical Interventions (NPIs) such as contact tracing, social distancing, masks, and quarantine. Human behavioral decision-making regarding social isolation significantly impedes global success in containing the pandemic. This thesis focuses on human behaviors and cultures related to decision-making about social isolation during the pandemic. Within a COVID-19 disease transmission model, we created a conceptual and deterministic model of human behavior and cultures. This study emphasizes the importance of human behavior in successful disease control strategies. Additionally, we introduce a reverse-engineering approach to determine whether cultures can be explained by the courses of COVID-19 epidemics. We used a deep learning technique based on a convolutional neural network (CNN) to predict cultures from COVID-19 courses. In this system, the CNN is used for deep feature extraction, with ordinary convolutions and with residual blocks. We also introduce a novel concept that converts tabular data into an image using matrix transformation and image processing, validated by identifying some well-known functions. Despite having a small and novel data set, we achieved 80-95% accuracy, depending on the cultural measures.
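
    The abstract does not specify the exact matrix transformation, so the snippet below is only a hedged illustration of the general idea of turning a tabular record into a 2-D image that a CNN can consume; the min-max scaling, tiling and 16x16 grid are assumptions, not the thesis's method.

```python
# Hedged sketch: map one row of tabular data to a grayscale image for a CNN.
# The scaling/tiling choices here are illustrative assumptions.
import numpy as np

def tabular_to_image(row, side=16):
    x = np.asarray(row, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-9)        # min-max scale to [0, 1]
    img = np.resize(x, side * side).reshape(side, side)   # tile/truncate to fill the grid
    return (img * 255).astype(np.uint8)

# Example: a 40-feature record becomes a 16x16 image
print(tabular_to_image(np.random.rand(40)).shape)
```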

    Architectures and GPU-Based Parallelization for Online Bayesian Computational Statistics and Dynamic Modeling

    Recent work demonstrates that coupling Bayesian computational statistics methods with dynamic models can facilitate the analysis of complex systems associated with diverse time series, including those involving social and behavioural dynamics. Particle Markov Chain Monte Carlo (PMCMC) methods constitute a particularly powerful class of Bayesian methods combining aspects of batch Markov Chain Monte Carlo (MCMC) and the sequential Monte Carlo method of Particle Filtering (PF). PMCMC can flexibly combine theory-capturing dynamic models with diverse empirical data. Online machine learning is a subcategory of machine learning algorithms characterized by sequential, incremental execution as new data arrive, which can give updated results and predictions from growing sequences of incoming data. While many machine learning and statistical methods have been adapted to online algorithms, PMCMC is one example of the many methods whose compatibility with and adaptation to online learning remain unclear. In this thesis, I proposed a data-streaming solution supporting PF and PMCMC methods with dynamic epidemiological models and demonstrated several successful applications. By constructing an automated, easy-to-use streaming system, analytic applications and simulation models gain access to arriving real-time data, shortening the time gap between data and the resulting model-supported insight. The well-defined architecture emerging from the thesis would substantially expand the potential of traditional simulation models by allowing such models to be offered as continually updated services. Contingent on sufficiently fast execution time, simulation models within this framework can consume incoming empirical data in real time and generate informative predictions on an ongoing basis as new data points arrive. In a second line of work, I investigated the platform's flexibility and capability by extending the system to support a powerful class of PMCMC algorithms with dynamic models while ameliorating such algorithms' traditionally severe performance limitations. Specifically, this work designed and implemented a GPU-enabled parallel version of a PMCMC method with dynamic simulation models. The resulting codebase has readily enabled researchers to adapt their models to state-of-the-art statistical inference methods and to ensure that the computation-heavy PMCMC method can perform significant sampling between the successive arrivals of new data points. Investigating this method's impact with several realistic PMCMC application examples showed that GPU-based acceleration allows for up to a 160x speedup compared to a corresponding CPU-based version that does not exploit parallelism. The GPU-accelerated PMCMC and the streaming processing system complement each other, jointly providing researchers with a powerful toolset to greatly accelerate learning and secure additional insight from the high-velocity data increasingly prevalent within social and behavioural spheres. The design philosophy applied supports a platform with broad generalizability and potential for ready future extensions. The thesis discusses common barriers and difficulties in designing and implementing such systems and offers solutions to overcome or mitigate them.
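
    As background for the PF/PMCMC pipeline described above, the sketch below shows a generic bootstrap particle filter returning the log marginal likelihood estimate that a particle-marginal MCMC sampler would use in its acceptance ratio. The model callbacks, array-based state representation and multinomial resampling are placeholder assumptions, not the thesis's GPU-parallel implementation.

```python
# Generic bootstrap particle filter (CPU sketch, not the thesis's GPU code).
# init_fn/step_fn/loglik_fn are placeholder callbacks for a dynamic model
# whose particle states are stored in a NumPy array.
import numpy as np

def bootstrap_pf(observations, n_particles, init_fn, step_fn, loglik_fn, rng=None):
    rng = rng or np.random.default_rng()
    particles = init_fn(n_particles, rng)              # sample initial states
    log_marginal = 0.0                                  # log-evidence estimate for PMCMC
    for obs in observations:
        particles = step_fn(particles, rng)             # propagate through the dynamic model
        logw = loglik_fn(obs, particles)                 # observation log-likelihoods
        m = logw.max()
        w = np.exp(logw - m)
        log_marginal += m + np.log(w.mean())             # incremental marginal likelihood
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        particles = particles[idx]                       # multinomial resampling
    return log_marginal
```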

    REU Site: Supercomputing Undergraduate Program in Maine (SuperMe)

    This award, for a new Research Experience for Undergraduates (REU) site, builds a Supercomputing Undergraduate Program in Maine (SuperMe). This new site provides ten-week summer research experiences at the University of Maine (UMaine) for ten undergraduates each year for three years. With the integrated expertise of ten faculty researchers from both computer systems and domain applications, SuperMe allows each undergraduate to conduct meaningful research, such as developing supercomputing techniques and tools, and solving cutting-edge research problems through parallel computing and scientific visualization. Besides being actively involved in research groups, students attend weekly seminars given by faculty mentors, formally report and present their research experiences and results, take part in field trips, and interact with ITEST, RET and GK-12 participants. SuperMe provides scientific exploration ranging from engineering to the sciences, with a coherent intellectual focus on supercomputing. It consists of four computer systems projects that aim to improve techniques in grid computing, parallel I/O data access, high-resolution scientific visualization and information security, and five computer modeling projects that utilize world-class supercomputing and visualization facilities housed at UMaine to perform large, complex simulation experiments and data analysis in different science domains. SuperMe provides a diversity of cutting-edge research opportunities to students from under-represented groups or from universities in rural areas with limited research opportunities. Through interacting directly with participants of existing programs at UMaine, including ITEST, RET and GK-12, REU students disseminate their research results and experiences to middle and high school students and teachers. This site is co-funded by the Department of Defense in partnership with the NSF REU Site program.

    High-Performance Computing and ABMS for High-Resolution COVID-19 Spreading Simulation

    This paper presents an approach for modeling and simulating the spreading of COVID-19 based on agent-based modeling and simulation (ABMS). Our goal is not only to support large-scale simulations but also to increase the simulation resolution. Moreover, we do not assume an underlying network of contacts; the person-to-person contacts responsible for the spreading are modeled as a function of the geographical distance among individuals. In particular, we defined a commuting mechanism combining radiation-based and gravity-based models, and we exploited the commuting properties at different resolution levels (municipalities and provinces). Finally, we exploited high-performance computing (HPC) facilities to simulate millions of concurrent agents, each mapping an individual's behavior. To perform such simulations, we developed a spreading simulator and validated it by simulating the spreading in two of the most populated Italian regions: Lombardy and Emilia-Romagna. Our main achievement consists of the effective modeling of 10 million concurrent agents, each one mapping an individual's behavior at high resolution in terms of social contacts, mobility and contribution to the virus spreading. Moreover, we analyzed the forecasting ability of our framework to predict the number of infections when initialized with only a few days of real data. We validated our model against the statistical data coming from the serological analysis conducted in Lombardy, and our model makes a smaller error than other state-of-the-art models, with a final root mean squared error equal to 56,009 when simulating the entire first pandemic wave in spring 2020. For the Emilia-Romagna region, we simulated the second pandemic wave during autumn 2020 and reached a final RMSE equal to 10,730.11.
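
    For context, the two commuting kernels the paper combines are standard in the human-mobility literature; the hedged sketch below shows their usual forms (a gravity kernel and the radiation model of Simini et al.), while the paper's actual combination, calibration and resolution handling are not reproduced here.

```python
# Standard forms of the gravity and radiation commuting models referenced
# above; parameter values and the way the paper blends the two are assumptions.

def gravity_flow(pop_i, pop_j, dist_ij, gamma=2.0):
    """Gravity model: flow proportional to the population product over distance^gamma."""
    return pop_i * pop_j / dist_ij ** gamma

def radiation_flow(pop_i, pop_j, s_ij, trips_i):
    """Radiation model: s_ij is the population living closer to location i than j
    (excluding i and j); trips_i is the total number of commuters leaving i."""
    return trips_i * pop_i * pop_j / ((pop_i + s_ij) * (pop_i + pop_j + s_ij))

# Example: flows between two municipalities of 50k and 20k people, 15 km apart
print(gravity_flow(50_000, 20_000, 15.0), radiation_flow(50_000, 20_000, 120_000, 5_000))
```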

    Deep Model for Improved Operator Function State Assessment

    A deep learning framework is presented for engagement assessment using EEG signals. Deep learning is a recently developed machine learning technique that has been applied to many problems. In this paper, we propose a deep learning strategy for operator function state (OFS) assessment. Fifteen pilots participated in a flight simulation from Seattle to Chicago. During the four-hour simulation, EEG signals were recorded for each pilot. We labeled 20-minute segments of data as engaged or disengaged to fine-tune the deep network and utilized the remaining vast amount of unlabeled data to initialize the network. The trained deep network was then used to assess whether a pilot was engaged during the four-hour simulation.
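
    The training scheme described above (initialize the network from unlabeled EEG data, then fine-tune on the small labeled set) can be sketched as below; the abstract does not specify the architecture, so the autoencoder-style pretraining, layer sizes and optimizer settings are stand-in assumptions rather than the paper's reported network.

```python
# Hedged sketch of unsupervised pretraining + supervised fine-tuning for the
# engaged/disengaged classifier; architecture and hyperparameters are
# illustrative assumptions, not the paper's reported network.
import torch
import torch.nn as nn

def pretrain_and_finetune(unlabeled, labeled_x, labeled_y, dim=128, hidden=64, epochs=50):
    # labeled_y: LongTensor of 0/1 labels (disengaged/engaged)
    encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
    decoder = nn.Linear(hidden, dim)
    ae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
    for _ in range(epochs):                            # initialize from unlabeled EEG windows
        ae_opt.zero_grad()
        recon_loss = nn.functional.mse_loss(decoder(encoder(unlabeled)), unlabeled)
        recon_loss.backward()
        ae_opt.step()
    head = nn.Linear(hidden, 2)                        # engaged vs. disengaged
    ft_opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()))
    for _ in range(epochs):                            # fine-tune on the small labeled set
        ft_opt.zero_grad()
        clf_loss = nn.functional.cross_entropy(head(encoder(labeled_x)), labeled_y)
        clf_loss.backward()
        ft_opt.step()
    return nn.Sequential(encoder, head)                # classifier for the full recording
```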