HIV and Concurrent Sexual Partnerships: Modelling the Role of Coital Dilution
Background: The concurrency hypothesis asserts that high prevalence of overlapping sexual partnerships explains extraordinarily high HIV levels in sub-Saharan Africa. Earlier simulation models show that the network effect of concurrency can increase HIV incidence, but those models do not account for the coital dilution effect (nonprimary partnerships have lower coital frequency than primary partnerships).
Methods: We modify the model of Eaton et al (AIDS and Behavior, September 2010) to incorporate coital dilution by assigning lower coital frequencies to non-primary partnerships. We parameterize coital dilution based on the empirical work of Morris et al (PLoS ONE, December 2010) and others. Following Eaton et al, we simulate the daily transmission of HIV over 250 years for 10 levels of concurrency.
Results: At every level of concurrency, our focal coital-dilution simulation produces epidemic extinction. Our sensitivity analysis shows that this result is quite robust; even modestly lower coital frequencies in non-primary partnerships lead to epidemic extinction.
Conclusions: To contribute usefully to the investigation of HIV prevalence, simulation models of concurrent partnering and HIV epidemics must incorporate realistic degrees of coital dilution. Doing so dramatically reduces the role that concurrency can play in accelerating the spread of HIV and suggests that concurrency cannot be an important driver of HIV epidemics in sub-Saharan Africa. Alternative explanations for these epidemics are needed.
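The mechanism the abstract describes can be illustrated with a toy calculation (this is not the Eaton et al. model; all parameter values below are hypothetical): under coital dilution, a non-primary partnership has far fewer sex acts over its lifetime, so its cumulative transmission probability drops sharply.

```python
# Toy sketch of the coital-dilution effect: expected per-partnership
# transmission probability as a function of coital frequency.
# All parameter values are hypothetical, for illustration only.

def transmission_prob(per_act_prob, acts_per_day, duration_days):
    """P(at least one transmission) over a partnership of the given length."""
    return 1.0 - (1.0 - per_act_prob) ** (acts_per_day * duration_days)

p = 0.002  # hypothetical per-act transmission probability

# Without dilution, a concurrent partnership transmits like a primary one;
# with dilution, its coital frequency is much lower.
undiluted = transmission_prob(p, acts_per_day=0.25, duration_days=365)
diluted = transmission_prob(p, acts_per_day=0.05, duration_days=365)

print(f"concurrent undiluted: {undiluted:.3f}, concurrent diluted: {diluted:.3f}")
```

Even this crude calculation shows why assigning realistic (lower) coital frequencies to non-primary partnerships weakens the network effect of concurrency.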
Saturation Effects and the Concurrency Hypothesis: Insights from an Analytic Model
Sexual partnerships that overlap in time (concurrent relationships) may play
a significant role in the HIV epidemic, but the precise effect is unclear. We
derive edge-based compartmental models of disease spread in idealized dynamic
populations with and without concurrency to allow for an investigation of its
effects. Our models assume that partnerships change in time and individuals
enter and leave the at-risk population. Infected individuals transmit at a
constant per-partnership rate to their susceptible partners. In our idealized
populations we find regions of parameter space where the existence of
concurrent partnerships leads to substantially faster growth and higher
equilibrium levels, but also regions in which the existence of concurrent
partnerships has very little impact on the growth or the equilibrium.
Additionally we find mixed regimes in which concurrency significantly increases
the early growth, but has little effect on the ultimate equilibrium level.
Guided by model predictions, we discuss general conditions under which
concurrent relationships would be expected to have large or small effects in
real-world settings. Our observation that the impact of concurrency saturates
suggests that concurrency-reducing interventions may be most effective in
populations with low to moderate concurrency.
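The flavor of such edge-based compartmental models can be sketched numerically. The fragment below is a minimal static-network Miller/Volz-style EBCM SIR model with a Poisson degree distribution, a simplified stand-in for the dynamic-population models in the abstract; the parameter values are hypothetical.

```python
# Minimal edge-based compartmental SIR sketch (Miller/Volz style) on a static
# configuration-model network with Poisson degrees. A simplified stand-in for
# the dynamic-network models discussed above; parameters are illustrative.

import math

def ebcm_sir(mean_k=5.0, beta=0.3, gamma=0.1, eps=1e-3, dt=0.01, t_max=100.0):
    """Euler-integrate theta' = -beta*theta + beta*psi'(theta)/psi'(1)
    + gamma*(1 - theta), with psi(theta) = exp(mean_k*(theta - 1)).
    Returns the final susceptible fraction S = psi(theta)."""
    theta = 1.0 - eps  # small initial infection
    for _ in range(int(t_max / dt)):
        dtheta = (-beta * theta
                  + beta * math.exp(mean_k * (theta - 1.0))
                  + gamma * (1.0 - theta))
        theta += dt * dtheta
    return math.exp(mean_k * (theta - 1.0))

print(f"final susceptible fraction: {ebcm_sir():.3f}")
```

With a high per-partnership transmission rate the epidemic takes off and depletes the susceptibles; with a low rate it dies out, which is the kind of regime boundary the analytic model above is used to map.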
Optimal configuration of active and backup servers for augmented reality cooperative games
Interactive applications such as online games on mobile devices have become increasingly popular in recent years. Combining them can generate new and interesting cooperative services. For instance, gamers wearing Augmented Reality (AR) visors, connected as wireless nodes in an ad-hoc network, can interact with each other while immersed in the game. To enable this vision, we discuss a hybrid architecture that supports game play in ad-hoc mode instead of the traditional client-server setting. In our architecture, one of the player nodes also acts as the game server, while other backup server nodes stand ready to become active servers in case of network disconnection, e.g. due to a low energy level at the currently active server. This allows for longer gaming sessions before disconnection or energy exhaustion occurs. In this context, the server election strategy that maximizes network lifetime is not straightforward. We therefore analyze this issue through a Mixed Integer Linear Programming (MILP) model; both numerical and simulation-based analyses show that the backup-server solution fulfills its design objective.
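The intuition behind server rotation can be sketched with a toy simulation (this is a hypothetical greedy heuristic, not the paper's MILP model): electing the highest-energy node as active server each round extends the session compared with a fixed server, because the heavier server drain is spread across nodes.

```python
# Hypothetical sketch of the server-rotation idea (not the paper's MILP):
# each round the active server drains more energy than a client; rotation
# elects the highest-energy node as server. Costs are illustrative.

def session_lifetime(energy, server_cost=3.0, client_cost=1.0, rotate=True):
    """Number of rounds until some node can no longer play its role."""
    energy = list(energy)  # don't mutate the caller's list
    rounds = 0
    while True:
        server = max(range(len(energy)), key=lambda i: energy[i]) if rotate else 0
        if energy[server] < server_cost or min(energy) < client_cost:
            return rounds
        for i in range(len(energy)):
            energy[i] -= server_cost if i == server else client_cost
        rounds += 1

nodes = [20.0, 18.0, 16.0]
print("fixed server:", session_lifetime(nodes, rotate=False))    # 6 rounds
print("rotating server:", session_lifetime(nodes, rotate=True))  # 10 rounds
```

The real problem is harder than this greedy sketch suggests (switching costs, topology, and radio energy all matter), which is why the paper formulates it as a MILP.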
A survey of simulation techniques in commerce and defence
Despite the developments in Modelling and Simulation (M&S) tools and techniques over the past years, there has been a gap in M&S research and practice in healthcare: no toolkit exists to assist modellers and simulation practitioners with selecting an appropriate set of techniques. This study is a preliminary step towards this goal. This paper presents results from a systematic literature survey on applications of M&S in the commerce and defence domains that could inspire improvements in healthcare. Interim results show that in the commercial sector Discrete-Event Simulation (DES) has been the most widely used technique, with System Dynamics (SD) in second place. In the defence sector, however, SD has gained relatively more attention; it has been found quite useful for qualitative and soft-factors analysis. Both surveys make clear that there is a growing trend towards hybrid M&S approaches.
RELEASE: A High-level Paradigm for Reliable Large-scale Server Software
Erlang is a functional language with a much-emulated model for building reliable distributed systems. This paper outlines the RELEASE project and describes the progress in the first six months. The project aim is to scale Erlang's radical concurrency-oriented programming paradigm to build reliable general-purpose software, such as server-based systems, on massively parallel machines. Currently Erlang has inherently scalable computation and reliability models, but in practice scalability is constrained by aspects of the language and virtual machine. We are working at three levels to address these challenges: evolving the Erlang virtual machine so that it can work effectively on large-scale multicore systems; evolving the language to Scalable Distributed (SD) Erlang; and developing a scalable Erlang infrastructure to integrate multiple, heterogeneous clusters. We are also developing state-of-the-art tools that allow programmers to understand the behaviour of massively parallel SD Erlang programs. We will demonstrate the effectiveness of the RELEASE approach using demonstrators and two large case studies on a Blue Gene.
A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing
The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost without owning any infrastructure. Virtualization technologies enable users to acquire, configure, and be charged for resources on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially varying specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and, in turn, performance degradation and service-level agreement (SLA) violations. To achieve efficient scheduling, these challenges should be addressed using load balancing strategies, and load balancing has been proven to be an NP-hard problem. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to physical machines (PMs) in infrastructure clouds, focusing especially on load balancing. A detailed classification targeting load balancing algorithms for VM placement in cloud data centers is developed, and the surveyed algorithms are organized according to it. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by providing insight into potential future enhancements.
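One classic balancing heuristic from this literature can be sketched in a few lines: "worst fit" places each VM on the PM with the most remaining capacity, spreading load across machines. The capacities and demands below are hypothetical, and real placement must consider multiple resource dimensions, not just one.

```python
# Illustrative "worst fit" VM placement: each VM goes to the physical
# machine (PM) with the most remaining capacity, which tends to balance
# load. Single-dimensional (e.g. CPU) for simplicity; values hypothetical.

def worst_fit_placement(vm_demands, pm_capacities):
    """Return (vm -> pm assignment, remaining free capacity per PM)."""
    free = list(pm_capacities)
    placement = {}
    for vm, demand in enumerate(vm_demands):
        pm = max(range(len(free)), key=lambda i: free[i])  # least-loaded PM
        if free[pm] < demand:
            raise ValueError(f"no PM can host VM {vm}")
        free[pm] -= demand
        placement[vm] = pm
    return placement, free

placement, free = worst_fit_placement([4, 3, 3, 2, 2], [10, 10])
print(placement, free)
```

Even this greedy one-dimensional version hints at why the general problem is NP-hard: with multiple resource dimensions and migration costs, no simple ordering is optimal, which motivates the many heuristic families the survey classifies.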
Preparing sparse solvers for exascale computing.
Sparse solvers provide essential functionality for a wide variety of scientific applications. Highly parallel sparse solvers are essential for continuing advances in high-fidelity, multi-physics and multi-scale simulations, especially as we target exascale platforms. This paper describes the challenges, strategies and progress of the US Department of Energy Exascale Computing project towards providing sparse solvers for exascale computing platforms. We address the demands of systems with thousands of high-performance node devices where exposing concurrency, hiding latency and creating alternative algorithms become essential. The efforts described here are works in progress, highlighting current success and upcoming challenges. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'
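A central kernel inside such solvers is the sparse matrix-vector product, typically over a compressed sparse row (CSR) layout; it is exactly the kind of loop whose concurrency must be exposed at exascale. A minimal serial sketch (not from the paper) looks like this:

```python
# Sparse matrix-vector product y = A @ x with A in CSR (compressed sparse
# row) form: indptr gives each row's slice into indices/data. Written
# serially for clarity; parallel solvers distribute the row loop.

def csr_matvec(indptr, indices, data, x):
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# 3x3 example matrix: [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
indptr = [0, 2, 3, 5]
indices = [0, 2, 1, 0, 2]
data = [2.0, 1.0, 3.0, 4.0, 5.0]
print(csr_matvec(indptr, indices, data, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

The row loop is embarrassingly parallel, but the irregular, data-dependent access to `x` is what makes latency hiding and algorithmic restructuring necessary on thousands of high-performance node devices.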
Improving Performance of M-to-N Processing and Data Redistribution in In Transit Analysis and Visualization
In an in transit setting, a parallel data producer, such as a numerical simulation, runs on one set of ranks M, while a data consumer, such as a parallel visualization application, runs on a different set of ranks N. One of the central challenges in this in transit setting is to determine the mapping of data from the set of M producer ranks to the set of N consumer ranks. This is a challenging problem for several reasons, such as the producer and consumer codes potentially having different scaling characteristics and different data models. The resulting mapping from M to N ranks can have a significant impact on aggregate application performance. In this work, we present an approach for performing this M-to-N mapping in a way that has broad applicability across a diversity of data producer and consumer applications. We evaluate its design and performance with a study that runs at high concurrency on a modern HPC platform. By leveraging design characteristics, which facilitate an "intelligent" mapping from M-to-N, we observe that significant performance gains are possible in terms of several different metrics, including time-to-solution and amount of data moved.
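The baseline form of such an M-to-N mapping can be sketched as a block redistribution: each of M producer ranks holds a contiguous block of items, and we compute how many items each producer sends to each of N consumers. This is a hypothetical naive mapping, not the paper's "intelligent" scheme, which also accounts for factors such as scaling characteristics and data models.

```python
# Hypothetical naive M-to-N block redistribution (not the paper's method):
# producers and consumers each own contiguous, near-equal blocks of a global
# index space; the send counts are the overlaps of those blocks.

def block_mapping(total_items, m_producers, n_consumers):
    def block(rank, size):
        """[start, end) of the block owned by `rank` out of `size` ranks."""
        base, extra = divmod(total_items, size)
        start = rank * base + min(rank, extra)
        return start, start + base + (1 if rank < extra else 0)

    sends = {}  # (producer, consumer) -> number of items sent
    for p in range(m_producers):
        p_lo, p_hi = block(p, m_producers)
        for c in range(n_consumers):
            c_lo, c_hi = block(c, n_consumers)
            overlap = min(p_hi, c_hi) - max(p_lo, c_lo)
            if overlap > 0:
                sends[(p, c)] = overlap
    return sends

print(block_mapping(10, 4, 2))  # e.g. producer 1 splits its block across both consumers
```

Every item is sent exactly once, but nothing here minimizes cross-node traffic; choosing a mapping that also respects placement and topology is where the performance gains the abstract reports come from.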