Using simulation studies to evaluate statistical methods
Simulation studies are computer experiments that involve creating data by
pseudorandom sampling. The key strength of simulation studies is the ability to
understand the behaviour of statistical methods because some 'truth' (usually
some parameter(s) of interest) is known from the process of generating the data.
This allows us to consider properties of methods, such as bias. While widely
used, simulation studies are often poorly designed, analysed and reported. This
tutorial outlines the rationale for using simulation studies and offers
guidance for design, execution, analysis, reporting and presentation. In
particular, this tutorial provides: a structured approach for planning and
reporting simulation studies, which involves defining aims, data-generating
mechanisms, estimands, methods and performance measures ('ADEMP'); coherent
terminology for simulation studies; guidance on coding simulation studies; a
critical discussion of key performance measures and their estimation; guidance
on structuring tabular and graphical presentation of results; and new graphical
presentations. With a view to describing recent practice, we review 100
articles taken from Volume 34 of Statistics in Medicine that included at least
one simulation study and identify areas for improvement.
Comment: 31 pages, 9 figures (2 in appendix), 8 tables (1 in appendix)
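The ADEMP structure above can be illustrated with a minimal sketch (this is an illustrative example, not code from the tutorial; the data-generating mechanism, methods and parameter values are all assumptions chosen for demonstration):

```python
import numpy as np

# Minimal ADEMP-style simulation study (illustrative assumptions throughout):
# Aim: compare two estimators of a location parameter.
# Data-generating mechanism: n i.i.d. draws from Normal(mu, sigma^2).
# Estimand: mu.
# Methods: sample mean and sample median.
# Performance measures: bias and empirical standard error (with Monte Carlo SE).

rng = np.random.default_rng(2024)  # seed the generator for reproducibility
mu, sigma, n, n_sim = 1.0, 2.0, 50, 5000

means = np.empty(n_sim)
medians = np.empty(n_sim)
for i in range(n_sim):
    data = rng.normal(mu, sigma, size=n)  # one simulated dataset
    means[i] = data.mean()
    medians[i] = np.median(data)

for name, est in [("mean", means), ("median", medians)]:
    bias = est.mean() - mu               # average estimate minus known truth
    emp_se = est.std(ddof=1)             # empirical standard error
    mcse_bias = emp_se / np.sqrt(n_sim)  # Monte Carlo SE of the bias estimate
    print(f"{name}: bias={bias:.4f} (MCSE {mcse_bias:.4f}), empSE={emp_se:.4f}")
```

Because the truth mu is known by construction, bias can be estimated directly, and reporting the Monte Carlo SE alongside it makes clear how much of the observed bias is simulation noise.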
Graphs with specified degree distributions, simple epidemics and local vaccination strategies
Consider a random graph, having a pre-specified degree distribution F but
otherwise being uniformly distributed, describing the social structure
(friendships) in a large community. Suppose one individual in the community is
externally infected by an infectious disease, and that the disease then runs
its course with each infected individual infecting their not yet infected
friends independently with probability p. For this situation the paper
determines R_0 and tau_0, the basic reproduction number and the asymptotic
final size in case of a major outbreak. Further, the paper looks at some
different local vaccination strategies where individuals are chosen randomly
and vaccinated, or friends of the selected individuals are vaccinated, prior to
the introduction of the disease. For the studied vaccination strategies the
paper determines R_v: the reproduction number, and tau_v: the asymptotic final
proportion infected in case of a major outbreak, after vaccinating a fraction
v.
Comment: 31 pages, 3 figures
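For graphs of this configuration-model type, a standard expression for the basic reproduction number is R_0 = p * E[D(D-1)] / E[D], i.e. the transmission probability times the mean excess degree; the sketch below computes this quantity (the formula is the standard one for this setting and is stated here as an assumption about the paper's model, not taken from it):

```python
import numpy as np

def basic_reproduction_number(degrees, probs, p):
    """R_0 = p * E[D(D-1)] / E[D] for a discrete degree distribution.

    `degrees` are the possible degree values, `probs` their probabilities
    under F, and `p` the per-friendship transmission probability.
    """
    d = np.asarray(degrees, dtype=float)
    w = np.asarray(probs, dtype=float)
    mean_d = np.sum(w * d)                          # E[D]
    mean_excess = np.sum(w * d * (d - 1)) / mean_d  # mean excess degree
    return p * mean_excess

# Illustrative degree distribution (values chosen for demonstration):
r0 = basic_reproduction_number([1, 2, 3, 4], [0.1, 0.4, 0.3, 0.2], p=0.5)
print(f"R_0 = {r0:.3f}")  # here R_0 < 1, so no major outbreak is possible
```

The excess degree appears because an individual reached along a friendship has their degree size-biased, and one of their links leads back to their infector.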
EZ-AG: Structure-free data aggregation in MANETs using push-assisted self-repelling random walks
This paper describes EZ-AG, a structure-free protocol for duplicate
insensitive data aggregation in MANETs. The key idea in EZ-AG is to introduce a
token that performs a self-repelling random walk in the network and aggregates
information from nodes when they are visited for the first time. A
self-repelling random walk of a token on a graph is one in which at each step,
the token moves to a neighbor that has been visited least often. While
self-repelling random walks visit all nodes in the network much faster than
plain random walks, they tend to slow down when most of the nodes are already
visited. In this paper, we show that a single step push phase at each node can
significantly speed up the aggregation and eliminate this slow down. By doing
so, EZ-AG achieves aggregation in only O(N) time and messages. In terms of
overhead, EZ-AG outperforms existing structure-free data aggregation by a
factor of at least log(N) and achieves the lower bound for aggregation message
overhead. We demonstrate the scalability and robustness of EZ-AG using ns-3
simulations in networks ranging from 100 to 4000 nodes under different mobility
models and node speeds. We also describe a hierarchical extension of EZ-AG
that can produce multi-resolution aggregates at each node using only
O(N log N) messages, which is a poly-logarithmic factor improvement over
existing techniques.
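The core token walk can be sketched as follows (a simplified, static-graph illustration of the self-repelling rule; details such as tie-breaking and the push phase are assumptions, and the real protocol runs over a mobile ad hoc network):

```python
import random
from collections import defaultdict

def self_repelling_aggregate(adj, values, start, seed=7):
    """Token walk that moves to the least-visited neighbor at each step,
    summing each node's value on that node's first visit (so the aggregate
    is duplicate-insensitive by construction)."""
    rng = random.Random(seed)
    visits = defaultdict(int)  # how often the token has visited each node
    seen = set()
    total = 0
    node = start
    steps = 0
    while True:
        if node not in seen:
            seen.add(node)
            total += values[node]       # aggregate on first visit only
            if len(seen) == len(adj):   # every node has contributed
                break
        visits[node] += 1
        least = min(visits[v] for v in adj[node])
        node = rng.choice([v for v in adj[node] if visits[v] == least])
        steps += 1
    return total, steps

# Example: sum node values over a 100-node ring graph
n = 100
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
total, steps = self_repelling_aggregate(adj, list(range(n)), start=0)
print(total, steps)  # total is sum(range(100)) = 4950
```

On the ring the repelling rule makes the token sweep in one direction, so coverage is fast; the paper's push phase addresses the slowdown that appears on general topologies when only a few unvisited nodes remain.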