Tight Bounds for On-line Tree Embedding
Many tree-structured computations are inherently parallel.
As leaf processes are recursively spawned they can
be assigned to independent processors in a multicomputer
network. To maintain load balance, an on-line
mapping algorithm must distribute processes equitably
among processors. Additionally, the algorithm itself
must be distributed in nature, and process allocation
must be completed via message-passing with minimal
communication overhead.
This paper investigates bounds on the performance
of deterministic and randomized algorithms for on-line
tree embedding. In particular, we study tradeoffs between
performance (load balance) and communication
overhead (message congestion). We give a simple technique
to derive lower bounds on the congestion that
any on-line allocation algorithm must incur in order to
guarantee load balance. This technique works for both
randomized and deterministic algorithms, although we
find the performance of randomized on-line algorithms
to be somewhat better than that of deterministic
algorithms. Optimal bounds are achieved for several
networks, including multi-dimensional grids and butterflies.
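The on-line setting the abstract describes can be illustrated with a toy sketch (not the paper's algorithm): as a binary tree of processes is spawned, each child is handed to a randomly chosen neighboring processor on a ring, so allocation is local and each hand-off costs one message. The ring topology, the random-neighbor rule, and the message count are all illustrative assumptions.

```python
import random

def spawn_tree(depth, root_proc, n_procs, rng):
    """Spawn a complete binary tree of processes, assigning each
    child on-line to a random neighbor of its parent's processor
    on a ring of n_procs processors (toy model, not the paper's
    allocation algorithm). Returns per-processor load and the
    number of hand-off messages sent."""
    load = [0] * n_procs
    messages = 0

    def spawn(d, proc):
        nonlocal messages
        load[proc] += 1          # this process runs on `proc`
        if d == 0:
            return
        for _ in range(2):       # spawn two children
            # one message hands the child to a ring neighbor
            child = (proc + rng.choice([-1, 1])) % n_procs
            messages += 1
            spawn(d - 1, child)

    spawn(depth, root_proc)
    return load, messages

rng = random.Random(0)
load, messages = spawn_tree(10, 0, 8, rng)
# a depth-10 complete binary tree has 2**11 - 1 = 2047 processes,
# and every process except the root costs one hand-off message
```

The tradeoff the paper studies shows up even here: a purely local rule keeps congestion low but gives no worst-case load-balance guarantee, which is exactly the tension the lower-bound technique quantifies.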
The Core of the Participatory Budgeting Problem
In participatory budgeting, communities collectively decide on the allocation
of public tax dollars for local public projects. In this work, we consider the
question of fairly aggregating the preferences of community members to
determine an allocation of funds to projects. This problem is different from
standard fair resource allocation because of public goods: The allocated goods
benefit all users simultaneously. Fairness is crucial in participatory decision
making, since generating equitable outcomes is an important goal of these
processes. We argue that the classic game-theoretic notion of the core captures
fairness in this setting. To compute the core, we first develop a novel
characterization of a public goods market equilibrium called the Lindahl
equilibrium, which is always a core solution. We then provide the first (to our
knowledge) polynomial time algorithm for computing such an equilibrium for a
broad set of utility functions; our algorithm also generalizes (in a
non-trivial way) the well-known concept of proportional fairness. We use our
theoretical insights to perform experiments on real participatory budgeting
voting data. We empirically show that the core can be efficiently computed for
utility functions that naturally model our practical setting, and examine the
relation of the core with the familiar welfare objective. Finally, we address
concerns of incentives and mechanism design by developing a randomized
approximately dominant-strategy truthful mechanism, building on the exponential
mechanism from differential privacy.
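The proportional-fairness idea the abstract generalizes can be sketched on a tiny instance: with linear utilities over divisible public projects and a unit budget, the proportionally fair allocation maximizes the sum of log-utilities (Nash welfare). The grid search below is a minimal illustrative sketch, not the paper's polynomial-time algorithm, and the two-voter instance is invented for illustration.

```python
import math

def nash_welfare(x1, budget, values):
    """Log Nash welfare of a two-project fractional allocation.
    `values` holds each voter's linear utility coefficients
    (v1, v2) for the two projects; project 2 gets budget - x1."""
    x2 = budget - x1
    total = 0.0
    for v1, v2 in values:
        u = v1 * x1 + v2 * x2
        if u <= 0:
            return float("-inf")   # zero utility kills the product
        total += math.log(u)
    return total

def best_allocation(budget, values, steps=10000):
    """Grid search for the max-Nash-welfare (proportionally fair)
    split of the budget between the two projects."""
    best_x, best_w = 0.0, float("-inf")
    for k in range(steps + 1):
        x1 = budget * k / steps
        w = nash_welfare(x1, budget, values)
        if w > best_w:
            best_x, best_w = x1, w
    return best_x

# two voters with opposed linear utilities over two public projects
values = [(1.0, 0.0), (0.0, 1.0)]
x1 = best_allocation(1.0, values)
# by symmetry the fair split funds both projects equally: x1 = 0.5
```

For public goods the log objective ensures no coalition can afford, with its proportional share of the budget, an allocation all its members prefer, which is the core property the paper formalizes via the Lindahl equilibrium.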
Sequential monitoring of response-adaptive randomized clinical trials
Clinical trials are complex and usually involve multiple objectives such as
controlling type I error rate, increasing power to detect treatment difference,
assigning more patients to better treatment, and more. In literature, both
response-adaptive randomization (RAR) procedures (by changing randomization
procedure sequentially) and sequential monitoring (by changing analysis
procedure sequentially) have been proposed to achieve these objectives to some
degree. In this paper, we propose to sequentially monitor a response-adaptive
randomized clinical trial and study its properties. We prove that the
sequential test statistics of the new procedure converge to a Brownian motion
in distribution. Further, we show that the sequential test statistics
asymptotically satisfy the canonical joint distribution defined in Jennison and
Turnbull (2000). Therefore, type I error and other objectives can be
achieved theoretically by selecting appropriate boundaries. These results open
a door to sequentially monitor response-adaptive randomized clinical trials in
practice. Simulation studies also show that the proposed procedure brings
together the advantages of both techniques in dealing with power, total sample
size, and total number of failures, while controlling the type I
error. In addition, we illustrate the characteristics of the proposed procedure
by redesigning a well-known clinical trial of maternal-infant HIV transmission.
Comment: Published at http://dx.doi.org/10.1214/10-AOS796 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
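The Brownian-motion limit for the sequential test statistic can be checked empirically on a toy trial. The sketch below uses a randomized play-the-winner urn as an assumed RAR rule (the paper covers a broader class) and verifies that, under the null, the standardized difference in success proportions behaves approximately like a standard normal at the final look; the urn rule, sample size, and response rates are all illustrative assumptions.

```python
import random
import statistics

def rar_trial(n_total, p, rng):
    """One two-arm binary-response trial randomized by a
    play-the-winner urn (assumed RAR rule, not the paper's
    specific procedure). Returns the standardized difference
    in success proportions at the end of the trial."""
    urn = [0, 1]                 # start with one ball per arm
    succ, n = [0, 0], [0, 0]
    for _ in range(n_total):
        arm = rng.choice(urn)
        n[arm] += 1
        if rng.random() < p[arm]:
            succ[arm] += 1
            urn.append(arm)      # success: reward the same arm
        else:
            urn.append(1 - arm)  # failure: reward the other arm
    p_hat = [succ[i] / max(n[i], 1) for i in range(2)]
    pooled = sum(succ) / n_total
    se = (pooled * (1 - pooled)
          * (1 / max(n[0], 1) + 1 / max(n[1], 1))) ** 0.5
    return (p_hat[0] - p_hat[1]) / se if se > 0 else 0.0

rng = random.Random(1)
zs = [rar_trial(200, (0.5, 0.5), rng) for _ in range(2000)]
# under the null the statistic should be roughly standard normal,
# consistent with the Brownian-motion limit at a single look
mean_z = statistics.fmean(zs)
var_z = statistics.pvariance(zs)
```

Repeating this at interim looks (information fractions) would exhibit the independent-increments structure of the canonical joint distribution that makes standard group-sequential boundaries applicable.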
Dynamic algorithms for multicast with intra-session network coding
The problem of multiple multicast sessions with
intra-session network coding in time-varying networks is considered.
The network-layer capacity region of input rates that can be
stably supported is established. Dynamic algorithms for multicast
routing, network coding, power allocation, session scheduling, and
rate allocation across correlated sources, which achieve stability
for rates within the capacity region, are presented. This work
builds on the back-pressure approach introduced by Tassiulas
et al., extending it to network coding and correlated sources. In
the proposed algorithms, decisions on routing, network coding,
and scheduling between different sessions at a node are made
locally at each node based on virtual queues for different sinks.
For correlated sources, the sinks locally determine and control
transmission rates across the sources. The proposed approach
yields a completely distributed algorithm for wired networks.
In the wireless case, power control among different transmitters
is centralized while routing, network coding, and scheduling
between different sessions at a given node are distributed.
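The local decision rule the abstract describes can be sketched in a few lines: at each node, back-pressure serves the (outgoing link, sink) pair with the largest positive virtual-queue differential. The node names, sink labels, and queue values below are invented for illustration; the paper's algorithms layer network coding and power allocation on top of this basic rule.

```python
def backpressure_choice(queues, node, neighbors):
    """Local back-pressure decision at `node`: for each outgoing
    link and each sink's virtual queue, compute the queue
    differential and serve the (neighbor, sink) pair that
    maximizes it. `queues[n][sink]` is the virtual queue length
    for `sink` at node `n` (a toy model of per-sink virtual
    queues). Returns None when no differential is positive."""
    best, best_diff = None, 0
    for nbr in neighbors:
        for sink, q in queues[node].items():
            diff = q - queues[nbr].get(sink, 0)
            if diff > best_diff:
                best_diff, best = diff, (nbr, sink)
    return best

# virtual queues at node "v" and its two neighbors
queues = {
    "v": {"t1": 5, "t2": 2},
    "a": {"t1": 1, "t2": 4},
    "b": {"t1": 4, "t2": 0},
}
choice = backpressure_choice(queues, "v", ["a", "b"])
# the largest differential is for sink t1 toward a: 5 - 1 = 4
```

Because the rule only compares a node's queues with its neighbors', it needs no global state, which is what makes the wired-network algorithm completely distributed.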
On the almost sure convergence of adaptive allocation procedures
In this paper, we provide some general convergence results for adaptive
designs for treatment comparison, both in the absence and presence of
covariates. In particular, we demonstrate the almost sure convergence of the
treatment allocation proportion for a vast class of adaptive procedures, also
including designs that have not been formally investigated but mainly explored
through simulations, such as Atkinson's optimum biased coin design, Pocock and
Simon's minimization method and some of its generalizations. While the large
majority of proposals in the literature rely on continuous allocation rules,
our results allow us to prove, within a single mathematical framework, the
convergence of adaptive allocation methods based on both continuous and
discontinuous randomization functions. Although several examples of earlier
works are included in order to enhance the applicability, our approach provides
substantial insight for future suggestions, especially in the absence of a
prefixed target and for designs characterized by sequences of allocation rules.
Comment: Published at http://dx.doi.org/10.3150/13-BEJ591 in the Bernoulli
(http://isi.cbs.nl/bernoulli/) by the International Statistical
Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
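The almost sure convergence of the allocation proportion can be seen on a concrete design covered by this class of results. The sketch below implements Efron's biased coin (a standard discontinuous randomization rule; the bias parameter 2/3 is Efron's classical choice, assumed here for illustration) and shows the proportion allocated to one arm settling at the balanced target 1/2.

```python
import random

def efron_biased_coin(n, p=2/3, rng=None):
    """Efron's biased coin design for two arms: assign to the
    under-represented arm with probability p, and toss a fair
    coin when the arms are balanced. Returns the proportion of
    the n patients allocated to arm A."""
    rng = rng or random.Random()
    n_a = 0
    for i in range(n):
        d = n_a - (i - n_a)      # current imbalance toward A
        if d == 0:
            prob_a = 0.5         # balanced: fair toss
        elif d < 0:
            prob_a = p           # A behind: favor A
        else:
            prob_a = 1 - p       # A ahead: favor B
        if rng.random() < prob_a:
            n_a += 1
    return n_a / n

prop = efron_biased_coin(10_000, rng=random.Random(2))
# the allocation proportion converges almost surely to 1/2
```

The allocation rule is discontinuous in the imbalance (it jumps at d = 0), which is exactly the kind of design the unified framework covers alongside continuous rules.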
Multitype randomized Reed--Frost epidemics and epidemics upon random graphs
We consider a multitype epidemic model which is a natural extension of the
randomized Reed--Frost epidemic model. The main result is the derivation of an
asymptotic Gaussian limit theorem for the final size of the epidemic. The
method of proof is simpler, and more direct, than is used for similar results
elsewhere in the epidemics literature. In particular, the results are
specialized to epidemics upon extensions of the Bernoulli random graph.
Comment: Published at http://dx.doi.org/10.1214/105051606000000123 in the
Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute
of Mathematical Statistics (http://www.imstat.org).
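The final-size quantity studied in the abstract can be simulated directly. The sketch below is the classical single-type Reed--Frost chain (a special case of the multitype randomized model the paper analyzes); the population size, per-contact infection probability, and number of replications are illustrative assumptions.

```python
import random

def reed_frost_final_size(n, p, initial, rng):
    """Final size of a single-type Reed--Frost epidemic: in each
    generation, every current infective independently infects
    each remaining susceptible with probability p; infectives
    then recover. Returns the total number ever infected."""
    susceptible = n - initial
    infective = initial
    total = initial
    while infective > 0 and susceptible > 0:
        # probability a given susceptible escapes all infectives
        escape = (1 - p) ** infective
        new_inf = sum(1 for _ in range(susceptible)
                      if rng.random() > escape)
        susceptible -= new_inf
        total += new_inf
        infective = new_inf      # previous infectives recover
    return total

rng = random.Random(3)
sizes = [reed_frost_final_size(100, 0.03, 1, rng) for _ in range(500)]
# the empirical final-size distribution is bimodal: minor outbreaks
# near the initial count and major outbreaks infecting most of the
# population, whose fluctuations the Gaussian limit theorem describes
```

The Gaussian limit in the paper concerns the major-outbreak mode of exactly this distribution, extended to multiple types and to epidemics on Bernoulli random graphs.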