Distributed Linear Parameter Estimation: Asymptotically Efficient Adaptive Strategies
The paper considers the problem of distributed adaptive linear parameter
estimation in multi-agent inference networks. Local sensing model information
is only partially available at the agents and inter-agent communication is
assumed to be unpredictable. The paper develops a generic mixed time-scale
stochastic procedure consisting of simultaneous distributed learning and
estimation, in which the agents adaptively assess their relative observation
quality over time and fuse the innovations accordingly. Under rather weak
assumptions on the statistical model and the inter-agent communication, it is
shown that, by properly tuning the consensus potential with respect to the
innovation potential, the asymptotic information rate loss incurred in the
learning process may be made negligible. As such, it is shown that the agent
estimates are asymptotically efficient, in that their asymptotic covariance
coincides with that of a centralized estimator (the inverse of the centralized
Fisher information rate for Gaussian systems) with perfect global model
information and having access to all observations at all times. The proof
techniques are mainly based on convergence arguments for non-Markovian mixed
time scale stochastic approximation procedures. Several approximation results
developed in the process are of independent interest.
Comment: Submitted to the SIAM Journal on Control and Optimization.
Initial Submission: Sept. 2011. Revised: Aug. 201
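The consensus + innovations structure described in the abstract above can be sketched in a few lines. Everything below (the ring network, the decaying gains beta_t and alpha_t, the unit-norm sensing rows, the noise level) is a hypothetical illustration of the general idea, not the paper's actual algorithm or tuning:

```python
import numpy as np

# Hypothetical sketch: each agent i sees a scalar observation y_i = h_i^T theta + noise
# and mixes a consensus term (toward neighbours) with an innovation term (toward data).
rng = np.random.default_rng(0)

n_agents, dim, T = 10, 2, 20000
theta = np.array([1.0, -2.0])                      # unknown parameter
H = rng.normal(size=(n_agents, dim))
H /= np.linalg.norm(H, axis=1, keepdims=True)      # one unit-norm sensing row per agent

# ring communication graph (adjacency matrix); illustrative choice
A = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    A[i, (i - 1) % n_agents] = A[i, (i + 1) % n_agents] = 1.0
deg = A.sum(axis=1, keepdims=True)

x = np.zeros((n_agents, dim))                      # agent estimates
for t in range(1, T + 1):
    beta = 0.3 / t ** 0.6                          # consensus potential (decays slower)
    alpha = 2.0 / t                                # innovation potential (decays faster)
    y = H @ theta + 0.1 * rng.normal(size=n_agents)        # noisy local observations
    consensus = A @ x - deg * x                    # sum over neighbours of (x_j - x_i)
    residual = y - (H * x).sum(axis=1)             # innovation y_i - h_i^T x_i
    x = x + beta * consensus + alpha * residual[:, None] * H
```

With the consensus gain decaying more slowly than the innovation gain, the agents stay in agreement while their common estimate is driven toward theta, which mirrors the tuning of the two potentials discussed in the abstract.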
Tuning Actions and Observables in Lattice QCD
We propose a strategy for conducting lattice QCD simulations at fixed volume
but variable quark mass so as to investigate the physical effects of dynamical
fermions. We present details of techniques which enable this to be carried out
effectively, namely the tuning in bare parameter space and efficient stochastic
estimation of the fermion determinant. Preliminary results and tests of the
method are presented. We discuss further possible applications of these
techniques.
Comment: 17 pages, 4 eps figures; affiliation correction in this header +
minor post-referee addition
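Stochastic estimation of determinant-like quantities, as mentioned above, is commonly done with noisy trace estimators. Below is a minimal Hutchinson-style sketch estimating Tr ln M (i.e. log det M) for a small symmetric positive-definite stand-in matrix; the matrix, probe count, and the explicit eigendecomposition of ln M are illustrative shortcuts (a production lattice code would typically apply a rational or Krylov approximation of the matrix function instead):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.normal(size=(n, n))
M = A @ A.T + n * np.eye(n)       # SPD stand-in for a fermion matrix

# exact log det via eigendecomposition, kept only for comparison
w, V = np.linalg.eigh(M)
exact = np.log(w).sum()
logM = (V * np.log(w)) @ V.T      # matrix logarithm (small-matrix shortcut)

# Hutchinson estimator: Tr ln M ~ average of z^T (ln M) z over random probes
n_probes = 2000
z = rng.choice([-1.0, 1.0], size=(n_probes, n))    # Rademacher noise vectors
est = np.einsum('pi,ij,pj->p', z, logM, z).mean()
```

Rademacher probes give an unbiased trace estimate whose variance depends only on the off-diagonal mass of ln M, which is why they are a popular default choice for this kind of stochastic estimation.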
Convergence Rate Analysis of Distributed Gossip (Linear Parameter) Estimation: Fundamental Limits and Tradeoffs
The paper considers gossip distributed estimation of a (static) distributed
random field (a.k.a. a large-scale unknown parameter vector) observed by
sparsely interconnected sensors, each of which only observes a small fraction
of the field. We consider linear distributed estimators whose structure
combines the information \emph{flow} among sensors (the \emph{consensus} term
resulting from the local gossiping exchange among sensors when they are able to
communicate) and the information \emph{gathering} measured by the sensors (the
\emph{sensing} or \emph{innovations} term). This leads to mixed time-scale
algorithms: one time scale associated with the consensus and the other with the
innovations. The paper establishes a distributed observability condition
(global observability plus mean connectedness) under which the distributed
estimates are consistent and asymptotically normal. We introduce the
distributed equivalent of the (centralized) Fisher information rate,
which is a bound on the mean square error reduction rate of any distributed
estimator; we show that under the appropriate modeling and structural network
communication conditions (gossip protocol) the distributed gossip estimator
attains this distributed Fisher information rate, asymptotically achieving the
performance of the optimal centralized estimator. Finally, we study the
behavior of the distributed gossip estimator when the measurements fade (noise
variance grows) with time; in particular, we consider the maximum rate at which
the noise variance can grow while the distributed estimator remains
consistent, and we show that, as long as the centralized estimator is
consistent, the distributed estimator remains consistent as well.
Comment: Submitted for publication, 30 pages
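The local gossiping exchange referred to above can be illustrated in isolation: in pairwise gossip, a randomly chosen pair of neighbouring agents repeatedly averages its values, which preserves the network-wide mean and drives every agent to it. The ring topology, uniform edge selection, and iteration count below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
x = rng.normal(size=n)            # each agent holds one local value
target = x.mean()                 # pairwise averaging preserves this global mean

# ring neighbours; at each tick one random edge gossips (pairwise average)
edges = [(i, (i + 1) % n) for i in range(n)]
for _ in range(20000):
    i, j = edges[rng.integers(len(edges))]
    x[i] = x[j] = 0.5 * (x[i] + x[j])
```

In the estimators discussed in the abstract, this kind of gossip exchange supplies the consensus term, while the sensing/innovations term keeps injecting fresh measurement information between exchanges.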
On the ergodicity properties of some adaptive MCMC algorithms
In this paper we study the ergodicity properties of some adaptive Markov
chain Monte Carlo algorithms (MCMC) that have been recently proposed in the
literature. We prove that under a set of verifiable conditions, ergodic
averages calculated from the output of a so-called adaptive MCMC sampler
converge to the required value and can even, under more stringent assumptions,
satisfy a central limit theorem. We prove that the conditions required are
satisfied for the independent Metropolis--Hastings algorithm and the random
walk Metropolis algorithm with symmetric increments. Finally, we propose an
application of these results to the case where the proposal distribution of the
Metropolis--Hastings update is a mixture of distributions from a curved
exponential family.
Comment: Published at http://dx.doi.org/10.1214/105051606000000286 in the
Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute
of Mathematical Statistics (http://www.imstat.org)
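A hedged sketch in the spirit of the abstract above: an adaptive random-walk Metropolis sampler whose proposal scale is tuned on the fly with a diminishing (Robbins-Monro) step, one standard way to obtain the vanishing adaptation that such ergodicity results typically require. The standard-normal target, the 0.44 target acceptance rate, and the t^{-0.6} step schedule are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    return -0.5 * x * x           # standard normal target, up to a constant

x, log_s = 0.0, 0.0               # current state and log proposal scale
samples = []
for t in range(1, 50001):
    prop = x + np.exp(log_s) * rng.normal()    # symmetric random-walk proposal
    accept = np.log(rng.uniform()) < log_target(prop) - log_target(x)
    if accept:
        x = prop
    # vanishing adaptation: nudge the scale toward ~0.44 acceptance,
    # with steps t^{-0.6} that shrink so the adaptation dies out
    log_s += t ** -0.6 * (float(accept) - 0.44)
    samples.append(x)

samples = np.asarray(samples[10000:])          # discard burn-in
```

Because the adaptation step sizes vanish, the transition kernel stabilizes over time, which is the kind of condition under which ergodic averages from adaptive MCMC converge to the correct values.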