
    Diffusion-Based Adaptive Distributed Detection: Steady-State Performance in the Slow Adaptation Regime

    This work examines the close interplay between cooperation and adaptation in distributed detection schemes over fully decentralized networks. The combined attributes of cooperation and adaptation are necessary to enable networks of detectors to continually learn from streaming data and to continually track drifts in the state of nature when deciding in favor of one hypothesis or another. The results in the paper establish a fundamental scaling law for the steady-state probabilities of miss-detection and false-alarm in the slow adaptation regime, where the agents interact with each other according to distributed strategies that employ small constant step-sizes. The latter are critical to enable continuous adaptation and learning. The work establishes three key results. First, it is shown that the output of the collaborative process at each agent has a steady-state distribution. Second, this distribution is asymptotically Gaussian in the slow adaptation regime of small step-sizes. Third, by carrying out a detailed large deviations analysis, closed-form expressions are derived for the decay rates of the false-alarm and miss-detection probabilities. Interesting insights are gained. In particular, it is verified that as the step-size μ decreases, the error probabilities are driven to zero exponentially fast as functions of 1/μ, and that the error exponents increase linearly in the number of agents. It is also verified that the scaling laws governing errors of detection and errors of estimation over networks behave very differently: the former decay exponentially, at a rate proportional to 1/μ, while the latter decay only linearly, in proportion to μ. It is shown that the cooperative strategy allows each agent to reach the same detection performance, in terms of detection error exponents, as a centralized stochastic-gradient solution. Comment: The paper will appear in IEEE Trans. Inf. Theory.
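    As a concrete illustration of the adapt-then-combine mechanism behind such diffusion strategies, the following minimal sketch (ours, not taken from the paper) runs a Gaussian shift-in-mean test: each agent keeps an exponentially weighted average of its local log-likelihood ratios, updated with a small step-size μ, and then averages its neighbors' intermediate statistics. The fully connected combination matrix A, the step-size, and the threshold are all illustrative assumptions.

```python
import numpy as np

# Hypothetical setup: N agents test H0: N(0,1) against H1: N(1,1) from streaming data.
rng = np.random.default_rng(0)
N, mu, T = 10, 0.01, 20000            # agents, small step-size, time steps
A = np.full((N, N), 1.0 / N)          # doubly stochastic combination weights (fully connected)
y = np.zeros(N)                       # per-agent detection statistics

for _ in range(T):
    x = rng.normal(1.0, 1.0, N)       # streaming observations, generated here under H1
    llr = x - 0.5                     # log-likelihood ratio of N(1,1) vs N(0,1)
    psi = (1 - mu) * y + mu * llr     # adaptation: exponentially weighted LLR average
    y = A @ psi                       # combination: average the neighbors' intermediates

print(y > 0.0)                        # each agent declares H1 when its statistic is positive
```

    Shrinking μ makes each statistic average over roughly 1/μ past samples, which is the intuition behind error probabilities that decay exponentially in 1/μ.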

    Joint Beamforming and Power Control in Coordinated Multicell: Max-Min Duality, Effective Network and Large System Transition

    This paper studies joint beamforming and power control in a coordinated multicell downlink system that serves multiple users per cell, with the goal of maximizing the minimum weighted signal-to-interference-plus-noise ratio (SINR). The optimal solution and a distributed algorithm with a geometrically fast convergence rate are derived by employing nonlinear Perron-Frobenius theory and multicell network duality. The iterative algorithm, though operating in a distributed manner, still requires instantaneous power updates within the coordinated cluster through the backhaul. This backhaul information exchange and message passing may become prohibitive as the numbers of transmit antennas and users grow. To derive an asymptotically optimal solution, random matrix theory is leveraged to design a distributed algorithm that requires only statistical information. The advantage of this approach is that no instantaneous power updates need to travel through the backhaul. Moreover, by using nonlinear Perron-Frobenius theory and random matrix theory, an effective primal network and an effective dual network are proposed to characterize and interpret the asymptotic solution. Comment: Some typos in the version published in the IEEE Transactions on Wireless Communications are corrected.
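    For intuition on how nonlinear Perron-Frobenius theory yields a geometrically converging max-min algorithm, here is a toy power-control-only sketch (fixed channels, no beamforming; all gains and the power budget are illustrative assumptions, not the paper's algorithm): the power vector is repeatedly mapped through the weighted interference function and renormalized to the power budget, and the weighted SINRs converge to a common max-min value.

```python
import numpy as np

# Toy max-min weighted SINR power control via a normalized fixed-point
# iteration (nonlinear Perron-Frobenius theory guarantees geometric convergence).
rng = np.random.default_rng(1)
K, P = 4, 10.0                                  # users, total power budget
G = rng.uniform(0.1, 0.5, (K, K))               # cross-link channel gains (illustrative)
np.fill_diagonal(G, rng.uniform(1.0, 2.0, K))   # stronger direct links
w = np.ones(K)                                  # SINR weights
noise = 0.1 * np.ones(K)
p = np.full(K, P / K)                           # uniform initial power allocation

for _ in range(100):
    interference = G @ p - np.diag(G) * p + noise   # interference plus noise per user
    f = w * interference / np.diag(G)               # weighted interference function
    p = P * f / f.sum()                             # renormalize to the power budget

sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + noise)
print(sinr / w)   # entries converge to a common value: the max-min weighted SINR
```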

    Uncertainty damping in kinetic traffic models by driver-assist controls

    In this paper, we propose a kinetic model of traffic flow with uncertain binary interactions, which explains the scattering of the fundamental diagram in terms of the macroscopic variability of aggregate quantities, such as the mean speed and the flux of the vehicles, produced by the microscopic uncertainty. Moreover, we design control strategies at the level of the microscopic interactions among the vehicles, by which we prove that it is possible to dampen the propagation of this uncertainty across the scales. Our analytical and numerical results suggest that the aggregate traffic flow may be made more ordered, hence more predictable, by implementing such control protocols in driver-assist vehicles. Remarkably, they also provide a precise relationship between a measure of the macroscopic damping of the uncertainty and the penetration rate of the driver-assist technology in the traffic stream.
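    The damping mechanism can be illustrated with a toy Nanbu-type Monte Carlo simulation (ours, not the paper's model): vehicles interact in random pairs, each relaxing toward a mix of its partner's speed and an uncertain free-flow speed θ, while a hypothetical driver-assist term κ(v_d − v) acts inside the same microscopic interaction. Increasing the control strength shrinks the variability of the steady-state mean speed across samples of θ.

```python
import numpy as np

rng = np.random.default_rng(2)
N, gamma, steps = 5000, 0.1, 300
v_d = 0.8   # speed recommended by the driver-assist control (illustrative)

def mean_speed(kappa, theta):
    """Toy binary-interaction Monte Carlo: each vehicle relaxes toward a mix of
    its partner's speed and an uncertain free-flow speed theta; the control
    term kappa * (v_d - v) acts at the same microscopic level."""
    v = rng.uniform(0.0, 1.0, N)
    for _ in range(steps):
        idx = rng.permutation(N)
        a, b = idx[: N // 2], idx[N // 2:]
        va, vb = v[a].copy(), v[b].copy()   # pre-interaction speeds of each pair
        v[a] += gamma * ((1 - kappa) * (0.5 * (vb + theta) - va) + kappa * (v_d - va))
        v[b] += gamma * ((1 - kappa) * (0.5 * (va + theta) - vb) + kappa * (v_d - vb))
    return v.mean()

thetas = rng.uniform(0.4, 1.0, 15)   # samples of the uncertain parameter
for kappa in (0.0, 0.5, 1.0):        # increasing control strength
    var = np.var([mean_speed(kappa, th) for th in thetas])
    print(f"kappa={kappa}: variance of mean speed across theta = {var:.2e}")
```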

    Efficient Sequential Monte-Carlo Samplers for Bayesian Inference

    In many problems, complex non-Gaussian and/or nonlinear models are required to accurately describe a physical system of interest. In such cases, Monte Carlo algorithms are remarkably flexible and extremely powerful approaches to the resulting inference problems. However, in the presence of a high-dimensional and/or multimodal posterior distribution, it is widely documented that standard Monte Carlo techniques can perform poorly. This paper focuses on the Sequential Monte Carlo (SMC) sampler framework, a more robust and efficient class of Monte Carlo algorithms. Although this approach presents many advantages over traditional Monte Carlo methods, the potential of this emergent technique remains largely underexploited in signal processing. In this work, we propose novel strategies that improve the efficiency and facilitate the practical implementation of the SMC sampler specifically for signal processing applications. First, we propose an automatic and adaptive strategy that selects the sequence of distributions within the SMC sampler so as to minimize the asymptotic variance of the estimator of the posterior normalization constant. This is critical for performing model selection in Bayesian signal processing applications. The second original contribution improves the global efficiency of the SMC sampler by introducing a novel correction mechanism that allows the use of the particles generated through all iterations of the algorithm, instead of only the particles from the last iteration. This is a significant contribution, as it removes the need to discard a large portion of the samples, as is standard in SMC methods, and it improves estimation performance in practical settings where the computational budget matters. Comment: arXiv admin note: text overlap with arXiv:1303.3123 by other authors.
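    To fix ideas, here is a minimal adaptive-tempering SMC sampler that estimates a normalizing constant. It uses the common ESS-halving rule to choose the next distribution in the sequence, which stands in for (and is simpler than) the variance-minimizing schedule proposed in the paper, and it omits the particle-recycling correction; the bimodal target and all tuning constants are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
Np = 2000                                                     # number of particles
log_q0 = lambda x: norm.logpdf(x, 0.0, 3.0)                   # easy initial distribution
log_gamma = lambda x: np.logaddexp(norm.logpdf(x, -2, 0.5),   # unnormalized bimodal target,
                                   norm.logpdf(x, 2, 0.5))    # true log Z = log 2
log_pi = lambda x, b: (1 - b) * log_q0(x) + b * log_gamma(x)  # geometric bridge

def ess(logw):
    """Effective sample size of a set of log-weights."""
    w = np.exp(logw - logw.max()); w /= w.sum()
    return 1.0 / np.sum(w ** 2)

x = rng.normal(0.0, 3.0, Np)          # start with exact samples from q0
beta, logZ = 0.0, 0.0
while beta < 1.0:
    delta = log_gamma(x) - log_q0(x)
    # Pick the largest temperature step keeping the ESS of the incremental
    # weights at Np/2 (bisection); jump straight to beta = 1 if possible.
    if ess((1.0 - beta) * delta) >= Np / 2:
        new_beta = 1.0
    else:
        lo, hi = beta, 1.0
        for _ in range(40):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if ess((mid - beta) * delta) >= Np / 2 else (lo, mid)
        new_beta = lo
    logw = (new_beta - beta) * delta
    logZ += logw.max() + np.log(np.mean(np.exp(logw - logw.max())))  # log-mean-exp update
    w = np.exp(logw - logw.max()); w /= w.sum()
    x = x[rng.choice(Np, Np, p=w)]                 # multinomial resampling
    prop = x + 0.5 * rng.normal(size=Np)           # one random-walk Metropolis move
    accept = np.log(rng.uniform(size=Np)) < log_pi(prop, new_beta) - log_pi(x, new_beta)
    x = np.where(accept, prop, x)
    beta = new_beta

print("log Z estimate:", logZ, "(true value: log 2 ≈ 0.693)")
```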