
    Diffusion-Based Adaptive Distributed Detection: Steady-State Performance in the Slow Adaptation Regime

    This work examines the close interplay between cooperation and adaptation for distributed detection schemes over fully decentralized networks. The combined attributes of cooperation and adaptation are necessary to enable networks of detectors to continually learn from streaming data and to continually track drifts in the state of nature when deciding in favor of one hypothesis or another. The results in the paper establish a fundamental scaling law for the steady-state probabilities of miss-detection and false-alarm in the slow adaptation regime, when the agents interact with each other according to distributed strategies that employ small constant step-sizes. The latter are critical to enable continuous adaptation and learning. The work establishes three key results. First, it is shown that the output of the collaborative process at each agent has a steady-state distribution. Second, it is shown that this distribution is asymptotically Gaussian in the slow adaptation regime of small step-sizes. And third, by carrying out a detailed large deviations analysis, closed-form expressions are derived for the decay rates of the false-alarm and miss-detection probabilities. Interesting insights are gained. In particular, it is verified that as the step-size μ decreases, the error probabilities are driven to zero exponentially fast as functions of 1/μ, and that the error exponents increase linearly in the number of agents. It is also verified that the scaling laws governing errors of detection and errors of estimation over networks behave very differently, with the former having an exponential decay proportional to 1/μ, while the latter scales linearly with decay proportional to μ. It is shown that the cooperative strategy allows each agent to reach the same detection performance, in terms of detection error exponents, as a centralized stochastic-gradient solution. Comment: The paper will appear in IEEE Trans. Inf. Theory.
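
    To make the mechanism above concrete, the following is a minimal sketch of a diffusion adapt-then-combine (ATC) update in which each agent smooths its local log-likelihood statistic with a small constant step-size μ and then combines with its neighbors; the network, combination weights, observation model, and threshold are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-agent network with a symmetric, doubly stochastic combination matrix.
A = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.6, 0.2],
              [0.2, 0.2, 0.6]])
N = A.shape[0]
mu = 0.05        # small constant step-size: enables continual adaptation
threshold = 0.0  # decision threshold on the diffused statistic
T = 2000         # number of streaming observations

# Assumed observation model: x ~ N(0, 1) under H0 and x ~ N(0.5, 1) under H1,
# so the local log-likelihood ratio of a sample x is 0.5 * x - 0.125.
def llr(x, shift=0.5):
    return shift * x - 0.5 * shift ** 2

y = np.zeros(N)          # each agent's running detection statistic
true_hypothesis = 1      # data generated under H1 in this example
for t in range(T):
    x = rng.normal(0.5 if true_hypothesis else 0.0, 1.0, size=N)
    psi = y + mu * (llr(x) - y)   # adapt: smooth the local statistic
    y = A.T @ psi                 # combine: diffuse with neighbors

decisions = (y > threshold).astype(int)
print("steady-state statistics:", np.round(y, 3), "decisions:", decisions)
```

    In this slow-adaptation regime, shrinking μ concentrates each agent's steady-state statistic around its mean, which is the setting in which the exponential decay of the error probabilities in 1/μ described above applies.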

    Adaptive Graph Signal Processing: Algorithms and Optimal Sampling Strategies

    The goal of this paper is to propose novel strategies for adaptive learning of signals defined over graphs, which are observed over a (randomly time-varying) subset of vertices. We recast two classical adaptive algorithms in the graph signal processing framework, namely, the least mean squares (LMS) and the recursive least squares (RLS) adaptive estimation strategies. For both methods, a detailed mean-square analysis illustrates the effect of random sampling on the adaptive reconstruction capability and the steady-state performance. Then, several probabilistic sampling strategies are proposed to design the sampling probability at each node in the graph, with the aim of optimizing the tradeoff between steady-state performance, graph sampling rate, and convergence rate of the adaptive algorithms. Finally, a distributed RLS strategy is derived and is shown to converge to its centralized counterpart. Numerical simulations carried out over both synthetic and real data illustrate the good performance of the proposed sampling and reconstruction strategies for (possibly distributed) adaptive learning of signals defined over graphs. Comment: Submitted to IEEE Transactions on Signal Processing, September 201
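
    As a rough illustration of the LMS strategy described above, the sketch below runs an LMS-style update for a bandlimited graph signal observed on a randomly sampled subset of vertices, with the instantaneous error projected onto the bandlimited subspace; the ring graph, bandwidth, sampling probabilities, and step-size are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small illustrative graph: ring of N nodes, combinatorial Laplacian.
N = 20
adj = np.zeros((N, N))
for i in range(N):
    adj[i, (i + 1) % N] = adj[(i + 1) % N, i] = 1
L = np.diag(adj.sum(axis=1)) - adj

# Graph Fourier basis and an assumed bandlimited true signal (first B eigenvectors).
eigvals, U = np.linalg.eigh(L)
B = 4
UB = U[:, :B]                      # bandlimiting subspace
P = UB @ UB.T                      # projector onto the bandlimited subspace
x_true = UB @ rng.normal(size=B)   # unknown bandlimited graph signal

p_sample = np.full(N, 0.4)         # per-node sampling probabilities (assumption)
mu = 0.5                           # LMS step-size
sigma = 0.1                        # observation noise standard deviation
x_hat = np.zeros(N)                # adaptive estimate of the graph signal

for t in range(3000):
    d = rng.random(N) < p_sample               # random sampling mask for this instant
    y = x_true + sigma * rng.normal(size=N)    # noisy observation on all nodes
    err = d * (y - x_hat)                      # error evaluated only on sampled nodes
    x_hat = x_hat + mu * (P @ err)             # project the update onto the bandlimited space

print("steady-state mean-square deviation:", np.mean((x_hat - x_true) ** 2))
```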

    Optimization and universality of Brownian search in quenched heterogeneous media

    The kinetics of a variety of transport-controlled processes can be reduced to the problem of determining the mean time needed to arrive at a given location for the first time, the so-called mean first passage time (MFPT) problem. Occasional large jumps or intermittent patterns combining various types of motion are known to outperform the standard random walk with respect to the MFPT, by reducing oversampling of space. Here we show that a regular but spatially heterogeneous random walk can significantly and universally enhance the search in any spatial dimension. In a generic minimal model we consider a spherically symmetric system comprising two concentric regions with piece-wise constant diffusivity. The MFPT is analyzed under the constraint of conserved average dynamics, that is, the spatially averaged diffusivity is kept constant. Our analytical calculations and extensive numerical simulations demonstrate the existence of an optimal heterogeneity minimizing the MFPT to the target. We prove that the MFPT for a random walk is completely dominated by what we term direct trajectories towards the target and reveal a remarkable universality of the spatially heterogeneous search with respect to target size and system dimensionality. In contrast to intermittent strategies, which are most profitable in low spatial dimensions, the spatially inhomogeneous search performs best in higher dimensions. Discussing our results alongside recent experiments on single particle tracking in living cells, we argue that the observed spatial heterogeneity may be beneficial for cellular signaling processes. Comment: 19 pages, 11 figures, RevTeX
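
    A minimal Monte Carlo sketch of the kind of setup described above, reduced to one dimension for brevity: a particle diffuses on [0, L] with piecewise constant diffusivity (an inner region around the target and an outer region), the target at x = 0 is absorbing, the outer wall is reflecting, and the spatially averaged diffusivity is held fixed while the inner diffusivity is varied. The numerical values and the simple Euler scheme (which ignores interface corrections for discontinuous diffusivity) are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

L = 1.0      # domain [0, L]: absorbing target at x = 0, reflecting wall at x = L
r = 0.3      # interface between inner region [0, r] and outer region [r, L]
D_avg = 1.0  # spatially averaged diffusivity, held constant
x0 = 0.8     # common starting position of the walkers

def mfpt(D_inner, n_walkers=500, dt=1e-4):
    """Estimate the mean first-passage time to x = 0 by direct simulation."""
    # Pick D_outer so the spatial average of the diffusivity stays equal to D_avg.
    D_outer = (D_avg * L - D_inner * r) / (L - r)
    x = np.full(n_walkers, x0)
    t = np.zeros(n_walkers)
    alive = np.ones(n_walkers, dtype=bool)
    while alive.any():
        D = np.where(x < r, D_inner, D_outer)
        x[alive] += np.sqrt(2.0 * D[alive] * dt) * rng.normal(size=alive.sum())
        x = np.where(x > L, 2.0 * L - x, x)  # reflect at the outer wall
        t[alive] += dt
        alive &= x > 0.0                     # absorb at the target
    return t.mean()

# Scan the inner diffusivity at fixed average diffusivity to look for an optimum.
for D_inner in (0.5, 1.0, 2.0):
    print(f"D_inner = {D_inner:.1f}  estimated MFPT = {mfpt(D_inner):.3f}")
```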

    Energy-Efficient Resource Management in Ultra Dense Small Cell Networks: A Mean-Field Approach

    In this paper, a novel approach for joint power control and user scheduling is proposed for optimizing energy efficiency (EE), in terms of bits per unit power, in ultra dense small cell networks (UDNs). To address this problem, a dynamic stochastic game (DSG) is formulated between small cell base stations (SBSs). This game captures the dynamics of both the queues and the channel states of the system. To solve this game, assuming a large homogeneous UDN deployment, the problem is cast as a mean field game (MFG) in which the MFG equilibrium is analyzed with the aid of two low-complexity, tractable partial differential equations. User scheduling is formulated as a stochastic optimization problem and solved using the drift plus penalty (DPP) approach in the framework of Lyapunov optimization. Remarkably, it is shown that by weaving together notions from Lyapunov optimization and mean field theory, the proposed solution yields an equilibrium control policy per SBS which maximizes the network utility while ensuring users' quality-of-service. Simulation results show that the proposed approach achieves up to 18.1% gains in EE and 98.2% reductions in the network's outage probability compared to a baseline model. Comment: 6 pages, 7 figures, GLOBECOM 2015 (published)
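
    The drift-plus-penalty step mentioned above can be sketched, very loosely, for a single SBS: in each slot the scheduler picks the user and transmit power that maximize queue-weighted service minus V times the energy cost, so that the parameter V trades queue stability against energy consumption. The rate model, candidate power levels, and parameter values below are illustrative assumptions and not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

n_users = 4
powers = np.array([0.1, 0.5, 1.0, 2.0])   # candidate transmit powers (assumption)
V = 10.0                                   # Lyapunov trade-off parameter
arrival_rate = 0.4                         # mean packet arrivals per slot per user
Q = np.zeros(n_users)                      # queue backlogs

for slot in range(5000):
    h = rng.exponential(1.0, size=n_users)            # per-user channel gains this slot
    # Achievable rate for each (user, power) pair (Shannon-like, unit noise power).
    rate = np.log2(1.0 + np.outer(h, powers))
    # Drift-plus-penalty score: queue-weighted service minus V times the energy cost.
    score = Q[:, None] * rate - V * powers[None, :]
    user, p_idx = np.unravel_index(np.argmax(score), score.shape)
    served = np.zeros(n_users)
    served[user] = rate[user, p_idx]
    arrivals = rng.poisson(arrival_rate, size=n_users)
    Q = np.maximum(Q - served, 0.0) + arrivals          # queue update

print("average backlog per user:", np.round(Q.mean(), 2))
```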

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters systems, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
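
    As a rough illustration of the task structure described above (many discrete tasks whose explicit input/output dependencies form a graph), the sketch below dispatches tasks to a worker pool as soon as their predecessors complete; the task bodies and dependency graph are placeholders, and real MTC middleware would additionally handle data staging and low-overhead dispatch at much larger scale.

```python
import concurrent.futures as cf

# Illustrative dependency graph: task name -> set of tasks it depends on.
deps = {
    "stage_in": set(),
    "split_a": {"stage_in"},
    "split_b": {"stage_in"},
    "analyze_a": {"split_a"},
    "analyze_b": {"split_b"},
    "merge": {"analyze_a", "analyze_b"},
}

def run_task(name):
    # Placeholder for a short, possibly I/O-heavy MTC task.
    return f"{name} done"

def execute(graph, max_workers=4):
    remaining = {t: set(d) for t, d in graph.items()}
    results = {}
    with cf.ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {}
        while remaining or futures:
            # Dispatch every task whose dependencies are all satisfied.
            ready = [t for t, d in remaining.items() if not d]
            for t in ready:
                futures[pool.submit(run_task, t)] = t
                del remaining[t]
            # Wait for at least one task to finish, then release its dependents.
            done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
            for fut in done:
                t = futures.pop(fut)
                results[t] = fut.result()
                for d in remaining.values():
                    d.discard(t)
    return results

print(execute(deps))
```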