
    Compressive Diffusion Strategies Over Distributed Networks for Reduced Communication Load

    We study compressive diffusion strategies over distributed networks based on the diffusion implementation and adaptive extraction of the information from the compressed diffusion data. We demonstrate that one can achieve performance comparable to the full information exchange configurations even if the diffused information is compressed into a scalar or a single bit. To this end, we provide a complete performance analysis for the compressive diffusion strategies. We analyze the transient, steady-state and tracking performance of the configurations in which the diffused data is compressed into a scalar or a single bit. We propose a new adaptive combination method that further improves the convergence performance of the compressive diffusion strategies. In the new method, we introduce an additional degree of freedom in the combination matrix and adapt it using the conventional mixture approach in order to enhance the convergence performance for any combination rule used in the full diffusion configuration. We demonstrate that our theoretical analysis closely follows the ensemble-averaged results in our simulations. We provide numerical examples showing the improved convergence performance of the new adaptive combination method. Comment: Submitted to IEEE Transactions on Signal Processing.
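    The single-bit exchange described above can be illustrated with a toy two-node loop: one node runs a standard LMS update and transmits only the sign of a random projection of the difference between its estimate and the neighbor's reconstruction, and the neighbor uses that bit to track the estimate. This is a minimal sketch under assumed step sizes and data model, not the paper's exact recursions.

```python
import numpy as np

# Minimal sketch of a single-bit compressive diffusion exchange between two
# nodes; the projection construction, step sizes, and data model below are
# illustrative assumptions, not the paper's exact recursions.

rng = np.random.default_rng(0)
M = 5                       # parameter dimension
mu, eta = 0.01, 0.05        # local adaptation / reconstruction step sizes
w_true = np.ones(M)         # unknown parameter to be estimated

w_k = np.zeros(M)           # local estimate at node k
w_hat = np.zeros(M)         # neighbor's running reconstruction of w_k

for t in range(2000):
    # local adaptation at node k (a standard LMS step on noisy observations)
    x = rng.standard_normal(M)
    d = x @ w_true + 0.1 * rng.standard_normal()
    w_k = w_k + mu * (d - x @ w_k) * x

    # compress the diffused information into a single bit via a random
    # projection of the residual between the estimate and its reconstruction
    p = rng.standard_normal(M)              # projection known to both nodes
    bit = np.sign(p @ (w_k - w_hat))        # the only quantity transmitted

    # neighbor adaptively extracts the full information from the single bit
    w_hat = w_hat + eta * bit * p
```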

    Single Bit and Reduced Dimension Diffusion Strategies Over Distributed Networks

    We introduce novel diffusion-based adaptive estimation strategies for distributed networks that have significantly less communication load and achieve performance comparable to the full information exchange configurations. After local estimates of the desired data are produced at each node, a single bit of information (or a reduced-dimensional data vector) is generated using certain random projections of the local estimates. This newly generated data is diffused and then used by neighboring nodes to recover the original full information. We provide the complete state-space description and the mean stability analysis of our algorithms. Comment: Submitted to the IEEE Signal Processing Letters.
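    A hedged sketch of the reduced-dimension variant follows: the transmitting node sends only a D-dimensional random projection of the residual between its estimate and the neighbor's reconstruction, and the neighbor iteratively recovers the full M-dimensional information. The shared projections, step size, and recovery rule are assumptions for illustration.

```python
import numpy as np

# Toy sketch of the reduced-dimension exchange: the local estimate is projected
# onto a low-dimensional vector before diffusion, and the receiving node
# adaptively recovers the full-dimensional information. Dimensions, step size,
# and the recovery rule are assumptions for exposition.

rng = np.random.default_rng(1)
M, D = 8, 2                     # full parameter dimension, reduced dimension
eta = 0.1                       # recovery step size

w_k = rng.standard_normal(M)    # local estimate to be diffused (held fixed here)
w_hat = np.zeros(M)             # reconstruction maintained by a neighboring node

for t in range(2000):
    P = rng.standard_normal((D, M))      # random projection shared by both nodes
    z = P @ (w_k - w_hat)                # D-dimensional message actually diffused
    w_hat = w_hat + (eta / D) * P.T @ z  # neighbor updates its reconstruction

print(np.linalg.norm(w_k - w_hat))       # reconstruction error shrinks over time
```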

    A Novel Family of Adaptive Filtering Algorithms Based on The Logarithmic Cost

    We introduce a novel family of adaptive filtering algorithms based on a relative logarithmic cost. The new family intrinsically combines higher- and lower-order measures of the error into a single continuous update based on the error amount. We introduce important members of this family, such as the least mean logarithmic square (LMLS) and least logarithmic absolute difference (LLAD) algorithms, which improve the convergence performance of the conventional algorithms. However, our approach and analysis are generic, covering other well-known cost functions as described in the paper. The LMLS algorithm achieves convergence performance comparable to the least mean fourth (LMF) algorithm and extends the stability bound on the step size. The LLAD and least mean square (LMS) algorithms demonstrate similar convergence performance in impulse-free noise environments, while the LLAD algorithm is robust against impulsive interference and outperforms the sign algorithm (SA). We analyze the transient, steady-state and tracking performance of the introduced algorithms and demonstrate the match between the theoretical analyses and simulation results. We show the extended stability bound of the LMLS algorithm and analyze the robustness of the LLAD algorithm against impulsive interference. Finally, we demonstrate the performance of our algorithms in different scenarios through numerical examples. Comment: Submitted to IEEE Transactions on Signal Processing.
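    The two named members can be written as simple stochastic-gradient updates. The sketch below assumes a relative logarithmic cost of the form $J(e) = F(e) - \frac{1}{\alpha}\ln(1 + \alpha F(e))$ with $\alpha = 1$ and $F(e) = e^2$ (LMLS) or $F(e) = |e|$ (LLAD), which yields LMS-like and sign-algorithm-like updates scaled by error-dependent factors; treat it as an illustration rather than the paper's exact statement.

```python
import numpy as np

# Illustrative LMLS and LLAD updates obtained from stochastic gradients of a
# relative logarithmic cost J(e) = F(e) - ln(1 + F(e)) with F(e) = e^2 (LMLS)
# or F(e) = |e| (LLAD); the scaling constants here are assumptions.

def lmls_step(w, x, d, mu=0.01):
    """LMLS: LMS-like update whose gradient is scaled by e^2 / (1 + e^2)."""
    e = d - x @ w
    return w + mu * (e**3 / (1.0 + e**2)) * x

def llad_step(w, x, d, mu=0.01):
    """LLAD: sign-algorithm-like update scaled by |e| / (1 + |e|), robust to outliers."""
    e = d - x @ w
    return w + mu * (e / (1.0 + abs(e))) * x

# toy usage: identify a 4-tap filter from noisy observations
rng = np.random.default_rng(2)
w_true, w = rng.standard_normal(4), np.zeros(4)
for t in range(5000):
    x = rng.standard_normal(4)
    d = x @ w_true + 0.05 * rng.standard_normal()
    w = lmls_step(w, x, d)
print(np.linalg.norm(w - w_true))   # estimation error after adaptation
```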

    On the Global Convergence of Stochastic Fictitious Play in Stochastic Games with Turn-based Controllers

    This paper presents a learning dynamic with an almost sure convergence guarantee for any stochastic game with turn-based controllers (on state transitions), as long as the stage payoffs have the stochastic fictitious-play property. For example, two-player zero-sum and n-player potential strategic-form games have this property. Note also that the stage payoffs for different states can have different structures: for example, they can sum to zero in some states and be identical in others. The presented dynamic combines classical stochastic fictitious play with value iteration for stochastic games. There are two key properties: (i) players play finite-horizon stochastic games of increasing length within the underlying infinite-horizon stochastic game, and (ii) the turn-based controllers ensure that the auxiliary stage games (induced from the estimated continuation payoffs) have the stochastic fictitious-play property.
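    As a rough illustration of how the two ingredients interact, the toy loop below runs smoothed (logit) fictitious play on a single-state, zero-sum stage game whose auxiliary payoff adds a discounted continuation-value estimate updated in a value-iteration style. The payoff matrix, temperature, and step sizes are assumptions, and a single state cannot exhibit the turn-based transition structure; the paper's dynamic operates on general turn-based stochastic games.

```python
import numpy as np

# Toy combination of stochastic fictitious play and a value-iteration-style
# continuation value on a single-state, two-player zero-sum game (row player
# maximizes). All constants and the specific smoothing are assumptions.

rng = np.random.default_rng(3)
A = 3                                   # actions per player
G = rng.standard_normal((A, A))         # stage payoff to the row player
beta, tau = 0.9, 0.1                    # discount factor, logit temperature
v = 0.0                                 # estimated continuation value

belief_col = np.ones(A) / A             # row player's belief about the column player
belief_row = np.ones(A) / A             # column player's belief about the row player

def logit(u, tau):
    z = np.exp((u - u.max()) / tau)     # numerically stable logit choice
    return z / z.sum()

for t in range(1, 5000):
    Q = G + beta * v                            # auxiliary stage game
    p_row = logit(Q @ belief_col, tau)          # smoothed best response (maximizer)
    p_col = logit(-(Q.T @ belief_row), tau)     # smoothed best response (minimizer)
    a = rng.choice(A, p=p_row)
    b = rng.choice(A, p=p_col)
    # fictitious-play belief updates and a value-iteration-style value update
    belief_col += (np.eye(A)[b] - belief_col) / t
    belief_row += (np.eye(A)[a] - belief_row) / t
    v += (Q[a, b] - v) / t
```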

    Stochastic Subgradient Algorithms for Strongly Convex Optimization over Distributed Networks

    We study diffusion- and consensus-based optimization of a sum of unknown convex objective functions over distributed networks. The only access to these functions is through stochastic gradient oracles, each of which is available only at a different node, and a limited number of gradient oracle calls is allowed at each node. In this framework, we introduce a convex optimization algorithm based on stochastic gradient descent (SGD) updates. In particular, we use a carefully designed time-dependent weighted averaging of the SGD iterates, which yields a convergence rate of $O\left(\frac{N\sqrt{N}}{T}\right)$ after $T$ gradient updates for each node on a network of $N$ nodes. We then show that after $T$ gradient oracle calls, the average SGD iterate achieves a mean square deviation (MSD) of $O\left(\frac{\sqrt{N}}{T}\right)$. This rate of convergence is optimal, as it matches the performance lower bound up to constant terms. Similar to the SGD algorithm, the computational complexity of the proposed algorithm also scales linearly with the dimensionality of the data. Furthermore, the communication load of the proposed method is the same as that of the SGD algorithm. Thus, the proposed algorithm is highly efficient in terms of complexity and communication load. We illustrate the merits of the algorithm with respect to state-of-the-art methods over benchmark real-life data sets and widely studied network topologies.
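    A minimal sketch of the main computational idea follows: each node combines its neighbors' iterates through a doubly stochastic matrix, takes a local SGD step with an $O(1/t)$ step size, and maintains a running average that weights iterate $t$ proportionally to $t$. The quadratic local objectives, combination matrix, and step-size constant below are assumptions for illustration.

```python
import numpy as np

# Sketch of diffusion SGD with time-dependent weighted averaging of the
# iterates; the local objectives, the combination matrix W, and the step-size
# schedule are illustrative assumptions.

rng = np.random.default_rng(4)
N, M, T = 4, 5, 2000                    # nodes, dimension, oracle calls per node
mu0 = 1.0                               # step-size constant (strongly convex case)
W = np.full((N, N), 1.0 / N)            # doubly stochastic combination matrix

targets = rng.standard_normal((N, M))   # node i minimizes 0.5 * ||w - targets[i]||^2
w = np.zeros((N, M))                    # current iterates, one row per node
w_bar = np.zeros((N, M))                # time-weighted running averages

for t in range(1, T + 1):
    grads = (w - targets) + 0.1 * rng.standard_normal((N, M))  # noisy oracles
    w = W @ w - (mu0 / t) * grads            # combine with neighbors, then adapt
    w_bar += (2.0 / (t + 1)) * (w - w_bar)   # average with weights proportional to t

# the weighted averages approach the minimizer of the sum, i.e. the mean target
print(np.linalg.norm(w_bar.mean(axis=0) - targets.mean(axis=0)))
```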

    Emerging Trends in Applied Mathematics

    This research explores the emerging trends in applied mathematics and their far-reaching implications in various fields. Machine learning and artificial intelligence are revolutionizing healthcare, finance, and natural language processing. Big data analysis is enhancing decision-making in finance, healthcare, and logistics. Quantum computing promises to transform materials science and renewable energy. These trends are reshaping research and practice, offering innovative solutions and opportunities for interdisciplinary collaboration. Ethical considerations and the development of advanced algorithms are critical areas of future research. This study serves as a foundation for understanding and harnessing the potential of these trends, emphasizing the importance of continuous exploration and skill development in an evolving mathematical landscape.