
    Decentralized Beamforming Design for Intelligent Reflecting Surface-enhanced Cell-free Networks

    Cell-free networks are considered a promising distributed network architecture for meeting the growing number of users and the high rate expectations of beyond-5G systems. However, further enhancing network capacity requires an increasing number of high-cost base stations (BSs). To address this problem, and inspired by the cost-effective intelligent reflecting surface (IRS) technique, we propose a fully decentralized design framework for cooperative beamforming in IRS-aided cell-free networks. We first transform the centralized weighted sum-rate maximization problem into a tractable consensus optimization problem, and then propose an incremental alternating direction method of multipliers (ADMM) algorithm to update the beamformers locally. The complexity and convergence of the proposed method are analyzed, showing that its performance asymptotically approaches that of the centralized scheme as the number of iterations increases. Results also show that IRSs can significantly increase the system sum-rate of cell-free networks and that the proposed method outperforms existing decentralized methods. Comment: 5 pages, 6 figures.
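    The core pattern behind such a decentralized design is consensus ADMM: each base station keeps a local copy of the shared beamforming variable, solves a small local subproblem, and only the consensus and dual updates are exchanged. The sketch below illustrates that update structure on a toy convex surrogate; the quadratic local objectives, the penalty rho, and the synchronous (rather than incremental) update order are assumptions for illustration, not the paper's actual weighted sum-rate model.

```python
# Minimal sketch of consensus ADMM, the generic pattern behind decentralized
# cooperative beamforming. Each BS i is given a toy quadratic surrogate
# f_i(x) = 0.5*||A_i x - b_i||^2 so the local step has a closed form.
# All symbols (A_i, b_i, rho) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_bs, dim, rho, iters = 4, 6, 1.0, 200

A = [rng.standard_normal((8, dim)) for _ in range(n_bs)]
b = [rng.standard_normal(8) for _ in range(n_bs)]

x = [np.zeros(dim) for _ in range(n_bs)]   # local beamformer copies
u = [np.zeros(dim) for _ in range(n_bs)]   # scaled dual variables
z = np.zeros(dim)                          # consensus variable

for _ in range(iters):
    # local update at each BS (needs only z and the local dual variable)
    for i in range(n_bs):
        lhs = A[i].T @ A[i] + rho * np.eye(dim)
        rhs = A[i].T @ b[i] + rho * (z - u[i])
        x[i] = np.linalg.solve(lhs, rhs)
    # consensus step: average of local copies plus duals
    z = np.mean([x[i] + u[i] for i in range(n_bs)], axis=0)
    # dual update
    for i in range(n_bs):
        u[i] += x[i] - z

print("consensus residual:", max(np.linalg.norm(x[i] - z) for i in range(n_bs)))
```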

    A Stochastic Resource-Sharing Network for Electric Vehicle Charging

    We consider a distribution grid used to charge electric vehicles (EVs) such that voltage drops stay bounded. We model this as a class of resource-sharing networks, known as bandwidth-sharing networks in the communication network literature. We focus on resource-sharing networks driven by a class of greedy control rules that can be implemented in a decentralized fashion. For a large number of such control rules, we can characterize the performance of the system by a fluid approximation, which leads to a set of dynamic equations that take into account the stochastic behavior of EVs. We show that the invariant point of these equations is unique and can be computed by solving a specific AC optimal power flow (ACOPF) problem, which admits an exact convex relaxation. We illustrate our findings with a case study of the SCE 47-bus network and several special cases that allow for explicit computations. Comment: 13 pages, 8 figures.
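    In the bandwidth-sharing view, each EV's charging rate is a flow competing for capacity on the line segments it uses, and a fair allocation can be posed as a small convex program. A minimal sketch, assuming a single radial feeder with fixed per-segment capacities and the cvxpy package, is given below; the paper's actual invariant point is characterized via an ACOPF with voltage-drop constraints rather than this simplified linear-capacity model.

```python
# Minimal sketch of a proportional-fair rate allocation on a toy feeder line:
# EV charging rates are the "flows", and each line segment they traverse has
# a shared capacity budget. The topology, capacities, and log-utility
# objective are illustrative assumptions, not the paper's ACOPF formulation.
import cvxpy as cp
import numpy as np

n_ev = 5
# routing matrix: segment j is used by every EV connected at or beyond node j
A = np.triu(np.ones((n_ev, n_ev)))
capacity = np.full(n_ev, 10.0)        # assumed per-segment capacity (kW)

rate = cp.Variable(n_ev, pos=True)    # charging rate of each EV
objective = cp.Maximize(cp.sum(cp.log(rate)))   # proportional fairness
constraints = [A @ rate <= capacity]
cp.Problem(objective, constraints).solve()

print("charging rates (kW):", np.round(rate.value, 3))
```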

    Privacy-Preserving Decentralized Optimization and Event Localization

    This dissertation considers decentralized optimization and its applications. On the one hand, we address privacy preservation for decentralized optimization, where N agents cooperatively minimize the sum of N convex functions private to these individual agents. In most existing decentralized optimization approaches, participating agents exchange and disclose their states explicitly, which may not be desirable when the states contain sensitive information of individual agents. The problem is more acute when adversaries exist that try to steal information from other participating agents. To address this issue, we first propose two privacy-preserving decentralized optimization approaches, based on ADMM (alternating direction method of multipliers) and the subgradient method respectively, by leveraging partially homomorphic cryptography. To our knowledge, this is the first time that cryptographic techniques have been incorporated in a fully decentralized setting to enable privacy preservation in decentralized optimization in the absence of any third party or aggregator. To facilitate the incorporation of encryption in a fully decentralized manner, we also introduce a new ADMM that allows time-varying penalty matrices and rigorously prove that it converges at a rate of O(1/t). However, given that encryption-based algorithms unavoidably incur extra computational and communication overhead in real-time optimization [61], we then propose another privacy solution for decentralized optimization, based on function decomposition and ADMM, which enables privacy without incurring a large communication/computational overhead.

    On the other hand, we address the application of decentralized optimization to the event localization problem, which plays a fundamental role in many wireless sensor network applications such as environmental monitoring, homeland security, medical treatment, and health care. The event localization problem is essentially non-convex and non-smooth. We address it in two ways. First, a completely decentralized solution based on augmented Lagrangian methods and ADMM is proposed to solve the non-smooth, non-convex problem directly, rather than resorting to conventional convex relaxation techniques. However, this algorithm requires the target event to lie within the convex hull of the deployed sensors. To remove this restriction, we propose two further scalable distributed algorithms based on ADMM and convex relaxation, which do not require the target event to be within the convex hull of the deployed sensors. Simulation results confirm the effectiveness of the proposed algorithms.
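    The encryption-based approaches rest on a partially (additively) homomorphic cryptosystem such as Paillier: an agent can aggregate its neighbours' contributions without ever seeing them in plaintext. A minimal sketch of that building block, assuming the python-paillier package (phe), is shown below; the dissertation's actual ADMM message structure and time-varying penalty matrices are not reproduced.

```python
# Minimal sketch of additively homomorphic aggregation with Paillier:
# ciphertexts can be added together and scaled by plaintext constants,
# so a weighted sum of private states can be formed without decryption.
# Assumes the python-paillier ("phe") package; parameters are illustrative.
from phe import paillier

# the aggregating agent generates a keypair and shares only the public key
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# neighbours encrypt their private local states before transmitting them
private_states = [0.7, -1.2, 3.4]
ciphertexts = [public_key.encrypt(s) for s in private_states]

# a party holding only ciphertexts can still form a weighted sum
weights = [0.5, 0.3, 0.2]
weighted = [c * w for c, w in zip(ciphertexts, weights)]
encrypted_sum = weighted[0]
for term in weighted[1:]:
    encrypted_sum = encrypted_sum + term

# only the key holder can decrypt the aggregated result
print("aggregate:", private_key.decrypt(encrypted_sum))
print("plaintext check:", sum(w * s for w, s in zip(weights, private_states)))
```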

    Distributed Learning over Unreliable Networks

    Most of today's distributed machine learning systems assume reliable networks: whenever two machines exchange information (e.g., gradients or models), the network should guarantee the delivery of the message. At the same time, recent work exhibits the impressive tolerance of machine learning algorithms to errors or noise arising from relaxed communication or synchronization. In this paper, we connect these two trends and consider the following question: can we design machine learning systems that are tolerant to network unreliability during training? With this motivation, we focus on a theoretical problem of independent interest: given a standard distributed parameter-server architecture, if every communication between a worker and a server has a non-zero probability p of being dropped, does there exist an algorithm that still converges, and at what speed? The technical contribution of this paper is a novel theoretical analysis proving that distributed learning over unreliable networks can achieve a convergence rate comparable to centralized or distributed learning over reliable networks. Further, we prove that the influence of the packet drop rate diminishes as the number of parameter servers grows. We map this theoretical result onto a real-world scenario, training deep neural networks over an unreliable network layer, and conduct network simulations to validate the system improvement obtained by allowing the networks to be unreliable.
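    The setting can be mimicked with a small simulation: partition a model across several parameter servers and drop each worker-to-server gradient block independently with probability p. The sketch below is such a toy experiment on linear regression; the data, the drop-handling rule (servers average whatever arrives and skip the update if nothing does), and all constants are illustrative assumptions, not the paper's exact protocol or analysis setting.

```python
# Minimal sketch of training under lossy worker-to-server links: a toy linear
# model is partitioned across several parameter servers, and every gradient
# block a worker sends is dropped independently with probability p_drop.
import numpy as np

rng = np.random.default_rng(1)
n_workers, n_servers, dim, p_drop, lr, steps = 8, 4, 16, 0.2, 0.1, 300

w_true = rng.standard_normal(dim)
X = [rng.standard_normal((64, dim)) for _ in range(n_workers)]
y = [Xi @ w_true for Xi in X]

w = np.zeros(dim)
blocks = np.array_split(np.arange(dim), n_servers)   # parameter partition

for _ in range(steps):
    # each worker computes a full-batch gradient on its local shard
    grads = [Xi.T @ (Xi @ w - yi) / len(yi) for Xi, yi in zip(X, y)]
    for block in blocks:                    # each server owns one block
        received = [g[block] for g in grads if rng.random() > p_drop]
        if received:                        # skip the update if all were lost
            w[block] -= lr * np.mean(received, axis=0)

print("distance to w_true:", np.linalg.norm(w - w_true))
```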

    Decentralized Federated Learning: Fundamentals, State-of-the-art, Frameworks, Trends, and Challenges

    In the last decade, Federated Learning (FL) has gained relevance for training collaborative models without sharing sensitive data. Since its birth, Centralized FL (CFL) has been the most common approach in the literature, where a central entity creates a global model. However, a centralized approach leads to increased latency due to bottlenecks, heightened vulnerability to system failures, and trustworthiness concerns regarding the entity responsible for creating the global model. Decentralized Federated Learning (DFL) emerged to address these concerns by promoting decentralized model aggregation and minimizing reliance on centralized architectures. However, despite the work done on DFL, the literature has not (i) studied the main aspects differentiating DFL and CFL; (ii) analyzed DFL frameworks to create and evaluate new solutions; or (iii) reviewed application scenarios using DFL. Thus, this article identifies and analyzes the main fundamentals of DFL in terms of federation architectures, topologies, communication mechanisms, security approaches, and key performance indicators. Additionally, it explores existing mechanisms to optimize critical DFL fundamentals. The most relevant features of current DFL frameworks are then reviewed and compared, and the most common DFL application scenarios are analyzed, identifying solutions based on the fundamentals and frameworks previously defined. Finally, the evolution of existing DFL solutions is studied to provide a list of trends, lessons learned, and open challenges.
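    The defining mechanism of DFL is server-free aggregation: each node trains locally and then mixes its model with its topology neighbours according to a mixing matrix. A minimal sketch of that pattern on a ring topology with Metropolis-style weights is given below; the model, data, and topology are illustrative assumptions and are not tied to any specific framework reviewed in the article.

```python
# Minimal sketch of decentralized federated aggregation: no central server,
# each node takes a local gradient step and then averages its model with its
# ring neighbours via a doubly stochastic mixing matrix. Everything here
# (model, data, topology, constants) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
n_nodes, dim, rounds, local_lr = 6, 10, 50, 0.1

w_true = rng.standard_normal(dim)
X = [rng.standard_normal((32, dim)) for _ in range(n_nodes)]
y = [Xi @ w_true for Xi in X]
models = [np.zeros(dim) for _ in range(n_nodes)]

# ring topology with equal mixing weight on self and both neighbours
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = W[i, (i - 1) % n_nodes] = W[i, (i + 1) % n_nodes] = 1 / 3

for _ in range(rounds):
    # local training step at every node (one gradient step on local data)
    for i in range(n_nodes):
        grad = X[i].T @ (X[i] @ models[i] - y[i]) / len(y[i])
        models[i] = models[i] - local_lr * grad
    # decentralized aggregation: each node averages with its ring neighbours
    models = list(W @ np.stack(models))

spread = max(np.linalg.norm(m - models[0]) for m in models)
print("node disagreement:", spread, "| error:", np.linalg.norm(models[0] - w_true))
```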