1,980 research outputs found

    Distributed Maximum Likelihood Sensor Network Localization

    We propose a class of convex relaxations to solve the sensor network localization problem, based on a maximum likelihood (ML) formulation. This class, as well as the tightness of the relaxations, depends on the noise probability density function (PDF) of the collected measurements. We derive a computationally efficient edge-based version of this ML convex relaxation class and design a distributed algorithm that enables the sensor nodes to solve these edge-based convex programs locally by communicating only with their close neighbors. This algorithm relies on the alternating direction method of multipliers (ADMM); it converges to the centralized solution, can run asynchronously, and is resilient to computation errors. Finally, we compare our proposed distributed scheme with other available methods, both analytically and numerically, and we argue for the added value of ADMM, especially for large-scale networks.
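
    To make the edge-based ADMM idea concrete, the sketch below runs a decentralized consensus iteration in which each node minimizes a simple quadratic local cost and exchanges iterates only with its neighbors. It is a minimal illustration under assumed names and parameters (a ring topology, penalty rho, scalar measurements a), not the paper's actual ML convex relaxation or its asynchronous, error-resilient variant.

        import numpy as np

        # Toy decentralized consensus via edge-based ADMM: each node i holds a
        # private scalar a[i] and all nodes agree on the network-wide average by
        # exchanging iterates with their neighbors only. All names and values
        # below are illustrative assumptions.
        rho, n_iter = 1.0, 200
        a = np.array([1.0, 4.0, 2.0, 7.0, 5.0])          # local measurements
        neighbors = {0: [1, 4], 1: [0, 2], 2: [1, 3],    # ring topology
                     3: [2, 4], 4: [3, 0]}

        x = a.copy()                  # local primal variables
        alpha = np.zeros_like(a)      # accumulated edge duals, one per node

        for _ in range(n_iter):
            x_old = x.copy()
            for i, nbrs in neighbors.items():
                # closed-form x-update for the local cost f_i(x) = 0.5 * (x - a_i)^2
                s = sum(x_old[i] + x_old[j] for j in nbrs)
                x[i] = (a[i] - alpha[i] + 0.5 * rho * s) / (1.0 + rho * len(nbrs))
            for i, nbrs in neighbors.items():
                # dual ascent on the edge consensus constraints x_i = x_j
                alpha[i] += 0.5 * rho * sum(x[i] - x[j] for j in nbrs)

        print(x, a.mean())            # every x[i] approaches the global average

    For a connected graph, every local iterate approaches the minimizer of the sum of the local costs (here, the average of the measurements), which mirrors how an edge-based decomposition lets nodes reach the centralized solution through neighbor-only communication.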

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.
    Comment: 46 pages, 22 figures

    On the Convergence of Alternating Direction Lagrangian Methods for Nonconvex Structured Optimization Problems

    Nonconvex and structured optimization problems arise in many engineering applications that demand scalable and distributed solution methods. The study of the convergence properties of these methods is in general difficult due to the nonconvexity of the problem. In this paper, two distributed solution methods that combine the fast convergence properties of augmented Lagrangian-based methods with the separability properties of alternating optimization are investigated. The first method is adapted from the classic quadratic penalty function method and is called the Alternating Direction Penalty Method (ADPM). Unlike the original quadratic penalty function method, in which single-step optimizations are adopted, ADPM uses alternating optimization, which in turn makes it scalable. The second method is the well-known Alternating Direction Method of Multipliers (ADMM). It is shown that ADPM for nonconvex problems asymptotically converges to a primal feasible point under mild conditions, and an additional condition ensuring that it asymptotically reaches the standard first-order necessary conditions for local optimality is introduced. In the case of ADMM, novel sufficient conditions under which the algorithm asymptotically reaches the standard first-order necessary conditions are established. Based on this, complete convergence of ADMM for a class of low-dimensional problems is characterized. Finally, the results are illustrated by applying ADPM and ADMM to a nonconvex localization problem in wireless sensor networks.
    Comment: 13 pages, 6 figures
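
    The contrast between the two methods can be seen on a toy nonconvex split problem: minimizing a quadratic subject to the variable lying on the unit sphere, a nonconvex set. The sketch below is a minimal illustration under assumed data (the vector c, penalty schedules, iteration counts), not the wireless-sensor-network localization problem studied in the paper; ADPM alternates penalty minimizations while growing the penalty weight, whereas ADMM keeps the penalty fixed and adds a multiplier update.

        import numpy as np

        # Toy nonconvex split problem (illustrative only):
        #   minimize ||x - c||^2  subject to  x = z,  with z on the unit sphere.
        c = np.array([2.0, 1.0, -0.5])
        proj = lambda v: v / np.linalg.norm(v)     # projection onto the unit sphere

        # --- ADPM-style run: no multipliers, penalty rho increased each sweep ---
        x, z, rho = c.copy(), np.array([0.0, 0.0, 1.0]), 1.0
        for _ in range(100):
            x = (2 * c + rho * z) / (2 + rho)      # argmin_x f(x) + (rho/2)||x - z||^2
            z = proj(x)                            # argmin over z: project onto the sphere
            rho *= 1.05                            # growing penalty drives x = z to feasibility
        x_adpm = x

        # --- ADMM run: fixed penalty plus scaled dual u enforcing x = z ---
        x, z, u = c.copy(), np.array([0.0, 0.0, 1.0]), np.zeros_like(c)
        rho = 5.0                                  # fixed penalty, chosen large for stability here
        for _ in range(100):
            x = (2 * c + rho * (z - u)) / (2 + rho)   # x-update (quadratic, closed form)
            z = proj(x + u)                           # z-update (nonconvex projection)
            u = u + x - z                             # dual ascent on the coupling constraint
        x_admm = x

        print(x_adpm, x_admm, proj(c))             # both runs should approach c / ||c||

    Both runs should approach c / ||c||, the constrained minimizer. The multiplier in ADMM avoids having to drive the penalty parameter to infinity, whereas the ADPM-style run relies on the increasing penalty to reach primal feasibility.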