
    Globally Optimal Beamforming Design for Integrated Sensing and Communication Systems

    In this paper, we propose a multi-input multi-output (MIMO) beamforming transmit optimization model for joint radar sensing and multi-user communications, where the design of the beamformers is formulated as an optimization problem whose objective is a weighted combination of the sum rate and the Cram\'{e}r-Rao bound (CRB), subject to a transmit power budget constraint. Obtaining a global solution to the formulated problem is challenging because the sum rate maximization (SRM) problem itself (even without the sensing metric) is known to be NP-hard. We propose an efficient global branch-and-bound algorithm for solving the formulated problem based on the McCormick envelope relaxation and the semidefinite relaxation (SDR) technique. The proposed algorithm is guaranteed to find the global solution of the considered problem, and thus serves as an important benchmark for evaluating the existing local or suboptimal algorithms for the same problem. Comment: 5 pages, 2 figures, submitted for possible publication
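    The McCormick envelope relaxation mentioned in the abstract replaces a bilinear term w = x·y with four linear inequalities derived from the box bounds on x and y. As a minimal sketch (the function name and the test box below are illustrative, not from the paper), the implied lower and upper envelopes at a point are:

```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Linear under-/over-estimators of the bilinear term w = x*y
    over the box [xl, xu] x [yl, yu]."""
    lower = max(xl*y + x*yl - xl*yl, xu*y + x*yu - xu*yu)
    upper = min(xu*y + x*yl - xu*yl, xl*y + x*yu - xl*yu)
    return lower, upper

# The envelope sandwiches the true product ...
lo, up = mccormick_bounds(0.5, 0.5, 0.0, 1.0, 0.0, 1.0)    # (0.0, 0.5), true value 0.25
# ... and tightens as branch-and-bound splits the box:
lo2, up2 = mccormick_bounds(0.5, 0.5, 0.0, 0.5, 0.0, 0.5)  # (0.25, 0.25), exact here
```

    This shrinking gap under branching is what lets a branch-and-bound scheme close in on the global optimum.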

    Accuracy, Efficiency, and Parallelism in Network Target Coordination Optimization

    The optimal design task of complex engineering systems requires knowledge in various domains. It is thus often split into smaller parts and assigned to different design teams with specialized backgrounds. Decomposition-based optimization is a multidisciplinary design optimization (MDO) technique that models and improves this process by partitioning the whole design optimization task into many manageable sub-problems. These sub-problems can be treated separately, and a coordination strategy is employed to coordinate their couplings and drive their individual solutions to a consistent overall optimum. Many methods have been proposed in the literature, applying mathematical theories from nonlinear programming to decomposition-based optimization and testing them on engineering problems. These methods include Analytical Target Cascading (ATC) using quadratic methods and Augmented Lagrangian Coordination (ALC) using augmented Lagrangian relaxation. The decomposition structure has also been expanded from the special hierarchical structure to the general network structure. However, accuracy, efficiency, and parallelism remain the focus of decomposition-based optimization research when dealing with complex problems, and more work is needed both to improve the existing methods and to develop new ones. In this research, a hybrid network partition is proposed in which additional sub-problems, either disciplines or components, can be added to a component or discipline network, respectively, and two hybrid test problems are formulated. The newly developed consensus optimization method is applied to these test problems and shows good performance. For the ALC method, when the problem partition is given, various alternative structures are analyzed and compared through numerical tests. A new theory of the dual residual, based on the Karush-Kuhn-Tucker (KKT) conditions, is developed, which leads to a new flexible weight update strategy for both centralized and distributed ALC. Numerical tests show that optimization accuracy is greatly improved by considering the dual residual in the iteration process. Furthermore, ALC using the new update is able to converge to a good solution starting from various initial weights, while the traditional update fails to guide the optimization to a reasonable solution when the initial weight is outside a narrow range. Finally, a new coordination method is developed in this research by utilizing both the ordinary Lagrangian duality theorem and the alternating direction method of multipliers (ADMM). Unlike the methods in the literature, which employ duality theorems just once, the proposed method uses duality theorems twice, and the resulting algorithm can optimize all sub-problems in parallel while requiring the fewest copies of linking variables. Numerical tests show that the new method consistently reaches more accurate solutions and consumes fewer computational resources than another popular parallel method, the centralized ALC.
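    The coordination machinery the abstract refers to (augmented Lagrangian relaxation, primal and dual residuals, parallel sub-problem solves) can be illustrated on a toy consensus problem. The sketch below is generic consensus ADMM, not the thesis's ALC or double-duality method; all names and values are illustrative:

```python
def admm_consensus(a, rho=1.0, iters=300):
    """Consensus ADMM for  min sum_i (x_i - a[i])^2  s.t.  x_i = z for all i.
    The optimum is z* = mean(a)."""
    n = len(a)
    x = [0.0] * n   # local primal variables (one per sub-problem)
    u = [0.0] * n   # scaled dual variables
    z = 0.0         # shared consensus variable
    z_old = z
    for _ in range(iters):
        z_old = z
        # Local updates: closed-form minimizers, solvable in parallel.
        x = [(2*a[i] + rho*(z - u[i])) / (2 + rho) for i in range(n)]
        z = sum(x[i] + u[i] for i in range(n)) / n
        u = [u[i] + x[i] - z for i in range(n)]
    primal_res = max(abs(x[i] - z) for i in range(n))  # consistency violation
    dual_res = rho * abs(z - z_old)                    # movement of the coordinator
    return z, primal_res, dual_res
```

    The primal residual measures disagreement between sub-problems and the consensus variable, while the dual residual tracks how much the coordination variable is still moving; monitoring both is the idea behind the dual-residual-based weight updates discussed above.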

    On Degeneracy Issues in Multi-parametric Programming and Critical Region Exploration based Distributed Optimization in Smart Grid Operations

    Improving renewable energy resource utilization efficiency is crucial to reducing carbon emissions, and multi-parametric programming has provided a systematic perspective for conducting analysis and optimization toward this goal in smart grid operations. This paper focuses on two aspects of multi-parametric linear/quadratic programming (mpLP/QP). First, we study degeneracy issues in mpLP/QP. A novel approach to dealing with degeneracies is proposed to find all critical regions containing a given parameter. Our method leverages properties of the multi-parametric linear complementarity problem, a vertex searching technique, and complementary basis enumeration. Second, an improved critical region exploration (CRE) method to solve distributed LP/QP is proposed under a general mpLP/QP-based formulation. The improved CRE incorporates the proposed approach to handle degeneracies. A cutting plane update and an adaptive stepsize scheme are also integrated to accelerate convergence under different problem settings. The computational efficiency is verified on multi-area tie-line scheduling problems with various testing benchmarks and initial states.
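    To make "critical region" concrete, consider a one-parameter toy mp-QP (this example is mine, not from the paper): minimize (x − θ)² subject to 0 ≤ x ≤ 1. The optimizer is piecewise affine in θ, and each affine piece corresponds to one critical region, i.e. one fixed active set:

```python
def solve_mpqp(theta):
    """Toy mp-QP:  min_x (x - theta)^2  s.t.  0 <= x <= 1, with parameter theta.
    Returns (x*, critical-region label)."""
    if theta < 0.0:
        return 0.0, "CR1"    # lower bound active: x* = 0
    if theta > 1.0:
        return 1.0, "CR3"    # upper bound active: x* = 1
    return theta, "CR2"      # no constraint active: x* = theta
```

    At the region boundaries θ = 0 and θ = 1 a constraint is weakly active and neighboring regions meet; this is exactly the kind of degeneracy the paper's approach addresses by finding all critical regions containing a given parameter.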

    A Low-Complexity Approach to Distributed Cooperative Caching with Geographic Constraints

    We consider caching in cellular networks in which each base station is equipped with a cache that can store a limited number of files. The popularity of the files is known, and the goal is to place files in the caches such that the probability that a user at an arbitrary location in the plane finds the file she requires in one of the covering caches is maximized. We develop distributed asynchronous algorithms for deciding which contents to store in which cache. Such cooperative algorithms require communication only between caches with overlapping coverage areas and can operate in an asynchronous manner. The development of the algorithms rests principally on the observation that the problem can be viewed as a potential game. Our basic algorithm is derived from the best response dynamics. We demonstrate that the complexity of each best response step is independent of the number of files, linear in the cache capacity, and linear in the maximum number of base stations that cover a certain area. Then, we show that the overall algorithm complexity for a discrete cache placement is polynomial in both network size and catalog size. In practical examples, the algorithm converges in just a few iterations. Also, in most cases of interest, the basic algorithm finds the best Nash equilibrium, corresponding to the global optimum. We provide two extensions of our basic algorithm, based on stochastic and deterministic simulated annealing, which find the global optimum. Finally, we demonstrate the hit probability evolution on real and synthetic networks numerically and show that our distributed caching algorithm performs significantly better than storing the most popular content, the probabilistic content placement policy, and Multi-LRU caching policies. Comment: 24 pages, 9 figures, presented at SIGMETRICS'1
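    The best response dynamics can be sketched on a tiny instance (two caches holding one file each, three coverage regions; all numbers are hypothetical, not from the paper). Each cache in turn switches to the file that maximizes the overall hit probability given the other caches' contents; because the hit probability acts as a potential function, the loop terminates at a Nash equilibrium:

```python
popularity = {"A": 0.5, "B": 0.3, "C": 0.2}
# Coverage regions: (probability a user falls here, caches covering it).
regions = [(0.2, {0}), (0.2, {1}), (0.6, {0, 1})]

def hit_prob(placement):
    """Probability that a random user finds her requested file in a covering cache."""
    total = 0.0
    for w, covers in regions:
        stored = {placement[i] for i in covers}
        total += w * sum(popularity[f] for f in stored)
    return total

def best_response(placement, i):
    """File choice for cache i that maximizes the potential, others fixed."""
    best_f, best_v = placement[i], hit_prob(placement)
    for f in popularity:
        trial = dict(placement); trial[i] = f
        if hit_prob(trial) > best_v + 1e-12:
            best_f, best_v = f, hit_prob(trial)
    return best_f

placement = {0: "A", 1: "A"}   # start by duplicating the most popular file
changed = True
while changed:                 # best-response sweeps until no cache wants to deviate
    changed = False
    for i in placement:
        f = best_response(placement, i)
        if f != placement[i]:
            placement[i] = f; changed = True
```

    Here the overlapping region is large, so the equilibrium stores two distinct files rather than duplicating the most popular one, which illustrates why cooperative placement beats the most-popular-content policy.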

    Independent Learning in Constrained Markov Potential Games

    Constrained Markov games offer a formal mathematical framework for modeling multi-agent reinforcement learning problems where the behavior of the agents is subject to constraints. In this work, we focus on the recently introduced class of constrained Markov Potential Games. While centralized algorithms have been proposed for solving such constrained games, the design of convergent independent learning algorithms tailored to the constrained setting remains an open question. We propose an independent policy gradient algorithm for learning approximate constrained Nash equilibria: each agent observes their own actions and rewards, along with a shared state. Inspired by the optimization literature, our algorithm performs proximal-point-like updates augmented with a regularized constraint set. Each proximal step is solved inexactly using a stochastic switching gradient algorithm. Notably, our algorithm can be implemented independently, without a centralized coordination mechanism or turn-based agent updates. Under some technical constraint qualification conditions, we establish convergence guarantees toward constrained approximate Nash equilibria. We perform simulations to illustrate our results. Comment: AISTATS 202
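    Stripped of the game-theoretic machinery, the switching gradient idea can be sketched deterministically on a scalar constrained problem (this example is mine, not from the paper): step along the constraint gradient while infeasible, and along the objective gradient otherwise:

```python
def switching_subgradient(x0=5.0, eta=0.01, iters=5000, tol=1e-3):
    """min f(x) = (x - 3)^2   s.t.   g(x) = x - 1 <= 0   (optimum: x* = 1)."""
    x = x0
    for _ in range(iters):
        if x - 1.0 > tol:
            x -= eta * 1.0              # infeasible: step along -grad g, g'(x) = 1
        else:
            x -= eta * 2.0 * (x - 3.0)  # feasible (to tolerance): step along -grad f
    return x
```

    In practice the iterates hover around the constraint boundary within the tolerance; averaging iterates gives a cleaner approximate solution, and the paper's version uses stochastic gradient estimates inside each inexact proximal step.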

    Decentralized Proximal Method of Multipliers for Convex Optimization with Coupled Constraints

    In this paper, a decentralized proximal method of multipliers (DPMM) is proposed to solve constrained convex optimization problems over multi-agent networks, where the local objective of each agent is a general closed convex function, and the constraints are coupled equalities and inequalities. This algorithm strategically integrates the dual decomposition method and the proximal point algorithm. One advantage of DPMM is that subproblems can be solved inexactly and in parallel by agents at each iteration, which relaxes the restriction of requiring exact solutions to subproblems in many distributed constrained optimization algorithms. We show that the first-order optimality residual of the proposed algorithm decays to 0 at a rate of o(1/k) under general convexity. Furthermore, if a structural assumption for the considered optimization problem is satisfied, the sequence generated by DPMM converges linearly to an optimal solution. In numerical simulations, we compare DPMM with several existing algorithms using two examples to demonstrate its effectiveness.
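    As a hedged illustration of a proximal method of multipliers on a coupled equality constraint (this toy is not the paper's DPMM; it is a Jacobi-style variant with closed-form sub-problems, and all values are mine), consider min x₁² + x₂² subject to x₁ + x₂ = 1:

```python
def pmm_toy(iters=60, rho=1.0):
    """min x1^2 + x2^2  s.t.  x1 + x2 = 1   (optimum: x1 = x2 = 0.5, y = -1).
    Each x-update minimizes its augmented Lagrangian term plus a proximal
    term (rho/2)*(xi - xi_k)^2, with the other variable held at its last value."""
    x1 = x2 = 0.0
    y = 0.0  # multiplier on the coupled constraint
    for _ in range(iters):
        # Both sub-problems use only shared information -> solvable in parallel.
        nx1 = (rho*(1 - x2) + rho*x1 - y) / (2 + 2*rho)
        nx2 = (rho*(1 - x1) + rho*x2 - y) / (2 + 2*rho)
        x1, x2 = nx1, nx2
        y += rho * (x1 + x2 - 1)  # multiplier update on the constraint residual
    return x1, x2, y
```

    The proximal term keeps the parallel (Jacobi) updates stable; in the actual algorithm these sub-problems may additionally be solved inexactly, which is the relaxation the abstract highlights.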

    A Discrete-Continuous Algorithm for Globally Optimal Free Flight Trajectory Optimization

    This thesis introduces the novel hybrid algorithm DisCOptER for globally optimal flight planning. DisCOptER (Discrete-Continuous Optimization for Enhanced Resolution) combines discrete and continuous optimization in a two-stage approach to find optimal trajectories up to arbitrary precision in finite time. In the discrete phase, a directed auxiliary graph is created in order to define a set of candidate paths that densely covers the relevant part of the trajectory space. Then, Yen’s algorithm is employed to identify a set of promising candidate paths. These are used as starting points for the subsequent stage, in which they are refined with a locally convergent optimal control method. The correctness, accuracy, and complexity of DisCOptER are intricately linked to the choice of the switch-over point, defined by the discretization coarseness. Only a sufficiently dense graph enables the algorithm to find a path within the convex domain surrounding the global minimizer. Initialized with such a path, the second stage rapidly converges to the optimum. Conversely, an excessively dense graph poses the risk of overly costly and redundant computations. The determination of the optimal switch-over point necessitates a profound understanding of the local behavior of the problem, the approximation properties of the graph, and the convergence characteristics of the employed optimal control method. These topics are explored extensively in this thesis. Crucially, the density of the auxiliary graph depends solely on the environmental conditions, yet is independent of the desired solution accuracy. As a consequence, the algorithm inherits the superior asymptotic convergence properties of the optimal control stage. The practical implications of this computational efficiency are demonstrated in realistic environments, where the DisCOptER algorithm consistently delivers highly accurate globally optimal trajectories with exceptional computational efficiency.
This notable improvement upon existing approaches underscores the algorithm’s significance. Beyond its technical prowess, the DisCOptER algorithm stands as a valuable tool contributing to the reduction of costs and the overall enhancement of flight operations efficiency.
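    The two-stage discrete-continuous idea can be sketched on a one-dimensional stand-in (the objective and all parameters here are invented for illustration; the real algorithm searches a trajectory space with Yen’s algorithm and optimal control): a coarse candidate grid first locates the basin of the global minimizer, then a locally convergent method refines it:

```python
import math

def f(x):  return math.sin(3*x) + 0.1*x*x     # multimodal toy objective
def df(x): return 3*math.cos(3*x) + 0.2*x     # its derivative

def two_stage_minimize(lo=-3.0, hi=3.0, spacing=0.5, eta=0.05, iters=300):
    # Stage 1 (discrete): a coarse candidate set densely covering the domain.
    n = int((hi - lo) / spacing)
    candidates = [lo + i*spacing for i in range(n + 1)]
    x = min(candidates, key=f)          # best candidate seeds the refinement
    # Stage 2 (continuous): locally convergent gradient descent from that seed.
    for _ in range(iters):
        x -= eta * df(x)
    return x
```

    As in DisCOptER, the grid spacing only needs to be fine enough to land one candidate in the basin of the global minimizer; the final accuracy comes entirely from the continuous stage, so the grid density is independent of the requested solution precision.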

    Transmission Expansion Planning by Quantum Annealing

    The transmission expansion planning problem (TEP) can be formulated as a mixed-integer linear programming (MILP) problem that aims at finding the optimal way to expand the capacity of an energy system. The solution provides the optimal layout of transmission lines to be built in order to satisfy the energy demand on a distributed energy system with a high share of renewable energy sources. The TEP scales poorly with classical algorithms, while at the same time energy system models are getting larger and more complex due to the integration of decentralized, weather-dependent renewable energy sources, sector coupling, and the increase of storage components. Currently, the problem is often linearized, or the scope and granularity of the model are reduced using clustering algorithms. For this reason, any computational time reduction will have substantial implications for closing the granularity gap between what current models can solve and the resolution needed by energy system operators. Quantum annealers are single-purpose quantum computers specialized in solving combinatorial optimization problems. Since quantum computers are still not sufficiently mature, large problems cannot be tackled purely with a quantum computer. We propose a decomposition protocol for the TEP, similar in spirit to the Benders decomposition algorithm, that allows us to use a hybrid quantum-classical approach to tackle bigger problems by providing the binary master problem to a quantum annealer and a set of slave sub-problems to classical solvers. Our method can therefore take advantage of cutting-edge classical algorithms and current quantum annealers. The ultimate goal is to find solutions that are closer to the optimum while achieving a speed-up.
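    The binary master problem can be sketched as a QUBO with a quadratic penalty enforcing the demand constraint. All capacities, costs, and the penalty weight below are invented, and exhaustive enumeration stands in for the quantum annealer (practical only because this example has three binary variables):

```python
from itertools import product

caps   = [3, 2, 2]        # capacity added by each candidate line (hypothetical)
costs  = [1.0, 0.6, 0.7]  # build cost of each candidate line (hypothetical)
demand = 4
lam    = 10.0             # penalty weight turning the constraint into QUBO form

def energy(x):
    """QUBO-style objective: build cost plus quadratic penalty on missing capacity."""
    shortfall = demand - sum(c*xi for c, xi in zip(caps, x))
    return sum(c*xi for c, xi in zip(costs, x)) + lam * shortfall**2

# Brute-force enumeration of all 2^3 build decisions, standing in for the annealer.
best = min(product([0, 1], repeat=3), key=energy)
```

    Note that the squared penalty also punishes surplus capacity; real formulations of the ≥-type demand constraint add slack bits before handing the coefficient matrix to the annealer, and the classical slave sub-problems would then check network feasibility and return cuts to this master.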