A subgradient method based on gradient sampling for solving convex optimization problems
2015-2016 > Academic research: refereed > Publication in refereed journal
A Utility Proportional Fairness Radio Resource Block Allocation in Cellular Networks
This paper presents a radio resource block allocation optimization problem
for cellular communication systems in which users run delay-tolerant and
real-time applications, generating elastic and inelastic traffic on the
network, modelled by logarithmic and sigmoidal utility functions
respectively. The optimization is cast in a utility proportional fairness
framework that maximizes the overall system utility while allocating
resource blocks to users with attention to application quality-of-service
requirements and to the temporal and computational efficiency of the
procedure. Finally, the sensitivity of the proposed method to resource
variations is investigated.
Nonsmooth Optimization; Proceedings of an IIASA Workshop, March 28 - April 8, 1977
Optimization, a central methodological tool of systems analysis, is used in many of IIASA's research areas, including the Energy Systems and Food and Agriculture Programs. IIASA's activity in the field of optimization is strongly connected with nonsmooth or nondifferentiable extreme problems, which consist of searching for conditional or unconditional minima of functions that, due to their complicated internal structure, have no continuous derivatives. Particularly significant for these kinds of extreme problems in systems analysis is the strong link between nonsmooth or nondifferentiable optimization and the decomposition approach to large-scale programming.
This volume contains the report of the IIASA workshop held from March 28 to April 8, 1977, entitled Nondifferentiable Optimization. However, the title was changed to Nonsmooth Optimization for publication of this volume as we are concerned not only with optimization without derivatives, but also with problems having functions for which gradients exist almost everywhere but are not continuous, so that the usual gradient-based methods fail.
Because of the small number of participants and the unusual length of the workshop, a substantial exchange of information was possible. As a result, details of the main developments in nonsmooth optimization are summarized in this volume, which might also be considered a guide for inexperienced users. Eight papers are presented: three on subgradient optimization, four on descent methods, and one on applicability. The report also includes a set of nonsmooth optimization test problems and a comprehensive bibliography.
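The failure mode described above, gradients that exist almost everywhere but are not continuous, can be seen already on f(x) = |x|: plain gradient descent with a fixed step oscillates around the kink, while the subgradient method with diminishing steps converges. A minimal sketch (an illustration, not taken from the proceedings):

```python
def subgradient_min_abs(x0=5.0, iters=200):
    """Minimize f(x) = |x| with the subgradient method and steps t_k = 1/k.

    f is convex but not differentiable at 0; g = sign(x) is a subgradient.
    Iterates need not decrease monotonically, so we track the best value.
    """
    x, best = x0, abs(x0)
    for k in range(1, iters + 1):
        g = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)  # a subgradient of |x|
        x -= (1.0 / k) * g        # diminishing, non-summable step sizes
        best = min(best, abs(x))
    return best

print(subgradient_min_abs())
```

The diminishing, non-summable step rule is what rescues convergence here; any fixed step leaves the iterate bouncing between two points straddling the minimizer.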
Kontinuierliche Optimierung und Industrieanwendungen (Continuous Optimization and Industrial Applications)
[no abstract available]
Rate analysis of inexact dual first order methods: Application to distributed MPC for network systems
In this paper we propose and analyze two dual methods based on inexact
gradient information and averaging that generate approximate primal solutions
for smooth convex optimization problems. The complicating constraints are moved
into the cost using the Lagrange multipliers. The dual problem is solved by
inexact first order methods based on approximate gradients and we prove
sublinear rate of convergence for these methods. In particular, we provide, for
the first time, estimates on the primal feasibility violation and primal and
dual suboptimality of the generated approximate primal and dual solutions.
Moreover, we approximately solve the inner problems with a parallel coordinate
descent algorithm and show that it has a linear convergence rate. In our
analysis we rely on the Lipschitz property of the dual function and inexact
dual gradients. Further, we apply these methods to distributed model predictive
control for network systems. By tightening the complicating constraints we are
also able to ensure the primal feasibility of the approximate solutions
generated by the proposed algorithms. We obtain a distributed control strategy
that has the following features: state and input constraints are satisfied,
stability of the plant is guaranteed, whilst the number of iterations for the
suboptimal solution can be precisely determined.
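The dual scheme described above can be illustrated on a toy instance (a sketch under invented data, not the authors' method or analysis): two quadratic subproblems coupled by one complicating constraint, inner problems solved inexactly by a few damped gradient steps, a dual ascent step using the resulting inexact gradient, and primal averaging to recover an approximate primal solution.

```python
# Toy instance (assumed, not from the paper): two agents with quadratic
# costs f_i(x) = 0.5 * a_i * (x - b_i)**2 coupled by x1 + x2 = c.
a = [2.0, 4.0]
b = [3.0, 1.0]
c = 2.0

def inner_inexact(i, lam, steps=5, x0=0.0):
    """Approximately minimize f_i(x) + lam*x by a few damped gradient steps."""
    x = x0
    for _ in range(steps):
        grad = a[i] * (x - b[i]) + lam
        x -= 0.5 * grad / a[i]    # damped step: deliberately inexact inner solve
    return x

def inexact_dual_method(iters=200):
    L = sum(1.0 / ai for ai in a)   # Lipschitz constant of the dual gradient
    t = 1.0 / L
    lam, avg = 0.0, [0.0, 0.0]
    for k in range(1, iters + 1):
        x = [inner_inexact(i, lam) for i in range(2)]
        g = x[0] + x[1] - c          # inexact dual gradient = constraint residual
        lam += t * g                 # dual ascent on the Lagrange multiplier
        avg = [((k - 1) * av + xi) / k for av, xi in zip(avg, x)]  # averaging
    return lam, avg

lam, xavg = inexact_dual_method()
print(lam, xavg, xavg[0] + xavg[1] - c)
```

Note that the averaged primal iterates become nearly feasible for the complicating constraint, while the inexact inner solves leave a small, quantifiable bias in the multiplier, which is exactly the kind of suboptimality the abstract's estimates bound.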
On the Computational Efficiency of Subgradient Methods: a Case Study with Lagrangian Bounds
Subgradient methods (SM) have long been the preferred way to solve the large-scale Nondifferentiable Optimization problems arising from the solution of Lagrangian Duals (LD) of Integer Programs (IP). Although other methods can have better convergence rates in practice, SM have certain advantages that may make them competitive under the right conditions. Furthermore, SM have significantly progressed in recent years, and new versions have been proposed with better theoretical and practical performance in some applications. We computationally evaluate a large class of SM in order to assess whether these improvements carry over to the IP setting. For this we build a unified scheme that covers many of the SM proposed in the literature, including some often-overlooked features such as projection and dynamic generation of variables. We fine-tune the many algorithmic parameters of the resulting large class of SM, and we test them on two different Lagrangian duals of the Fixed-Charge Multicommodity Capacitated Network Design problem, in order to assess the impact of the characteristics of the problem on the optimal algorithmic choices. Our results show that, if extensive tuning is performed, SM can be competitive with more sophisticated approaches when the required solution tolerance is not too tight, which is the case when solving LDs of IPs.
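The basic setting can be sketched on a toy knapsack instance (made-up data, with a plain diminishing-step subgradient update rather than any of the tuned variants the paper evaluates): relaxing the capacity constraint makes the inner maximization separable over items, and the constraint residual is a subgradient of the piecewise-linear Lagrangian bound.

```python
c = [10.0, 7.0, 5.0, 3.0]   # item profits (toy instance, not from the paper)
w = [6.0, 5.0, 4.0, 2.0]    # item weights
W = 9.0                     # knapsack capacity (the relaxed constraint)

def lagrangian(lam):
    """Evaluate L(lam) = max_x sum_i (c_i - lam*w_i) x_i + lam*W, x in {0,1}^n.

    The inner maximization separates per item, so it is solved by inspection;
    g = W - sum_i w_i x_i is a subgradient of the convex piecewise-linear L.
    """
    x = [1 if ci - lam * wi > 0 else 0 for ci, wi in zip(c, w)]
    val = sum((ci - lam * wi) * xi for ci, wi, xi in zip(c, w, x)) + lam * W
    g = W - sum(wi * xi for wi, xi in zip(w, x))
    return val, g

def subgradient_dual(iters=500):
    """Minimize L over lam >= 0 to get the best Lagrangian (upper) bound."""
    lam, best = 0.0, float("inf")
    for k in range(1, iters + 1):
        val, g = lagrangian(lam)
        best = min(best, val)                 # best dual bound found so far
        lam = max(0.0, lam - (2.0 / k) * g)   # diminishing step, projected
    return best

print(subgradient_dual())
```

On this instance the dual bound settles at 14.4 while the integer optimum is 13, showing the duality gap that makes the quality of the bound, and the effort spent computing it, worth measuring carefully.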