
    Generalized Bundle Methods

    We study a class of generalized bundle methods for which the stabilizing term can be any closed convex function satisfying certain properties. This setting covers several algorithms from the literature that have so far been regarded as distinct. Under different hypotheses on the stabilizing term and/or the function to be minimized, we prove finite termination, asymptotic convergence, and finite convergence to an optimal point, with or without limits on the number of serious steps and/or requiring the proximal parameter to go to infinity. The convergence proofs leave a high degree of freedom in the crucial implementation features of the algorithm, i.e., the management of the bundle of subgradients (β-strategy) and of the proximal parameter (t-strategy). We extensively exploit a dual view of bundle methods, which are shown to be a dual ascent approach to one nonlinear problem in an appropriate dual space, where nonlinear subproblems are approximately solved at each step with an inner linearization approach. This allows us to precisely characterize the changes in the subproblems during the serious steps, since the dual problem is not tied to the local concept of ε-subdifferential. For some of the proofs, a generalization of inf-compactness, called *-compactness, is required; this concept is related to that of asymptotically well-behaved functions.
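
    As a rough illustration of the framework (with notation chosen here rather than taken from the paper), each iteration of a generalized bundle method minimizes a cutting-plane model of the objective f plus a generic closed convex stabilizing term D_t centered at the current stability center \bar{x}:

        \min_{x} \ \hat{f}_{B}(x) + D_t(x - \bar{x}), \qquad \hat{f}_{B}(x) = \max_{i \in B} \{ f(x_i) + \langle g_i, x - x_i \rangle \}

    Choosing D_t(d) = \|d\|^2 / (2t) recovers the classical proximal bundle method, while choosing D_t as the indicator function of a ball around \bar{x} gives a trust-region variant of the same scheme.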

    Standard Bundle Methods: Untrusted Models and Duality

    We review the basic ideas underlying the vast family of algorithms for nonsmooth convex optimization known as "bundle methods". In a nutshell, these approaches are based on constructing models of the function, but the lack of continuity of first-order information implies that these models cannot be trusted, not even close to an optimum. Therefore, many different forms of stabilization have been proposed to try to avoid being led to areas where the model is so inaccurate as to result in almost useless steps. In the development of these methods, duality arguments are useful, if not outright necessary, to better analyze the behaviour of the algorithms. Also, in many relevant applications the function at hand is itself a dual one, so that duality allows one to map algorithmic concepts and results back into a "primal space" where they can be exploited; in turn, structure in that space can be exploited to improve the algorithms' behaviour, e.g. by developing better models. We present an updated picture of the many developments around the basic idea along at least three different axes: form of the stabilization, form of the model, and approximate evaluation of the function.
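
    For concreteness (standard textbook forms, with notation introduced here), the three most common stabilizations of the cutting-plane model \hat{f}_{B} around the stability center \bar{x} are

        \min \{ \hat{f}_{B}(x) : \|x - \bar{x}\| \le \delta \}                 (trust region)
        \min_{x} \ \hat{f}_{B}(x) + \tfrac{1}{2t} \|x - \bar{x}\|^2            (proximal)
        \min \{ \|x - \bar{x}\|^2 : \hat{f}_{B}(x) \le \ell \}                 (level)

    all of which prevent the next iterate from straying into regions where the model is too inaccurate to be trusted.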

    Stabilized Benders methods for large-scale combinatorial optimization, with application to data privacy

    The Cell Suppression Problem (CSP) is a challenging Mixed-Integer Linear Problem arising in statistical tabular data protection. Medium-sized instances of CSP involve thousands of binary variables and millions of continuous variables and constraints. However, CSP has the typical structure that allows application of the renowned Benders’ decomposition method: once the “complicating” binary variables are fixed, the problem decomposes into a large set of linear subproblems on the “easy” continuous ones. This makes it possible to project away the easy variables, reducing the problem to a master problem in the complicating ones, where the value functions of the subproblems are approximated with the standard cutting-plane approach. Hence, Benders’ decomposition suffers from the same drawbacks as the cutting-plane method, i.e., oscillation and slow convergence, compounded with the fact that the master problem is combinatorial. To overcome this drawback we present a stabilized Benders decomposition whose master is restricted to a neighborhood of successful candidates by local branching constraints, which are dynamically adjusted, and even dropped, during the iterations. Our experiments with randomly generated and real-world CSP instances with up to 3600 binary variables, 90M continuous variables and 15M inequality constraints show that our approach is competitive with both the current state-of-the-art (cutting-plane-based) code for cell suppression and the Benders implementation in CPLEX 12.7. In some instances, stabilized Benders is able to provide a very good solution in less than one minute, while the other approaches were not able to find any feasible solution within one hour.
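
    As an illustration of the stabilization device (notation introduced here), a local branching constraint restricts the Benders master problem to a Hamming ball of radius k around the incumbent binary vector \bar{y}:

        \sum_{i : \bar{y}_i = 1} (1 - y_i) + \sum_{i : \bar{y}_i = 0} y_i \le k

    The radius k can be enlarged when the neighborhood is exhausted, and the constraint can eventually be dropped, which is what keeps the overall scheme exact.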

    Bicriteria data compression

    The advent of massive datasets (and the consequent design of high-performing distributed storage systems) has reignited the interest of the scientific and engineering community in the design of lossless data compressors which achieve effective compression ratio and very efficient decompression speed. Lempel-Ziv's LZ77 algorithm is the de facto choice in this scenario because of its decompression speed and its flexibility in trading decompression speed versus compressed-space efficiency. Each of the existing implementations offers a trade-off between space occupancy and decompression speed, so software engineers have to content themselves with picking the one which comes closest to the requirements of the application at hand. Starting from these premises, and for the first time in the literature, we address in this paper the problem of trading optimally, and in a principled way, the consumption of these two resources by introducing the Bicriteria LZ77-Parsing problem, which formalizes what data compressors have traditionally approached by means of heuristics. The goal is to determine an LZ77 parsing which minimizes the space occupancy in bits of the compressed file, provided that the decompression time is bounded by a fixed amount (or vice versa). This way, the software engineer can set the space (or time) requirements and then derive the LZ77 parsing which optimizes the decompression speed (or the space occupancy, respectively). We solve this problem efficiently, in O(n log^2 n) time and optimal linear space within a small additive approximation, by proving and deploying some specific structural properties of the weighted graph derived from the possible LZ77 parsings of the input file. A preliminary set of experiments shows that our novel proposal dominates all the highly engineered competitors, hence offering a win-win situation in theory and practice.
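
    Schematically (notation chosen here), every LZ77 parsing corresponds to a path from node 1 to node n+1 in a DAG whose nodes are the positions of the input file and whose edge (i, j) represents a phrase covering positions i..j-1, with a space cost s(i,j) (bits of its encoding) and a time cost t(i,j) (cost of decompressing it). The Bicriteria LZ77-Parsing problem is then a resource-constrained shortest path over that graph:

        \min_{\pi} \sum_{(i,j) \in \pi} s(i,j)  \quad \text{s.t.} \quad  \sum_{(i,j) \in \pi} t(i,j) \le T

    (or the symmetric problem with the roles of space and time swapped).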

    Incremental bundle methods using upper models

    We propose a family of proximal bundle methods for minimizing sum-structured convex nondifferentiable functions which require two slightly uncommon assumptions that are satisfied in many relevant applications: Lipschitz continuity of the functions, and oracles which also produce upper estimates on the function values. In exchange, the methods: i) use upper models of the functions that make it possible to estimate function values at points where the oracle has not been called; ii) provide the oracles with more information about when the function computation can be interrupted, possibly diminishing their cost; iii) allow oracle calls to be skipped entirely for some of the component functions, not only at "null steps" but also at "serious steps"; iv) provide explicit and reliable a-posteriori estimates of the quality of the obtained solutions; v) work with all possible combinations of different assumptions on how the oracles deal with not being able to compute the function with arbitrary accuracy. We also discuss the introduction of constraints (or, more generally, of easy components) and the use of (partly) aggregated models.
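
    To illustrate assumption i) (a sketch with notation introduced here, not necessarily the paper's exact construction), if each component f_j is Lipschitz with constant L_j and the oracle returns upper estimates \bar{f}_j(x_i) \ge f_j(x_i) at the points x_i where it has been called, then

        f_j(x) \le \min_{i} \{ \bar{f}_j(x_i) + L_j \|x - x_i\| \}

    gives an upper model of f_j that is valid even at points where the oracle has never been evaluated, which is what makes it possible to skip some oracle calls also at serious steps.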

    Dynamic smoothness parameter for fast gradient methods

    We present and computationally evaluate a variant of the fast gradient method by Nesterov that is capable of exploiting information, even if approximate, about the optimal value of the problem. This information is available in some applications, such as the computation of bounds for hard integer programs. We show that dynamically changing the smoothness parameter of the algorithm using this information results in a better convergence profile of the algorithm in practice.
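
    As a rough sketch of the idea (notation introduced here; the actual update rule in the paper may differ), in Nesterov's smoothing scheme the nonsmooth objective is replaced by a smooth approximation f_\mu whose parameter \mu is classically fixed from the target accuracy \varepsilon, e.g. \mu = \varepsilon / (2D) with D a bound on the prox-function over the dual set. When an estimate \hat{f}^* of the optimal value is available, the current gap can play the role of \varepsilon, suggesting updates of the form

        \mu_k = \frac{f(x_k) - \hat{f}^*}{2D}

    so that the smoothing becomes finer as the iterates approach the optimum.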

    Approximate optimality conditions and stopping criteria in canonical DC programming

    In this paper, we study approximate optimality conditions for the Canonical DC (CDC) optimization problem and their relationships with stopping criteria for a large class of solution algorithms for the problem. In fact, global optimality conditions for CDC are very often restated in terms of a non-convex optimization problem, which has to be solved each time the optimality of a given tentative solution has to be checked. Since this is in principle a costly task, it makes sense to only solve the problem approximately, leading to inexact stopping criteria and therefore to approximate optimality conditions. In this framework, it is important to study the relationships between the approximation in the stopping criteria and the quality of the solutions that the corresponding approximate optimality conditions may eventually accept as optimal, in order to ensure that a small tolerance in the stopping criteria does not lead to a disproportionately large approximation of the optimal value of the CDC problem. We develop conditions ensuring that this is the case; these turn out to be closely related to the well-known concept of regularity of a CDC problem, actually coinciding with the latter if the reverse-constraint set is a polyhedron.
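
    For context (a standard way of writing the problem, with notation chosen here), a Canonical DC program has the form

        \min \{ c^\top x : x \in \Omega, \ x \notin \mathrm{int}\, D \}

    with \Omega and D closed convex sets. Global optimality of a tentative solution \bar{x} is typically certified by checking an inclusion of the type \Omega \cap \{ x : c^\top x \le c^\top \bar{x} \} \subseteq D, which amounts to maximizing a convex function over a convex set, i.e., the non-convex subproblem that the stopping criteria discussed here solve only approximately.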

    Start-up/Shut-down MINLP Formulations for the Unit Commitment with Ramp Constraints

    In [1] the first exact MIP formulation was provided that describes the convex hull of the solutions satisfying all the standard operational constraints for thermal units: minimum up- and down-time, minimum and maximum power output, ramp (including start-up and shut-down) limits, general history-dependent start-up costs, and nonlinear convex power production costs. That formulation contains a polynomial, but large, number of variables and constraints. We present two new formulations with fewer variables defined on the shut-down period, and computationally test the trade-off between reduced size and possibly weaker bounds.
    [1] Bacci, T., Frangioni, A., Gentile, C., Tavlaridis-Gyparakis, K.: New MINLP formulations for the single-unit commitment problems with ramping constraints. http://www.optimization-online.org/DB_HTML/2019/10/7426.html, submitted (2019).
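
    For reference (a textbook-style sketch with notation introduced here, not the convex-hull formulation of [1]), with commitment, start-up and shut-down binaries u_t, v_t, w_t and power output p_t, the operational constraints being modelled include

        P^{\min} u_t \le p_t \le P^{\max} u_t
        p_t - p_{t-1} \le R^{U} u_{t-1} + S^{U} v_t        (ramp-up / start-up limit)
        p_{t-1} - p_t \le R^{D} u_t + S^{D} w_t            (ramp-down / shut-down limit)
        u_t - u_{t-1} = v_t - w_t

    to which minimum up/down times, history-dependent start-up costs and a convex power production cost are added.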

    Delay-constrained Routing Problems: Accurate Scheduling Models and Admission Control

    As shown in [1], the problem of routing a flow subject to a worst-case end-to-end delay constraint in a packet-based network can be formulated as a Mixed-Integer Second-Order Cone Program, and solved with general-purpose tools in real time on realistic instances. However, that result only holds for one particular class of packet schedulers, Strictly Rate-Proportional ones, and implicitly considers each link to be fully loaded, so that the reserved rate of a flow coincides with its guaranteed rate. These assumptions make latency expressions simpler and enforce perfect isolation between flows, i.e., admitting a new flow cannot increase the delay of existing ones. Other commonplace schedulers both yield more complex latency formulæ and do not enforce flow isolation. Furthermore, the delay actually depends on the guaranteed rate of the flow, which can be significantly larger than the reserved rate if the network is unloaded. In this paper we extend the result to other classes of schedulers and to a more accurate representation of the latency, showing that, even when admission control needs to be factored in, the problem is still efficiently solvable for realistic instances, provided that the right modeling choices are made. Keywords: routing problems, maximum delay constraints, scheduling algorithms, admission control, Second-Order Cone Programs, Perspective Reformulation.
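
    Schematically (notation introduced here; the exact latency formulæ for each scheduler class are derived in the paper), with routing variables x_{ij} \in \{0,1\} and a reserved rate r for the flow, latency-rate schedulers yield worst-case end-to-end delay bounds of the form

        \frac{\sigma}{r} + \sum_{(i,j)} x_{ij} \left( \frac{L}{r} + \frac{L}{c_{ij}} \right) \le \delta

    where \sigma is the burst size, L the maximum packet length and c_{ij} the link capacities. Terms of the form x_{ij}/r, with x_{ij} binary and r a decision variable, are what make second-order cone and perspective reformulations the natural modelling tools.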