
    Disciplined Quasiconvex Programming

    We present a composition rule involving quasiconvex functions that generalizes the classical composition rule for convex functions. This rule complements well-known rules for the curvature of quasiconvex functions under increasing functions and pointwise maximums. We refer to the class of optimization problems generated by these rules, along with a base set of quasiconvex and quasiconcave functions, as disciplined quasiconvex programs. Disciplined quasiconvex programming generalizes disciplined convex programming, the class of optimization problems targeted by most modern domain-specific languages for convex optimization. We describe an implementation of disciplined quasiconvex programming that makes it possible to specify and solve quasiconvex programs in CVXPY 1.0.
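
    As a brief illustration of what the abstract describes, the sketch below specifies and solves a small quasiconvex program. It assumes CVXPY's DQCP entry points (`Problem.is_dqcp()` and `solve(qcp=True)`); the toy problem itself is illustrative and not taken from the paper.

```python
import cvxpy as cp

# A toy quasiconvex program: minimize -sqrt(x)/y, which is quasiconvex,
# subject to a convex constraint. DQCP verifies the ruleset compliance.
x = cp.Variable()
y = cp.Variable(pos=True)
problem = cp.Problem(cp.Minimize(-cp.sqrt(x) / y), [cp.exp(x) <= y])

assert problem.is_dqcp()        # checked against the DQCP composition rules
problem.solve(qcp=True)         # solved by bisection over convex feasibility problems
print(problem.value, x.value, y.value)
```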

    Evolutionary Optimization for Decision Making under Uncertainty

    Decision problems under uncertainty can be tackled with a variety of solution methods; soft computing and heuristic approaches tend to be powerful for such problems. In this overview article, we survey Evolutionary Optimization techniques for solving Stochastic Programming problems, in both the single-stage and the multi-stage case.
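
    The survey is methodological, but the basic recipe it covers can be sketched generically: estimate the expected cost of a candidate by sampling scenarios, and evolve a population against that noisy fitness. The toy two-stage cost, the sampling scheme, and the (mu + lambda) strategy below are illustrative choices, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(3)

def stochastic_cost(x, xi):
    # toy two-stage flavour: a first-stage cost plus a recourse penalty
    # driven by the random demand xi
    return 0.5 * float(x @ x) + 3.0 * max(xi - x.sum(), 0.0)

def fitness(x, n_scenarios=200):
    # sample-average approximation of the expected cost
    scenarios = rng.normal(5.0, 1.0, size=n_scenarios)
    return np.mean([stochastic_cost(x, xi) for xi in scenarios])

# a simple (mu + lambda) evolution strategy on the noisy fitness
dim, mu, lam, sigma = 3, 5, 20, 0.3
pop = rng.normal(size=(mu, dim))
for _ in range(100):
    parents = pop[rng.integers(mu, size=lam)]
    offspring = parents + sigma * rng.normal(size=(lam, dim))
    union = np.vstack([pop, offspring])
    scores = np.array([fitness(x) for x in union])
    pop = union[np.argsort(scores)[:mu]]        # keep the mu best

print("best solution found:", pop[0])
```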

    Column generation based math-heuristic for classification trees

    This paper explores the use of Column Generation (CG) techniques in constructing univariate binary decision trees for classification tasks. We propose a novel Integer Linear Programming (ILP) formulation based on root-to-leaf paths in decision trees. The model is solved via a Column Generation based heuristic. To speed up the heuristic, we restrict the instance data by considering a subset of decision splits sampled from solutions of the well-known CART algorithm. Extensive numerical experiments show that our approach is competitive with state-of-the-art ILP-based algorithms. In particular, the proposed approach is capable of handling large data sets with tens of thousands of data rows. Moreover, for such data sets it finds solutions competitive with CART.
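
    For readers unfamiliar with column generation, the loop below shows the technique on the classic cutting-stock problem rather than on the paper's decision-tree ILP: a restricted master LP is re-solved, its duals price out a new column (here via an unbounded-knapsack pricing problem), and columns are added until none with negative reduced cost remains. It assumes a recent SciPy (HiGHS duals via res.ineqlin.marginals); data and tolerances are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Cutting stock: roll width W, piece widths w, demands d.
W = 100
w = np.array([45, 36, 31, 14])
d = np.array([97, 610, 395, 211])
n_pieces = len(w)

# initial restricted master: one single-piece pattern per piece type
patterns = [np.eye(n_pieces, dtype=int)[j] * (W // w[j]) for j in range(n_pieces)]

def solve_master(patterns):
    A = np.column_stack(patterns)                   # piece-type x pattern matrix
    c = np.ones(A.shape[1])
    # min 1'x  s.t.  A x >= d,  x >= 0   (LP relaxation of the master)
    res = linprog(c, A_ub=-A, b_ub=-d, bounds=(0, None), method="highs")
    duals = -res.ineqlin.marginals                  # duals of A x >= d
    return res, duals

def price(duals):
    # pricing = unbounded knapsack: maximize duals'a  s.t.  w'a <= W, a integer
    best_val = np.zeros(W + 1)
    choice = np.full(W + 1, -1)
    for cap in range(1, W + 1):
        for j in range(n_pieces):
            if w[j] <= cap and best_val[cap - w[j]] + duals[j] > best_val[cap]:
                best_val[cap] = best_val[cap - w[j]] + duals[j]
                choice[cap] = j
    a = np.zeros(n_pieces, dtype=int)
    cap = W
    while cap > 0 and choice[cap] >= 0:
        j = choice[cap]
        a[j] += 1
        cap -= w[j]
    return a, best_val[W]

while True:
    res, duals = solve_master(patterns)
    new_col, value = price(duals)
    if value <= 1 + 1e-9:                           # no negative-reduced-cost column left
        break
    patterns.append(new_col)

print("LP lower bound on the number of rolls:", res.fun)
```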

    Convergence of Weighted Min-Sum Decoding Via Dynamic Programming on Trees

    Applying the max-product (and belief-propagation) algorithms to loopy graphs has become popular for best-assignment problems, largely because of their low computational complexity and impressive performance in practice. Still, there is no general understanding of the conditions required for convergence or for the optimality of converged solutions. This paper presents an analysis of both attenuated max-product (AMP) decoding and weighted min-sum (WMS) decoding for LDPC codes which guarantees convergence to a fixed point when a weight parameter $\beta$ is sufficiently small. It also shows that, if the fixed point satisfies certain consistency conditions, then it must be both the linear-programming (LP) and maximum-likelihood (ML) solution. For $(d_v, d_c)$-regular LDPC codes, the weight must satisfy $\beta(d_v-1) \leq 1$, whereas the results of Frey and Koetter require instead that $\beta(d_v-1)(d_c-1) < 1$. A counterexample showing that a fixed point might not be the ML solution if $\beta(d_v-1) > 1$ is also given. Finally, connections are explored with recent work by Arora et al. on the threshold of LP decoding.
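
    A minimal sketch of the kind of decoder analyzed here is given below: attenuated (normalized) min-sum message passing on a small parity-check matrix, with the attenuation weight playing the role of the parameter $\beta$. The exact weighting analyzed in the paper may differ in detail; the matrix, channel model, and constants are illustrative.

```python
import numpy as np

# a small example parity-check matrix (3 checks, 7 variables)
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
m, n = H.shape
checks = [np.flatnonzero(H[c]) for c in range(m)]       # variables in each check
vars_ = [np.flatnonzero(H[:, v]) for v in range(n)]     # checks touching each variable

def wms_decode(llr, beta=0.8, iters=20):
    v2c = {(c, v): llr[v] for c in range(m) for v in checks[c]}
    c2v = {(c, v): 0.0 for c in range(m) for v in checks[c]}
    for _ in range(iters):
        # check-to-variable: sign product and attenuated minimum magnitude
        for c in range(m):
            for v in checks[c]:
                others = [v2c[(c, u)] for u in checks[c] if u != v]
                sign = np.prod(np.sign(others))
                c2v[(c, v)] = beta * sign * min(abs(x) for x in others)
        # variable-to-check: channel LLR plus extrinsic check messages
        for v in range(n):
            for c in vars_[v]:
                v2c[(c, v)] = llr[v] + sum(c2v[(d, v)] for d in vars_[v] if d != c)
    total = llr + np.array([sum(c2v[(c, v)] for c in vars_[v]) for v in range(n)])
    return (total < 0).astype(int)                      # positive LLR -> bit 0

# all-zero codeword over a noisy channel: positive LLRs plus noise
rng = np.random.default_rng(0)
llr = 2.0 + rng.normal(0.0, 1.5, size=n)
print(wms_decode(llr))
```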

    On Decoding Irregular Tanner Codes with Local-Optimality Guarantees

    We consider decoding of binary Tanner codes using message-passing iterative decoding and linear programming (LP) decoding in MBIOS channels. We present new certificates that are based on a combinatorial characterization for local-optimality of a codeword in irregular Tanner codes with respect to any MBIOS channel. This characterization is based on a conical combination of normalized weighted subtrees in the computation trees of the Tanner graph. These subtrees may have any finite height h (even equal to or greater than half of the girth of the Tanner graph). In addition, the degrees of local-code nodes in these subtrees are not restricted to two. We prove that local optimality in this new characterization implies maximum-likelihood (ML) optimality and LP optimality, and show that a certificate can be computed efficiently. We also present a new message-passing iterative decoding algorithm, called normalized weighted min-sum (NWMS). NWMS decoding is a BP-type algorithm that applies to any irregular binary Tanner code with single parity-check local codes. We prove that if a locally-optimal codeword with respect to height parameter h exists (where, notably, h is not limited by the girth of the Tanner graph), then NWMS decoding finds this codeword in h iterations. The decoding guarantee of the NWMS decoding algorithm applies whenever there exists a locally optimal codeword. Because local optimality of a codeword implies that it is the unique ML codeword, the decoding guarantee also provides an ML certificate for this codeword. Finally, we apply the new local optimality characterization to regular Tanner codes, and prove lower bounds on the noise thresholds of LP decoding in MBIOS channels. When the noise is below these lower bounds, the probability that LP decoding fails decays doubly exponentially in the girth of the Tanner graph.

    Efficient evaluation of mp-MIQP solutions using lifting

    This paper presents an efficient approach for the evaluation of multi-parametric mixed-integer quadratic programming (mp-MIQP) solutions, occurring for instance in control problems involving discrete-time hybrid systems with quadratic cost. Traditionally, the online evaluation requires a sequential comparison of piecewise quadratic value functions. As the main contribution, we introduce a lifted parameter space in which the piecewise quadratic value functions become piecewise affine and can be merged into a single value function defined over a single polyhedral partition without any overlaps. This enables efficient point-location approaches using a single binary search tree. Numerical experiments include a power electronics application and demonstrate an online speedup of up to an order of magnitude. We also show how the achievable online evaluation time can be traded off against the offline computational time.
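
    The identity at the heart of the lifting can be checked in a few lines: a quadratic value-function piece is affine in the lifted parameter consisting of x and the entries of x x'. The merging of pieces and the binary search tree construction from the paper are not reproduced here; the data below is random.

```python
import numpy as np

# J(x) = x'Qx + c'x + d is affine in the lifted parameter z = (x, vec(x x')).
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 2)); Q = (Q + Q.T) / 2
c = rng.standard_normal(2)
d = 1.3
x = rng.standard_normal(2)

z = np.concatenate([x, np.outer(x, x).ravel()])   # lifted parameter
a = np.concatenate([c, Q.ravel()])                # affine coefficient vector
assert np.isclose(a @ z + d, x @ Q @ x + c @ x + d)
print("quadratic evaluated affinely in the lifted space:", a @ z + d)
```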

    Finding Minimum Spanning Forests in a Graph

    We introduce a graph partitioning problem motivated by computational topology and propose two algorithms that produce approximate solutions. Specifically, given a weighted, undirected graph $G$ and a positive integer $k$, we desire to find $k$ disjoint trees within $G$ such that each vertex of $G$ is contained in one of the trees and the weight of the largest tree is as small as possible. We are unable to find this problem in the graph partitioning literature, but we show that the problem is NP-complete. We then propose two approximation algorithms, one that uses a spectral clustering approach and another that employs a dynamic programming strategy, which produce near-optimal partitions on a family of test graphs. We describe these algorithms and analyze their empirical performance.
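
    One rough reading of the spectral-clustering flavour of the problem (not the authors' algorithm, and with synthetic data) is sketched below: cluster the vertices, take a minimum spanning tree inside each cluster, and report the weight of the heaviest one. A disconnected cluster yields a forest rather than a tree, which a practical method would have to repair.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
G = nx.random_geometric_graph(60, 0.3, seed=1)
for u, v in G.edges:
    diff = np.array(G.nodes[u]["pos"]) - np.array(G.nodes[v]["pos"])
    G[u][v]["weight"] = float(np.hypot(*diff))      # Euclidean edge weights

k = 4
# treat the 0/1 adjacency matrix as a similarity; a careful implementation
# would convert edge weights into similarities more deliberately
A = nx.to_numpy_array(G, weight=None)
labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                            random_state=0).fit_predict(A)

tree_weights = []
for c in range(k):
    nodes = [v for v in G.nodes if labels[v] == c]
    sub = G.subgraph(nodes)
    # if sub is disconnected this is a spanning forest, not a tree
    mst = nx.minimum_spanning_tree(sub, weight="weight")
    tree_weights.append(mst.size(weight="weight"))

print("largest tree weight:", max(tree_weights))
```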

    Inference in Graphical Models via Semidefinite Programming Hierarchies

    Maximum a posteriori probability (MAP) inference in graphical models amounts to solving a graph-structured combinatorial optimization problem. Popular inference algorithms such as belief propagation (BP) and generalized belief propagation (GBP) are intimately related to linear programming (LP) relaxation within the Sherali-Adams hierarchy. Despite the popularity of these algorithms, it is well understood that the Sum-of-Squares (SOS) hierarchy based on semidefinite programming (SDP) can provide superior guarantees. Unfortunately, SOS relaxations for a graph with $n$ vertices require solving an SDP with $n^{\Theta(d)}$ variables, where $d$ is the degree in the hierarchy. In practice, for $d \ge 4$, this approach does not scale beyond a few tens of variables. In this paper, we propose binary SDP relaxations for MAP inference using the SOS hierarchy with two innovations focused on computational efficiency. Firstly, in analogy to BP and its variants, we only introduce decision variables corresponding to contiguous regions in the graphical model. Secondly, we solve the resulting SDP using a non-convex Burer-Monteiro style method, and develop a sequential rounding procedure. We demonstrate that the resulting algorithm can solve problems with tens of thousands of variables within minutes, and outperforms BP and GBP on practical problems such as image denoising and Ising spin glasses. Finally, for specific graph types, we establish a sufficient condition for the tightness of the proposed partial SOS relaxation.
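
    The flavour of the approach can be conveyed on the simplest case: the degree-2 SOS (i.e. standard SDP) relaxation of an Ising MAP problem, solved in Burer-Monteiro form by gradient ascent on a low-rank factor with unit-norm rows, followed by plain hyperplane rounding. The paper's partial higher-order relaxation and sequential rounding are not reproduced; the couplings, rank, and step sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 30, 8
J = rng.standard_normal((n, n)); J = (J + J.T) / 2   # random Ising couplings
np.fill_diagonal(J, 0.0)

# Burer-Monteiro: X = V V' with unit-norm rows, so diag(X) = 1 automatically.
V = rng.standard_normal((n, r))
V /= np.linalg.norm(V, axis=1, keepdims=True)
for _ in range(500):
    V += 0.05 * (2 * J @ V)                          # gradient of <J, V V'>
    V /= np.linalg.norm(V, axis=1, keepdims=True)    # project rows back to the sphere

# random-hyperplane rounding to +-1 spins, keeping the best of several draws
best_val = -np.inf
for _ in range(50):
    x = np.sign(V @ rng.standard_normal(r))
    best_val = max(best_val, x @ J @ x)
print("rounded MAP objective:", best_val)
```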

    Constrained clustering via diagrams: A unified theory and its applications to electoral district design

    The paper develops a general framework for constrained clustering that is based on the close connection between geometric clustering and diagrams. Various new structural and algorithmic results are proved (and known results generalized and unified) which show that the approach is computationally efficient and flexible enough to accommodate various conflicting demands. The strength of the model is also demonstrated practically on real-world instances of the electoral district design problem, where the municipalities of a state have to be grouped into districts of nearly equal population while obeying certain politically motivated requirements.
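
    A small illustration of the balanced-assignment viewpoint (with synthetic data and hypothetical district centers, not the paper's model): assigning municipalities to centers under equal-population constraints is a transportation LP, and in balanced least-squares settings of this kind the optimal assignment is known to be induced by a diagram of the sort the framework exploits.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m, k = 60, 4
pts = rng.random((m, 2))                         # municipality locations
pop = rng.integers(1, 10, size=m).astype(float)  # populations
sites = pts[rng.choice(m, size=k, replace=False)]  # hypothetical district centers

# cost of assigning municipality i to district j: population-weighted squared distance
cost = pop[:, None] * ((pts[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)

# x[i, j] = fraction of municipality i assigned to district j
A_eq = np.zeros((m + k, m * k))
b_eq = np.zeros(m + k)
for i in range(m):                               # each municipality fully assigned
    A_eq[i, i * k:(i + 1) * k] = 1.0
    b_eq[i] = 1.0
for j in range(k):                               # each district gets 1/k of the population
    A_eq[m + j, j::k] = pop
    b_eq[m + j] = pop.sum() / k

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
assignment = res.x.reshape(m, k)
print("objective:", res.fun)
print("fractionally split municipalities:", int(((assignment > 1e-6).sum(axis=1) > 1).sum()))
```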

    Twenty (or so) Questions: $D$-ary Length-Bounded Prefix Coding

    Efficient optimal prefix coding has long been accomplished via the Huffman algorithm. However, there is still room for improvement and exploration regarding variants of the Huffman problem. Length-limited Huffman coding, useful for many practical applications, is one such variant, in which codes are restricted to those in which none of the $n$ codewords is longer than a given length $l_{\max}$. Binary length-limited coding can be done in $O(n l_{\max})$ time and $O(n)$ space via the widely used Package-Merge algorithm, and with even smaller asymptotic complexity using a lesser-known algorithm. In this paper these algorithms are generalized without increasing complexity in order to introduce a minimum codeword length constraint $l_{\min}$, to allow for objective functions other than the minimization of expected codeword length, and to be applicable to both binary and nonbinary codes; nonbinary codes were previously addressed using a slower dynamic programming approach. These extensions have various applications -- including fast decompression and a modified version of the game "Twenty Questions" -- and can be used to solve the problem of finding an optimal code with limited fringe, that is, finding the best code among codes with a maximum difference between the longest and shortest codewords. The previously proposed method for solving this problem was not polynomial time, whereas solving it with the novel linear-space algorithm requires only $O(n (l_{\max} - l_{\min})^2)$ time, or even less if $l_{\max} - l_{\min}$ is not $O(\log n)$.
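
    For reference, the binary Package-Merge algorithm that the paper generalizes can be stated compactly. The sketch below computes optimal length-limited codeword lengths in the textbook binary form, without the minimum-length constraint, alternative objectives, or nonbinary alphabets introduced in the paper; the example weights are arbitrary.

```python
def package_merge(weights, L):
    """Codeword lengths of an optimal prefix code with no codeword longer than L."""
    n = len(weights)
    assert 2 ** L >= n, "no prefix code with n codewords fits within depth L"
    singletons = [(w, [i]) for i, w in enumerate(weights)]
    packages = sorted(singletons, key=lambda p: p[0])
    for _ in range(L - 1):
        # pair up adjacent packages (dropping an odd leftover), then merge in singletons
        paired = [(packages[j][0] + packages[j + 1][0],
                   packages[j][1] + packages[j + 1][1])
                  for j in range(0, len(packages) - 1, 2)]
        packages = sorted(paired + singletons, key=lambda p: p[0])
    # take the 2(n-1) cheapest items; each symbol's length is the number of
    # selected items that contain it
    lengths = [0] * n
    for _, symbols in packages[:2 * (n - 1)]:
        for i in symbols:
            lengths[i] += 1
    return lengths

print(package_merge([1, 1, 2, 3, 5, 8, 13], L=4))
```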