
    Image labeling and grouping by minimizing linear functionals over cones

    We consider energy minimization problems related to image labeling, partitioning, and grouping, which typically show up at mid-level stages of computer vision systems. A common feature of these problems is their intrinsic combinatorial complexity from an optimization point of view. Rather than trying to compute the global minimum - a goal we consider elusive in these cases - we wish to design optimization approaches that exhibit two relevant properties: first, in each application a solution with a guaranteed degree of suboptimality can be computed; second, the computations are based on clearly defined algorithms that do not comprise any (hidden) tuning parameters. In this paper, we focus on the second property and introduce a novel and general optimization technique to the field of computer vision, which amounts to computing a suboptimal solution by just solving a convex optimization problem. As representative examples, we consider two binary quadratic energy functionals related to image labeling and perceptual grouping. Both problems can be considered as instances of a general quadratic functional in binary variables, which is embedded into a higher-dimensional space such that suboptimal solutions can be computed as minima of linear functionals over cones in that space (semidefinite programs). Extensive numerical results reveal that, on average, suboptimal solutions can be computed which yield a gap below 5% with respect to the global optimum in cases where this is known.
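    As a hedged illustration of the lifting described above (a minimal sketch, not the authors' implementation), the snippet below relaxes a small binary quadratic functional min over x in {-1,1}^n of x^T Q x to the minimization of a linear functional tr(QX) over the positive semidefinite cone with unit diagonal, then rounds back to binary labels by randomized hyperplanes; Q is synthetic and cvxpy is an assumed dependency.

```python
# Sketch: SDP relaxation of a binary quadratic energy, assuming cvxpy.
# min_{x in {-1,1}^n} x^T Q x  is lifted to  min tr(Q X) s.t. X PSD, diag(X) = 1.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 12
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                      # symmetric synthetic energy matrix

# Linear functional minimized over a cone: the convex relaxation.
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)),
                  [X >> 0, cp.diag(X) == 1])
prob.solve()

# Randomized hyperplane rounding back to binary labels.
w, V = np.linalg.eigh(X.value)
L = V @ np.diag(np.sqrt(np.clip(w, 0, None)))   # factor X ~ L L^T
best_x, best_val = None, np.inf
for _ in range(100):
    x = np.sign(L @ rng.standard_normal(n))
    x[x == 0] = 1
    val = x @ Q @ x
    if val < best_val:
        best_x, best_val = x, val

print("SDP lower bound:", prob.value)
print("rounded energy :", best_val)    # gap certifies the degree of suboptimality
```

    The gap between the SDP lower bound and the rounded energy is exactly the kind of suboptimality certificate the abstract refers to.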

    On Approximability of Bounded Degree Instances of Selected Optimization Problems

    In order to cope with the approximation hardness of an underlying optimization problem, it is advantageous to consider specific families of instances with properties that can be exploited to obtain efficient approximation algorithms for the restricted version of the problem with improved performance guarantees. In this thesis, we investigate the approximation complexity of selected NP-hard optimization problems restricted to instances with a bounded degree, occurrence, or weight parameter. Specifically, we consider the family of dense instances, where typically the average degree is bounded from below by some function of the size of the instance. Complementarily, we examine the family of sparse instances, in which the average degree is bounded from above by some fixed constant. We focus on developing new methods for proving explicit approximation hardness results for general as well as for restricted instances. The first part of the thesis contributes to the systematic investigation of the VERTEX COVER problem in k-hypergraphs and k-partite k-hypergraphs with density and regularity constraints. We design efficient approximation algorithms for the problems with improved performance guarantees as compared to the general case. On the other hand, we prove the optimality of our approximation upper bounds under the Unique Games Conjecture or a variant thereof. In the second part of the thesis, we study mainly the approximation hardness of restricted instances of selected global optimization problems. We establish improved, or in some cases the first, inapproximability thresholds for problems such as the METRIC DIMENSION problem restricted to graphs with maximum degree 3 and the (1,2)-STEINER TREE problem. We introduce a new reduction method for proving explicit approximation lower bounds for problems related to the TRAVELING SALESPERSON (TSP) problem. In particular, we prove the best inapproximability thresholds to date for the general METRIC TSP problem, the ASYMMETRIC TSP problem, the SHORTEST SUPERSTRING problem, the MAXIMUM TSP problem, and TSP problems with bounded metrics.
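    For context on the baseline that such restricted-instance results compare against, here is a hedged sketch (textbook material, not an algorithm from the thesis) of the classic factor-k approximation for VERTEX COVER in k-uniform hypergraphs: repeatedly pick an uncovered hyperedge and take all of its vertices.

```python
# Sketch: classic factor-k approximation for VERTEX COVER in k-uniform
# hypergraphs (a baseline, not the thesis's improved algorithms).
def greedy_hypergraph_vertex_cover(edges):
    """edges: iterable of hyperedges, each a set of vertices."""
    cover = set()
    for edge in edges:
        if cover.isdisjoint(edge):   # edge not yet covered
            cover |= edge            # take all of its (at most k) vertices
        # The chosen edges are pairwise disjoint, and any optimal cover must
        # contain at least one vertex from each, so |cover| <= k * OPT.
    return cover

# Hypothetical 3-uniform instance.
edges = [{1, 2, 3}, {3, 4, 5}, {5, 6, 7}, {1, 6, 8}]
print(greedy_hypergraph_vertex_cover(edges))
```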

    Global optimization for low-dimensional switching linear regression and bounded-error estimation

    The paper provides global optimization algorithms for two particularly difficult nonconvex problems raised by hybrid system identification: switching linear regression and bounded-error estimation. While most works focus on local optimization heuristics without global optimality guarantees, or with guarantees valid only under restrictive conditions, the proposed approach always yields a solution with a certificate of global optimality. This approach relies on a branch-and-bound strategy for which we devise lower bounds that can be efficiently computed. In order to obtain algorithms that scale with the number of data points, we directly optimize the model parameters in a continuous optimization setting without involving integer variables. Numerical experiments show that the proposed algorithms offer higher accuracy than convex relaxations with a reasonable computational burden for hybrid system identification. In addition, we discuss how bounded-error estimation is related to robust estimation in the presence of outliers and exact recovery under sparse noise, for which we also obtain promising numerical results.
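    To make the branch-and-bound idea concrete, here is a hedged toy sketch in the spirit of the paper (not its actual bounds): two scalar switching modes y ~ a_j x, a box search over (a_1, a_2), and a lower bound obtained by letting each data point independently pick its best parameter inside the box, which relaxes the coupling between points and is cheap to evaluate.

```python
# Sketch: branch-and-bound for a toy two-mode switching regression y ~ a_j * x.
# A simplified illustration of certified global optimization, not the paper's bounds.
import heapq
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 2.0, 60)
y = np.where(rng.integers(0, 2, 60) == 0, 1.5 * x, -0.7 * x) \
    + 0.01 * rng.standard_normal(60)

def cost(a1, a2):
    """Exact switching cost: each point is assigned to its best mode."""
    return np.minimum((y - a1 * x) ** 2, (y - a2 * x) ** 2).sum()

def lower_bound(box):
    """Relaxation: every point picks its own best a_j inside the box."""
    (lo1, hi1), (lo2, hi2) = box
    b1 = (y - np.clip(y / x, lo1, hi1) * x) ** 2
    b2 = (y - np.clip(y / x, lo2, hi2) * x) ** 2
    return np.minimum(b1, b2).sum()

box0 = ((-3.0, 3.0), (-3.0, 3.0))
best = cost(0.0, 0.0)
heap = [(lower_bound(box0), box0)]
while heap:
    lb, box = heapq.heappop(heap)
    if lb >= best - 1e-6:                  # certificate of global optimality
        break
    (lo1, hi1), (lo2, hi2) = box
    c1, c2 = (lo1 + hi1) / 2, (lo2 + hi2) / 2
    best = min(best, cost(c1, c2))         # upper bound at the box center
    if hi1 - lo1 >= hi2 - lo2:             # branch: split the longest side
        kids = [((lo1, c1), (lo2, hi2)), ((c1, hi1), (lo2, hi2))]
    else:
        kids = [((lo1, hi1), (lo2, c2)), ((lo1, hi1), (c2, hi2))]
    for kid in kids:
        heapq.heappush(heap, (lower_bound(kid), kid))

print("globally optimal cost ~", best)
```

    Note that no integer assignment variables appear: the search is over the continuous parameter box, mirroring the paper's strategy for scalability in the number of data points.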

    Robust Adaptive Beamforming for General-Rank Signal Model with Positive Semi-Definite Constraint via POTDC

    The robust adaptive beamforming (RAB) problem for the general-rank signal model with an additional positive semi-definite constraint is considered. Using the principle of worst-case performance optimization, this RAB problem leads to a difference-of-convex functions (DC) optimization problem. The existing approaches for solving the resulting non-convex DC problem are based on approximations and find only suboptimal solutions. Here we solve the non-convex DC problem rigorously and give arguments suggesting that the solution is globally optimal. In particular, we rewrite the problem as the minimization of a one-dimensional optimal value function whose corresponding optimization problem is non-convex. Then, the optimal value function is replaced with an equivalent one for which the corresponding optimization problem is convex. The new one-dimensional optimal value function is minimized iteratively via the polynomial-time DC (POTDC) algorithm. We show that our solution satisfies the Karush-Kuhn-Tucker (KKT) optimality conditions, and there is strong evidence that such a solution is also globally optimal. Towards this conclusion, we conjecture that the new optimal value function is convex. The new RAB method shows superior performance compared to other state-of-the-art general-rank RAB methods.
    Comment: 29 pages, 7 figures, 2 tables, submitted to IEEE Trans. Signal Processing on August 201
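    The core move in POTDC-style methods is to handle a difference-of-convex objective by linearizing the concave part around the current iterate and solving the resulting convex subproblem. The following hedged sketch shows this generic DC iteration on a synthetic problem (the underlying principle, not the paper's beamforming formulation); cvxpy is an assumed dependency.

```python
# Sketch: generic DC iteration (linearize the concave part, solve the convex
# subproblem), the principle behind POTDC-type methods; synthetic data.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 5
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = M1 @ M1.T          # convex part:   x^T A x
B = M2 @ M2.T          # concave part: -x^T B x
# Objective: f(x) = x^T A x - x^T B x, a difference of convex functions.

x_k = rng.standard_normal(n)
x_k /= np.linalg.norm(x_k)
for _ in range(50):
    x = cp.Variable(n)
    grad = 2 * B @ x_k                       # gradient of x^T B x at x_k
    # Replace the concave term by its tangent: the surrogate majorizes f,
    # so each convex solve cannot increase the true objective.
    surrogate = cp.quad_form(x, A) - (x_k @ B @ x_k + grad @ (x - x_k))
    cp.Problem(cp.Minimize(surrogate), [cp.norm(x, 2) <= 1]).solve()
    if np.linalg.norm(x.value - x_k) < 1e-7:
        break
    x_k = x.value

print("KKT (stationary) point:", x_k)
print("objective:", x_k @ A @ x_k - x_k @ B @ x_k)
```

    As in the abstract, such an iteration is guaranteed to reach a KKT point; global optimality rests on additional structure (here, the conjectured convexity of the optimal value function).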

    Robust Monotonic Optimization Framework for Multicell MISO Systems

    The performance of multiuser systems is difficult both to measure fairly and to optimize. Most resource allocation problems are non-convex and NP-hard, even under simplifying assumptions such as perfect channel knowledge, homogeneous channel properties among users, and simple power constraints. We establish a general optimization framework that systematically solves these problems to global optimality. The proposed branch-reduce-and-bound (BRB) algorithm handles general multicell downlink systems with single-antenna users, multiantenna transmitters, arbitrary quadratic power constraints, and robustness to channel uncertainty. A robust fairness-profile optimization (RFO) problem is solved at each iteration; this is a quasi-convex problem and a novel generalization of max-min fairness. The BRB algorithm is computationally costly, but it shows better convergence than the previously proposed outer polyblock approximation algorithm. Our framework is suitable for computing benchmarks in general multicell systems with or without channel uncertainty. We illustrate this by deriving and evaluating a zero-forcing solution to the general problem.
    Comment: Published in IEEE Transactions on Signal Processing, 16 pages, 9 figures, 2 tables
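    As a hedged sketch of the quasi-convex subproblem idea (the standard bisection view of max-min SINR fairness, not the paper's robust RFO or BRB code), the snippet below bisects over a common SINR target and checks feasibility of the resulting linear power-control constraints; the channel gains are synthetic and cvxpy is an assumed dependency.

```python
# Sketch: bisection over a common SINR target t; each feasibility check is a
# linear program. Standard max-min fairness machinery, not the paper's RFO.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
K = 4
G = rng.uniform(0.2, 1.0, (K, K)) + np.eye(K)  # G[k, j]: gain from tx j to rx k
sigma, p_max = 0.1, 1.0

def feasible(t):
    """Is SINR_k >= t achievable for every user under the power budget?"""
    p = cp.Variable(K, nonneg=True)
    cons = [p <= p_max]
    for k in range(K):
        interference = sigma + cp.sum(cp.multiply(G[k], p)) - G[k, k] * p[k]
        cons.append(G[k, k] * p[k] >= t * interference)  # linear for fixed t
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status == cp.OPTIMAL

lo, hi = 0.0, 50.0
while hi - lo > 1e-3:          # bisection works because feasibility is monotone in t
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print("max-min SINR ~", lo)
```

    In the paper's setting, a robust, profile-weighted generalization of this subproblem is solved inside each BRB iteration; this sketch only shows the quasi-convex bisection mechanism itself.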