
    An ILP Solver for Multi-label MRFs with Connectivity Constraints

    Integer Linear Programming (ILP) formulations of Markov random field (MRF) models with global connectivity priors were investigated previously in computer vision, e.g., \cite{globalinter,globalconn}. In these works, only Linear Programming (LP) relaxations \cite{globalinter,globalconn} or simplified versions \cite{graphcutbase} of the problem were solved. This paper investigates the ILP formulation of multi-label MRFs with exact connectivity priors via a branch-and-cut method, which provably finds globally optimal solutions. The method enforces connectivity priors iteratively by a cutting plane method and provides feasible solutions with a guarantee on sub-optimality even if we terminate it early. The proposed ILP can be applied as a post-processing method on top of any existing multi-label segmentation approach. As it provides globally optimal solutions, it can be used off-line to generate ground-truth labelings, which serve as a quality check for any fast on-line algorithm. Furthermore, it can be used to generate ground-truth proposals for weakly supervised segmentation. We demonstrate the power and usefulness of our model by several experiments on the BSDS500 and PASCAL image datasets, as well as on medical images with trained probability maps. Comment: 19 pages
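    The cutting-plane idea can be illustrated in a few lines. The sketch below is a hedged simplification assuming a binary (single foreground label) segmentation on a 4-connected grid, with PuLP as the ILP back end and networkx for connectivity checks; the variable names and the choice of separator (the external boundary of a disconnected component) are illustrative and do not reproduce the paper's multi-label implementation.

```python
# Minimal sketch: enforce connectivity of a binary labeling by lazily
# adding vertex-separator cuts whenever the current ILP solution is
# disconnected. Unary costs are random stand-ins for an MRF data term.
import networkx as nx
import numpy as np
import pulp

rng = np.random.default_rng(0)
H, W = 8, 8
grid = nx.grid_2d_graph(H, W)                 # pixel adjacency graph
unary = rng.normal(size=(H, W))               # cost of labeling a pixel as foreground

prob = pulp.LpProblem("connected_segmentation", pulp.LpMinimize)
x = {p: pulp.LpVariable(f"x_{p[0]}_{p[1]}", cat="Binary") for p in grid.nodes}
prob += pulp.lpSum(float(unary[p]) * x[p] for p in grid.nodes)   # objective
prob += pulp.lpSum(x.values()) >= 1           # at least one foreground pixel

for _ in range(100):                          # cutting-plane loop
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    fg = [p for p in grid.nodes if pulp.value(x[p]) > 0.5]
    comps = list(nx.connected_components(grid.subgraph(fg)))
    if len(comps) <= 1:
        break                                 # labeling is connected: done
    # Violated cut: pick a, b in different components; the external boundary
    # of a's component separates a from b, so any connected solution that
    # selects both a and b must also select a boundary pixel.
    a, b = next(iter(comps[0])), next(iter(comps[1]))
    boundary = {q for p in comps[0] for q in grid.neighbors(p)} - comps[0]
    prob += x[a] + x[b] <= 1 + pulp.lpSum(x[q] for q in boundary)

print("foreground pixels:", fg)
```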

    Multiscale approach for the network compression-friendly ordering

    We present a fast multiscale approach for the network minimum logarithmic arrangement problem. This type of arrangement plays an important role in network compression and fast node/link access operations. The algorithm is of linear complexity and exhibits good scalability, which makes it practical and attractive for use on large-scale instances. Its effectiveness is demonstrated on a large set of real-life networks. These networks, together with the corresponding best-known minimization results, are suggested as an open benchmark for the research community to evaluate new methods for this problem.
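    For readers unfamiliar with the objective, the sketch below (an illustrative assumption, not the multiscale algorithm itself) evaluates the logarithmic arrangement cost of an ordering, i.e., the sum of log gaps over edges, which models the cost of gap-encoded adjacency lists, and compares an arbitrary ordering with a simple BFS baseline on a small networkx graph.

```python
# Evaluate the minimum logarithmic arrangement (MLogA) objective for a
# given node ordering and compare two simple orderings of a toy network.
import math
import networkx as nx

def log_arrangement_cost(graph, order):
    """Sum of log2|pi(u) - pi(v)| over all edges, for the permutation `order`."""
    pos = {node: i for i, node in enumerate(order)}
    return sum(math.log2(abs(pos[u] - pos[v])) for u, v in graph.edges)

G = nx.karate_club_graph()                        # small stand-in network
identity = list(G.nodes)                          # arbitrary ordering
bfs = [0] + [v for _, v in nx.bfs_edges(G, 0)]    # locality-aware baseline
print("identity ordering:", round(log_arrangement_cost(G, identity), 1))
print("BFS ordering     :", round(log_arrangement_cost(G, bfs), 1))
```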

    Computational Complexity versus Statistical Performance on Sparse Recovery Problems

    We show that several classical quantities controlling compressed sensing performance directly match classical parameters controlling algorithmic complexity. We first describe linearly convergent restart schemes on first-order methods solving a broad range of compressed sensing problems, where sharpness at the optimum controls convergence speed. We show that for sparse recovery problems, this sharpness can be written as a condition number, given by the ratio between true signal sparsity and the largest signal size that can be recovered by the observation matrix. In a similar vein, Renegar's condition number is a data-driven complexity measure for convex programs, generalizing classical condition numbers for linear systems. We show that for a broad class of compressed sensing problems, the worst-case value of this algorithmic complexity measure taken over all signals matches the restricted singular value of the observation matrix which controls robust recovery performance. Overall, this means in both cases that, in compressed sensing problems, a single parameter directly controls both computational complexity and recovery performance. Numerical experiments illustrate these points using several classical algorithms. Comment: Final version, to appear in Information and Inference.
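    The restart mechanism itself is simple to illustrate. The hedged sketch below wraps a plain FISTA solver for a LASSO instance and restarts it from its last iterate on a fixed schedule; under sharpness at the optimum, each restart contracts the optimality gap by a constant factor. The problem sizes, restart length and regularization weight are illustrative choices, not values or code from the paper.

```python
# Restart scheme around plain FISTA for 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 100, 400, 10                      # measurements, dimension, sparsity
A = rng.normal(size=(n, p)) / np.sqrt(n)
x_true = np.zeros(p); x_true[:s] = rng.normal(size=s)
b = A @ x_true
lam = 0.01                                  # l1 weight (illustrative)
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the data term

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(x0, iters):
    """Plain FISTA, warm-started at x0 (restarting resets the momentum)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

def objective(x):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.linalg.norm(x, 1)

x = np.zeros(p)
for k in range(6):                          # outer restart loop
    x = fista(x, 150)                       # restart from the last iterate
    print(f"restart {k}: objective {objective(x):.6f}, "
          f"distance to x_true {np.linalg.norm(x - x_true):.3f}")
```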

    Paradigms for computational nucleic acid design

    The design of DNA and RNA sequences is critical for many endeavors, from DNA nanotechnology, to PCR-based applications, to DNA hybridization arrays. Results in the literature rely on a wide variety of design criteria adapted to the particular requirements of each application. Using an extensively studied thermodynamic model, we perform a detailed study of several criteria for designing sequences intended to adopt a target secondary structure. We conclude that superior design methods should explicitly implement both a positive design paradigm (optimize affinity for the target structure) and a negative design paradigm (optimize specificity for the target structure). The commonly used approaches of sequence symmetry minimization and minimum free-energy satisfaction primarily implement negative design and can be strengthened by introducing a positive design component. Surprisingly, our findings hold for a wide range of secondary structures and are robust to modest perturbation of the thermodynamic parameters used for evaluating sequence quality, suggesting the feasibility and ongoing utility of a unified approach to nucleic acid design as parameter sets are refined further. Finally, we observe that designing for thermodynamic stability does not determine folding kinetics, emphasizing the opportunity for extending design criteria to target kinetic features of the energy landscape.
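    To make the two paradigms concrete, the toy sketch below scores candidate sequences with a positive term (complementarity on the target base pairs) plus a weighted negative term (complementarity away from the target) and runs a naive single-mutation search. The pair-counting score is a deliberately crude stand-in for the nearest-neighbor thermodynamic model the paper uses; only the two-term shape of the objective is the point, and the weights, names and 12-nt hairpin target are illustrative assumptions.

```python
# Toy positive + negative design objective for a target secondary structure.
import itertools
import random

random.seed(0)
COMPLEMENT = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def score(seq, target_pairs):
    # Positive design term: reward affinity for the target structure.
    affinity = sum((seq[i], seq[j]) in COMPLEMENT for i, j in target_pairs)
    # Negative design term: penalize spurious complementarity elsewhere.
    spurious = sum((seq[i], seq[j]) in COMPLEMENT
                   for i, j in itertools.combinations(range(len(seq)), 2)
                   if j - i > 3 and (i, j) not in target_pairs)
    return affinity - 0.1 * spurious

target = [(0, 11), (1, 10), (2, 9), (3, 8)]        # 4-bp stem of a 12-nt hairpin
seq = list("A" * 12)
for _ in range(2000):                              # naive single-mutation search
    i = random.randrange(12)
    old, seq[i] = seq[i], random.choice("ACGU")
    if score(seq, target) < score(seq[:i] + [old] + seq[i + 1:], target):
        seq[i] = old                               # revert moves that hurt the score
print("".join(seq), score(seq, target))
```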

    Implementation of an Optimal First-Order Method for Strongly Convex Total Variation Regularization

    We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in tomographic reconstruction, image deblurring, etc. The algorithm applies to μ-strongly convex objective functions with L-Lipschitz continuous gradient. In Nesterov's framework, both μ and L are assumed known -- an assumption that is seldom satisfied in practice. We propose to incorporate mechanisms to estimate locally sufficient μ and L during the iterations. The mechanisms also allow for the application to non-strongly convex functions. We discuss the iteration complexity of several first-order methods, including the proposed algorithm, and we use a 3D tomography problem to compare the performance of these methods. The results show that for ill-conditioned problems solved to high accuracy, the proposed method significantly outperforms state-of-the-art first-order methods, as also suggested by theoretical results. Comment: 23 pages, 4 figures
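    As a point of reference, the sketch below shows the underlying accelerated iteration for a μ-strongly convex function with L-Lipschitz gradient, applied to a strongly convex quadratic, with a simple doubling backtracking to estimate L. It is not the paper's TV-reconstruction solver or its μ-estimation heuristic; μ is taken as known here, and the problem, step rule and constants are illustrative assumptions.

```python
# Nesterov's accelerated gradient method for a strongly convex quadratic,
# with backtracking estimation of the Lipschitz constant L.
import numpy as np

rng = np.random.default_rng(1)
n = 200
A = rng.normal(size=(n, n))
Q = A.T @ A + 0.1 * np.eye(n)          # Hessian; strong convexity parameter >= 0.1
b = rng.normal(size=n)

def f(x):
    return 0.5 * x @ Q @ x - b @ x

def grad(x):
    return Q @ x - b

mu = 0.1                               # known strong-convexity lower bound
L = 1.0                                # initial (deliberately small) Lipschitz estimate
x = y = np.zeros(n)
for _ in range(300):
    g = grad(y)
    # Backtracking: double L until the standard sufficient-decrease bound holds.
    while f(y - g / L) > f(y) - g @ g / (2.0 * L):
        L *= 2.0
    x_new = y - g / L
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    y = x_new + beta * (x_new - x)     # momentum for the strongly convex setting
    x = x_new
print("final gradient norm:", np.linalg.norm(grad(x)))
```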