    New algorithms for the dual of the convex cost network flow problem with application to computer vision

    Motivated by various applications in computer vision, we consider an integer convex optimization problem that is the dual of the convex cost network flow problem. In this paper, we first propose a new primal algorithm for computing an optimal solution of the problem. Our primal algorithm iteratively updates primal variables by solving associated minimum cut problems. The main contribution of this paper is a tight bound on the number of iterations. We show that the time complexity of the primal algorithm is K · T(n, m), where K is the range of the primal variables and T(n, m) is the time needed to compute a minimum cut in a graph with n nodes and m edges. We then propose a primal-dual algorithm for the dual of the convex cost network flow problem. The primal-dual algorithm can be seen as a refined version of the primal algorithm that maintains dual variables (a flow) in addition to the primal variables. Although its time complexity is the same as that of the primal algorithm, we can expect better performance in practice. We finally consider an application to a computer vision problem called the panoramic stitching problem. We apply several implementations of our primal-dual algorithm to instances of the panoramic stitching problem and test their practical performance. We also show that our primal algorithm, together with its proofs, extends to the L♮-convex function minimization problem, which is more general than the dual of the convex cost network flow problem.
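    To make the min-cut-based primal update concrete, here is a minimal sketch, assuming an objective of the form g(x) = Σ_i g_i(x_i) + Σ_{(i,j)} φ_ij(x_i − x_j) with convex g_i and φ_ij; the helper names and the use of networkx are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: steepest descent for an objective
#     g(x) = sum_i g_i(x_i) + sum_{(i,j)} phi_ij(x_i - x_j)
# with convex g_i and phi_ij, where each step "x +/- indicator(S)"
# is found by one s-t minimum cut. Not the paper's implementation.
import networkx as nx

def best_step(x, unary, pair, delta):
    """Find S minimizing g(x + delta * chi_S) via one s-t minimum cut."""
    n = len(x)
    th0 = [unary[i](x[i]) for i in range(n)]          # cost if i stays put
    th1 = [unary[i](x[i] + delta) for i in range(n)]  # cost if i joins S
    cap = {}                                          # accumulated edge capacities

    def add(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0.0) + c

    for (i, j), phi in pair.items():
        d = x[i] - x[j]
        A = phi(d)            # neither endpoint moves (same gap if both move)
        B = phi(d - delta)    # only j moves
        C = phi(d + delta)    # only i moves
        th1[i] += C - A
        th1[j] += A - C
        add(i, j, B + C - 2 * A)   # >= 0 by convexity of phi (submodularity)

    for i in range(n):
        m = min(th0[i], th1[i])    # shift so capacities are nonnegative
        add("s", i, th1[i] - m)    # cut when i ends up on the sink side (in S)
        add(i, "t", th0[i] - m)    # cut when i stays on the source side

    G = nx.DiGraph()
    for (u, v), c in cap.items():
        G.add_edge(u, v, capacity=c)
    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    return {i for i in range(n) if i not in source_side}

def primal_descent(x, unary, pair):
    """Apply +1 / -1 min-cut steps until neither direction improves g."""
    def g(z):
        return (sum(unary[i](z[i]) for i in range(len(z)))
                + sum(phi(z[i] - z[j]) for (i, j), phi in pair.items()))
    while True:
        improved = False
        for delta in (1, -1):
            S = best_step(x, unary, pair, delta)
            y = [xi + delta if i in S else xi for i, xi in enumerate(x)]
            if g(y) < g(x):
                x, improved = y, True
        if not improved:
            return x
```

    Each call to best_step costs one T(n, m) min-cut computation, which is how a bound of the form K · T(n, m) arises when the labels can move at most K times.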

    Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization

    We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An extrapolation step on the primal variable is performed to obtain an accelerated convergence rate. We also develop a mini-batch version of the SPDC method, which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which achieves better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods.
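    The alternating updates described here admit closed forms for simple losses. Below is a minimal sketch, assuming the squared loss and an L2 regularizer so that both proximal steps are explicit; the step sizes follow the form suggested in the paper for 1-smooth losses, but the code is an illustration, not the authors' implementation.

```python
# Hedged SPDC-style sketch for min_w (1/2n)||Aw - b||^2 + (lam/2)||w||^2.
# Assumptions: squared loss (so phi_i*(a) = a^2/2 + a*b[i]) and L2 regularizer.
import numpy as np

def spdc(A, b, lam, iters=10000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    # step sizes of the form suggested in the paper for 1-smooth losses
    R = np.max(np.linalg.norm(A, axis=1))
    tau = 0.5 / R * np.sqrt(1.0 / (n * lam))
    sigma = 0.5 / R * np.sqrt(n * lam)
    theta = 1.0 - 1.0 / (n + R * np.sqrt(n / lam))
    w = np.zeros(d)
    w_bar = w.copy()
    alpha = np.zeros(n)               # dual variables, one per sample
    u = np.zeros(d)                   # running (1/n) * sum_i alpha_i * A[i]
    for _ in range(iters):
        i = rng.integers(n)           # one randomly chosen dual coordinate
        # dual maximization step: closed form for the squared-loss conjugate
        a_new = (sigma * (A[i] @ w_bar - b[i]) + alpha[i]) / (sigma + 1.0)
        delta = a_new - alpha[i]
        alpha[i] = a_new
        # primal minimization with an unbiased correction term for coordinate i
        v = u + delta * A[i]
        w_new = (w - tau * v) / (1.0 + tau * lam)
        u += delta * A[i] / n
        w_bar = w_new + theta * (w_new - w)   # extrapolation step
        w = w_new
    return w
```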

    A Primal Dual Smoothing Framework for Max-Structured Nonconvex Optimization

    We propose a primal dual first-order smoothing framework for solving a class of nonsmooth nonconvex optimization problems with max-structure. We analyze the primal and dual oracle complexities of the framework via two approaches, namely the dual-then-primal and primal-then-dual smoothing approaches. Our framework improves the best-known oracle complexities of existing methods, even in the restricted problem setting. As the cornerstone of our framework, we propose a conceptually simple primal dual method for solving a class of convex-concave saddle-point problems with primal strong convexity, based on a newly developed non-Hilbertian inexact accelerated proximal gradient algorithm. This primal dual method has a dual oracle complexity that is significantly better than previous ones, and a primal oracle complexity that matches the best known, up to a logarithmic factor. Finally, we extend our framework to the stochastic case and demonstrate that the oracle complexities of this extension match the state of the art.
    Comment: 37 pages
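    For intuition about the dual smoothing step at the heart of such frameworks, the sketch below is not the paper's framework; it only illustrates the classical entropic smoothing of a max-structured term f(x) = max_i (a_i^T x + b_i), followed by plain gradient descent. All names and the choice of smoothing are assumptions.

```python
# Hedged illustration of dual smoothing: replace the nonsmooth term
#     f(x) = max_i (a_i^T x + b_i)
# by f_mu(x) = mu * logsumexp((Ax + b) / mu), which is smooth with
# gradient Lipschitz constant about ||A||_2^2 / mu, then descend on f_mu.
import numpy as np
from scipy.special import logsumexp, softmax

def smoothed_max_descent(A, b, mu=1e-2, lr=None, iters=500):
    n, d = A.shape
    if lr is None:
        lr = mu / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of grad f_mu
    x = np.zeros(d)
    for _ in range(iters):
        y = softmax((A @ x + b) / mu)         # smoothed "dual" variable
        grad = A.T @ y                        # gradient of f_mu at x
        x -= lr * grad
    return x, mu * logsumexp((A @ x + b) / mu)
```

    Smaller mu approximates the max more tightly but shrinks the admissible step size, which is exactly the trade-off that smoothing frameworks balance.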

    Regularization and Kernelization of the Maximin Correlation Approach

    Robust classification becomes challenging when each class consists of multiple subclasses. Examples include multi-font optical character recognition and automated protein function prediction. In correlation-based nearest-neighbor classification, the maximin correlation approach (MCA) provides the worst-case optimal solution by minimizing the maximum misclassification risk through an iterative procedure. Despite this optimality, the original MCA has drawbacks that have limited its applicability in practice: it tends to be sensitive to outliers, cannot effectively handle nonlinearities in datasets, and suffers from high computational complexity. To address these limitations, we propose an improved solution, named the regularized maximin correlation approach (R-MCA). We first reformulate MCA as a quadratically constrained linear programming (QCLP) problem, incorporate regularization by introducing slack variables in the primal problem of the QCLP, and derive the corresponding Lagrangian dual. The dual formulation enables us to apply the kernel trick to R-MCA so that it can better handle nonlinearities. Our experimental results demonstrate that the regularization and kernelization make the proposed R-MCA more robust and accurate than the original MCA for various classification tasks. Furthermore, when the data size or dimensionality grows, R-MCA runs substantially faster by solving either the primal or the dual (whichever has the smaller variable dimension) of the QCLP.
    Comment: Submitted to IEEE Access
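    To make the maximin idea concrete: with unit-normalized samples, correlation reduces to an inner product, and finding a single template whose worst-case correlation with a class is maximal becomes a small quadratically constrained program. The sketch below (using cvxpy, an assumption) illustrates this maximin template step only; it is not the paper's R-MCA formulation and is valid when the optimal worst-case correlation is positive.

```python
# Hedged sketch of the maximin template step behind MCA, assuming the rows
# of X are unit-normalized so correlation equals the inner product.
import cvxpy as cp
import numpy as np

def maximin_template(X):
    """X: (n, d) array of unit-norm samples from one class."""
    n, d = X.shape
    w = cp.Variable(d)
    t = cp.Variable()
    prob = cp.Problem(cp.Maximize(t),
                      [X @ w >= t,              # worst-case correlation >= t
                       cp.norm(w, 2) <= 1])     # quadratic constraint (QCLP-style)
    prob.solve()
    return w.value / np.linalg.norm(w.value)    # unit-norm class template
```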