
    A Family of Subgradient-Based Methods for Convex Optimization Problems in a Unifying Framework

    We propose a new family of subgradient- and gradient-based methods that converges with optimal complexity for convex optimization problems whose feasible region is simple enough. This covers cases where the objective function is non-smooth, smooth, has composite/saddle structure, or is given by an inexact oracle model. We unify the construction of the subproblems that must be solved at each iteration of these methods, which lets us analyze their convergence in a unified way, in contrast to previous results that required a different approach for each method/algorithm. Our contribution relies on two well-known methods in non-smooth convex optimization: the mirror-descent method of Nemirovski-Yudin and the dual-averaging method of Nesterov. Our family of methods therefore includes them, and many other methods, as particular cases. For instance, the proposed family of classical gradient methods and its accelerations generalizes Devolder et al.'s and Nesterov's primal/dual gradient methods and Tseng's accelerated proximal gradient methods. Parts of our family can also be recovered as special cases of other universal methods. As an additional contribution, the novel extended mirror-descent method removes both the compactness assumption on the feasible region and the need to fix the total number of iterations in advance, which the original mirror-descent method requires in order to attain the optimal complexity. Comment: 31 pages. v3: Major revision. Research Report B-477, Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, February 201
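
    To make the base update concrete, the following is a minimal sketch of the classical mirror-descent method of Nemirovski-Yudin that the proposed family generalizes, written with the entropy mirror map on the probability simplex; the feasible set, step-size rule, and toy objective are illustrative assumptions, not the paper's general setup.

        import numpy as np

        def mirror_descent_simplex(subgrad, x0, step, T):
            """Classical mirror descent with the entropy mirror map on the
            probability simplex: x_{k+1} is proportional to x_k * exp(-eta_k g_k),
            where g_k is a subgradient of the objective at x_k."""
            x = np.asarray(x0, dtype=float)
            avg = np.zeros_like(x)
            for k in range(T):
                g = subgrad(x)
                w = np.log(x) - step(k) * g   # update in the dual (mirror) space
                w -= w.max()                  # shift for numerical stability
                x = np.exp(w)
                x /= x.sum()                  # map back onto the simplex
                avg += x
            return avg / T                    # averaged iterate, as in the standard analysis

        # Toy usage: minimize f(x) = max_i (A x)_i over the simplex; a subgradient
        # of f at x is the row of A attaining the maximum.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((5, 4))
        x_bar = mirror_descent_simplex(lambda x: A[np.argmax(A @ x)],
                                       np.full(4, 0.25),
                                       lambda k: 0.5 / np.sqrt(k + 1),
                                       2000)
        print(np.max(A @ x_bar))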

    Interior Point Decoding for Linear Vector Channels

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. Linear vector channels include many practically important channels, such as intersymbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, called the relaxed MLD problem. The proposed decoding algorithm is based on a numerical optimization technique, the interior point method with a barrier function. Approximate variants of the gradient descent and Newton methods are used to solve the convex optimization problem. During decoding, the search point always lies in the fundamental polytope defined by the low-density parity-check matrix. Compared with a conventional joint message-passing decoder, the proposed algorithm achieves better BER performance with lower complexity on partial response channels in many cases. Comment: 18 pages, 17 figures. The paper has been submitted to IEEE Transactions on Information Theory
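
    As an illustration of the interior-point idea behind the decoder, the sketch below runs gradient descent on a relaxed maximum-likelihood objective with a log-barrier. It keeps only box constraints 0 <= x <= 1 in place of the fundamental polytope, and the channel model, step size, and annealing schedule are assumptions made for demonstration; it is not the authors' algorithm.

        import numpy as np

        def barrier_gradient_decode(H, y, T=500, mu0=1.0, decay=0.99, step=1e-2):
            """Toy interior-point-style decoder for y = H x + noise with x relaxed
            to the box [0, 1]^n.  Minimizes ||y - H x||^2 plus a log-barrier that
            keeps the iterate strictly inside the box; the barrier weight mu is
            annealed at every iteration.  (Box constraints stand in for the
            fundamental polytope used by the paper's decoder.)"""
            n = H.shape[1]
            x = np.full(n, 0.5)                      # start at an interior point
            mu = mu0
            for _ in range(T):
                # gradient of the data term plus the log-barrier term
                grad = 2.0 * H.T @ (H @ x - y) + mu * (1.0 / (1.0 - x) - 1.0 / x)
                x = np.clip(x - step * grad, 1e-6, 1.0 - 1e-6)
                mu *= decay                          # anneal the barrier away
            return (x > 0.5).astype(int)             # hard decision

        # Toy usage on a small, well-conditioned channel matrix.
        rng = np.random.default_rng(1)
        H = np.eye(8) + 0.1 * rng.standard_normal((8, 8))
        x_true = rng.integers(0, 2, 8)
        y = H @ x_true + 0.05 * rng.standard_normal(8)
        print(barrier_gradient_decode(H, y), x_true)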

    Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees

    Greedy optimization methods such as Matching Pursuit (MP) and Frank-Wolfe (FW) algorithms have regained popularity in recent years due to their simplicity, effectiveness, and theoretical guarantees. MP and FW address optimization over the linear span and the convex hull of a set of atoms, respectively. In this paper, we consider the intermediate case of optimization over the convex cone, parametrized as the conic hull of a generic atom set, leading to the first principled definitions of non-negative MP algorithms, for which we give explicit convergence rates and demonstrate excellent empirical performance. In particular, we derive sublinear ($\mathcal{O}(1/t)$) convergence on general smooth and convex objectives, and linear convergence ($\mathcal{O}(e^{-t})$) on strongly convex objectives, in both cases for general sets of atoms. Furthermore, we establish a clear correspondence of our algorithms to known algorithms from the MP and FW literature. Our novel algorithms and analyses target general atom sets and general objective functions, and hence are directly applicable to a large variety of learning settings. Comment: NIPS 201
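
    The following sketch illustrates the flavor of a non-negative Matching-Pursuit step over the conic hull of a finite atom set, for a simple least-squares objective; the selection rule and line search shown here are a plain greedy variant chosen for illustration and do not reproduce the authors' exact algorithms or their corrective variants.

        import numpy as np

        def nonneg_mp_least_squares(A, b, T=200):
            """Greedy non-negative MP-style method for min_x 0.5 * ||x - b||^2 over
            the conic hull of the columns of A (the atom set).  Each step picks the
            atom most aligned with the negative gradient and takes an exact
            line-search step with a non-negative coefficient."""
            x = np.zeros_like(b, dtype=float)
            coeffs = np.zeros(A.shape[1])
            for _ in range(T):
                g = x - b                       # gradient of the quadratic objective
                scores = -A.T @ g               # alignment of each atom with -gradient
                i = int(np.argmax(scores))
                if scores[i] <= 1e-12:
                    break                       # no atom gives descent within the cone
                a = A[:, i]
                gamma = scores[i] / (a @ a)     # exact line search; gamma >= 0 here
                coeffs[i] += gamma
                x = x + gamma * a
            return x, coeffs

        # Toy usage: a target that lies in the conic hull of the atoms.
        rng = np.random.default_rng(2)
        A = np.abs(rng.standard_normal((10, 6)))
        b = A @ np.array([1.0, 0.0, 2.0, 0.0, 0.5, 0.0])
        x_hat, c = nonneg_mp_least_squares(A, b)
        print(np.linalg.norm(x_hat - b), c)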