
    On iterated minimization in nonconvex optimization

    In dynamic programming and decomposition methods one often applies an iterated minimization procedure. The problem variables are partitioned into several blocks, say x and y. Treating y as a parameter, the first phase consists of minimization with respect to the variable x. In the second phase, the resulting optimal value function, which depends on y, is minimized. In this paper we treat this basic idea on a local level. It turns out that strong stability (in the sense of Kojima) in the first phase is a natural assumption. In order to show that the iterated local minima of the parametric problem lead to a local minimum of the whole problem, we use a generalized version of a positive definiteness criterion of Fujiwara-Han-Mangasarian.
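
    As a toy illustration of the two-phase procedure, the sketch below minimizes over x with y fixed and then minimizes the resulting optimal value function over y. The objective f and the use of scipy's generic local solver are illustrative assumptions, not the paper's setting.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def f(x, y):
        # Illustrative nonconvex coupling between the two blocks (not from the paper).
        return (x[0]**2 - 1.0)**2 + x[0] * y[0] + y[0]**2

    def inner_min(y):
        # Phase 1: treat y as a parameter and minimize over x.
        res = minimize(lambda x: f(x, y), x0=np.zeros(1))
        return res.fun, res.x

    def value_function(y):
        # Optimal value v(y) = min_x f(x, y), the objective of phase 2.
        return inner_min(y)[0]

    # Phase 2: minimize the optimal value function over y.
    outer = minimize(value_function, x0=np.zeros(1))
    y_star = outer.x
    x_star = inner_min(y_star)[1]
    print("iterated local minimizer:", x_star, y_star)
    ```

    Both phases use local solvers, so the result is an iterated local minimum; the paper's point is the conditions under which this pair is in fact a local minimum of the joint problem.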

    Weighted Schatten $p$-Norm Minimization for Image Denoising and Background Subtraction

    Low rank matrix approximation (LRMA), which aims to recover the underlying low rank matrix from its degraded observation, has a wide range of applications in computer vision. The latest LRMA methods resort to using the nuclear norm minimization (NNM) as a convex relaxation of the nonconvex rank minimization. However, NNM tends to over-shrink the rank components and treats the different rank components equally, limiting its flexibility in practical applications. We propose a more flexible model, namely the Weighted Schatten $p$-Norm Minimization (WSNM), to generalize the NNM to the Schatten $p$-norm minimization with weights assigned to different singular values. The proposed WSNM not only gives a better approximation to the original low-rank assumption, but also considers the importance of different rank components. We analyze the solution of WSNM and prove that, under a certain weight permutation, WSNM can be equivalently transformed into independent non-convex $\ell_p$-norm subproblems, whose global optima can be efficiently solved by the generalized iterated shrinkage algorithm. We apply WSNM to typical low-level vision problems, e.g., image denoising and background subtraction. Extensive experimental results show, both qualitatively and quantitatively, that the proposed WSNM can more effectively remove noise and model complex and dynamic scenes than state-of-the-art methods. Comment: 13 pages, 11 figures
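
    The per-singular-value subproblems can be sketched with generalized soft-thresholding (GST) applied after an SVD. The threshold rule below is the standard GST formula for a weighted $\ell_p$ penalty with $0 < p < 1$; the weights, the choice p = 0.7, and the test matrix are illustrative assumptions rather than the paper's tuned settings.

    ```python
    import numpy as np

    def gst(s, w, p, iters=10):
        # Generalized soft-thresholding for min_d 0.5*(d - s)**2 + w * d**p, 0 < p < 1.
        tau = (2.0 * w * (1.0 - p)) ** (1.0 / (2.0 - p)) \
            + w * p * (2.0 * w * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
        if s <= tau:
            return 0.0  # below the threshold the minimizer is exactly zero
        d = s
        for _ in range(iters):  # fixed-point iteration d = s - w*p*d**(p-1)
            d = s - w * p * d ** (p - 1.0)
        return d

    def wsnm_prox(Y, weights, p=0.7):
        # One weighted Schatten p-norm shrinkage step on the singular values of Y.
        U, sigma, Vt = np.linalg.svd(Y, full_matrices=False)
        shrunk = np.array([gst(s, w, p) for s, w in zip(sigma, weights)])
        return U @ np.diag(shrunk) @ Vt

    Y = np.random.randn(64, 32)
    weights = 0.05 * (np.arange(32) + 1.0)  # grow with index: leading singular values shrink less
    X_hat = wsnm_prox(Y, weights)
    print(np.linalg.matrix_rank(X_hat))
    ```

    Assigning smaller weights to larger singular values preserves the dominant rank components while aggressively shrinking the tail, which is the flexibility the abstract attributes to WSNM over NNM.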

    On the monotone and primal-dual active set schemes for $\ell^p$-type problems, $p \in (0,1]$

    Nonsmooth nonconvex optimization problems involving the $\ell^p$ quasi-norm, $p \in (0,1]$, of a linear map are considered. A monotonically convergent scheme for a regularized version of the original problem is developed, and necessary optimality conditions for the original problem are given in the form of a complementarity system amenable to computation. An algorithm for solving these necessary optimality conditions is then proposed, based on a combination of the monotone scheme and a primal-dual active set strategy. The performance of the two algorithms is studied by means of a series of numerical tests in different cases, including optimal control problems, fracture mechanics, and microscopy image reconstruction.
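
    One generic monotone scheme for an $\varepsilon$-regularized version of such problems is iterative reweighting, where each step solves a weighted Tikhonov system and, by a majorize-minimize argument, decreases the smoothed objective. The sketch below illustrates that idea for $\min_x \tfrac{1}{2}\|Ax-b\|^2 + \lambda \sum_i (x_i^2+\varepsilon)^{p/2}$; it is a stand-in under that smoothing assumption, not the paper's exact algorithm or its active-set coupling.

    ```python
    import numpy as np

    def monotone_lp(A, b, lam=0.1, p=0.5, eps=1e-6, iters=50):
        x = np.linalg.lstsq(A, b, rcond=None)[0]  # least-squares warm start
        AtA, Atb = A.T @ A, A.T @ b
        for _ in range(iters):
            # Majorize (x^2+eps)^(p/2) at the current iterate; the resulting
            # quadratic penalty has diagonal weights p*(x^2+eps)^(p/2-1).
            w = p * (x**2 + eps) ** (p / 2.0 - 1.0)
            x_new = np.linalg.solve(AtA + lam * np.diag(w), Atb)
            if np.linalg.norm(x_new - x) < 1e-10:
                break
            x = x_new
        return x

    A = np.random.randn(40, 100)
    x_true = np.zeros(100)
    x_true[[3, 17, 58]] = [1.0, -2.0, 1.5]
    b = A @ x_true
    x = monotone_lp(A, b)
    print(np.nonzero(np.abs(x) > 1e-3)[0])  # large entries on a toy sparse instance
    ```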

    Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection

    A number of variable selection methods have been proposed involving nonconvex penalty functions. These methods, which include the smoothly clipped absolute deviation (SCAD) penalty and the minimax concave penalty (MCP), have been demonstrated to have attractive theoretical properties, but model fitting is not a straightforward task, and the resulting solutions may be unstable. Here, we demonstrate the potential of coordinate descent algorithms for fitting these models, establishing theoretical convergence properties and demonstrating that they are significantly faster than competing approaches. In addition, we demonstrate the utility of convexity diagnostics to determine regions of the parameter space in which the objective function is locally convex, even though the penalty is not. Our simulation study and data examples indicate that nonconvex penalties like MCP and SCAD are worthwhile alternatives to the lasso in many applications. In particular, our numerical results suggest that MCP is the preferred approach among the three methods. Comment: Published at http://dx.doi.org/10.1214/10-AOAS388 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
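
    What makes coordinate descent attractive here is that the univariate MCP problem has a closed-form solution. The sketch below shows the standard update on a standardized design (columns with mean 0 and squared norm n); the tuning constants and synthetic data are illustrative assumptions.

    ```python
    import numpy as np

    def soft(z, lam):
        return np.sign(z) * max(abs(z) - lam, 0.0)

    def mcp_update(z, lam, gamma):
        # Closed-form minimizer of 0.5*(b - z)**2 + MCP(b; lam, gamma), gamma > 1.
        if abs(z) <= gamma * lam:
            return soft(z, lam) / (1.0 - 1.0 / gamma)
        return z  # MCP is flat beyond gamma*lam: no shrinkage of large effects

    def cd_mcp(X, y, lam=0.1, gamma=3.0, iters=100):
        n, p = X.shape
        beta = np.zeros(p)
        r = y - X @ beta  # running residual
        for _ in range(iters):
            for j in range(p):
                z = X[:, j] @ r / n + beta[j]  # partial-residual correlation
                b_new = mcp_update(z, lam, gamma)
                r -= X[:, j] * (b_new - beta[j])  # keep residual in sync
                beta[j] = b_new
        return beta

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    X = (X - X.mean(0)) / X.std(0)  # standardize: mean 0, ||x_j||^2 = n
    y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.standard_normal(200)
    print(np.nonzero(cd_mcp(X, y, lam=0.2))[0])
    ```

    The "no shrinkage beyond gamma*lam" branch is exactly what reduces the lasso's bias on large coefficients, at the cost of the nonconvexity the convexity diagnostics are meant to monitor.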

    Efficient Regret Minimization in Non-Convex Games

    We consider regret minimization in repeated games with non-convex loss functions. Minimizing the standard notion of regret is computationally intractable. Thus, we define a natural notion of regret which permits efficient optimization and generalizes offline guarantees for convergence to an approximate local optimum. We give gradient-based methods that achieve optimal regret, which in turn guarantee convergence to equilibrium in this framework. Comment: Published as a conference paper at ICML 2017
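
    A window-based method of this kind can be sketched as time-smoothed online gradient descent: at each round, descend on the average of the last w losses until its gradient is small, so the cumulative sum of squared smoothed gradients (a local-regret-style quantity) stays controlled. The window size, step size, tolerance, and toy losses below are illustrative assumptions, not the paper's exact algorithm or constants.

    ```python
    import numpy as np

    def smoothed_ogd(grad_fns, x, w, eta=0.1, tol=1e-3, max_steps=200):
        # Descend on the mean gradient of the last w losses until it is small.
        def g_avg(x):
            window = grad_fns[-w:]
            return sum(g(x) for g in window) / len(window)
        for _ in range(max_steps):
            g = g_avg(x)
            if np.linalg.norm(g) <= tol:
                break
            x = x - eta * g
        return x

    # Toy stream of nonconvex losses f_t(x) = cos(a_t . x), given by gradients.
    rng = np.random.default_rng(1)
    d, T, w = 5, 30, 5
    x = np.zeros(d)
    grad_fns, local_regret = [], 0.0
    for t in range(T):
        a = rng.standard_normal(d)
        grad_fns.append(lambda x, a=a: -np.sin(a @ x) * a)
        g_now = sum(g(x) for g in grad_fns[-w:]) / len(grad_fns[-w:])
        local_regret += np.linalg.norm(g_now) ** 2  # squared smoothed gradient at x_t
        x = smoothed_ogd(grad_fns, x, w)
    print("cumulative local regret:", local_regret)
    ```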