
    One to beat them all: "RYU" -- a unifying framework for the construction of safe balls

    In this paper, we put forth a novel framework (named "RYU") for the construction of "safe" balls, i.e., regions that provably contain the dual solution of a target optimization problem. We concentrate on the standard setup where the cost function is the sum of two terms: a closed, proper, convex, Lipschitz-smooth function and a closed, proper, convex function. The RYU framework is shown to generalize or improve upon all the results proposed in the last decade for the considered family of optimization problems. Comment: 19 pages, 1 table.
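    The safe-ball idea above translates directly into a screening test. A minimal illustrative sketch (the function name and the lasso-type setting are our assumptions, not the paper's RYU construction): if a ball of known center and radius provably contains the dual optimum of an l1-regularized problem, any feature whose worst-case correlation over that ball stays below 1 must be zero at the optimum and can be discarded.

    ```python
    import numpy as np

    def screen_with_safe_ball(X, center, radius):
        """Generic safe-ball screening test (hypothetical sketch).

        Assumes a lasso-type problem whose dual optimum theta* lies in
        the ball B(center, radius). Feature j can be safely discarded if
            max_{theta in B} |x_j^T theta| = |x_j^T center| + radius * ||x_j|| < 1.
        Returns a boolean mask: True means the feature is provably zero.
        """
        scores = np.abs(X.T @ center) + radius * np.linalg.norm(X, axis=0)
        return scores < 1.0
    ```

    The tighter the ball (smaller radius), the more features the test eliminates, which is why frameworks such as RYU focus on constructing the smallest provable ball.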

    Safe rules for the identification of zeros in the solutions of the SLOPE problem

    In this paper we propose a methodology to accelerate the resolution of the so-called "Sorted L-One Penalized Estimation" (SLOPE) problem. Our method leverages the concept of "safe screening", well studied in the literature for group-separable sparsity-inducing norms, and aims at identifying the zeros in the solution of SLOPE. More specifically, we introduce a family of n! safe screening rules for this problem, where n is the dimension of the primal variable, and propose a tractable procedure to verify if one of these tests is passed. Our procedure has a complexity O(n log n + LT), where T ≀ n is a problem-dependent constant and L is the number of zeros identified by the tests. We assess the performance of our proposed method on a numerical benchmark and emphasize that it leads to significant computational savings in many setups. Comment: 24 pages, 3 figures.

    Screening for a Reweighted Penalized Conditional Gradient Method

    The conditional gradient method (CGM) is widely used in large-scale sparse convex optimization, having a low per-iteration computational cost for structured sparse regularizers and a greedy approach to collecting nonzeros. We explore the sparsity-acquiring properties of a general penalized CGM (P-CGM) for convex regularizers and a reweighted penalized CGM (RP-CGM) for nonconvex regularizers, replacing the usual convex constraints with gauge-inspired penalties. This generalization does not increase the per-iteration complexity noticeably. Without assuming bounded iterates or using line search, we show O(1/t) convergence of the gap of each subproblem, which measures distance to a stationary point. We couple this with a screening rule which is safe in the convex case, converging to the true support at a rate O(1/ΎÂČ), where ÎŽ ≄ 0 measures how close the problem is to degeneracy. In the nonconvex case the screening rule converges to the true support in a finite number of iterations, but is not necessarily safe in the intermediate iterates. In our experiments, we verify the consistency of the method and adjust the aggressiveness of the screening rule by tuning the concavity of the regularizer.

    Safe screening tests for lasso based on firmly non-expansiveness

    This paper focuses on safe screening techniques for the LASSO problem. We derive a new sphere test, coined RFNE, exploiting the firm non-expansiveness of projection operators. Our test generalizes some methods of the literature but, unlike the latter, exploits approximated primal-dual solutions of the LASSO problem while remaining safe and effective. Our simulation results show that the proposed RFNE test outperforms the best methodology of the state of the art, namely the GAP test derived by Fercoq et al.
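    For context, the GAP test mentioned above builds its safe sphere from a dual-feasible point and the duality gap. A minimal sketch for the standard lasso min_w 0.5*||y - Xw||ÂČ + λ*||w||₁ (the function name is ours; this illustrates the GAP-style test of Fercoq et al., not the RFNE test of this paper):

    ```python
    import numpy as np

    def gap_safe_screen(X, y, lam, w):
        """GAP-style safe sphere test for the lasso (illustrative sketch).

        Any primal point w yields a dual-feasible theta by rescaling the
        residual; the duality gap then gives a radius such that the ball
        B(theta, radius) contains the dual optimum. Returns a boolean
        mask: True means the feature is provably zero at the optimum.
        """
        residual = y - X @ w
        # Rescale so theta is dual feasible: ||X^T theta||_inf <= 1
        scale = min(1.0, lam / max(np.abs(X.T @ residual).max(), 1e-12))
        theta = (scale / lam) * residual
        # Duality gap between the primal and dual objectives
        primal = 0.5 * residual @ residual + lam * np.abs(w).sum()
        diff = theta - y / lam
        dual = 0.5 * y @ y - 0.5 * lam**2 * (diff @ diff)
        gap = max(primal - dual, 0.0)
        radius = np.sqrt(2.0 * gap) / lam
        # Safe test: discard feature j if |x_j^T theta| + radius*||x_j|| < 1
        scores = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0)
        return scores < 1.0
    ```

    As the primal iterate w approaches the optimum, the gap (and hence the radius) shrinks toward zero, so the test becomes progressively more aggressive while remaining safe.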