
    An Inexact Successive Quadratic Approximation Method for Convex L-1 Regularized Optimization

    We study a Newton-like method for the minimization of an objective function that is the sum of a smooth convex function and an $\ell_1$ regularization term. This method, which is sometimes referred to in the literature as a proximal Newton method, computes a step by minimizing a piecewise quadratic model of the objective function. In order to make this approach efficient in practice, it is imperative to perform this inner minimization inexactly. In this paper, we give inexactness conditions that guarantee global convergence and that can be used to control the local rate of convergence of the iteration. Our inexactness conditions are based on a semi-smooth function that represents a (continuous) measure of the optimality conditions of the problem, and that embodies the soft-thresholding iteration. We give careful consideration to the algorithm employed for the inner minimization, and report numerical results on two test sets originating in machine learning.
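
    The soft-thresholding operator and the residual-based optimality measure the abstract alludes to are easy to state concretely. Below is a minimal Python sketch of one inexact proximal Newton step; the names and the ISTA inner solver are illustrative assumptions, not the authors' exact algorithm or stopping conditions.

        import numpy as np

        def soft_threshold(x, tau):
            # Proximal operator of tau * ||.||_1 (the soft-thresholding map).
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def optimality_residual(x, grad, lam):
            # Continuous measure of optimality for min f(x) + lam*||x||_1:
            # ||x - prox_lam(x - grad f(x))||, which is zero exactly at a solution.
            return np.linalg.norm(x - soft_threshold(x - grad, lam))

        def inexact_prox_newton_step(x, grad, H, lam, eta=0.5, max_inner=100):
            # Inner problem: min_d grad'd + 0.5 d'H d + lam*||x + d||_1, solved
            # inexactly by ISTA on the quadratic model; stop once the inner
            # residual falls below eta times the outer residual.
            L = np.linalg.norm(H, 2)   # Lipschitz constant of the model gradient
            tol = eta * optimality_residual(x, grad, lam)
            z = x.copy()
            for _ in range(max_inner):
                model_grad = grad + H @ (z - x)
                z = soft_threshold(z - model_grad / L, lam / L)
                if optimality_residual(z, grad + H @ (z - x), lam) <= tol:
                    break
            return z - x               # the (inexact) proximal Newton step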

    Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions

    We investigate three related and important problems connected to machine learning: approximating a submodular function everywhere, learning a submodular function (in a PAC-like setting [53]), and constrained minimization of submodular functions. We show that the complexity of all three problems depends on the 'curvature' of the submodular function, and provide lower and upper bounds that refine and improve previous results [3, 16, 18, 52]. Our proof techniques are fairly generic. We either use a black-box transformation of the function (for approximation and learning), or a transformation of algorithms to use an appropriate surrogate function (for minimization). Curiously, curvature has been known to influence approximations for submodular maximization [7, 55], but its effect on minimization, approximation, and learning has hitherto been open. We complete this picture, and also support our theoretical claims with empirical results. Comment: 21 pages. A shorter version appeared in Advances in NIPS-201
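
    For a monotone submodular $f$ with $f(\{j\}) > 0$, the total curvature is $\kappa_f = 1 - \min_{j \in V} [f(V) - f(V \setminus \{j\})] / f(\{j\})$: it is 0 for modular functions and 1 for fully curved ones. A small Python sketch, using an illustrative toy coverage function not taken from the paper:

        def curvature(f, V):
            # Total curvature of a monotone submodular f (callable on frozensets):
            # kappa = 1 - min_j (f(V) - f(V \ {j})) / f({j}).
            V = frozenset(V)
            fV = f(V)
            return 1.0 - min((fV - f(V - {j})) / f(frozenset({j})) for j in V)

        # Toy coverage function f(S) = number of items covered by the sets in S.
        cover = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'d'}}
        f = lambda S: len(set().union(*(cover[j] for j in S))) if S else 0
        print(curvature(f, cover.keys()))  # 0.5: partial overlap, partial curvature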

    Symmetric Submodular Function Minimization Under Hereditary Family Constraints

    We present an efficient algorithm to find non-empty minimizers of a symmetric submodular function over any family of sets closed under inclusion. This includes, for example, families defined by a cardinality constraint, a knapsack constraint, a matroid independence constraint, or any combination of such constraints. Our algorithm makes $O(n^3)$ oracle calls to the submodular function, where $n$ is the cardinality of the ground set. In contrast, the problem of minimizing a general submodular function under a cardinality constraint is known to be inapproximable within $o(\sqrt{n/\log n})$ (Svitkina and Fleischer [2008]). The algorithm is similar to an algorithm of Nagamochi and Ibaraki [1998] that finds all nontrivial inclusionwise-minimal minimizers of a symmetric submodular function over a ground set of cardinality $n$ using $O(n^3)$ oracle calls. Their procedure is in turn based on Queyranne's algorithm [1998] for minimizing a symmetric submodular function. Comment: 13 pages, submitted to SODA 201
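
    Queyranne's algorithm, the building block cited above, finds a nontrivial minimizer of a symmetric submodular function with $O(n^3)$ oracle calls via pendant pairs. A compact Python sketch of that unconstrained building block (an illustrative rendering, not the paper's constrained variant):

        from itertools import chain

        def queyranne(f, ground):
            # Nontrivial (proper, non-empty) minimizer of a symmetric submodular
            # f, given as a callable on frozensets; requires |ground| >= 2.
            clusters = [frozenset({v}) for v in ground]
            best_set, best_val = None, float('inf')
            while len(clusters) > 1:
                # Maximum-adjacency order: the last two clusters (t, u) form a
                # pendant pair, so f(u) is minimal over sets separating u from t.
                order, rest = [clusters[0]], clusters[1:]
                while rest:
                    W = frozenset(chain.from_iterable(order))
                    u = min(rest, key=lambda c: f(W | c) - f(c))
                    order.append(u)
                    rest.remove(u)
                t, u = order[-2], order[-1]
                if f(u) < best_val:
                    best_set, best_val = u, f(u)
                clusters = order[:-2] + [t | u]  # merge the pendant pair
            return best_set, best_val

        # Example oracle: the (symmetric, submodular) cut function of a 4-cycle.
        edges = {(0, 1): 3.0, (1, 2): 1.0, (2, 3): 3.0, (0, 3): 1.0}
        cut = lambda S: sum(w for (a, b), w in edges.items() if (a in S) != (b in S))
        print(queyranne(cut, range(4)))  # a minimum nontrivial cut, value 2.0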

    A transformation method for constrained-function minimization

    A direct method for constrained-function minimization is discussed. The method involves the construction of an appropriate function mapping all of a finite-dimensional space onto the region defined by the constraints. Functions which produce such a transformation are constructed for a variety of constraint regions including, for example, those arising from linear and quadratic inequalities and equalities. In addition, the computational performance of this method is studied in the situation where the Davidon-Fletcher-Powell algorithm is used to solve the resulting unconstrained problem. Good performance is demonstrated for 19 test problems by achieving rapid convergence to a solution from several widely separated starting points.
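
    As a concrete illustration of the transformation idea (a sketch under assumed details: the $\sin^2$ map is one classical choice for interval constraints, and scipy's BFGS stands in for the report's Davidon-Fletcher-Powell routine):

        import numpy as np
        from scipy.optimize import minimize

        # Map all of R onto the interval [a, b] with the smooth surjection
        # x = a + (b - a) * sin^2(t); optimizing over t is then unconstrained.
        a, b = 0.0, 2.0
        to_x = lambda t: a + (b - a) * np.sin(t) ** 2

        # Constrained problem: minimize (x - 3)^2 subject to 0 <= x <= 2;
        # the unconstrained minimum x = 3 lies outside the feasible region.
        objective = lambda t: (to_x(t[0]) - 3.0) ** 2

        res = minimize(objective, x0=np.array([0.1]), method='BFGS')
        print(to_x(res.x[0]))  # approaches 2.0, the active boundary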

    New Query Lower Bounds for Submodular Function Minimization

    We consider submodular function minimization in the oracle model: given black-box access to a submodular set function $f:2^{[n]}\rightarrow \mathbb{R}$, find an element of $\arg\min_S \{f(S)\}$ using as few queries to $f(\cdot)$ as possible. State-of-the-art algorithms succeed with $\tilde{O}(n^2)$ queries [LeeSW15], yet the best-known lower bound has never been improved beyond $n$ [Harvey08]. We provide a query lower bound of $2n$ for submodular function minimization, a $3n/2-2$ query lower bound for the non-trivial minimizer of a symmetric submodular function, and a $\binom{n}{2}$ query lower bound for the non-trivial minimizer of an asymmetric submodular function. Our $3n/2-2$ lower bound results from a connection between SFM lower bounds and a novel concept we term the cut dimension of a graph. Interestingly, this yields a $3n/2-2$ cut-query lower bound for finding the global mincut in an undirected, weighted graph, but we also prove it cannot yield a lower bound better than $n+1$ for $s$-$t$ mincut, even in a directed, weighted graph.
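
    The oracle (query) model is simple to make concrete: the only access to $f$ is through evaluation queries, and complexity is measured by counting them. A small Python sketch (illustrative only, not from the paper):

        from itertools import combinations

        class CountingOracle:
            # Black-box access to a set function, counting evaluation queries;
            # this is the model in which the bounds above are stated.
            def __init__(self, f):
                self.f, self.queries = f, 0
            def __call__(self, S):
                self.queries += 1
                return self.f(frozenset(S))

        # Brute-force SFM on the cut function of a triangle: 2^n = 8 queries,
        # far above the O~(n^2) upper bound and the 2n lower bound shown here.
        n, edges = 3, [(0, 1), (1, 2), (0, 2)]
        oracle = CountingOracle(lambda S: sum((a in S) != (b in S) for a, b in edges))
        subsets = [frozenset(c) for r in range(n + 1) for c in combinations(range(n), r)]
        best = min(subsets, key=oracle)
        print(best, oracle.queries)  # frozenset() with cut 0, after 8 queries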