
Track Category: Ant Colony Optimization and Swarm Intelligence

Spatial Extension PSO (SEPSO) and Attractive-Repulsive PSO (ARPSO) are methods for artificially injecting diversity into particle swarm optimizers, intended to encourage converged swarms to resume exploration. SEPSO is simple to implement, effective when tuned correctly, and intuitively appealing, and its behavior can be improved further by adapting its radius and bounce parameters in response to collisions. In fact, such adaptation allows SEPSO to compete with and outperform ARPSO. The adaptation strategies presented here are simple to implement, easy to tune, and retain SEPSO's intuitive appeal.
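A minimal sketch of the kind of collision handling this describes: each particle carries a spatial radius, colliding particles bounce, and the radius and bounce factor are adapted on collision. The parameter names and the specific shrink/damp rule below are illustrative assumptions, not the paper's exact adaptation strategy.

```python
import numpy as np

def handle_collisions(pos, vel, radius, bounce, shrink=0.95, damp=0.98):
    """pos, vel: arrays of shape (n_particles, dim)."""
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) < 2 * radius:
                # Collision: reflect and scale velocities by the bounce factor.
                vel[i] *= -bounce
                vel[j] *= -bounce
                # Adapt on collision: shrink the radius and damp the bounce so
                # a converging swarm is perturbed less aggressively over time.
                radius *= shrink
                bounce *= damp
    return vel, radius, bounce
```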

    Vilin: Unconstrained Numerical Optimization Application

We introduce an application for executing and testing different unconstrained optimization algorithms. The application contains a library of various test functions with pre-defined starting points. Several well-known classes of methods, as well as different classes of line search procedures, are covered. Each method can be tested on various test functions with a chosen number of parameters. Solvers come with optimal pre-defined parameter values, which simplifies usage. Additionally, a user-friendly interface gives advanced users the opportunity to apply their expertise and easily fine-tune a large number of hyperparameters to obtain even better solutions. The application can be used as a tool for developing new optimization algorithms (via a simple API), as well as for testing and comparing existing ones using the provided standard library of test functions. Special care has been taken to achieve good numerical stability in all vital parts of the application. The application is implemented in Matlab with GUI support. Comment: 23 pages, one figure
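The workflow described (test functions with pre-defined starting points, solvers with defaults, optional hyperparameter overrides) looks roughly like the following Python analogue; Vilin itself is a Matlab application, and the names and keys here are hypothetical, not its actual API.

```python
import numpy as np
from scipy.optimize import minimize

# Library of test functions with pre-defined starting points (illustrative).
TEST_FUNCTIONS = {
    "rosenbrock": (lambda x: 100*(x[1]-x[0]**2)**2 + (1-x[0])**2, np.array([-1.2, 1.0])),
    "sphere":     (lambda x: float(np.dot(x, x)),                 np.array([3.0, -4.0])),
}

def run_solver(name, method="BFGS", **hyperparams):
    f, x0 = TEST_FUNCTIONS[name]          # test function with its pre-defined start
    return minimize(f, x0, method=method, options=hyperparams)

print(run_solver("rosenbrock", method="CG", maxiter=500).x)
```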

    Fair Classification via Unconstrained Optimization

Achieving the Bayes optimal binary classification rule subject to group fairness constraints is known to be reducible, in some cases, to learning a group-wise thresholding rule over the Bayes regressor. In this paper, we extend this result by proving that, in a broader setting, the Bayes optimal fair learning rule remains a group-wise thresholding rule over the Bayes regressor, but with a (possible) randomization at the thresholds. This provides a stronger justification for the post-processing approach in fair classification, in which (1) a predictor is learned first, after which (2) its output is adjusted to remove bias. We show how the post-processing rule in this two-stage approach can be learned quite efficiently by solving an unconstrained optimization problem. The proposed algorithm can be applied to any black-box machine learning model, such as deep neural networks, random forests, and support vector machines. In addition, it can accommodate many fairness criteria previously proposed in the literature, such as equalized odds and statistical parity. We prove that the algorithm is Bayes consistent and motivate it, furthermore, via an impossibility result that quantifies the tradeoff between accuracy and fairness across multiple demographic groups. Finally, we conclude by validating the algorithm on the Adult benchmark dataset.
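A minimal sketch of what a group-wise thresholding rule with randomization at the thresholds looks like as a post-processing step; the thresholds and randomization probabilities are assumed inputs here, whereas in the paper they are obtained by solving an unconstrained optimization problem.

```python
import numpy as np

def post_process(scores, groups, thresholds, rand_prob):
    """scores: Bayes-regressor outputs in [0, 1]; groups: group id per sample."""
    preds = np.empty_like(scores, dtype=int)
    for g, t in thresholds.items():
        idx = groups == g
        preds[idx] = (scores[idx] > t).astype(int)
        # Randomize decisions that fall exactly at the group's threshold.
        at_t = idx & np.isclose(scores, t)
        preds[at_t] = (np.random.rand(at_t.sum()) < rand_prob[g]).astype(int)
    return preds
```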

    Fixed-Time Stable Gradient Flows: Applications to Continuous-Time Optimization

This paper proposes novel gradient-flow schemes that yield convergence to the optimal point of a convex optimization problem within a \textit{fixed} time from any given initial condition for unconstrained optimization, constrained optimization, and min-max problems. The application of the modified gradient flow to unconstrained optimization problems is studied under the assumption of gradient-dominance. Then, a modified Newton's method is presented that exhibits fixed-time convergence under some mild conditions on the objective function. Building upon this method, a novel technique for solving convex optimization problems with linear equality constraints that yields convergence to the optimal point in fixed time is developed. More specifically, constrained optimization problems formulated as min-max problems are considered, and a novel method for computing the optimal solution in fixed time is proposed using the Lagrangian dual. Finally, the general min-max problem is considered, and a modified scheme to obtain the optimal solution of saddle-point dynamics in fixed time is developed. Numerical illustrations that compare the performance of the proposed method against Newton's method, the rescaled-gradient method, and Nesterov's accelerated method are included to corroborate the efficacy and applicability of the modified gradient flows in constrained and unconstrained optimization problems. Comment: 15 pages, 11 figures
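For intuition, a hedged sketch of a fixed-time-style rescaled gradient flow discretized by forward Euler on a convex quadratic. The specific flow dx/dt = -c1 g/||g||^(1/2) - c2 g ||g||^(1/2) is one common fixed-time-stabilizing form used here purely for illustration; it is not necessarily the exact scheme of the paper.

```python
import numpy as np

def fixed_time_flow(grad, x0, c1=1.0, c2=1.0, dt=1e-3, steps=20000, tol=1e-10):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        ng = np.linalg.norm(g)
        if ng < tol:
            break
        # Two rescaled gradient terms: one dominates far from the optimum,
        # the other near it, which is what yields fixed-time behavior.
        x = x - dt * (c1 * g / np.sqrt(ng) + c2 * g * np.sqrt(ng))
    return x

# Example: minimize f(x) = 0.5 * x^T A x with A positive definite.
A = np.diag([1.0, 10.0])
print(fixed_time_flow(lambda x: A @ x, x0=[5.0, -3.0]))
```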

    An efficient nonmonotone adaptive trust region method for unconstrained optimization

In this paper, we propose a new and efficient nonmonotone adaptive trust region algorithm for solving unconstrained optimization problems. The algorithm incorporates two novelties: it benefits from a radius-dependent shrinkage parameter for adjusting the trust region radius that avoids undesirable directions, and it exploits a strategy to prevent sudden increases of objective function values in nonmonotone trust region techniques. Global convergence of the algorithm is investigated under mild conditions. Numerical experiments demonstrate the efficiency and robustness of the proposed algorithm on a collection of unconstrained optimization problems from the CUTEst package.
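The skeleton below shows where the two ingredients plug into a trust-region step: a nonmonotone reference value in the acceptance ratio and a radius-dependent shrinkage factor. The particular formulas for f_ref and sigma(delta) are illustrative assumptions, not the paper's exact rules.

```python
def trust_region_update(f_trial, f_hist, pred_reduction, delta,
                        eta=0.1, window=5):
    f_ref = max(f_hist[-window:])                 # nonmonotone reference value
    rho = (f_ref - f_trial) / max(pred_reduction, 1e-16)
    sigma = 0.25 + 0.5 / (1.0 + delta)            # radius-dependent shrinkage factor
    if rho < eta:
        delta *= sigma                            # shrink: poor agreement with the model
        accepted = False
    else:
        delta *= 2.0 if rho > 0.75 else 1.0       # expand on very good agreement
        accepted = True
    return accepted, delta
```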

    Two globally convergent nonmonotone trust-region methods for unconstrained optimization

This paper addresses trust-region methods equipped with nonmonotone strategies for solving nonlinear unconstrained optimization problems. More specifically, the importance of using nonmonotone techniques in nonlinear optimization is motivated, then two new nonmonotone terms are proposed, and their combination with the traditional trust-region framework is studied. Global convergence to first- and second-order stationary points and local superlinear and quadratic convergence rates are established for both algorithms. Numerical experiments on the \textsf{CUTEst} test collection of unconstrained problems and on some highly nonlinear test functions are reported, where a comparison with state-of-the-art nonmonotone trust-region methods shows the efficiency of the proposed nonmonotone schemes.
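To make "nonmonotone term" concrete, two standard ways of building a reference value from recent objective values are sketched below (max-type and exponentially weighted average); these are generic constructions for illustration only and are not the paper's two specific proposals.

```python
import numpy as np

def nonmonotone_max(f_hist, window=10):
    # Max-type reference: allows temporary increases up to the worst of the
    # last `window` objective values.
    return max(f_hist[-window:])

def nonmonotone_weighted(f_hist, eta=0.85):
    # Weighted-average reference: smoother, discounts older objective values.
    w = eta ** np.arange(len(f_hist))[::-1]
    return float(np.dot(w, f_hist) / w.sum())
```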

    An unconstrained optimization approach for finding real eigenvalues of even order symmetric tensors

Let $n$ be a positive integer and $m$ be a positive even integer. Let ${\mathcal A}$ be an $m^{th}$ order $n$-dimensional real weakly symmetric tensor and ${\mathcal B}$ be a real weakly symmetric positive definite tensor of the same size. $\lambda \in R$ is called a ${\mathcal B}_r$-eigenvalue of ${\mathcal A}$ if ${\mathcal A} x^{m-1} = \lambda {\mathcal B} x^{m-1}$ for some $x \in R^n \backslash \{0\}$. In this paper, we introduce two unconstrained optimization problems and obtain some variational characterizations for the minimum and maximum ${\mathcal B}_r$-eigenvalues of ${\mathcal A}$. Our results extend Auchmuty's unconstrained variational principles for eigenvalues of real symmetric matrices. This unconstrained optimization approach can be used to find a Z-, H-, or D-eigenvalue of an even order weakly symmetric tensor. We provide some numerical results to illustrate the effectiveness of this approach for finding a Z-eigenvalue and for determining the positive semidefiniteness of an even order symmetric tensor. Comment: 24 pages
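As a purely illustrative example of recovering a Z-eigenpair numerically through unconstrained optimization, the sketch below minimizes a scale-invariant Rayleigh-type quotient for a small symmetric fourth-order tensor; this generic quotient is not the Auchmuty-style functional constructed in the paper.

```python
import numpy as np
from itertools import permutations
from scipy.optimize import minimize

def Ax3(A, x):                       # A x^{m-1} for m = 4
    return np.einsum('ijkl,j,k,l->i', A, x, x, x)

def neg_quotient(x, A):              # -(A x^4) / (x^T x)^2, scale-invariant
    return -np.dot(Ax3(A, x), x) / np.dot(x, x) ** 2

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3, 3, 3))
A = sum(T.transpose(p) for p in permutations(range(4))) / 24   # full symmetrization

x = minimize(neg_quotient, rng.standard_normal(3), args=(A,)).x
x /= np.linalg.norm(x)
lam = np.dot(Ax3(A, x), x)           # Z-eigenpair: A x^{m-1} = lam * x with ||x|| = 1
print(lam, np.linalg.norm(Ax3(A, x) - lam * x))
```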

    An Asynchronous Parallel Stochastic Coordinate Descent Algorithm

We describe an asynchronous parallel stochastic coordinate descent algorithm for minimizing smooth unconstrained or separably constrained functions. The method achieves a linear convergence rate on functions that satisfy an essential strong convexity property and a sublinear rate ($1/K$) on general convex functions. Near-linear speedup on a multicore system can be expected if the number of processors is $O(n^{1/2})$ in unconstrained optimization and $O(n^{1/4})$ in the separable-constrained case, where $n$ is the number of variables. We describe results from implementation on 40-core processors.
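A serial sketch of stochastic coordinate descent on a least-squares problem is shown below; in the asynchronous parallel variant the paper describes, several workers run the same inner loop on a shared x without locks, possibly reading stale coordinates. The test problem and step rule here are illustrative choices.

```python
import numpy as np

def stochastic_cd(A, b, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    L = (A * A).sum(axis=0)              # per-coordinate Lipschitz constants
    x = np.zeros(n)
    for _ in range(steps):
        i = rng.integers(n)              # pick a coordinate uniformly at random
        g_i = A[:, i] @ (A @ x - b)      # partial derivative of 0.5*||Ax - b||^2
        x[i] -= g_i / L[i]               # exact minimization along coordinate i
    return x

A = np.random.default_rng(1).standard_normal((20, 5))
b = A @ np.ones(5)
print(stochastic_cd(A, b))               # should approach the all-ones vector
```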

    Minimax Optimal Algorithms for Unconstrained Linear Optimization

We design and analyze minimax-optimal algorithms for online linear optimization games where the player's choice is unconstrained. The player strives to minimize regret, the difference between his loss and the loss of a post-hoc benchmark strategy. The standard benchmark is the loss of the best strategy chosen from a bounded comparator set. When the comparator set and the adversary's gradients satisfy L_infinity bounds, we give the value of the game in closed form and prove it approaches sqrt(2T/pi) as T -> infinity. Interesting algorithms result when we consider soft constraints on the comparator, rather than restricting it to a bounded set. As a warmup, we analyze the game with a quadratic penalty. The value of this game is exactly T/2, and this value is achieved by perhaps the simplest online algorithm of all: unprojected gradient descent with a constant learning rate. We then derive a minimax-optimal algorithm for a much softer penalty function. This algorithm achieves good bounds under the standard notion of regret for any comparator point, without needing to specify the comparator set in advance. The value of this game converges to sqrt(e) as T -> infinity; we give a closed form for the exact value as a function of T. The resulting algorithm is natural in unconstrained investment or betting scenarios, since it guarantees at worst constant loss, while allowing for exponential reward against an "easy" adversary.
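A minimal sketch of the "simplest online algorithm of all" mentioned in the abstract: unprojected online gradient descent with a constant learning rate playing an unconstrained online linear game. The adversary and the learning rate below are illustrative; the paper analyzes the minimax game exactly.

```python
import numpy as np

def ogd_unconstrained(gradients, lr=1.0):
    w, loss = 0.0, 0.0
    for g in gradients:                # g_t in [-1, 1]: adversary's gradient
        loss += g * w                  # linear loss <g_t, w_t>
        w -= lr * g                    # unprojected constant-step update
    return w, loss

T = 100
rng = np.random.default_rng(0)
print(ogd_unconstrained(rng.uniform(-1, 1, size=T)))
```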

    On the Generalized Essential Matrix Correction: An efficient solution to the problem and its applications

This paper addresses the problem of finding the closest generalized essential matrix to a given $6\times 6$ matrix, with respect to the Frobenius norm. To the best of our knowledge, this nonlinear constrained optimization problem has not yet been addressed in the literature. Although it can be solved directly, it involves a large number of constraints, and any optimization method applied to it would require considerable computational effort. We start by deriving a couple of unconstrained formulations of the problem. After that, we convert the original problem into a new one involving only orthogonality constraints and propose an efficient steepest-descent-type algorithm to find its solution. To test the algorithms, we evaluate the methods on synthetic data and conclude that the proposed steepest-descent-type approach is much faster than the direct application of general optimization techniques to the original formulation with 33 constraints and to the unconstrained formulations. To further motivate the relevance of our method, we apply it to two pose problems (relative and absolute) using synthetic and real data. Comment: 14 pages, 7 figures, journal
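As a much simplified analogue of "steepest descent under orthogonality constraints only", the sketch below finds an orthogonal matrix close (in Frobenius norm) to a given matrix by gradient steps projected onto the tangent space of O(n), followed by an SVD (polar) retraction. The generalized-essential-matrix structure from the paper is not modeled here; this only illustrates the optimization pattern.

```python
import numpy as np

def closest_orthogonal_gd(M, steps=300, lr=0.1):
    n = M.shape[0]
    R = np.eye(n)
    for _ in range(steps):
        G = 2.0 * (R - M)                        # Euclidean gradient of ||R - M||_F^2
        A = R.T @ G
        G_tan = R @ (A - A.T) / 2.0              # projection onto the tangent space at R
        U, _, Vt = np.linalg.svd(R - lr * G_tan) # retraction back onto O(n)
        R = U @ Vt
    return R

M = np.random.default_rng(0).standard_normal((3, 3))
R = closest_orthogonal_gd(M)
print(np.linalg.norm(R - M), np.linalg.norm(R.T @ R - np.eye(3)))  # objective, orthogonality check
```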