
    Parallel projected variable metric algorithms for unconstrained optimization

    The parallel variable metric optimization algorithms of Straeter (1973) and van Laarhoven (1985) are reviewed, and possible drawbacks of these algorithms are noted. By including Davidon (1975) projections in the variable metric updating, Straeter's algorithm is generalized to a family of parallel projected variable metric algorithms which do not suffer from these drawbacks and which retain quadratic termination. Finally, the numerical performance of one member of the family is considered on several standard example problems, illustrating how the choice of the displacement vectors affects the performance of the algorithm.
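    As a rough illustration of the quadratic termination property mentioned above, the following Python sketch accumulates rank-one corrections along a set of displacement vectors on a quadratic objective; after n independent displacements the metric recovers the exact inverse Hessian. The specific (SR1-type) update formula and the choice of unit-basis displacements are illustrative assumptions, not the Davidon-projected update of the paper.

        import numpy as np

        # On a quadratic f(x) = 0.5 x^T A x, gradient differences satisfy y = A s,
        # so rank-one corrections along n independent displacement vectors
        # recover the exact inverse Hessian -- the quadratic termination property.
        A = np.array([[4.0, 1.0],
                      [1.0, 3.0]])              # SPD Hessian of the quadratic
        H = np.eye(2)                           # initial metric approximation
        for s in np.eye(2):                     # displacement vectors (unit basis)
            y = A @ s                           # exact gradient difference on a quadratic
            v = s - H @ y
            H += np.outer(v, v) / (v @ y)       # symmetric rank-one correction
        print(np.allclose(H, np.linalg.inv(A)))  # True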

    Pulsar Algorithms: A Class of Coarse-Grain Parallel Nonlinear Optimization Algorithms

    Parallel architectures of modern computers, formed of processors with high computing power, motivate the search for new approaches to basic computational algorithms. Another motivating force for the parallelization of algorithms has been the need to solve very large scale or complex problems. However, the complexity of a mathematical programming problem is not necessarily due to its scale or dimension; thus, we should also search for new parallel computation approaches to problems that might have a moderate size but are difficult for other reasons. One such approach is coarse-grained parallelization based on a parametric imbedding of an algorithm and on an allocation of the resulting algorithmic phases and variants to many processors, with suitable coordination of the data obtained this way. Each processor then performs a phase of the algorithm -- a substantial computational task, which mitigates the problems related to data transmission and coordination. The paper presents a class of such coarse-grained parallel algorithms for unconstrained nonlinear optimization, called pulsar algorithms since the approximations of an optimal solution alternately increase and reduce their spread in subsequent iterations. The main algorithmic phase of an algorithm of this class might be either a directional search or a restricted step determination in a trust region method. This class is exemplified by a modified, parallel Newton-type algorithm and a parallel rank-one variable metric algorithm. In the latter case, a consistent approximation of the inverse of the Hessian matrix based on data produced in parallel is available at each iteration, while the known deficiencies of a rank-one variable metric update are suppressed by the parallel implementation. Additionally, pulsar algorithms might use a parametric imbedding into a family of regularized problems in order to counteract possible effects of ill-conditioning. Such parallel algorithms result not only in an increased speed of solving a problem but also in increased robustness with respect to various sources of complexity of the problem. Necessary theoretical foundations, outlines of various variants of parallel algorithms, and the results of preliminary tests are presented.
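    To make the coarse-grained scheme concrete, here is a minimal Python sketch of one "pulse": several variants of a regularized Newton step, indexed by a damping parameter in the spirit of the parametric imbedding into regularized problems, are evaluated concurrently and the best resulting point is kept. The objective, the parameter grid, and the selection rule are illustrative assumptions, not the paper's algorithm; a real implementation would allocate the variants to separate processors rather than threads.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        Q = np.diag([1.0, 100.0])                # deliberately ill-conditioned quadratic

        def f(x):
            return 0.5 * x @ Q @ x

        def grad(x):
            return Q @ x

        def variant(args):
            """One algorithmic variant: a Newton step with Tikhonov damping lam."""
            x, lam = args
            H = Q + lam * np.eye(len(x))         # regularized Hessian
            x_new = x + np.linalg.solve(H, -grad(x))
            return f(x_new), x_new

        def pulse(x, lams=(0.0, 0.1, 1.0, 10.0)):
            """Evaluate all variants concurrently; keep the best resulting point."""
            with ThreadPoolExecutor() as pool:
                results = list(pool.map(variant, [(x, lam) for lam in lams]))
            return min(results, key=lambda r: r[0])[1]

        x = pulse(np.array([1.0, 1.0]))          # one coarse-grained iteration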

    An ADMM Based Framework for AutoML Pipeline Configuration

    We study the AutoML problem of automatically configuring machine learning pipelines by jointly selecting algorithms and their appropriate hyper-parameters for all steps in supervised learning pipelines. This black-box (gradient-free) optimization with mixed integer and continuous variables is a challenging problem. We propose a novel AutoML scheme by leveraging the alternating direction method of multipliers (ADMM). The proposed framework is able to (i) decompose the optimization problem into easier sub-problems that have a reduced number of variables and circumvent the challenge of mixed variable categories, and (ii) incorporate black-box constraints alongside the black-box optimization objective. We empirically evaluate the flexibility (in utilizing existing AutoML techniques), effectiveness (against open-source AutoML toolkits), and unique capability (of executing AutoML with practically motivated black-box constraints) of our proposed scheme on a collection of binary classification data sets from the UCI ML and OpenML repositories. We observe that on average our framework provides significant gains in comparison to other AutoML frameworks (Auto-sklearn and TPOT), highlighting the practical advantages of this framework.
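    The decomposition at the heart of the framework is standard ADMM operator splitting. As a reference point only, here is a textbook Python sketch of the alternating updates on a lasso problem; in the paper, the closed-form x- and z-steps are replaced by AutoML sub-problems (algorithm selection and hyper-parameter tuning), so everything below is a generic illustration rather than the proposed scheme.

        import numpy as np

        def soft_threshold(v, k):
            return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

        def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
            """min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z, via scaled ADMM."""
            n = A.shape[1]
            x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
            AtA, Atb = A.T @ A, A.T @ b
            M = AtA + rho * np.eye(n)                         # form once, reuse
            for _ in range(iters):
                x = np.linalg.solve(M, Atb + rho * (z - u))   # smooth sub-problem
                z = soft_threshold(x + u, lam / rho)          # non-smooth sub-problem
                u = u + x - z                                 # dual update on x = z
            return z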

    OPTIMASS: A Package for the Minimization of Kinematic Mass Functions with Constraints

    Reconstructed mass variables, such as $M_2$, $M_{2C}$, $M_T^\star$, and $M_{T2}^W$, play an essential role in searches for new physics at hadron colliders. The calculation of these variables generally involves constrained minimization in a large parameter space, which is numerically challenging. We provide a C++ code, OPTIMASS, which interfaces with the MINUIT library to perform this constrained minimization using the Augmented Lagrangian Method. The code can be applied to arbitrarily general event topologies and thus allows the user to significantly extend the existing set of kinematic variables. We describe this code and its physics motivation, and demonstrate its use in the analysis of the fully leptonic decay of pair-produced top quarks using the $M_2$ variables.
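    The Augmented Lagrangian Method used by OPTIMASS is generic; the following Python sketch shows its outer loop for an equality-constrained minimization. The package itself is C++ on top of MINUIT, so the inner solver, names, and update schedule below are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def augmented_lagrangian(f, c, x0, mu=10.0, outer=20, tol=1e-8):
            """Minimize f(x) subject to c(x) = 0 (vector-valued) via the
            augmented Lagrangian method: unconstrained inner solves plus
            first-order multiplier updates."""
            x = np.asarray(x0, dtype=float)
            lam = np.zeros_like(np.atleast_1d(c(x)))
            for _ in range(outer):
                def L(xv):
                    cv = np.atleast_1d(c(xv))
                    return f(xv) + lam @ cv + 0.5 * mu * cv @ cv
                x = minimize(L, x, method="BFGS").x    # inner unconstrained solve
                cv = np.atleast_1d(c(x))
                if np.linalg.norm(cv) < tol:
                    break
                lam = lam + mu * cv                    # multiplier update
                mu *= 2.0                              # tighten the penalty
            return x

        # e.g. min x1^2 + x2^2  s.t.  x1 + x2 = 1  ->  (0.5, 0.5)
        x = augmented_lagrangian(lambda x: x @ x,
                                 lambda x: x[0] + x[1] - 1.0,
                                 x0=[0.0, 0.0])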

    A parallel variable metric optimization algorithm

    An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence occurs in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to achieve faster convergence than serial techniques.
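    A minimal Python sketch of one such cycle follows, with concurrent gradient evaluations at displaced points, p rank-one corrections, and a crude backtracking search standing in for the univariate minimization; the displacement choice, update formula, and tolerances are illustrative assumptions rather than the paper's exact prescriptions.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def pvm_cycle(f, grad, x, H, p, h=1e-3):
            """One cycle of a parallel variable metric method (sketch):
            p concurrent gradient evaluations, p rank-one corrections to the
            inverse-Hessian estimate H, then one search along -H grad(x)."""
            n = len(x)
            dirs = [h * np.eye(n)[i % n] for i in range(p)]    # displacement vectors
            g0 = grad(x)
            with ThreadPoolExecutor() as pool:                 # p parallel evaluations
                grads = list(pool.map(lambda d: grad(x + d), dirs))
            for d, g in zip(dirs, grads):
                y = g - g0
                v = d - H @ y
                if abs(v @ y) > 1e-12:                         # skip degenerate updates
                    H = H + np.outer(v, v) / (v @ y)           # rank-one correction
            direction = -H @ g0
            t = 1.0                                            # backtracking stands in
            while f(x + t * direction) >= f(x) and t > 1e-10:  # for the 1-D minimization
                t *= 0.5
            return x + t * direction, H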

    On limited-memory quasi-Newton methods for minimizing a quadratic function

    The main focus in this paper is exact linesearch methods for minimizing a quadratic function whose Hessian is positive definite. We give two classes of limited-memory quasi-Newton Hessian approximations that generate search directions parallel to those of the method of preconditioned conjugate gradients, and hence give finite termination on quadratic optimization problems. The Hessian approximations are described by a novel compact representation which provides a dynamical framework. We also discuss possible extensions of these classes and show their behavior on randomly generated quadratic optimization problems. The methods behave numerically similarly to L-BFGS. Including information from the first iteration in the limited-memory Hessian approximation and in L-BFGS significantly reduces the effects of round-off errors on the considered problems. In addition, we give our compact representation of the Hessian approximations in the full Broyden class for the general unconstrained optimization problem. This representation consists of explicit matrices, with gradients only as vector components.
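    For reference, the L-BFGS baseline the methods are compared against applies its implicit inverse Hessian through the standard two-loop recursion; a minimal Python sketch follows (this is the textbook recursion, not the paper's compact representation).

        import numpy as np

        def lbfgs_direction(g, s_list, y_list):
            """Apply the implicit L-BFGS inverse Hessian to gradient g via the
            two-loop recursion; s_list/y_list hold the m most recent step and
            gradient-change pairs (oldest first)."""
            q = g.astype(float).copy()
            alphas = []
            for s, y in zip(reversed(s_list), reversed(y_list)):
                a = (s @ q) / (y @ s)
                alphas.append(a)
                q -= a * y
            if s_list:                       # standard gamma * I initial scaling
                s, y = s_list[-1], y_list[-1]
                q *= (s @ y) / (y @ y)
            for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
                b = (y @ q) / (y @ s)
                q += (a - b) * s
            return -q                        # quasi-Newton search direction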