
    Solving one-dimensional unconstrained global optimization problem using parameter free filled function method

    It is generally known that almost all filled function methods for one-dimensional unconstrained global optimization problems suffer from computational weaknesses. This paper introduces a parameter-free filled function that creates a non-ascending bridge from any isolated local minimizer to the next isolated local minimizer with a lower or equal function value. The proposed function can also be used to determine all extreme and inflection points between the two consecutive isolated local minimizers under consideration, and the method performs this task reliably. Results on several test examples demonstrate the capability and efficiency of the algorithm and show that the computational weaknesses of filled function methods can be overcome.
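
    As a concrete illustration of the filled-function template this abstract builds on, the sketch below runs a filled-function sweep on a toy one-dimensional objective. The objective f, the weight MU, the starting offset, and the search bounds are all illustrative assumptions; the filled function G is a generic continuous form, not the parameter-free function introduced in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Hypothetical multimodal test objective (not from the paper).
    return np.sin(3.0 * x) + 0.1 * (x - 1.0) ** 2

MU = 100.0  # weight of the "pull into a lower basin" term (tuning assumption)

def G(x, x_star, f_star):
    # Generic filled function: while f(x) >= f_star it equals -(x - x_star)^2
    # and slopes away from x_star, so local descent escapes the basin; once
    # f(x) < f_star the MU-weighted term pulls the descent toward the lower
    # minimizer. Continuous at the seam f(x) = f_star.
    return MU * min(f(x) - f_star, 0.0) - (x - x_star) ** 2

def local_min(x0):
    res = minimize(lambda z: f(z[0]), x0=[x0], method="Nelder-Mead")
    return res.x[0], res.fun

x_star, f_star = local_min(0.0)
improved = True
while improved:
    improved = False
    for direction in (1.0, -1.0):
        # Escape phase: local descent on the filled function, started just
        # off the current minimizer, in each direction.
        res = minimize(lambda z: G(z[0], x_star, f_star),
                       x0=[x_star + 0.1 * direction],
                       method="Nelder-Mead", bounds=[(-10.0, 10.0)])
        x_esc = res.x[0]
        if f(x_esc) < f_star:          # landed in a lower basin
            x_star, f_star = local_min(x_esc)
            improved = True
            break
print(f"approximate global minimizer: x = {x_star:.4f}, f = {f_star:.4f}")
```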

    Generalized Schwarzschild's method

    We describe a new finite element method (FEM) to construct continuous equilibrium distribution functions of stellar systems. The method is a generalization of Schwarzschild's orbit superposition method from the space of discrete functions to continuous ones. In contrast to Schwarzschild's method, FEM produces a continuous distribution function (DF) and satisfies the intra-element continuity and Jeans equations. The method employs two finite-element meshes, one in configuration space and one in action space. The DF is represented by its values at the nodes of the action-space mesh and by interpolating functions inside the elements. The Galerkin projection of all equations that involve the DF leads to a linear system of equations, which can be solved for the nodal values of the DF using linear or quadratic programming, or other optimization methods. We illustrate the superior performance of FEM by constructing ergodic and anisotropic equilibrium DFs for spherical stellar systems (Hernquist models). We also show that explicitly constraining the DF by the Jeans equations leads to smoother and/or more accurate solutions with both Schwarzschild's method and FEM.
    Comment: 14 pages, 7 figures; submitted to MNRAS.
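
    The final step described above, solving the Galerkin linear system for the nodal DF values by linear or quadratic programming, can be sketched with a toy nonnegative least-squares problem. The matrix A and vector b below are random stand-ins (assumptions); in the actual method they are assembled by integrating the interpolating functions over the elements, with Jeans-equation rows appended.

```python
import numpy as np
from scipy.optimize import nnls

# Toy stand-in for the Galerkin system: find nonnegative nodal DF values w
# with A w ~= b. In the method, A and b come from projecting the equations
# that involve the DF onto the finite-element basis.
rng = np.random.default_rng(0)
n_constraints, n_nodes = 20, 50
A = rng.random((n_constraints, n_nodes))   # hypothetical projection matrix
w_true = rng.random(n_nodes)               # a feasible nonnegative DF
b = A @ w_true                             # target moments

# Nonnegative least squares is the simplest quadratic program of this family:
# minimize ||A w - b||^2 subject to w >= 0.
w, residual = nnls(A, b)
print(f"residual = {residual:.2e}, min nodal value = {w.min():.2e}")
```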

    Matrix recovery using Split Bregman

    In this paper we address the problem of recovering a matrix with inherent low-rank structure from its lower-dimensional projections. This problem is frequently encountered in a wide range of areas, including pattern recognition, wireless sensor networks, control systems, recommender systems, and image/video reconstruction. Both in theory and in practice, the most effective way to solve the low-rank matrix recovery problem is via nuclear norm minimization. In this paper, we propose a Split Bregman algorithm for nuclear norm minimization. The Bregman technique improves the convergence speed of our algorithm and gives a higher success rate; the reconstruction accuracy is also much better, even when only a small number of linear measurements is available. Our claims are supported by empirical results obtained with our algorithm and by comparison with other existing methods for matrix recovery. The algorithms are compared on the basis of NMSE, execution time, and success rate for varying ranks and sampling ratios.
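
    A minimal sketch of split Bregman for nuclear-norm minimization, specialized to matrix completion with an entrywise sampling mask, is given below. The splitting, the penalty parameters mu and lam, and the iteration count are generic choices, not necessarily those used in the paper.

```python
import numpy as np

def svt(Z, tau):
    # Singular value thresholding: the proximal operator of tau * ||.||_*.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def split_bregman_completion(M_obs, mask, mu=1.0, lam=1.0, n_iter=200):
    # Split Bregman for nuclear-norm matrix completion via the split X = Y
    # and a quadratic data penalty on the observed entries (a generic
    # variant, not necessarily the paper's exact formulation).
    X = np.zeros_like(M_obs)
    Y = np.zeros_like(M_obs)
    B = np.zeros_like(M_obs)               # Bregman (scaled dual) variable
    for _ in range(n_iter):
        # X-update: elementwise least squares, since the sampling operator
        # is an entrywise 0/1 mask.
        X = (mu * mask * M_obs + lam * (Y - B)) / (mu * mask + lam)
        # Y-update: singular value thresholding of X + B.
        Y = svt(X + B, 1.0 / lam)
        # Bregman update of the residual.
        B = B + X - Y
    return Y

# Toy experiment: random rank-3 matrix, 40% of entries observed.
rng = np.random.default_rng(1)
M = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 60))
mask = (rng.random(M.shape) < 0.4).astype(float)
X_hat = split_bregman_completion(M * mask, mask)
print(f"relative recovery error: {np.linalg.norm(X_hat - M) / np.linalg.norm(M):.3f}")
```

    Because the sampling operator is an entrywise mask, the X-update decouples into independent scalar problems, so each iteration costs little beyond the SVD in the thresholding step.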

    Global optimality conditions and optimization methods for polynomial programming problems and their applications

    The polynomial programming problem, which has a polynomial objective function either with no constraints or with polynomial constraints, occurs frequently in engineering design, investment science, control theory, network distribution, signal processing, and location-allocation contexts. Moreover, the polynomial programming problem is known to be NP-hard. It has attracted a lot of attention and includes quadratic, cubic, and homogeneous or normal quartic programming problems as special cases. Existing methods for solving polynomial programming problems include algebraic methods and various convex relaxation methods; among these, semidefinite programming (SDP) and sum of squares (SOS) relaxations are especially popular. Theoretically, SDP and SOS relaxation methods are very powerful and successful in solving the general polynomial programming problem with a compact feasible region. In practice, however, their solvability depends on the size or the degree of the problem and the required accuracy, so solving large-scale SDP problems remains a computational challenge.

    It is well known that traditional local optimization methods are designed from necessary local optimality conditions, i.e., the Karush-Kuhn-Tucker (KKT) conditions. Motivated by this, some researchers proposed a necessary global optimality condition for quadratic programming problems and designed a new local optimization method based on it. In this thesis, we apply this idea to cubic and quartic programming problems, and further to general unconstrained and constrained polynomial programming problems. For these problems, we investigate necessary global optimality conditions and design new local optimization methods based on them. These conditions are generally stronger than the KKT conditions, so the new local minimizers obtained by the new local optimization methods may improve on some KKT points. Our ultimate aim is to design global optimization methods for these polynomial programming problems. Since the filled function method is one of the best-known practical auxiliary function methods for reaching a global minimizer, we design global optimization methods by combining the new local optimization methods with auxiliary functions. Numerical examples illustrate the efficiency and stability of these methods.

    Finally, we discuss applications to sensor network localization problems and systems of polynomial equations. We also carry the idea and results for polynomial programming problems over to nonlinear programming problems (NLP): we provide an optimality condition, design new local optimization methods based on it, and design global optimization methods for NLP by combining the new local methods with an auxiliary function. To test the performance of the global optimization methods, we compare them with two heuristic methods; the results demonstrate that our methods outperform both.
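
    The two-phase structure described above, a local phase alternating with an auxiliary-function phase, can be sketched as follows. Plain BFGS stands in for the thesis's condition-based local methods, and the auxiliary function is a generic tunneling-style form; both, along with the objective f, are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Hypothetical quartic polynomial objective (not from the thesis).
    return (x[0]**2 - 1.0)**2 + (x[1]**2 - 4.0)**2 + 0.3 * x[0] * x[1]

def auxiliary(x, x_star, f_star):
    # Generic tunneling-style auxiliary function: flat where f improves on
    # f_star, and increasingly penalized the closer x stays to x_star.
    gap = max(f(x) - f_star, 0.0)
    return gap - np.log(1e-8 + np.sum((x - x_star) ** 2))

rng = np.random.default_rng(2)
x_star = minimize(f, rng.standard_normal(2)).x     # initial local phase
best_x, best_f = x_star.copy(), float(f(x_star))
for _ in range(10):                                # a few escape attempts
    # Auxiliary phase: descend the auxiliary function from a perturbed start.
    x0 = x_star + 0.5 * rng.standard_normal(2)
    x_esc = minimize(lambda z: auxiliary(z, x_star, best_f), x0).x
    # Local phase again, from wherever the auxiliary descent ended.
    x_star = minimize(f, x_esc).x
    if f(x_star) < best_f:
        best_x, best_f = x_star.copy(), float(f(x_star))
print(f"best point found: {best_x}, value {best_f:.4f}")
```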

    A Bayesian approach to constrained single- and multi-objective optimization

    This article addresses the problem of derivative-free (single- or multi-objective) optimization subject to multiple inequality constraints. Both the objective and constraint functions are assumed to be smooth, non-linear, and expensive to evaluate. As a consequence, the number of evaluations that can be used to carry out the optimization is very limited, as in complex industrial design optimization problems. The method we propose to overcome this difficulty has its roots in both the Bayesian and the multi-objective optimization literatures. More specifically, an extended domination rule is used to handle objectives and constraints in a unified way, and a corresponding expected hyper-volume improvement sampling criterion is proposed. This new criterion is naturally adapted to the search for a feasible point when none is available, and reduces to existing Bayesian sampling criteria (the classical Expected Improvement (EI) criterion and some of its constrained/multi-objective extensions) as soon as at least one feasible point is available. The calculation and optimization of the criterion are performed using Sequential Monte Carlo techniques. In particular, an algorithm similar to the subset simulation method, well known in the field of structural reliability, is used to estimate the criterion. The method, which we call BMOO (Bayesian Multi-Objective Optimization), is compared to state-of-the-art algorithms for single- and multi-objective constrained optimization.
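
    One reading of the extended domination rule, offered as an illustration rather than the article's exact definition, is sketched below: feasible points are compared on their objectives, infeasible points on their constraint violations, and any feasible point dominates any infeasible one.

```python
import numpy as np

def extended_dominates(f1, c1, f2, c2):
    # True if point 1 dominates point 2 under the extended rule.
    # f1, f2: objective vectors (to be minimized);
    # c1, c2: constraint vectors, feasible iff all entries <= 0.
    v1, v2 = np.maximum(c1, 0.0), np.maximum(c2, 0.0)   # violations
    feas1, feas2 = not v1.any(), not v2.any()
    if feas1 != feas2:
        return feas1                   # feasibility beats infeasibility
    if feas1:                          # both feasible: Pareto rule on objectives
        return bool(np.all(f1 <= f2) and np.any(f1 < f2))
    # Both infeasible: Pareto rule on constraint violations.
    return bool(np.all(v1 <= v2) and np.any(v1 < v2))

# A feasible point dominates an infeasible one regardless of objectives:
print(extended_dominates(np.array([5.0, 5.0]), np.array([-0.1]),
                         np.array([0.0, 0.0]), np.array([0.2])))   # True
```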