5 research outputs found

    A discrete dynamic convexized method for nonlinear integer programming

    Abstract: In this paper, we consider the box-constrained nonlinear integer programming problem. We present an auxiliary function that has the same discrete global minimizers as the original problem. Minimizing this function with a discrete local search method can successfully escape from previously found discrete local minimizers by taking increasing values of a parameter. We propose an algorithm for finding a global minimizer of the box-constrained nonlinear integer programming problem: it minimizes the auxiliary function from random initial points. We prove that the algorithm converges asymptotically with probability one. Numerical experiments on a set of test problems show that the algorithm is efficient and robust.
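The abstract does not reproduce the auxiliary function itself, so the sketch below shows only the skeleton the algorithm builds on: a discrete local search over unit-step neighborhoods inside a box, restarted from random initial points. The objective `f`, the box bounds, and all names are illustrative assumptions, not the paper's test problems.

```python
import random

# Illustrative box-constrained integer objective (an assumption, not one of
# the paper's test problems); the parity term creates spurious local minima.
def f(x):
    return (x[0] - 3) ** 2 + (x[1] + 1) ** 2 + 5 * ((x[0] + x[1]) % 2)

LOW, HIGH = -10, 10  # box constraints

def neighbors(x):
    # Unit-step neighborhood in each coordinate, clipped to the box.
    for i in range(len(x)):
        for d in (-1, 1):
            y = list(x)
            y[i] = min(HIGH, max(LOW, y[i] + d))
            if tuple(y) != x:
                yield tuple(y)

def discrete_local_search(x):
    # Slide to a strictly better neighbor until none exists.
    while True:
        best = min(neighbors(x), key=f)
        if f(best) >= f(x):
            return x
        x = best

def multistart(trials=50, seed=0):
    # Restart the local search from random points in the box, echoing the
    # random initial points used by the paper's algorithm.
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        x0 = tuple(rng.randint(LOW, HIGH) for _ in range(2))
        x = discrete_local_search(x0)
        if best is None or f(x) < f(best):
            best = x
    return best
```

The paper's contribution replaces the blind restarts with minimization of an auxiliary function that shares the problem's discrete global minimizers, so escapes are systematic rather than random.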

    Solving one-dimensional unconstrained global optimization problem using parameter free filled function method

    It is generally known that almost all filled function methods for one-dimensional unconstrained global optimization problems have computational weaknesses. This paper introduces a relatively new parameter-free filled function, which creates a non-ascending bridge from any isolated local minimizer to another isolated local minimizer with a lower or equal function value. The function can also be used to determine all extreme and inflection points between the two consecutive isolated local minimizers considered, and the proposed method never fails to carry out this task. Results on several test examples show the capability and efficiency of the algorithm and demonstrate that the computational weaknesses of filled function methods can be overcome.
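The filled-function idea can be sketched with the classical one-parameter construction in the style of Ge (the paper's function is parameter-free and different; the objective, the grid, and the parameters r and rho below are all assumptions for illustration): descend f to a first local minimizer, descend the filled function to leave that basin, then descend f again from the new point.

```python
import math

# Toy 1-D objective with several local minima on the box [-10, 10]
# (illustrative only; not one of the paper's test problems).
f = lambda x: math.cos(x) + 0.05 * x
GRID = [i / 100 for i in range(-1000, 1001)]

def local_min(g, i):
    # Discrete descent on the grid: slide to a lower neighbor while possible.
    while True:
        j = min((k for k in (i - 1, i + 1) if 0 <= k < len(GRID)),
                key=lambda k: g(GRID[k]))
        if g(GRID[j]) >= g(GRID[i]):
            return i
        i = j

# Classical Ge-style filled function; r and rho are tuning parameters
# (it needs r + f(x) > 0 everywhere). The paper's construction avoids
# such parameters entirely.
def filled(x, x1, r=2.0, rho=2.0):
    return math.exp(-((x - x1) ** 2) / rho ** 2) / (r + f(x))

i = local_min(f, GRID.index(2.0))            # find a first local minimizer
x1 = GRID[i]
j = local_min(lambda x: filled(x, x1), i)    # descend the filled function
x2 = GRID[local_min(f, j)]                   # descend f from the new point
```

Starting from x = 2.0, plain descent stops at the shallow minimizer near x ≈ 3.09; descending the filled function bridges out of that basin, and the second descent of f lands in a basin with a strictly lower function value.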

    Minimum-Time Spacecraft Attitude Motion Planning Using Objective Alternation in Derivative-Free Optimization

    This work presents an approach to spacecraft attitude motion planning that guarantees rest-to-rest maneuvers while satisfying pointing constraints. Attitude is represented on the group of three-dimensional rotations. The angular velocity is expressed as a weighted sum of basis functions, and the weights are obtained by solving a constrained minimization problem whose objective is the maneuvering time. However, analytic expressions for the objective and constraints of this minimization problem are not available. To solve the problem despite this obstacle, we propose a derivative-free approach based on sequential penalties. Moreover, to avoid being trapped in local minima during the search, we alternate phases in which two different objective functions are pursued. The control torque derived from the spacecraft inverse dynamics is continuously differentiable and vanishes at its endpoints. Results on practical cases taken from the literature demonstrate advantages over existing approaches.
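The sequential-penalty, derivative-free pattern described above can be sketched on a toy constrained problem (the spacecraft dynamics, basis functions, and objective alternation are not reproduced; the disc constraint, the linear objective, and all constants here are illustrative assumptions): a smooth squared-hinge penalty with an increasing weight mu, minimized at each stage by a simple compass (pattern) search that never needs derivatives.

```python
# Toy stand-in: minimize a "maneuver cost" x0 + x1 subject to a black-box
# constraint x0^2 + x1^2 <= 1, via sequential penalty + compass search.

def penalty(x, mu):
    # Squared-hinge penalty: smooth, and zero wherever the constraint holds.
    c = max(0.0, x[0] ** 2 + x[1] ** 2 - 1.0)
    return (x[0] + x[1]) + mu * c * c

def compass_search(g, x, step=0.5, tol=1e-6):
    # Derivative-free descent: probe +/- step along each axis,
    # halve the step when no probe improves.
    fx = g(x)
    while step > tol:
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            y = (x[0] + dx, x[1] + dy)
            fy = g(y)
            if fy < fx:
                x, fx = y, fy
                break
        else:
            step *= 0.5
    return x

x = (1.0, 1.0)
for mu in (1.0, 10.0, 100.0, 1e3, 1e4):  # sequential penalty: tighten mu
    x = compass_search(lambda z: penalty(z, mu), x)
# x approaches the constrained minimizer near (-1/sqrt(2), -1/sqrt(2))
```

Each stage warm-starts from the previous solution, so as mu grows the iterates are driven toward feasibility while the black-box objective keeps decreasing, which is the essence of the sequential-penalty scheme the abstract describes.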

    Global optimality conditions and optimization methods for polynomial programming problems and their applications

    The polynomial programming problem, which has a polynomial objective function and either no constraints or polynomial constraints, occurs frequently in engineering design, investment science, control theory, network distribution, signal processing, and location-allocation contexts. Moreover, the polynomial programming problem is known to be NP-hard (Nondeterministic Polynomial-time hard). The problem has attracted a lot of attention and includes quadratic, cubic, homogeneous, and normal quartic programming problems as special cases. Existing methods for solving polynomial programming problems include algebraic methods and various convex relaxation methods; among these, semidefinite programming (SDP) and sum-of-squares (SOS) relaxations are especially popular. Theoretically, SDP and SOS relaxation methods are very powerful and successful in solving the general polynomial programming problem with a compact feasible region. In practice, however, their solvability depends on the size or degree of the polynomial programming problem and the required accuracy, so solving large-scale SDP problems remains a computational challenge. It is well known that traditional local optimization methods are designed around necessary local optimality conditions, i.e., the Karush-Kuhn-Tucker (KKT) conditions. Motivated by this, some researchers proposed a necessary global optimality condition for quadratic programming problems and designed a new local optimization method based on it. In this thesis, we apply this idea to cubic and quartic programming problems, and further to general unconstrained and constrained polynomial programming problems. For these problems, we investigate necessary global optimality conditions and design new local optimization methods based on these conditions. These necessary global optimality conditions are generally stronger than the KKT conditions; hence the new local minimizers obtained by the new local optimization methods may improve on some KKT points. Our ultimate aim is to design global optimization methods for these polynomial programming problems. The filled function method is one of the best-known practical auxiliary function methods for reaching a global minimizer, and we design global optimization methods by combining the newly proposed local optimization methods with some auxiliary functions. Numerical examples illustrate the efficiency and stability of the optimization methods. Finally, we discuss applications to sensor network localization problems and systems of polynomial equations. It is worth mentioning that we also apply the ideas and results for polynomial programming problems to nonlinear programming problems (NLP): we provide an optimality condition, design new local optimization methods based on it, and design global optimization methods for NLP by combining the new local optimization methods with an auxiliary function. To test the performance of the global optimization methods, we compare them with two other heuristic methods; the results demonstrate that our methods outperform the other two algorithms.
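The gap between KKT points and global minimizers that motivates the thesis can be seen on a one-variable quartic (the function and all numbers here are illustrative; the thesis's actual optimality conditions are not reproduced):

```python
# A stationary (KKT) point that a stronger global condition must reject.
f  = lambda x: x ** 4 - 4 * x ** 2       # stationary points: 0, +/-sqrt(2)
df = lambda x: 4 * x ** 3 - 8 * x        # first derivative

kkt_point = 0.0                          # satisfies df(x) = 0 ...
grid = [i / 10 for i in range(-30, 31)]  # coarse scan of [-3, 3]
better = [x for x in grid if f(x) < f(kkt_point)]
# ... yet many sampled points are strictly better: x = 0 passes the
# first-order test but is not a global minimizer (minima at +/-sqrt(2)).
```

A local method driven only by the first-order condition can legitimately stop at x = 0; a method built on a stronger necessary global optimality condition detects that points with lower value exist and moves on, which is exactly the improvement over KKT points the abstract claims.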

    New classes of globally convexized filled functions for global optimization.

    We propose new classes of globally convexized filled functions. Unlike the globally convexized filled functions previously proposed in the literature, the ones proposed in this paper are continuously differentiable and, under suitable assumptions, their unconstrained minimization makes it possible to escape from any local minimizer of the original objective function. Moreover, we show that the properties of the proposed functions extend to box-constrained minimization problems. We also report the results of preliminary numerical experiments.
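To make the "globally convexized" idea concrete, here is a hypothetical continuously differentiable auxiliary of my own squared-hinge construction (explicitly NOT one of the paper's functions): G(x) = ||x - x1||^2 - lam * max(0, f(x1) - f(x))^2. The coercive distance term keeps minimizers of G inside the box, while the smooth hinge term pulls them into regions where f is lower than at the current local minimizer x1. The objective, box, and constants are illustrative assumptions.

```python
# Hypothetical C^1 convexized auxiliary (illustration only, not the paper's).
f = lambda x, y: (x * x - 1) ** 2 + (y * y - 1) ** 2 + 0.3 * x + 0.1 * y

x1 = (1.0, 1.0)  # near a non-global local minimizer of f; f(x1) = 0.4

def G(x, y, lam=50.0):
    # Squared hinge keeps G continuously differentiable at the boundary
    # f(x) = f(x1); the quadratic term makes G coercive.
    drop = max(0.0, f(*x1) - f(x, y))
    return (x - x1[0]) ** 2 + (y - x1[1]) ** 2 - lam * drop * drop

# Brute-force minimization of G over the box [-3, 3]^2 (grid step 0.05).
pts = [(-3 + i * 0.05, -3 + j * 0.05) for i in range(121) for j in range(121)]
xg = min(pts, key=lambda p: G(*p))
# xg lands near (-1, -1), a basin where f is lower than f(x1).
```

The box-constrained case in the abstract corresponds to restricting the minimization of the auxiliary function to the box, as the grid above does implicitly.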