
    Hybrid optimization coupling electromagnetism and descent search for engineering problems

    In this paper, we present a new stochastic hybrid technique for constrained global optimization. It combines the electromagnetism-like (EM) mechanism with an approximate descent search, a derivative-free procedure with a high ability to produce a descent direction. Since the original EM algorithm is specifically designed for solving bound-constrained problems, the approach adopted here for handling the constraints of the problem relies on a simple heuristic known as the feasibility and dominance rules. The hybrid EM method is tested on four well-known engineering design problems, and the numerical results demonstrate the effectiveness of the proposed approach.
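    The feasibility and dominance rules can be sketched as a pairwise comparison; the helper below is a hypothetical illustration (the function name and tie-breaking details are assumptions, not taken from the paper):

```python
def preferred(f1, v1, f2, v2):
    """Hypothetical sketch of feasibility-and-dominance rules for comparing
    two points, given objective values f and total constraint violations v:
    (i) a feasible point beats an infeasible one, (ii) two feasible points
    compare by objective, (iii) two infeasible points compare by violation."""
    if v1 == 0 and v2 == 0:
        return f1 <= f2   # both feasible: lower objective wins
    if v1 == 0:
        return True       # only the first point is feasible
    if v2 == 0:
        return False      # only the second point is feasible
    return v1 <= v2       # both infeasible: lower violation wins
```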

    Online Optimization of Switched LTI Systems Using Continuous-Time and Hybrid Accelerated Gradient Flows

    This paper studies the design of feedback controllers that steer the output of a switched linear time-invariant system to the solution of a possibly time-varying optimization problem. The design of the feedback controllers is based on an online gradient descent method, and an online hybrid controller that can be seen as a regularized Nesterov's accelerated gradient method. Both of the proposed approaches accommodate output measurements of the plant, and are implemented in closed loop with the switched dynamical system. By design, the controllers continuously steer the system output to an optimal trajectory implicitly defined by the time-varying optimization problem without requiring knowledge of exogenous inputs and disturbances. For cost functions that are smooth and satisfy the Polyak-Lojasiewicz inequality, we demonstrate that the online gradient descent controller ensures uniform global exponential stability when the time scales of the plant and the controller are sufficiently separated and the switching signal of the plant is slow on average. Under a strong convexity assumption, we also show that the online hybrid Nesterov's method guarantees tracking of optimal trajectories, and outperforms online controllers based on gradient descent. Interestingly, the proposed hybrid accelerated controller resolves the potential lack of robustness suffered by standard continuous-time accelerated gradient methods when coupled with a dynamical system. When the function is not strongly convex, we establish global practical asymptotic stability results for the accelerated method, and we unveil the existence of a trade-off between acceleration and exact convergence in online optimization problems with controllers using dynamic momentum. Our theoretical results are illustrated via different numerical examples.
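    As a rough illustration of the gradient-descent feedback idea (not the paper's actual controller: the plant matrices, cost, and gains below are invented, and there is no switching), one can close the loop between a stable LTI plant and an integrator that descends the output cost:

```python
import numpy as np

# Hypothetical stable LTI plant x' = A x + B u, y = C x (no switching here).
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.eye(2)
C = np.eye(2)

def grad_phi(y, y_star):
    # Gradient of the output cost 0.5 * ||y - y_star||^2
    return y - y_star

def simulate(y_star, dt=1e-3, steps=20000, eta=0.5):
    """Forward-Euler simulation of the plant in feedback with the
    gradient-flow controller u' = -eta * grad_phi(y); eta is kept small
    relative to the plant dynamics (time-scale separation)."""
    x = np.zeros(2)
    u = np.zeros(2)
    for _ in range(steps):
        y = C @ x
        u = u + dt * (-eta * grad_phi(y, y_star))  # controller update
        x = x + dt * (A @ x + B @ u)               # plant update
    return C @ x
```

    With the gains above, the output settles at the optimizer of the cost without the controller ever knowing the plant's equilibrium input.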

    Fuzzy-Genetic Control of Quadrotors Unmanned Aerial Vehicles

    This article presents a novel fuzzy identification method for the dynamic modelling of quadrotor unmanned aerial vehicles. The method is based on a special parameterization of the antecedent part of fuzzy systems that results in fuzzy partitions for the antecedents. This representation of the antecedent parameters of fuzzy rules ensures that a predefined ordering of linguistic values is upheld and that the fuzzy partitions remain intact throughout an unconstrained hybrid evolutionary and gradient-descent-based optimization process. In the equations of motion, the first-order derivative component is calculated from the Christoffel symbols, and the derivatives of the fuzzy systems are used to model the Coriolis, gyroscopic, and centrifugal terms. The nonlinear parameters are subjected to an initial global evolutionary optimization scheme and then fine-tuned with a gradient-descent-based local search. Simulation results of the proposed quadrotor dynamic model identification method are promising.
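    The two-stage pattern described above, a global evolutionary search followed by gradient-descent fine-tuning, can be sketched generically; the toy objective and all parameters below are illustrative assumptions, not the article's identification setup:

```python
import numpy as np

def evolve_then_descend(f, grad, dim=2, pop=30, gens=40,
                        lr=0.05, fine_steps=200, seed=0):
    """Hypothetical two-stage sketch: a crude evolutionary global search
    (Gaussian mutation + truncation selection) followed by gradient-descent
    fine-tuning of the best individual found."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(-5.0, 5.0, size=(pop, dim))        # random initial pop
    for _ in range(gens):
        P = P[np.argsort([f(p) for p in P])]           # rank by fitness
        children = P[:pop // 2] + rng.normal(0.0, 0.5, size=(pop // 2, dim))
        P = np.vstack([P[:pop // 2], children])        # truncation selection
    x = P[np.argmin([f(p) for p in P])]
    for _ in range(fine_steps):                        # local fine-tuning
        x = x - lr * grad(x)
    return x
```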

    Towards Hybrid-Optimization Video Coding

    Video coding is essentially a mathematical optimization problem over rate and distortion. To solve this complex optimization problem, two popular video coding frameworks have been developed: block-based hybrid video coding and end-to-end learned video coding. If we rethink video coding from the perspective of optimization, we find that the two existing frameworks represent two directions of optimization solutions. Block-based hybrid coding represents the discrete optimization solution, because the candidate coding modes are mathematically discrete: it searches for the best one among multiple starting points (i.e., modes). However, the search is not efficient enough. On the other hand, end-to-end learned coding represents the continuous optimization solution, because gradient descent operates on a continuous function: it optimizes a group of model parameters efficiently by a numerical algorithm. However, limited to a single starting point, it easily falls into a local optimum. To better solve the optimization problem, we propose to regard video coding as a hybrid discrete-continuous optimization problem and to use both search and numerical algorithms to solve it. Our idea is to provide multiple discrete starting points in the global space and to optimize the local optimum around each point efficiently by a numerical algorithm. Finally, we search for the global optimum among those local optima. Guided by this hybrid optimization idea, we design a hybrid optimization video coding framework, which is built entirely on continuous deep networks and also contains some discrete modes. We conduct a comprehensive set of experiments. Compared to the continuous optimization framework, our method outperforms pure learned video coding methods. Meanwhile, compared to the discrete optimization framework, our method achieves performance comparable to the HEVC reference software HM16.10 in PSNR.
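    The hybrid idea, discrete starting points plus continuous local optimization, can be sketched on a toy multimodal function (the function, step size, and starting points are illustrative assumptions, not the paper's coding framework):

```python
def local_descent(grad, x0, lr=0.02, steps=300):
    # Continuous phase: plain gradient descent from one starting point.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def hybrid_optimize(f, grad, starts):
    # Discrete phase: one descent per starting "mode", then pick the best
    # local optimum found, mimicking a search over modes.
    local_optima = [local_descent(grad, s) for s in starts]
    return min(local_optima, key=f)

# Toy double-well objective with a slight tilt: global minimum near x = -1.
f = lambda x: (x * x - 1.0) ** 2 + 0.2 * x
grad = lambda x: 4.0 * x * (x * x - 1.0) + 0.2
best = hybrid_optimize(f, grad, starts=[-2.0, 0.5, 2.0])
```

    A single descent from 0.5 or 2.0 would get stuck in the shallower well near x = +1; the extra starting points are what recover the global optimum.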

    Uniting Nesterov and Heavy Ball Methods for Uniform Global Asymptotic Stability of the Set of Minimizers

    We propose a hybrid control algorithm that guarantees fast convergence and uniform global asymptotic stability of the unique minimizer of a smooth, convex objective function. The algorithm, developed using hybrid system tools, employs a uniting control strategy in which Nesterov's accelerated gradient descent is used "globally" and the heavy ball method is used "locally," relative to the minimizer. Without knowledge of its location, the proposed hybrid control strategy switches between these accelerated methods to ensure convergence to the minimizer without oscillations, with a (hybrid) convergence rate that preserves the convergence rates of the individual optimization algorithms. We analyze key properties of the resulting closed-loop system, including existence of solutions, uniform global asymptotic stability, and convergence rate. Additionally, stability properties of Nesterov's method are analyzed, and extensions of convergence-rate results in the existing literature are presented. Numerical results validate the findings and demonstrate the robustness of the uniting algorithm.
    Comment: The technical report accompanying "Uniting Nesterov and Heavy Ball Methods for Uniform Global Asymptotic Stability of the Set of Minimizers", submitted to Automatica, 2022. Revisions made according to first-round reviewer feedback.
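    A crude sketch of the uniting idea follows; the switching rule uses the gradient norm as a proxy for closeness to the unknown minimizer, the threshold, step size, and momentum values are invented, and the paper's hysteresis-based hybrid switching logic is simplified to a one-way switch:

```python
import numpy as np

def uniting(grad, x0, lr=0.1, switch_tol=0.5, steps=2000):
    """Hypothetical sketch: Nesterov's method "globally", heavy ball
    "locally". Unlike the paper's hybrid algorithm, the switch here is
    one-way and has no hysteresis."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    k, local = 0, False
    for _ in range(steps):
        if not local and np.linalg.norm(grad(x)) < switch_tol:
            local = True                       # enter the local phase
        if local:                              # heavy ball near the minimizer
            x_new = x - lr * grad(x) + 0.9 * v
        else:                                  # Nesterov far from it
            k += 1
            y = x + ((k - 1) / (k + 2)) * v
            x_new = y - lr * grad(y)
        v, x = x_new - x, x_new
    return x
```

    The intended division of labor is the one the abstract describes: Nesterov's growing momentum covers distance quickly far away, while the fixed heavy-ball momentum damps oscillations near the minimizer.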

    Hybridizing the electromagnetism-like algorithm with descent search for solving engineering design problems

    In this paper, we present a new stochastic hybrid technique for constrained global optimization. It combines the electromagnetism-like (EM) mechanism with a random local search, a derivative-free procedure with a high ability to produce a descent direction. Since the original EM algorithm is specifically designed for solving bound-constrained problems, the approach adopted here for handling the inequality constraints of the problem relies on selective conditions that impose a sufficient reduction either in the constraint violation or in the objective function value when comparing two points at a time. The hybrid EM method is tested on a set of benchmark engineering design problems, and the numerical results demonstrate the effectiveness of the proposed approach. A comparison with results from other stochastic methods is also included.
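    The selective acceptance conditions can be sketched as follows; the exact form of the sufficient-reduction test, including the forcing parameter gamma and the two branches, is an illustrative assumption rather than the paper's precise formulation:

```python
def accept(f_new, v_new, f_old, v_old, gamma=1e-4):
    """Accept a trial point if it sufficiently reduces either the total
    constraint violation v or, without worsening feasibility, the
    objective f (gamma is a hypothetical forcing parameter)."""
    if v_new <= v_old - gamma * max(v_old, 1.0):
        return True                      # sufficient violation decrease
    if v_new <= v_old and f_new <= f_old - gamma * max(abs(f_old), 1.0):
        return True                      # sufficient objective decrease
    return False
```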

    A view of Estimation of Distribution Algorithms through the lens of Expectation-Maximization

    We show that a large class of Estimation of Distribution Algorithms, including, but not limited to, Covariance Matrix Adaptation, can be written as a Monte Carlo Expectation-Maximization algorithm, and as exact EM in the limit of infinite samples. Because EM sits on a rigorous statistical foundation and has been thoroughly analyzed, this connection provides a new coherent framework with which to reason about EDAs.
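    As a concrete toy instance of the EDA family discussed, the sketch below refits a Gaussian by maximum likelihood to elite samples each generation, which is where the Monte Carlo EM reading applies (all names and parameters are illustrative; this is a plain truncation-selection Gaussian EDA, not CMA-ES):

```python
import numpy as np

def gaussian_eda(f, dim=2, pop=200, elite=40, iters=60, seed=0):
    """Minimal Gaussian EDA sketch. Each generation: sample from the current
    model, select the elite fraction (E-like step), then refit the Gaussian
    to the elites by maximum likelihood (M-like step)."""
    rng = np.random.default_rng(seed)
    mean, cov = np.zeros(dim), np.eye(dim) * 4.0
    for _ in range(iters):
        X = rng.multivariate_normal(mean, cov, size=pop)
        elites = X[np.argsort([f(x) for x in X])[:elite]]
        mean = elites.mean(axis=0)                      # MLE mean
        cov = np.cov(elites.T) + 1e-6 * np.eye(dim)     # MLE covariance
    return mean

f = lambda x: float(np.sum((x - np.array([3.0, -1.0])) ** 2))
m = gaussian_eda(f)
```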