Composite Differential Evolution for Constrained Evolutionary Optimization
When solving constrained optimization problems (COPs) with evolutionary algorithms, the search algorithm plays a crucial role. In general, we expect the search algorithm to balance not only diversity and convergence but also constraints and the objective function during the evolution. For this purpose, this paper proposes a composite differential evolution (DE) for constrained optimization, which includes three trial vector generation strategies with distinct advantages. To strike a balance between diversity and convergence, one of these three strategies is able to increase diversity, while the other two exhibit the property of convergence. In addition, to accomplish the tradeoff between constraints and the objective function, one of the two convergence-oriented strategies is guided by the individual with the least degree of constraint violation in the population, and the other is guided by the individual with the best objective function value. After producing offspring with the proposed composite DE, the feasibility rule and the ϵ-constrained method are carefully combined for selection. Moreover, a restart scheme is proposed to help the population escape a local optimum in the infeasible region for some extremely complicated COPs. By assembling the above techniques, a constrained composite DE is obtained. Experiments on two sets of benchmark test functions with various features, i.e., 24 test functions from IEEE CEC2006 and 18 test functions with 10 and 30 dimensions from IEEE CEC2010, demonstrate that the proposed method performs better than, or at least competitively with, other state-of-the-art methods.
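As an illustrative sketch (not the paper's exact operators or parameter settings), the diversity-oriented strategy can be modeled by the classic DE/rand/1 mutation, and the two convergence-oriented strategies by a current-to-guide mutation whose guide is either the least-violating or the best-objective individual; binomial crossover then forms the trial vector:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1(pop, i, F=0.5):
    """Diversity-oriented mutation: combine three random distinct members."""
    r1, r2, r3 = pop[rng.choice([j for j in range(len(pop)) if j != i],
                                3, replace=False)]
    return r1 + F * (r2 - r3)

def de_current_to_guide_1(pop, i, guide, F=0.5):
    """Convergence-oriented mutation pulled toward a guiding individual,
    e.g. the least constraint-violating or the best-objective member."""
    r1, r2 = pop[rng.choice([j for j in range(len(pop)) if j != i],
                            2, replace=False)]
    return pop[i] + F * (guide - pop[i]) + F * (r1 - r2)

def binomial_crossover(target, mutant, CR=0.9):
    """Mix mutant and target genes; at least one gene comes from the mutant."""
    mask = rng.random(target.size) < CR
    mask[rng.integers(target.size)] = True
    return np.where(mask, mutant, target)
```

In a composite scheme, each parent would generate trial vectors with all three strategies, and the selection rule (feasibility rule or ϵ-constrained comparison) would pick the survivor.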
Domain Decomposition for Stochastic Optimal Control
This work proposes a method for solving linear stochastic optimal control
(SOC) problems using sum of squares and semidefinite programming. Previous work
had used polynomial optimization to approximate the value function, requiring a
high polynomial degree to capture local phenomena. To improve the scalability
of the method to problems of interest, a domain decomposition scheme is
presented. By using local approximations, lower degree polynomials become
sufficient, and both local and global properties of the value function are
captured. The domain of the problem is split into a non-overlapping partition,
with added constraints ensuring continuity. The Alternating Direction
Method of Multipliers (ADMM) is used to optimize over each domain in parallel
and ensure convergence on the boundaries of the partitions. This results in
better conditioning of the problem and allows much larger and more
complex problems to be addressed efficiently.
Comment: 8 pages. Accepted to CDC 201
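The decomposition idea can be illustrated on a toy consensus problem (this is not the paper's sum-of-squares program; the quadratic costs and single shared boundary value are stand-ins): each of two subdomains minimizes a local cost in parallel, and ADMM enforces the continuity constraint that both agree on the boundary value z.

```python
import numpy as np

# two subdomains, each with local cost 0.5 * (x_i - a_i)^2; the continuity
# constraint x_1 = x_2 = z couples them through the shared boundary value z
a = np.array([1.0, 3.0])   # local targets (illustrative)
x = np.zeros(2)            # local copies of the boundary value
z = 0.0                    # shared (consensus) boundary value
u = np.zeros(2)            # scaled dual variables
rho = 1.0                  # ADMM penalty parameter

for _ in range(100):
    x = (a + rho * (z - u)) / (1.0 + rho)  # local minimizations, parallelizable
    z = np.mean(x + u)                     # enforce continuity on the boundary
    u = u + x - z                          # dual update

# the iterates agree on the boundary and reach the consensus minimizer mean(a)
```

The x-update touches only local data, which is what lets each subdomain be optimized in parallel; only the z- and u-updates exchange boundary information.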
Cost Adaptation for Robust Decentralized Swarm Behaviour
Decentralized receding horizon control (D-RHC) provides a mechanism for
coordination in multi-agent settings without a centralized command center.
However, combining a set of different goals, costs, and constraints to form an
efficient optimization objective for D-RHC can be difficult. To allay this
problem, we use a meta-learning process -- cost adaptation -- which generates
the optimization objective for D-RHC to solve based on a set of human-generated
priors (cost and constraint functions) and an auxiliary heuristic. We use this
adaptive D-RHC method for control of mesh-networked swarm agents. This
formulation allows a wide range of tasks to be encoded and can account for
network delays, heterogeneous capabilities, and increasingly large swarms
through the adaptation mechanism. We leverage the Unity3D game engine to build
a simulator capable of introducing artificial networking failures and delays in
the swarm. Using the simulator we validate our method on an example coordinated
exploration task. We demonstrate that cost adaptation allows for more efficient
and safer task completion under varying environment conditions and increasingly
large swarm sizes. We release our simulator and code to the community for
future work.
Comment: Accepted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 201
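A minimal sketch of the cost-adaptation idea (the names, priors, and normalization heuristic below are hypothetical, not the paper's formulation): human-given cost priors are re-weighted by auxiliary heuristic scores to form the single objective that D-RHC then optimizes.

```python
import numpy as np

def adapt_costs(priors, heuristic_scores):
    """Compose a single objective from cost priors, weighted by a heuristic."""
    w = np.asarray(heuristic_scores, dtype=float)
    w = w / w.sum()                      # hypothetical normalization heuristic
    def objective(state):
        return sum(wi * c(state) for wi, c in zip(w, priors))
    return objective

# illustrative priors: reach a goal, keep clear of an obstacle at the origin
goal = np.array([5.0, 5.0])
priors = [
    lambda s: np.linalg.norm(s - goal),            # goal-distance cost
    lambda s: 1.0 / (1e-3 + np.linalg.norm(s)),    # obstacle-proximity cost
]
J = adapt_costs(priors, heuristic_scores=[2.0, 1.0])
```

The receding-horizon controller would minimize J over its planning horizon; re-running adapt_costs as network conditions or swarm size change is what makes the objective adaptive.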
A Primal-Dual Method for Optimal Control and Trajectory Generation in High-Dimensional Systems
Presented is a method for efficient computation of the Hamilton-Jacobi (HJ)
equation for time-optimal control problems using the generalized Hopf formula.
Typically, numerical methods to solve the HJ equation rely on a discrete grid
of the solution space and exhibit exponential scaling with dimension. The
generalized Hopf formula avoids the use of grids and numerical gradients by
formulating an unconstrained convex optimization problem. The solution at each
point is completely independent, and allows a massively parallel implementation
if solutions at multiple points are desired. This work presents a primal-dual
method for efficient numerical solution and shows how the resulting optimal
trajectory can be generated directly from the solution of the Hopf formula,
without further optimization. The examples presented have execution times on the
order of milliseconds, and experiments show that computation scales approximately
polynomially in dimension with very small high-order coefficients.
Comment: Updated references and funding sources. To appear in the proceedings
of the 2018 IEEE Conference on Control Technology and Applications
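The per-point structure can be seen in a small worked instance (the Hamiltonian H(p) = |p|²/2 and initial cost g(x) = |x|²/2 are chosen for illustration because the result is known in closed form; they are not the paper's system): the Hopf formula gives φ(x, t) = sup_p [⟨x, p⟩ − t·H(p) − g*(p)], an unconstrained concave maximization solved independently at each (x, t), with no grid.

```python
import numpy as np

def hopf_value(x, t, lr=0.1, iters=500):
    """Evaluate phi(x, t) = sup_p [<x, p> - t*H(p) - g*(p)] by gradient ascent.
    Here H(p) = g*(p) = |p|^2 / 2, so the objective is <x, p> - 0.5*(1+t)*|p|^2.
    Each (x, t) is an independent optimization, hence trivially parallel."""
    p = np.zeros_like(x)
    for _ in range(iters):
        grad = x - (1.0 + t) * p     # gradient of the concave objective
        p = p + lr * grad            # (lr must satisfy lr < 2/(1+t))
    return x @ p - 0.5 * (1.0 + t) * (p @ p)

x, t = np.array([1.0, 2.0]), 0.5
numeric = hopf_value(x, t)
analytic = (x @ x) / (2.0 * (1.0 + t))   # known closed form for this H and g
```

For this choice of H and g, the HJ solution is φ(x, t) = |x|²/(2(1 + t)), so the numerical value can be checked directly against the analytic one.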
Source Coding Optimization for Distributed Average Consensus
Consensus is a common method for computing a function of the data distributed
among the nodes of a network. Of particular interest is distributed average
consensus, whereby the nodes iteratively compute the sample average of the data
stored at all the nodes of the network using only near-neighbor communications.
In real-world scenarios, these communications must undergo quantization, which
introduces distortion to the internode messages. In this thesis, a model for
the evolution of the network state statistics at each iteration is developed
under the assumptions of Gaussian data and additive quantization error. It is
shown that minimization of the communication load in terms of aggregate source
coding rate can be posed as a generalized geometric program, which admits an
equivalent convex formulation that can be solved efficiently for the global minimum.
Optimization procedures are developed for rate-distortion-optimal vector
quantization, uniform entropy-coded scalar quantization, and fixed-rate uniform
quantization. Numerical results demonstrate the performance of these
approaches. For small numbers of iterations, the fixed-rate optimizations are
verified using exhaustive search. Comparison to the prior art suggests
competitive performance under certain circumstances but strongly motivates the
incorporation of more sophisticated coding strategies, such as differential,
predictive, or Wyner-Ziv coding.
Comment: Master's Thesis, Electrical Engineering, North Carolina State University
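A minimal simulation of quantized average consensus (the ring topology, mixing weights, and uniform quantizer below are illustrative choices, not the thesis's optimized designs): each node transmits a uniformly quantized state and mixes its own exact state with its neighbors' quantized messages.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 8
x = rng.normal(size=n)       # Gaussian data held at each node
target = x.mean()            # the average the network should agree on

# ring network with doubly stochastic mixing weights (0.5 self, 0.25 per neighbor)
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def quantize(v, step=1e-3):
    """Fixed-rate uniform quantization of internode messages."""
    return step * np.round(v / step)

A = W - 0.5 * np.eye(n)      # neighbor part of the mixing matrix
for _ in range(200):
    q = quantize(x)          # each node transmits a quantized state
    x = 0.5 * x + A @ q      # mix own exact state with quantized messages

# nodes agree to within roughly the quantizer resolution, near the true average
```

The quantization step trades communication rate for distortion: a coarser step lowers the per-message rate but enlarges the neighborhood of the true average in which the node states settle.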