8,737 research outputs found
Evolution Strategies in Optimization Problems
Evolution Strategies are inspired by biology and form part of a larger research field known as Evolutionary Algorithms. Those strategies perform a random search in the space of admissible functions, aiming to optimize some given objective function. We show that simple evolution strategies are a useful tool in optimal control, permitting one to obtain, in an efficient way, good approximations to the solutions of some recent and challenging optimal control problems.
Comment: Partially presented at the 5th Junior European Meeting on "Control and Information Technology" (JEM'06), Sept 20-22, 2006, Tallinn, Estonia. To appear in "Proceedings of the Estonian Academy of Sciences -- Physics Mathematics".
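The random search this abstract describes can be sketched as a minimal (1+1) evolution strategy with a crude 1/5-success-rule step-size adaptation; this is an illustrative toy on a test function, not the authors' actual optimal-control method:

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=2000, seed=0):
    """Minimize f with a (1+1)-ES: mutate the parent, keep the child
    if it improves, and adapt the mutation step size."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(child)
        if fc < fx:
            x, fx = child, fc
            sigma *= 1.08   # success: widen the search (crude 1/5 rule)
        else:
            sigma *= 0.98   # failure: narrow it
    return x, fx

# Usage: minimize the sphere function; the optimum is at the origin.
best, val = one_plus_one_es(lambda v: sum(t * t for t in v), [3.0, -2.0])
```

The step-size adaptation matters: with a fixed sigma the search stalls near the optimum, while the multiplicative rule lets the step shrink geometrically as progress slows.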
Natural evolution strategies and variational Monte Carlo
A notion of quantum natural evolution strategies is introduced, which
provides a geometric synthesis of a number of known quantum/classical
algorithms for performing classical black-box optimization. Recent work of
Gomes et al. [2019] on heuristic combinatorial optimization using neural
quantum states is pedagogically reviewed in this context, emphasizing the
connection with natural evolution strategies. The algorithmic framework is
illustrated for approximate combinatorial optimization problems, and a
systematic strategy is found for improving the approximation ratios. In
particular it is found that natural evolution strategies can achieve
approximation ratios competitive with widely used heuristic algorithms for
Max-Cut, at the expense of increased computation time.
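As a classical point of reference for the quantum variant introduced above, the plain natural-evolution-strategies loop can be sketched with a score-function (log-likelihood-trick) gradient estimate for a Gaussian search distribution; the function name and parameters here are illustrative, not taken from the paper, and the Fisher preconditioner is omitted:

```python
import random

def nes_minimize(f, mu0, sigma=0.5, lr=0.2, pop=50, iters=300, seed=1):
    """Minimize f by descending a Monte Carlo estimate of the gradient of
    E[f(mu + sigma*eps)], eps ~ N(0, I): the search gradient at the heart
    of natural evolution strategies."""
    rng = random.Random(seed)
    mu = list(mu0)
    for _ in range(iters):
        noise = [[rng.gauss(0.0, 1.0) for _ in mu] for _ in range(pop)]
        losses = [f([m + sigma * e for m, e in zip(mu, eps)])
                  for eps in noise]
        base = sum(losses) / pop  # baseline reduces estimator variance
        grad = [sum((losses[k] - base) * noise[k][j] for k in range(pop))
                / (pop * sigma) for j in range(len(mu))]
        mu = [m - lr * g for m, g in zip(mu, grad)]
    return mu

# Usage: the smoothed gradient steers mu toward the sphere minimum.
mu = nes_minimize(lambda v: sum(t * t for t in v), [3.0, -2.0])
```

Because only function values are used, the same loop works on black-box objectives such as the Max-Cut energies mentioned above, with the cost function swapped in.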
Composite ontology change operators and their customizable evolution strategies
Change operators are the building blocks of ontology evolution. Elementary, composite and complex change operators have been suggested. While lower-level change operators are useful for a fine-granular representation of ontology changes, representing the intent of a change requires higher-level change operators. Here, we focus on higher-level composite change operators that perform an aggregated task. We introduce composite-level evolution strategies. The central role of the evolution strategies is to preserve the intent of the composite change with respect to the user's requirements and to reduce the operational cost of the change. Composite-level evolution strategies help to avoid illegal changes, or the presence of illegal axioms, that may generate inconsistencies during the application of a composite change. As examples, we discuss a few composite changes together with their defined evolution strategies, which allow users to control and customize the ontology evolution process.
Neural networks robot controller trained with evolution strategies
Congress on Evolutionary Computation. Washington, DC, 6-9 July 1999. Neural networks (NN) can be used as controllers in autonomous robots. The specific features of the navigation problem in robotics make it difficult to generate good training sets for the NN. An evolution strategy (ES) is introduced to learn the weights of the NN in place of the network's own learning method. The ES is used to learn high-performance reactive behavior for navigation and collision avoidance. No subjective information about "how to accomplish the task" has been included in the fitness function. The learned behaviors are able to solve the problem in different environments; therefore, the learning process has the proven ability to obtain a specialized behavior. All the behaviors obtained have been tested in a set of environments, and the capability of generalization is shown for each learned behavior. A simulator based on the mini-robot Khepera has been used to learn each behavior.
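The weight-evolution idea — optimizing network weights by mutation and selection rather than by gradient-based training — can be sketched on a toy task. A small 2-2-1 network learning XOR stands in here for the Khepera navigation fitness, which would require the simulator; the network shape and ES parameters are illustrative assumptions:

```python
import math
import random

def net(w, x):
    # 2-2-1 tanh network; w holds 9 parameters: hidden weights w[0:4],
    # hidden biases w[4:6], output weights w[6:8], output bias w[8].
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[4])
    h1 = math.tanh(w[2] * x[0] + w[3] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

XOR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def error(w):
    # Fitness to minimize: mean squared error over the four XOR patterns.
    return sum((net(w, x) - y) ** 2 for x, y in XOR) / len(XOR)

# (1+lambda)-ES over the weight vector: mutate the parent lambda times per
# generation and keep the best individual seen so far (elitism).
rng = random.Random(0)
parent = [rng.uniform(-1.0, 1.0) for _ in range(9)]
init_err = best_err = error(parent)
sigma, lam = 0.3, 20
for _ in range(200):
    children = [[wi + rng.gauss(0.0, sigma) for wi in parent]
                for _ in range(lam)]
    errs = [error(c) for c in children]
    i = min(range(lam), key=errs.__getitem__)
    if errs[i] < best_err:
        parent, best_err = children[i], errs[i]
```

For the robot-controller setting, `error` would instead score a simulated navigation run (collisions, distance covered), with no hand-coded hints about how to accomplish the task, mirroring the fitness-function design the abstract describes.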