51 research outputs found

    Asynchronous Evolution by Reference-Based Evaluation: Tertiary Parent Selection and Its Archive

    A Simple Maximum Gain Algorithm for Support Vector Regression

    Handling sharp ridges with local supremum transformations

    Motion Capture and Contemporary Optimization Algorithms for Robust and Stable Motions on Simulated Biped Robots

    Biped soccer robots have shown drastic improvements in motion skills over the past few years. Still, much work remains with the RoboCup Federation’s vision for 2050 in mind. One goal is creating a workflow for quickly generating reliable motions, preferably with inexpensive and accessible hardware. Our hypothesis is that using Microsoft’s Kinect sensor in combination with a modern optimization algorithm can achieve this objective. We produced four complex and inherently unstable motions and then applied three contemporary optimization algorithms (CMA-ES, xNES, PSO) to make the motions robust; we performed 900 experiments with these motions on a 3D simulated Nao robot with full physics. In this paper we describe the motion-mapping technique, compare the optimization algorithms, and discuss various basis functions and their impact on learning performance. Our conclusion is that there is a straightforward process for achieving complex and stable motions in a short period of time.
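    The abstract compares three population-based black-box optimizers. As an illustration of the simplest of them, here is a minimal global-best PSO sketch in pure Python; the robot-motion objective is replaced by a toy sphere function, and all parameter values are illustrative defaults rather than the settings used in the paper:

```python
import random

def pso(f, dim, iters=200, swarm=20, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal global-best PSO sketch for minimizing f over R^dim."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]              # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest

# usage: minimize a sphere function standing in for the motion objective
random.seed(1)
best = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

    In the paper's setting, `f` would instead evaluate a candidate motion's stability in the physics simulation, which is where the bulk of the 900 experiments' cost lies.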

    Natural Evolution Strategies

    Editor: Una-May O’Reilly
    This paper presents Natural Evolution Strategies (NES), a recent family of black-box optimization algorithms that use the natural gradient to update a parameterized search distribution in the direction of higher expected fitness. We introduce a collection of techniques that address issues of convergence, robustness, sample complexity, computational complexity and sensitivity to hyperparameters. This paper explores a number of implementations of the NES family, such as general-purpose multi-variate normal distributions and separable distributions tailored towards search in high-dimensional spaces. Experimental results show best published performance on various standard benchmarks, as well as competitive performance on others.
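    To make the core idea concrete, here is a minimal sketch of a separable NES variant (an axis-aligned Gaussian search distribution updated along the natural gradient, with rank-based fitness shaping). This is a toy illustration, not the paper's implementation; the objective and all hyperparameter values are assumptions:

```python
import math
import random

def snes(f, mu, sigma, iters=300, pop=10, eta_mu=1.0, eta_sigma=None):
    """Separable-NES sketch: minimize f with a Gaussian N(mu, diag(sigma^2)).

    Both mu and sigma follow the natural gradient of expected fitness,
    estimated from pop samples per generation with rank-based utilities.
    """
    dim = len(mu)
    if eta_sigma is None:
        # a dimension-dependent default learning rate, as in the NES literature
        eta_sigma = (3 + math.log(dim)) / (5 * math.sqrt(dim))
    # rank-based utility values: best samples get positive weight, worst negative
    u = [max(0.0, math.log(pop / 2 + 1) - math.log(k + 1)) for k in range(pop)]
    total = sum(u)
    u = [x / total - 1.0 / pop for x in u]
    for _ in range(iters):
        noise = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(pop)]
        samples = [[mu[d] + sigma[d] * n[d] for d in range(dim)] for n in noise]
        order = sorted(range(pop), key=lambda k: f(samples[k]))  # best first
        # natural-gradient estimates for mean and (log) step sizes
        g_mu = [sum(u[r] * noise[order[r]][d] for r in range(pop))
                for d in range(dim)]
        g_sig = [sum(u[r] * (noise[order[r]][d] ** 2 - 1) for r in range(pop))
                 for d in range(dim)]
        mu = [mu[d] + eta_mu * sigma[d] * g_mu[d] for d in range(dim)]
        sigma = [sigma[d] * math.exp(0.5 * eta_sigma * g_sig[d])
                 for d in range(dim)]
    return mu

# usage: minimize a sphere function shifted to (3, 3)
random.seed(0)
best = snes(lambda x: sum((xi - 3) ** 2 for xi in x),
            mu=[0.0, 0.0], sigma=[1.0, 1.0])
```

    The multiplicative update of `sigma` via `exp` is what distinguishes the natural-gradient step from a plain gradient step here: it keeps the step sizes positive and makes the update invariant to their scale.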

    An Algorithm for Parallelizing Sequential Minimal Optimization

    Learning Bounds for Support Vector Machines with Learned Kernels

    Consider the problem of learning a kernel for use in SVM classification. We bound the estimation error of a large margin classifier when the kernel, relative to which this margin is defined, is chosen from a family of kernels based on the training sample. For a kernel family with pseudodimension dφ, we present a bound of Õ(dφ + 1/γ²)/n on the estimation error for SVMs with margin γ. This is the first bound in which the relation between the margin term and the family-of-kernels term is additive rather than multiplicative. The pseudodimension of families of linear combinations of base kernels is the number of base kernels. Unlike in previous (multiplicative) bounds, there is no non-negativity requirement on the coefficients of the linear combinations. We also give simple bounds on the pseudodimension for families of Gaussian kernels.