9 research outputs found

    Learning to learn with an evolutionary strategy applied to variational quantum algorithms

    Variational Quantum Algorithms (VQAs) employ parameterized quantum circuits whose parameters are optimized with classical methods to minimize a cost function. While VQAs have found broad applications, certain challenges persist. Notably, a significant computational burden arises during parameter optimization: the prevailing ``parameter shift rule'' mandates two evaluations of the cost function for each parameter. In this article, we introduce a novel optimization approach named ``Learning to Learn with an Evolutionary Strategy'' (LLES), which unifies the ``Learning to Learn'' and ``Evolutionary Strategy'' methods. ``Learning to Learn'' treats optimization as a learning problem, utilizing recurrent neural networks to iteratively propose VQA parameters. ``Evolutionary Strategy'', in turn, estimates function gradients from random perturbations. We apply our optimization method to two distinct tasks: determining the ground state of an Ising Hamiltonian and training a quantum neural network. The results underscore the efficacy of this novel approach. Additionally, we identify a key hyperparameter that significantly influences gradient estimation with the ``Evolutionary Strategy'' method.
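The abstract's core contrast is between the parameter-shift rule (two circuit evaluations per parameter) and an Evolutionary Strategy, which estimates the full gradient from random perturbations of the whole parameter vector. A minimal sketch of the antithetic ES gradient estimator, with a simple quadratic standing in for the VQA cost function (the cost, the perturbation scale `sigma`, and the sample count are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(theta):
    # Stand-in for the VQA cost (e.g. an Ising-Hamiltonian expectation
    # value); a simple quadratic with its minimum at theta = 1.
    return float(np.sum((theta - 1.0) ** 2))

def es_gradient(theta, sigma=0.1, n_samples=50):
    # Antithetic ES estimator: each Gaussian perturbation is evaluated
    # in both directions, so only cost evaluations are needed -- no
    # analytic or parameter-shift gradients.
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        eps = rng.standard_normal(theta.shape)
        grad += eps * (cost(theta + sigma * eps) - cost(theta - sigma * eps))
    return grad / (2.0 * sigma * n_samples)

theta = rng.standard_normal(4)
for _ in range(200):
    theta -= 0.1 * es_gradient(theta)

print(cost(theta))  # close to 0
```

The perturbation scale `sigma` is exactly the kind of hyperparameter the abstract flags: too small and the cost differences drown in measurement noise, too large and the estimator is biased by curvature.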

    Review and Classification of Bio-inspired Algorithms and Their Applications

    Scientists have long looked to nature and biology in order to understand and model solutions for complex real-world problems. The study of bionics bridges the functions, biological structures, and organizational principles found in nature with our modern technologies, and numerous mathematical and metaheuristic algorithms have been developed in the process of transferring knowledge from lifeforms to human technologies. The output of bionics research includes not only physical products but also a variety of optimization methods that can be applied in different areas. Related algorithms can broadly be divided into four groups: evolutionary-based bio-inspired algorithms, swarm intelligence-based bio-inspired algorithms, ecology-based bio-inspired algorithms, and multi-objective bio-inspired algorithms. Bio-inspired algorithms such as neural networks, ant colony algorithms, and particle swarm optimization have been applied in almost every area of science, engineering, and business management, with a dramatic increase in the number of relevant publications. This paper provides a systematic, pragmatic, and comprehensive review of the latest developments in all four groups.
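Of the four groups the review names, the swarm intelligence family is perhaps the easiest to illustrate compactly. A minimal particle swarm optimization sketch on a toy sphere function (the inertia weight `w`, the cognitive and social coefficients `c1`/`c2`, and the swarm size are common textbook defaults, assumed here rather than taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    # Toy objective with its minimum at the origin.
    return float(np.sum(x ** 2))

def pso(dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                                # per-particle bests
    pbest_val = np.array([sphere(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()        # swarm-wide best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity blends inertia, attraction to each particle's own
        # best, and attraction to the swarm's best.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([sphere(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

best, best_val = pso()
print(best_val)  # near 0
```

The same structure (a population, a fitness function, and a nature-inspired update rule) recurs across the evolutionary and ecology-based groups as well; what changes is the update rule.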

    Trust Region Evolution Strategies

    Evolution Strategies (ES), a class of black-box optimization algorithms, have recently been demonstrated to be a viable alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients. ES achieves fairly good performance on challenging reinforcement learning problems and is easier to scale in a distributed setting. However, standard ES algorithms perform one gradient update per data sample, which is not very efficient. In this paper, in order to use sampled data more efficiently, we propose a novel iterative procedure that optimizes a surrogate objective function, enabling data samples to be reused over multiple epochs of updates. We prove a monotonic improvement guarantee for this procedure. By making several approximations to the theoretically justified procedure, we further develop a practical algorithm called Trust Region Evolution Strategies (TRES). Our experiments demonstrate the effectiveness of TRES on a range of popular MuJoCo locomotion tasks in the OpenAI Gym, achieving better performance than the standard ES algorithm.
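The paper's key move is to reuse one batch of ES samples for several updates via a surrogate objective, with a trust region keeping the reused samples valid. The sketch below illustrates only that idea, not the TRES algorithm itself: Gaussian importance weights correct for the drift of the search distribution, and a crude KL-style threshold stands in for the trust region (the toy reward, step sizes, and threshold are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def reward(theta):
    # Toy stand-in for an RL return; maximum (value 0) at theta = 2.
    return float(-np.sum((theta - 2.0) ** 2))

sigma = 0.5
theta = np.zeros(3)

for batch in range(30):
    # Sample one batch of perturbed parameters from N(theta, sigma^2 I).
    eps = rng.standard_normal((40, theta.size))
    samples = theta + sigma * eps
    rewards = np.array([reward(s) for s in samples])
    theta_old = theta.copy()
    # Reuse the same batch for several surrogate updates.
    for _ in range(5):
        # Importance weights w_i = N(s_i; theta) / N(s_i; theta_old).
        logw = (-np.sum((samples - theta) ** 2, axis=1)
                + np.sum((samples - theta_old) ** 2, axis=1)) / (2 * sigma ** 2)
        w = np.exp(logw)
        # Gradient of the weighted surrogate E[w * R] w.r.t. theta.
        grad = np.mean(w[:, None] * rewards[:, None]
                       * (samples - theta) / sigma ** 2, axis=0)
        theta = theta + 0.05 * grad
        # Trust-region stand-in: stop reusing the batch once the new
        # distribution has drifted too far from the sampling one.
        kl = np.sum((theta - theta_old) ** 2) / (2 * sigma ** 2)
        if kl > 0.05:
            break

print(reward(theta))
```

Without the drift check, the importance weights blow up as `theta` moves away from `theta_old` and the reused batch stops being informative; bounding that drift is precisely the role the trust region plays in TRES.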
