
    A Quantum States Preparation Method Based on Difference-Driven Reinforcement Learning

    Due to the large state space of the two-qubit system and the ladder reward functions used in existing quantum state preparation methods, convergence is slow and it is difficult to prepare the desired target quantum state with high fidelity under limited conditions. To solve these problems, a difference-driven reinforcement learning (RL) algorithm for quantum state preparation in two-qubit systems is proposed, improving both the reward function and the action selection strategy. First, a model is constructed for the problem of preparing quantum states of a two-qubit system, with restrictions on the types of quantum gates and the time allowed for quantum state evolution. During preparation, a weighted differential dynamic reward function is designed to help the algorithm quickly obtain the maximum expected cumulative reward. Then, an adaptive ε-greedy action selection strategy is adopted to balance exploration and exploitation, thereby improving the fidelity of the final quantum state. Simulation results show that the proposed algorithm can prepare quantum states with high fidelity under limited conditions, and that it improves on other algorithms in both convergence speed and final-state fidelity.
    Comment: 12 pages, 9 figures
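
    Illustrative sketch: the two ingredients named in the abstract, a difference-driven reward and an adaptive ε-greedy policy, might look as follows. The weighting, decay schedule, and function names are assumptions for illustration, not the paper's exact definitions.

        import random

        def difference_reward(fid_now, fid_prev, weight=10.0):
            # Reward driven by the step-to-step change in fidelity; the weight
            # and exact functional form are illustrative, not the paper's.
            return weight * (fid_now - fid_prev)

        def adaptive_epsilon(episode, eps_start=1.0, eps_min=0.01, decay=0.995):
            # Epsilon decays over episodes, shifting the policy from
            # exploration toward exploitation.
            return max(eps_min, eps_start * decay ** episode)

        def select_gate(q_values, epsilon):
            # Epsilon-greedy choice over a discrete set of allowed quantum gates.
            if random.random() < epsilon:
                return random.randrange(len(q_values))  # explore
            return max(range(len(q_values)), key=q_values.__getitem__)  # exploit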

    Paired Comparisons-based Interactive Differential Evolution

    We propose an Interactive Differential Evolution (IDE) based on paired comparisons for reducing user fatigue, and evaluate its convergence speed in comparison with Interactive Genetic Algorithms (IGA) and tournament IGA. User interface and convergence performance are the two key factors for reducing user fatigue in Interactive Evolutionary Computation (IEC). Unlike IGA and conventional IDE, users of the proposed IDE and of tournament IGA do not need to compare all individuals with one another but only pairs of individuals, which greatly decreases user fatigue. In this paper, we design a pseudo-IEC user and evaluate the other factor, IEC convergence performance, using IEC simulators, and show that the proposed IDE converges significantly faster than IGA and tournament IGA; it is therefore superior from both the user interface and the convergence performance points of view.
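
    As a sketch of how paired comparisons replace population-wide evaluation, the DE selection step below asks a user (real or simulated) to judge only target-trial pairs; user_prefers is an assumed callback standing in for the human comparison, not an interface from the paper.

        import random

        def de_trial(pop, i, F=0.5, CR=0.9):
            # Standard DE/rand/1/bin trial vector for individual i
            # (assumes a population of at least four real-valued vectors).
            r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
            dims = len(pop[i])
            j_rand = random.randrange(dims)
            return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                    if (random.random() < CR or d == j_rand) else pop[i][d]
                    for d in range(dims)]

        def ide_generation(pop, user_prefers):
            # One IDE generation: the human only ever judges (trial, target)
            # pairs, never the whole population. user_prefers(a, b) is a
            # hypothetical stand-in returning True if a is preferred over b.
            next_pop = []
            for i, target in enumerate(pop):
                trial = de_trial(pop, i)
                next_pop.append(trial if user_prefers(trial, target) else target)
            return next_pop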

    A Powerful Optimization Tool for Analog Integrated Circuits Design

    This paper presents a new optimization tool for analog circuit design. The proposed tool is based on a robust version of the differential evolution optimization method. Corners of process technology, temperature, and voltage and current supplies are taken into account during the optimization, which ensures robust resulting circuits: they usually need no schematic change and are ready for layout. The newly developed tool is implemented directly in the Cadence design environment to achieve a very short setup time for the optimization task. The design automation procedure is enhanced by an optimization watchdog feature, created to monitor optimization progress and to reduce the search space, producing better designs in less time. The optimization algorithm presented in this paper was successfully tested on several design examples.
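
    The corner-aware evaluation can be sketched as a worst-case objective: each candidate sizing is simulated at every corner and scored by its worst result, so the optimizer favors designs that stay within spec everywhere. The simulate hook and the corner list below are assumptions for illustration, not the tool's actual interface.

        def robust_cost(candidate, simulate, corners):
            # Score a candidate by its worst corner; 'simulate' is a
            # hypothetical hook into the circuit simulator returning a
            # scalar cost (lower is better).
            return max(simulate(candidate, corner) for corner in corners)

        # Illustrative corner list: (process, temperature in degC, supply in V).
        corners = [("tt", 27, 1.80), ("ss", 125, 1.62), ("ff", -40, 1.98)]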

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in genetic algorithms (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, and research has shown that constructing superior exemplars for PSO is more effective. Hence, this paper first develops a new framework to organically hybridize PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers: the first generates exemplars, and the second updates particles as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm, termed genetic learning PSO (GL-PSO), is proposed. In particular, genetic operators generate the exemplars from which particles learn, and, in turn, the historical search information of the particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the particles' historical information, the constructed exemplars are both well diversified and of high quality. Under such guidance, the global search ability and the search efficiency of PSO are both enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.
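
    A minimal sketch of the two cascading layers, under simplifying assumptions: exemplars bred by dimension-wise crossover of personal and global bests plus mutation (GL-PSO's selection step is omitted for brevity), followed by a PSO update in which each particle learns from its exemplar. Parameter values and the exact operators here are illustrative, not the paper's.

        import random

        def breed_exemplar(pbest_i, gbest, pm=0.1, lo=-10.0, hi=10.0):
            # Genetic layer: crossover of a particle's personal best with the
            # global best, plus uniform mutation (simplified sketch).
            exemplar = []
            for p, g in zip(pbest_i, gbest):
                r = random.random()
                e = r * p + (1.0 - r) * g        # crossover
                if random.random() < pm:         # mutation
                    e = random.uniform(lo, hi)
                exemplar.append(e)
            return exemplar

        def update_particle(x, v, exemplar, w=0.7, c=1.5):
            # PSO layer: the particle learns from its bred exemplar instead
            # of separate pbest/gbest attractors.
            new_v = [w * vi + c * random.random() * (e - xi)
                     for xi, vi, e in zip(x, v, exemplar)]
            new_x = [xi + vi for xi, vi in zip(x, new_v)]
            return new_x, new_v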