Multistart Methods for Quantum Approximate Optimization
Hybrid quantum-classical algorithms such as the quantum approximate
optimization algorithm (QAOA) are considered one of the most promising
approaches for leveraging near-term quantum computers for practical
applications. Such algorithms are often implemented in a variational form,
combining classical optimization methods with a quantum machine to find
parameters to maximize performance. The quality of the QAOA solution depends
heavily on the quality of the parameters produced by the classical optimizer.
Moreover, the presence of multiple local optima in the parameter space makes
the task of the classical optimizer harder. In this paper we study the use of
a multistart optimization approach within a QAOA framework to improve the
performance of quantum machines on important graph clustering problems. We also
demonstrate that reusing the optimal parameters from similar problems can
improve the performance of classical optimization methods, expanding on similar
results for MAXCUT.
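A minimal sketch of the multistart idea (the objective below is a synthetic rugged stand-in, not a QAOA energy evaluated on quantum hardware, and COBYLA is just one reasonable derivative-free local optimizer): run a local search from many random initial angle vectors and keep the best result.

    import numpy as np
    from scipy.optimize import minimize

    def energy(params):
        # Hypothetical stand-in for the QAOA expectation value; deliberately
        # rugged so that a single local search can get stuck in local optima.
        return np.sum(np.sin(3.0 * params) ** 2 + 0.1 * params ** 2)

    def multistart(n_starts=20, p=2, seed=0):
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_starts):
            x0 = rng.uniform(-np.pi, np.pi, size=2 * p)  # (gamma, beta) angles
            res = minimize(energy, x0, method='COBYLA')  # local, gradient-free
            if best is None or res.fun < best.fun:
                best = res
        return best

    print(multistart().fun)

The same loop applies unchanged when energy() is an expectation value estimated on a quantum device, and reusing optima from similar instances amounts to seeding x0 from previously found parameters instead of sampling uniformly.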
Differentiable Programming Tensor Networks
Differentiable programming is a fresh programming paradigm which composes
parameterized algorithmic components and trains them using automatic
differentiation (AD). The concept emerges from deep learning but is not
limited to training neural networks. We present the theory and practice of
programming tensor network algorithms in a fully differentiable way. By
formulating the tensor network algorithm as a computation graph, one can
compute higher order derivatives of the program accurately and efficiently
using AD. We present essential techniques to differentiate through tensor
network contractions, including stable AD for tensor decomposition and
efficient backpropagation through fixed point iterations. As a demonstration,
we compute the specific heat of the Ising model directly by taking the second
order derivative of the free energy obtained in the tensor renormalization
group calculation. Next, we perform gradient based variational optimization of
infinite projected entangled pair states for the quantum antiferromagnetic
Heisenberg model and obtain state-of-the-art variational energy and
magnetization with moderate effort. Differentiable programming removes the
laborious human effort of deriving and implementing analytical gradients for
tensor network programs, which opens the door to more innovations in tensor
network algorithms and applications.
Comment: Typos corrected, discussion and refs added; revised version accepted
for publication in PRX. Source code available at
https://github.com/wangleiphy/tensorgra
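A minimal sketch of the second-derivative trick (using the exactly contractible transfer matrix of the 1D Ising ring and JAX, not the paper's 2D tensor renormalization group code): the specific heat follows from differentiating log Z twice with AD.

    import jax
    import jax.numpy as jnp

    jax.config.update('jax_enable_x64', True)  # accuracy for second derivatives

    def log_Z(beta, N=20):
        # 2x2 transfer matrix of the ferromagnetic 1D Ising ring.
        T = jnp.exp(beta * jnp.array([[1.0, -1.0], [-1.0, 1.0]]))
        # Contract the ring of N transfer matrices: Z = tr(T^N).
        return jnp.log(jnp.trace(jnp.linalg.matrix_power(T, N)))

    # Specific heat per site (k_B = 1): C = beta^2 * d^2 log Z / d beta^2 / N.
    d2_log_Z = jax.grad(jax.grad(log_Z))
    beta, N = 1.0, 20
    print(beta ** 2 * d2_log_Z(beta, N) / N)

Replacing log_Z by a free energy obtained from a tensor renormalization group contraction leaves the last two lines unchanged, which is the point of formulating the contraction as a differentiable computation graph.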
A Field Guide to Genetic Programming
xiv, 233 p. : ill. ; 23 cm. Electronic book.
A Field Guide to Genetic Programming (ISBN 978-1-4092-0073-4) is an
introduction to genetic programming (GP). GP is a systematic,
domain-independent method for getting computers to solve problems
automatically, starting from a high-level statement of what needs to be done.
Using ideas from natural evolution, GP starts from an ooze of random computer
programs and progressively refines them through processes of mutation and
sexual recombination until solutions emerge, all without the user having to
know or specify the form or structure of solutions in advance. GP has
generated a plethora of human-competitive results and applications, including
novel scientific discoveries and patentable inventions; a minimal Python
illustration of the evolutionary loop follows the contents list below.
Contents:
1 Introduction
1.1 Genetic Programming in a Nutshell
1.2 Getting Started
1.3 Prerequisites
1.4 Overview of this Field Guide
I Basics
2 Representation, Initialisation and Operators in Tree-based GP
2.1 Representation
2.2 Initialising the Population
2.3 Selection
2.4 Recombination and Mutation
3 Getting Ready to Run Genetic Programming
3.1 Step 1: Terminal Set
3.2 Step 2: Function Set
3.2.1 Closure
3.2.2 Sufficiency
3.2.3 Evolving Structures other than Programs
3.3 Step 3: Fitness Function
3.4 Step 4: GP Parameters
3.5 Step 5: Termination and Solution Designation
4 Example Genetic Programming Run
4.1 Preparatory Steps
4.2 Step-by-Step Sample Run
4.2.1 Initialisation
4.2.2 Fitness Evaluation
4.2.3 Selection, Crossover and Mutation
4.2.4 Termination and Solution Designation
II Advanced Genetic Programming
5 Alternative Initialisations and Operators in Tree-based GP
5.1 Constructing the Initial Population
5.1.1 Uniform Initialisation
5.1.2 Initialisation may Affect Bloat
5.1.3 Seeding
5.2 GP Mutation
5.2.1 Is Mutation Necessary?
5.2.2 Mutation Cookbook
5.3 GP Crossover
5.4 Other Techniques
6 Modular, Grammatical and Developmental Tree-based GP
6.1 Evolving Modular and Hierarchical Structures
6.1.1 Automatically Defined Functions
6.1.2 Program Architecture and Architecture-Altering Operations
6.2 Constraining Structures
6.2.1 Enforcing Particular Structures
6.2.2 Strongly Typed GP
6.2.3 Grammar-based Constraints
6.2.4 Constraints and Bias
6.3 Developmental Genetic Programming
6.4 Strongly Typed Autoconstructive GP with PushGP
7 Linear and Graph Genetic Programming
7.1 Linear Genetic Programming
7.1.1 Motivations
7.1.2 Linear GP Representations
7.1.3 Linear GP Operators
7.2 Graph-Based Genetic Programming
7.2.1 Parallel Distributed GP (PDGP)
7.2.2 PADO
7.2.3 Cartesian GP
7.2.4 Evolving Parallel Programs using Indirect Encodings
8 Probabilistic Genetic Programming
8.1 Estimation of Distribution Algorithms
8.2 Pure EDA GP
8.3 Mixing Grammars and Probabilities
9 Multi-objective Genetic Programming
9.1 Combining Multiple Objectives into a Scalar Fitness Function
9.2 Keeping the Objectives Separate
9.2.1 Multi-objective Bloat and Complexity Control
9.2.2 Other Objectives
9.2.3 Non-Pareto Criteria
9.3 Multiple Objectives via Dynamic and Staged Fitness Functions
9.4 Multi-objective Optimisation via Operator Bias
10 Fast and Distributed Genetic Programming
10.1 Reducing Fitness Evaluations/Increasing their Effectiveness
10.2 Reducing Cost of Fitness with Caches
10.3 Parallel and Distributed GP are Not Equivalent
10.4 Running GP on Parallel Hardware
10.4.1 Master–slave GP
10.4.2 GP Running on GPUs
10.4.3 GP on FPGAs
10.4.4 Sub-machine-code GP
10.5 Geographically Distributed GP
11 GP Theory and its Applications
11.1 Mathematical Models
11.2 Search Spaces
11.3 Bloat
11.3.1 Bloat in Theory
11.3.2 Bloat Control in Practice
III Practical Genetic Programming
12 Applications
12.1 Where GP has Done Well
12.2 Curve Fitting, Data Modelling and Symbolic Regression
12.3 Human Competitive Results – the Humies
12.4 Image and Signal Processing
12.5 Financial Trading, Time Series, and Economic Modelling
12.6 Industrial Process Control
12.7 Medicine, Biology and Bioinformatics
12.8 GP to Create Searchers and Solvers – Hyper-heuristics
12.9 Entertainment and Computer Games
12.10 The Arts
12.11 Compression
13 Troubleshooting GP
13.1 Is there a Bug in the Code?
13.2 Can you Trust your Results?
13.3 There are No Silver Bullets
13.4 Small Changes can have Big Effects
13.5 Big Changes can have No Effect
13.6 Study your Populations
13.7 Encourage Diversity
13.8 Embrace Approximation
13.9 Control Bloat
13.10 Checkpoint Results
13.11 Report Well
13.12 Convince your Customers
14 Conclusions
IV Tricks of the Trade
A Resources
A.1 Key Books
A.2 Key Journals
A.3 Key International Meetings
A.4 GP Implementations
A.5 On-Line Resources
B TinyGP
B.1 Overview of TinyGP
B.2 Input Data Files for TinyGP
B.3 Source Code
B.4 Compiling and Running TinyGP
Bibliography
Index
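To make the description above concrete, here is a toy tree-based GP run in Python (a deliberately minimal sketch, not the book's TinyGP, whose source appears in Appendix B): it evolves expressions toward the target x^2 + x without being told the form of the solution in advance.

    import operator
    import random

    FUNCS = [(operator.add, 2), (operator.sub, 2), (operator.mul, 2)]
    TERMS = ['x', 1.0]

    def rand_tree(depth=3):
        # Grow a random program: nested (function, child, child) tuples.
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMS)
        f, arity = random.choice(FUNCS)
        return (f,) + tuple(rand_tree(depth - 1) for _ in range(arity))

    def evaluate(tree, x):
        if tree == 'x':
            return x
        if isinstance(tree, float):
            return tree
        return tree[0](*(evaluate(c, x) for c in tree[1:]))

    def fitness(tree):
        # Sum of absolute errors against the target x^2 + x (lower is better).
        return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

    def nodes(tree, path=()):
        # Enumerate paths to all nodes, for picking crossover/mutation points.
        yield path
        if isinstance(tree, tuple):
            for i, child in enumerate(tree[1:], 1):
                yield from nodes(child, path + (i,))

    def get(tree, path):
        for i in path:
            tree = tree[i]
        return tree

    def put(tree, path, sub):
        if not path:
            return sub
        i = path[0]
        return tree[:i] + (put(tree[i], path[1:], sub),) + tree[i + 1:]

    def crossover(a, b):
        # Replace a random subtree of a with a random subtree of b.
        return put(a, random.choice(list(nodes(a))),
                   get(b, random.choice(list(nodes(b)))))

    def mutate(a):
        # Replace a random subtree with a freshly grown one.
        return put(a, random.choice(list(nodes(a))), rand_tree(2))

    def tournament(pop, k=5):
        return min(random.sample(pop, k), key=fitness)

    random.seed(0)
    pop = [rand_tree() for _ in range(200)]
    for _ in range(30):
        pop = [mutate(tournament(pop)) if random.random() < 0.2
               else crossover(tournament(pop), tournament(pop))
               for _ in range(len(pop))]
    print(fitness(min(pop, key=fitness)))

Tournament selection, subtree crossover and subtree mutation are the tree-based operators covered in Chapters 2 and 5 of the contents list above.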
An efficient shooting algorithm for Evans function calculations in large systems
In Evans function computations of the spectra of asymptotically
constant-coefficient linear operators, a basic issue is the efficient and
numerically stable computation of subspaces evolving according to the
associated eigenvalue ODE. For small systems, a fast, shooting algorithm may be
obtained by representing subspaces as single exterior products
\cite{AS,Br.1,Br.2,BrZ,BDG}. For large systems, however, the dimension of the
exterior-product space quickly becomes prohibitive, growing as $\binom{n}{k}$,
where $n$ is the dimension of the system written as a first-order ODE and $k$
(typically $\sim n/2$) is the dimension of the subspace. We resolve this
difficulty by the introduction of a simple polar coordinate algorithm
representing ``pure'' (monomial) products as scalar multiples of orthonormal
bases, for which the angular equation is a numerically optimized version of the
continuous orthogonalization method of Drury--Davey \cite{Da,Dr} and the radial
equation is evaluable by quadrature. Notably, the polar-coordinate method
preserves the important property of analyticity with respect to parameters.
Comment: 21 pp., two figures
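A minimal real-valued sketch of the polar-coordinate idea (toy notation and a toy constant-coefficient $A$, not the authors' code): integrate the orthonormal frame Omega by the continuous-orthogonalization (angular) equation and accumulate the logarithm of the radial part by quadrature of tr(Omega^T A Omega).

    import numpy as np
    from scipy.integrate import solve_ivp

    def polar_evolve(A, Omega0, x_span):
        # Evolve span(Omega) under W' = A(x) W, keeping Omega an orthonormal
        # n x k frame; state is [Omega.ravel(), log_rho], A maps x to (n, n).
        n, k = Omega0.shape

        def rhs(x, y):
            Omega = y[:n * k].reshape(n, k)
            AO = A(x) @ Omega
            coef = Omega.T @ AO            # Omega^T A Omega (k x k)
            dOmega = AO - Omega @ coef     # angular: (I - Omega Omega^T) A Omega
            dlogrho = np.trace(coef)       # radial part, evaluated by quadrature
            return np.concatenate([dOmega.ravel(), [dlogrho]])

        y0 = np.concatenate([Omega0.ravel(), [0.0]])
        sol = solve_ivp(rhs, x_span, y0, rtol=1e-10, atol=1e-12)
        return sol.y[:n * k, -1].reshape(n, k), sol.y[-1, -1]

    # Toy usage: a k = 2 subspace in n = 4 dimensions.
    A = lambda x: np.diag([1.0, 0.5, -0.5, -1.0])
    Omega0 = np.linalg.qr(np.random.default_rng(0).standard_normal((4, 2)))[0]
    Omega, log_rho = polar_evolve(A, Omega0, (0.0, 5.0))

The integrated state has $nk + 1$ components rather than the $\binom{n}{k}$ of the exterior-product representation, which is the point of the construction.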
Phase diagram of the isotropic spin-3/2 model on the z=3 Bethe lattice
We study an SU(2) symmetric spin-3/2 model on the z=3 Bethe lattice using the
infinite Time Evolving Block Decimation (iTEBD) method. This model is shown to
exhibit a rich phase diagram. We compute the expectation values of several
order parameters which allow us to identify a ferromagnetic, a ferrimagnetic,
an anti-ferromagnetic, as well as a dimerized phase. We calculate the entanglement
spectra from which we conclude the existence of a symmetry protected
topological phase that is characterized by S=1/2 edge spins. Details of the
iTEBD algorithm used for the simulations are included.
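A minimal sketch of the basic iTEBD bond update (written here for a 1D chain in the Gamma-lambda form, assuming a uniform bond dimension chi; not the z=3 Bethe-lattice generalization the paper details): absorb the neighbouring bond weights, apply a two-site gate, and truncate back with an SVD.

    import numpy as np

    def itebd_update(GA, GB, lamA, lamB, gate, chi):
        # One update on the A-B bond. GA, GB: (chi, d, chi) site tensors;
        # lamA, lamB: (chi,) bond weights; gate: (d, d, d, d) two-site gate;
        # chi: number of singular values kept.
        theta = np.einsum('a,aib,b,bjc,c->aijc', lamB, GA, lamA, GB, lamB)
        theta = np.einsum('ijkl,aklc->aijc', gate, theta)  # act on physical legs
        chiL, d = theta.shape[0], theta.shape[1]
        U, S, Vh = np.linalg.svd(theta.reshape(chiL * d, d * chiL),
                                 full_matrices=False)
        lamA_new = S[:chi] / np.linalg.norm(S[:chi])  # truncate and renormalize
        # Strip the outer weights off again to recover Gamma tensors.
        GA_new = np.einsum('a,aib->aib', 1.0 / lamB,
                           U[:, :chi].reshape(chiL, d, chi))
        GB_new = np.einsum('bjc,c->bjc', Vh[:chi].reshape(chi, d, chiL),
                           1.0 / lamB)
        return GA_new, GB_new, lamA_new

On a z = 3 Bethe lattice each site tensor carries three virtual legs, so the analogous update is applied bond by bond with the weights on the spectator legs absorbed in the same way.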