106,798 research outputs found
A scalable parallel algorithm for multiple objective linear programs
This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLPs). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLPs are also included.
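For reference, the standard MOLP form and the notion of efficiency the abstract relies on can be written as follows (generic notation, not taken from the paper):

    % k linear objectives collected in a k x n matrix C:
    \begin{align*}
      \max\;& Cx \\
      \text{s.t.}\;& Ax \le b, \quad x \ge 0.
    \end{align*}
    % A feasible point x* is efficient if no feasible x satisfies
    % Cx >= Cx* with Cx != Cx*. "Solving" the MOLP in the sense above
    % means enumerating the efficient extreme points (and unbounded
    % efficient edges) of the feasible polyhedron.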
Scaling Package Queries to a Billion Tuples via Hierarchical Partitioning and Customized Optimization
A package query returns a package -- a multiset of tuples -- that maximizes or minimizes a linear objective function subject to linear constraints, thereby enabling in-database decision support. Prior work has established the equivalence of package queries to Integer Linear Programs (ILPs) and developed the SketchRefine algorithm for package query processing. While this algorithm was an important first step toward supporting prescriptive analytics scalably inside a relational database, it struggles when the data size grows beyond a few hundred million tuples or when the constraints become very tight. In this paper, we present Progressive Shading, a novel algorithm for processing package queries that can scale efficiently to billions of tuples and gracefully handle tight constraints. Progressive Shading solves a sequence of optimization problems over a hierarchy of relations, each resulting from an ever-finer partitioning of the original tuples into homogeneous groups until the original relation is obtained. This strategy avoids the premature discarding of high-quality tuples that can occur with SketchRefine. Our novel partitioning scheme, Dynamic Low Variance, can handle very large relations with multiple attributes and can dynamically adapt to both concentrated and spread-out sets of attribute values, provably outperforming traditional partitioning schemes such as KD-Tree. We further optimize our system by replacing our off-the-shelf optimization software with customized ILP and LP solvers, called Dual Reducer and Parallel Dual Simplex respectively, that are highly accurate and orders of magnitude faster.
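The package-query-to-ILP equivalence the abstract cites is easy to illustrate. The sketch below models a toy "meal package" query with the open-source PuLP modeler; the table, bounds and solver choice are illustrative assumptions, not the paper's Progressive Shading system.

    import pulp

    # Hypothetical meals relation: (name, calories, protein).
    meals = [("m1", 650, 30), ("m2", 200, 5), ("m3", 400, 25), ("m4", 120, 2)]

    prob = pulp.LpProblem("package_query", pulp.LpMaximize)
    # x[i] = multiplicity of tuple i in the package (a multiset, so x[i] >= 0).
    x = [pulp.LpVariable(f"x_{name}", lowBound=0, upBound=3, cat="Integer")
         for name, _, _ in meals]

    prob += pulp.lpSum(x[i] * meals[i][2] for i in range(len(meals)))          # maximize protein
    prob += pulp.lpSum(x[i] * meals[i][1] for i in range(len(meals))) <= 2000  # calorie budget
    prob += pulp.lpSum(x) <= 5                                                 # package size cap

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([(meals[i][0], int(x[i].value())) for i in range(len(meals))])

At a billion tuples this direct formulation is exactly what becomes infeasible to hand to a solver whole, which is the gap Progressive Shading targets.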
Linearized Alternating Direction Method with Parallel Splitting and Adaptive Penalty for Separable Convex Programs in Machine Learning
Many problems in machine learning and other fields can be (re)formulated as linearly constrained separable convex programs. In most cases, there are multiple blocks of variables. However, the traditional alternating direction method (ADM) and its linearized version (LADM, obtained by linearizing the quadratic penalty term) are designed for the two-block case and cannot be naively generalized to the multi-block case, so there is strong demand for extending ADM-based methods to multiple blocks. In this paper, we propose LADM with parallel splitting and adaptive penalty (LADMPSAP) to solve multi-block separable convex programs efficiently. When all the component objective functions have bounded subgradients, we obtain convergence results that are stronger than those of ADM and LADM, e.g., allowing the penalty parameter to be unbounded and proving the sufficient and necessary conditions for global convergence. We further propose a simple optimality measure and reveal the convergence rate of LADMPSAP in an ergodic sense. For programs with extra convex set constraints, we devise a practical version of LADMPSAP with refined parameter estimation for faster convergence. Finally, we generalize LADMPSAP to handle programs with more difficult objective functions by linearizing part of the objective function as well. LADMPSAP is particularly suitable for sparse representation and low-rank recovery problems because its subproblems have closed-form solutions and the sparsity and low-rankness of the iterates can be preserved during the iterations. It is also highly parallelizable and hence well suited to parallel or distributed computing. Numerical experiments testify to the advantages of LADMPSAP in speed and numerical accuracy.
Comment: Preliminary version published at the Asian Conference on Machine Learning 2013.
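The problem class in question is the linearly constrained separable convex program with multiple blocks; in generic notation (assumed, not quoted from the paper):

    \begin{align*}
      \min_{x_1,\dots,x_n}\;& \sum_{i=1}^{n} f_i(x_i) \\
      \text{s.t.}\;& \sum_{i=1}^{n} A_i x_i = b,
    \end{align*}
    % where each f_i is convex. ADM/LADM are designed for n = 2;
    % LADMPSAP updates all n blocks in parallel and adapts the
    % penalty parameter across iterations.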
A Parallelizable Acceleration Framework for Packing Linear Programs
This paper presents an acceleration framework for packing linear programming problems where the amount of data available is limited, i.e., where the number of constraints m is small compared to the variable dimension n. The framework can be used as a black box to speed up linear programming solvers dramatically, by two orders of magnitude in our experiments. We present worst-case guarantees on the quality of the solution and the speedup provided by the algorithm, showing that the framework provides an approximately optimal solution while running the original solver on a much smaller problem. The framework can be used to accelerate exact solvers, approximate solvers, and parallel/distributed solvers. Further, it can be used for both linear programs and integer linear programs.
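A packing LP, the problem class named in the title, has the standard form below (generic notation, all data nonnegative):

    \begin{align*}
      \max\;& c^{\top} x \\
      \text{s.t.}\;& Ax \le b, \quad x \ge 0,
    \end{align*}
    % with A \in \mathbb{R}_{\ge 0}^{m \times n}, b \ge 0, c \ge 0;
    % the framework targets the regime m << n, where running the
    % original solver on a much smaller problem can still yield an
    % approximately optimal solution to the full program.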
A Field Guide to Genetic Programming
xiv, 233 p. : ill. ; 23 cm. Electronic book. A Field Guide to Genetic Programming (ISBN 978-1-4092-0073-4) is an introduction to genetic programming (GP). GP is a systematic, domain-independent method for getting computers to solve problems automatically, starting from a high-level statement of what needs to be done. Using ideas from natural evolution, GP starts from an ooze of random computer programs and progressively refines them through processes of mutation and sexual recombination until solutions emerge, all without the user having to know or specify the form or structure of solutions in advance. GP has generated a plethora of human-competitive results and applications, including novel scientific discoveries and patentable inventions.
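The GP loop this description sketches in prose can be made concrete. Below is a minimal, self-contained Python sketch of tree-based GP for a toy symbolic-regression task (target x^2 + x); the representation, operators and all parameters are illustrative assumptions, not the book's TinyGP (Appendix B).

    import random

    # Function set: binary operators; terminal set: the variable x and a constant.
    FUNCS = {'+': lambda a, b: a + b,
             '-': lambda a, b: a - b,
             '*': lambda a, b: a * b}
    TERMS = ['x', 1.0]

    def rand_tree(depth=3):
        # Grow a random expression tree: a terminal, or (op, left, right).
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMS)
        return (random.choice(list(FUNCS)),
                rand_tree(depth - 1), rand_tree(depth - 1))

    def evaluate(tree, x):
        if isinstance(tree, tuple):
            return FUNCS[tree[0]](evaluate(tree[1], x), evaluate(tree[2], x))
        return x if tree == 'x' else tree

    def fitness(tree):
        # Lower is better: total error against the toy target x^2 + x.
        return sum(abs(evaluate(tree, v) - (v * v + v)) for v in range(-5, 6))

    def node_paths(tree, path=()):
        # Every node's position, encoded as a tuple of child indices (1 or 2).
        paths = [path]
        if isinstance(tree, tuple):
            paths += node_paths(tree[1], path + (1,))
            paths += node_paths(tree[2], path + (2,))
        return paths

    def get_at(tree, path):
        for i in path:
            tree = tree[i]
        return tree

    def set_at(tree, path, sub):
        if not path:
            return sub
        node = list(tree)
        node[path[0]] = set_at(tree[path[0]], path[1:], sub)
        return tuple(node)

    def crossover(a, b):
        # Subtree crossover ("sexual recombination"): splice a random
        # subtree of b into a random point of a.
        return set_at(a, random.choice(node_paths(a)),
                      get_at(b, random.choice(node_paths(b))))

    def mutate(tree):
        # Subtree mutation: replace a random node with a fresh random subtree.
        return set_at(tree, random.choice(node_paths(tree)), rand_tree(2))

    pop = [rand_tree() for _ in range(200)]
    for gen in range(50):
        pop.sort(key=fitness)      # selection pressure: keep the best half
        if fitness(pop[0]) < 1e-9:
            break
        parents = pop[:100]
        offspring = []
        while len(offspring) < 100:
            a, b = random.sample(parents, 2)
            child = crossover(a, b)
            if random.random() < 0.2:
                child = mutate(child)
            offspring.append(child)
        pop = parents + offspring
    print("generation", gen, "best fitness", fitness(pop[0]), "program", pop[0])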
Contents
1 Introduction
1.1 Genetic Programming in a Nutshell
1.2 Getting Started
1.3 Prerequisites
1.4 Overview of this Field Guide
I Basics
2 Representation, Initialisation and Operators in Tree-based GP
2.1 Representation
2.2 Initialising the Population
2.3 Selection
2.4 Recombination and Mutation
3 Getting Ready to Run Genetic Programming
3.1 Step 1: Terminal Set
3.2 Step 2: Function Set
3.2.1 Closure
3.2.2 Sufficiency
3.2.3 Evolving Structures other than Programs
3.3 Step 3: Fitness Function
3.4 Step 4: GP Parameters
3.5 Step 5: Termination and Solution Designation
4 Example Genetic Programming Run
4.1 Preparatory Steps
4.2 Step-by-Step Sample Run
4.2.1 Initialisation
4.2.2 Fitness Evaluation
4.2.3 Selection, Crossover and Mutation
4.2.4 Termination and Solution Designation
II Advanced Genetic Programming
5 Alternative Initialisations and Operators in Tree-based GP
5.1 Constructing the Initial Population
5.1.1 Uniform Initialisation
5.1.2 Initialisation may Affect Bloat
5.1.3 Seeding
5.2 GP Mutation
5.2.1 Is Mutation Necessary?
5.2.2 Mutation Cookbook
5.3 GP Crossover
5.4 Other Techniques
6 Modular, Grammatical and Developmental Tree-based GP
6.1 Evolving Modular and Hierarchical Structures
6.1.1 Automatically Defined Functions
6.1.2 Program Architecture and Architecture-Altering
6.2 Constraining Structures
6.2.1 Enforcing Particular Structures
6.2.2 Strongly Typed GP
6.2.3 Grammar-based Constraints
6.2.4 Constraints and Bias
6.3 Developmental Genetic Programming
6.4 Strongly Typed Autoconstructive GP with PushGP
7 Linear and Graph Genetic Programming
7.1 Linear Genetic Programming
7.1.1 Motivations
7.1.2 Linear GP Representations
7.1.3 Linear GP Operators
7.2 Graph-Based Genetic Programming
7.2.1 Parallel Distributed GP (PDGP)
7.2.2 PADO
7.2.3 Cartesian GP
7.2.4 Evolving Parallel Programs using Indirect Encodings
8 Probabilistic Genetic Programming
8.1 Estimation of Distribution Algorithms
8.2 Pure EDA GP
8.3 Mixing Grammars and Probabilities
9 Multi-objective Genetic Programming
9.1 Combining Multiple Objectives into a Scalar Fitness Function
9.2 Keeping the Objectives Separate
9.2.1 Multi-objective Bloat and Complexity Control
9.2.2 Other Objectives
9.2.3 Non-Pareto Criteria
9.3 Multiple Objectives via Dynamic and Staged Fitness Functions
9.4 Multi-objective Optimisation via Operator Bias
10 Fast and Distributed Genetic Programming
10.1 Reducing Fitness Evaluations/Increasing their Effectiveness
10.2 Reducing Cost of Fitness with Caches
10.3 Parallel and Distributed GP are Not Equivalent
10.4 Running GP on Parallel Hardware
10.4.1 Master–slave GP
10.4.2 GP Running on GPUs
10.4.3 GP on FPGAs
10.4.4 Sub-machine-code GP
10.5 Geographically Distributed GP
11 GP Theory and its Applications
11.1 Mathematical Models
11.2 Search Spaces
11.3 Bloat
11.3.1 Bloat in Theory
11.3.2 Bloat Control in Practice
III Practical Genetic Programming
12 Applications
12.1 Where GP has Done Well
12.2 Curve Fitting, Data Modelling and Symbolic Regression
12.3 Human Competitive Results – the Humies
12.4 Image and Signal Processing
12.5 Financial Trading, Time Series, and Economic Modelling
12.6 Industrial Process Control
12.7 Medicine, Biology and Bioinformatics
12.8 GP to Create Searchers and Solvers – Hyper-heuristics
12.9 Entertainment and Computer Games
12.10 The Arts
12.11 Compression
13 Troubleshooting GP
13.1 Is there a Bug in the Code?
13.2 Can you Trust your Results?
13.3 There are No Silver Bullets
13.4 Small Changes can have Big Effects
13.5 Big Changes can have No Effect
13.6 Study your Populations
13.7 Encourage Diversity
13.8 Embrace Approximation
13.9 Control Bloat
13.10 Checkpoint Results
13.11 Report Well
13.12 Convince your Customers
14 Conclusions
IV Tricks of the Trade
A Resources
A.1 Key Books
A.2 Key Journals
A.3 Key International Meetings
A.4 GP Implementations
A.5 On-Line Resources
B TinyGP
B.1 Overview of TinyGP
B.2 Input Data Files for TinyGP
B.3 Source Code
B.4 Compiling and Running TinyGP
Bibliography
Index