Using genetic algorithms to solve combinatorial optimization problems
Genetic algorithms are stochastic search techniques based on the mechanics of natural selection and natural genetics. Genetic algorithms differ from traditional analytical methods by using genetic operators and historic cumulative information to prune the search space and generate plausible solutions. Recent research has shown that genetic algorithms have a broad and growing range of applications.
The research presented in this thesis applies genetic algorithms to some typical combinatorial optimization problems, namely the Clique, Vertex Cover and Max Cut problems, all of which are NP-complete. The empirical results show that genetic algorithms can provide efficient search heuristics for solving these combinatorial optimization problems.
Genetic algorithms are inherently parallel, and the Connection Machine system makes parallel implementations of them possible. Both sequential and parallel genetic algorithms for the Clique, Vertex Cover and Max Cut problems have been developed and implemented, on SUN4 and Connection Machine systems respectively.
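As a concrete illustration of the kind of algorithm the abstract describes, the sketch below evolves bitstring chromosomes for Max Cut with tournament selection, one-point crossover, bit-flip mutation and elitism. It is a minimal sequential toy, not the thesis's implementation; the graph and all parameter values (population size, rates, seed) are arbitrary choices for the example.

```python
import random

def max_cut_ga(edges, n_vertices, pop_size=40, generations=200,
               crossover_rate=0.8, mutation_rate=0.02, seed=0):
    """Toy genetic algorithm for Max Cut.  A chromosome is a bitstring
    assigning each vertex to one side of the cut; fitness is the number
    of edges whose endpoints fall on different sides."""
    rng = random.Random(seed)

    def fitness(chrom):
        return sum(1 for u, v in edges if chrom[u] != chrom[v])

    pop = [[rng.randint(0, 1) for _ in range(n_vertices)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = max(pop, key=fitness)          # elitism: keep the best
        next_pop = [elite[:]]
        while len(next_pop) < pop_size:
            # Binary tournament selection for each parent.
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            if rng.random() < crossover_rate:  # one-point crossover
                cut = rng.randrange(1, n_vertices)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Bit-flip mutation keeps diversity in the population.
            child = [b ^ 1 if rng.random() < mutation_rate else b
                     for b in child]
            next_pop.append(child)
        pop = next_pop
    best = max(pop, key=fitness)
    return best, fitness(best)

# A 4-cycle: the optimal cut separates alternating vertices, crossing all 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
partition, cut_size = max_cut_ga(edges, 4)
print(partition, cut_size)
```

The same skeleton parallelizes naturally: fitness evaluation of the population members is independent, which is the property the thesis exploits on the Connection Machine.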
Survey on Combinatorial Register Allocation and Instruction Scheduling
Register allocation (mapping variables to processor registers or memory) and
instruction scheduling (reordering instructions to increase instruction-level
parallelism) are essential tasks for generating efficient assembly code in a
compiler. In the last three decades, combinatorial optimization has emerged as
an alternative to traditional, heuristic algorithms for these two tasks.
Combinatorial optimization approaches can deliver optimal solutions according
to a model, can precisely capture trade-offs between conflicting decisions, and
are more flexible at the expense of increased compilation time.
This paper provides an exhaustive literature review and a classification of
combinatorial optimization approaches to register allocation and instruction
scheduling, with a focus on the techniques that are most applied in this
context: integer programming, constraint programming, partitioned Boolean
quadratic programming, and enumeration. Researchers in compilers and
combinatorial optimization can benefit from identifying developments, trends,
and challenges in the area; compiler practitioners may discern opportunities
and grasp the potential benefit of applying combinatorial optimization to these
tasks.
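To make the problem statement concrete: spill-free register allocation amounts to properly coloring the interference graph, and the simplest combinatorial approach the survey mentions, enumeration, can be sketched as below. The interference graph and register names are hypothetical, and real combinatorial allocators add spilling, coalescing and live-range splitting on top of this.

```python
from itertools import product

def allocate_registers(interference, variables, registers):
    """Enumeration-based register allocation: search exhaustively for an
    assignment of variables to registers in which no two interfering
    variables share a register, i.e. a proper coloring of the
    interference graph.  Returns None if no spill-free assignment exists."""
    for assignment in product(registers, repeat=len(variables)):
        mapping = dict(zip(variables, assignment))
        if all(mapping[u] != mapping[v] for u, v in interference):
            return mapping
    return None

# Hypothetical interference graph: a, b, c are pairwise live at the same
# time; d overlaps only with a.  Three registers suffice here.
interference = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "d")]
mapping = allocate_registers(interference, ["a", "b", "c", "d"],
                             ["r0", "r1", "r2"])
print(mapping)
```

Integer-programming and constraint-programming formulations replace this brute-force loop with a declarative model plus a solver, which is what makes optimality claims and trade-off modeling tractable at scale.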
The Enabling Power of Graph Coloring Algorithms in Automatic Differentiation and Parallel Processing
Combinatorial scientific computing (CSC) is founded
on the recognition of the enabling power of combinatorial algorithms
in scientific and engineering computation and in high-performance computing.
The domain of CSC extends beyond traditional scientific computing---the
three major branches of which are numerical linear algebra,
numerical solution of differential equations, and
numerical optimization---to include a range of emerging and
rapidly evolving computational and information science disciplines.
Orthogonally, CSC problems can also emanate from
infrastructural technologies for supporting high-performance computing.
Despite the apparent disparity in their origins,
CSC problems and scenarios are unified by the following common features:
(A) The overarching goal is often to make computation
efficient---by minimizing overall execution time, memory usage,
and/or storage space---or to facilitate knowledge discovery or analysis.
(B) Identifying the most accurate combinatorial abstractions that
help achieve this goal is usually a part of the challenge.
(C) The abstractions are often expressed, with advantage, as graph
or hypergraph problems.
(D) The identified combinatorial problems are typically NP-hard to
solve optimally. Thus, fast, often linear-time, approximation (or
heuristic) algorithms are the methods of choice.
(E) The combinatorial algorithms themselves often need to be
parallelized, to avoid their being bottlenecks within a larger
parallel computation.
(F) Implementing the algorithms and deploying them via software
toolkits is critical.
This talk attempts to illustrate the aforementioned features of CSC
through an example: we consider the enabling role graph coloring
models and their algorithms play in efficient computation of
sparse derivative matrices via automatic differentiation (AD).
The talk focuses on efforts being made on this topic within
the SciDAC Institute for Combinatorial Scientific Computing
and Petascale Simulations (CSCAPES).
Aiming at providing an overview rather than details, we discuss
the various coloring models used in sparse Jacobian and Hessian computation,
the serial and parallel algorithms developed in CSCAPES
for solving the coloring problems, and a
case study that demonstrates the efficacy of the coloring techniques
in the context of an optimization problem in a Simulated Moving Bed process.
Implementations of our serial algorithms for the coloring
and related problems in derivative computation are assembled
and made publicly available in a package called ColPack.
Implementations of our parallel coloring algorithms are
incorporated into and deployed via the load-balancing toolkit Zoltan.
ColPack has been interfaced with ADOL-C, an operator overloading-based
AD tool that has recently acquired improved capabilities for
automatic detection of sparsity patterns of Jacobians and Hessians
(sparsity pattern detection is the first step in derivative matrix
computation via coloring-based compression).
Further information on ColPack and Zoltan is available
at their respective websites, which can be accessed via
http://www.cscapes.or
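The core idea behind coloring-based compression can be shown concretely with a greedy structurally-orthogonal column partition in the spirit of Curtis, Powell and Reid: columns with no nonzero in a common row may share a color, so a single compressed AD evaluation recovers all of them at once. This is a minimal sketch, not ColPack's implementation; the tridiagonal sparsity pattern is an assumed example.

```python
def partition_columns(sparsity):
    """Greedy structurally-orthogonal column partition.  `sparsity` is a
    list of rows, each a set of column indices holding nonzeros.  Two
    columns get the same color only if they never share a row, so each
    color class can be evaluated in one compressed Jacobian-vector pass."""
    n_cols = max((c for row in sparsity for c in row), default=-1) + 1
    color = [None] * n_cols
    for j in range(n_cols):
        # Colors already taken by a column sharing some row with column j.
        forbidden = {color[k]
                     for row in sparsity if j in row
                     for k in row if color[k] is not None}
        c = 0
        while c in forbidden:   # smallest admissible color
            c += 1
        color[j] = c
    return color

# Tridiagonal 5x5 Jacobian: 3 colors, hence 3 AD passes instead of 5.
sparsity = [{0, 1}, {0, 1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4}]
colors = partition_columns(sparsity)
print(colors, max(colors) + 1)
```

For a banded Jacobian the number of colors stays bounded by the bandwidth regardless of the matrix dimension, which is exactly the savings that makes coloring-based compression pay off on large sparse problems.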
Reflection methods for user-friendly submodular optimization
Recently, it has become evident that submodularity naturally captures widely
occurring concepts in machine learning, signal processing and computer vision.
Consequently, there is need for efficient optimization procedures for
submodular functions, especially for minimization problems. While general
submodular minimization is challenging, we propose a new method that exploits
existing decomposability of submodular functions. In contrast to previous
approaches, our method is neither approximate, nor impractical, nor does it
need any cumbersome parameter tuning. Moreover, it is easy to implement and
parallelize. A key component of our method is a formulation of the discrete
submodular minimization problem as a continuous best approximation problem that
is solved through a sequence of reflections, and its solution can be easily
thresholded to obtain an optimal discrete solution. This method solves both the
continuous and discrete formulations of the problem, and therefore has
applications in learning, inference, and reconstruction. In our experiments, we
illustrate the benefits of our method on two image segmentation tasks.

Comment: Neural Information Processing Systems (NIPS), United States (2013).
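A small piece of the machinery behind such methods can be made concrete: the Lovász extension, the continuous function whose minimization and thresholding connect discrete submodular minimization to convex optimization, is evaluated by the greedy algorithm. The sketch below is illustrative only, using an assumed toy cut function on two nodes; it is not the paper's reflection method.

```python
def lovasz_extension(F, x):
    """Evaluate the Lovász extension of a set function F at x via the
    greedy algorithm: sort coordinates in decreasing order and accumulate
    the marginal gains of F along the induced chain of sets."""
    order = sorted(range(len(x)), key=lambda i: -x[i])
    value, prev = 0.0, frozenset()
    for i in order:
        cur = prev | {i}
        value += x[i] * (F(cur) - F(prev))
        prev = cur
    return value

# Hypothetical submodular function: the cut function of the single edge
# (0, 1), F(S) = 1 if S separates the endpoints, else 0.  Its Lovász
# extension is the total variation |x0 - x1|.
def cut(S):
    return 1 if len(S) == 1 else 0

val = lovasz_extension(cut, [0.7, 0.2])
print(val)
```

Thresholding a minimizer of the (suitably regularized) extension at any level recovers a minimizer of the discrete problem, which is the bridge the paper's best-approximation formulation exploits.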
Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems
Optimization methods are at the core of many problems in signal/image
processing, computer vision, and machine learning. For a long time, it has been
recognized that looking at the dual of an optimization problem may drastically
simplify its solution. Deriving efficient strategies which jointly bring into
play the primal and the dual problems is however a more recent idea which has
generated many important new contributions in the last years. These novel
developments are grounded on recent advances in convex analysis, discrete
optimization, parallel processing, and non-smooth optimization with emphasis on
sparsity issues. In this paper, we aim at presenting the principles of
primal-dual approaches, while giving an overview of numerical methods which
have been proposed in different contexts. We show the benefits which can be
drawn from primal-dual algorithms both for solving large-scale convex
optimization problems and discrete ones, and we provide various application
examples to illustrate their usefulness.
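As one widely used representative of this family, the sketch below runs a Chambolle-Pock style primal-dual iteration on a small 1D total-variation denoising problem, alternating a dual proximal step (projection onto an l-infinity ball), a primal proximal step, and over-relaxation. The signal and parameter values are assumptions for the example; the step sizes are chosen so that tau * sigma * ||K||^2 <= 1, the usual convergence condition.

```python
def tv_denoise_pd(b, lam, iters=2000, tau=0.25, sigma=0.25):
    """Primal-dual (Chambolle-Pock style) iteration for 1D TV denoising:
        min_x  0.5 * ||x - b||^2 + lam * sum_i |x[i+1] - x[i]|
    K is the forward-difference operator; the dual variable y lives on
    the edges between adjacent samples."""
    n = len(b)
    x, xbar = list(b), list(b)
    y = [0.0] * (n - 1)
    for _ in range(iters):
        # Dual step: gradient ascent + projection onto {|y_i| <= lam}.
        y = [max(-lam, min(lam, y[i] + sigma * (xbar[i + 1] - xbar[i])))
             for i in range(n - 1)]
        # K^T y, then the proximal step of 0.5 * ||x - b||^2.
        kty = [0.0] * n
        for i in range(n - 1):
            kty[i] -= y[i]
            kty[i + 1] += y[i]
        x_new = [(x[i] - tau * kty[i] + tau * b[i]) / (1 + tau)
                 for i in range(n)]
        # Over-relaxation of the primal variable.
        xbar = [2 * x_new[i] - x[i] for i in range(n)]
        x = x_new
    return x

noisy = [0.0, 0.1, -0.1, 1.0, 1.1, 0.9]   # a noisy step signal
denoised = tv_denoise_pd(noisy, lam=0.2)
print(denoised)
```

The output is approximately piecewise constant: the two plateaus of the step are flattened and pulled slightly toward each other, the characteristic behavior of the TV regularizer.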
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operations research, algorithm theory and computational complexity theory, and it sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it keeps growing because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time (“efficient”) algorithms, while most of them are NP-hard, i.e. not known to be solvable in polynomial time. In practice, this means that an exact solution cannot be guaranteed within reasonable time, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find, quickly (in reasonable run-times) and with high probability, provably “good” solutions (with low error relative to the true optimum). In the last 20 years, a new class of algorithms, commonly called metaheuristics, has emerged, which combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two significant forces of intensification and diversification, which mainly determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
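Simulated annealing is a classic example of the intensification/diversification trade-off described above: the temperature schedule starts by accepting many uphill moves (diversification) and ends as near-greedy descent (intensification). The sketch below applies it to a toy TSP instance; the instance and all parameter values are illustrative assumptions.

```python
import math
import random

def simulated_annealing(dist, temp=10.0, cooling=0.995, steps=20000, seed=1):
    """Simulated annealing for the TSP with 2-opt segment-reversal moves.
    High temperature accepts many worsening moves (diversification);
    as temp decays the search becomes greedy descent (intensification)."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))

    def length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    cur = length(tour)
    best, best_len = tour[:], cur
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt move
        cand_len = length(cand)
        # Metropolis criterion: always accept improvements; accept worse
        # tours with probability exp(-delta / temp).
        if cand_len < cur or rng.random() < math.exp((cur - cand_len) / temp):
            tour, cur = cand, cand_len
            if cur < best_len:
                best, best_len = tour[:], cur
        temp *= cooling
    return best, best_len

# Assumed instance: 6 points on the unit circle; the optimal tour is the
# hexagon, with total length 6 (each side has length 2*sin(pi/6) = 1).
pts = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6))
       for k in range(6)]
dist = [[math.hypot(px - qx, py - qy) for (qx, qy) in pts]
        for (px, py) in pts]
tour, tour_len = simulated_annealing(dist)
print(tour_len)
```

Hybrid metaheuristics of the kind the report discusses typically embed such a local-move kernel inside a higher-level framework (population management, restarts, or learning components).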
Parallel local search for solving Constraint Problems on the Cell Broadband Engine (Preliminary Results)
We explore the use of the Cell Broadband Engine (Cell/BE for short) for
combinatorial optimization applications: we present a parallel version of a
constraint-based local search algorithm that has been implemented on a
multiprocessor BladeCenter machine with twin Cell/BE processors (total of 16
SPUs per blade). This algorithm was chosen because it fits very well the
Cell/BE architecture and requires neither shared memory nor communication
between processors, while retaining a compact memory footprint. We study the
performance on several large optimization benchmarks and show that this
approach achieves mostly linear speedups, sometimes even super-linear ones.
This is possible because the parallel implementation may simultaneously explore
different parts of the search space and therefore converge faster towards the
best sub-space, and thus towards a solution. Besides the speedups, the
resulting times exhibit a much smaller variance, which benefits applications
where a timely reply is critical.
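The independent multi-walk scheme the abstract describes (parallel searches with no communication or shared state) can be sketched with a constraint-based local search such as min-conflicts for n-queens. Threads here merely stand in for the Cell/BE's SPUs; the code is an illustrative assumption, not the authors' implementation.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def min_conflicts_queens(n, seed, max_steps=10000):
    """Constraint-based local search (min-conflicts) for n-queens.
    board[c] is the row of the queen in column c."""
    rng = random.Random(seed)
    board = [rng.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        # Queens attack along rows and diagonals; columns are distinct by
        # construction.
        return sum(1 for c in range(n) if c != col and
                   (board[c] == row or abs(board[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        bad = [c for c in range(n) if conflicts(c, board[c]) > 0]
        if not bad:
            return board          # all constraints satisfied
        col = rng.choice(bad)
        # Move the offending queen to a row minimizing its conflicts,
        # breaking ties at random to escape plateaus.
        scores = [conflicts(col, r) for r in range(n)]
        best = min(scores)
        board[col] = rng.choice([r for r in range(n) if scores[r] == best])
    return None

# Independent multi-walk parallelism: several walks with different seeds,
# no shared memory and no communication; any successful walk yields a
# solution.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda s: min_conflicts_queens(20, s), range(4)))
solution = next((r for r in results if r is not None), None)
print(solution)
```

Because the walks never synchronize, wall-clock time is governed by the fastest walk rather than the average one, which is also why independent multi-walk schemes tend to shrink the variance of solving times.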