Combinatorial optimization and metaheuristics
Combinatorial optimization is today one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory, and it sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it is growing because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems admit polynomial-time (“efficient”) algorithms, but most are NP-hard, i.e. no polynomial-time algorithm for them is known. In practice, this means that finding an exact solution cannot be guaranteed within reasonable time, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find “quickly” (in reasonable run-times), with “high” probability, provably “good” solutions (with low error relative to the true optimum). In the last 20 years a new class of algorithms, commonly called metaheuristics, has emerged: these combine heuristics within high-level frameworks aimed at exploring the search space efficiently and effectively. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two key forces of intensification and diversification, which largely determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
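The interplay of intensification and diversification mentioned in this abstract can be illustrated with simulated annealing, one of the classic metaheuristics. The sketch below is not from the report itself; it is a minimal, generic Python illustration in which the temperature parameter explicitly trades diversification (accepting worse moves early) against intensification (near-greedy search late). All function and parameter names are hypothetical.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.95, steps=2000):
    """Minimize `cost` starting from x0.

    High temperature -> diversification: worse moves are often accepted,
    so the search can escape local optima.
    Low temperature  -> intensification: almost only improving moves are
    accepted, so the search concentrates around the best region found.
    """
    random.seed(0)  # fixed seed for reproducibility of this toy run
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        # Always accept improvements; accept a worse move with
        # probability exp(-(fy - fx) / t), which shrinks as t decreases.
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy instance: minimize f(x) = (x - 3)^2 over the integers via +/-1 moves.
best, fbest = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbor=lambda x: x + random.choice([-1, 1]),
    x0=50,
)
```

With a slower cooling schedule the search stays diversified for longer; with a faster one it behaves almost like local search, which is the trade-off the abstract refers to.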
Evolutionary Approaches to Optimization Problems in Chimera Topologies
Chimera graphs define the topology of one of the first commercially available quantum computers. A variety of optimization problems have been mapped to this topology to evaluate the behavior of quantum-enhanced optimization heuristics in relation to other optimizers; the ability to solve these problems efficiently by classical means makes them useful benchmarks for quantum machines. In this paper we investigate for the first time the use of Evolutionary Algorithms (EAs) on Ising spin glass instances defined on the Chimera topology. Three genetic algorithms (GAs) and three estimation of distribution algorithms (EDAs) are evaluated over hard instances of the Ising spin glass constructed from Sidon sets. We focus on determining whether information about the topology of the graph can be used to improve the results of EAs, and on identifying the characteristics of the Ising instances that influence the success rate of GAs and EDAs.
Comment: 8 pages, 5 figures, 3 tables
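To make the setup concrete: a GA for an Ising spin glass searches over spin assignments s_i in {-1, +1} to minimize the energy E = -sum_{(i,j)} J_ij * s_i * s_j. The sketch below is not one of the three GAs evaluated in the paper, and the toy instance is a simple ferromagnetic chain rather than a Chimera graph with Sidon-set couplings; it is only a minimal, generic illustration of the encoding, with all names hypothetical.

```python
import random

def ising_energy(spins, couplings):
    """Energy E = -sum over edges (i, j) of J_ij * s_i * s_j."""
    return -sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

def genetic_algorithm(couplings, n, pop_size=40, generations=200, p_mut=0.05):
    """Simple GA: truncation selection, one-point crossover, bit-flip mutation."""
    random.seed(1)  # fixed seed for reproducibility of this toy run
    pop = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: ising_energy(s, couplings))
        survivors = pop[: pop_size // 2]  # keep the lower-energy half (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            child = [-s if random.random() < p_mut else s for s in child]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda s: ising_energy(s, couplings))
    return best, ising_energy(best, couplings)

# Toy instance: ferromagnetic chain, J = +1 on consecutive spins.
# Ground states are all spins up or all spins down, with energy -(n - 1).
n = 12
couplings = {(i, i + 1): 1.0 for i in range(n - 1)}
best, e = genetic_algorithm(couplings, n)
```

The paper's question of whether topology information helps would correspond here to making the crossover and mutation operators aware of the coupling graph, rather than treating the spin vector as an unstructured string.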
Learning Bayesian Networks with the bnlearn R Package
bnlearn is an R package (R Development Core Team 2010) which includes several algorithms for learning the structure of Bayesian networks with either discrete or continuous variables. Both constraint-based and score-based algorithms are implemented, and they can use the functionality provided by the snow package (Tierney et al. 2008) to improve performance via parallel computing. Several network scores and conditional independence tests are available, both for use by the learning algorithms and for independent use. Advanced plotting options are provided by the Rgraphviz package (Gentry et al. 2010).