Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory, and it sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in the field keeps growing because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems admit polynomial-time (“efficient”) algorithms, while most of them are NP-hard, i.e. no polynomial-time algorithm is known for them and none is believed to exist. In practice, this means that finding an exact solution cannot be guaranteed within reasonable time, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find “quickly” (in reasonable run-times), with “high” probability, provably “good” solutions (with low error relative to the true optimum). In the last 20 years, a new class of algorithms commonly called metaheuristics has emerged, which combine heuristics in higher-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two most significant forces, intensification and diversification, which largely determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
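The interplay of intensification and diversification mentioned in this abstract can be illustrated with simulated annealing, one of the classic metaheuristics: a temperature parameter starts high, so worse moves are often accepted (diversification), and is gradually lowered, so only improving moves survive (intensification). The sketch below is a generic illustration on a toy bit-string problem, not an implementation from the report; the function names, parameter values, and the toy objective are all illustrative choices.

```python
import math
import random

def simulated_annealing(cost, neighbor, start, t0=10.0, cooling=0.95,
                        iters=2000, seed=0):
    """Minimal metaheuristic loop: a high temperature accepts worse moves
    (diversification); as it cools, only improvements survive
    (intensification)."""
    rng = random.Random(seed)
    current = best = start
    t = t0
    for _ in range(iters):
        cand = neighbor(current, rng)
        delta = cost(cand) - cost(current)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature drops.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling
    return best

# Toy combinatorial problem: minimize the number of 1-bits (optimum is 0).
def cost(x):
    return sum(x)

def neighbor(x, rng):
    y = list(x)
    y[rng.randrange(len(y))] ^= 1  # flip one random bit
    return y

solution = simulated_annealing(cost, neighbor, [1] * 20)
```

The cooling schedule (`t0`, `cooling`) is exactly the knob the abstract alludes to: a slower schedule keeps the search diversified for longer, while a faster one intensifies early.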
Randomized heuristics for the Capacitated Clustering Problem
In this paper, we investigate the adaptation of the Greedy Randomized Adaptive Search Procedure (GRASP) and Iterated Greedy methodologies to the Capacitated Clustering Problem (CCP). In particular, we focus on the effect of the balance between randomization and greediness on the performance of these multi-start heuristic search methods when solving this NP-hard problem. The former is a memory-less approach that constructs independent solutions, while the latter is a memory-based method that constructs linked solutions, obtained by partially rebuilding previous ones. Both combine greediness and randomization in the constructive process and are coupled with a subsequent local search phase. We propose these two multi-start methods and their hybridization and compare their performance on the CCP. Additionally, we propose a heuristic based on the mathematical programming formulation of this problem, a so-called matheuristic. We also implement a classical randomized method based on simulated annealing to complete the picture of randomized heuristics. Our extensive experimentation reveals that Iterated Greedy performs better than GRASP on this problem, and that improved outcomes are obtained when both methods are hybridized and coupled with the matheuristic. In fact, the hybridization is able to outperform the best approaches previously published for the CCP. This study shows that memory-based construction is an effective mechanism within multi-start heuristic search techniques.
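The balance between greediness and randomization that this abstract studies is typically implemented in GRASP through a restricted candidate list (RCL): at each constructive step, the elements scoring close to the greedy best are pooled and one is chosen at random. The sketch below shows that generic constructive step on an abstract element-selection problem; it is not the paper's CCP-specific construction, and `greedy_value` is a placeholder for whatever problem-specific evaluation function one plugs in.

```python
import random

def grasp_construct(candidates, greedy_value, alpha, rng):
    """One GRASP construction pass: at each step, build a restricted
    candidate list (RCL) of near-greedy candidates and pick one at random.
    alpha = 0 is pure greedy; alpha = 1 is pure random."""
    solution, remaining = [], list(candidates)
    while remaining:
        values = [greedy_value(c, solution) for c in remaining]
        best, worst = max(values), min(values)
        threshold = best - alpha * (best - worst)
        rcl = [c for c, v in zip(remaining, values) if v >= threshold]
        choice = rng.choice(rcl)
        solution.append(choice)
        remaining.remove(choice)
    return solution

# With alpha = 0 the construction degenerates to plain greedy ordering.
greedy_order = grasp_construct([3, 1, 2], lambda c, sol: c, alpha=0.0,
                               rng=random.Random(0))
```

A full GRASP would repeat this construction from many starts with some alpha > 0, improve each solution with local search, and keep the best; Iterated Greedy instead destroys and rebuilds part of the previous solution, which is the memory-based linkage the abstract contrasts it by.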
Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints
We investigate two new optimization problems: minimizing a submodular function subject to a submodular lower bound constraint (submodular cover) and maximizing a submodular function subject to a submodular upper bound constraint (submodular knapsack). We are motivated by a number of real-world applications in machine learning, including sensor placement and data subset selection, which require maximizing a certain submodular function (like coverage or diversity) while simultaneously minimizing another (like cooperative cost). These problems are often posed as minimizing the difference between submodular functions [14, 35], which is in the worst case inapproximable. We show, however, that by phrasing these problems as constrained optimization, which is more natural for many applications, we achieve a number of bounded approximation guarantees. We also show that both these problems are closely related, and an approximation algorithm solving one can be used to obtain an approximation guarantee for the other. We provide hardness results for both problems, thus showing that our approximation factors are tight up to log-factors. Finally, we empirically demonstrate the performance and good scalability properties of our algorithms.
Comment: 23 pages. A short version of this appeared in Advances of NIPS-201
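For readers unfamiliar with submodular maximization, the standard reference point is the textbook greedy algorithm for a monotone submodular function under a simple cardinality constraint, which repeatedly adds the element with the largest marginal gain (achieving the classical 1 - 1/e guarantee). The paper's cover and knapsack constraints are more general than this, so the sketch below is only a baseline illustration of the coverage-style objective the abstract mentions, on a toy instance.

```python
def greedy_coverage(candidate_sets, k):
    """Textbook greedy for monotone submodular maximization under a
    cardinality constraint: repeatedly add the set with the largest
    marginal coverage gain."""
    covered, chosen = set(), []
    for _ in range(k):
        gains = {i: len(s - covered)
                 for i, s in enumerate(candidate_sets) if i not in chosen}
        i = max(gains, key=gains.get)
        if gains[i] == 0:
            break  # no remaining set adds new elements
        chosen.append(i)
        covered |= candidate_sets[i]
    return chosen, covered

# Toy instance: pick k = 2 sets covering as many elements as possible.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {7, 8}]
chosen, covered = greedy_coverage(sets, 2)
```

Coverage is submodular because each newly covered element can only shrink the marginal gain of every later set, which is exactly the diminishing-returns property the greedy rule exploits.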
An efficient genetic algorithm for large-scale planning of robust industrial wireless networks
An industrial indoor environment is harsh for wireless communications
compared to an office environment, because the prevalent metal easily causes
shadowing effects and affects the availability of an industrial wireless local
area network (IWLAN). On the one hand, it is costly, time-consuming, and
ineffective to perform trial-and-error manual deployment of wireless nodes. On
the other hand, the existing wireless planning tools only focus on office
environments such that it is hard to plan IWLANs due to the larger problem size
and the deployed IWLANs are vulnerable to prevalent shadowing effects in harsh
industrial indoor environments. To fill this gap, this paper proposes an
overdimensioning model and a genetic algorithm based over-dimensioning (GAOD)
algorithm for deploying large-scale robust IWLANs. As a progress beyond the
state-of-the-art wireless planning, two full coverage layers are created. The
second coverage layer serves as redundancy in case of shadowing. Meanwhile, the
deployment cost is reduced by minimizing the number of access points (APs); the
hard constraint of minimal inter-AP spatial paration avoids multiple APs
covering the same area to be simultaneously shadowed by the same obstacle. The
computation time and occupied memory are dedicatedly considered in the design
of GAOD for large-scale optimization. A greedy heuristic based
over-dimensioning (GHOD) algorithm and a random OD algorithm are taken as
benchmarks. In two vehicle manufacturers with a small and large indoor
environment, GAOD outperformed GHOD with up to 20% less APs, while GHOD
outputted up to 25% less APs than a random OD algorithm. Furthermore, the
effectiveness of this model and GAOD was experimentally validated with a real
deployment system
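The abstract does not spell out GAOD's encoding or operators, but genetic algorithms of this kind share a common skeleton: a population of candidate deployments, fitness-based selection, crossover, and mutation. The sketch below shows that generic loop with a toy bit-counting objective standing in for the real fitness; the paper's GAOD would instead encode AP placements and score coverage redundancy against AP count, details not shown here.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=30, generations=60,
                      p_mut=0.05, seed=1):
    """Generic GA loop: truncation selection, one-point crossover,
    bit-flip mutation, and elitism (the two best survive unchanged)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]  # elitism: keep the two fittest
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)   # select from the top 10
            cut = rng.randrange(1, n_bits)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutate
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1-bits, a stand-in for a
# coverage-style fitness function.
best = genetic_algorithm(lambda chrom: sum(chrom), n_bits=16)
```

The population-based search is what distinguishes the GA from the greedy (GHOD) and random benchmarks in the abstract: crossover recombines partial deployments from different parents, which matters when the search space is large.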