On Approximating Multi-Criteria TSP
We present approximation algorithms for almost all variants of the
multi-criteria traveling salesman problem (TSP).
First, we devise randomized approximation algorithms for multi-criteria
maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP,
where the edge weights have to be symmetric, we devise an algorithm with an
approximation ratio of 2/3 - eps. For multi-criteria Max-ATSP, where the edge
weights may be asymmetric, we present an algorithm with a ratio of 1/2 - eps.
Our algorithms work for any fixed number k of objectives. Furthermore, we
present a deterministic algorithm for bi-criteria Max-STSP that achieves an
approximation ratio of 7/27.
Finally, we present a randomized approximation algorithm for the asymmetric
multi-criteria minimum TSP with triangle inequality Min-ATSP. This algorithm
achieves a ratio of log n + eps.
Comment: Preliminary version at STACS 2009. This paper is a revised full version, where some proofs are simplified.
Deterministic algorithms for multi-criteria TSP
We present deterministic approximation algorithms for the multi-criteria traveling salesman problem (TSP). Our algorithms are faster and simpler than the existing randomized algorithms.
First, we devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of 1/(2k) − ε and 1/(4k − 2) − ε, respectively, where k is the number of objective functions. For two objective functions, we obtain ratios of 3/8 − ε and 1/4 − ε for the symmetric and asymmetric TSP, respectively. Our algorithms are self-contained and do not use existing approximation schemes as black boxes.
Second, we adapt the generic cycle cover algorithm for Min-TSP. It achieves a ratio of 3/2 + ε for multi-criteria Min-ATSP with distances 1 and 2, and further ratios (depending on γ) for Min-ATSP with γ-triangle inequality and for Min-STSP with γ-triangle inequality.
Approximation Algorithms for Multi-Criteria Traveling Salesman Problems
In multi-criteria optimization problems, several objective functions have to
be optimized. Since the different objective functions are usually in conflict
with each other, one cannot consider only one particular solution as the
optimal solution. Instead, the aim is to compute a so-called Pareto curve of
solutions. Since Pareto curves cannot be computed efficiently in general, we
have to be content with approximations to them.
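The notion of a Pareto curve can be made concrete with a small sketch: for two objectives to be minimized, the Pareto curve of a finite set of candidate solutions consists of exactly the non-dominated points. This is an illustration only, not code from the paper; the function name `pareto_front` and the minimization convention are assumptions.

```python
def pareto_front(points):
    """Return the non-dominated points of a finite set of 2-objective
    solutions, assuming both objectives are minimized.

    Sorting by the first objective (ties broken by the second) means a
    point is Pareto-optimal iff its second objective strictly improves
    on everything seen so far.
    """
    front = []
    best_second = float("inf")
    for a, b in sorted(points):
        if b < best_second:
            front.append((a, b))
            best_second = b
    return front

# Example: (3, 4) is dominated by (2, 3), so it is filtered out.
# pareto_front([(2, 3), (1, 5), (4, 1), (3, 4)]) -> [(1, 5), (2, 3), (4, 1)]
```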
We design a deterministic polynomial-time algorithm for multi-criteria
g-metric STSP that computes (min{1 +g, 2g^2/(2g^2 -2g +1)} + eps)-approximate
Pareto curves for all 1/2<=g<=1. In particular, we obtain a
(2+eps)-approximation for multi-criteria metric STSP. We also present two
randomized approximation algorithms for multi-criteria g-metric STSP that
achieve approximation ratios of (2g^3 +2g^2)/(3g^2 -2g +1) + eps and (1 +g)/(1
+3g -4g^2) + eps, respectively.
Moreover, we present randomized approximation algorithms for multi-criteria
g-metric ATSP (ratio 1/2 + g^3/(1 - 3g^2) + eps for g < 1/sqrt(3)), STSP with
weights 1 and 2 (ratio 4/3) and ATSP with weights 1 and 2 (ratio 3/2). To do
this, we design randomized approximation schemes for multi-criteria cycle cover
and graph factor problems.
Comment: To appear in Algorithmica. A preliminary version has been presented at the 4th Workshop on Approximation and Online Algorithms (WAOA 2006).
Multi-rendezvous Spacecraft Trajectory Optimization with Beam P-ACO
The design of spacecraft trajectories for missions visiting multiple
celestial bodies is here framed as a multi-objective bilevel optimization
problem. A comparative study is performed to assess the performance of
different Beam Search algorithms at tackling the combinatorial problem of
finding the ideal sequence of bodies. Special focus is placed on the
development of a new hybridization between Beam Search and the Population-based
Ant Colony Optimization algorithm. An experimental evaluation shows all
algorithms achieving exceptional performance on a hard benchmark problem. It is
found that a properly tuned deterministic Beam Search always outperforms the
remaining variants. Beam P-ACO, however, demonstrates lower parameter
sensitivity, while offering superior worst-case performance. Being an anytime
algorithm, it is then found to be the preferable choice for certain practical
applications.
Comment: Code available at https://github.com/lfsimoes/beam_paco__gtoc
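The deterministic Beam Search component mentioned in the abstract can be sketched generically: keep only the `width` most promising partial sequences at each depth. This is a minimal illustration under my own assumptions, not the authors' implementation; the toy expansion and scoring (digit sequences maximizing their sum) are hypothetical stand-ins for the body-sequence problem.

```python
def beam_search(start, expand, score, width):
    """Generic deterministic beam search.

    At each depth, expand every sequence in the beam and keep only the
    `width` highest-scoring candidates. Stops when nothing expands
    further and returns the best complete sequence found.
    """
    beam = [start]
    while True:
        candidates = [s for seq in beam for s in expand(seq)]
        if not candidates:
            return max(beam, key=score)
        beam = sorted(candidates, key=score, reverse=True)[:width]

# Hypothetical toy problem: build digit sequences of length 3 whose
# element sum is maximal; with width 2 the search keeps (2,) and (1,)
# after the first level, and eventually returns (2, 2, 2).
def expand(seq):
    return [seq + (d,) for d in (0, 1, 2)] if len(seq) < 3 else []

best = beam_search((), expand, sum, width=2)
```

A real instance would replace `expand` with the generation of feasible next celestial bodies and `score` with a trajectory-cost heuristic.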
Balanced Combinations of Solutions in Multi-Objective Optimization
For every list of integers x_1, ..., x_m there is some j such that x_1 + ...
+ x_j - x_{j+1} - ... - x_m ≈ 0. So the list can be nearly balanced, and
for this we only need one alternation between addition and subtraction. But
what if the x_i are k-dimensional integer vectors? Using results from
topological degree theory we show that balancing is still possible, now with k
alternations.
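The one-dimensional (k = 1) claim can be checked directly: scanning all split points j and tracking the signed difference f(j) = (x_1 + ... + x_j) − (x_{j+1} + ... + x_m) finds a split within max|x_i| of zero, since f(0) = −f(m) and consecutive values differ by 2·x_{j+1}. A minimal sketch (the name `best_split` is mine, not from the paper):

```python
def best_split(xs):
    """Return (j, |f(j)|) for the split point j in 0..m where
    f(j) = (x_1+...+x_j) - (x_{j+1}+...+x_m) = 2*prefix(j) - total
    is closest to zero. The best |f(j)| is at most max|x_i|."""
    total = sum(xs)
    prefix = 0
    best_j, best_val = 0, abs(total)  # j = 0: f(0) = -total
    for j, x in enumerate(xs, start=1):
        prefix += x
        val = abs(2 * prefix - total)
        if val < best_val:
            best_j, best_val = j, val
    return best_j, best_val

# Example: for [3, 1, 4, 1, 5], splitting after j = 3 gives
# (3 + 1 + 4) - (1 + 5) = 2, within max|x_i| = 5 of zero.
```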
This result is useful in multi-objective optimization, as it allows a
polynomial-time computable balance of two alternatives with conflicting costs.
The application to two multi-objective optimization problems yields the
following results:
- A randomized 1/2-approximation for multi-objective maximum asymmetric
traveling salesman, which improves and simplifies the best known approximation
for this problem.
- A deterministic 1/2-approximation for multi-objective maximum weighted
satisfiability.
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory, and it sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in the area continues to grow because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems admit polynomial-time ("efficient") algorithms, while most of them are NP-hard, i.e. no polynomial-time algorithm for them is known. In practice this means that one cannot guarantee finding an exact solution in reasonable time and has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find "quickly" (in reasonable run-times), with "high" probability, provably "good" solutions (with low error relative to the true optimum). In the last 20 years, a new class of algorithms commonly called metaheuristics has emerged, which combine heuristics in high-level frameworks aimed at exploring the search space efficiently and effectively. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two key forces of intensification and diversification, which largely determine the behavior of a metaheuristic, are pointed out. The report concludes by exploring the importance of hybridization and integration methods.
A statistical learning based approach for parameter fine-tuning of metaheuristics
Metaheuristics are approximation methods used to solve combinatorial optimization problems. Their performance usually depends on a set of parameters that need to be adjusted. Selecting appropriate parameter values is itself costly, as it requires time as well as advanced analytical and problem-specific skills. This paper provides an overview of the principal approaches to tackle the Parameter Setting Problem, focusing on the statistical procedures employed so far by the scientific community. In addition, a novel methodology is proposed, which is tested using an existing algorithm for solving the Multi-Depot Vehicle Routing Problem.