On relaxations of the max-cut problem formulations
A tight continuous relaxation is a crucial factor in solving mixed integer
formulations of many NP-hard combinatorial optimization problems. The
(weighted) max-cut problem is a fundamental combinatorial optimization
problem with multiple notorious mixed integer optimization formulations. In
this paper, we explore four existing mixed integer optimization formulations of
the max-cut problem. Specifically, we show that the continuous relaxation
of a binary quadratic optimization formulation of the problem is: (i) stronger
than the continuous relaxation of two mixed integer linear optimization
formulations and (ii) at least as strong as the continuous relaxation of a
mixed integer semidefinite optimization formulation. We also conduct a set of
experiments on multiple sets of instances of the max-cut problem using
state-of-the-art solvers that empirically confirm the theoretical results in
item (i). Furthermore, these numerical results illustrate the advances in the
efficiency of global non-convex quadratic optimization solvers and more general
mixed integer nonlinear optimization solvers. As a result, these solvers
provide a promising option to solve combinatorial optimization problems. Our
codes and data are available on GitHub.
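The binary quadratic view of max-cut mentioned in this abstract can be made concrete on a toy instance. The sketch below (the instance and all function names are ours for illustration, not taken from the paper or its GitHub code) evaluates the standard objective sum of w_ij*(x_i + x_j - 2*x_i*x_j) over binary variables and brute-forces a tiny graph:

```python
from itertools import product

def cut_value(weights, x):
    """Binary quadratic max-cut objective: edge (i, j) contributes
    w_ij * (x_i + x_j - 2*x_i*x_j), which equals w_ij exactly when x_i != x_j."""
    return sum(w * (x[i] + x[j] - 2 * x[i] * x[j])
               for (i, j), w in weights.items())

def max_cut_brute_force(n, weights):
    """Enumerate all 2^n binary assignments (tiny instances only)."""
    return max(cut_value(weights, x) for x in product((0, 1), repeat=n))

# Unit-weight triangle: any bipartition cuts at most 2 of the 3 edges.
triangle = {(0, 1): 1, (1, 2): 1, (0, 2): 1}
print(max_cut_brute_force(3, triangle))  # -> 2
```

Continuous relaxations of this formulation replace x in {0, 1}^n by x in [0, 1]^n; the paper's results compare the strength of that relaxation against linear and semidefinite alternatives.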
Minimizing energy below the glass thresholds
Focusing on the optimization version of the random K-satisfiability problem,
the MAX-K-SAT problem, we study the performance of the finite energy version of
the Survey Propagation (SP) algorithm. We show that a simple (linear time)
backtrack decimation strategy is sufficient to reach configurations well below
the lower bound for the dynamic threshold energy and very close to the analytic
prediction for the optimal ground states. A comparative numerical study on one
of the most efficient local search procedures is also given.
Comment: 12 pages, submitted to Phys. Rev. E, accepted for publication
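To make the notion of "energy" concrete (the number of violated clauses, which the decimation strategy drives down), here is a toy greedy flip search. This is NOT the Survey Propagation algorithm studied in the paper, and all names are ours; it only illustrates the objective being minimized:

```python
import random

def energy(clauses, assign):
    """Energy = number of unsatisfied clauses. A literal l > 0 means x_|l| is
    True, l < 0 means x_|l| is False; a clause is violated iff every literal is."""
    return sum(all(assign[abs(l)] != (l > 0) for l in clause)
               for clause in clauses)

def greedy_maxsat(clauses, num_vars, steps=1000, seed=0):
    """Toy greedy flip search: flip a random variable and keep the flip
    iff the energy does not rise."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}
    best = energy(clauses, assign)
    for _ in range(steps):
        v = rng.randint(1, num_vars)
        assign[v] = not assign[v]
        e = energy(clauses, assign)
        if e <= best:
            best = e
        else:
            assign[v] = not assign[v]   # reject the uphill move
    return best

clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
print(greedy_maxsat(clauses, 3))  # -> 0 (the formula is satisfiable)
```

On hard random instances near the satisfiability threshold, plain local search of this kind stalls at high energies; the paper's point is that SP with backtrack decimation reaches configurations well below that.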
Hardness Amplification of Optimization Problems
In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products.
We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows:
If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships on α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'.
As a consequence of the above theorem, we show hardness amplification of problems in various classes such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, Matrix Multiplication, and even problems in TFNP such as Factoring and computing Nash equilibrium.
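Direct product feasibility is easy to see for Max-SAT: k instances can be aggregated by renaming their variables into disjoint blocks, and an optimal solution of the union restricts to an optimal solution of every block, since the blocks share no variables. A minimal sketch (function names and encoding are ours, for illustration only):

```python
def aggregate(instances):
    """Aggregate k Max-SAT instances into one by shifting each instance's
    variables into a disjoint block. Each instance is a pair
    (num_vars, clauses) with DIMACS-style signed integer literals."""
    big, offsets, off = [], [], 0
    for num_vars, clauses in instances:
        offsets.append(off)
        big += [tuple(l + off if l > 0 else l - off for l in c) for c in clauses]
        off += num_vars
    return off, big, offsets

def split_solution(assign, instances, offsets):
    """Restrict a solution of the aggregated instance back to each block;
    an optimal solution of the whole restricts to an optimum of every block,
    because the blocks are variable-disjoint."""
    return [{v: assign[v + off] for v in range(1, num_vars + 1)}
            for (num_vars, _), off in zip(instances, offsets)]

two = [(2, [(1, 2)]), (1, [(-1,)])]  # (x1 or x2)  and, separately, (not y1)
n, big, offsets = aggregate(two)
print(n, big)                        # -> 3 [(1, 2), (-3,)]
print(split_solution({1: True, 2: False, 3: False}, two, offsets))
# -> [{1: True, 2: False}, {1: False}]
```

The theorem's content lies in the distributional statement, not in the aggregation itself; this sketch only shows why the aggregate-and-split step is efficient.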
Variants of Shortest Path Problems
The shortest path problem in which the (s, t)-paths P of a given digraph G = (V, E) are compared with respect to the sum of their edge costs is one of the best-known problems in combinatorial optimization. The paper is concerned with a number of variations of this problem having different objective functions like bottleneck, balanced, minimum deviation, algebraic sum, k-sum and k-max objectives, (k1, k2)-max, (k1, k2)-balanced and several types of trimmed-mean objectives. We give a survey on existing algorithms and propose a general model for those problems not yet treated in the literature. The latter is based on the solution of resource-constrained shortest path problems with equality constraints, which can be solved in pseudo-polynomial time if the given graph is acyclic and the number of resources is fixed. In our setting, however, these problems can be solved in strongly polynomial time. Combining this with known results on k-sum and k-max optimization for general combinatorial problems, we obtain strongly polynomial algorithms for a variety of path problems on acyclic and general digraphs.
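One of the variant objectives listed, the bottleneck (min-max) objective, illustrates how little the classical machinery needs to change: Dijkstra's algorithm carries over once edge-cost addition is replaced by max. A sketch of that single variant (not the paper's general resource-constrained model; names are ours):

```python
import heapq

def bottleneck_path(adj, s, t):
    """Bottleneck objective: minimize the largest edge cost on an (s, t)-path.
    Dijkstra's label-setting argument still applies because max is monotone
    along a path, just as + is for non-negative costs."""
    best = {s: 0}                 # 0 = identity of max for non-negative costs
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > best.get(u, float('inf')):
            continue              # stale heap entry
        for v, w in adj.get(u, ()):
            nd = max(d, w)
            if nd < best.get(v, float('inf')):
                best[v] = nd
                heapq.heappush(heap, (nd, v))
    return float('inf')

# Two (s, t)-paths: s-a-t has bottleneck max(5, 1) = 5, s-b-t has max(2, 4) = 4.
adj = {'s': [('a', 5), ('b', 2)], 'a': [('t', 1)], 'b': [('t', 4)]}
print(bottleneck_path(adj, 's', 't'))  # -> 4
```

Objectives like (k1, k2)-max or trimmed means do not decompose this simply, which is where the paper's equality-constrained resource model comes in.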
Sub-exponential Approximation Schemes for CSPs: From Dense to Almost Sparse
It has long been known, since the classical work of (Arora, Karger, Karpinski, JCSS'99), that MAX-CUT admits a PTAS on dense graphs, and more generally, MAX-k-CSP admits a PTAS on "dense" instances with Omega(n^k) constraints. In this paper we extend and generalize their exhaustive sampling approach, presenting a framework for (1-epsilon)-approximating any MAX-k-CSP problem in sub-exponential time while significantly relaxing the denseness requirement on the input instance.
Specifically, we prove that for any constants delta in (0, 1] and epsilon > 0, we can approximate MAX-k-CSP problems with Omega(n^{k-1+delta}) constraints within a factor of (1-epsilon) in time 2^{O(n^{1-delta}*ln(n) / epsilon^3)}. The framework is quite general and includes classical optimization problems, such as MAX-CUT, MAX-DICUT, MAX-k-SAT, and (with a slight extension) k-DENSEST SUBGRAPH, as special cases. For MAX-CUT in particular (where k=2), it gives an approximation scheme that runs in time sub-exponential in n even for "almost-sparse" instances (graphs with n^{1+delta} edges).
We prove that our results are essentially best possible, assuming the ETH. First, the density requirement cannot be relaxed further: there exists a constant r < 1 such that for all delta > 0, MAX-k-SAT instances with O(n^{k-1}) clauses cannot be approximated within a ratio better than r in time 2^{O(n^{1-delta})}. Second, the running time of our algorithm is almost tight for all densities. Even for MAX-CUT there exists r < 1 such that for all delta' > delta > 0, MAX-CUT instances with n^{1+delta} edges cannot be approximated within a ratio better than r in time 2^{n^{1-delta'}}.
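The exhaustive-sampling idea behind such approximation schemes can be caricatured as follows. This toy sketch (all parameters and names are ours and give none of the paper's (1-epsilon) guarantees) enumerates every 2-coloring of a small random vertex sample and extends each coloring greedily:

```python
import itertools
import random

def sampled_maxcut(edges, n, sample_size=6, seed=0):
    """Toy caricature of exhaustive sampling for MAX-CUT: try every
    2-coloring of a small random sample of vertices, extend each to the
    remaining vertices greedily, and keep the best cut found."""
    rng = random.Random(seed)
    sample = rng.sample(range(n), min(sample_size, n))
    nbrs = {v: [] for v in range(n)}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    best = 0
    for colors in itertools.product((0, 1), repeat=len(sample)):
        side = dict(zip(sample, colors))
        for v in range(n):
            if v not in side:
                # place v opposite the majority colour of already-placed neighbours
                ones = sum(side[u] for u in nbrs[v] if u in side)
                zeros = sum(1 for u in nbrs[v] if u in side) - ones
                side[v] = 0 if ones >= zeros else 1
        best = max(best, sum(side[u] != side[v] for u, v in edges))
    return best

# Complete graph K4: the optimal cut has value 4 (any 2-2 bipartition).
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(sampled_maxcut(k4, 4))  # -> 4
```

The paper's contribution is making a sample of size roughly n^{1-delta} suffice on instances with only Omega(n^{k-1+delta}) constraints, which yields the sub-exponential running time; on truly dense instances a logarithmic-size sample already works.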