9 research outputs found
A multiple search operator heuristic for the max-k-cut problem
The max-k-cut problem is to partition the vertices of an edge-weighted graph G=(V,E) into k ≥ 2 disjoint subsets such that the weight sum of the edges crossing the different subsets is maximized. The problem is referred to as the max-cut problem when k=2. In this work, we present a multiple search operator heuristic (MOH) for the general max-k-cut problem. MOH employs five distinct search operators organized into three search phases to effectively explore the search space. Experiments on two sets of 91 well-known benchmark instances show that the proposed algorithm is highly effective on the max-k-cut problem and improves the current best known results (lower bounds) of most of the tested instances for k ∈ [3,5]. For the popular special case k=2 (i.e., the max-cut problem), MOH also performs remarkably well by discovering 4 improved best known results. We provide additional studies to shed light on the key ingredients of the algorithm.
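The objective being maximized can be written down directly. As a minimal illustration (plain Python, brute force over a toy triangle; this is just the objective evaluation, not the MOH heuristic itself):

```python
import itertools

def cut_weight(edges, assignment):
    """Weight of edges whose endpoints fall in different parts.
    edges: (u, v, w) triples; assignment: vertex -> part index."""
    return sum(w for u, v, w in edges if assignment[u] != assignment[v])

# Brute force over all 2-partitions of a unit-weight triangle.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
best = max(cut_weight(edges, {0: p0, 1: p1, 2: p2})
           for p0, p1, p2 in itertools.product(range(2), repeat=3))
print(best)  # 2.0: at most two triangle edges can cross a 2-partition
```

Heuristics like MOH exist because this brute-force enumeration is exponential in |V|.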
Clustering Improves the Goemans-Williamson Approximation for the Max-Cut Problem
MAX-CUT is one of the well-studied NP-hard combinatorial optimization problems. It can be formulated as an Integer Quadratic Programming problem and admits a simple relaxation obtained by replacing the integer 'spin' variables x_i by unit vectors v_i. The Goemans-Williamson rounding algorithm assigns the solution vectors of the relaxed quadratic program to a corresponding integer spin depending on the sign of the scalar product v_i · r with a random vector r. Here, we investigate whether better graph cuts can be obtained by instead using a more sophisticated clustering algorithm. We answer this question affirmatively. Different initializations of k-means and k-medoids clustering produce better cuts for the graph instances of the most well-known benchmark for MAX-CUT. In particular, we found a strong correlation between cluster quality and cut weights during the evolution of the clustering algorithms. Finally, since in general the maximal cut weight of a graph is not known beforehand, we derive instance-specific lower bounds for the approximation ratio, which indicate how close a solution is to the global optimum for a particular instance. For the graphs in our benchmark, these instance-specific lower bounds significantly exceed the Goemans-Williamson guarantee.
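As a sketch of the rounding step being compared here (the "relaxed" vectors below are hand-picked for a 4-cycle rather than obtained from an actual SDP solve; the paper's variant would replace the random hyperplane with k-means or k-medoids on the rows of V):

```python
import numpy as np

def hyperplane_round(V, rng):
    """Goemans-Williamson rounding: each row of V is the relaxed unit vector
    of one vertex; a vertex's spin is the sign of its projection onto a
    random direction r."""
    r = rng.standard_normal(V.shape[1])
    return np.sign(V @ r)

def cut_value(W, x):
    """Cut weight for spins x in {-1, +1}^n under symmetric weights W."""
    return 0.25 * np.sum(W * (1.0 - np.outer(x, x)))

# Hand-picked relaxed solution for a 4-cycle: alternating vectors, standing
# in for an SDP solve (illustration only).
V = np.array([[1.0, 0.0], [-1.0, 0.0], [1.0, 0.0], [-1.0, 0.0]])
W = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    W[i, j] = W[j, i] = 1.0

x = hyperplane_round(V, np.random.default_rng(0))
print(cut_value(W, x))  # 4.0: every edge of the 4-cycle is cut
```

The clustering question the paper asks is whether grouping the rows of V (rather than slicing them with one random hyperplane) yields larger cut values on harder instances.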
A simple iterative algorithm for maxcut
We propose a simple iterative (SI) algorithm for the maxcut problem that fully exploits an equivalent continuous formulation. It needs no rounding at all, and it has the advantages that all subproblems have explicit analytic solutions, the cut values are monotonically updated, and the iteration points converge to a local optimum in finitely many steps via an appropriate subgradient selection. Numerical experiments on the G-set demonstrate the performance. In particular, the ratios between the best cut values achieved by SI and the best known ones are at least and can be further improved to at least by a preliminary attempt to break out of local optima. Comment: 30 pages, 1 figure. Subgradient selection, cost analysis and local breakout are added.
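The abstract does not give the SI iterations themselves. For intuition only, the two properties it claims (monotonically updated cut values, convergence to a local optimum in finitely many steps) are shared by the simplest possible scheme, a greedy 1-flip local search; this generic sketch is not the authors' algorithm:

```python
def local_search_maxcut(n, edges, x):
    """Greedy 1-flip local search for maxcut: move a vertex to the other side
    whenever that strictly increases the cut weight. Cut values increase
    monotonically and the loop stops at a 1-flip local optimum.
    (A generic scheme for illustration, not the SI algorithm of the paper.)"""
    adj = {i: [] for i in range(n)}
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    improved = True
    while improved:
        improved = False
        for i in range(n):
            # Gain of flipping i: same-side neighbors become cut, cut ones uncut.
            gain = sum(w if x[i] == x[j] else -w for j, w in adj[i])
            if gain > 0:
                x[i] = 1 - x[i]
                improved = True
    return x

edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
print(local_search_maxcut(4, edges, [0, 0, 0, 0]))  # [1, 0, 1, 0]: all 4 edges cut
```

The "local breakout" mentioned in the comment corresponds to perturbing such a converged point and restarting the descent.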
Order-of-magnitude differences in computational performance of analog Ising machines induced by the choice of nonlinearity
Ising machines based on nonlinear analog systems are a promising approach to accelerating the computation of NP-hard optimization problems. Yet their analog nature also causes amplitude inhomogeneity, which can deteriorate the ability to find optimal solutions. Here, we investigate how the system's nonlinear transfer function can mitigate amplitude inhomogeneity and improve computational performance. By simulating Ising machines with polynomial, periodic, sigmoid and clipped transfer functions and benchmarking them on MaxCut optimization problems, we find that the choice of transfer function has a significant influence on the calculation time and solution quality. For periodic, sigmoid and clipped transfer functions, we report order-of-magnitude improvements in the time-to-solution compared to conventional polynomial models, which we link to the suppression of amplitude inhomogeneity induced by saturation of the transfer function. This provides insight into the suitability of physical systems for building Ising machines and presents an efficient way of overcoming performance limitations.
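A schematic version of such a simulation (a toy two-spin model with hypothetical parameters, not the paper's exact equations) shows where the transfer function enters the dynamics:

```python
import numpy as np

def simulate_ising(J, transfer, steps=2000, dt=0.01, coupling=0.5, seed=0):
    """Toy analog Ising machine: amplitudes a_i evolve under a nonlinear
    transfer function plus Ising coupling J; the spins are sign(a_i).
    Schematic model with hypothetical parameters, not the paper's equations."""
    rng = np.random.default_rng(seed)
    a = 0.01 * rng.standard_normal(len(J))
    for _ in range(steps):
        a = a + dt * (transfer(a) + coupling * (J @ a))
    return np.sign(a)

# Two candidate nonlinearities: a polynomial (cubic) gain term, and a clipped
# one whose saturation bounds the amplitudes (suppressing inhomogeneity).
poly = lambda a: a - a**3
clipped = lambda a: np.clip(a, -1.0, 1.0) - a

# Antiferromagnetic coupling between two spins: the ground state anti-aligns them.
J = np.array([[0.0, -1.0], [-1.0, 0.0]])
spins_poly = simulate_ising(J, poly)
spins_clip = simulate_ising(J, clipped)
print(spins_poly, spins_clip)  # both anti-aligned: [ 1. -1.] [ 1. -1.]
```

Swapping `transfer` is all it takes to compare nonlinearities; on real benchmarks the paper measures how this choice changes time-to-solution by orders of magnitude.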
Efficient Enumeration of the Optimal Solutions to the Correlation Clustering problem
According to structural balance theory, a signed graph is considered structurally balanced when it can be partitioned into a number of modules such that positive and negative edges are respectively located inside and between the modules. In practice, though, real-world networks are rarely structurally balanced. In this case, one may want to measure the magnitude of their imbalance and to identify the set of edges causing it. The correlation clustering (CC) problem consists precisely in looking for the signed graph partition with the least imbalance. Recently, it has been shown that the space of optimal solutions of the CC problem can contain numerous and diverse optimal solutions. Yet this space is difficult to explore, as the CC problem is NP-hard and exact approaches do not scale well even when looking for a single optimal solution. To alleviate this issue, in this work we propose an efficient enumeration method that retrieves the complete space of optimal solutions of the CC problem. It combines an exhaustive enumeration strategy with neighborhoods of varying sizes to achieve computational effectiveness. Results obtained for medium-sized networks confirm the usefulness of our method.
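The imbalance objective described above can be sketched in a few lines (illustrative Python with brute-force enumeration over a toy graph, not the enumeration method of the paper):

```python
import itertools

def imbalance(edges, part):
    """Number of 'frustrated' edges under a partition of a signed graph:
    positive edges between modules plus negative edges inside a module.
    edges: (u, v, sign) with sign +1 or -1; part: vertex -> module id."""
    return sum(1 for u, v, s in edges
               if (s > 0) == (part[u] != part[v]))

# An unbalanced triangle: two positive edges and one negative edge.
edges = [(0, 1, +1), (1, 2, +1), (0, 2, -1)]
best = min(imbalance(edges, dict(enumerate(p)))
           for p in itertools.product(range(3), repeat=3))
print(best)  # 1: no partition removes all frustration
```

Note that several different partitions already attain the minimum on this tiny instance, which is exactly the multiplicity of optimal solutions the paper sets out to enumerate.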
On the Study of Fitness Landscapes and the Max-Cut Problem
The goal of this thesis is to study the complexity of NP-hard problems, using the Max-Cut and Max-k-Cut problems, and the study of fitness landscapes. The Max-Cut and Max-k-Cut problems are well-studied NP-hard problems, especially since the approximation algorithm of Goemans and Williamson (1995), which introduced the use of SDP to solve relaxed problems. In order to prove the existence of a performance guarantee, the rounding step from the SDP solution to a Max-Cut solution is simple and randomized. For the Max-k-Cut problem, several approximation algorithms exist, but many of them have been proved to be equivalent. As in Max-Cut, these approximation algorithms use a simple randomized rounding in order to obtain a performance guarantee.
Ignoring for now the performance guarantee, one could ask whether there is a rounding process that takes into account the structure of the relaxed solution, since it is the result of an optimization problem. In this thesis we answer this question positively by using clustering as a rounding method.
To compare the performance of both algorithms, a series of experiments was performed using the so-called G-set benchmark for the Max-Cut problem and the random graph benchmark of Goemans and Williamson (1995) for the Max-k-Cut problem.
With this new rounding, larger cut values are found for both the Max-Cut and Max-k-Cut problems, always above the value of the performance guarantee of the approximation algorithm. This suggests that taking the structure of the problem into account when designing algorithms can lead to better results, possibly at the cost of a worse performance guarantee. An example for the vertex k-center problem can be seen in Garcia-Diaz et al. (2017), where a 3-approximation algorithm performs better in practice than a 2-approximation algorithm despite having a worse performance guarantee.
Landscapes over discrete configuration spaces are an important model in evolutionary and structural biology, as well as many other areas of science, from the physics of disordered systems to operations research. A landscape is a function from a very large discrete set V, which carries an additional metric or at least topological structure, into the real numbers R. We will consider landscapes defined on the vertex set of undirected graphs. Thus let G=G(V,E) be an undirected graph and f an arbitrary real-valued function on V. We will refer to the triple (V,E,f) as a landscape over G.
We say two configurations x, y in V are neutral if f(x)=f(y). We colloquially refer to a landscape as 'neutral' if a substantial fraction of adjacent pairs of configurations are neutral. A flat landscape is one where f is constant. The opposite of flatness is ruggedness, which is quantified by the number of local optima or by means of pair correlation functions.
These two key features of a landscape, ruggedness and neutrality, appear to be two sides of the same coin. Ruggedness can be measured either by correlation properties, which are sensitive to monotonic transformations of the landscape, or by combinatorial properties such as the lengths of downhill paths and the number of local optima, which are invariant under monotonic transformations. The connection between the two views has remained largely unexplored and poorly understood. In this thesis, a survey of fitness landscapes is presented, together with first steps toward finding this connection and a relation between the covariance matrix of a random landscape model and its ruggedness.
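The definitions of neutrality and local optima can be made concrete on a tiny example (a sketch for binary-string landscapes under bit-flip adjacency; the setup and the example function are illustrative, not taken from the thesis):

```python
import itertools

def landscape_stats(n, f, neighbors):
    """For a landscape f on binary strings of length n, count local optima
    (configurations with no strictly better neighbor) and the fraction of
    neutral neighbor pairs (adjacent configurations with equal f)."""
    configs = list(itertools.product((0, 1), repeat=n))
    optima = sum(1 for x in configs
                 if all(f(x) >= f(y) for y in neighbors(x)))
    pairs = [(x, y) for x in configs for y in neighbors(x) if x < y]
    neutral = sum(1 for x, y in pairs if f(x) == f(y))
    return optima, neutral / len(pairs)

def flip_neighbors(x):
    # Hamming-distance-1 neighbors of a binary tuple
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

# f = number of ones: a single maximum and no neutrality, since flipping
# any bit always changes the value.
opt, neut = landscape_stats(3, sum, flip_neighbors)
print(opt, neut)  # 1 0.0
```

A rugged landscape would report many local optima, while a neutral one would report a large fraction of equal-valued neighbor pairs; the thesis asks how such combinatorial counts relate to correlation-based measures.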
Global Optimization of the Maximum K-Cut Problem
ABSTRACT: In graph theory, the maximum k-cut (max-k-cut) problem is a representative problem of the class of NP-hard combinatorial optimization problems. It arises in many industrial applications, and the objective of this problem is to partition the vertices of a given graph into at most k partitions such that the total weight of the cut is maximized. The methods proposed in the literature to optimally solve the max-k-cut usually employ the associated semidefinite programming (SDP) relaxation in a branch-and-bound framework. In comparison with the linear programming (LP) relaxation, the SDP relaxation is stronger, but it suffers from high CPU times. Therefore, methods based on SDP cannot solve large problems. This thesis introduces
an efficient branch-and-bound method to solve the max-k-cut problem by using tightened SDP and LP relaxations.
This thesis presents three approaches to improve the solutions of the problem. The first approach focuses on identifying relevant classes of inequalities to tighten the relaxations of the max-k-cut. This approach carries out an experimental study of four classes of inequalities from the literature: clique, general clique, wheel and bicycle wheel. In order to include these inequalities, we employ a cutting plane algorithm (CPA) to add only the most important inequalities in practice and we design several separation routines to find violations in a relaxed solution. Computational results suggest that the wheel inequalities are the strongest by far. Moreover, the inclusion of these
inequalities in the max-k-cut improves the bound of the SDP formulation by more than 2%. The second approach introduces the SDP-based constraints to strengthen the LP relaxation. Moreover, the CPA is improved by exploiting the early-termination technique of an interior-point method.
Computational results show that the LP relaxation with the SDP-based inequalities outperforms the SDP relaxation for many instances, especially for a large number of partitions (k ≥ 7). The third approach investigates the branch-and-bound method using both previous approaches. Four components of the branch-and-bound are considered. First, four heuristic methods are presented to find a feasible solution: the iterative clustering heuristic, the multiple operator heuristic, variable neighborhood search, and the greedy randomized adaptive search procedure. Second, the dichotomic and polytomic strategies for splitting a subproblem are analyzed. Third, five branching rules are studied. Finally, for node selection, we consider the following strategies: best-first search, depth-first search, and breadth-first search. For each component, we provide computational tests for different values of k. Computational results show that the proposed exact method is able to uncover many solutions. Each of these three approaches contributed to the design of an efficient method to solve the max-k-cut problem. Moreover, the proposed approaches can be extended to solve generic mixed-integer SDP problems.
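To illustrate what a separation routine inside such a cutting plane algorithm does, here is a sketch for triangle inequalities over a hypothetical "same-part" relaxation (X[i][j] approximates whether vertices i and j share a part; the thesis's actual inequality classes, clique, general clique, wheel and bicycle wheel, and its separation routines are more involved):

```python
import itertools

def violated_triangles(X, tol=1e-6):
    """Separation sketch: given a symmetric matrix X of fractional
    'same-part' values, report ordered triples (i, j, k) violating the
    triangle inequality X_ij + X_ik - X_jk <= 1 (if i is with j and with k,
    then j must be with k)."""
    n = len(X)
    cuts = []
    for i, j, k in itertools.permutations(range(n), 3):
        if X[i][j] + X[i][k] - X[j][k] > 1 + tol:
            cuts.append((i, j, k))
    return cuts

# A fractional point that violates one triangle inequality.
X = [[1.0, 0.8, 0.8],
     [0.8, 1.0, 0.2],
     [0.8, 0.2, 1.0]]
print(violated_triangles(X))  # includes (0, 1, 2): 0.8 + 0.8 - 0.2 = 1.4 > 1
```

A cutting plane algorithm alternates solving the relaxation and adding the inequalities such a routine reports, keeping the formulation small by adding only violated (hence "important") cuts.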