
    A multiple search operator heuristic for the max-k-cut problem

    The max-k-cut problem is to partition the vertices of an edge-weighted graph G=(V,E) into k≄2 disjoint subsets such that the weight sum of the edges crossing the different subsets is maximized. The problem is referred to as the max-cut problem when k=2. In this work, we present a multiple operator heuristic (MOH) for the general max-k-cut problem. MOH employs five distinct search operators organized into three search phases to effectively explore the search space. Experiments on two sets of 91 well-known benchmark instances show that the proposed algorithm is highly effective on the max-k-cut problem and improves the current best known results (lower bounds) for most of the tested instances with k∈[3,5]. For the popular special case k=2 (i.e., the max-cut problem), MOH also performs remarkably well, discovering 4 improved best known results. We provide additional studies to shed light on the key ingredients of the algorithm.
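    To make the objective concrete, the following is a minimal Python sketch (illustrative helper names, not the paper's MOH implementation) of the cut-weight objective and of the single-vertex relocation move that local-search operators for max-k-cut typically build on:

```python
def cut_weight(edges, part):
    """Weight sum of the edges whose endpoints lie in different subsets."""
    return sum(w for u, v, w in edges if part[u] != part[v])

def best_single_move(edges, part, k):
    """Best single-vertex relocation (the basic neighborhood underlying
    many max-k-cut local searches); returns (gain, vertex, target subset)."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    best = (0.0, None, None)
    for u in adj:
        into = [0.0] * k                 # edge weight from u into each subset
        for v, w in adj[u]:
            into[part[v]] += w
        for c in range(k):
            if c == part[u]:
                continue
            # moving u cuts its edges into part[u] and un-cuts those into c
            gain = into[part[u]] - into[c]
            if gain > best[0]:
                best = (gain, u, c)
    return best

# tiny example: 4 vertices, k = 2
edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.0), (2, 3, 1.5)]
part = {0: 0, 1: 0, 2: 1, 3: 1}
assert cut_weight(edges, part) == 4.0
print(best_single_move(edges, part, k=2))   # (1.5, 3, 0)
```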

    Global Optimization of the Maximum K-Cut Problem

    In graph theory, the maximum k-cut (max-k-cut) problem is a representative problem of the class of NP-hard combinatorial optimization problems. It arises in many industrial applications, and its objective is to partition the vertices of a given graph into at most k parts such that the total weight of the cut edges is maximized. The methods proposed in the literature to solve the max-k-cut optimally usually employ the associated semidefinite programming (SDP) relaxation in a branch-and-bound framework. In comparison with the linear programming (LP) relaxation, the SDP relaxation is stronger but suffers from high CPU times; therefore, methods based on SDP cannot solve large problems. This thesis introduces an efficient branch-and-bound method to solve the max-k-cut problem using tightened SDP and LP relaxations, and presents three approaches to improve the solutions of the problem. The first approach focuses on identifying the most relevant classes of inequalities for tightening the relaxations of the max-k-cut. It carries out an experimental study of four classes of inequalities from the literature: clique, general clique, wheel, and bicycle wheel. To include these inequalities in the formulations, we employ a cutting-plane algorithm (CPA) that adds only the most important inequalities in practice, and we design several separation routines to find violations in a relaxed solution. Computational results suggest that the wheel inequalities are the strongest by far; moreover, including them improves the bound of the SDP formulation by more than 2%. The second approach introduces SDP-based constraints to strengthen the LP relaxation. In addition, the CPA is improved by exploiting the early-termination technique of an interior-point method. Computational results show that the LP relaxation with the SDP-based inequalities outperforms the SDP relaxation for many instances, especially for a large number of partitions (k ≄ 7). The third approach investigates the branch-and-bound method using both previous approaches. Four components of the branch-and-bound are considered. First, four heuristic methods are presented to find a feasible solution: the iterative clustering heuristic, the multiple operator heuristic, variable neighborhood search, and the greedy randomized adaptive search procedure. Second, the dichotomic and polytomic strategies for splitting a subproblem are analyzed. Third, five branching rules are studied. Finally, for node selection, the following strategies are considered: best-first search, depth-first search, and breadth-first search. For each component, we provide computational tests for different values of k. Computational results show that the proposed exact method is able to uncover many solutions. Each of these three approaches contributed to the design of an efficient method to solve the max-k-cut problem. Moreover, the proposed approaches can be extended to solve generic mixed-integer SDP problems.
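    For context, the baseline SDP relaxation that such branch-and-bound methods tighten with cutting planes is, in its standard form due to Frieze and Jerrum, the following (a generic statement of the relaxation, not necessarily the thesis's exact formulation):

```latex
\begin{align*}
\max_{X}\quad & \frac{k-1}{k} \sum_{(i,j) \in E} w_{ij}\,\bigl(1 - X_{ij}\bigr) \\
\text{s.t.}\quad & X_{ii} = 1 \qquad \forall i \in V, \\
& X_{ij} \ge -\frac{1}{k-1} \qquad \forall i \ne j, \\
& X \succeq 0.
\end{align*}
```

    A cutting-plane algorithm then repeatedly solves this relaxation, searches the relaxed solution for violated clique, general clique, wheel, or bicycle wheel inequalities, and adds the most important ones before resolving.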

    qTorch: The Quantum Tensor Contraction Handler

    Classical simulation of quantum computation is necessary for studying the numerical behavior of quantum algorithms, as there does not yet exist a large viable quantum computer on which to perform numerical tests. Tensor network (TN) contraction is an algorithmic method that can efficiently simulate some quantum circuits, often greatly reducing the computational cost over methods that simulate the full Hilbert space. In this study we implement a tensor network contraction program for simulating quantum circuits using multi-core compute nodes. We show simulation results for the Max-Cut problem on 3- through 7-regular graphs using the quantum approximate optimization algorithm (QAOA), successfully simulating up to 100 qubits. We test two different methods for generating the ordering of tensor index contractions: one is based on the tree decomposition of the line graph, while the other generates the ordering using a straightforward stochastic scheme. Through studying instances of QAOA circuits, we show the expected result that as the treewidth of the quantum circuit's line graph decreases, TN contraction becomes significantly more efficient than simulating the whole Hilbert space. The results in this work suggest that tensor contraction methods are superior only when simulating Max-Cut/QAOA on regular graphs of degree approximately five and below. Insight into this point of equal computational cost helps one determine which simulation method will be more efficient for a given quantum circuit. The stochastic contraction method outperforms the line-graph-based method only when the time to calculate a reasonable tree decomposition is prohibitively expensive. Finally, we release our software package, qTorch (Quantum TensOR Contraction Handler), intended for general quantum circuit simulation.
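    The mechanism at the core of TN simulation is pairwise contraction in a chosen order, and the cost of the whole contraction is driven by that order. The following Python sketch using NumPy's einsum (a toy network, not qTorch's engine) shows where an ordering produced by tree decomposition or by a stochastic scheme would plug in:

```python
import numpy as np

def contract_pair(a, a_ix, b, b_ix):
    """Contract two tensors over their shared index labels via einsum."""
    shared = [i for i in a_ix if i in b_ix]
    out_ix = [i for i in a_ix if i not in shared] + \
             [i for i in b_ix if i not in shared]
    expr = "{},{}->{}".format("".join(a_ix), "".join(b_ix), "".join(out_ix))
    return np.einsum(expr, a, b), out_ix

def contract_network(tensors, order):
    """Fold the (array, index-labels) pairs together in the given linear
    order; a good order keeps intermediate tensors small."""
    acc, acc_ix = tensors[order[0]]
    for t in order[1:]:
        acc, acc_ix = contract_pair(acc, acc_ix, *tensors[t])
    return acc, acc_ix

# toy chain of three tensors sharing indices a-b, b-c, c-d
T = [(np.random.rand(2, 3), ["a", "b"]),
     (np.random.rand(3, 4), ["b", "c"]),
     (np.random.rand(4, 2), ["c", "d"])]
result, ix = contract_network(T, order=[0, 1, 2])
assert np.allclose(result, T[0][0] @ T[1][0] @ T[2][0]) and ix == ["a", "d"]
```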

    Industrial and Tramp Ship Routing Problems: Closing the Gap for Real-Scale Instances

    Recent studies in maritime logistics have introduced a general ship routing problem and a benchmark suite based on real shipping segments, considering pickups and deliveries, cargo selection, ship-dependent starting locations, travel times and costs, time windows, and incompatibility constraints, among other features. Together, these characteristics pose considerable challenges for exact and heuristic methods, and some cases with as few as 18 cargoes remain unsolved. To face this challenge, we propose an exact branch-and-price (B&P) algorithm and a hybrid metaheuristic. Our exact method generates elementary routes but exploits decremental state-space relaxation to speed up column generation; it also relies on heuristic strong branching as well as advanced preprocessing and route enumeration techniques. Our metaheuristic is a sophisticated extension of the unified hybrid genetic search: it exploits a set-partitioning phase and uses problem-tailored variation operators to efficiently handle all the problem characteristics. As shown in our experimental analyses, the B&P algorithm optimally solves 239 of the 240 existing instances within one hour. Scalability experiments on even larger problems demonstrate that it can optimally solve problems with around 60 ships and 200 cargoes (i.e., 400 pickup and delivery services) and reach optimality gaps below 1.04% on the largest cases with up to 260 cargoes. The hybrid metaheuristic outperforms all previous heuristics and produces near-optimal solutions within minutes. These results are noteworthy, since these instances are comparable in size to the largest problems routinely solved by shipping companies.
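    Schematically, the bound computation at each branch-and-price node alternates between a restricted master LP and a pricing subproblem that generates routes of negative reduced cost. The sketch below is a generic column-generation loop with hypothetical `master` and `pricing` interfaces, not the authors' implementation:

```python
def column_generation(master, pricing, eps=1e-6):
    """Generic column-generation loop (schematic). Assumed interfaces:
    master.solve() -> (objective, duals) for the restricted master LP;
    pricing.solve(duals) -> candidate routes, each with a .reduced_cost;
    master.add_column(route) adds a route variable to the master."""
    while True:
        obj, duals = master.solve()
        candidates = pricing.solve(duals)   # e.g. a shortest-path problem
                                            # with resource constraints
        improving = [r for r in candidates if r.reduced_cost < -eps]
        if not improving:
            return obj                      # LP bound for this node
        for r in improving:
            master.add_column(r)
```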

    A Time-driven Data Placement Strategy for a Scientific Workflow Combining Edge Computing and Cloud Computing

    Compared to traditional distributed computing environments such as grids, cloud computing provides a more cost-effective way to deploy scientific workflows. Each task of a scientific workflow requires several large datasets that are located in different datacenters of the cloud environment, resulting in serious data transmission delays. Edge computing reduces these delays and supports fixed placement of a scientific workflow's private datasets, but its storage capacity is a bottleneck. It is therefore a challenge to combine the advantages of edge computing and cloud computing to rationalize the data placement of scientific workflows and to optimize the data transmission time across different datacenters. Traditional data placement strategies maintain load balancing across a given number of datacenters, which results in large data transmission times. In this study, a self-adaptive discrete particle swarm optimization algorithm with genetic algorithm operators (GA-DPSO) is proposed to optimize the data transmission time when placing data for a scientific workflow. The approach considers the characteristics of data placement in a combined edge and cloud environment, as well as the factors affecting transmission delay, such as the bandwidth between datacenters, the number of edge datacenters, and the storage capacity of edge datacenters. The crossover and mutation operators of the genetic algorithm are adopted to avoid the premature convergence of traditional particle swarm optimization, which enhances the diversity of population evolution and effectively reduces the data transmission time. The experimental results show that the data placement strategy based on GA-DPSO can effectively reduce the data transmission time during workflow execution in a combined edge and cloud environment.
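    As a rough illustration of the mechanism (not the paper's exact algorithm), the sketch below encodes each particle as an assignment vector, position i holding the datacenter of dataset i, and replaces the classical velocity update with GA-style mutation and crossover toward the personal and global bests; all parameter names and values are hypothetical:

```python
import random

def ga_dpso(cost, n_items, n_dcs, n_particles=30, iters=200,
            p_mut=0.05, p_pbest=0.5, p_gbest=0.3, seed=0):
    """Discrete PSO with GA operators (schematic). cost(assign) should
    return the data-transmission time of an assignment, where
    assign[i] is the datacenter hosting dataset i."""
    rng = random.Random(seed)
    swarm = [[rng.randrange(n_dcs) for _ in range(n_items)]
             for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(swarm, key=cost)[:]
    for _ in range(iters):
        for idx, p in enumerate(swarm):
            for i in range(n_items):
                r = rng.random()
                if r < p_mut:                        # mutation operator
                    p[i] = rng.randrange(n_dcs)
                elif r < p_mut + p_pbest:            # crossover toward pbest
                    p[i] = pbest[idx][i]
                elif r < p_mut + p_pbest + p_gbest:  # crossover toward gbest
                    p[i] = gbest[i]
                # otherwise keep the current component (inertia)
            if cost(p) < cost(pbest[idx]):
                pbest[idx] = p[:]
                if cost(p) < cost(gbest):
                    gbest = p[:]
    return gbest

# toy objective: placing everything on datacenter 0 is optimal
print(ga_dpso(lambda a: sum(a), n_items=10, n_dcs=3))  # typically all zeros
```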