Partitioning networks into cliques: a randomized heuristic approach
In the context of community detection in social networks, the term community can be grounded in the strict sense that everybody within the community should know each other. We consider the corresponding community detection problem: we search for a partitioning of a network into the minimum number of non-overlapping cliques, such that the cliques cover all vertices. This problem is known as the clique covering problem (CCP) and is one of the classical NP-hard problems. For CCP, we propose a randomized heuristic approach. To construct a high-quality solution to CCP, we present an iterated greedy (IG) algorithm. IG can also be combined with a heuristic that determines how far the algorithm is from the optimum in the worst case; randomized local search (RLS) for the maximum independent set problem was proposed to find such a bound. The experimental results of IG and the bounds obtained by RLS indicate that IG is a very suitable technique for solving CCP in real-world graphs. In addition, we summarize our basic rigorous results, developed for the analysis of IG and the understanding of its behavior on several relevant graph classes
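The abstract does not spell out the iterated greedy algorithm itself; as a rough illustration of the kind of construction step such a heuristic repeats, a single greedy pass that partitions a graph into cliques might look like the following sketch (the adjacency-dict representation and all names are our own, not the paper's):

```python
def greedy_clique_cover(adj):
    """One greedy construction pass for the clique covering problem.

    `adj` maps each vertex to the set of its neighbors. Repeatedly seed a
    new clique with an uncovered vertex, then greedily absorb every
    uncovered vertex adjacent to all current clique members.
    """
    uncovered = set(adj)
    cliques = []
    while uncovered:
        v = next(iter(uncovered))          # seed a new clique
        clique = {v}
        for u in list(uncovered - {v}):
            # u may join only if it is adjacent to every clique member
            if all(u in adj[w] for w in clique):
                clique.add(u)
        cliques.append(clique)
        uncovered -= clique
    return cliques
```

An iterated greedy scheme would then repeatedly destroy part of such a cover and reconstruct it, keeping the best partition found.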
A novel evolutionary formulation of the maximum independent set problem
We introduce a novel evolutionary formulation of the problem of finding a
maximum independent set of a graph. The new formulation is based on the
relationship that exists between a graph's independence number and its acyclic
orientations. It views such orientations as individuals and evolves them with
the aid of evolutionary operators that are very heavily based on the structure
of the graph and its acyclic orientations. The resulting heuristic has been
tested on some of the Second DIMACS Implementation Challenge benchmark graphs,
and has been found to be competitive when compared to several of the other
heuristics that have also been tested on those graphs
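The paper's evolutionary operators are not detailed in the abstract; the toy sketch below only illustrates the underlying connection it exploits, namely that acyclic orientations encode independent sets: orienting each edge from the earlier to the later vertex of a permutation yields an acyclic orientation, and the sinks of any orientation form an independent set (two adjacent sinks are impossible). The simple (1+1)-style mutation loop is our own simplification, not the paper's algorithm:

```python
import random

def sinks(adj, perm):
    """Decode a permutation-induced acyclic orientation to its sink set.

    Each edge is oriented from the earlier to the later vertex in `perm`;
    a vertex is a sink iff all of its neighbors come earlier. The sinks
    of any orientation form an independent set.
    """
    pos = {v: i for i, v in enumerate(perm)}
    return {v for v in adj if all(pos[u] < pos[v] for u in adj[v])}

def evolve_mis(adj, iters=200, seed=0):
    """Toy (1+1) evolutionary search over orientations: mutate the
    permutation by swapping two positions, keep the fitter individual."""
    rng = random.Random(seed)
    perm = list(adj)
    rng.shuffle(perm)
    best = sinks(adj, perm)
    for _ in range(iters):
        child = perm[:]
        i, j = rng.randrange(len(child)), rng.randrange(len(child))
        child[i], child[j] = child[j], child[i]
        cand = sinks(adj, child)
        if len(cand) >= len(best):
            perm, best = child, cand
    return best
```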
A nonmonotone GRASP
A greedy randomized adaptive search procedure (GRASP) is an iterative
multistart metaheuristic for difficult combinatorial optimization problems.
Each GRASP iteration consists of two phases: a construction phase, in which
a feasible solution is produced, and a local search phase, in which a local
optimum in the neighborhood of the constructed solution is sought. Repeated
applications of the construction procedure yield different starting
solutions for the local search, and the best overall solution is kept as
the result. The GRASP local search applies iterative improvement until a
locally optimal solution is found. During this phase, starting from the
current solution, an improving neighbor solution is accepted and considered
as the new current solution. In this paper, we propose a variant of the
GRASP framework that uses a new “nonmonotone” strategy to explore the
neighborhood of the current solution. We formally state the convergence of
the nonmonotone local search to a locally optimal solution and illustrate
the effectiveness of the resulting Nonmonotone GRASP on three classical
hard combinatorial optimization problems: the maximum cut problem
(MAX-CUT), the weighted maximum satisfiability problem (MAX-SAT), and the
quadratic assignment problem (QAP)
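As a concrete illustration of the plain GRASP framework described in the abstract (not the paper's nonmonotone variant), here is a minimal sketch for MAX-CUT; the randomized greedy construction and first-improvement local search are simplified stand-ins, and all names are our own:

```python
import random

def cut_value(adj, side):
    """Number of edges whose endpoints lie on opposite sides of the cut."""
    return sum(1 for u in adj for v in adj[u] if u < v and side[u] != side[v])

def grasp_maxcut(adj, iters=20, seed=0):
    """Minimal GRASP sketch for MAX-CUT: each iteration builds a solution
    greedily with randomized tie-breaking, then improves it by
    single-vertex flips until a local optimum; the best cut is kept."""
    rng = random.Random(seed)
    best, best_val = None, -1
    for _ in range(iters):
        # Construction phase: assign vertices one by one to the side that
        # cuts more already-assigned neighbors; break ties at random.
        side = {}
        for v in adj:
            gain0 = sum(1 for u in adj[v] if side.get(u) == 1)
            gain1 = sum(1 for u in adj[v] if side.get(u) == 0)
            if gain0 == gain1:
                side[v] = rng.randrange(2)
            else:
                side[v] = 0 if gain0 > gain1 else 1
        # Local search phase: flip any vertex whose flip increases the
        # cut, until no single flip improves (a local optimum).
        improved = True
        while improved:
            improved = False
            for v in adj:
                same = sum(1 for u in adj[v] if side[u] == side[v])
                if same > len(adj[v]) - same:   # flipping v gains
                    side[v] = 1 - side[v]
                    improved = True
        val = cut_value(adj, side)
        if val > best_val:
            best, best_val = dict(side), val
    return best, best_val
```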
Multi-shot Solution Prediction for Combinatorial Optimization
This paper aims to predict optimal solutions for combinatorial optimization
problems (COPs) via machine learning (ML). To find high-quality solutions
efficiently, existing methods use an ML model to predict the optimal
solution and use the ML prediction to guide the search. Predicting the
optimal solution with sufficient accuracy is critical; however, it is
challenging due to the high complexity of COPs. Moreover, these existing
methods are single-shot, i.e., they predict the optimal solution only once.
This paper proposes a framework that enables an ML model to predict the
optimal solution in multiple shots, namely multi-shot solution prediction
(MSSP), which can improve the quality of an ML prediction by harnessing
feedback from the search. Specifically, we employ a set of statistical
measures as features to extract useful information from feasible solutions
found by the search method and to inform the ML model which value a
decision variable is likely to take in high-quality solutions. Our
experiments on three NP-hard COPs show that MSSP substantially improves the
quality of an ML prediction and achieves results competitive with other
search methods in terms of solution quality. Furthermore, we demonstrate
that MSSP can be used as a pricing heuristic for column generation, to
boost a branch-and-price algorithm for solving the graph coloring problem
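The abstract does not specify the ML model or the statistical features used. The toy loop below, essentially a cross-entropy-style sketch with hypothetical names, only illustrates the multi-shot feedback idea: in each shot, solutions produced by a stand-in "search" update per-variable value frequencies, which then serve as the next shot's prediction:

```python
import random

def multi_shot_sketch(n_vars, evaluate, shots=3, pool=30, seed=0):
    """Toy multi-shot prediction loop over 0/1 decision variables.

    `evaluate` scores a 0/1 tuple (higher is better). Each shot samples a
    pool of candidate solutions from the current per-variable
    probabilities (the "prediction"), then refits those probabilities to
    the frequency of value 1 among the best candidates found -- a crude
    stand-in for statistical features feeding back into an ML model.
    """
    rng = random.Random(seed)
    probs = [0.5] * n_vars        # initial uninformed prediction
    best = None
    for _ in range(shots):
        # "Search": sample candidates from the current prediction.
        cands = [tuple(int(rng.random() < p) for p in probs)
                 for _ in range(pool)]
        cands.sort(key=evaluate, reverse=True)
        if best is None or evaluate(cands[0]) > evaluate(best):
            best = cands[0]
        # Feedback feature: how often each variable is 1 in top solutions.
        top = cands[:pool // 3]
        probs = [sum(s[i] for s in top) / len(top) for i in range(n_vars)]
    return best
```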