    Efficient modularity density heuristics in graph clustering and their applications

    Modularity Density Maximization is a graph clustering problem that avoids the resolution limit degeneracy of the Modularity Maximization problem. This thesis aims at solving larger instances than current Modularity Density heuristics can handle, and at showing how close the obtained solutions are to the expected clustering. Three main contributions arise from this objective. The first consists of theoretical results about properties of Modularity Density based prioritizers. The second is the development of eight Modularity Density Maximization heuristics. Our heuristics are compared with optimal results from the literature and with the GAOD, iMeme-Net, HAIN, and BMD- heuristics. Our results are also compared with CNM and Louvain, heuristics for Modularity Maximization that solve instances with thousands of nodes. The tests were carried out using graphs from the “Stanford Large Network Dataset Collection”. The experiments show that our eight heuristics found solutions for graphs with hundreds of thousands of nodes, and that five of them surpassed the current state-of-the-art Modularity Density Maximization heuristic solvers for large graphs. The third contribution is the proposal of six column generation methods, which use exact and heuristic auxiliary solvers and an initial variable generator. Comparisons between our column generation methods and state-of-the-art algorithms were also carried out. The results show that (i) two of our methods surpass the state-of-the-art algorithms in terms of time, and (ii) our methods prove the optimal value for larger instances than current approaches can tackle. Our results suggest clear improvements over the state of the art for the Modularity Density Maximization problem.
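    For reference, a common formulation of the modularity density objective maximized here (following Li et al., 2008) scores each cluster by its internal minus external edge count, normalized by cluster size; the thesis may work with a parameterized variant of this definition. In LaTeX notation, for a partition into clusters $V_1, \dots, V_k$:

        D(V_1, \dots, V_k) \;=\; \sum_{c=1}^{k} \frac{2\,\lvert E_c^{\mathrm{in}}\rvert - \lvert E_c^{\mathrm{out}}\rvert}{\lvert V_c\rvert},

    where $E_c^{\mathrm{in}}$ is the set of edges with both endpoints in $V_c$ and $E_c^{\mathrm{out}}$ is the set of edges joining $V_c$ to the rest of the graph. Unlike plain modularity, the normalization by $\lvert V_c\rvert$ is what counteracts the resolution limit mentioned above.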

    Interior point methods and simulated annealing for nonsymmetric conic optimization

    This thesis explores four methods for convex optimization. The first two are an interior point method and a simulated annealing algorithm that share a theoretical foundation. This connection is due to the interior point method’s use of the so-called entropic barrier, whose derivatives can be approximated through sampling. Here, the sampling is carried out with a technique known as hit-and-run. By carefully analyzing the properties of hit-and-run sampling, it is shown that both the interior point method and the simulated annealing algorithm can solve a convex optimization problem in the membership oracle setting. The number of oracle calls made by these methods is bounded by a polynomial in the input size. The third method is an analytic center cutting plane method that shows promising performance for copositive optimization. It outperforms the first two methods by a significant margin on the problem of separating a matrix from the completely positive cone. The final method is based on Mosek’s algorithm for nonsymmetric conic optimization. Using the same scaling matrix, search direction, and neighborhood, we define a method that converges to a near-optimal solution in polynomial time.
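    To make the membership-oracle setting concrete, the sketch below shows the basic hit-and-run step behind the sampling: draw a uniformly random direction, locate the chord where that line meets the convex body using only membership queries, and jump to a uniformly random point on the chord. This is a minimal illustration under assumed names (hit_and_run, oracle, chord_end); it targets the uniform distribution over the body, whereas the methods described above presumably sample from a family of densities derived from the entropic barrier.

        import numpy as np

        def hit_and_run(oracle, x0, n_samples, rng=None):
            """Basic hit-and-run sampler over a bounded convex body K,
            accessed only through a membership oracle (oracle(x) -> bool).
            Illustrative sketch: no warm-up or mixing analysis included."""
            rng = np.random.default_rng() if rng is None else rng
            x = np.asarray(x0, dtype=float)
            samples = []
            for _ in range(n_samples):
                # 1. Pick a uniformly random direction on the unit sphere.
                d = rng.standard_normal(x.shape)
                d /= np.linalg.norm(d)

                # 2. Find the chord K ∩ {x + t d} by expanding outward and
                #    then bisecting the boundary, using only oracle calls.
                def chord_end(sign):
                    lo, hi = 0.0, 1.0
                    while oracle(x + sign * hi * d):   # expand until outside (K assumed bounded)
                        lo, hi = hi, 2.0 * hi
                    for _ in range(50):                # bisect toward the boundary
                        mid = 0.5 * (lo + hi)
                        if oracle(x + sign * mid * d):
                            lo = mid
                        else:
                            hi = mid
                    return lo

                t_minus, t_plus = -chord_end(-1.0), chord_end(+1.0)

                # 3. Move to a uniformly random point on that chord.
                x = x + rng.uniform(t_minus, t_plus) * d
                samples.append(x.copy())
            return np.array(samples)

        # Example: sample from the unit Euclidean ball in R^5.
        if __name__ == "__main__":
            ball = lambda x: np.linalg.norm(x) <= 1.0
            pts = hit_and_run(ball, x0=np.zeros(5), n_samples=1000)
            print(pts.mean(axis=0))  # should be close to the origin

    Each chord computation costs a number of oracle calls logarithmic in the desired precision, which is how oracle-call counts like those mentioned above are typically accounted for.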