
    Two-phase heuristics for the k-club problem

Given an undirected graph G and an integer k, a k-club is a subset of nodes that induces a subgraph of diameter at most k. The k-club problem consists of finding a maximum-cardinality k-club in G; it is NP-hard. Checking whether a given k-club is maximal, or whether it is a subset of a larger k-club, is also NP-hard, a consequence of the non-hereditary nature of the k-club structure: a subset of a k-club need not itself be a k-club. This non-hereditary nature works against heuristic strategies that rely on single-element add and delete operations. In this work we propose two-phase algorithms that combine simple construction schemes with exact optimization of restricted integer models to generate near-optimal solutions for the k-club problem. Numerical experiments on sets of uniform random graphs, with edge densities known to be very challenging, and on test instances available in the literature indicate that the new algorithms are quite effective, both in solution quality and in running time.
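To make the k-club condition concrete, here is a minimal sketch in Python with networkx, assuming a feasibility check plus a naive drop-based construction; this is not the paper's two-phase method, and the names `is_k_club` and `greedy_k_club` are hypothetical:

```python
import networkx as nx

def is_k_club(G, S, k):
    """Check whether node set S induces a subgraph of diameter <= k."""
    H = G.subgraph(S)
    if len(S) <= 1:
        return True
    if not nx.is_connected(H):
        return False
    return nx.diameter(H) <= k

def greedy_k_club(G, k):
    """Naive drop heuristic: start from all nodes and repeatedly discard
    the most 'peripheral' node until a k-club remains."""
    S = set(G.nodes())
    while len(S) > 1 and not is_k_club(G, S, k):
        H = G.subgraph(S)
        if nx.is_connected(H):
            ecc = nx.eccentricity(H)          # distance to farthest node
            S.discard(max(ecc, key=ecc.get))  # drop the most eccentric node
        else:
            S -= min(nx.connected_components(H), key=len)  # drop smallest piece
    return S
```

A two-phase approach in the spirit of the abstract would follow such a construction with exact optimization of a restricted integer model built around the constructed solution.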

    Information dynamics algorithm for detecting communities in networks

The problem of community detection is relevant in many scientific disciplines, from social science to statistical physics. Given the impact of community detection in areas such as psychology and the social sciences, we address the issue of modifying existing well-performing algorithms by incorporating elements of the application domain, i.e., making them domain-inspired. We focus on a psychology- and social-network-inspired approach that may be useful for further strengthening the link between social network studies and the mathematics of community detection. Here we introduce a community detection algorithm derived from van Dongen's Markov Cluster (MCL) method by treating the network's nodes as agents capable of making decisions. In this framework we introduce a memory factor to mimic a typical human behavior, the oblivion effect. The method is based on information diffusion and includes a non-linear processing phase. We test our method on two classical community benchmarks and on computer-generated networks with known community structure. Our approach has three important features: the capacity to detect overlapping communities, the capability to identify communities from an individual node's point of view, and the ability to fine-tune community detectability with respect to prior knowledge of the data. Finally, we discuss how a Shannon entropy measure can be used for parameter estimation in complex networks.
Comment: Submitted to "Communications in Nonlinear Science and Numerical Simulation"
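As a rough illustration of the underlying machinery, the sketch below implements the classic MCL expansion/inflation loop with a simple memory blend added; the `mem` parameter and the blending rule are illustrative assumptions, not the paper's exact oblivion operator:

```python
import numpy as np

def mcl_with_memory(A, e=2, r=2.0, mem=0.3, iters=50, tol=1e-6):
    """MCL iteration on an undirected adjacency matrix A, with a memory
    term that mixes part of the previous flow matrix back in."""
    n = A.shape[0]
    M = A.astype(float) + np.eye(n)       # add self-loops
    M /= M.sum(axis=0, keepdims=True)     # make columns stochastic
    for _ in range(iters):
        prev = M
        M = np.linalg.matrix_power(M, e)  # expansion: flow spreads out
        M = M ** r                        # inflation: boost strong flows
        M /= M.sum(axis=0, keepdims=True)
        M = (1 - mem) * M + mem * prev    # memory: partial oblivion of updates
        if np.abs(M - prev).max() < tol:
            break
    clusters = {}                         # crude readout: group nodes by attractor
    for v in range(n):
        clusters.setdefault(int(np.argmax(M[:, v])), set()).add(v)
    return list(clusters.values())
```

With `mem=0` this reduces to plain MCL; raising `mem` slows convergence and, loosely, mimics agents that only gradually forget their previous state.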

    Genetic Algorithm for Epidemic Mitigation by Removing Relationships

Min-SEIS-Cluster is an optimization problem that aims to minimize infection spreading in networks. In this problem, nodes can be susceptible to an infection, exposed to it, or infectious. One of the main features of the problem is that nodes behave differently when interacting with nodes from their own community: the problem is characterized by distinct probabilities of infecting nodes within the same community and across different communities. This paper presents a new genetic algorithm that solves the Min-SEIS-Cluster problem. The genetic algorithm significantly outperformed the previous best heuristic for this problem, reducing the number of infected nodes during the simulation of the epidemics. The results therefore suggest that our new genetic algorithm is the state-of-the-art heuristic for this problem.
Comment: GECCO '17 - Proceedings of the Genetic and Evolutionary Computation Conference
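For flavor, here is a toy genetic algorithm over edge-removal chromosomes. The fitness below is a crude connectivity proxy (size of the largest remaining component) standing in for the paper's SEIS simulation with intra- and inter-community infection probabilities; the names `ga_edge_removal` and `spread_proxy` and all parameters are hypothetical:

```python
import random
import networkx as nx

def spread_proxy(G, removed):
    """Epidemic proxy: largest connected component after removing edges
    (smaller = better containment). The paper instead simulates SEIS."""
    H = G.copy()
    H.remove_edges_from(removed)
    return max((len(c) for c in nx.connected_components(H)), default=0)

def ga_edge_removal(G, budget, pop_size=30, gens=100, mut=0.05, seed=0):
    """Toy GA choosing `budget` edges to remove (assumes budget >= 2)."""
    rng = random.Random(seed)
    edges = list(G.edges())
    fitness = lambda ind: spread_proxy(G, [edges[i] for i in ind])
    pop = [rng.sample(range(len(edges)), budget) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                       # lower is better
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, budget)
            child = list(dict.fromkeys(a[:cut] + b[cut:]))  # crossover, dedup
            while len(child) < budget:                      # repair to full length
                g = rng.randrange(len(edges))
                if g not in child:
                    child.append(g)
            if rng.random() < mut:                          # point mutation
                child[rng.randrange(budget)] = rng.randrange(len(edges))
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return [edges[i] for i in best]
```

For example, `ga_edge_removal(nx.karate_club_graph(), budget=5)` returns five edges whose removal best fragments the graph under this proxy.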

    Modulo scheduling with reduced register pressure

Software pipelining is a scheduling technique used by some production compilers to extract more instruction-level parallelism from innermost loops. Modulo scheduling refers to a class of algorithms for software pipelining. Most previous research on modulo scheduling has focused on reducing the number of cycles between the initiation of consecutive iterations (termed II, the initiation interval) but has not considered the register pressure of the produced schedules. Register pressure increases as instruction-level parallelism increases, and when the register requirements of a schedule exceed the number of available registers, the loop must be rescheduled, possibly with a higher II. Register pressure therefore has an important impact on the performance of a schedule. This paper presents a novel heuristic modulo scheduling strategy that tries to generate schedules with the lowest II and, among all possible schedules with that II, to select one with the lowest register requirements. The proposed method has been implemented in an experimental compiler and tested on the Perfect Club benchmarks. The results show that it achieves an optimal II for at least 97.5 percent of the loops, with compilation time comparable to a conventional top-down approach and lower register requirements. The method is also compared with other existing methods: it performs better than other heuristic methods and almost as well as linear programming methods, which obtain optimal solutions but are impractical for production compilers because their computing cost grows exponentially with the number of operations in the loop body.
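The II mentioned above is bounded below by two standard quantities: the resource-constrained minimum II (ResMII) and the recurrence-constrained minimum II (RecMII). The following worked example uses the textbook formulas with made-up loop numbers:

```python
import math

def res_mii(op_usage, num_units):
    """ResMII: for each resource class, ceil(operations / available units)."""
    return max(math.ceil(op_usage[r] / num_units[r]) for r in op_usage)

def rec_mii(cycles):
    """RecMII: for each dependence cycle, ceil(total latency / iteration distance)."""
    return max(math.ceil(lat / dist) for lat, dist in cycles)

# Hypothetical loop: 6 adds on 2 adders, 4 loads on 1 load unit,
# and one recurrence of total latency 5 spanning 2 iterations.
usage = {"add": 6, "load": 4}
units = {"add": 2, "load": 1}
cycles = [(5, 2)]
print(max(res_mii(usage, units), rec_mii(cycles)))  # max(max(3, 4), 3) = 4
```

A scheduler in the style described above would first try II = 4 and increase II only when no valid schedule, or no schedule within the available registers, can be found.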

    Community detection and graph partitioning

Many methods have been proposed for community detection in networks. Some of the most promising are methods based on statistical inference, which rest on solid mathematical foundations and return excellent results in practice. In this paper we show that two of the most widely used inference methods can be mapped directly onto versions of the standard minimum-cut graph partitioning problem, which allows us to apply any of the many well-understood partitioning algorithms to the solution of community detection problems. We illustrate the approach by adapting the Laplacian spectral partitioning method to perform community inference, testing the resulting algorithm on a range of examples, including computer-generated and real-world networks. Both the quality of the results and the running time rival the best previous methods.
Comment: 5 pages, 2 figures
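For reference, the classic Laplacian spectral bisection that the paper adapts splits nodes by the sign of the Fiedler vector. A minimal sketch of the standard method (not the paper's inference-specific variant):

```python
import numpy as np
import networkx as nx

def spectral_bisection(G):
    """Split a connected graph into two groups by the sign of the
    eigenvector of the second-smallest Laplacian eigenvalue."""
    nodes = list(G.nodes())
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = vecs[:, 1]             # Fiedler vector
    part1 = {n for n, x in zip(nodes, fiedler) if x >= 0}
    return part1, set(nodes) - part1

A, B = spectral_bisection(nx.karate_club_graph())
print(len(A), len(B))
```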

    Algorithmic and Statistical Perspectives on Large-Scale Data Analysis

In recent years, ideas from statistics and scientific computing have begun to interact in increasingly sophisticated and fruitful ways with ideas from computer science and the theory of algorithms, aiding the development of improved worst-case algorithms for large-scale scientific and Internet data analysis problems. In this chapter, I describe two recent examples that drew on ideas from both areas: one having to do with selecting good columns or features from a DNA single-nucleotide-polymorphism data matrix, and the other with selecting good clusters or communities from a data graph representing a social or information network. Together they may serve as a model for exploiting complementary algorithmic and statistical perspectives to solve applied large-scale data analysis problems.
Comment: 33 pages. To appear in Uwe Naumann and Olaf Schenk, editors, "Combinatorial Scientific Computing," Chapman and Hall/CRC Press.
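One concrete technique in this line of work scores columns by their statistical leverage and samples proportionally. The sketch below is a generic illustration of that idea under my own naming, not necessarily the chapter's exact procedure:

```python
import numpy as np

def leverage_scores(X, k):
    """Leverage of each column relative to the best rank-k subspace:
    squared column norms of the top-k right singular vectors."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return np.sum(Vt[:k] ** 2, axis=0)

def sample_columns(X, k, c, rng=np.random.default_rng(0)):
    """Pick c distinct columns with probability proportional to leverage."""
    p = leverage_scores(X, k)
    idx = rng.choice(X.shape[1], size=c, replace=False, p=p / p.sum())
    return X[:, idx], idx
```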

    Construction of near-optimal vertex clique covering for real-world networks

We propose a method that combines a constructive and a bounding heuristic to solve the vertex clique covering problem (CCP), in which the aim is to partition the vertices of a graph into the smallest number of classes that induce cliques. Solving CCP is strongly motivated by the analysis of social and other real-world networks and by applications in graph mining, as well as by the fact that CCP is one of the classical NP-hard problems. Combining the constructive and the bounding heuristic helped us not only to find high-quality clique coverings but also to determine that, in the domain of real-world networks, many of the obtained solutions are optimal, while the rest are near-optimal. In addition, the method has polynomial time complexity and shows much promise for practical use. Experimental results are presented for a fairly representative benchmark of real-world data. Our test graphs include extracts of web-based social networks, including some very large ones, several well-known graphs from network science, and co-appearance networks of characters from literary works in the DIMACS graph coloring benchmark. We also present results for synthetic pseudorandom graphs structured according to the Erdős–Rényi model and Leighton's model.
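In the same spirit, here is a minimal sketch of the two ingredients: a greedy clique construction and an independent-set lower bound (no clique can contain two non-adjacent vertices, so any independent set bounds the cover size from below). Function names and tie-breaking rules are my own assumptions, not the paper's heuristics:

```python
import networkx as nx

def greedy_clique_cover(G):
    """Constructive phase: repeatedly grow a maximal clique and remove it."""
    H = G.copy()
    cover = []
    while H.number_of_nodes():
        v = max(H.degree, key=lambda t: t[1])[0]   # highest-degree seed
        clique, cands = {v}, set(H[v])
        while cands:                               # grow while common neighbors remain
            u = max(cands, key=lambda u: len(set(H[u]) & cands))
            clique.add(u)
            cands &= set(H[u])
        cover.append(clique)
        H.remove_nodes_from(clique)
    return cover

def independent_set_bound(G):
    """Bounding phase: greedy independent set = lower bound on cover size."""
    H = G.copy()
    count = 0
    while H.number_of_nodes():
        v = min(H.degree, key=lambda t: t[1])[0]   # lowest-degree pick
        count += 1
        H.remove_nodes_from(list(H[v]) + [v])      # remove closed neighborhood
    return count
```

When the two numbers coincide on an instance, the covering is provably optimal, which is how optimality can be certified on many real-world networks.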