
    Efficient Truss Maintenance in Evolving Networks

    Truss was proposed to study social network data represented by graphs. A k-truss of a graph is a cohesive subgraph in which each edge is contained in at least k-2 triangles within the subgraph. While the truss model has been shown to capture close relationships in social networks well, and efficient algorithms for finding trusses have been studied extensively, very little attention has been paid to truss maintenance. Most social networks, however, are evolving networks, and it may be infeasible to recompute trusses from scratch every time the up-to-date k-trusses are needed. In this paper, we discuss how to maintain trusses in a graph under dynamic updates. We first establish a set of properties for truss maintenance, then propose algorithms for maintaining trusses under edge deletions and insertions, and finally discuss truss index maintenance. We test the proposed techniques on real datasets, and the experimental results show the promise of our work.
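
    As a concrete reference point for the definitions above, the following is a minimal Python sketch of static k-truss decomposition by support peeling; it computes every edge's truss number from scratch, which is exactly the kind of per-update recomputation the paper's maintenance algorithms aim to avoid. The graph representation and function names are illustrative assumptions, not the paper's code.

        from collections import defaultdict

        def truss_decomposition(edges):
            """Return the truss number of each edge: the largest k for which
            the edge still belongs to the k-truss."""
            adj = defaultdict(set)
            for u, v in edges:
                adj[u].add(v)
                adj[v].add(u)

            # Support of an edge = number of triangles that contain it.
            support = {tuple(sorted(e)): len(adj[e[0]] & adj[e[1]]) for e in edges}

            truss, remaining, k = {}, set(support), 2
            while remaining:
                # Edges with support <= k - 2 cannot reach the (k+1)-truss,
                # so their truss number is k; peel them away.
                queue = [e for e in remaining if support[e] <= k - 2]
                while queue:
                    e = queue.pop()
                    if e not in remaining:
                        continue
                    remaining.discard(e)
                    truss[e] = k
                    u, v = e
                    # Each common neighbor w loses the triangle (u, v, w).
                    for w in adj[u] & adj[v]:
                        for f in (tuple(sorted((u, w))), tuple(sorted((v, w)))):
                            if f in remaining:
                                support[f] -= 1
                                if support[f] <= k - 2:
                                    queue.append(f)
                    adj[u].discard(v)
                    adj[v].discard(u)
                k += 1
            return truss

        # Example: a 4-clique plus a pendant edge.
        edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
        print(truss_decomposition(edges))  # clique edges map to 4, (3, 4) to 2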

    Parallel Maximum Clique Algorithms with Applications to Network Analysis and Storage

    We propose a fast, parallel maximum clique algorithm for large sparse graphs that is designed to exploit characteristics of social and information networks. The method exhibits roughly linear runtime scaling over real-world networks ranging from 1,000 to 100 million nodes. In a test on a social network with 1.8 billion edges, the algorithm finds the largest clique in about 20 minutes. Our method employs a branch-and-bound strategy with novel and aggressive pruning techniques. For instance, we use the core number of a vertex in combination with a good heuristic clique finder to efficiently remove the vast majority of the search space. In addition, we parallelize the exploration of the search tree. During the search, processes immediately communicate changes to the upper and lower bounds on the size of the maximum clique, which occasionally results in a super-linear speedup because vertices with large search spaces can be pruned by other processes. We apply the algorithm to two problems: computing temporal strong components and compressing graphs. Comment: 11 pages
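
    The pruning idea sketched above can be illustrated with a small serial Python version: a greedy heuristic supplies a lower bound, core numbers discard vertices that cannot lie in a larger clique, and a plain branch-and-bound explores the rest. The paper's parallel tree exploration and shared-bound communication are omitted; the names and the dict-of-sets graph format are assumptions.

        def core_numbers(adj):
            """Degeneracy peeling: repeatedly remove a minimum-degree vertex."""
            deg = {v: len(ns) for v, ns in adj.items()}
            remaining, core, k = set(adj), {}, 0
            while remaining:
                v = min(remaining, key=deg.get)  # O(n) scan; fine for a sketch
                k = max(k, deg[v])
                core[v] = k
                remaining.discard(v)
                for w in adj[v]:
                    if w in remaining:
                        deg[w] -= 1
            return core

        def max_clique(adj):
            core = core_numbers(adj)

            # Heuristic lower bound: greedily grow a clique from a high-core vertex.
            v = max(adj, key=core.get)
            best, cand = [v], set(adj[v])
            while cand:
                u = max(cand, key=lambda x: len(adj[x] & cand))
                best.append(u)
                cand &= adj[u]

            def expand(clique, cand):
                nonlocal best
                if len(clique) > len(best):
                    best = list(clique)
                while cand:
                    if len(clique) + len(cand) <= len(best):
                        return  # even taking every candidate cannot beat best
                    v = cand.pop()
                    expand(clique + [v], cand & adj[v])

            # Core-number pruning: a clique of size s containing v forces
            # core(v) >= s - 1, so low-core vertices are skipped outright.
            for v in sorted(adj, key=core.get, reverse=True):
                if core[v] + 1 > len(best):
                    expand([v], set(adj[v]))
            return best

        adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
        print(max_clique(adj))  # -> the 4-clique 0, 1, 2, 3 in some order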

    Ramsey numbers and adiabatic quantum computing

    The graph-theoretic Ramsey numbers are notoriously difficult to calculate. In fact, for the two-color Ramsey numbers R(m,n) with m,n ≥ 3, only nine are currently known. We present a quantum algorithm for the computation of the Ramsey numbers R(m,n). We show how the computation of R(m,n) can be mapped to a combinatorial optimization problem whose solution can be found using adiabatic quantum evolution. We numerically simulate this adiabatic quantum algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(2,s) for 5 ≤ s ≤ 7. We then discuss the algorithm's experimental implementation, and close by showing that Ramsey number computation belongs to the quantum complexity class QMA. Comment: 4 pages, 1 table, no figures, published version
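
    The mapping can be made concrete classically: color the edges of K_N red/blue and take the cost of a coloring to be the number of red K_m plus blue K_n subgraphs; R(m,n) is then the smallest N at which this cost cannot be driven to zero. The brute-force Python sketch below, feasible only for tiny N where the paper instead minimizes such a cost by adiabatic quantum evolution, verifies R(3,3) = 6.

        from itertools import combinations, product

        def min_cost(N, m, n):
            """Minimum over all 2-colorings of K_N of (#red K_m + #blue K_n)."""
            edges = list(combinations(range(N), 2))
            best = float("inf")
            for bits in product((0, 1), repeat=len(edges)):
                color = dict(zip(edges, bits))
                cost = 0
                for S in combinations(range(N), m):   # red copies of K_m
                    cost += all(color[e] == 0 for e in combinations(S, 2))
                for S in combinations(range(N), n):   # blue copies of K_n
                    cost += all(color[e] == 1 for e in combinations(S, 2))
                best = min(best, cost)
                if best == 0:  # a zero-cost coloring settles the question
                    break
            return best

        # R(3,3) = 6: K_5 has a triangle-free 2-coloring, K_6 does not.
        assert min_cost(5, 3, 3) == 0
        assert min_cost(6, 3, 3) > 0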

    Cover-Encodings of Fitness Landscapes

    The traditional way of tackling discrete optimization problems is to use local search on suitably defined cost or fitness landscapes. Such approaches are limited, however, by the slowdown that occurs when local minima, a feature of the typically rugged landscapes encountered, arrest the progress of the search. Another way of tackling optimization problems is to use heuristic approximations to estimate a global cost minimum. Here we present a combination of these two approaches, using cover-encoding maps that map processes from a larger search space to subsets of the original search space. The key idea is to construct cover-encoding maps with the help of suitable heuristics that single out near-optimal solutions, resulting in landscapes on the larger search space that no longer exhibit trapping local minima. We present cover-encoding maps for the problems of the traveling salesman, number partitioning, maximum matching and maximum clique; the practical feasibility of our method is demonstrated by simulations of adaptive walks on the corresponding encoded landscapes, which find the global minima for these problems. Comment: 15 pages, 4 figures
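
    To convey the flavor of searching an encoded landscape, here is a hedged Python sketch for number partitioning: the walk moves over permutations (a larger search space), and a greedy decoder maps each permutation to a partition of the original space, so many bad local minima of the direct landscape disappear. The decoder and move set are generic illustrations of heuristic encodings, not the paper's cover-encoding construction.

        import random

        def decode(perm, numbers):
            """Greedy decoder: place numbers, in the order given by perm,
            on whichever side currently has the smaller sum; return the
            imbalance of the resulting partition (the cost)."""
            sums = [0, 0]
            for i in perm:
                s = 0 if sums[0] <= sums[1] else 1
                sums[s] += numbers[i]
            return abs(sums[0] - sums[1])

        def adaptive_walk(numbers, steps=5000, seed=0):
            rng = random.Random(seed)
            perm = list(range(len(numbers)))
            rng.shuffle(perm)
            cost = decode(perm, numbers)
            for _ in range(steps):
                i, j = rng.sample(range(len(perm)), 2)
                perm[i], perm[j] = perm[j], perm[i]      # move in the encoded space
                new = decode(perm, numbers)
                if new <= cost:
                    cost = new                           # keep improving moves
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # revert the swap
            return cost

        rng = random.Random(1)
        numbers = [rng.randint(1, 10**6) for _ in range(30)]
        print(adaptive_walk(numbers))  # imbalance reached on the encoded landscape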

    Sparse neural networks with large learning diversity

    Coded recurrent neural networks with three levels of sparsity are introduced. The first level relates to the size of messages, which is much smaller than the number of available neurons. The second is provided by a particular coding rule that acts as a local constraint on the neural activity. The third is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple, being based on binary neurons and binary connections, it is able to learn a large number of messages and recall them even in the presence of strong erasures. The performance of the network is assessed both as a classifier and as an associative memory.
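
    A minimal Python sketch in the spirit of this clique-coding scheme, assuming a clustered layout: each message activates one unit per cluster, learning stores the binary connections (a clique) among those units, and recall fills erased positions by a winner-take-all vote over connections to the known units. The cluster count, alphabet size, and single-pass retrieval rule are simplifying assumptions, not the paper's exact network.

        from itertools import combinations

        class CliqueMemory:
            def __init__(self, clusters, letters):
                self.c, self.l = clusters, letters
                self.conn = set()  # binary connections between (cluster, letter) units

            def learn(self, message):
                """Store the clique among the units selected by the message."""
                units = list(enumerate(message))  # (cluster index, letter)
                for a, b in combinations(units, 2):
                    self.conn.add((a, b))
                    self.conn.add((b, a))

            def recall(self, partial):
                """partial: message tuple with None at erased positions."""
                known = [(i, x) for i, x in enumerate(partial) if x is not None]
                out = list(partial)
                for i, x in enumerate(partial):
                    if x is not None:
                        continue
                    # Winner-take-all in cluster i: the letter with the most
                    # connections to the known units wins.
                    scores = [(sum(((i, y), k) in self.conn for k in known), y)
                              for y in range(self.l)]
                    out[i] = max(scores)[1]
                return tuple(out)

        mem = CliqueMemory(clusters=4, letters=16)
        mem.learn((3, 7, 1, 12))
        mem.learn((5, 2, 9, 0))
        print(mem.recall((3, None, 1, None)))  # -> (3, 7, 1, 12) despite erasures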