33 research outputs found

    Penentuan Prioritas pada Jaringan Back-bone Palapa Ring Menggunakan Derajat Node dan Cut Vertex

    Palapa Ring is a project aiming to connect provinces and cities in Indonesia via a high-speed data telecommunication path. The purpose of this research is to identify the priority scale of each node in the Palapa Ring backbone network by considering the degree of each node and the cut vertices of the network. The results show that the existing Palapa Ring infrastructure comprises 48 nodes and 117 links. The nodes with the highest degree in the network were PBR, PTK, BJM, JK, SB and UP, each of which was connected to four links. The network contained 22 cut vertices. The nodes were classified into four categories: five nodes (PBR, PTK, BJM, SB and UP) fell into the 1st-priority group, two nodes (JK, MDN) fell into the 2nd-priority group, 16 nodes fell into the 3rd-priority group, and the rest fell into the non-priority group.
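    The two graph-theoretic ingredients of this analysis are easy to reproduce. Below is a minimal sketch (assuming networkx) that computes node degrees and cut vertices (articulation points); the toy graph and the priority thresholds are illustrative assumptions, not the actual Palapa Ring data or the paper's exact classification rule.

    ```python
    # Minimal sketch: degree and cut-vertex (articulation point) analysis.
    # The toy graph below is hypothetical, not the real Palapa Ring topology.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([("PBR", "PTK"), ("PTK", "BJM"), ("BJM", "JK"),
                      ("JK", "SB"), ("SB", "UP"), ("UP", "PBR"),
                      ("JK", "MDN")])  # MDN hangs off JK, making JK a cut vertex

    degrees = dict(G.degree())                     # degree of each node
    cut_vertices = set(nx.articulation_points(G))  # removal disconnects G

    # Illustrative priority rule in the spirit of the paper: high degree and
    # being a cut vertex both raise priority (thresholds are assumptions).
    for node in G.nodes:
        high_degree = degrees[node] >= 3
        if high_degree and node in cut_vertices:
            priority = "1st priority"
        elif high_degree or node in cut_vertices:
            priority = "2nd/3rd priority"
        else:
            priority = "non-priority"
        print(node, degrees[node], node in cut_vertices, priority)
    ```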

    Changepoint Detection over Graphs with the Spectral Scan Statistic

    We consider the change-point detection problem of deciding, based on noisy measurements, whether an unknown signal over a given graph is constant or is instead piecewise constant over two connected induced subgraphs of relatively low cut size. We analyze the corresponding generalized likelihood ratio (GLR) statistic and relate it to the problem of finding a sparsest cut in a graph. We develop a tractable relaxation of the GLR statistic based on the combinatorial Laplacian of the graph, which we call the spectral scan statistic, and analyze its properties. We show how its performance as a testing procedure depends directly on the spectrum of the graph, and use this result to explicitly derive its asymptotic properties on a few significant graph topologies. Finally, we demonstrate both theoretically and by simulations that the spectral scan statistic can outperform naive testing procedures based on edge thresholding and χ² testing.
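    The link between cuts and the combinatorial Laplacian that the spectral scan statistic exploits can be seen in a few lines. The sketch below (assuming networkx/numpy) checks that the Laplacian quadratic form of a cluster's indicator vector equals the cut size, and scores one fixed candidate cluster against a noisy signal; it illustrates the underlying quantity, not the paper's full scan statistic.

    ```python
    # Sketch: for an indicator vector x of a vertex set S, the quadratic
    # form x^T L x equals the number of edges crossing (S, V \ S) -- the
    # combinatorial quantity the spectral scan statistic relaxes.
    import networkx as nx
    import numpy as np

    G = nx.erdos_renyi_graph(20, 0.2, seed=1)
    L = nx.laplacian_matrix(G).toarray().astype(float)

    S = set(range(10))  # hypothetical candidate cluster
    x = np.array([1.0 if v in S else 0.0 for v in G.nodes])
    cut_size = x @ L @ x                 # equals |edges crossing the cut|
    assert cut_size == nx.cut_size(G, S)

    # A GLR-style statistic compares how well a noisy signal y aligns with
    # a low-cut partition; here we score one fixed candidate S rather than
    # scanning over all of them as the paper's statistic does.
    rng = np.random.default_rng(0)
    y = x + rng.normal(scale=0.5, size=len(x))
    score = (y @ x) ** 2 / (x @ x)       # energy of y along the candidate
    print(cut_size, score)
    ```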

    Isoperimetric Inequalities in Simplicial Complexes

    In graph theory there are intimate connections between the expansion properties of a graph and the spectrum of its Laplacian. In this paper we define a notion of combinatorial expansion for simplicial complexes of general dimension, and prove that similar connections exist between the combinatorial expansion of a complex and the spectrum of the high-dimensional Laplacian defined by Eckmann. In particular, we present a Cheeger-type inequality and a high-dimensional Expander Mixing Lemma. As a corollary, using the work of Pach, we obtain a connection between spectral properties of complexes and Gromov's notion of geometric overlap. Using the work of Gundert and Wagner, we give an estimate for the combinatorial expansion and geometric overlap of random Linial-Meshulam complexes.
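    For intuition, the sketch below (assuming networkx/numpy) verifies the classical one-dimensional case that the paper generalizes: the Cheeger inequality λ₂/2 ≤ h(G) ≤ sqrt(2·λ₂) relating the normalized-Laplacian spectral gap to the graph's conductance, checked by brute force on a small graph. The high-dimensional Laplacian of a complex is beyond this illustration.

    ```python
    # Sketch: classical (dimension-one) Cheeger inequality on a small graph,
    # checked by brute force. lam[1] is the normalized-Laplacian spectral gap.
    from itertools import combinations
    import networkx as nx
    import numpy as np

    G = nx.petersen_graph()  # small 3-regular example
    lam = np.sort(np.linalg.eigvalsh(
        nx.normalized_laplacian_matrix(G).toarray()))
    gap = lam[1]

    # Brute-force conductance h(G) over all subsets of at most half the
    # nodes (only viable for tiny graphs).
    nodes = list(G.nodes)
    h = min(nx.conductance(G, set(S))
            for k in range(1, len(nodes) // 2 + 1)
            for S in combinations(nodes, k))

    # Cheeger: gap/2 <= h <= sqrt(2 * gap), up to numerical tolerance.
    assert gap / 2 - 1e-9 <= h <= np.sqrt(2 * gap) + 1e-9
    print(gap / 2, h, np.sqrt(2 * gap))
    ```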

    Distributed Sparse Cut Approximation

    We study the problem of computing a sparse cut in an undirected network graph G = (V, E). We measure the sparsity of a cut (S, V\S) by its conductance phi(S), i.e., by the ratio of the number of edges crossing the cut to the sum of the degrees on the smaller of the two sides. We present an efficient distributed algorithm to compute a cut of low conductance. Specifically, given two parameters b and phi, if there exists a cut of balance at least b and conductance at most phi, our algorithm outputs a cut of balance at least b/2 and conductance at most ~O(sqrt(phi)), where ~O(.) hides polylogarithmic factors in the number of nodes n. Our distributed algorithm works in the CONGEST model, i.e., it only requires sending messages of size at most O(log n) bits. The time complexity of the algorithm is ~O(D + 1/(b*phi)), where D is the diameter of G. This is a significant improvement over a result by Das Sarma et al. [ICDCN 2015], where it is shown that a cut of the same quality can be computed in time ~O(n + 1/(b*phi)). The improved running time is achieved in particular by devising and applying an efficient distributed algorithm for the all-prefix-sums problem in a distributed search tree. This algorithm, which is based on the classic parallel all-prefix-sums algorithm, might be of independent interest.
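    The conductance measure used above is simple to state in code. The following sketch (assuming networkx) implements phi(S) exactly as defined, i.e. crossing edges over the smaller side's degree sum, on a toy graph; the distributed CONGEST algorithm itself is not reproduced here.

    ```python
    # Sketch: conductance exactly as defined in the abstract -- crossing
    # edges divided by the smaller side's degree sum.
    import networkx as nx

    def conductance(G, S):
        """phi(S) = cut(S, V\\S) / min(vol(S), vol(V\\S))."""
        S = set(S)
        T = set(G.nodes) - S
        crossing = sum(1 for u, v in G.edges if (u in S) != (v in S))
        vol_S = sum(d for _, d in G.degree(S))
        vol_T = sum(d for _, d in G.degree(T))
        return crossing / min(vol_S, vol_T)

    G = nx.barbell_graph(5, 1)  # two cliques joined by a short path
    S = set(range(5))           # one clique: a natural low-conductance side
    print(conductance(G, S))    # small phi(S)
    ```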

    SCE: Scalable Network Embedding from Sparsest Cut

    Large-scale network embedding learns a latent representation for each node in an unsupervised manner, capturing the inherent properties and structural information of the underlying graph. In this field, many popular approaches are influenced by the skip-gram model from natural language processing. Most of them use a contrastive objective to train an encoder, forcing the embeddings of similar pairs to be close and the embeddings of negative samples to be far apart. A key to the success of such contrastive learning methods is how positive and negative samples are drawn. While negative samples generated by straightforward random sampling are often satisfactory, how to draw positive samples remains a hot topic. In this paper, we propose SCE for unsupervised network embedding, using only negative samples for training. Our method is based on a new contrastive objective inspired by the well-known sparsest cut problem. To solve the underlying optimization problem, we introduce a Laplacian smoothing trick, which uses graph convolutional operators as low-pass filters for smoothing node representations. The resulting model consists of a GCN-type structure as the encoder and a simple loss function. Notably, our model does not use positive samples but only negative samples for training, which not only makes the implementation and tuning much easier, but also reduces the training time significantly. Finally, extensive experimental studies on real-world data sets are conducted. The results clearly demonstrate the advantages of our new model in both accuracy and scalability compared to strong baselines such as GraphSAGE, G2G and DGI.
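    A loose sketch of the two ingredients named here, Laplacian smoothing via a normalized-adjacency low-pass filter and a negatives-only objective, is given below (assuming networkx/numpy). The filter, feature sizes, and pair-distance loss are illustrative assumptions, not the authors' exact SCE model or loss.

    ```python
    # Loose sketch (not the authors' exact model): Laplacian smoothing as a
    # low-pass graph filter, plus a toy negatives-only objective that pushes
    # randomly sampled (assumed-dissimilar) node pairs apart.
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)
    G = nx.karate_club_graph()
    n = G.number_of_nodes()

    # Symmetrically normalized adjacency with self-loops: a low-pass filter.
    A = nx.to_numpy_array(G) + np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

    X = rng.normal(size=(n, 16))  # random initial node features
    H = A_hat @ (A_hat @ X)       # two rounds of Laplacian smoothing

    # Negatives-only score: sampled pairs should end up far apart.
    neg_pairs = rng.integers(0, n, size=(50, 2))
    loss = -np.mean(np.linalg.norm(
        H[neg_pairs[:, 0]] - H[neg_pairs[:, 1]], axis=1))
    print(loss)
    ```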

    Sparsest Cut on Bounded Treewidth Graphs: Algorithms and Hardness Results

    We give a 2-approximation algorithm for Non-Uniform Sparsest Cut that runs in time n^{O(k)}, where k is the treewidth of the graph. This improves on the previous 2^{2^k}-approximation in time poly(n) · 2^{O(k)} due to Chlamtáč et al. To complement this algorithm, we show the following hardness results: if the Non-Uniform Sparsest Cut problem has a ρ-approximation for series-parallel graphs (where ρ ≥ 1), then the Max Cut problem has an algorithm with approximation factor arbitrarily close to 1/ρ. Hence, even for such restricted graphs (which have treewidth 2), the Sparsest Cut problem is NP-hard to approximate better than 17/16 − ε for ε > 0; assuming the Unique Games Conjecture, the hardness becomes 1/α_GW − ε. For graphs with large (but constant) treewidth, we show a hardness result of 2 − ε assuming the Unique Games Conjecture. Our algorithm rounds a linear program based on (a subset of) the Sherali-Adams lift of the standard Sparsest Cut LP. We show that even for treewidth-2 graphs, the LP has an integrality gap close to 2 even after polynomially many rounds of Sherali-Adams. Hence our approach cannot be improved even on such restricted graphs without using a stronger relaxation.
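    For reference, the objective being approximated can be evaluated exactly on tiny instances. The sketch below (assuming networkx) brute-forces the uniform sparsest cut ratio cut(S)/(|S|·|V\S|) on a small treewidth-2 graph; the paper's Sherali-Adams LP rounding, and the non-uniform demand version, are not reproduced.

    ```python
    # Brute-force baseline for the uniform sparsest cut objective
    # cut(S) / (|S| * |V \ S|); exponential time, tiny graphs only.
    from itertools import combinations
    import networkx as nx

    def sparsest_cut_brute(G):
        nodes = list(G.nodes)
        n = len(nodes)
        best_ratio, best_S = float("inf"), None
        for k in range(1, n // 2 + 1):
            for S in combinations(nodes, k):
                S = set(S)
                ratio = nx.cut_size(G, S) / (len(S) * (n - len(S)))
                if ratio < best_ratio:
                    best_ratio, best_S = ratio, S
        return best_ratio, best_S

    G = nx.ladder_graph(4)  # a small treewidth-2 example
    print(sparsest_cut_brute(G))
    ```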

    CLIMP: Clustering Motifs via Maximal Cliques with Parallel Computing Design.

    A set of conserved binding sites recognized by a transcription factor is called a motif; motifs can be found by many comparative-genomics applications that identify over-represented segments. When numerous putative motifs are predicted from a collection of genome-wide data, their similarity data can be represented as a large graph in which the motifs are connected to one another. An efficient clustering algorithm is then desired for grouping motifs that belong together, separating motifs that belong to different groups, and even removing spurious ones. In this work we propose a new motif clustering algorithm, CLIMP, which is based on maximal cliques and sped up through parallelization. Using a synthetic motif dataset from the JASPAR database, a set of putative motifs from a phylogenetic footprinting dataset, and a set of putative motifs from a ChIP dataset, we compare the performance of CLIMP with two other high-performance algorithms. The results demonstrate that CLIMP mostly outperforms the two algorithms on the three datasets for motif clustering, so it can be a useful complement to the clustering procedures in some genome-wide motif prediction pipelines. CLIMP is available at http://sqzhang.cn/climp.html.
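    The clique-based clustering idea can be sketched compactly. The example below (assuming networkx, with hypothetical similarity scores and threshold) builds a motif similarity graph, enumerates maximal cliques, and greedily keeps non-overlapping ones as clusters; CLIMP's actual scoring, merging, and parallel design are not reproduced.

    ```python
    # Sketch with hypothetical similarity scores: threshold pairwise motif
    # similarities into a graph, enumerate maximal cliques, and greedily
    # keep non-overlapping cliques as clusters.
    import networkx as nx

    sim = {("m1", "m2"): 0.9, ("m1", "m3"): 0.8, ("m2", "m3"): 0.85,
           ("m4", "m5"): 0.7, ("m3", "m4"): 0.2}  # toy scores (assumption)
    THRESHOLD = 0.5                               # assumed cutoff

    G = nx.Graph((u, v) for (u, v), s in sim.items() if s >= THRESHOLD)
    cliques = sorted(nx.find_cliques(G), key=len, reverse=True)

    clusters, used = [], set()
    for c in cliques:
        if used.isdisjoint(c):  # greedy: skip cliques that overlap a cluster
            clusters.append(sorted(c))
            used.update(c)
    print(clusters)  # e.g. [['m1', 'm2', 'm3'], ['m4', 'm5']]
    ```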