
    Consistency of Spectral Hypergraph Partitioning under Planted Partition Model

    Hypergraph partitioning lies at the heart of a number of problems in machine learning and network sciences. Many algorithms for hypergraph partitioning have been proposed that extend standard approaches for graph partitioning to the case of hypergraphs. However, theoretical aspects of such methods have seldom received attention in the literature as compared to the extensive studies on the guarantees of graph partitioning. For instance, consistency results of spectral graph partitioning under the stochastic block model are well known. In this paper, we present a planted partition model for sparse random non-uniform hypergraphs that generalizes the stochastic block model. We derive an error bound for a spectral hypergraph partitioning algorithm under this model using matrix concentration inequalities. To the best of our knowledge, this is the first consistency result related to partitioning non-uniform hypergraphs. Comment: 35 pages, 2 figures, 1 table
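    The abstract does not spell out the algorithm, but a common form of spectral hypergraph partitioning builds a normalized hypergraph operator from the incidence matrix and clusters its leading eigenvectors. The sketch below illustrates that generic recipe under a Zhou-style normalization assumption; the function name, the choice of operator, and the k-means step are illustrative choices, not the paper's exact method.

```python
# A minimal sketch of spectral hypergraph partitioning (assumed Zhou-style
# normalization); illustrative only, not the algorithm analysed in the paper.
import numpy as np
from sklearn.cluster import KMeans

def spectral_hypergraph_partition(H, k, edge_weights=None):
    """Partition the n vertices of a hypergraph into k clusters.

    H : (n, m) 0/1 incidence matrix (vertices x hyperedges).
    k : number of clusters to recover.
    """
    n, m = H.shape
    w = np.ones(m) if edge_weights is None else np.asarray(edge_weights)

    d_v = H @ w                      # weighted vertex degrees
    d_e = H.sum(axis=0)              # hyperedge sizes
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d_v, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(d_e, 1e-12))

    # Normalized hypergraph "adjacency" operator built from the incidence matrix.
    A = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt

    # The leading k eigenvectors approximately encode the planted partition.
    eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    X = eigvecs[:, -k:]

    # Cluster the spectral embedding of the vertices.
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
```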

    Isoperimetric Partitioning: A New Algorithm for Graph Partitioning

    Temporal structure in skilled, fluent action exists at several nested levels. At the largest scale considered here, short sequences of actions that are planned collectively in prefrontal cortex appear to be queued for performance by a cyclic competitive process that operates in concert with a parallel analog representation that implicitly specifies the relative priority of elements of the sequence. At an intermediate scale, single acts, like reaching to grasp, depend on coordinated scaling of the rates at which many muscles shorten or lengthen in parallel. To ensure success of acts such as catching an approaching ball, such parallel rate scaling, which appears to be one function of the basal ganglia, must be coupled to perceptual variables such as time-to-contact. At a finer scale, within each act, desired rate scaling can be realized only if precisely timed muscle activations first accelerate and then decelerate the limbs, to ensure that muscle length changes do not under- or over-shoot the amounts needed for precise acts. Each context of action may require a different timed muscle activation pattern than similar contexts. Because context differences that require different treatment cannot be known in advance, a formidable adaptive engine, the cerebellum, is needed to amplify differences within, and continuously search, a vast parallel signal flow, in order to discover contextual "leading indicators" of when to generate distinctive patterns of analog signals. From some parts of the cerebellum, such signals control muscles. But a recent model shows how the lateral cerebellum may serve the competitive queuing system (frontal cortex) as a repository of quickly accessed long-term sequence memories. Thus different parts of the cerebellum may use the same adaptive engine design to serve the lowest and highest of the three levels of temporal structure treated. If so, no one-to-one mapping exists between levels of temporal structure and major parts of the brain. Finally, recent data cast doubt on network-delay models of cerebellar adaptive timing. National Institute of Mental Health (R01 DC02582)

    Graph Partitioning Induced Phase Transitions

    We study the percolation properties of graph partitioning on random regular graphs with $N$ vertices of degree $k$. Optimal graph partitioning is directly related to optimal attack and immunization of complex networks. We find that for any partitioning process (even if non-optimal) that partitions the graph into equal-sized connected components (clusters), the system undergoes a percolation phase transition at $f = f_c = 1 - 2/k$, where $f$ is the fraction of edges removed to partition the graph. For optimal partitioning, at the percolation threshold, we find $S \sim N^{0.4}$, where $S$ is the size of the clusters, and $\ell \sim N^{0.25}$, where $\ell$ is their diameter. Additionally, we find that $S$ undergoes multiple non-percolation transitions for $f < f_c$.
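    A quick worked instance of the quoted threshold, using only the formula stated in the abstract:

```latex
% Threshold f_c = 1 - 2/k for a random k-regular graph.
\[
  f_c = 1 - \frac{2}{k}, \qquad
  k = 3 \;\Rightarrow\; f_c = \tfrac{1}{3}, \qquad
  k = 4 \;\Rightarrow\; f_c = \tfrac{1}{2}.
\]
% As k grows, f_c -> 1: denser regular graphs require removing a larger
% fraction of edges before they fragment into equal-sized clusters.
```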

    Window-based Streaming Graph Partitioning Algorithm

    In recent years, the scale of graph datasets has increased to such a degree that a single machine cannot efficiently process large graphs. Efficient graph partitioning is therefore necessary for such large-graph applications. Traditional graph partitioning generally loads the whole graph into memory before partitioning; this is not only time-consuming but also creates memory bottlenecks. These issues of memory limitation and time complexity can be resolved using stream-based graph partitioning. A streaming graph partitioning algorithm reads each vertex once and assigns it to a partition on arrival; this is also called a one-pass algorithm. This paper proposes an efficient window-based streaming graph partitioning algorithm called WStream. The WStream algorithm is an edge-cut partitioning algorithm, which distributes vertices among the partitions. Our results suggest that the WStream algorithm is able to partition large graph data efficiently while keeping the load balanced across partitions and communication to a minimum. Evaluation results with real workloads also demonstrate the effectiveness of the proposed algorithm, which achieves a significant reduction in load imbalance and edge-cut across datasets of different sizes.
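    The abstract does not give WStream's scoring or selection rules, so the following is only a generic sketch of window-based streaming vertex partitioning: buffer a small window of incoming vertices, then greedily assign the buffered vertex with the strongest affinity to some partition. The linear-deterministic-greedy-style score, the window policy, and all names below are assumptions for illustration, not the authors' algorithm.

```python
# A minimal sketch of a window-based, one-pass (edge-cut) streaming partitioner.
# Illustrative only; WStream's actual heuristics are not reproduced here.
from collections import deque

def window_stream_partition(vertex_stream, num_parts, capacity, window_size=100):
    """vertex_stream yields (vertex, neighbors) pairs; returns {vertex: partition}."""
    assignment = {}                 # vertex -> partition id
    loads = [0] * num_parts         # current size of each partition
    window = deque()                # buffered (vertex, neighbors) pairs

    def score(neighbors, p):
        # Reward partitions holding many neighbors, penalize nearly full ones.
        shared = sum(1 for u in neighbors if assignment.get(u) == p)
        return shared * (1.0 - loads[p] / capacity)

    def assign_best_from_window():
        # Pick the buffered vertex with the strongest affinity to any partition.
        best = max(window,
                   key=lambda item: max(score(item[1], p) for p in range(num_parts)))
        window.remove(best)
        v, nbrs = best
        p = max(range(num_parts), key=lambda q: (score(nbrs, q), -loads[q]))
        assignment[v] = p
        loads[p] += 1

    for v, nbrs in vertex_stream:
        window.append((v, list(nbrs)))
        if len(window) >= window_size:
            assign_best_from_window()
    while window:                   # drain the buffer at the end of the stream
        assign_best_from_window()
    return assignment

# Toy usage: split a 4-vertex path graph into 2 partitions.
stream = [(1, [2]), (2, [1, 3]), (3, [2, 4]), (4, [3])]
parts = window_stream_partition(stream, num_parts=2, capacity=2, window_size=2)
```

    The point of the window is to defer hard decisions on vertices whose neighbors have not yet been seen, which is the general intuition behind window-based variants of one-pass partitioning.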