21 research outputs found

    Hypergraph Artificial Benchmark for Community Detection (h-ABCD)

    Full text link
    The Artificial Benchmark for Community Detection (ABCD) graph is a recently introduced random graph model with community structure and power-law distributions for both degrees and community sizes. The model generates graphs with properties similar to those of the well-known LFR model, and its main parameter can be tuned to mimic its counterpart in the LFR model, the mixing parameter. In this paper, we introduce a hypergraph counterpart of the ABCD model, h-ABCD, which produces random hypergraphs whose ground-truth community sizes and degrees follow power-law distributions. As in the original ABCD, the new h-ABCD model can produce hypergraphs with various levels of noise. More importantly, the model is flexible and can mimic any desired level of homogeneity of the hyperedges that fall within one community. As a result, it can serve as a suitable synthetic playground for analyzing and tuning hypergraph community detection algorithms. Comment: 18 pages, 6 figures, 2 tables
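    The generator's two key ingredients are degree and community-size sequences drawn from truncated power laws. A minimal sketch of such sampling via inverse weighting (function and parameter names are illustrative, not h-ABCD's actual interface):

```python
import random

def sample_power_law(n, tau, x_min, x_max, rng):
    """Draw n integers in [x_min, x_max] with P(x) proportional to x^(-tau)."""
    values = list(range(x_min, x_max + 1))
    weights = [v ** (-tau) for v in values]   # unnormalized power-law weights
    return rng.choices(values, weights=weights, k=n)

rng = random.Random(42)
# Illustrative exponents and cutoffs, not the model's defaults.
degrees = sample_power_law(1000, tau=2.5, x_min=5, x_max=50, rng=rng)
community_sizes = sample_power_law(30, tau=1.5, x_min=20, x_max=100, rng=rng)
```

    In the actual models, the two sequences are additionally adjusted so that total degree and total community size are consistent; that bookkeeping is omitted here.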

    Properties and Performance of the ABCDe Random Graph Model with Community Structure

    Full text link
    In this paper, we investigate the properties and performance of synthetic random graph models with built-in community structure. Such models are important for evaluating and tuning community detection algorithms, which are unsupervised by nature. We propose ABCDe, a multi-threaded implementation of the ABCD (Artificial Benchmark for Community Detection) graph generator. We discuss the implementation details of the algorithm and compare it with both the previously available sequential version of the ABCD model and the parallel implementation of the standard and extensively used LFR (Lancichinetti--Fortunato--Radicchi) generator. We show that ABCDe is more than ten times faster and scales better than the parallel implementation of LFR provided in NetworKit. Moreover, the algorithm is not only faster: random graphs generated by ABCDe have properties similar to the ones generated by the original LFR algorithm, while the parallelized NetworKit implementation of LFR produces graphs with noticeably different characteristics. Comment: 15 pages, 10 figures, 1 table
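    A common pattern behind such multi-threaded generators is splitting the sampling work across workers, each with its own independently seeded RNG, so that the output stays reproducible regardless of scheduling. A schematic sketch of that pattern (ABCDe's actual implementation details differ; names here are illustrative):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def sample_degrees(seed, n, tau=2.5, lo=5, hi=50):
    """Sample n power-law degrees with a worker-local RNG.
    A distinct seed per worker makes each chunk independent and repeatable."""
    rng = random.Random(seed)
    values = list(range(lo, hi + 1))
    weights = [v ** (-tau) for v in values]
    return rng.choices(values, weights=weights, k=n)

# Four chunks of 2500 samples, one seed per chunk.
chunks = [(seed, 2500) for seed in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = list(pool.map(lambda args: sample_degrees(*args), chunks))
degrees = [d for part in parts for d in part]
```

    Because `pool.map` preserves chunk order and each chunk's RNG stream is fixed by its seed, two runs produce the identical degree sequence.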

    Kilometer range filamentation

    No full text
    We demonstrate for the first time the possibility of generating long plasma channels at distances of up to 1 km, using the terawatt femtosecond T&T laser facility. The plasma density was optimized by adjusting the chirp, the focusing, and the beam diameter. The interaction of filaments with transparent and opaque targets was studied.

    Large multiservice loss models and applications to ATM networks

    No full text
    The problem of estimating call blocking probabilities in ATM networks is addressed in this thesis. We develop a new, two-step iterated framework for call acceptance control (CAC). In the first step, we define a new variable bit rate (VBR) traffic descriptor called the effective rate, and in the second step we use a known effective bandwidth technique to estimate cell loss. This approach yields decoupled estimators at the call level, so that loss-system models can be used to perform network analysis. Our work on loss systems is divided into three parts: single-link problems, reservation schemes, and network problems. In the single-link context, we generalize existing asymptotic approximation formulae for blocking probabilities and propose a uniform estimate under loadings ranging from light up to critical. We present the salient features of commonly used reservation schemes, proposing and reviewing ways to estimate blocking probabilities in each case. For networks, we provide an overview of classical techniques for evaluating blocking probabilities, such as fixed-point methods. We propose a novel fixed-point technique for large-capacity systems which yields a dramatic reduction in computational complexity. We also analyze loss networks from an analytic point of view using the Laplace method and a change-of-probability-law technique. We obtain asymptotic formulae for all loading conditions and a number of asymptotic results regarding network behavior. Many numerical examples are provided, as well as a model example in which we illustrate how our asymptotic formulae can be used to perform network optimization.
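    The classical exact baseline for single-link blocking, which asymptotic approximations of this kind are usually measured against, is the Erlang B formula. A sketch of its standard numerically stable recursion:

```python
def erlang_b(capacity, offered_load):
    """Erlang B blocking probability for a link with `capacity` circuits
    and `offered_load` erlangs of Poisson traffic, via the recursion
    B(0) = 1,  B(c) = a*B(c-1) / (c + a*B(c-1))."""
    b = 1.0
    for c in range(1, capacity + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# Blocking on a 10-circuit link offered 8 erlangs.
p_block = erlang_b(10, 8.0)
```

    The recursion avoids the overflow-prone factorials of the closed form, which matters precisely in the large-capacity regime the thesis targets.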

    Artificial benchmark for community detection with outliers (ABCD+o)

    No full text
    The Artificial Benchmark for Community Detection graph (ABCD) is a random graph model with community structure and power-law distributions for both degrees and community sizes. The model generates graphs with properties similar to those of the well-known LFR model, and its main parameter ξ can be tuned to mimic its counterpart in the LFR model, the mixing parameter μ. In this paper, we extend the ABCD model to include potential outliers. We perform exploratory experiments on both the new ABCD+o model and a real-world network to show that outliers exhibit some distinguishable properties. This suggests that our new model may serve as a benchmark for outlier detection algorithms.
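    The parameters ξ and μ both quantify how many connections leave their own community. As rough intuition only, here is an edge-level variant measured on a given partition (LFR's μ is properly defined per node, as the fraction of a node's degree going outside its community; this simplification is illustrative):

```python
def edge_mixing(edges, community):
    """Fraction of edges whose endpoints lie in different communities."""
    between = sum(1 for u, v in edges if community[u] != community[v])
    return between / len(edges)

# Toy graph: a triangle community "a" joined to a pair community "b".
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (2, 3)]
community = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b"}
mu_hat = edge_mixing(edges, community)   # one of five edges crosses
```

    Low values mean well-separated communities; values near the random baseline mean the partition carries little signal, which is the regime where outliers become hard to distinguish.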

    Self-Synchronization of Huffman Codes

    No full text
    Variable-length binary codes have been frequently used for communications since Huffman’s important paper on constructing minimum average length codes. One drawback of variable-length codes is the potential loss of synchronization in the presence of channel errors. However, many variable-length codes seem to possess a “self-synchronization” property that lets them recover from bit errors. In particular, for some variable-length codes there exists a certain binary string (not necessarily a codeword) that automatically resynchronizes the code. That is, if a transmitted sequence of bits is corrupted by one or more bit errors, then as soon as the receiver, by random chance, correctly detects a self-synchronizing string, it can continue properly parsing the bit sequence into codewords. Most commonly used binary prefix codes, including Huffman codes, are “complete,” in the sense that the vertices in their decoding trees are either leaves or have two children. An open question has been…

    Clustering via hypergraph modularity.

    No full text
    Despite the fact that many important problems (including clustering) can be described using hypergraphs, theoretical foundations as well as practical algorithms that use hypergraphs are not yet well developed. In this paper, we propose a hypergraph modularity function that generalizes its well-established and widely used graph counterpart, a measure of how clustered a network is. In order to define it properly, we generalize the Chung-Lu model for graphs to hypergraphs. We then provide the theoretical foundations to search for an optimal solution with respect to our hypergraph modularity function. A simple heuristic algorithm is described and applied to a few illustrative examples. We show that using a strict version of our proposed modularity function often leads to a solution in which fewer hyperedges are cut compared to optimizing the modularity of the 2-section graph of the hypergraph.
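    The graph counterpart being generalized is Newman's modularity, Q = Σ_c (e_c/m − (vol_c/2m)²). A compact reference implementation for an undirected simple graph with a fixed partition (variable names are illustrative):

```python
def modularity(edges, community):
    """Newman's modularity: for each community c, the fraction of edges
    inside c minus the expected fraction under the degree-preserving null model."""
    m = len(edges)
    deg, internal = {}, {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        if community[u] == community[v]:
            c = community[u]
            internal[c] = internal.get(c, 0) + 1
    vol = {}                                   # sum of degrees per community
    for node, d in deg.items():
        c = community[node]
        vol[c] = vol.get(c, 0) + d
    return sum(internal.get(c, 0) / m - (vol[c] / (2 * m)) ** 2 for c in vol)

# Two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
community = {0: "x", 1: "x", 2: "x", 3: "y", 4: "y", 5: "y"}
q = modularity(edges, community)   # 6/7 - 1/2 = 5/14
```

    The hypergraph version in the paper replaces the Chung-Lu null model term with its hypergraph generalization, so that hyperedges, not just pairwise edges, are scored as internal or cut.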