16,693 research outputs found

    Deleting and Testing Forbidden Patterns in Multi-Dimensional Arrays

    Understanding the local behaviour of structured multi-dimensional data is a fundamental problem in various areas of computer science. As the amount of data is often huge, it is desirable to obtain sublinear time algorithms, and specifically property testers, to understand local properties of the data. We focus on the natural local problem of testing pattern freeness: given a large $d$-dimensional array $A$ and a fixed $d$-dimensional pattern $P$ over a finite alphabet, we say that $A$ is $P$-free if it does not contain a copy of the forbidden pattern $P$ as a consecutive subarray. The distance of $A$ to $P$-freeness is the fraction of entries of $A$ that need to be modified to make it $P$-free. For any $\epsilon \in [0,1]$ and any large enough pattern $P$ over any alphabet, other than a very small set of exceptional patterns, we design a tolerant tester that distinguishes between the case that the distance is at least $\epsilon$ and the case that it is at most $a_d \epsilon$, with query complexity and running time $c_d \epsilon^{-1}$, where $a_d < 1$ and $c_d$ depend only on $d$. To analyze the testers we establish several combinatorial results, including the following $d$-dimensional modification lemma, which might be of independent interest: for any large enough pattern $P$ over any alphabet (excluding a small set of exceptional patterns for the binary case), and any array $A$ containing a copy of $P$, one can delete this copy by modifying one of its locations without creating new $P$-copies in $A$. Our results address an open question of Fischer and Newman, who asked whether there exist efficient testers for properties related to tight substructures in multi-dimensional structured data. They serve as a first step towards a general understanding of local properties of multi-dimensional arrays, as any such property can be characterized by a fixed family of forbidden patterns.
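    As a rough illustration of how the modification lemma powers such a tester, here is a minimal one-dimensional, one-sided sketch (the function name and the sample-size constant are hypothetical, and the paper's actual tester is tolerant and handles general $d$). Since deleting a copy never creates new ones, an array that is $\epsilon$-far from $P$-free must contain at least $\epsilon n$ copies of $P$, so a random length-$|P|$ window starts a copy with probability about $\epsilon$, and $O(1/\epsilon)$ samples find one with constant probability:

```python
import random

def naive_pattern_tester(A, P, eps, c=3):
    """One-sided 1-D sketch of a P-freeness tester (hypothetical names).

    By the modification lemma, an eps-far array contains >= eps*n copies
    of P, so each random window starts a copy with probability about eps,
    and O(1/eps) samples suffice to catch one with constant probability.
    """
    n, k = len(A), len(P)
    for _ in range(int(c / eps) + 1):
        i = random.randrange(n - k + 1)
        if A[i:i + k] == P:
            return False  # found a forbidden copy: reject
    return True  # no copy seen: A is probably close to P-free
```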

    Can fusion coefficients be calculated from the depth rule?

    The depth rule is a level truncation of tensor product coefficients expected to be sufficient for the evaluation of fusion coefficients. We reformulate the depth rule in a precise way, and show how, in principle, it can be used to calculate fusion coefficients. However, we argue that the computation of the depth itself, in terms of which the constraints on tensor product coefficients are formulated, is problematic. Indeed, the elements of the basis of states convenient for calculating tensor product coefficients do not have a well-defined depth! We proceed by showing how one can calculate the depth in an 'approximate' way and derive accurate lower bounds for the minimum level at which a coupling appears. It turns out that this method yields exact results for $\widehat{su}(3)$ and constitutes an efficient and simple algorithm for computing $\widehat{su}(3)$ fusion coefficients. Comment: 27 pages.
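    The abstract's theme, fusion as a level-truncated tensor product, already has a standard closed form in the simpler $\widehat{su}(2)_k$ case, which can serve as an illustration (this example is not from the paper, which treats $\widehat{su}(3)$): the finite $su(2)$ selection rules survive, supplemented by the level cutoff $c \le 2k - a - b$.

```python
def su2_fusion(k, a, b, c):
    """Fusion coefficient N_{ab}^c of affine su(2) at level k.

    Weights a, b, c are Dynkin labels (twice the spin), 0 <= a, b, c <= k.
    This is the finite su(2) tensor product rule truncated by the level:
    nonzero iff |a-b| <= c <= min(a+b, 2k-a-b) and a+b+c is even.
    (Standard textbook formula, shown here for illustration only.)
    """
    if (a + b + c) % 2 != 0:
        return 0
    return 1 if abs(a - b) <= c <= min(a + b, 2 * k - a - b) else 0
```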

    Space-efficient detection of unusual words

    Detecting all the strings that occur in a text more frequently or less frequently than expected according to an IID or a Markov model is a basic problem in string mining, yet current algorithms are based on data structures that are either space-inefficient or incur large slowdowns, and current implementations cannot scale to genomes or metagenomes in practice. In this paper we engineer an algorithm based on the suffix tree of a string to use just a small data structure built on the Burrows-Wheeler transform, and a stack of $O(\sigma^2 \log^2 n)$ bits, where $n$ is the length of the string and $\sigma$ is the size of the alphabet. The size of the stack is $o(n)$ except for very large values of $\sigma$. We further improve the algorithm by removing its time dependency on $\sigma$, by reporting only a subset of the maximal repeats and of the minimal rare words of the string, and by detecting and scoring candidate under-represented strings that do not occur in the string. Our algorithms are practical and work directly on the BWT, thus they can be immediately applied to a number of existing datasets that are available in this form, returning this string mining problem to a manageable scale. Comment: arXiv admin note: text overlap with arXiv:1502.0637
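    To make the "more or less frequent than expected" criterion concrete, here is a quadratic-time sketch of an IID score for a single word (the function name is hypothetical, and it scans the raw text rather than using the BWT-based machinery the paper engineers):

```python
from collections import Counter
from math import sqrt, prod  # math.prod requires Python 3.8+

def iid_zscore(text, w):
    """Score how over/under-represented substring w is in text,
    under an IID character model estimated from the text itself.
    (Illustrative only; the paper works on the BWT, not the raw text.)
    """
    n, m = len(text), len(w)
    freq = Counter(text)
    p = prod(freq[c] / n for c in w)    # IID probability of w
    positions = n - m + 1
    expected = positions * p
    observed = sum(text.startswith(w, i) for i in range(positions))
    var = positions * p * (1 - p)       # binomial approximation,
                                        # ignoring overlap correlations
    return (observed - expected) / sqrt(var) if var > 0 else 0.0
```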

    Nonlinear optics and light localization in periodic photonic lattices

    We review the recent developments in the field of photonic lattices emphasizing their unique properties for controlling linear and nonlinear propagation of light. We draw some important links between optical lattices and photonic crystals pointing towards practical applications in optical communications and computing, beam shaping, and bio-sensing. Comment: to appear in Journal of Nonlinear Optical Physics & Materials (JNOPM).
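    For context (not stated in the abstract itself), a standard model for nonlinear light propagation in a one-dimensional waveguide lattice of the kind reviewed here is the discrete nonlinear Schrödinger equation, where $E_n$ is the field in the $n$-th waveguide, $C$ the coupling between neighbours, and $\gamma$ the Kerr nonlinearity:

```latex
% Discrete nonlinear Schrödinger (DNLS) model for a 1-D waveguide array
i\,\frac{\mathrm{d}E_n}{\mathrm{d}z}
  + C\left(E_{n+1} + E_{n-1}\right)
  + \gamma\,|E_n|^2 E_n = 0
```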

    Generating constrained random graphs using multiple edge switches

    The generation of random graphs using edge swaps provides a reliable method to draw uniformly random samples of sets of graphs respecting some simple constraints, e.g. degree distributions. However, in general, it is not necessarily possible to access all graphs obeying some given constraints through a classical switching procedure calling on pairs of edges. We therefore propose to get round this issue by generalizing this classical approach through the use of higher-order edge switches. This method, which we denote by "k-edge switching", makes it possible to progressively improve the covered portion of a set of constrained graphs, thereby providing an increasing, asymptotically certain confidence on the statistical representativeness of the obtained sample. Comment: 15 pages.
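    As a point of reference, here is a sketch of the classical pairwise (2-edge) switch that "k-edge switching" generalizes, for a simple undirected graph given as an edge list (the function name is hypothetical, and the k-edge variant itself is not shown):

```python
import random

def double_edge_swap(edges, n_swaps=1000):
    """Classical 2-edge switch: repeatedly pick two edges (a,b), (c,d)
    and rewire them to (a,d), (c,b). Degrees are preserved; proposed
    swaps that would create self-loops or multi-edges are rejected.
    (The paper generalizes this move to switches on k edges at once.)
    """
    edges = [tuple(e) for e in edges]
    edge_set = set(edges)
    for _ in range(n_swaps):
        (a, b), (c, d) = random.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            continue  # shared endpoint: would create a self-loop
        if (a, d) in edge_set or (d, a) in edge_set \
           or (c, b) in edge_set or (b, c) in edge_set:
            continue  # would create a multi-edge
        i, j = edges.index((a, b)), edges.index((c, d))
        edges[i], edges[j] = (a, d), (c, b)
        edge_set.difference_update({(a, b), (c, d)})
        edge_set.update({(a, d), (c, b)})
    return edges
```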