    Permutation groups, simple groups and sieve methods

    We show that the number of integers n ≤ x which occur as indices of subgroups of nonabelian finite simple groups, excluding that of A_{n-1} in A_n, is ∼ hx/log x, for some given constant h. This might be regarded as a noncommutative analogue of the Prime Number Theorem (which counts indices n ≤ x of subgroups of abelian simple groups). We conclude that for most positive integers n, the only quasiprimitive permutation groups of degree n are S_n and A_n in their natural action. This extends a similar result for primitive permutation groups obtained by Cameron, Neumann and Teague in 1982. Our proof combines group-theoretic and number-theoretic methods. In particular, we use the classification of finite simple groups, and we also apply sieve methods to estimate the size of some interesting sets of primes.

    A sufficient condition for a number to be the order of a nonsingular derivation of a Lie algebra

    A study of the set N_p of positive integers which occur as orders of nonsingular derivations of finite-dimensional non-nilpotent Lie algebras of characteristic p>0 was initiated by Shalev and continued by the present author. The main goal of this paper is to show the abundance of elements of N_p. Our main result shows that any divisor n of q-1, where q is a power of p, such that n ≥ (p-1)^{1/p} (q-1)^{1-1/(2p)}, belongs to N_p. This extends its special case for p=2, which was proved in a previous paper by a different method. Comment: 10 pages. This version has been revised according to a referee's suggestions. The additions include a discussion of the (lower) density of the set N_p, and the results of more extensive machine computations. Note that the title has also changed. To appear in Israel J. Math.
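
    To get a feel for the bound, one can enumerate, for a small prime power q, which divisors of q-1 it certifies. The following sketch only evaluates the stated inequality (the helper name and example values are mine), and is not a reconstruction of the paper's arguments:

```python
# Minimal sketch: list the divisors n of q - 1 that satisfy the sufficient
# condition n >= (p-1)^(1/p) * (q-1)^(1 - 1/(2p)) from the abstract.
# It only checks the inequality; it does not exhibit a nonsingular derivation.

def certified_divisors(p, q):
    """Divisors n of q - 1 meeting the bound, for q a power of the prime p."""
    bound = (p - 1) ** (1.0 / p) * (q - 1) ** (1.0 - 1.0 / (2 * p))
    divisors = [n for n in range(1, q) if (q - 1) % n == 0]
    return [n for n in divisors if n >= bound]

if __name__ == "__main__":
    # Example: p = 3, q = 3^4 = 81, so q - 1 = 80.
    print(certified_divisors(3, 81))  # only the largest divisors pass the bound
```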

    Adposition and Case Supersenses v2.5: Guidelines for English

    This document offers a detailed linguistic description of SNACS (Semantic Network of Adposition and Case Supersenses; Schneider et al., 2018), an inventory of 50 semantic labels ("supersenses") that characterize the use of adpositions and case markers at a somewhat coarse level of granularity, as demonstrated in the STREUSLE corpus (https://github.com/nert-gu/streusle/; version 4.3 tracks guidelines version 2.5). Though the SNACS inventory aspires to be universal, this document is specific to English; documentation for other languages will be published separately. Version 2 is a revision of the supersense inventory proposed for English by Schneider et al. (2015, 2016) (henceforth "v1"), which in turn was based on previous schemes. The present inventory was developed after extensive review of the v1 corpus annotations for English, plus previously unanalyzed genitive case possessives (Blodgett and Schneider, 2018), as well as consideration of adposition and case phenomena in Hebrew, Hindi, Korean, and German. Hwang et al. (2017) present the theoretical underpinnings of the v2 scheme. Schneider et al. (2018) summarize the scheme, its application to English corpus data, and an automatic disambiguation task.

    Subgraphs and network motifs in geometric networks

    Many real-world networks describe systems in which interactions decay with the distance between nodes. Examples include systems constrained in real space such as transportation and communication networks, as well as systems constrained in abstract spaces such as multivariate biological or economic datasets and models of social networks. These networks often display network motifs: subgraphs that recur in the network much more often than in randomized networks. To understand the origin of the network motifs in these networks, it is important to study the subgraphs and network motifs that arise solely from geometric constraints. To address this, we analyze geometric network models, in which nodes are arranged on a lattice and edges are formed with a probability that decays with the distance between nodes. We present analytical solutions for the numbers of all 3- and 4-node subgraphs, in both directed and non-directed geometric networks. We also analyze geometric networks with arbitrary degree sequences, and models with a field that biases for directed edges in one direction. Scaling rules for the dependence of subgraph numbers on system size, lattice dimension and interaction range are given. Several invariant measures are found, such as the ratio of feedback and feed-forward loops, which do not depend on system size, dimension or connectivity function. We find that network motifs in many real-world networks, including social networks and neuronal networks, are not captured solely by these geometric models. This is in line with recent evidence that biological network motifs were selected as basic circuit elements with defined information-processing functions. Comment: 9 pages, 6 figures.
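
    As a toy illustration of the class of models described here (not the authors' code), the sketch below places nodes on a one-dimensional ring lattice, draws directed edges with a probability that decays exponentially with lattice distance, and brute-force counts two 3-node subgraph types; the decay function, parameter names and values are assumptions made for the example:

```python
import itertools
import math
import random

def geometric_digraph(n, beta, seed=0):
    """Directed graph on a 1-D ring lattice: edge u -> v is drawn with
    probability exp(-beta * d(u, v)), where d is the lattice distance."""
    rng = random.Random(seed)
    adj = [[False] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            if u == v:
                continue
            d = min(abs(u - v), n - abs(u - v))  # distance on the ring
            if rng.random() < math.exp(-beta * d):
                adj[u][v] = True
    return adj

def count_triads(adj):
    """Brute-force counts of feed-forward loops and directed 3-cycles."""
    n = len(adj)
    ffl = cyc = 0
    for x, y, z in itertools.permutations(range(n), 3):
        if adj[x][y] and adj[y][z] and adj[x][z]:
            ffl += 1
        if adj[x][y] and adj[y][z] and adj[z][x]:
            cyc += 1
    return ffl, cyc // 3  # each 3-cycle is visited under 3 rotations

if __name__ == "__main__":
    adj = geometric_digraph(n=60, beta=0.8)
    print(count_triads(adj))
```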

    Contextual Object Detection with a Few Relevant Neighbors

    A natural way to improve the detection of objects is to consider the contextual constraints imposed by the detection of additional objects in a given scene. In this work, we exploit the spatial relations between objects in order to improve detection capacity, as well as analyze various properties of the contextual object detection problem. To precisely calculate context-based probabilities of objects, we developed a model that examines the interactions between objects in an exact probabilistic setting, in contrast to previous methods that typically utilize approximations based on pairwise interactions. Such a scheme is facilitated by the realistic assumption that the existence of an object in any given location is influenced by only a few informative locations in space. Based on this assumption, we suggest a method for identifying these relevant locations and integrating them into a mostly exact calculation of probability based on their raw detector responses. This scheme is shown to improve detection results and provides unique insights about the process of contextual inference for object detection. We show that it is generally difficult to learn that a particular object reduces the probability of another, and that in cases when the context and detector strongly disagree, this learning becomes virtually impossible for the purposes of improving the results of an object detector. Finally, we demonstrate improved detection results through use of our approach as applied to the PASCAL VOC and COCO datasets.
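
    The computational idea of exact marginalization over a handful of relevant locations can be sketched generically. The toy below is not the authors' model: it assumes a small set of binary presence variables with an (unnormalized) joint prior and conditionally independent detector scores, and computes one posterior by brute-force enumeration:

```python
import itertools

def posterior_presence(prior, likelihood, scores, target):
    """Exact posterior P(object present at `target` | detector scores),
    marginalizing over the joint presence states of a few relevant locations.

    prior(states)    -> unnormalized joint weight of a tuple of 0/1 presences
    likelihood(s, x) -> P(detector score s | presence x) at one location
    scores           -> observed detector scores, one per relevant location
    """
    num = den = 0.0
    for states in itertools.product((0, 1), repeat=len(scores)):
        p = prior(states)  # normalization cancels in the posterior ratio
        for s, x in zip(scores, states):
            p *= likelihood(s, x)
        den += p
        if states[target] == 1:
            num += p
    return num / den

if __name__ == "__main__":
    # Toy co-occurrence prior over 3 locations: 0 and 1 tend to agree.
    def prior(states):
        return 0.2 if states[0] == states[1] else 0.05

    def likelihood(score, present):
        return score if present else 1.0 - score

    print(posterior_presence(prior, likelihood, scores=[0.7, 0.6, 0.3], target=0))
```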

    Etching of random solids: hardening dynamics and self-organized fractality

    When a finite volume of an etching solution comes in contact with a disordered solid, a complex dynamics of the solid-solution interface develops. Since only the weak parts are corroded, the solid surface hardens progressively. If the etchant is consumed in the chemical reaction, the corrosion dynamics slows down and stops spontaneously, leaving a fractal solid surface, which reveals the latent percolation criticality hidden in any random system. Here we introduce and study, both analytically and numerically, a simple model for this phenomenon. In this way we obtain a detailed description of the process in terms of percolation theory. In particular we explain the mechanism of hardening of the surface and connect it to Gradient Percolation. Comment: LaTeX, aipproc, 6 pages, 3 figures, Proceedings of the 6th Granada Seminar on Computational Physics.
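
    A minimal simulation of this kind of etching dynamics is straightforward. The sketch below is a toy in the spirit of the model (the update rules, lattice size and parameter values are assumptions, not the paper's definitions): each lattice site gets a random resistance, exposed sites weaker than the current etchant strength dissolve, and every dissolution consumes one unit of etchant until the corrosion stops:

```python
import random

def etch(width=60, height=120, volume=1000.0, n_etchant=1000.0, seed=1):
    """Toy etching dynamics on a square lattice: the solution sits on top of
    row 0, its strength is n_etchant / volume, and it corrodes exposed sites
    whose random resistance is below that strength, losing one etchant unit
    per dissolved site until no exposed site can be corroded."""
    rng = random.Random(seed)
    resistance = [[rng.random() for _ in range(width)] for _ in range(height)]
    solid = [[True] * width for _ in range(height)]

    def surface_sites():
        """Solid sites in contact with the solution."""
        sites = []
        for y in range(height):
            for x in range(width):
                if not solid[y][x]:
                    continue
                exposed = (y == 0) or not solid[y - 1][x]
                exposed = exposed or (x > 0 and not solid[y][x - 1])
                exposed = exposed or (x < width - 1 and not solid[y][x + 1])
                if exposed:
                    sites.append((y, x))
        return sites

    while True:
        strength = n_etchant / volume
        corroded = [(y, x) for (y, x) in surface_sites()
                    if resistance[y][x] < strength]
        if not corroded:
            break
        for y, x in corroded:
            solid[y][x] = False
        n_etchant -= len(corroded)  # etchant consumed by the reaction
    return solid, n_etchant

if __name__ == "__main__":
    solid, leftover = etch()
    dissolved = sum(row.count(False) for row in solid)
    print("sites dissolved:", dissolved, "etchant left:", leftover)
```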

    Generalization Error in Deep Learning

    Deep learning models have lately shown great performance in various fields such as computer vision, speech recognition, speech translation, and natural language processing. However, alongside their state-of-the-art performance, it is still generally unclear what the source of their generalization ability is. Thus, an important question is what makes deep neural networks able to generalize well from the training set to new data. In this article, we provide an overview of the existing theory and bounds for the characterization of the generalization error of deep neural networks, combining both classical and more recent theoretical and empirical results.
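
    As one concrete instance of the classical bounds such an overview covers, the sketch below evaluates the textbook Hoeffding-plus-union-bound guarantee for a finite hypothesis class: with probability at least 1 - delta, |test error - train error| <= sqrt((ln|H| + ln(2/delta)) / (2n)) simultaneously for every hypothesis. The numbers are purely illustrative:

```python
import math

def finite_class_bound(num_hypotheses, n_samples, delta=0.05):
    """Uniform-convergence bound for a finite hypothesis class H:
    with probability >= 1 - delta, the gap |test error - train error|
    is at most this value for every h in H (Hoeffding + union bound)."""
    return math.sqrt((math.log(num_hypotheses) + math.log(2.0 / delta))
                     / (2.0 * n_samples))

if __name__ == "__main__":
    # The bound shrinks like 1/sqrt(n) but grows only logarithmically
    # with the size of the hypothesis class.
    for n in (1_000, 10_000, 100_000):
        print(n, round(finite_class_bound(num_hypotheses=10**6, n_samples=n), 4))
```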

    Scalable and Interpretable One-class SVMs with Deep Learning and Random Fourier features

    One-class support vector machine (OC-SVM) has for a long time been one of the most effective anomaly detection methods, extensively adopted in both research and industrial applications. The biggest issue for OC-SVM remains its limited capability to operate on large, high-dimensional datasets due to optimization complexity. Those problems might be mitigated via dimensionality reduction techniques such as manifold learning or autoencoders. However, previous work often treats representation learning and anomaly prediction separately. In this paper, we propose the autoencoder-based one-class support vector machine (AE-1SVM), which brings OC-SVM, with the aid of random Fourier features to approximate the radial basis kernel, into the deep learning context by combining it with a representation learning architecture and jointly exploiting stochastic gradient descent to obtain end-to-end training. Interestingly, this also opens up the possible use of gradient-based attribution methods to explain the decision making for anomaly detection, which has long been challenging as a result of the implicit mappings between the input space and the kernel space. To the best of our knowledge, this is the first work to study the interpretability of deep learning in anomaly detection. We evaluate our method on a wide range of unsupervised anomaly detection tasks, in which our end-to-end training architecture achieves performance significantly better than the previous work using separate training. Comment: Accepted at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD) 2018.
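
    The random-Fourier-feature ingredient can be illustrated on its own, independently of the deep architecture. The sketch below (NumPy plus scikit-learn, a toy rather than the AE-1SVM implementation) maps inputs through an explicit RFF map that approximates an RBF kernel and fits a linear one-class SVM on the mapped features; there is no autoencoder and no end-to-end training here, and all parameter values are made up:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def rff_map(X, n_features=200, gamma=0.5, seed=0):
    """Random Fourier features z(x) such that z(x) . z(y) approximates the
    RBF kernel exp(-gamma * ||x - y||^2) (Rahimi & Recht, 2007)."""
    rng = np.random.default_rng(seed)          # fixed seed: same map each call
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    train = rng.normal(size=(500, 10))               # "normal" data
    test_out = rng.normal(loc=4.0, size=(50, 10))    # shifted anomalies

    # A linear OC-SVM in RFF space approximates a kernel OC-SVM with RBF kernel.
    model = OneClassSVM(kernel="linear", nu=0.05).fit(rff_map(train))
    print("inliers kept:", (model.predict(rff_map(train)) == 1).mean())
    print("outliers flagged:", (model.predict(rff_map(test_out)) == -1).mean())
```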

    Primitive Words, Free Factors and Measure Preservation

    Let F_k be the free group on k generators. A word w \in F_k is called primitive if it belongs to some basis of F_k. We investigate two criteria for primitivity, and consider, more generally, subgroups of F_k which are free factors. The first criterion is graph-theoretic and uses Stallings core graphs: given subgroups of finite rank H \le J \le F_k, we present a simple procedure to determine whether H is a free factor of J. This yields, in particular, a procedure to determine whether a given element in F_k is primitive. Again let w \in F_k and consider the word map w: G x G x ... x G \to G (from the direct product of k copies of G to G), where G is an arbitrary finite group. We call w measure preserving if, given uniform measure on G x G x ... x G, w induces uniform measure on G (for every finite G). This is the second criterion we investigate: it is not hard to see that primitivity implies measure preservation, and it was conjectured that the two properties are equivalent. Our combinatorial approach to primitivity allows us to make progress on this problem and, in particular, prove the conjecture for k=2. It was asked whether the primitive elements of F_k form a closed set in the profinite topology of free groups. Our results provide a positive answer for F_2. Comment: This is a unified version of two manuscripts: "On Primitive words I: A New Algorithm" and "On Primitive Words II: Measure Preservation". 42 pages, 14 figures. Some parts of the paper reorganized towards publication in the Israel J. of Math.
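
    The measure-preservation property is easy to test by brute force on a small finite group. The sketch below represents S_3 as permutation tuples and checks whether a word map w: G x G -> G pushes the uniform measure on G x G forward to the uniform measure on G; w = xy (primitive in F_2) passes, while w = x^2 (not primitive) fails. This only illustrates the definition, not the paper's graph-theoretic criterion:

```python
from collections import Counter
from itertools import permutations, product

# S_3 as tuples: p[i] is the image of i under the permutation p.
G = list(permutations(range(3)))

def compose(p, q):
    """(p * q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

def pushforward(word):
    """Distribution on G induced by the word map from uniform measure on G x G."""
    return Counter(word(x, y) for x, y in product(G, G))

def is_measure_preserving(word):
    counts = pushforward(word)
    return len(counts) == len(G) and len(set(counts.values())) == 1

if __name__ == "__main__":
    print(is_measure_preserving(lambda x, y: compose(x, y)))  # w = xy : True
    print(is_measure_preserving(lambda x, y: compose(x, x)))  # w = x^2: False
```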