
    Isometric endomorphisms of free groups

    An arbitrary homomorphism between groups is nonincreasing for stable commutator length, and there are infinitely many (injective) homomorphisms between free groups which strictly decrease the stable commutator length of some elements. However, we show in this paper that a random homomorphism between free groups is almost surely an isometry for stable commutator length for every element; in particular, the unit ball in the scl norm of a free group admits an enormous number of exotic isometries. Using similar methods, we show that a random fatgraph in a free group is extremal (i.e. is an absolute minimizer for relative Gromov norm) for its boundary; this implies, for instance, that a random element of a free group with commutator length at most n has commutator length exactly n and stable commutator length exactly n - 1/2. Our methods also let us construct explicit (and computable) quasimorphisms which certify these facts. Comment: 26 pages, 6 figures; minor typographical edits for final published version.
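
    For orientation, here is a minimal LaTeX sketch of the standard definitions behind these quantities (cl = commutator length, scl = stable commutator length) and the classical bound that makes the value n - 1/2 quoted above the largest possible; these are standard background facts, not results of the paper:

        % For a nontrivial g in the commutator subgroup of a group G:
        \[
          \operatorname{cl}(g) = \min\{\, k : g = [a_1,b_1]\cdots[a_k,b_k] \,\},
          \qquad
          \operatorname{scl}(g) = \lim_{n\to\infty} \frac{\operatorname{cl}(g^n)}{n}.
        \]
        % Classical bound: scl(g) <= cl(g) - 1/2, so an element with cl(g) <= n
        % has scl(g) <= n - 1/2; the paper shows that random elements attain
        % this maximum.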

    Automatic Detection of Abnormal Behavior in Computing Systems

    I present RAACD, a software suite that detects misbehaving computers in large computing systems and presents information about those machines to the system administrator. I build this system using preexisting anomaly detection techniques. I evaluate my methods using simple synthesized data, real data containing coerced abnormal behavior, and real data containing naturally occurring abnormal behavior. I find that the system adequately detects abnormal behavior and significantly reduces the amount of uninteresting computer health data presented to a system administrator.
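
    The abstract does not describe RAACD's internals; as a hedged illustration of the kind of preexisting anomaly detection technique it builds on, the Python sketch below flags hosts whose health metric deviates strongly from the fleet median. The metric, threshold, and robust z-score rule are illustrative assumptions, not RAACD's actual design.

        # Hedged sketch of a simple metric-based anomaly check; NOT the actual
        # RAACD design -- the metric, threshold, and MAD rule are assumptions.
        import statistics

        def flag_abnormal_hosts(metrics, threshold=3.5):
            """Flag hosts whose metric lies far from the fleet median,
            measured in median-absolute-deviation (MAD) units."""
            values = list(metrics.values())
            med = statistics.median(values)
            mad = statistics.median(abs(v - med) for v in values) or 1.0
            return {host: round((value - med) / mad, 1)
                    for host, value in metrics.items()
                    if abs(value - med) / mad > threshold}

        # Example with a hypothetical load-average snapshot: node04 stands out.
        load = {"node01": 0.8, "node02": 1.1, "node03": 0.9,
                "node04": 25.0, "node05": 1.0}
        print(flag_abnormal_hosts(load))  # {'node04': 240.0}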

    Weighted dependency graphs

    The theory of dependency graphs is a powerful toolbox to prove asymptotic normality of sums of random variables. In this article, we introduce a more general notion of weighted dependency graphs and give normality criteria in this context. We also provide generic tools to prove that some weighted graph is a weighted dependency graph for a given family of random variables. To illustrate the power of the theory, we give applications to the following objects: uniform random pair partitions, the random graph model G(n,M), uniform random permutations, the symmetric simple exclusion process and multilinear statistics on Markov chains. The application to random permutations gives a bivariate extension of a functional central limit theorem of Janson and Barbour. On Markov chains, we answer positively an open question of Bourdon and Vallée on the asymptotic normality of subword counts in random texts generated by a Markovian source. Comment: 57 pages. Third version: minor modifications, after review process.
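
    For readers new to the unweighted theory, here is a brief LaTeX sketch of the classical notion the paper generalizes; the weighted definition and criteria are in the article itself, and the criterion below is quoted in one common classical form rather than the paper's.

        % A graph L on the index set of (Y_i) is a dependency graph if, whenever
        % A and B are disjoint vertex sets with no edge between them, the
        % families (Y_i)_{i \in A} and (Y_i)_{i \in B} are independent.
        % One common form of the classical normality criterion (Janson) reads:
        \[
          X_n = \sum_{i=1}^{N_n} Y_{n,i}, \qquad
          \exists\, s \in \mathbb{N}:\;
          \Big(\frac{N_n}{D_n}\Big)^{1/s} \frac{D_n M_n}{\sigma_n}
          \longrightarrow 0
          \;\Longrightarrow\;
          \frac{X_n - \mathbb{E} X_n}{\sigma_n}
          \xrightarrow{\;d\;} \mathcal{N}(0,1),
        \]
        % where the Y_{n,i} admit a dependency graph of maximal degree D_n,
        % |Y_{n,i}| <= M_n almost surely, and \sigma_n^2 = \operatorname{Var}(X_n).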

    Spoken content retrieval: A survey of techniques and technologies

    Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside of the studio and in conversational settings. This survey provides an overview of the field of SCR encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.
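
    As a toy illustration of the SCR pipeline the survey describes (ASR transcripts fed into text-IR indexing and ranking), the Python sketch below indexes a few hypothetical transcripts and ranks them with TF-IDF; the transcripts, scoring choice, and names are assumptions, not taken from the survey.

        # Toy SCR pipeline: hypothetical ASR transcripts indexed and searched
        # with standard text-IR (TF-IDF) scoring.
        import math
        from collections import Counter

        transcripts = {  # hypothetical ASR output, one string per spoken document
            "ep1": "today we discuss neural approaches to speech recognition",
            "ep2": "interview about podcast search and spoken document retrieval",
            "ep3": "cooking show episode about pasta and sauces",
        }

        def tokenize(text):
            return text.lower().split()

        # Document frequencies for IDF weighting.
        df = Counter(t for doc in transcripts.values() for t in set(tokenize(doc)))
        n_docs = len(transcripts)

        def score(query, doc_text):
            """Sum of TF-IDF weights of query terms occurring in the document."""
            tf = Counter(tokenize(doc_text))
            return sum(tf[t] * math.log(n_docs / df[t])
                       for t in tokenize(query) if t in tf)

        query = "spoken document retrieval"
        ranking = sorted(transcripts, key=lambda d: score(query, transcripts[d]),
                         reverse=True)
        print(ranking)  # ep2 ranks first for this query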

    Transfer learning with language models for classification problems (Siirto-oppiminen kielimalleilla luokitteluongelmille)

    Modern neural-network-based language models can reach state-of-the-art performance on a wide range of natural language tasks. Their success rests on the ability to learn from large unlabeled data through pretraining, using transfer learning to build strong representations of the language and to carry what is learned over to new domains and tasks. I examine how language models enable transfer learning for NLP, especially from the viewpoint of classification, and ask how transfer learning can be formally defined. I compare different language model implementations in theory and also use two example data sets to empirically test their performance on very small labeled training data.
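
    A minimal Python sketch of the transfer-learning setup for classification described here: a pretrained language model is reused with a new classification head, which would then be fine-tuned on the small labeled set. The model name and label count are placeholders; the thesis's actual models and data sets may differ.

        # Sketch of reusing a pretrained LM as a text classifier (placeholders).
        import torch
        from transformers import AutoTokenizer, AutoModelForSequenceClassification

        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
        model = AutoModelForSequenceClassification.from_pretrained(
            "bert-base-uncased", num_labels=2)  # new head, randomly initialized

        # In the transfer-learning recipe, this head (and optionally the whole
        # model) would now be fine-tuned on the small labeled training set.
        inputs = tokenizer(["an example document to classify"],
                           return_tensors="pt", padding=True, truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        print(logits.argmax(dim=-1))  # predicted class index (head untrained here)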