Handling Massive N-Gram Datasets Efficiently
This paper deals with the two fundamental problems concerning the handling of
large n-gram language models: indexing, that is compressing the n-gram strings
and associated satellite data without compromising their retrieval speed; and
estimation, that is computing the probability distribution of the strings from
a large textual source. Regarding the problem of indexing, we describe
compressed, exact and lossless data structures that achieve, at the same time,
high space reductions and no time degradation with respect to state-of-the-art
solutions and related software packages. In particular, we present a compressed
trie data structure in which each word following a context of fixed length k,
i.e., its preceding k words, is encoded as an integer whose value is
proportional to the number of words that follow such context. Since the number
of words following a given context is typically very small in natural
languages, we lower the space of representation to compression levels that were
never achieved before. Despite the significant savings in space, our technique
introduces a negligible penalty at query time. Regarding the problem of
estimation, we present a novel algorithm for estimating modified Kneser-Ney
language models, which have emerged as the de facto choice for language modeling
in both academia and industry thanks to their relatively low perplexity
performance. Estimating such models from large textual sources poses the
challenge of devising algorithms that make parsimonious use of the disk. The
state-of-the-art algorithm uses three sorting steps in external memory: we show
an improved construction that requires only one sorting step thanks to
exploiting the properties of the extracted n-gram strings. With an extensive
experimental analysis performed on billions of n-grams, we show an average
improvement of 4.5X on the total running time of the state-of-the-art approach.
Comment: Published in ACM Transactions on Information Systems (TOIS), February
2019, Article No. 2.
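The context-based integer remapping described above can be sketched as follows. This is a toy illustration of the idea, not the paper's actual compressed-trie layout; the corpus and all variable names are made up:

```python
from collections import defaultdict

# Toy trigram list; the context of each final word is its k = 2
# preceding words.
ngrams = [
    ("the", "quick", "fox"),
    ("the", "quick", "dog"),
    ("a", "lazy", "dog"),
]

# Assign each distinct follower of a context a small local id in
# [0, #followers(context)), instead of a global vocabulary id.
local_id = defaultdict(dict)  # context -> {word: local id}
for *ctx, word in ngrams:
    ids = local_id[tuple(ctx)]
    ids.setdefault(word, len(ids))

# Encode each trigram's final word by its local id; since contexts in
# natural language typically have very few followers, these integers
# fit in very few bits.
encoded = [local_id[tuple(ng[:-1])][ng[-1]] for ng in ngrams]
print(encoded)  # [0, 1, 0]
```

The point of the remapping is that the encoded value is bounded by the number of distinct followers of the context, which is usually tiny compared to the full vocabulary.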
Online Embedding Compression for Text Classification using Low Rank Matrix Factorization
Deep learning models have become the state of the art for natural language
processing (NLP) tasks; however, deploying these models in production systems
poses significant memory constraints. Existing compression methods are either
lossy or introduce significant latency. We propose a compression method that
leverages low-rank matrix factorization during training to compress the word
embedding layer, which represents the size bottleneck for most NLP models. Our
models are trained, compressed, and then further re-trained on the downstream
task to recover accuracy while maintaining the reduced size. Empirically, we
show that the proposed method can achieve 90% compression with minimal impact
on accuracy for sentence classification tasks, and outperforms alternative
methods like fixed-point quantization or offline word embedding compression. We
also analyze the inference time and storage space for our method through FLOP
calculations, showing that we can compress DNN models by a configurable ratio
and recover the lost accuracy without introducing additional latency compared
to fixed-point quantization. Finally, we introduce a novel learning rate schedule,
the Cyclically Annealed Learning Rate (CALR), which we empirically demonstrate
to outperform other popular adaptive learning rate algorithms on a sentence
classification benchmark.
Comment: Accepted at the Thirty-Third AAAI Conference on Artificial Intelligence
(AAAI 2019).
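The size arithmetic behind low-rank embedding compression can be sketched as follows. This is an illustrative SVD-based sketch; the paper learns the factors during training rather than computing them post hoc, and the matrix sizes here are invented:

```python
import numpy as np

vocab, dim, rank = 10_000, 300, 30   # rank controls the compression ratio
rng = np.random.default_rng(0)

# Stand-in for a trained embedding matrix E (vocab x dim).
E = rng.standard_normal((vocab, dim))

# Replace E by two factors A (vocab x rank) and B (rank x dim); a
# truncated SVD gives the best rank-r approximation in Frobenius norm.
U, s, Vt = np.linalg.svd(E, full_matrices=False)
A = U[:, :rank] * s[:rank]   # vocab x rank
B = Vt[:rank, :]             # rank x dim

# Parameter counts: the two factors store ~10% of the original entries,
# i.e. roughly 90% compression of the embedding layer.
full = vocab * dim
compressed = vocab * rank + rank * dim
print(compressed / full)  # 0.103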
Compression of Structured High-Throughput Sequencing Data
Large biological datasets are being produced at a rapid pace and create substantial storage challenges, particularly in the domain of high-throughput sequencing (HTS). Most approaches currently used to store HTS data are either unable to quickly adapt to the requirements of new sequencing or analysis methods (because they do not support schema evolution), or fail to provide state-of-the-art compression of the datasets. We have devised new approaches to store HTS data that support seamless data schema evolution and compress datasets substantially better than existing approaches. Building on these new approaches, we discuss and demonstrate how a multi-tier data organization can dramatically reduce the storage, computational, and network burden of collecting, analyzing, and archiving large sequencing datasets. For instance, we show that spliced RNA-Seq alignments can be stored in less than 4% of the size of a BAM file with perfect data fidelity. Compared to the previous compression state of the art, these methods reduce dataset size by more than 40% when storing exome, gene expression, or DNA methylation datasets. The approaches have been integrated in a comprehensive suite of software tools (http://goby.campagnelab.org) that support common analyses for a range of high-throughput sequencing assays.
Funding: National Center for Research Resources (U.S.) (Grant UL1 RR024996); Leukemia & Lymphoma Society of America (Translational Research Program Grant LLS 6304-11); National Institute of Mental Health (U.S.) (R01 MH086883).
Magnetic braking in young late-type stars: the effect of polar spots
The concentration of magnetic flux near the poles of rapidly rotating cool
stars has been recently proposed as an alternative mechanism to dynamo
saturation in order to explain the saturation of angular momentum loss. In this
work we study the effect of magnetic surface flux distribution on the coronal
field topology and angular momentum loss rate. We investigate if magnetic flux
concentration towards the pole is a reasonable alternative to dynamo
saturation. We construct a 1D wind model and also apply a 2D self-similar
analytical model, to evaluate how the surface field distribution affects the
angular momentum loss of the rotating star. From the 1D model we find that, in
a magnetically dominated low corona, the concentrated polar surface field
rapidly expands to regions of low magnetic pressure resulting in a coronal
field with small latitudinal variation. We also find that the angular momentum
loss rate due to a uniform field or a concentrated field with equal total
magnetic flux is very similar. From the 2D wind model we show that there are
several relevant factors to take into account when studying the angular
momentum loss from a star. In particular, we show that the inclusion of force
balance across the field in a wind model is fundamental if realistic
conclusions are to be drawn about the effect of a non-uniform surface field
distribution on magnetic braking. This model predicts that a magnetic field
concentrated at high latitudes leads to larger Alfven radii and larger braking
rates than a smoother field distribution. From the results obtained, we argue
that the concentration of the magnetic surface field towards the pole does not
directly limit the braking efficiency of the wind.
Comment: 11 pages, 10 figures, accepted in A&A.
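For orientation, the link between the Alfven radius and the braking rate discussed above can be written in the standard magnetized-wind form (a textbook relation, not a formula taken from this paper):

```latex
% A magnetized wind corotates with the star out to the Alfven radius R_A,
% so the angular momentum loss rate scales as
\dot{J} \simeq \dot{M}\,\Omega\,R_A^{2},
% where \dot{M} is the mass-loss rate and \Omega the stellar rotation rate.
% At fixed \dot{M} and \Omega, a field geometry that enlarges R_A
% therefore increases the braking rate.
```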
Integer Set Compression and Statistical Modeling
Compression of integer sets and sequences has been extensively studied for
settings where elements follow a uniform probability distribution. In addition,
methods exist that exploit clustering of elements in order to achieve higher
compression performance. In this work, we address the case where enumeration of
elements may be arbitrary or random, but where statistics are kept in order to
estimate probabilities of elements. We present a recursive subset-size encoding
method that is able to benefit from statistics, explore the effects of
permuting the enumeration order based on element probabilities, and discuss
general properties and possibilities for this class of compression problems.
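A minimal sketch of a recursive subset-size scheme in this spirit is given below. This is my own illustrative construction using a uniform model over the feasible counts, not the authors' statistics-driven encoder:

```python
from math import log2

def subset_bits(n, elems):
    """Bits to describe a subset of the universe [0, n) by recursively
    encoding how many of its elements fall in the left half of the
    universe, then recursing on each half."""
    k = len(elems)
    if n <= 1 or k == 0 or k == n:
        return 0.0  # subset fully determined, nothing left to encode
    half = n // 2
    left = [e for e in elems if e < half]
    right = [e - half for e in elems if e >= half]
    # Feasible left-half counts given k elements and the two half sizes;
    # a uniform model costs log2 of the number of possibilities.
    lo, hi = max(0, k - (n - half)), min(k, half)
    bits = log2(hi - lo + 1) if hi > lo else 0.0
    return bits + subset_bits(half, left) + subset_bits(n - half, right)

print(subset_bits(8, [1, 5]))  # ~5.58 bits for {1, 5} out of [0, 8)
```

Replacing the uniform model over counts with probabilities estimated from kept statistics is exactly where such a scheme can benefit, which is the direction the abstract describes.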
Giant Ringlike Radio Structures Around Galaxy Cluster Abell 3376
In the current paradigm of cold dark matter cosmology, large-scale structures
assemble through hierarchical clustering of matter. In this process, an
important role is played by megaparsec (Mpc)-scale cosmic shock waves, arising
in gravity-driven supersonic flows of intergalactic matter onto dark
matter-dominated collapsing structures such as pancakes, filaments, and
clusters of galaxies. Here, we report Very Large Array telescope observations
of giant (~2 Mpc by 1.6 Mpc), ring-shaped nonthermal radio-emitting structures,
found at the outskirts of the rich cluster of galaxies Abell 3376. These
structures may trace the elusive shock waves of cosmological large-scale matter
flows, which are energetic enough to power them. These radio sources may also
be the acceleration sites where magnetic shocks are possibly boosting
cosmic-ray particles with energies of up to 10^18 to 10^19 electron volts.
Comment: Published in Science, 3 November 2006. Main paper and Supporting
Online Material.