The Unsupervised Acquisition of a Lexicon from Continuous Speech
We present an unsupervised learning algorithm that acquires a
natural-language lexicon from raw speech. The algorithm is based on the optimal
encoding of symbol sequences in an MDL framework, and uses a hierarchical
representation of language that overcomes many of the problems that have
stymied previous grammar-induction procedures. The forward mapping from symbol
sequences to the speech stream is modeled using features based on articulatory
gestures. We present results on the acquisition of lexicons and language models
from raw speech, text, and phonetic transcripts, and demonstrate that our
algorithm compares very favorably to other reported results with respect to
segmentation performance and statistical efficiency.
Comment: 27-page technical report
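The core idea — choosing the segmentation of an unsegmented symbol stream that minimizes total code length under a lexicon — can be illustrated with a toy sketch. This is not the authors' implementation; the corpus, lexicon, and probabilities below are hypothetical, and the sketch assumes a fixed lexicon rather than learning one:

```python
import math

def segment(text, lexicon):
    """Split an unsegmented string into lexicon words, minimizing the
    total code length sum(-log2 p(w)) -- a Viterbi pass over prefixes."""
    n = len(text)
    best = [math.inf] * (n + 1)   # best[i]: fewest bits to encode text[:i]
    back = [None] * (n + 1)       # back[i]: (start, word) of the last word
    best[0] = 0.0
    for i in range(1, n + 1):
        for w, p in lexicon.items():
            j = i - len(w)
            if j >= 0 and text[j:i] == w and best[j] - math.log2(p) < best[i]:
                best[i] = best[j] - math.log2(p)
                back[i] = (j, w)
    if back[n] is None:
        return None, math.inf     # no full segmentation exists
    words, i = [], n
    while i > 0:
        j, w = back[i]
        words.append(w)
        i = j
    return words[::-1], best[n]

# Hypothetical lexicon with unigram probabilities.
lexicon = {"the": 0.4, "cat": 0.2, "sat": 0.2, "theca": 0.1, "tsat": 0.1}
words, bits = segment("thecatsat", lexicon)
```

In a full MDL setup the lexicon itself would also carry a description-length cost, and the learner would iterate between re-segmenting the corpus and revising the lexicon; the sketch shows only the inner encoding step.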
Unsupervised Language Acquisition
This thesis presents a computational theory of unsupervised language
acquisition, precisely defining procedures for learning language from ordinary
spoken or written utterances, with no explicit help from a teacher. The theory
is based heavily on concepts borrowed from machine learning and statistical
estimation. In particular, learning takes place by fitting a stochastic,
generative model of language to the evidence. Much of the thesis is devoted to
explaining conditions that must hold for this general learning strategy to
arrive at linguistically desirable grammars. The thesis introduces a variety of
technical innovations, among them a common representation for evidence and
grammars, and a learning strategy that separates the "content" of linguistic
parameters from their representation. Algorithms based on it suffer from few of
the search problems that have plagued other computational approaches to
language acquisition.
The theory has been tested on problems of learning vocabularies and grammars
from unsegmented text and continuous speech, and mappings between sound and
representations of meaning. It performs extremely well on various objective
criteria, acquiring knowledge that causes it to assign almost exactly the same
structure to utterances as humans do. This work has application to data
compression, language modeling, speech recognition, machine translation,
information retrieval, and other tasks that rely on either structural or
stochastic descriptions of language.
Comment: PhD thesis, 133 pages
Examining the Limits of Predictability of Human Mobility
We challenge the upper bound of human-mobility predictability that is widely used to corroborate the accuracy of mobility prediction models. We observe that extensions of recurrent-neural-network architectures achieve significantly higher prediction accuracy, surpassing this upper bound. Given this discrepancy, the central objective of our work is to show that the methodology behind the estimation of the predictability upper bound is erroneous and to identify the reasons for this discrepancy. To explain this anomaly, we shed light on several underlying assumptions that have contributed to this bias. In particular, we highlight the consequences of the assumed Markovian nature of human mobility for the derivation of this upper bound on maximum mobility predictability. Using several statistical tests on three real-world mobility datasets, we show that human mobility exhibits scale-invariant long-distance dependencies, in contrast with the initial Markovian assumption. We show that this assumed exponential decay of information in mobility trajectories, coupled with inadequate use of encoding techniques, results in entropy inflation and consequently lowers the upper bound on predictability. We highlight that the current upper-bound computation methodology, based on Fano's inequality, tends to overlook the long-range structural correlations inherent in mobility behaviors, and we demonstrate their significance using an alternate encoding scheme. We further show the effect of not accounting for these dependencies by probing the decay of mutual information in mobility trajectories. We expose the systematic bias that culminates in an inaccurate upper bound and explain why recurrent-neural architectures, designed to handle long-range structural correlations, surpass this upper limit on human-mobility predictability.
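The upper bound being challenged is conventionally obtained by estimating the entropy S of a trajectory over N distinct locations and solving Fano's inequality for the maximum predictability. A minimal sketch of that solving step (bisection; the numeric inputs are hypothetical and S is assumed to lie between 0 and log2(N)):

```python
import math

def max_predictability(S, N):
    """Solve Fano's inequality S = H(p) + (1 - p) * log2(N - 1) for the
    predictability upper bound p by bisection; the left-hand side is
    monotonically decreasing in p on [1/N, 1)."""
    def fano(p):
        h = (-p * math.log2(p) - (1 - p) * math.log2(1 - p)
             if 0.0 < p < 1.0 else 0.0)          # binary entropy H(p)
        return h + (1 - p) * math.log2(N - 1)
    lo, hi = 1.0 / N, 1.0 - 1e-12
    for _ in range(200):
        mid = (lo + hi) / 2
        if fano(mid) > S:   # entropy still too high: push p upward
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical values: entropy of 3 bits over 500 distinct locations.
pi_max = max_predictability(3.0, 500)
```

The paper's argument is that the entropy estimate S fed into this formula is inflated when long-range dependencies are ignored, which in turn deflates the resulting bound; the solver itself is standard.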
File compression using probabilistic grammars and LR parsing
Data compression, the reduction in size of the physical representation
of data being stored or transmitted, has long been of interest both as a research topic and as a practical technique. Different methods are used
for encoding different classes of data files. The purpose of this research
is to compress a class of highly redundant data files whose contents are
partially described by a context-free grammar (i.e. text files containing
computer programs).
An encoding technique is developed for the removal of structural
dependency due to the context-free structure of such files. The technique
depends on a type of LR parsing method called LALR(k) (Lookahead LR).
The encoder also pays particular attention to the encoding of editing
characters, comments, names and constants.
The encoded data maintains the exact information content of the
original data. Hence, a decoding technique (depending on the same
parsing method) is developed to recover the original information from
its compressed representation.
The technique is demonstrated by compressing Pascal programs. An
optimal coding scheme (based on Huffman codes) is used to encode the
parsing alternatives in each parsing state. The decoder uses these codes
during the decoding phase. Also, Huffman codes, based on the probability
of the symbols concerned, are used when coding editing characters,
comments, names and constants. The sizes of the parsing tables (and
subsequently the encoding tables) were considerably reduced by splitting
them into a number of sub-tables.
The minimum and average code lengths of the average program are
derived from two different matrices. These matrices are constructed
from a probabilistic grammar, and the language generated by this grammar.
Finally, various comparisons are made with a related encoding method by
using a simple context-free language.
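The per-state coding described above — assigning optimal Huffman codewords to the parsing alternatives of each state — can be sketched as follows. The state and its probabilities are hypothetical, not taken from the thesis's actual Pascal tables:

```python
import heapq
from itertools import count

def huffman_codes(probs):
    """Assign Huffman codewords to the parse alternatives of one state,
    given their probabilities; frequent alternatives get shorter codes."""
    tiebreak = count()  # unique counter so equal-probability nodes compare
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # two least-probable subtrees
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c0.items()}
        merged.update({s: "1" + c for s, c in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

# Hypothetical probabilities for the alternatives available in one
# parsing state of an LALR(k) parser.
state_probs = {"shift_id": 0.5, "reduce_expr": 0.3, "shift_lparen": 0.2}
codes = huffman_codes(state_probs)
```

Because only the alternatives legal in the current state need codewords, the per-state code is much shorter than a global symbol code, which is what makes grammar-directed compression effective on highly structured files.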
An MML-based tool for evaluating the complexity of (stochastic) logic programs
The thesis presents the first general MML coding scheme for evaluating the simplicity of models expressed as stochastic logic programs, together with its implementation as a Prolog tool.
Castillo Andreu, H. (2012). An MML-based tool for evaluating the complexity of (stochastic) logic programs. http://hdl.handle.net/10251/17983
SciTech News Volume 71, No. 2 (2017)
Columns and Reports: From the Editor 3
Division News: Science-Technology Division 5; Chemistry Division 8; Engineering Division 9; Aerospace Section of the Engineering Division 12; Architecture, Building Engineering, Construction and Design Section of the Engineering Division 14
Reviews: Sci-Tech Book News Reviews 16
Advertisements: IEEE