Semantics, Modelling, and the Problem of Representation of Meaning -- a Brief Survey of Recent Literature
Over the past 50 years many have debated what representation should be used
to capture the meaning of natural language utterances. Recently, new
requirements for such representations have been raised in research. Here I
survey some of the interesting representations proposed to meet these new
needs.
Comment: 15 pages, no figures
A Dynamic Approach to Rhythm in Language: Toward a Temporal Phonology
It is proposed that the theory of dynamical systems offers appropriate tools
to model many phonological aspects of both speech production and perception. A
dynamic account of speech rhythm is shown to be useful for description of both
Japanese mora timing and English timing in a phrase repetition task. This
orientation contrasts fundamentally with the more familiar symbolic approach to
phonology, in which time is modeled only with sequentially arrayed symbols. It
is proposed that an adaptive oscillator offers a useful model for perceptual
entrainment (or 'locking in') to the temporal patterns of speech production.
This helps to explain why speech is often perceived to be more regular than
experimental measurements seem to justify. Because dynamic models deal with
real time, they also help us understand how languages can differ in their
temporal detail---contributing to foreign accents, for example. The fact that
languages differ greatly in their temporal detail suggests that these effects
are not mere motor universals, but that dynamical models are intrinsic
components of the phonological characterization of language.
Comment: 31 pages; compressed, uuencoded Postscript
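The entrainment mechanism this abstract describes can be sketched as an oscillator that corrects its intrinsic period from the timing error of each observed onset. The update rule, gain, and parameter values below are illustrative assumptions, not the specific adaptive oscillator studied in the paper:

```python
def entrain(onsets, period, beta=0.3):
    """Adapt an oscillator's period toward the inter-onset interval.

    onsets: sorted event times (seconds); period: initial intrinsic period.
    beta:   period-correction gain (an assumed value, for illustration).
    Returns the adapted period after each onset beyond the first.
    """
    next_beat = onsets[0] + period   # time of the next predicted beat
    periods = []
    for t in onsets[1:]:
        err = t - next_beat          # positive if the onset arrived late
        period += beta * err         # period correction: 'locking in'
        next_beat = t + period       # phase re-anchors to the observed onset
        periods.append(period)
    return periods

# A perfectly regular 0.5 s pulse train, starting from a 0.7 s period:
onsets = [0.5 * k for k in range(21)]
adapted = entrain(onsets, period=0.7)
```

With a regular pulse train, the period converges geometrically toward the stimulus interval; a listener-like model of this kind predicts speech to be heard as more regular than the raw measurements, as the abstract notes.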
Dynamics on expanding spaces: modeling the emergence of novelties
Novelties are part of our daily lives. We constantly adopt new technologies,
conceive new ideas, meet new people, experiment with new situations.
Occasionally, we as individuals, in a complicated cognitive and sometimes
fortuitous process, come up with something that is not only new to us, but to
our entire society so that what is a personal novelty can turn into an
innovation at a global level. Innovations occur throughout social, biological
and technological systems and, though we perceive them as a very natural
ingredient of our human experience, little is known about the processes
determining their emergence. Still, the statistical occurrence of innovations
shows striking regularities that represent a starting point for gaining deeper
insight into the whole phenomenology. This paper represents a small step in
that direction, focusing on a review of the scientific attempts to effectively
model the emergence of the new and its regularities, with an emphasis on more
recent contributions: from the plain Simon model, tracing back to the 1950s,
to the newest models of Polya urns with triggering, in which one novelty opens
the way to another. What seems to be key in the successful modelling schemes
proposed so far is the idea of looking at evolution as a path in a complex
space (physical, conceptual, biological, technological) whose structure and
topology get continuously reshaped and expanded by the occurrence of the new.
Mathematically, it is very interesting to look at the consequences of the
interplay between the "actual" and the "possible", and this is the aim of this
short review.
Comment: 25 pages, 10 figures
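The Polya-urn-with-triggering scheme mentioned in this abstract can be simulated in a few lines: each draw reinforces the drawn colour, and a never-before-seen colour triggers the addition of brand-new colours (an expansion of the "possible"). Parameter names and values here are a standard illustrative choice, not code from the review:

```python
import random

def polya_urn_with_triggering(steps, rho=4, nu=3, seed=0):
    """Urn model where drawing a never-seen colour triggers nu + 1
    brand-new colours. Returns the number of distinct colours seen
    after each draw. rho, nu, and seed are illustrative assumptions.
    """
    rng = random.Random(seed)
    urn = [0]                  # start with one ball of colour 0
    next_colour = 1            # label for the next brand-new colour
    seen = set()
    distinct = []
    for _ in range(steps):
        ball = rng.choice(urn)
        urn.extend([ball] * rho)               # reinforce the drawn colour
        if ball not in seen:                   # a novelty occurred:
            seen.add(ball)                     # record it, and expand the
            urn.extend(range(next_colour,      # space of the possible
                             next_colour + nu + 1))
            next_colour += nu + 1
        distinct.append(len(seen))
    return distinct

growth = polya_urn_with_triggering(5000)
```

The curve of distinct colours versus draws is the model's analogue of the empirical innovation-rate regularities (Heaps-law-like growth) that the review discusses.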
Large-scale analysis of Zipf's law in English texts
Despite being a paradigm of quantitative linguistics, Zipf's law for words
suffers from three main problems: its formulation is ambiguous, its validity
has not been tested rigorously from a statistical point of view, and it has
not been confronted with a representatively large number of texts. Thus, the
current support for Zipf's law in texts can be summarized as anecdotal.
We try to solve these issues by studying three different versions of Zipf's
law and fitting them to all available English texts in the Project Gutenberg
database (consisting of more than 30000 texts). To do so, we use
state-of-the-art tools in fitting and goodness-of-fit tests, carefully
tailored to the
peculiarities of text statistics. Remarkably, one of the three versions of
Zipf's law, consisting of a pure power-law form in the complementary cumulative
distribution function of word frequencies, is able to fit more than 40% of the
texts in the database (at the 0.05 significance level), for the whole domain of
frequencies (from 1 to the maximum value) and with only one free parameter (the
exponent).
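A pure power law in the complementary cumulative distribution, as tested in this abstract, can be illustrated with a one-parameter maximum-likelihood fit of the exponent. The continuous-data Hill estimator and the synthetic Pareto sample below are simplifying assumptions for illustration; the paper itself uses discrete fits with rigorous goodness-of-fit tests:

```python
import math
import random

def hill_exponent(samples, xmin=1.0):
    """Maximum-likelihood (Hill) estimate of alpha for a continuous
    power law p(x) ~ x**(-alpha), x >= xmin. An illustrative stand-in
    for the discrete, goodness-of-fit-based procedure in the paper.
    """
    tail = [x for x in samples if x >= xmin]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / xmin) for x in tail)

# Draw a synthetic Pareto sample with a known exponent via inverse
# transform sampling, then recover the exponent from the data:
rng = random.Random(42)
alpha_true = 1.9
data = [(1.0 - rng.random()) ** (-1.0 / (alpha_true - 1.0))
        for _ in range(20000)]
alpha_hat = hill_exponent(data)
```

The single free parameter here plays the role of the exponent in the paper's pure power-law version of Zipf's law; on real word-frequency data the discrete nature of the counts and the goodness-of-fit testing matter, which is why the authors' tailored tools are needed.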