Mean-payoff Automaton Expressions
Quantitative languages are an extension of boolean languages that assign to
each word a real number. Mean-payoff automata are finite automata with
numerical weights on transitions that assign to each infinite path the long-run
average of the transition weights. When the mode of branching of the automaton
is deterministic, nondeterministic, or alternating, the corresponding class of
quantitative languages is not robust as it is not closed under the pointwise
operations of max, min, sum, and numerical complement. For nondeterministic
and alternating mean-payoff automata, the main decision problems are also out
of reach: the quantitative generalizations of the universality and
language-inclusion problems are undecidable.
We introduce a new class of quantitative languages, defined by mean-payoff
automaton expressions, which is robust and decidable: it is closed under the
four pointwise operations, and we show that all decision problems are decidable
for this class. Mean-payoff automaton expressions subsume deterministic
mean-payoff automata, and we show that they have expressive power incomparable
to nondeterministic and alternating mean-payoff automata. We also present the
first algorithm to compute the distance between two quantitative languages
given as mean-payoff automaton expressions.
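The long-run average semantics and the pointwise operations can be illustrated with a small sketch (an illustration of the definitions only, not code from the paper; all names here are our own):

```python
# Minimal sketch: for an ultimately periodic ("lasso") run, the mean-payoff
# value is the average weight of the repeated cycle; the finite prefix does
# not affect the long-run average. Expressions then combine the values that
# several automata assign to the same word pointwise.

def mean_payoff(prefix_weights, cycle_weights):
    """Long-run average of a run that follows `prefix_weights` once and
    then repeats `cycle_weights` forever."""
    return sum(cycle_weights) / len(cycle_weights)

# Pointwise operations appearing in mean-payoff automaton expressions:
def expr_max(*vals): return max(vals)
def expr_min(*vals): return min(vals)
def expr_sum(*vals): return sum(vals)

# Two hypothetical automata assigning values to the same lasso word:
v1 = mean_payoff([3], [1, 2, 3])   # cycle average = 2.0
v2 = mean_payoff([0], [4, 0])      # cycle average = 2.0
print(expr_max(v1, v2), expr_sum(v1, v2))  # 2.0 4.0
```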
Simulation for competition of languages with an ageing sexual population
Recently, individual-based models originally used for biological purposes
revealed interesting insights into processes of the competition of languages.
Within this new field of population dynamics a model considering sexual
populations with ageing is presented. The agents are situated on a lattice and
each one speaks one of two languages or both. The stability and quantitative
structure of an interface between two regions, initially speaking different
languages, is studied. We find that individuals speaking both languages do not
prefer any of these regions and have a different age structure than individuals
speaking only one language.
Comment: submitted to International Journal of Modern Physics
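The basic setup can be sketched in a few lines (a toy illustration with an assumed update rule; the paper's model additionally includes ageing and sexual reproduction, which are omitted here):

```python
import random

# Toy two-language lattice sketch: agents on an L x L torus speak "A", "B",
# or both ("AB"); the left half initially speaks A, the right half B.
L = 20
grid = [["A" if x < L // 2 else "B" for x in range(L)] for _ in range(L)]
rng = random.Random(0)  # fixed seed for reproducibility

def step(grid):
    # Pick a random agent and a random neighbour (torus boundary).
    y, x = rng.randrange(L), rng.randrange(L)
    ny = (y + rng.choice([-1, 0, 1])) % L
    nx = (x + rng.choice([-1, 0, 1])) % L
    # Assumed rule: hearing a language the agent does not fully share
    # makes it bilingual.
    if grid[y][x] != grid[ny][nx]:
        grid[y][x] = "AB"

for _ in range(1000):
    step(grid)

# Bilinguals emerge along the interface between the two regions.
bilinguals = sum(row.count("AB") for row in grid)
```

Even this stripped-down rule reproduces the qualitative feature studied in the abstract: bilingual agents concentrate at the interface rather than in either monolingual region.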
Comparing Fifty Natural Languages and Twelve Genetic Languages Using Word Embedding Language Divergence (WELD) as a Quantitative Measure of Language Distance
We introduce a new measure of distance between languages based on word
embedding, called word embedding language divergence (WELD). WELD is defined
as the divergence between the unified similarity distributions of words across
languages.
Using such a measure, we perform language comparison for fifty natural
languages and twelve genetic languages. Our natural language dataset is a
collection of sentence-aligned parallel corpora from bible translations for
fifty languages spanning a variety of language families. Although we use
parallel corpora, which guarantee the same content in all languages,
languages within the same family nevertheless cluster together in many cases.
In addition to natural languages, we perform language comparison for the coding
regions in the genomes of 12 different organisms (4 plants, 6 animals, and 2
human subjects). Our result confirms a significant high-level difference in the
genetic language model of humans/animals versus plants. The proposed method is
a step toward defining a quantitative measure of similarity between languages,
with applications in language classification, genre identification, dialect
identification, and evaluation of translations.
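A simplified version of such an embedding-based divergence can be sketched as follows (our own construction for illustration; the paper's exact WELD definition may differ — here we compare histograms of pairwise cosine similarities between words via Jensen-Shannon divergence):

```python
import math

def cosine(u, v):
    """Cosine similarity of two embedding vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def similarity_histogram(embeddings, bins=10):
    """Histogram of pairwise cosine similarities, normalised to sum to 1."""
    vecs = list(embeddings.values())
    sims = [cosine(u, v) for i, u in enumerate(vecs) for v in vecs[i + 1:]]
    counts = [0] * bins
    for s in sims:
        idx = min(int((s + 1) / 2 * bins), bins - 1)  # map [-1, 1] to a bin
        counts[idx] += 1
    total = sum(counts)
    return [c / total for c in counts]

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(x, y):
        return sum(a * math.log((a + eps) / (b + eps)) for a, b in zip(x, y))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical 2-d embeddings for two tiny "languages":
lang1 = {"a": (1.0, 0.0), "b": (0.9, 0.1), "c": (0.0, 1.0)}
lang2 = {"x": (1.0, 0.0), "y": (-1.0, 0.1), "z": (0.0, -1.0)}
d = js_divergence(similarity_histogram(lang1), similarity_histogram(lang2))
```

Identical similarity structure yields divergence zero, and the measure grows as the internal similarity geometry of the two vocabularies drifts apart.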
Modelling the developmental patterning of finiteness marking in English, Dutch, German and Spanish using MOSAIC
In this paper we apply MOSAIC (Model of Syntax Acquisition in Children) to the simulation of the developmental patterning of children’s Optional Infinitive (OI) errors in four languages: English, Dutch, German and Spanish. MOSAIC, which has already simulated this phenomenon in Dutch and English, now implements a learning mechanism that better reflects the theoretical assumptions underlying it, as well as a chunking mechanism which results in frequent phrases being treated as one unit. Using a single, identical model that learns from child-directed speech, we obtain a close quantitative fit to the data from all four languages, despite there being considerable cross-linguistic and developmental variation in the OI phenomenon. MOSAIC successfully simulates the difference between Spanish (a pro-drop language where OI errors are virtually absent) and Obligatory Subject languages that do display the OI phenomenon. It also highlights differences in the OI phenomenon across German and Dutch, two closely related languages whose grammar is virtually identical with respect to the relation between finiteness and verb placement. Taken together, these results suggest that (a) cross-linguistic differences in the rates at which children produce Optional Infinitives are graded, quantitative differences that closely reflect the statistical properties of the input they are exposed to, and (b) theories of syntax acquisition need to consider more closely the role of input characteristics as determinants of quantitative differences in the cross-linguistic patterning of phenomena in language acquisition.
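The chunking idea — treating frequent phrases as single units — can be illustrated with a small sketch (our own simplification for illustration; MOSAIC's actual learning mechanism is considerably more elaborate):

```python
from collections import Counter

# Toy chunking pass: count adjacent word pairs across utterances and
# rewrite pairs that occur at least `threshold` times as single units.

def chunk_frequent_bigrams(utterances, threshold=2):
    bigrams = Counter()
    for utt in utterances:
        words = utt.split()
        bigrams.update(zip(words, words[1:]))
    frequent = {bg for bg, c in bigrams.items() if c >= threshold}
    chunked = []
    for utt in utterances:
        words = utt.split()
        out, i = [], 0
        while i < len(words):
            if i + 1 < len(words) and (words[i], words[i + 1]) in frequent:
                out.append(words[i] + "_" + words[i + 1])  # merge the chunk
                i += 2
            else:
                out.append(words[i])
                i += 1
        chunked.append(" ".join(out))
    return chunked

utts = ["want to go", "want to play", "go play"]
print(chunk_frequent_bigrams(utts))
# → ['want_to go', 'want_to play', 'go play']
```

The frequent phrase "want to" is fused into one unit while the rarer "go play" is left analysed word by word, mirroring how chunking makes high-frequency phrases behave as single lexical items.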
Complex network analysis of literary and scientific texts
We present results from our quantitative study of statistical and network
properties of literary and scientific texts written in two languages: English
and Polish. We show that Polish texts are described by Zipf's law with a
scaling exponent smaller than that for English. We also show
that the scientific texts are typically characterized by the rank-frequency
plots with relatively short range of power-law behavior as compared to the
literary texts. We then transform the texts into their word-adjacency network
representations and find another difference between the languages. For the
majority of the literary texts in both languages, the corresponding networks
revealed the scale-free structure, while this was not always the case for the
scientific texts. However, all the network representations of texts were
hierarchical. At this level of description, we do not observe any qualitative
or quantitative difference between the languages. However, if we look at other
network statistics, such as the clustering coefficient and the average shortest
path length, the English texts turn out to possess a more clustered structure
than the Polish ones. This result
was attributed to differences in grammar of both languages, which was also
indicated in the Zipf plots. All the texts, however, show network structure
that differs from the Watts-Strogatz, Barabasi-Albert, and Erdos-Renyi
architectures.
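The first step of such a study — estimating the Zipf scaling exponent from a rank-frequency plot — can be sketched as follows (our own illustration, not the paper's pipeline):

```python
import math
from collections import Counter

# Estimate the Zipf scaling exponent of a text by a least-squares fit of
# log(frequency) against log(rank) in the rank-frequency plot.

def zipf_exponent(text):
    freqs = sorted(Counter(text.lower().split()).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # Zipf's law predicts an exponent close to 1

text = "the cat saw the dog and the dog saw a cat near the house"
exponent = zipf_exponent(text)
```

On a real corpus the fit would be restricted to the power-law range of the plot; as the abstract notes, that range is markedly shorter for scientific texts than for literary ones.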
Varieties of Cost Functions.
Regular cost functions were introduced as a quantitative generalisation of regular languages, retaining many of their equivalent characterisations and decidability properties. For instance, stabilisation monoids play the same role for cost functions as monoids do for regular languages. The purpose of this article is to further extend this algebraic approach by generalising two results on regular languages to cost functions: Eilenberg's varieties theorem and profinite equational characterisations of lattices of regular languages. This opens interesting new perspectives, but the specificities of cost functions introduce difficulties that prevent these generalisations from being straightforward. In contrast, although syntactic algebras can be defined for formal power series over a commutative ring, no such notion is known for series over semirings, and in particular over the tropical semiring.