Complexity transitions in global algorithms for sparse linear systems over finite fields
We study the computational complexity of a very basic problem, namely that of
finding solutions to a very large set of random linear equations over a finite
Galois field GF(q). Using tools from statistical mechanics we are able to
identify phase transitions in the structure of the solution space and to
connect them to changes in performance of a global algorithm, namely Gaussian
elimination. Crossing phase boundaries produces a dramatic increase in the
memory and CPU requirements of the algorithm; in turn, this causes the upper
bounds on the running time to saturate. We illustrate the results
on the specific problem of integer factorization, which is of central interest
for deciphering messages encrypted with the RSA cryptosystem. Comment: 23 pages, 8 figures
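The global algorithm studied in this abstract, Gaussian elimination over GF(q), can be sketched in a few lines. The following Python function is an illustrative implementation (not the authors' code), assuming a prime modulus q so that inverses come from Fermat's little theorem; the toy system at the end is hypothetical:

```python
# Sketch: Gaussian elimination over GF(q), q prime. Illustrative only.

def solve_mod_q(A, b, q):
    """Return one solution x of A x = b (mod q), or None if inconsistent."""
    n_rows, n_cols = len(A), len(A[0])
    # Augmented matrix [A | b], reduced in place.
    M = [[A[i][j] % q for j in range(n_cols)] + [b[i] % q] for i in range(n_rows)]
    pivots, row = [], 0
    for col in range(n_cols):
        piv = next((r for r in range(row, n_rows) if M[r][col]), None)
        if piv is None:
            continue  # no pivot in this column
        M[row], M[piv] = M[piv], M[row]
        inv = pow(M[row][col], q - 2, q)        # modular inverse (q prime)
        M[row] = [v * inv % q for v in M[row]]  # normalize pivot row
        for r in range(n_rows):
            if r != row and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][j] - f * M[row][j]) % q for j in range(n_cols + 1)]
        pivots.append((row, col))
        row += 1
    if any(M[r][n_cols] for r in range(row, n_rows)):
        return None  # inconsistent: a zero row with nonzero right-hand side
    x = [0] * n_cols
    for r, c in pivots:
        x[c] = M[r][n_cols]  # free variables left at 0
    return x

# Hypothetical toy system: x + y = 3, x + 2y = 5 over GF(7).
print(solve_mod_q([[1, 1], [1, 2]], [3, 5], 7))  # -> [1, 2]
```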
On the entropy flows to disorder
Gamma distributions, which contain the exponential as a special case, have a
distinguished place in the representation of near-Poisson randomness for
statistical processes; typically, they represent distributions of spacings
between events or voids among objects. Here we look at the properties of the
Shannon entropy function and calculate its corresponding flow curves. We
consider univariate and bivariate gamma, as well as Weibull distributions which
also include exponential distributions. Comment: Enlarged version of original. 11 pages, 6 figures, 15 references
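For reference, the Shannon entropy of a gamma distribution with shape k and scale theta has the known closed form H = k + ln(theta) + ln(Gamma(k)) + (1 - k) psi(k), which reduces to H = 1 + ln(theta) in the exponential case k = 1. A minimal Python sketch; the digamma approximation below is our illustrative choice, not taken from the paper:

```python
import math

def digamma(x):
    """Digamma psi(x) via the recurrence psi(x) = psi(x+1) - 1/x
    plus an asymptotic series for large arguments (illustrative accuracy)."""
    acc = 0.0
    while x < 6:
        acc -= 1.0 / x
        x += 1.0
    return acc + math.log(x) - 1/(2*x) - 1/(12*x**2) + 1/(120*x**4) - 1/(252*x**6)

def gamma_entropy(k, theta):
    """Shannon entropy of Gamma(shape=k, scale=theta), in nats."""
    return k + math.log(theta) + math.lgamma(k) + (1 - k) * digamma(k)

# Exponential special case (k = 1, unit scale): entropy is exactly 1 nat.
print(gamma_entropy(1.0, 1.0))  # -> 1.0
```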
A network model of interpersonal alignment in dialog
In dyadic communication, both interlocutors adapt to each other linguistically, that is, they align interpersonally. In this article, we develop a framework for modeling interpersonal alignment in terms of the structural similarity of the interlocutors' dialog lexica. This is done by means of so-called two-layer time-aligned network series, that is, a time-adjusted graph model. The graph model is partitioned into two layers, so that the interlocutors' lexica are captured as subgraphs of an encompassing dialog graph. Each constituent network of the series is updated utterance-wise. Thus, both the inherent bipartition of dyadic conversations and their gradual development are modeled. The notion of alignment is then operationalized within a quantitative model of structure formation based on the mutual information of the subgraphs that represent the interlocutors' dialog lexica. By adapting and further developing several models of complex network theory, we show that dialog lexica evolve as a novel class of graphs that have not been considered before in the area of complex (linguistic) networks. Additionally, we show that our framework allows for classifying dialogs according to their alignment status. To the best of our knowledge, this is the first approach to measuring alignment in communication that explores the similarities of graph-like cognitive representations. Keywords: alignment in communication; structural coupling; linguistic networks; graph distance measures; mutual information of graphs; quantitative network analysis
Reverse-engineering of polynomial dynamical systems
Multivariate polynomial dynamical systems over finite fields have been
studied in several contexts, including engineering and mathematical biology. An
important problem is to construct models of such systems from a partial
specification of dynamic properties, e.g., from a collection of state
transition measurements. Here, we consider static models, which are directed
graphs that represent the causal relationships between system variables,
so-called wiring diagrams. This paper contains an algorithm which computes all
possible minimal wiring diagrams for a given set of state transition
measurements. The paper also contains several statistical measures for model
selection. The algorithm uses primary decomposition of monomial ideals as the
principal tool. An application to the reverse-engineering of a gene regulatory
network is included. The algorithm and the statistical measures are implemented
in Macaulay2 and are available from the authors.
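The combinatorial core of the construction described above can be sketched without computer algebra: for a fixed coordinate, any two observed input states mapped to different outputs must differ in at least one influencing variable, so the minimal wiring diagrams for that coordinate are the minimal hitting sets of these "difference sets" (the primary components of the associated monomial ideal). A brute-force Python sketch on hypothetical Boolean data, not the authors' Macaulay2 implementation:

```python
from itertools import combinations

def minimal_wiring_sets(transitions, j):
    """All inclusion-minimal variable sets that can explain coordinate j.

    transitions: list of (state, next_state) tuples of equal-length tuples.
    Brute force over all subsets; suitable for toy examples only.
    """
    n = len(transitions[0][0])
    # Difference sets: where inputs differ, for pairs with different j-th outputs.
    diffs = [{i for i in range(n) if s[i] != s2[i]}
             for (s, t), (s2, t2) in combinations(transitions, 2)
             if t[j] != t2[j]]
    minimal = []
    for size in range(n + 1):
        for cand in combinations(range(n), size):
            cs = set(cand)
            # Keep cs if it meets every difference set and contains no
            # already-found smaller set (inclusion-minimality).
            if all(cs & d for d in diffs) and not any(m <= cs for m in minimal):
                minimal.append(cs)
    return minimal

# Hypothetical system over F_2: f(x0, x1) = (x1, x0), i.e. a swap.
ts = [((0, 0), (0, 0)), ((0, 1), (1, 0)), ((1, 0), (0, 1)), ((1, 1), (1, 1))]
print(minimal_wiring_sets(ts, 0))  # -> [{1}]: coordinate 0 depends only on x1
```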
Normalized Web Distance and Word Similarity
There is a great deal of work in cognitive psychology, linguistics, and
computer science, about using word (or phrase) frequencies in context in text
corpora to develop measures for word similarity or word association, going back
to at least the 1960s. The goal of this chapter is to introduce the
normalized web distance (NWD) method to determine similarity between words and
phrases. It is a general way to tap the amorphous low-grade knowledge available
for free on the Internet, typed in by local users aiming at personal
gratification of diverse objectives, and yet globally achieving what is
effectively the largest semantic electronic database in the world. Moreover,
this database is available for all by using any search engine that can return
aggregate page-count estimates for a large range of search-queries. In the
paper introducing the NWD it was called `normalized Google distance (NGD),' but
since Google doesn't allow computer searches anymore, we opt for the more
neutral and descriptive NWD. Comment: Latex, 20 pages, 7 figures, to appear in: Handbook of Natural
Language Processing, Second Edition, Nitin Indurkhya and Fred J. Damerau
Eds., CRC Press, Taylor and Francis Group, Boca Raton, FL, 2010, ISBN
978-142008592
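The NWD is computed from aggregate page counts. With f(x) and f(y) the counts for each term, f(x, y) the joint count, and N the total number of indexed pages, the distance defined in the NGD/NWD literature is NWD(x, y) = (max{log f(x), log f(y)} - log f(x, y)) / (log N - min{log f(x), log f(y)}). A minimal Python sketch; all counts in the example are made up, not real search data:

```python
import math

def nwd(fx, fy, fxy, n_total):
    """Normalized Web Distance from aggregate page counts.

    fx, fy: pages containing each term; fxy: pages containing both;
    n_total: total pages indexed. All counts here are illustrative.
    """
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n_total) - min(lx, ly))

# Terms with identical co-occurrence profiles are at distance 0.
print(nwd(1000, 1000, 1000, 10**9))  # -> 0.0
```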