A survey of random processes with reinforcement
The models surveyed include generalized P\'{o}lya urns, reinforced random
walks, interacting urn models, and continuous reinforced processes. Emphasis is
on methods and results, with sketches provided of some proofs. Applications are
discussed in statistics, biology, economics and a number of other areas.
Comment: Published at http://dx.doi.org/10.1214/07-PS094 in the Probability
Surveys (http://www.i-journals.org/ps/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
Connectivity-Based Self-Localization in WSNs
Efficient localization methods are among the major challenges in wireless sensor networks today. In this paper, we present our so-called connectivity-based approach, i.e., an approach based on local connectivity information, to tackle this problem. First, the method fragments the network into larger groups labeled as packs. Based on the mutual connectivity relations with their surrounding packs, we identify border nodes as well as the central node. As this first approach requires some a priori knowledge of the network topology, we also present a novel segment-based fragmentation method that estimates the central pack of the network and detects so-called corner packs without any a priori knowledge. Based on these detected points, the network is fragmented into a set of even larger elements, so-called segments, built on top of the packs; as they all reach the central node, they provide even more localization information.
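The pack construction above relies only on local connectivity. As a minimal sketch, assuming an adjacency-dict graph representation, the following Python toy groups nodes into hop-distance layers as a stand-in for pack fragmentation (the function names and the `pack_width` parameter are ours, not the paper's):

```python
from collections import deque

def hop_distances(adj, seed):
    """BFS hop counts from a seed node over an adjacency dict."""
    dist = {seed: 0}
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def packs_by_hops(adj, seed, pack_width=2):
    """Group nodes into 'packs' of consecutive BFS layers -- a toy
    stand-in for the paper's fragmentation, which we do not reproduce."""
    dist = hop_distances(adj, seed)
    packs = {}
    for node, d in dist.items():
        packs.setdefault(d // pack_width, set()).add(node)
    return packs
```

Real connectivity-based schemes must also pick the seed and detect border/corner nodes from pack adjacency, which this sketch leaves out.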
Switcher-random-walks: a cognitive-inspired mechanism for network exploration
Semantic memory is the subsystem of human memory that stores knowledge of
concepts or meanings, as opposed to life specific experiences. The organization
of concepts within semantic memory can be understood as a semantic network,
where the concepts (nodes) are associated (linked) to others depending on
perceptions, similarities, etc. Lexical access is the complementary part of
this system and allows the retrieval of such organized knowledge. While
conceptual information is stored under certain underlying organization (and
thus gives rise to a specific topology), it is crucial to have an accurate
access to any of the information units, e.g. the concepts, for efficiently
retrieving semantic information for real-time needs. An example of an
information retrieval process occurs in verbal fluency tasks, which are known
to involve two different mechanisms: "clustering", or generating words within a
subcategory, and, when a subcategory is exhausted, "switching" to a new
subcategory. We extended this approach to random walking on a network
(clustering) in combination with jumping (switching) to any node with a certain
probability, and derived its analytical expression based on Markov chains.
Results show that this dual mechanism contributes to optimize the exploration
of different network models in terms of the mean first passage time.
Additionally, this cognitive-inspired dual mechanism opens a new framework to
better understand and evaluate exploration, propagation and transport phenomena
in other complex systems where switching-like phenomena are feasible.
Comment: 9 pages, 3 figures. Accepted in "International Journal of
Bifurcation and Chaos": Special issue on "Modelling and Computation on
Complex Networks"
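The clustering/switching walk described above can also be simulated directly. A minimal Monte Carlo sketch in Python, assuming an adjacency-dict graph and writing q for the switching probability (the function name and parameters are ours; the paper derives the mean first-passage time analytically via Markov chains rather than by simulation):

```python
import random

def switcher_mfpt(adj, source, target, q=0.1, trials=2000, seed=0):
    """Monte Carlo estimate of the mean first-passage time from source
    to target under a switcher random walk: with probability q, jump to
    a uniformly random node ('switching'); otherwise, step to a random
    neighbour ('clustering'). q=0 recovers the plain random walk."""
    rng = random.Random(seed)
    nodes = list(adj)
    total = 0
    for _ in range(trials):
        u, steps = source, 0
        while u != target:
            if rng.random() < q:
                u = rng.choice(nodes)    # switch: teleport anywhere
            else:
                u = rng.choice(adj[u])   # cluster: local walk step
            steps += 1
        total += steps
    return total / trials
```

Sweeping q between 0 and 1 on a given network reproduces, in simulation, the trade-off the abstract optimizes: local exploration versus global jumps.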
Quantifying the consistency of scientific databases
Science is a social process with far-reaching impact on our modern society.
In recent years, for the first time, we are able to scientifically study
science itself. This is enabled by the massive amounts of data on scientific
publications that are increasingly becoming available. The data are contained in
several databases such as Web of Science or PubMed, maintained by various
public and private entities. Unfortunately, these databases are not always
consistent, which considerably hinders this study. Relying on the powerful
framework of complex networks, we conduct a systematic analysis of the
consistency among six major scientific databases. We found that identifying a
single "best" database is far from easy. Nevertheless, our results indicate
appreciable differences in mutual consistency of different databases, which we
interpret as recipes for future bibliometric studies.
Comment: 20 pages, 5 figures, 4 tables
An inverse of the evaluation functional for typed Lambda-calculus
In any model of typed λ-calculus containing some basic
arithmetic, a functional p→e (procedure → expression)
will be defined which inverts the evaluation functional
for typed λ-terms. Combined with the evaluation
functional, p→e yields an efficient normalization algorithm.
The method is extended to λ-calculi with constants
and is used to normalize (the λ-representations
of) natural deduction proofs of (higher order) arithmetic.
A consequence of theoretical interest is a strong
completeness theorem for βη-reduction, generalizing
results of Friedman [1] and Statman [3]: If two λ-terms
have the same value in some model containing
representations of the primitive recursive functions
(of level 1) then they are provably equal in the
βη-calculus.
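The "inverse of the evaluation functional" p→e is the read-back direction of what is now called normalization by evaluation: evaluate a term to a semantic value, then reify the value back into a normal-form term by probing functions with fresh variables. A minimal untyped sketch in Python (the paper works in a typed setting over models with basic arithmetic, which this toy does not reproduce; all names here are ours):

```python
import itertools

# Terms: ('var', x) | ('lam', x, body) | ('app', f, a)
_fresh = itertools.count()

def evaluate(term, env):
    """Evaluate a syntactic term to a semantic value: lambdas become
    Python closures; stuck applications stay as neutral term tuples."""
    tag = term[0]
    if tag == 'var':
        return env[term[1]]
    if tag == 'lam':
        _, x, body = term
        return lambda v: evaluate(body, {**env, x: v})
    _, f, a = term
    fv, av = evaluate(f, env), evaluate(a, env)
    return fv(av) if callable(fv) else ('app', fv, av)

def reify(value):
    """Read a semantic value back into a normal-form term (the
    'inverse of evaluation'): apply closures to fresh neutral vars."""
    if callable(value):
        x = 'x%d' % next(_fresh)
        return ('lam', x, reify(value(('var', x))))
    if value[0] == 'app':
        return ('app', reify(value[1]), reify(value[2]))
    return value  # a neutral variable

def normalize(term):
    """Normalization by evaluation: evaluate, then reify."""
    return reify(evaluate(term, {}))
```

For example, normalizing the closed redex (λx.x)(λy.y) evaluates it in the model and reads back an identity-shaped lambda, the β-normal form.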