Talking Nets: A Multi-Agent Connectionist Approach to Communication and Trust between Individuals
A multi-agent connectionist model is proposed that consists of a collection of individual recurrent networks that communicate with each other, and as such is a network of networks. The individual recurrent networks simulate the processes of information uptake, integration, and memorization within individual agents, while the communication of beliefs and opinions between agents is propagated along connections between the individual networks. A crucial aspect of belief updating based on information from other agents is the trust placed in the information provided. In the model, trust is determined by the consistency with the receiving agent's existing beliefs, and results in changes to the connections between individual networks, called trust weights. Thus activation spreading and weight change between individual networks are analogous to standard connectionist processes, although trust weights take on a specific function. Specifically, they lead to a selective propagation, and thus a filtering out, of less reliable information, and they implement Grice's (1975) maxims of quality and quantity in communication. The unique contribution of communicative mechanisms beyond the intra-personal processing of individual networks was explored in simulations of key phenomena involving persuasive communication and polarization, lexical acquisition, the spreading of stereotypes and rumors, and the failure to share unique information in group decisions.
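The trust-weight idea in this abstract can be illustrated with a minimal sketch, assuming a simple linear update rule: incoming beliefs are integrated in proportion to the trust weight, and the trust weight itself drifts toward the consistency between the message and the receiver's existing beliefs. The function names, the consistency measure, and the learning rate below are illustrative, not taken from the paper.

```python
def consistency(own_beliefs, message):
    """1.0 when the message matches existing beliefs exactly, lower otherwise.
    Beliefs are assumed to be activation values in [0, 1]."""
    diffs = [abs(a - b) for a, b in zip(own_beliefs, message)]
    return 1.0 - sum(diffs) / len(diffs)

def receive(own_beliefs, message, trust, lr=0.2):
    """Integrate an incoming message, gated by the current trust weight,
    then move the trust weight toward the message's consistency."""
    updated = [b + trust * lr * (m - b) for b, m in zip(own_beliefs, message)]
    new_trust = trust + lr * (consistency(own_beliefs, message) - trust)
    return updated, new_trust

beliefs, trust = [0.9, 0.1], 0.5
beliefs, trust = receive(beliefs, [0.8, 0.2], trust)  # consistent message: trust rises to 0.58
```

Under this toy rule, agents that repeatedly send messages inconsistent with the receiver's beliefs see their trust weight decay, so their information is progressively filtered out, which is the selective-propagation behavior the abstract describes.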
Multi-Agent Simulation of Emergence of Schwa Deletion Pattern in Hindi
Recently, there has been a revival of interest in multi-agent simulation techniques for exploring the nature of language change. However, a lack of appropriate validation of simulation experiments against real language data often calls into question the general applicability of these methods in modeling realistic language change. We address this issue here by attempting to model the phenomenon of schwa deletion in Hindi through a multi-agent simulation framework. The pattern of Hindi schwa deletion and its diachronic nature are well studied, not only out of general linguistic inquiry, but also to facilitate Hindi grapheme-to-phoneme conversion, which is a preprocessing step in text-to-speech synthesis. We show that under certain conditions, the schwa deletion pattern observed in modern Hindi emerges in the system from an initial state of no deletion. The simulation framework described in this work can be extended to model other phonological changes as well.

Keywords: Language Change, Linguistic Agent, Language Game, Multi-Agent Simulation, Schwa Deletion
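The kind of deletion pattern whose emergence the simulation targets can be illustrated with a toy rule applier: a word-medial schwa is dropped when it stands between a preceding vowel-consonant and a following consonant-vowel sequence, scanning right to left. The VC_CV context, the phoneme inventory, and the transcription below are simplifications for illustration only, not the paper's model.

```python
SCHWA = "ə"
VOWELS = {"ə", "a", "i", "u", "e", "o"}  # toy phoneme inventory

def delete_schwas(word):
    """Right-to-left pass: delete a word-medial schwa standing in a
    VC_CV context (vowel, consonant, schwa, consonant, vowel)."""
    seg = list(word)
    i = len(seg) - 3          # last position with a CV sequence after it
    while i >= 2:             # first position with a VC sequence before it
        if (seg[i] == SCHWA
                and seg[i - 2] in VOWELS and seg[i - 1] not in VOWELS
                and seg[i + 1] not in VOWELS and seg[i + 2] in VOWELS):
            del seg[i]
        i -= 1
    return "".join(seg)

print(delete_schwas("kərəna"))  # kərna
```

A toy sketch like this only applies a fixed synchronic rule; the point of the paper's simulation is that such a pattern can emerge diachronically from agent interactions starting with no deletion at all.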
Modeling the emergence of universality in color naming patterns
The empirical evidence that human color categorization exhibits some
universal patterns beyond superficial discrepancies across different cultures
is a major breakthrough in cognitive science. As observed in the World Color
Survey (WCS), indeed, any two groups of individuals develop quite different
categorization patterns, but some universal properties can be identified by a
statistical analysis over a large number of populations. Here, we reproduce the
WCS in a numerical model in which different populations develop independently
their own categorization systems by playing elementary language games. We find
that a simple perceptual constraint shared by all humans, namely the human Just
Noticeable Difference (JND), is sufficient to trigger the emergence of
universal patterns that unconstrained cultural interaction fails to produce. We
test the results of our experiment against real data by performing the same
statistical analysis proposed to quantify the universal tendencies shown in the
WCS [Kay P and Regier T. (2003) Proc. Natl. Acad. Sci. USA 100: 9085-9089], and
obtain an excellent quantitative agreement. This work confirms that synthetic
modeling has nowadays reached the maturity to contribute significantly to the
ongoing debate in cognitive science.

Comment: Supplementary Information available here:
http://www.pnas.org/content/107/6/2403/suppl/DCSupplementa
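The "elementary language games" mentioned above can be sketched in their simplest form as a naming game, in which agents converge on a shared name through repeated pairwise interactions. This toy version omits the perceptual (JND) machinery and the color space of the actual model, and all parameters are illustrative.

```python
import random

def naming_game(n_agents=20, n_rounds=3000, seed=0):
    """Minimal naming game for a single object: repeated speaker/hearer
    interactions drive the population toward one shared name."""
    rng = random.Random(seed)
    lexicons = [set() for _ in range(n_agents)]
    for _ in range(n_rounds):
        s, h = rng.sample(range(n_agents), 2)      # pick speaker and hearer
        if not lexicons[s]:
            lexicons[s].add(rng.randrange(10**6))  # speaker invents a name
        word = rng.choice(sorted(lexicons[s]))
        if word in lexicons[h]:                    # success: both keep only this name
            lexicons[s] = {word}
            lexicons[h] = {word}
        else:                                      # failure: hearer records the name
            lexicons[h].add(word)
    return lexicons

lexicons = naming_game()
```

In the model discussed above, games of this kind are played over a continuous perceptual space, and the JND constraint limits which stimuli agents can discriminate; that shared constraint is what pushes independently evolving populations toward the universal patterns seen in the WCS.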
Communication and rational responsiveness to the world
Donald Davidson has long maintained that in order to be credited with the concept of objectivity, and so with language and thought, it is necessary to communicate with at least one other speaker. I here examine Davidson's central argument for this thesis and argue that it is unsuccessful. Subsequently, I turn to Robert Brandom's defense of the thesis in Making It Explicit. I argue that, contrary to Brandom, in order to possess the concept of objectivity it is not necessary to engage in the practice of interpersonal reasoning, because possession of the concept is independently integral to the practice of intrapersonal reasoning.
Half a billion simulations: evolutionary algorithms and distributed computing for calibrating the SimpopLocal geographical model
Multi-agent geographical models integrate very large numbers of spatial
interactions. In order to validate those models large amount of computing is
necessary for their simulation and calibration. Here a new data processing
chain including an automated calibration procedure is experimented on a
computational grid using evolutionary algorithms. This is applied for the first
time to a geographical model designed to simulate the evolution of an early
urban settlement system. The method enables us to reduce the computing time and
provides robust results. Using this method, we identify several parameter
settings that minimise three objective functions that quantify how closely the
model results match a reference pattern. As the values of each parameter in
different settings are very close, this estimation considerably reduces the
initial possible domain of variation of the parameters. The model is thus a
useful tool for further multiple applications on empirical historical
situations.
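The calibration loop described above can be sketched, in a single-objective toy form, as an evolutionary search over parameter settings. The actual study minimised three objective functions on a computational grid; the operators, rates, and example objective below are illustrative assumptions, not the study's configuration.

```python
import random

def evolve(objective, bounds, pop_size=30, generations=200, seed=0):
    """Toy evolutionary calibration: search a bounded parameter space for
    settings that minimise an objective (model-vs-reference mismatch)."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]   # truncation selection (elitist)
        children = []
        for p in parents:
            # Gaussian mutation, clamped to the parameter bounds
            child = [min(hi, max(lo, x + rng.gauss(0, 0.1 * (hi - lo))))
                     for x, (lo, hi) in zip(p, bounds)]
            children.append(child)
        pop = parents + children
    return min(pop, key=objective)

# Hypothetical example: calibrate two parameters whose best fit is (0.3, 0.7).
best = evolve(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2,
              bounds=[(0.0, 1.0), (0.0, 1.0)])
```

Each evaluation of `objective` stands in for a full model simulation, which is why the real calibration required half a billion runs and a distributed computing grid.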
Embodied Artificial Intelligence through Distributed Adaptive Control: An Integrated Framework
In this paper, we argue that the future of Artificial Intelligence research
resides in two keywords: integration and embodiment. We support this claim by
analyzing the recent advances of the field. Regarding integration, we note that
the most impactful recent contributions have been made possible through the
integration of recent Machine Learning methods (based in particular on Deep
Learning and Recurrent Neural Networks) with more traditional ones (e.g.
Monte-Carlo tree search, goal babbling exploration or addressable memory
systems). Regarding embodiment, we note that the traditional benchmark tasks
(e.g. visual classification or board games) are becoming obsolete as
state-of-the-art learning algorithms approach or even surpass human performance
in most of them, having recently encouraged the development of first-person 3D
game platforms embedding realistic physics. Building upon this analysis, we
first propose an embodied cognitive architecture integrating heterogenous
sub-fields of Artificial Intelligence into a unified framework. We demonstrate
the utility of our approach by showing how major contributions of the field can
be expressed within the proposed framework. We then claim that benchmarking
environments need to reproduce ecologically-valid conditions for bootstrapping
the acquisition of increasingly complex cognitive skills through the concept of
a cognitive arms race between embodied agents.

Comment: Updated version of the paper accepted to the ICDL-Epirob 2017
conference (Lisbon, Portugal).
A design recording framework to facilitate knowledge sharing in collaborative software engineering
This paper describes an environment that allows a development team to share knowledge about software artefacts
by recording decisions and rationales as well as supporting the team in formulating and maintaining design constraints. It explores the use of multi-dimensional design spaces for capturing various issues arising during development and presenting this meta-information using a network of views. It describes a framework to underlie the collaborative environment and shows the supporting architecture and its implementation. It addresses how the artefacts and their meta-information are captured in a non-invasive way and shows how an artefact repository is embedded to store and manage the artefacts.