42,513 research outputs found
Emergence of Self-Organized Symbol-Based Communication in Artificial Creatures
In this paper, we describe a digital scenario in which we simulated the emergence of self-organized symbol-based communication among artificial creatures inhabiting a virtual world of unpredictable predatory events. In our experiment, creatures are autonomous agents that learn symbolic relations in an unsupervised manner, with no explicit feedback, and are able to engage in dynamic and autonomous communicative interactions with other creatures, even simultaneously. In order to synthesize a behavioral ecology and infer the minimum organizational constraints for the design of our creatures, we examined the well-studied case of communication in vervet monkeys. Our results show that the creatures, assuming the role of sign users and learners, behave collectively as a complex adaptive system, in which self-organized communicative interactions play a major role in the emergence of symbol-based communication. We also strive in this paper for a careful use of the theoretical concepts involved, including the concepts of symbol and emergence, and we make use of a multi-level model for explaining the emergence of symbols in semiotic systems as a basis for the interpretation of inter-level relationships in the semiotic processes we are studying.
Forty hours of declarative programming: Teaching Prolog at the Junior College Utrecht
This paper documents our experience using declarative languages to give
secondary school students a first taste of Computer Science. The course aims to
teach students a bit about programming in Prolog, but also exposes them to
important Computer Science concepts, such as unification or searching
strategies. Using Haskell's Snap Framework in combination with our own
NanoProlog library, we have developed a web application to teach this course.
Comment: In Proceedings TFPIE 2012, arXiv:1301.465
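Unification, one of the Computer Science concepts the course highlights, can be illustrated with a short sketch. This is a minimal, illustrative implementation in Python, not the course's NanoProlog library; the term representation (uppercase strings as variables, tuples as compound terms) is an assumption made for the example, and the sketch omits Prolog's occurs check.

```python
# Minimal sketch of Prolog-style unification (illustrative only, not the
# NanoProlog library). Variables are strings starting with an uppercase
# letter; compound terms are tuples such as ("point", "X", "1").

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until reaching a non-variable or an unbound one.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        subst[a] = b
        return subst
    if is_var(b):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # clash: distinct constants or different arities

# point(X, 1) unifies with point(2, Y) under {X: 2, Y: 1}
print(unify(("point", "X", "1"), ("point", "2", "Y")))
```

Searching strategies, the other concept the abstract mentions, then amount to trying rule after rule with this operation and backtracking whenever it returns `None`.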
The Emergence of Symbol-Based Communication in a Complex System of Artificial Creatures
We present here a digital scenario to simulate the emergence of self-organized symbol-based communication among artificial creatures inhabiting a virtual world of predatory events. In order to design the environment and creatures, we draw theoretical and empirical constraints from C. S. Peirce's semiotics and an ethological case study of communication among animals. Our results show that the creatures, assuming the role of sign users and learners, behave collectively as a complex system, in which self-organization of communicative interactions plays a major role in the emergence of symbol-based communication. We also strive for a careful use of the theoretical concepts involved, including the concepts of symbol, communication, and emergence, and we use a multi-level model as a basis for the interpretation of inter-level relationships in the semiotic processes we are studying.
Approaches to Interpreter Composition
In this paper, we compose six different Python and Prolog VMs into four pairwise
compositions: one using C interpreters; one running on the JVM; one using
meta-tracing interpreters; and one using a C interpreter and a meta-tracing
interpreter. We show that programs that cross the language barrier frequently
execute faster in a meta-tracing composition, and that meta-tracing imposes a
significantly lower overhead on composed programs relative to mono-language
programs.
Comment: 33 pages, 1 figure, 9 tables
Synthetic Semiotics: on modelling and simulating the emergence of sign processes
Based on formal-theoretical principles about the sign processes involved, we have built synthetic experiments to investigate the emergence of communication based on symbols and indexes in a distributed system of sign users, following theoretical constraints from C. S. Peirce's theory of signs in a Synthetic Semiotics approach. In this paper, we summarize these computational experiments and their results regarding associative learning processes of the symbolic sign modality, and the cognitive conditions in an evolutionary process for the emergence of either symbol-based or index-based communication.
From treebank resources to LFG F-structures
We present two methods for automatically annotating treebank resources with functional structures. Both methods define systematic patterns of correspondence between partial phrase-structure (PS) configurations and functional structures. These are applied to PS rules extracted from treebanks, or directly to constraint-set encodings of treebank PS trees.
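The general idea of pattern-based annotation can be sketched in a few lines. The rule patterns and the equation syntax below are hypothetical toy examples for illustration, not the paper's actual annotation scheme: each (mother, daughter) configuration is mapped to an LFG-style functional equation.

```python
# Toy sketch of annotating phrase-structure (PS) rules with LFG-style
# functional equations. The patterns below are invented for illustration,
# not the paper's annotation scheme.

# Map a (mother, daughter) configuration to a functional annotation:
# "up = down" shares the mother's f-structure (head daughters);
# "up.ROLE = down" embeds the daughter's f-structure under a function.
PATTERNS = {
    ("S", "NP"): "up.SUBJ = down",
    ("S", "VP"): "up = down",
    ("VP", "V"): "up = down",
    ("VP", "NP"): "up.OBJ = down",
}

def annotate(mother, daughters):
    """Attach a functional annotation to each daughter of a PS rule."""
    # Default to head-sharing when no pattern matches the configuration.
    return [(d, PATTERNS.get((mother, d), "up = down")) for d in daughters]

for cat, eq in annotate("S", ["NP", "VP"]):
    print(cat, ":", eq)
```

Applied over all rules extracted from a treebank, such patterns yield an annotated grammar from which f-structures can be built.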
Fine-grained Language Composition: A Case Study
Although run-time language composition is common, it normally takes the form
of a crude Foreign Function Interface (FFI). While useful, such compositions
tend to be coarse-grained and slow. In this paper we introduce a novel
fine-grained syntactic composition of PHP and Python which allows users to
embed each language inside the other, including referencing variables across
languages. This composition raises novel design and implementation challenges.
We show that good solutions can be found to the design challenges; and that the
resulting implementation imposes an acceptable performance overhead of, at
most, 2.6x.
Comment: 27 pages, 4 tables, 5 figures
An evaluation of the signature extension approach to large area crop inventories utilizing space image data
The author has identified the following significant results. Two haze correction algorithms were tested: CROP-A and XSTAR. CROP-A was tested in a unitemporal mode on data collected in 1973-74 over ten sample segments in Kansas. Because of the uniformly low level of haze present in these segments, no conclusion could be reached about CROP-A's ability to compensate for haze. It was noted, however, that in some cases CROP-A made serious errors which actually degraded classification performance. The haze correction algorithm XSTAR was tested in a multitemporal mode on 1975-76 LACIE sample segment data over 23 blind sites in Kansas and 18 sample segments in North Dakota, providing a wide range of haze levels and other conditions for algorithm evaluation. It was found that this algorithm substantially improved signature extension classification accuracy when a sum-of-likelihoods classifier was used with an alien rejection threshold.
Optimizing Frameworks Performance Using C++ Modules Aware ROOT
ROOT is a data analysis framework broadly used in and outside of High Energy
Physics (HEP). Since HEP software frameworks always strive for performance
improvements, ROOT was extended with experimental support of runtime C++
Modules. C++ Modules are designed to improve the performance of C++ code
parsing. C++ Modules offer a promising way to improve ROOT's runtime
performance by saving the C++ header parsing time which happens during ROOT
runtime. This paper presents the results and challenges of integrating C++
Modules into ROOT.
Comment: 8 pages, 3 figures, 6 listings, CHEP 2018 - 23rd International Conference on Computing in High Energy and Nuclear Physics
ROOT - A C++ Framework for Petabyte Data Storage, Statistical Analysis and Visualization
ROOT is an object-oriented C++ framework conceived in the high-energy physics
(HEP) community, designed for storing and analyzing petabytes of data in an
efficient way. Any instance of a C++ class can be stored into a ROOT file in a
machine-independent compressed binary format. In ROOT the TTree object
container is optimized for statistical data analysis over very large data sets
by using vertical data storage techniques. These containers can span a large
number of files on local disks, the web, or a number of different shared file
systems. In order to analyze these data, the user can choose from a wide set of
mathematical and statistical functions, including linear algebra classes,
numerical algorithms such as integration and minimization, and various methods
for performing regression analysis (fitting). In particular, ROOT offers
packages for complex data modeling and fitting, as well as multivariate
classification based on machine learning techniques. A central piece of these analysis tools is the set of histogram classes, which provide binning of one- and multi-dimensional data. Results can be saved in high-quality graphical formats like PostScript and PDF, or in bitmap formats like JPG or GIF. The results can
also be stored into ROOT macros that allow a full recreation and rework of the
graphics. Users typically create their analysis macros step by step, making use
of the interactive C++ interpreter CINT, while running over small data samples.
Once the development is finished, they can run these macros at full compiled
speed over large data sets, using on-the-fly compilation, or by creating a
stand-alone batch program. Finally, if processing farms are available, the user
can reduce the execution time of intrinsically parallel tasks - e.g. data
mining in HEP - by using PROOF, which will take care of optimally distributing
the work over the available resources in a transparent way.
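The fixed-width binning that ROOT's histogram classes perform can be sketched conceptually. This is plain Python for illustration, not ROOT's actual API; the function name is invented, but the underflow/overflow bookkeeping mirrors the convention ROOT's one-dimensional histograms use, where entries below the lower edge or at/above the upper edge are counted separately.

```python
# Conceptual sketch of fixed-width 1-D histogram binning, as performed by
# ROOT's histogram classes (e.g. TH1). Illustrative only, not ROOT's API.

def fill_histogram(values, nbins, lo, hi):
    """Return per-bin counts plus underflow/overflow counts."""
    width = (hi - lo) / nbins
    counts = [0] * nbins
    underflow = overflow = 0
    for v in values:
        if v < lo:
            underflow += 1          # below the first bin's lower edge
        elif v >= hi:
            overflow += 1           # at or above the last bin's upper edge
        else:
            counts[int((v - lo) / width)] += 1
    return counts, underflow, overflow

# Two bins over [0.0, 1.0): 0.1 and 0.4 land in bin 0, 0.5 in bin 1,
# 2.5 overflows and -1.0 underflows.
counts, under, over = fill_histogram([0.1, 0.4, 0.5, 2.5, -1.0],
                                     nbins=2, lo=0.0, hi=1.0)
print(counts, under, over)
```

In ROOT itself the user would fill a `TH1` object, which additionally tracks sums and errors needed for the statistical analysis described above.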