Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience
Physical symbol systems are needed for open-ended cognition. A good way to
understand physical symbol systems is by comparison of thought to chemistry.
Both have systematicity, productivity and compositionality. The state of the
art in cognitive architectures for open-ended cognition is critically assessed.
I conclude that a cognitive architecture that evolves symbol structures in the
brain is a promising candidate to explain open-ended cognition. Part 2 of the
paper presents such a cognitive architecture.
Comment: Darwinian Neurodynamics. Submitted as a two-part paper to Living Machines 2013, Natural History Museum, London.
Connectionism, Analogicity and Mental Content
In Connectionism and the Philosophy of Psychology, Horgan and Tienson (1996) argue that cognitive
processes, pace classicism, are not governed by exceptionless, “representation-level” rules; they
are instead the work of defeasible cognitive tendencies subserved by the non-linear dynamics of
the brain's neural networks. Many theorists are sympathetic with the dynamical characterisation
of connectionism and the general (re)conception of cognition that it affords. But in all the
excitement surrounding the connectionist revolution in cognitive science, it has largely gone
unnoticed that connectionism adds, to the traditional focus on computational processes, a new
focus: one on the vehicles of mental representation, the entities that carry content through the
mind. Indeed, if Horgan and Tienson's dynamical characterisation of connectionism is on the
right track, then the relationship between computational processes and representational vehicles
is so intimate that connectionist cognitive science is committed to a resemblance theory of
mental content.
Agent-Based Models and Simulations in Economics and Social Sciences: from conceptual exploration to distinct ways of experimenting
Now that complex agent-based models and computer simulations are spreading across
economics and the social sciences, as in most sciences of complex systems,
epistemological puzzles (re)emerge. We introduce new epistemological tools to
show precisely to what extent each author is right in focusing on the
empirical, instrumental or conceptual significance of his model or simulation.
By distinguishing between models and simulations, between types of models,
between types of computer simulations and between types of empiricity,
section 2 gives conceptual tools to explain the rationale of the diverse
epistemological positions presented in section 1. Finally, we claim that
careful attention to the real multiplicity of the denotational powers of the
symbols at stake, and then to the implicit routes of reference operated by
models and computer simulations, is necessary to determine, in each case, the
proper epistemic status and credibility of a given model or simulation.
Towards a Formal Model of Recursive Self-Reflection
Self-awareness holds the promise of better decision making based on a comprehensive assessment of a system's own situation. It has therefore been studied for more than ten years in a range of settings and applications. However, the term has been used in the literature with a variety of meanings, and today there is no consensus on what features and properties it should include. In fact, researchers disagree on the relative benefits of a self-aware system compared to one that is very similar but lacks self-awareness.
We sketch a formal model, and thus a formal definition, of self-awareness. The model is based on dynamic dataflow semantics and includes self-assessment, simulation and abstraction as facilitating techniques, which are modeled by spawning new dataflow actors in the system. Most importantly, the model has a method to focus on any of its parts and make it a subject of analysis by applying abstraction, self-assessment and simulation. In particular, it can apply this process to itself, which we call recursive self-reflection. There is no arbitrary limit to this self-scrutiny other than resource constraints.
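The spawning of analysis actors described in this abstract can be gestured at in a toy sketch. The Python below is a loose imitation, not the paper's dataflow semantics; all names (`Actor`, `reflect`, the `depth` bound standing in for resource constraints) are hypothetical.

```python
# Toy sketch (all names hypothetical) of spawning analysis actors and
# recursively turning them on themselves.
class Actor:
    """A dataflow-style actor: consumes a token, produces a result."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def fire(self, token):
        return self.fn(token)

def abstraction(actor):
    """Spawn an actor exposing only a coarse view of another actor."""
    return Actor(f"abs({actor.name})", lambda t: {"of": actor.name, "coarse": True})

def assessment(actor, token):
    """Spawn an actor that records a judgement of another actor's output."""
    out = actor.fire(token)
    return Actor(f"assess({actor.name})", lambda t: {"of": actor.name, "output": out})

def simulate(actor, token):
    """Spawn an actor that replays another actor on a hypothetical input."""
    return Actor(f"sim({actor.name})", lambda t: actor.fire(token))

def reflect(actor, token, depth):
    """Apply abstraction, assessment and simulation to an actor, then turn
    the same machinery on one of the spawned analysis actors; `depth` stands
    in for the resource constraint that bounds the recursion."""
    if depth == 0:
        return [actor]
    spawned = [abstraction(actor), assessment(actor, token), simulate(actor, token)]
    return spawned + reflect(spawned[0], token, depth - 1)

doubler = Actor("doubler", lambda t: 2 * t)
chain = reflect(doubler, 3, depth=2)
print([a.name for a in chain])
```

At depth 2 the chain already contains analysis actors whose subject is itself an analysis actor, which is the base move of the recursive self-reflection the abstract describes.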
Interpolant-Based Transition Relation Approximation
In predicate abstraction, exact image computation is problematic, requiring
in the worst case an exponential number of calls to a decision procedure. For
this reason, software model checkers typically use a weak approximation of the
image. This can result in a failure to prove a property, even given an adequate
set of predicates. We present an interpolant-based method for strengthening the
abstract transition relation in case of such failures. This approach guarantees
convergence given an adequate set of predicates, without requiring an exact
image computation. We show empirically that the method converges more rapidly
than an earlier method based on counterexample analysis.
Comment: Conference version at CAV 2005. 17 pages, 9 figures.
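The gap between the exact abstract image and a weak approximation, which motivates the strengthening this abstract proposes, can be illustrated on a toy finite system. The sketch below is an assumption-laden Python illustration: it enumerates an eight-state system directly instead of calling a decision procedure, and it uses a Cartesian approximation as one example of a weak image; it does not compute interpolants, and all names and predicates are invented for the example.

```python
from itertools import product

# Toy concrete system (assumed for illustration): states are 0..7 and the
# transition is x -> (x + 1) % 8. Two predicates: "x is even", "x >= 4".
STATES = range(8)
def step(x): return (x + 1) % 8
PREDS = [lambda x: x % 2 == 0, lambda x: x >= 4]

def abstract(x):
    """Map a concrete state to its tuple of predicate truth values."""
    return tuple(p(x) for p in PREDS)

def exact_image(abs_states):
    """Exact abstract post-image: step every concrete state whose abstraction
    lies in abs_states, then re-abstract. In general this costs exponentially
    many decision-procedure calls, hence the approximation below."""
    return {abstract(step(x)) for x in STATES if abstract(x) in abs_states}

def cartesian_image(abs_states):
    """A weak (Cartesian) approximation: compute the possible values of each
    predicate independently and take the cross product. Cheaper, but it loses
    correlations between predicates and may admit spurious abstract states."""
    per_pred = [set() for _ in PREDS]
    for x in STATES:
        if abstract(x) in abs_states:
            for i, p in enumerate(PREDS):
                per_pred[i].add(p(step(x)))
    return set(product(*per_pred))

# Abstract states whose concretization is {0, 1, 2, 3}.
init = {(True, False), (False, False)}
print(sorted(exact_image(init)))      # three abstract successors
print(sorted(cartesian_image(init)))  # superset with a spurious state
```

Here the Cartesian image contains the spurious abstract state (odd, >= 4) that the exact image excludes; such over-approximation is what can make a proof fail even with an adequate predicate set.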
Predicate Abstraction with Indexed Predicates
Predicate abstraction provides a powerful tool for verifying properties of
infinite-state systems using a combination of a decision procedure for a subset
of first-order logic and symbolic methods originally developed for finite-state
model checking. We consider models containing first-order state variables,
where the system state includes mutable functions and predicates. Such a model
can describe systems containing arbitrarily large memories, buffers, and arrays
of identical processes. We describe a form of predicate abstraction that
constructs a formula over a set of universally quantified variables to describe
invariant properties of the first-order state variables. We provide a formal
justification of the soundness of our approach and describe how it has been
used to verify several hardware and software designs, including a
directory-based cache coherence protocol.
Comment: 27 pages, 4 figures, 1 table. A short version appeared in the International Conference on Verification, Model Checking and Abstract Interpretation (VMCAI'04), LNCS 2937, pages 267--28.
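The idea of indexed predicates, formulas with a free index variable used to state universally quantified invariants over arrays of identical processes, can be illustrated on a toy mutual-exclusion model. The Python sketch below is hypothetical: it checks the quantified invariant by explicit enumeration for a fixed N, whereas the paper's technique establishes such invariants symbolically for arbitrary N.

```python
# Toy model (assumed for illustration): N identical processes, each "idle" or
# "crit"; a process may enter "crit" only when no process is in "crit".
N = 3

def initial():
    return ("idle",) * N

def successors(s):
    out = []
    for j in range(N):
        if s[j] == "idle" and all(x != "crit" for x in s):
            out.append(s[:j] + ("crit",) + s[j + 1:])
        if s[j] == "crit":
            out.append(s[:j] + ("idle",) + s[j + 1:])
    return out

# An indexed predicate: a formula with a free index variable i, encoded here
# as a function of (state, i). phi(s, i) says: if process i is critical,
# then no other process is.
phi = lambda s, i: s[i] != "crit" or all(s[k] != "crit" for k in range(N) if k != i)

def holds_universally(s, pred):
    """Check the universally quantified invariant: forall i. pred(s, i)."""
    return all(pred(s, i) for i in range(N))

# Enumerate the reachable states and check the invariant on each one.
seen, frontier = {initial()}, [initial()]
while frontier:
    s = frontier.pop()
    for t in successors(s):
        if t not in seen:
            seen.add(t)
            frontier.append(t)

assert all(holds_universally(s, phi) for s in seen)
print(f"{len(seen)} reachable states; mutual exclusion holds in all of them")
```

The enumeration works only because N is fixed and tiny; the point of quantified predicate abstraction is to conclude the same universally quantified invariant without enumerating states at all.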