Tractability and the computational mind
We give an overview of logical and computational explanations of the notion of tractability as applied in cognitive science. We start by introducing the basics of mathematical theories of complexity: computability theory, computational complexity theory, and descriptive complexity theory. Computational philosophy of mind often identifies mental algorithms with computable functions. However, as programming practice has developed, it has become apparent that for some computable problems, efficient algorithms are hardly attainable. Some problems need too many computational resources, e.g., time or memory, to be practically computable.
Computational complexity theory is concerned with the amount of resources required for the execution of algorithms and, hence, the inherent difficulty of computational problems. An important goal of computational complexity theory is to categorize computational problems via complexity classes, and in particular, to identify efficiently solvable problems and draw a line between tractability and intractability.
We survey how complexity can be used to study the computational plausibility of cognitive theories. We especially emphasize the methodological and mathematical assumptions behind applying complexity theory in cognitive science. We pay special attention to examples of applying the logical and computational complexity toolbox in different domains of cognitive science, focusing mostly on theoretical and experimental research in psycholinguistics and social cognition.
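The gap between tractable and intractable problems described above can be made concrete with a small sketch (not from the paper; the functions and inputs here are illustrative assumptions). It contrasts a polynomial-time check with a brute-force exponential search over all subsets, the kind of growth that makes a problem practically uncomputable as inputs scale:

```python
# Illustrative sketch: a polynomial-time problem vs. a brute-force
# exponential search, the gap complexity theory draws between
# tractability and intractability.
from itertools import combinations

def has_pair_sum(xs, target):
    """O(n^2): checks all pairs -- tractable."""
    return any(a + b == target for a, b in combinations(xs, 2))

def has_subset_sum(xs, target):
    """O(2^n): checks all subsets -- intractable in general."""
    return any(sum(c) == target
               for r in range(len(xs) + 1)
               for c in combinations(xs, r))

xs = list(range(1, 15))        # 14 items -> 2^14 = 16384 subsets
print(has_pair_sum(xs, 27))    # True (13 + 14)
print(has_subset_sum(xs, 105)) # True (the whole set sums to 105)
```

Adding one element to `xs` roughly squares nothing for the pair check but doubles the subset search, which is why brute force stops being "practically computable" long before it stops being computable.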
Playing to Learn, or to Keep Secret: Alternating-Time Logic Meets Information Theory
Many important properties of multi-agent systems refer to the participants' ability to achieve a given goal, or to prevent the system from reaching an undesirable state. Among intelligent agents, the goals are often of an epistemic nature, i.e., they concern the ability to obtain knowledge about an important fact \phi. Such properties can be expressed, e.g., in ATLK, that is, the alternating-time temporal logic ATL extended with epistemic operators. In many realistic scenarios, however, players do not need to fully learn the truth value of \phi. They may be almost as well off by gaining some knowledge, in other words, by reducing their uncertainty about \phi. Similarly, in order to keep \phi secret, it is often insufficient that the intruder never fully learns its truth value. Instead, one must require that his uncertainty about \phi never drops below a reasonable threshold.
With this motivation in mind, we introduce the logic ATLH, which extends ATL with quantitative modalities based on the Hartley measure of uncertainty. The new logic makes it possible to specify agents' abilities with respect to the uncertainty of a given player about a given set of statements. It turns out that ATLH has the same expressivity and model-checking complexity as ATLK. However, the new logic is exponentially more succinct than ATLK, which is the main technical result of this paper.
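The Hartley measure underlying ATLH is simply the base-2 logarithm of the number of options an agent cannot distinguish between. A minimal sketch (not from the paper; the function and the valuation encoding are illustrative assumptions) shows how reducing a player's set of epistemically possible valuations lowers this uncertainty:

```python
import math

def hartley(possibilities):
    """Hartley measure of uncertainty: log2 of the number of
    distinct options the agent considers possible."""
    n = len(set(possibilities))
    if n == 0:
        raise ValueError("empty possibility set")
    return math.log2(n)

# Agent considers all four valuations of two atoms possible: 2 bits.
print(hartley(["TT", "TF", "FT", "FF"]))  # 2.0
# After learning the first atom is true, two remain: 1 bit.
print(hartley(["TT", "TF"]))              # 1.0
# Full knowledge leaves a single possibility: 0 bits.
print(hartley(["TT"]))                    # 0.0
```

On this reading, a secrecy requirement of the kind described above would demand that the intruder's `hartley` value for the valuations of \phi stay above some threshold forever, rather than merely never reaching 0.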
09351 Abstracts Collection -- Information processing, rational belief change and social interaction
From 23.08. to 27.08.2009, the Dagstuhl Seminar 09351 ``Information processing, rational belief change and social interaction'' was held in Schloss Dagstuhl -- Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
Logical models for bounded reasoners
This dissertation aims at the logical modelling of aspects of human reasoning, informed by facts on the bounds of human cognition. We break down this challenge into three parts. In Part I, we discuss the place of logical systems for knowledge and belief in the Rationality Debate and we argue for systems that formalize an alternative picture of rationality -- one wherein empirical facts have a key role (Chapter 2). In Part II, we design logical models that encode explicitly the deductive reasoning of a single bounded agent and the variety of processes underlying it. This is achieved through the introduction of a dynamic, resource-sensitive, impossible-worlds semantics (Chapter 3). We then show that this type of semantics can be combined with plausibility models (Chapter 4) and that it can be instrumental in modelling the logical aspects of System 1 (“fast”) and System 2 (“slow”) cognitive processes (Chapter 5). In Part III, we move from single- to multi-agent frameworks. This unfolds in three directions: (a) the formation of beliefs about others (e.g. due to observation, memory, and communication), (b) the manipulation of beliefs (e.g. via acts of reasoning about oneself and others), and (c) the effect of the above on group reasoning. These questions are addressed, respectively, in Chapters 6, 7, and 8. We finally discuss directions for future work and we reflect on the contribution of the thesis as a whole (Chapter 9)
Towards integrated neural-symbolic systems for human-level AI: Two research programs helping to bridge the gaps
After a human-level-AI-oriented overview of the status quo in neural-symbolic integration, two research programs aiming at overcoming long-standing challenges in the field are suggested to the community. The first program targets a better understanding of the foundational differences and relationships, at the level of computational complexity, between symbolic and subsymbolic computation and representation; this could explain the empirical differences between the paradigms in application scenarios and provide a foothold for subsequent attempts at overcoming them. The second program suggests a new approach and computational architecture for the cognitively inspired anchoring of an agent's learning, knowledge formation, and higher reasoning abilities in real-world interactions, through a closed neural-symbolic acting/sensing-processing-reasoning cycle; this could provide new foundations for future agent architectures, multi-agent systems, robotics, and cognitive systems, and facilitate a deeper understanding of development and interaction in human-technological settings.