Postdictive Reasoning in Epistemic Action Theory
If an agent executes an action, this will not only change the world physically, but also the agent's knowledge about the world. The occurrence of an action can therefore be modeled as an epistemic state transition which maps the knowledge state of an agent to a successor knowledge state. For example, consider an agent that executes an action a in a state s_0. This causes a transition to a state s_1. Subsequently, the agent executes a sensing action a_s, which produces knowledge and causes a transition to a state s_2. With the information gained by sensing, the agent can not only extend its knowledge about s_2, but also infer additional knowledge about the initial state s_0. That is, the agent uses knowledge about the present to retrospectively acquire additional information about the past. We refer to this temporal form of epistemic inference as postdiction. Existing action theories are not capable of efficiently performing postdictive reasoning because they require an exponential number of state variables to represent an agent's knowledge state. The contribution of this thesis is an approximate epistemic action theory which is capable of postdictive reasoning while requiring only a linear number of state variables to represent an agent's knowledge state. In addition, the theory is able to perform a more general temporal form of postdiction, which most existing approaches do not support. We call the theory the h-approximation (HPX) because it explicitly represents historical knowledge about past world states. In addition to the operational semantics of HPX, we present its formalization in terms of Answer Set Programming (ASP) and provide corresponding soundness results. The ASP implementation allows us to apply HPX in real robotic applications by using off-the-shelf ASP solvers. Specifically, we integrate HPX into an online planning framework for Cognitive Robotics in which planning, plan execution, and abductive explanation tasks are interleaved.
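The postdiction step described above can be illustrated with a minimal sketch (a toy possible-states model, not the HPX formalism itself; all names are hypothetical): after a deterministic action and a subsequent sensing action, inverting the action's effect against the sensed result narrows down which initial states were possible.

```python
# Toy illustration of postdiction: sensing the state after an action
# lets the agent rule out initial states inconsistent with that sensing.

def postdict(initial_candidates, action, sensed):
    """Return the initial states s_0 consistent with observing `sensed`
    after executing `action` (i.e., sensed holds in the successor state)."""
    return {s0 for s0 in initial_candidates if sensed(action(s0))}

# Toy domain: a door is initially open or closed, but the agent does not
# know which. The action toggles the door; sensing reveals it is now open.
toggle = lambda s: not s            # True = open, False = closed
sensed_open = lambda s: s is True

initial = {True, False}             # epistemic state: both values possible
print(postdict(initial, toggle, sensed_open))  # {False}: door was closed
```

Note that the knowledge here is kept as a set of candidate states; the thesis's point is that HPX avoids the exponential blow-up such possible-worlds representations incur in general.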
As a proof of concept, we provide a case study which demonstrates the application of HPX for high-level robot control in a Smart Home. The case study emphasizes the usefulness of postdiction for abnormality detection in robotics: actions performed by robots often fail due to unforeseen practical problems. A solution is to verify action success by observing the effects of the action. If the desired effects do not hold after action execution, then one can postdict the existence of an abnormality.
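The verification pattern described here can be sketched as follows (an illustrative toy, not the Smart Home implementation; the fluent names are hypothetical):

```python
# Abnormality detection by postdiction: execute an action, observe the
# world, and compare against the action's expected effect. If the effect
# is absent, one can postdict that an abnormality occurred earlier.

def check_action(expected_effect, observed_effects):
    """Return 'success' if the expected effect was observed, otherwise
    postdict an 'abnormality' during the preceding execution."""
    if expected_effect in observed_effects:
        return "success"
    return "abnormality"

# A robot tried to grasp a cup; afterwards "holding(cup)" is not observed.
print(check_action("holding(cup)", {"at(table)"}))  # abnormality
```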
Evolution, Culture and Computation in Psychiatry
This thesis develops an approach to integrate evolutionary, cultural, and computational approaches to psychiatry in four chapters. The claim at the core of this thesis is that a principled holistic explanation of mental disorders would benefit from the integration of explanations in computational, cultural, and evolutionary psychiatry. The argument is presented through two models. The first model is presented in chapter 3 and functions as an ontology of mental disorders that integrates principles of evolutionary, cultural, and computational psychiatry. The second model is presented in chapter 4 and implements this integrative view with a computational model of major depressive disorder. The models that I propose are based on two important philosophical assumptions about active inference, the formal theory that underwrites them. First, the two models assume that active inference, and implicitly the free-energy principle, can be applied to the behaviour of non-living systems. Second, the models assume that the cognition and behaviour (e.g., action, perception, and learning) of living systems, as modelled under active inference, have a formal equivalent in non-living systems. This allows us to apply the free-energy principle to the dynamics of systems that involve non-living components, such as enculturated humans embedded in a material environment. The first portion of this thesis, contained in chapters 1 and 2, defends these two assumptions. The second portion, contained in chapters 3 and 4, presents the two models.
Consciousness is learning: predictive processing systems that learn by binding may perceive themselves as conscious
Machine learning algorithms have achieved superhuman performance in specific
complex domains. Yet learning online from few examples and efficiently
generalizing across domains remain elusive. In humans, such learning proceeds
via declarative memory formation and is closely associated with consciousness.
Predictive processing has been advanced as a principled Bayesian inference
framework for understanding the cortex as implementing deep generative
perceptual models for both sensory data and action control. However, predictive
processing offers little direct insight into fast compositional learning or the
mystery of consciousness. Here we propose that through implementing online
learning by hierarchical binding of unpredicted inferences, a predictive
processing system may flexibly generalize in novel situations by forming
working memories for perceptions and actions from single examples, which can
become short- and long-term declarative memories retrievable by associative
recall. We argue that the contents of such working memories are unified yet
differentiated, can be maintained by selective attention and are consistent
with observations of masking, postdictive perceptual integration, and other
paradigm cases of consciousness research. We describe how the brain could have
evolved to use perceptual value prediction for reinforcement learning of
complex action policies simultaneously implementing multiple survival and
reproduction strategies. 'Conscious experience' is how such a learning system
perceptually represents its own functioning, suggesting an answer to the meta
problem of consciousness. Our proposal naturally unifies feature binding,
recurrent processing, and predictive processing with global workspace, and, to
a lesser extent, higher-order theories of consciousness.
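The predictive-processing framework this abstract builds on can be made concrete with a generic one-layer sketch (an illustration of the framework only; the proposed binding mechanism is not modelled, and the toy weights are hypothetical): a latent estimate generates a prediction of the sensory input, and the resulting prediction error drives a gradient update of the latent estimate.

```python
import numpy as np

# Generic single-layer predictive coding: infer the latent cause mu whose
# prediction W @ mu best explains the sensory input x, by iteratively
# descending the squared prediction error.

W = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])      # generative weights (toy, assumed fixed)
x = np.array([1., 2., 0.])    # sensory input
mu = np.zeros(2)              # latent cause estimate

for _ in range(200):
    err = x - W @ mu          # prediction error at the sensory layer
    mu += 0.1 * W.T @ err     # error-driven update of the latent estimate

print(np.round(mu, 3))        # converges to [0, 1], the least-squares
                              # explanation of x under this model
```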
Cognitive neurorobotics and self in the shared world, a focused review of ongoing research
Through brain-inspired modeling studies, cognitive neurorobotics aims to resolve dynamics essential to different emergent phenomena at the level of embodied agency in an object environment shared with human beings. This article is a review of ongoing research focusing on model dynamics associated with human self-consciousness. It introduces the free energy principle and active inference in terms of Bayesian theory and predictive coding, and then discusses how directed inquiry employing analogous models may bring us closer to representing the sense of self in cognitive neurorobots. The first section quickly locates cognitive neurorobotics in the broad field of computational cognitive modeling. The second section introduces principles according to which cognition may be formalized, and reviews cognitive neurorobotics experiments employing such formalizations. The third section interprets the results of these and other experiments in the context of different senses of self, both the “minimal” and the “narrative” self. The fourth section considers model validity and discusses what we may expect ongoing cognitive neurorobotics studies to contribute to the scientific explanation of cognitive phenomena, including the senses of minimal and narrative self.
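The distinctive step that active inference adds to predictive coding, acting on the world rather than only updating beliefs, can be sketched in miniature (a toy illustration of the principle named in the review, not any specific neurorobotics model; all names and values are hypothetical):

```python
import numpy as np

# Active-inference flavoured action selection: the agent holds a preferred
# (predicted) observation and chooses the action whose predicted outcome
# is closest to it, i.e. it acts so as to minimise expected surprise.

preferred = np.array([1.0, 0.0])        # the observation the agent expects
outcomes = {                            # hypothetical action -> predicted obs
    "stay":  np.array([0.0, 1.0]),
    "reach": np.array([0.9, 0.1]),
}

def expected_surprise(action):
    # squared distance between the action's predicted and the preferred obs
    return float(np.sum((outcomes[action] - preferred) ** 2))

best = min(outcomes, key=expected_surprise)
print(best)  # "reach": its predicted outcome best matches the preference
```

In full active-inference treatments the surprise term is an expected free energy rather than a squared error, but the selection principle is the same.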