16 research outputs found
The free energy principle for action and perception: A mathematical review
The ‘free energy principle’ (FEP) has been suggested to provide a unified theory of the brain, integrating data and theory relating to action, perception, and learning. The theory and implementation of the FEP combine insights from Helmholtzian ‘perception as inference’, machine learning theory, and statistical thermodynamics. Here, we provide a detailed mathematical evaluation of a suggested biologically plausible implementation of the FEP that has been widely used to develop the theory. Our objectives are (i) to describe within a single article the mathematical structure of this implementation of the FEP; (ii) to provide a simple but complete agent-based model utilising the FEP; and (iii) to disclose the assumption structure of this implementation of the FEP to help elucidate its significance for the brain sciences.
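For orientation, the central quantity in this class of models is the variational free energy. A minimal sketch of its standard decomposition, in generic notation that is not taken from the article itself, is:

    F = \mathbb{E}_{q(x)}\!\left[\ln q(x) - \ln p(x, y)\right]
      = D_{\mathrm{KL}}\!\left[q(x) \,\|\, p(x \mid y)\right] - \ln p(y)

Here y stands for sensory data, x for their hidden causes, q(x) for the agent's approximate posterior (recognition density), and p(x, y) for its generative model. Since the KL term is non-negative, F upper-bounds the surprise -\ln p(y); minimising F with respect to q approximates Bayesian perception, while minimising it through action suppresses surprising sensations.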
Counter-factual mathematics of counterfactual predictive models
Maximizing entropy of the Predictive Processing framework
The Predictive Processing (PP) framework offers a unifying view of the existence and workings of all living systems. The core premise of PP is that as long as agents minimize prediction error, and consequently entropy, they are successful. Current developments and advances in PP indicate that the interaction between agents and their environments is an important component of entropy minimization. In this paper, we explore, by means of computer simulations, the interaction between PP-agents and their environments under different conditions. We argue the need to redefine the notion of success in PP in terms of entropy, behavioral, and cognitive success, as we show that the environmental conditions that lead to entropy success are different from the conditions that lead to behavioral or cognitive success. Furthermore, we show that being equipped with, and applying, mechanisms to minimize prediction error does not in practice guarantee that agents will be successful in any sense (entropy, cognitive, or behavioral).
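As a purely illustrative companion to this abstract, the following is a minimal sketch of a prediction-error-minimising agent in a drifting environment, with crude proxies for 'entropy' and 'behavioural' success; the agent, environment, parameters, and success measures are assumptions of this sketch, not the authors' simulation setup:

    import random

    def simulate(volatility=0.1, noise=0.5, learning_rate=0.2, steps=2000, seed=0):
        """Toy delta-rule agent tracking a drifting hidden state.

        Returns the mean squared prediction error (a rough proxy for the
        surprise/entropy the agent minimises) and the fraction of steps on
        which the prediction was within one noise standard deviation of the
        true state (a rough proxy for behavioural/cognitive success).
        """
        rng = random.Random(seed)
        hidden, prediction = 0.0, 0.0
        total_sq_error, hits = 0.0, 0
        for _ in range(steps):
            hidden += rng.gauss(0.0, volatility)          # environment drifts
            observation = hidden + rng.gauss(0.0, noise)  # noisy sensory sample
            error = observation - prediction              # prediction error
            total_sq_error += error ** 2
            if abs(prediction - hidden) <= noise:
                hits += 1
            prediction += learning_rate * error           # minimise prediction error
        return total_sq_error / steps, hits / steps

    if __name__ == "__main__":
        for vol in (0.01, 0.1, 1.0):
            mse, success = simulate(volatility=vol)
            print(f"volatility={vol:<5} mean sq. error={mse:7.3f}  success proxy={success:4.2f}")

Sweeping volatility (and noise) in such a toy shows how the same error-minimising mechanism can score well on one measure and poorly on another depending on the environment, which is loosely the kind of dissociation between notions of success that the abstract argues for.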
Can infants' sense of agency be found in their behavior? Insights from babybot simulations of the mobile-paradigm
The development of a sense of agency is essential for understanding the causal structure of the world. Previous studies have shown that infants tend to increase the frequency of an action when it is followed by an effect. This was shown, for instance, in the mobile-paradigm, in which infants were in control of moving an overhead mobile by means of a ribbon attached to one of their limbs. These findings have been interpreted as evidence for a sense of agency early in life, as infants were thought to have detected the causal action-movement relation. We argue that the increase in action frequency alone is insufficient evidence for this claim. Computer simulations are used to demonstrate that the systematic, limb-specific increase in movement frequency found in mobile-paradigm studies can be produced by an artificial agent (a 'babybot') implemented with a mechanism that does not represent cause-effect relations at all. Given that a sense of agency requires representing one's actions as the cause of the effect, a behavior that is reproduced with this non-representational babybot can be argued to be, in itself, insufficient as evidence for a sense of agency. However, a behavioral pattern that to date has received little attention in the context of the sense of agency, namely an additional increase in movement frequency after the action-effect relation is discontinued, is not produced by the babybot. Future research could benefit from focusing on patterns that cannot be reproduced by our babybot, as these may require the capacity for causal learning.
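To make the argument concrete, below is one purely illustrative mechanism that yields a limb-specific increase in movement frequency without storing any action-effect relation: each limb carries only a decaying excitability trace that is transiently boosted when a salient event (the mobile moving) coincides with that limb's movement. The limb names, parameters, and update rule are assumptions of this sketch and are not the babybot model from the paper; whether such low-level facilitation counts as 'representing' a cause-effect relation is precisely the kind of question at stake:

    import random

    LIMBS = ["left_arm", "right_arm", "left_leg", "right_leg"]  # illustrative
    CONNECTED = "right_leg"  # limb tied to the mobile by the ribbon

    def run(steps=2000, baseline=0.10, boost=0.01, decay=0.98, seed=1):
        """Each limb moves with probability baseline + excitability.

        When the mobile moves (i.e. the connected limb moved), every limb that
        moved on that step gets a small, decaying excitability boost.  No
        action-effect contingency is stored anywhere in the agent."""
        rng = random.Random(seed)
        excitability = {limb: 0.0 for limb in LIMBS}
        counts = {limb: 0 for limb in LIMBS}
        for _ in range(steps):
            moved = [l for l in LIMBS if rng.random() < baseline + excitability[l]]
            mobile_moved = CONNECTED in moved   # the causal link lives in the world
            for l in LIMBS:
                excitability[l] *= decay        # traces fade back to baseline
                if mobile_moved and l in moved:
                    excitability[l] += boost    # unspecific facilitation rule
            for l in moved:
                counts[l] += 1
        return counts

    if __name__ == "__main__":
        for limb, n in run().items():
            print(f"{limb:>9}: {n} movements")

With these illustrative parameters the connected limb roughly doubles its movement rate while the other limbs barely change, because only its movements are reliably followed by the boosting event. Note that once the ribbon is removed (mobile_moved stays False) the excitability simply decays, so a toy like this would not produce the post-disconnection increase that the abstract singles out as the more diagnostic pattern.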
Leaving Andy Clark's 'safe shores': Scaling predictive processing to higher cognition
How did Homo Heuristicus become ecologically rational?
Gigerenzer and colleagues have proposed the ‘adaptive toolbox of heuristics’ as an account of resource-bounded human decision-making. According to these authors, evolution has endowed such toolboxes with ‘ecological rationality’, defined as the ability to make good-quality decisions in their specific environments. Here we explore to what extent the mechanisms of evolution alone are sufficient to explain the emergence of ecologically rational toolboxes. It is not clear how evolution can lead to ecologically rational toolboxes within the space of possible toolboxes. That is, even if one assumes a very simple environment (e.g., 10 cues and 50 decisions), the number of possible toolboxes (10^72) is still astronomical. Using artificial evolution simulations, we investigated the evolvability of ecologically rational toolboxes. We present preliminary results showing that evolution can produce toolboxes of heuristics that are “good enough” to survive, but those toolboxes are not ecologically rational.
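To get a feel for why the toolbox space explodes, here is a back-of-the-envelope count in which a heuristic is an ordered subset of cues, each read in one of two directions, and a toolbox assigns one heuristic to each decision task. These counting assumptions are ours for illustration only and differ from those in the paper, so the resulting exponent does not match the 10^72 quoted above; the point is merely how quickly such spaces grow:

    from math import perm  # Python 3.8+

    CUES = 10       # cues available, as in the abstract's example
    DECISIONS = 50  # decision tasks a toolbox has to cover

    # Illustrative heuristic: inspect 1..CUES cues in a fixed order, each with a direction.
    heuristics = sum(perm(CUES, k) * 2 ** k for k in range(1, CUES + 1))

    # Illustrative toolbox: one heuristic per decision task.
    toolboxes = heuristics ** DECISIONS

    print(f"heuristics under these assumptions: {heuristics:,}")
    print(f"toolboxes under these assumptions: about 10^{len(str(toolboxes)) - 1}")

Even under these simplistic assumptions the toolbox count runs to hundreds of digits, which is the sense in which blind evolutionary search over such a space is far from trivially guaranteed to find ecologically rational toolboxes.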
The mobile-paradigm as measure of infants' sense of agency? Insights from babybot simulations
The 'sense of agency' refers to the experiential state that one's actions cause events in the world. Developing a sense of agency allows infants to learn from interacting with the social and physical world in ways that would not be possible otherwise [1]. To date, few empirical studies seem to target this phenomenon in infancy directly. Notable exceptions are the work by Rochat and colleagues (e.g. [2]-[4]) and by Watanabe and Taga [5], [6]. In these studies, researchers report an increased movement frequency for movements that cause effects in the world. For instance, Watanabe and Taga [5], [6] used a mobile-paradigm [7] to investigate whether infants learned a causal action-effect relation. In this paradigm, one of the infant's limbs is connected to a mobile above their crib by a ribbon. When the infant moves this limb, the mobile moves. This study replicated the effect that is typically found, namely that infants increase the movement frequency of the connected limb relative to baseline. The increased movement of the connected limb has been interpreted as evidence for a sense of agency in young infants [6]. However, it is not clear that the increase in movement frequency necessarily means that infants have built a representation of the cause-effect relation. Here, we used computer simulations to assess whether or not the data patterns found in mobile-paradigm studies can be explained by a mechanism that assumes no internally represented cause-effect relations at all, and hence can be argued to be insufficient for a sense of agency [8].
2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), 19-22 Sept. 2016, Cergy-Pontoise, Paris, France
Combining Combinatorial Game Theory with an α-β Solver for Domineering
Combinatorial games are a special category of games sharing the property that the winner is by definition the last player able to move. To solve such games, two main methods are applied. The first is a general NegaScout search with many possible enhancements. This technique is applicable to every game, but is limited mainly by the size of the game due to the exponential explosion of the solution tree. The second is to use techniques from Combinatorial Game Theory (CGT), which yield very precise CGT values for (subgames of) combinatorial games. This method is only applicable to relatively small (sub)games. In this paper we show that the two methods can be combined in a fruitful way by using endgame databases filled with CGT values. We apply this technique to the game of Domineering, a well-known partisan combinatorial game. Our test suite consisted of all 36 non-trivial boards with dimensions from 2 to 7. Endgame databases were created for all subgames of size 15 and smaller. The CGT values were calculated using the CGSUITE package. We show how CGT values of subgames can be used in several ways as refinements of a basic NegaScout solver. Experiments reveal up to a 99% reduction in the number of nodes investigated.
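For readers unfamiliar with the search side of the paper, the sketch below shows a bare-bones NegaScout (principal variation search) over a generic game interface, scored +1/-1 under the convention that the player unable to move loses. The toy subtraction game and the function names are illustrative stand-ins; Domineering move generation, the CGT endgame databases, and the other refinements described in the paper are not modelled here:

    def negascout(state, alpha, beta, moves, apply_move):
        """NegaScout for two-player games where the player unable to move loses.
        Returns +1 if the player to move wins and -1 otherwise."""
        legal = moves(state)
        if not legal:
            return -1  # player to move has no moves and loses
        first = True
        for move in legal:
            child = apply_move(state, move)
            if first:
                score = -negascout(child, -beta, -alpha, moves, apply_move)
                first = False
            else:
                # Null-window probe; re-search with a wider window if it fails high.
                score = -negascout(child, -alpha - 1, -alpha, moves, apply_move)
                if alpha < score < beta:
                    score = -negascout(child, -beta, -score, moves, apply_move)
            alpha = max(alpha, score)
            if alpha >= beta:
                break  # beta cutoff: the opponent will not allow this line
        return alpha

    if __name__ == "__main__":
        # Toy game: a pile of n tokens, a move removes 1 or 2, last player to move wins.
        moves = lambda n: [1, 2] if n >= 2 else ([1] if n == 1 else [])
        apply_move = lambda n, take: n - take
        for n in range(1, 10):
            winner = "first" if negascout(n, -1, 1, moves, apply_move) > 0 else "second"
            print(f"pile of {n}: {winner} player wins")

In the paper's setting, the refinement is to replace full searches of small subgames with lookups of precomputed CGT values from endgame databases, which is what drives the reported node reductions; that machinery is outside the scope of this sketch.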