
    A predictive processing theory of sensorimotor contingencies: explaining the puzzle of perceptual presence and its absence in synesthesia

    Normal perception involves experiencing objects within perceptual scenes as real, as existing in the world. This property of “perceptual presence” has motivated “sensorimotor theories”, which understand perception to involve the mastery of sensorimotor contingencies. However, the mechanistic basis of sensorimotor contingencies and their mastery has remained unclear. Sensorimotor theory also struggles to explain instances of perception, such as synesthesia, that appear to lack perceptual presence and for which relevant sensorimotor contingencies are difficult to identify. On alternative “predictive processing” theories, perceptual content emerges from probabilistic inference on the external causes of sensory signals; however, this view has addressed neither the problem of perceptual presence nor synesthesia. Here, I describe a theory of predictive perception of sensorimotor contingencies which (1) accounts for perceptual presence in normal perception, as well as its absence in synesthesia, and (2) operationalizes the notion of sensorimotor contingencies and their mastery. The core idea is that generative models underlying perception incorporate explicitly counterfactual elements related to how sensory inputs would change on the basis of a broad repertoire of possible actions, even if those actions are not performed. These “counterfactually-rich” generative models encode sensorimotor contingencies related to repertoires of sensorimotor dependencies, with counterfactual richness determining the degree of perceptual presence associated with a stimulus. While the generative models underlying normal perception are typically counterfactually rich (reflecting a large repertoire of possible sensorimotor dependencies), those underlying synesthetic concurrents are hypothesized to be counterfactually poor.
In addition to accounting for the phenomenology of synesthesia, the theory naturally accommodates phenomenological differences between a range of experiential states including dreaming, hallucination, and the like. It may also lead to a new view of the (in)determinacy of normal perception

    Biologically Inspired Dynamic Textures for Probing Motion Perception

    Perception is often described as a predictive process based on an optimal inference with respect to a generative model. We study here the principled construction of a generative model specifically crafted to probe motion perception. In that context, we first provide an axiomatic, biologically-driven derivation of the model. This model synthesizes random dynamic textures which are defined by stationary Gaussian distributions obtained by the random aggregation of warped patterns. Importantly, we show that this model can equivalently be described as a stochastic partial differential equation. This characterization of motion in images allows us to recast motion-energy models into a principled Bayesian inference framework. Finally, we apply these textures in order to psychophysically probe speed perception in humans. In this framework, while the likelihood is derived from the generative model, the prior is estimated from the observed results and accounts for the perceptual bias in a principled fashion.

    Comment: Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS), Dec 2015, Montreal, Canada
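    The stationary Gaussian dynamic textures described above can be sketched spectrally: a random-phase Gaussian field whose power concentrates near a preferred spatial frequency and near the plane f_t = -v·f_x corresponding to translation at speed v. This is an illustrative NumPy sketch, not the authors' code; the parameter names (f0, b_f, b_v) and envelope shapes are assumptions.

```python
import numpy as np

def motion_cloud(n=32, t=16, v=1.0, f0=0.125, b_f=0.05, b_v=0.5, seed=0):
    """Synthesize a stationary Gaussian dynamic texture ("motion cloud").

    Energy concentrates near spatial frequency f0 and near the plane
    f_t = -v * f_x that corresponds to rigid translation at speed v.
    """
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None, None]   # spatial frequency, x
    fy = np.fft.fftfreq(n)[None, :, None]   # spatial frequency, y
    ft = np.fft.fftfreq(t)[None, None, :]   # temporal frequency
    fr = np.sqrt(fx**2 + fy**2)
    # Gaussian envelope around f0, and around the speed plane f_t = -v f_x
    env = np.exp(-((fr - f0)**2) / (2 * b_f**2))
    env = env * np.exp(-((ft + v * fx)**2) / (2 * b_v**2))
    env[0, 0, 0] = 0.0                      # zero the DC component
    # Random phases -> stationary Gaussian field with this power spectrum
    noise = rng.normal(size=(n, n, t)) + 1j * rng.normal(size=(n, n, t))
    movie = np.real(np.fft.ifftn(env * noise))
    return movie / movie.std()

movie = motion_cloud()
print(movie.shape)  # (32, 32, 16): an (x, y, time) texture movie
```

    Because the spectrum is fixed and only the phases are random, every draw is a sample from the same stationary Gaussian distribution, which is what makes such stimuli convenient likelihood terms in a Bayesian account of speed perception.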

    Literal Perceptual Inference

    In this paper, I argue that theories of perception that appeal to Helmholtz’s idea of unconscious inference (“Helmholtzian” theories) should be taken literally, i.e. that the inferences appealed to in such theories are inferences in the full sense of the term, as employed elsewhere in philosophy and in ordinary discourse. In the course of the argument, I consider constraints on inference based on the idea that inference is a deliberate action, and on the idea that inferences depend on the syntactic structure of representations. I argue that inference is a personal-level but sometimes unconscious process that cannot in general be distinguished from association on the basis of the structures of the representations over which it’s defined. I also critique arguments against representationalist interpretations of Helmholtzian theories, and argue against the view that perceptual inference is encapsulated in a module

    Applications of time-series generative models and inference techniques

    In this dissertation, we apply deep generative modelling, amortised inference and reinforcement learning methods to real-world, practical phenomena, and we ask if these techniques can be used to predict complex system dynamics, model biologically plausible behaviour, and guide decision making. In the past, probabilistic modelling and Bayesian inference techniques have been successfully applied in a wide array of fields, achieving success in financial market prediction, robotics, and the natural sciences. However, the use of generative models in these contexts has usually required a rigid set of linearity constraints or assumptions about the distributions used for modelling. Furthermore, inference in non-linear models can be very difficult to scale to high-dimensional models. In recent years, deep learning has been a key innovation in enabling non-linear function approximation. When applied to probabilistic modelling, deep non-linear models have significantly improved the generative capabilities of computer vision models. While an important step towards general artificial intelligence, there remains a gap between the successes of these early single-time-step deep generative models and the temporal models that will be required to deploy machine learning in the real world. We posit that deep non-linear time-series models and sequential inference are useful in a number of these complex domains. In order to test this hypothesis, we made methodological developments related to model learning and approximate inference. We then present experimental results, which address several questions about the application of deep generative models. First, can we train a deep temporal model of complex dynamics to perform sufficiently accurate inference and prediction at run-time?
Here, ``sufficient accuracy'' means that the predictions and inferences made using our model lead to stronger performance than that given by a heuristic approach on some downstream task performed in real-time. We specifically model large compute cluster hardware performance using a deep generative model in order to use the model to tackle the downstream task of improving the overall throughput of the cluster. Generally, this question is useful to answer for a number of wider applications similar to ours which may use such modelling techniques to intervene in real-time. For example, we may be interested in applying generative modelling and inference to come up with better trading algorithms with the goal of increasing returns. We may also wish to use a deep generative epidemiology model to determine government policies that help prevent the spread of disease. Simply put, we want to ask the question, "are deep generative models powerful enough to be useful?" Next, are deep state-space models important for the generative quality of animal-like behaviour? Given a perceptual dataset of animal behaviour, such as camera views of fruit-fly interactions or collections of human handwriting samples, can a deep generative model capture the latent variability underlying such behaviour? As a step towards artificial intelligence that mirrors humans and other biological organisms, we must assess whether deep generative modelling is a viable approach to capture what may be one of the most stochastic and challenging phenomena to model. Finally, is inference a useful perspective in decision making and reinforcement learning? If so, can we improve the uncertainty estimation of different quantities used in classic reinforcement learning to further take advantage of an inference perspective?
Answering these questions may help us determine if a ``Reinforcement Learning as Inference'' framework coupled with a distributional estimate of the sum of future rewards can lead to better decision making in the control setting. Although our findings are positive in terms of these questions, they come with caveats for each. First, deep generative models must be accurate to be useful for downstream tasks. Second, modelling biologically plausible behaviour is difficult without additional partial supervision in the latent space. Third, while we have made orthogonal progress in using the inference perspective for policy learning and leveraging a distributional estimate in reinforcement learning, it remains unclear how to best combine these two approaches. This thesis presents the progress made in tackling these challenges
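    In the ``Reinforcement Learning as Inference'' view mentioned above, the hard max over actions in the Bellman backup is replaced by a temperature-weighted log-sum-exp, which corresponds to probabilistic inference in a graphical model with optimality variables. The following is a minimal tabular sketch of that idea, not the dissertation's method; the MDP, function name, and parameters are all illustrative.

```python
import numpy as np

def soft_value_iteration(P, R, gamma=0.9, alpha=1.0, iters=200):
    """Maximum-entropy ("RL as inference") tabular value iteration.

    P: (S, A, S) transition probabilities; R: (S, A) rewards.
    The hard max over actions is replaced by an alpha-temperature
    log-sum-exp, yielding a softmax (Boltzmann) policy.
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * (P @ V)            # Q(s, a) = R(s, a) + gamma E[V(s')]
        V = alpha * np.logaddexp.reduce(Q / alpha, axis=1)  # soft max
    pi = np.exp((Q - V[:, None]) / alpha)  # softmax policy over actions
    return Q, V, pi

# Tiny illustrative 2-state, 2-action MDP (made up for this sketch)
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.8, 0.2], [0.2, 0.8]]])
R = np.array([[1.0, 0.0],
              [0.0, 0.5]])
Q, V, pi = soft_value_iteration(P, R)
print(pi.sum(axis=1))  # each state's policy sums to 1
```

    As the temperature alpha shrinks toward zero, the log-sum-exp approaches the ordinary hard max and the softmax policy approaches the greedy one, which is one way this framework connects back to classic value iteration.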

    Active inference, communication and hermeneutics

    Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others – during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions – both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then – in principle – they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa
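    The mutual prediction-error minimisation described above can be caricatured with two one-parameter agents taking turns speaking and listening: the listener updates its model by descending its prediction error, and the two models converge (a toy analogue of the birdsong simulations, not the paper's generative model; all names and rates here are assumptions).

```python
import numpy as np

def communicate(theta_a=0.0, theta_b=4.0, lr=0.2, turns=200, noise=0.05, seed=0):
    """Two agents minimise mutual prediction error by turn-taking.

    Each agent's one-parameter model predicts the signal it hears; the
    listener updates its parameter by gradient descent on prediction
    error (perceptual learning). Speaking and listening alternate.
    """
    rng = np.random.default_rng(seed)
    params = [theta_a, theta_b]
    for t in range(turns):
        speaker, listener = t % 2, (t + 1) % 2
        signal = params[speaker] + noise * rng.standard_normal()
        error = signal - params[listener]   # listener's prediction error
        params[listener] += lr * error      # update the internal model
    return params

theta_a, theta_b = communicate()
print(abs(theta_a - theta_b))  # small: each agent now predicts the other
```

    The point of the caricature is that neither agent is told the other's parameter; convergence emerges purely from each one suppressing its own prediction errors, which is the sense in which the agents end up "singing from the same hymn sheet".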