
    The cybernetic Bayesian brain: from interoceptive inference to sensorimotor contingencies

    Is there a single principle by which neural operations can account for perception, cognition, action, and even consciousness? A strong candidate is now taking shape in the form of “predictive processing”. On this theory, brains engage in predictive inference on the causes of sensory inputs by continuous minimization of prediction errors or informational “free energy”. Predictive processing can account, supposedly, not only for perception, but also for action and for the essential contribution of the body and environment in structuring sensorimotor interactions. In this paper I draw together some recent developments within predictive processing that involve predictive modelling of internal physiological states (interoceptive inference), and integration with “enactive” and “embodied” approaches to cognitive science (predictive perception of sensorimotor contingencies). The upshot is a development of predictive processing that originates, not in Helmholtzian perception-as-inference, but rather in 20th-century cybernetic principles that emphasized homeostasis and predictive control. This way of thinking leads to (i) a new view of emotion as active interoceptive inference; (ii) a common predictive framework linking experiences of body ownership, emotion, and exteroceptive perception; (iii) distinct interpretations of active inference as involving disruptive and disambiguatory—not just confirmatory—actions to test perceptual hypotheses; (iv) a neurocognitive operationalization of the “mastery of sensorimotor contingencies” (where sensorimotor contingencies reflect the rules governing sensory changes produced by various actions); and (v) an account of the sense of subjective reality of perceptual contents (“perceptual presence”) in terms of the extent to which predictive models encode potential sensorimotor relations (this being “counterfactual richness”). 
This is rich and varied territory, and surveying its landmarks emphasizes the need for experimental tests of its key contributions.
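The abstract above centres on the predictive-processing idea that perception is continuous minimization of prediction error. A minimal sketch of that loop, assuming a toy one-dimensional generative model (the names `g`, `mu`, and `lr`, and the linear mapping, are illustrative, not from the paper):

```python
# Toy predictive-inference loop: an internal estimate `mu` of a hidden
# cause is updated by gradient descent on the squared prediction error
# between predicted and observed sensory input.

def g(mu):
    """Generative model: predicted sensory input given the estimated cause.
    Assumed to be a simple linear mapping for illustration."""
    return 2.0 * mu

def infer(sensory_input, mu=0.0, lr=0.1, steps=200):
    for _ in range(steps):
        error = sensory_input - g(mu)   # prediction error
        mu += lr * 2.0 * error          # gradient step (g'(mu) = 2) reducing error**2
    return mu

# With input 4.0 and g(mu) = 2*mu, inference converges to mu ≈ 2.0,
# at which point the prediction matches the input and the error vanishes.
estimate = infer(4.0)
```

Interoceptive and active inference extend the same scheme: errors can be reduced either by revising `mu` (perception) or by acting to change the sensory input itself.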

    Features and Functions: Decomposing the Neural and Cognitive Bases of Semantic Composition

    In this dissertation, I present a suite of studies investigating the neural and cognitive bases of semantic composition. First, I motivate why a theory of semantic combinatorics is a fundamental desideratum of the cognitive neuroscience of language. I then introduce a possible typology of semantic composition: one which involves contrasting feature-based composition with function-based composition. Having outlined several different ways we might operationalize such a distinction, I proceed to detail two studies using univariate and multivariate fMRI measures, each examining different dichotomies along which the feature-vs.-function distinction might cleave. I demonstrate evidence that activity in the angular gyrus indexes certain kinds of function-/relation-based semantic operations and may be involved in processing event semantics. These results provide the first targeted comparison of feature- and function-based semantic composition, particularly in the brain, and delineate what proves to be a productive typology of semantic combinatorial operations. The final study investigates a different question regarding semantic composition: namely, how automatic is the interpretation of plural events, and what information does the processor use when committing to either a distributive plural event (comprising separate events) or a collective plural event (consisting of a single joint event)?

    The role of terminators and occlusion cues in motion integration and segmentation: a neural network model

    The perceptual interaction of terminators and occlusion cues with the functional processes of motion integration and segmentation is examined using a computational model. Integration is necessary to overcome noise and the inherent ambiguity in locally measured motion direction (the aperture problem). Segmentation is required to detect the presence of motion discontinuities and to prevent spurious integration of motion signals between objects with different trajectories. Terminators are used for motion disambiguation, while occlusion cues are used to suppress motion noise at points where objects intersect. The model illustrates how competitive and cooperative interactions among cells carrying out these functions can account for a number of perceptual effects, including the chopsticks illusion and the occluded diamond illusion. Possible links to the neurophysiology of the middle temporal visual area (MT) are suggested.
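The aperture problem mentioned above has a compact geometric statement: a local detector viewing a straight moving edge measures only the velocity component normal to that edge, so recovering the true 2-D velocity requires combining constraints from differently oriented edges ("intersection of constraints"). A small illustrative sketch, not the paper's network model:

```python
import numpy as np

def intersect_constraints(normals, normal_speeds):
    """Each edge i constrains the true velocity v by n_i · v = s_i.
    Stacking the constraints gives N @ v = s; solve by least squares."""
    N = np.asarray(normals, dtype=float)
    s = np.asarray(normal_speeds, dtype=float)
    v, *_ = np.linalg.lstsq(N, s, rcond=None)
    return v

true_v = np.array([3.0, 1.0])
n1 = np.array([1.0, 0.0])            # normal of a vertical edge
n2 = np.array([0.0, 1.0])            # normal of a horizontal edge
s1, s2 = n1 @ true_v, n2 @ true_v    # each edge reports only n · v
recovered = intersect_constraints([n1, n2], [s1, s2])  # → [3.0, 1.0]
```

A single edge leaves the tangential component unconstrained, which is why integration across edges (and disambiguating features like terminators) is needed in the first place.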

    On the functions, mechanisms, and malfunctions of intracortical contextual modulation

    A broad neuron-centric conception of contextual modulation is reviewed and re-assessed in the light of recent neurobiological studies of amplification, suppression, and synchronization. Behavioural and computational studies of perceptual and higher cognitive functions that depend on these processes are outlined, and evidence that those functions and their neuronal mechanisms are impaired in schizophrenia is summarized. Finally, we compare and assess the long-term biological functions of contextual modulation at the level of computational theory as formalized by the theories of coherent infomax and free energy reduction. We conclude that those theories, together with the many empirical findings reviewed, show how contextual modulation at the neuronal level enables the cortex to flexibly adapt the use of its knowledge to current circumstances by amplifying and grouping relevant activities and by suppressing irrelevant activities.
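The amplify-relevant / suppress-irrelevant role described above can be sketched as gain control: contextual input scales the cell's response to its driving (receptive-field) input without itself driving the cell. This is an assumed minimal form for illustration, not the paper's formal coherent-infomax objective:

```python
import math

def modulated_response(drive, context, k=0.5):
    """Context sets a multiplicative gain on the drive.
    Positive context amplifies, negative context suppresses; with zero
    drive, context alone produces no output (modulation, not drive)."""
    gain = 1.0 + k * math.tanh(context)
    return drive * gain

amplified  = modulated_response(1.0,  2.0)   # relevant context: boosted above 1.0
suppressed = modulated_response(1.0, -2.0)   # irrelevant context: pushed below 1.0
no_drive   = modulated_response(0.0,  5.0)   # → 0.0: context cannot drive the cell
```

The multiplicative form captures the key neuron-centric property: modulation changes how strongly the cell signals its receptive-field content, never what it signals.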

    A control paradigm for general purpose manipulation systems

    Mechanical end effectors capable of dextrous manipulation are now a reality. Solutions to the high-level control issues, however, have so far proved difficult to formulate. We propose a methodology for control which produces the functionality required for a general purpose manipulation system. It is clear that the state of a hand/object system is a complex interaction between the geometry of the object, the character of the contact interaction, and the conditioning of the manipulator. The objective of this work is the creation of a framework within which constraints involving the manipulator, the object, and the hand/object interaction can be exploited to direct a goal-oriented manipulation strategy. The set of contacts that are applied to a task can be partitioned into subsets with independent objectives. The individual contacts may then be driven over the interaction surface to improve the state of the grasp while the configuration of the hand addresses the application of required forces. A system of this sort is flexible enough to manage large numbers of contacts and to address manipulation tasks which require the removal and replacement of fingers in the grasp. A simulator has been constructed, and results of its application to position synthesis for initial grasps are presented. A discussion of the manipulation testbed under construction at the University of Utah employing the Utah/MIT Dextrous Hand is presented.

    A quantitative investigation of natural head movement and its contribution to spatial orientation perception.

    Movement is ubiquitous in everyday life. As we exist in a physical world, we constantly account for our position in it relative to other physical features: both at a conscious, volitional level and an unconscious one. Our experience estimating our own position accumulates over the lifespan, and it is thought that this experience (often referred to as a prior) informs current perception of spatial orientation. Broadly, this perception of spatial orientation is rapidly performed by the nervous system by monitoring, interpreting, and integrating sensory information from multiple sense organs. To do this efficiently, the nervous system likely represents this sensory information in a statistically optimal manner. Some of the most important information for spatial orientation perception comes from visual and vestibular sensation, which rely on sensory organs located in the head. While statistical information about natural visual and vestibular stimuli has been characterized, natural head movement and position, which likely drive correlated dynamics across head-located senses, have not. Furthermore, sensory cues essential to spatial orientation perception are directly affected by head movement specifically. It is likely that measurements of these sensory cues taken during natural behaviors sample a significant portion of the total behaviors that comprise one's prior. In this dissertation, I present work quantifying characteristics of head orientation and heading, two dimensions of spatial orientation, over long-duration recordings of natural behavior in humans. Then, I use these to generate priors for Bayesian modeling frameworks which successfully predict observed patterns of orientation and heading perception bias. Given the ability to predict some patterns of bias (head roll and heading azimuth) particularly well, it is likely our data are representative of the real behaviors that comprise the nervous system's previous experience.
Natural head orientation and heading distributions reveal several interesting trends that open future lines of research. First, head pitch demonstrates large inter-subject variability; this is likely due to biomechanical differences, and as these remain relatively stable over the lifespan, they should consistently bias head movements. Second, heading azimuth appears to vary significantly as a function of task. Heading azimuth distributions during low velocities (which predominantly consist of stationary activities like standing or sitting) are strongly multimodal across all subjects, while azimuth distributions during high velocities (predominantly consisting of locomotion) are unimodal with relatively low variance. Future work investigating these trends, as well as the implications these trends and data have for sensory processing and other applications, is discussed.
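The Bayesian account sketched in the abstract above predicts bias because a noisy sensory likelihood is pulled toward a prior built from natural head-movement statistics. A hedged illustration with Gaussian prior and likelihood (all parameter values are invented for the example, not taken from the dissertation's data):

```python
def posterior_mean(sensed, prior_mean=0.0, prior_var=25.0, sensory_var=100.0):
    """Precision-weighted combination of a Gaussian prior and a Gaussian
    sensory likelihood: the posterior mean lies between the two, weighted
    by their precisions (inverse variances)."""
    precision_s = 1.0 / sensory_var
    precision_p = 1.0 / prior_var
    w = precision_s / (precision_s + precision_p)
    return w * sensed + (1.0 - w) * prior_mean

# A true head roll of 20 degrees, with a prior centred on upright (0 deg):
percept = posterior_mean(20.0)   # → 4.0: strongly pulled toward upright
bias = percept - 20.0            # negative bias, toward the prior
```

The same machinery with a multimodal prior over heading azimuth (as the natural low-velocity distributions above are) would produce the more complex, mode-attracted bias patterns the dissertation reports.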

    Perceptual presence in the Kuhnian-Popperian Bayesian brain: a commentary on Anil K. Seth

    Anil Seth’s target paper connects the framework of PP (predictive processing) and the FEP (free-energy principle) to cybernetic principles. Exploiting an analogy to theory of science, Seth draws a distinction between three types of active inference. The first type involves confirmatory hypothesis-testing. The other types involve seeking disconfirming and disambiguating evidence, respectively. Furthermore, Seth applies PP to various fascinating phenomena, including perceptual presence. In this commentary, I explore how far we can take the analogy between explanation in perception and explanation in science. In the first part, I draw a slightly broader analogy between PP and concepts in theory of science, by asking whether the Bayesian brain is Kuhnian or Popperian. While many aspects of PP are in line with Karl Popper’s falsificationism, other aspects of PP conform to how Thomas Kuhn described scientific revolutions. Thus, there is both a sense in which the Bayesian brain is Kuhnian, and a sense in which it is Popperian. The upshot of these considerations is that falsification in PP can take many different forms. In particular, active inference can be used to falsify a model in more ways than identified by Seth. In the second part of this commentary, I focus on Seth’s PPSMCT (predictive processing account of sensorimotor contingency theory) and its application to perceptual presence, which assigns a crucial role to counterfactual richness. In my discussion, I question the significance of counterfactual richness for perceptual presence. First, I highlight an ambiguity inherent in Seth’s descriptions of the target phenomenon (perceptual presence vs. objecthood). Then I suggest that counterfactual richness may not be the crucial underlying feature (of either perceptual presence or objecthood). Giving a series of examples, I argue that the degree of represented causal integration is an equally good candidate for accounting for perceptual presence (or objecthood), although more work needs to be done.

    Sensors: A Key to Successful Robot-Based Assembly

    Computer-controlled robots offer a number of significant advantages in manufacturing and assembly tasks. These include consistent product reliability and the ability to work in harsh environments. The programmable nature of robotic automation makes it possible to apply robots to a number of tasks. In particular, significant savings can be expected in batch production, if robots can be applied to produce numbers of products successfully without plant re-tooling. Unfortunately, despite considerable progress made in robot programming [Lozano-Perez 83] [Paul 81] [Ahmad 84] [Graver et al. 84] [Bonner & Shin 82], in sensing [Gonzalez & Safabakhsh 82] [Fu 82] [Hall et al. 82] [Goto et al. 80] [Hirzinger & Dietrich 86] [Harmon 84], and in kinematics and control strategies [Whitney 85] [Luh 83] [Lee 82], a number of problems remain unsolved before large-scale applications take place. In fact, in current applications, the specialized tooling for manufacturing a particular product may make up as much as 80% of the production line cost. In such a production line the robot is often used only as a programmable parts-transfer device. Improving robots' ability to sense and adapt to different products or environments, so as to handle a larger variety of products without retooling, is essential. It is just as important to be able to program them easily and quickly, without requiring the user to have a detailed understanding of complex robot programming languages and control schemes such as RCCL [Hayward & Paul 84], VAL-II [Shimano et al. 84], AML [Taylor et al. 83], SR3L-90 [Ahmad 84] or AL [Mujtaba & Goldman 79]. Currently there are a number of Computer Aided Design (CAD) packages available which simplify the robot programming problem. Such packages allow the automation system designer to simulate the assembly workcell, which may consist of various machines and robots.
The designer can then pick the motion sequences the robot has to execute in order to achieve the desired assembly task. This is done by viewing the motions on a graphical screen from different viewing angles to check for collisions and to ensure the relative positioning is correct, much the same way as it is done in on-line teach-playback methods (see Figure 1). Off-line robot programming on CAD stations does not always lead to successful results, for two reasons: (i) the robot mechanism is inherently inaccurate due to incorrect kinematic models programmed into its control system [Wu 83] [Hayati 83] [Ahmad 87] [Whitney et al. 84]; (ii) the assembly workcell model represented in the controller is not accurate. As a result, parts and tools are not exactly located and their exact positions may vary. This causes a predefined kinematic motion-sequence program to fail, as it cannot deal with positional uncertainties. Sensors to detect real-time errors in the part and tool positions are therefore required, together with tailored sensor-based motion strategies to ensure assembly accomplishment. In this chapter we deal with how sensors are used to ensure successful assembly task accomplishment. We illustrate the use of various sensors by going through an actual assembly of an oil pump. Additionally, we illustrate a number of motion strategies which have been developed to deal with assembly errors. Initially, we discuss a number of sensors found in typical robotic assembly systems in Section 1. In Section 2 we discuss how and when sensors are to be used during an assembly operation. Issues relating to sensing and robust assembly systems are discussed briefly in Section 3. Section 4 details a sensor-based robot assembly to illustrate practical applications.
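A common primitive in the sensor-based motion strategies this chapter describes is the guarded move: advance toward a nominal goal until a force reading signals unexpected contact, at which point control passes to an error-recovery or compliant strategy. A sketch of the idea; the sensor and robot interfaces here are hypothetical stand-ins, not an actual controller API:

```python
def guarded_move(read_force, step_toward_goal, at_goal,
                 force_limit=5.0, max_steps=1000):
    """Advance until the goal is reached or contact force exceeds the limit.
    Returning "contact" is the cue to switch to a recovery strategy."""
    for _ in range(max_steps):
        if at_goal():
            return "reached_goal"
        if read_force() > force_limit:
            return "contact"
        step_toward_goal()
    return "timeout"

# Simulated 1-D insertion: the nominal goal is z = 8, but a misplaced
# part obstructs the motion at z = 3 (the positional-uncertainty case).
state = {"z": 0.0}
result = guarded_move(
    read_force=lambda: 10.0 if state["z"] >= 3.0 else 0.0,
    step_toward_goal=lambda: state.__setitem__("z", state["z"] + 1.0),
    at_goal=lambda: state["z"] >= 8.0,
)
# result == "contact": the purely kinematic plan would have failed here.
```

The point of the primitive is exactly the failure mode described above: a predefined motion sequence cannot absorb positional uncertainty, but a sensor-terminated motion can detect and localize it.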

    Object Exploration Using a Parallel Jaw Gripper

    In this paper we present a system for tactile object exploration. The system is built using a gripper with two parallel fingers, each equipped with a tactile array and a force/torque sensor. We have designed and implemented a set of exploratory procedures for acquiring the following properties: weight, shape, texture, and hardness. The system is successful at extracting these properties from a limited domain of objects. We present a detailed evaluation of the system and the causes of its limitations. The manipulation, motion, and sensing primitives we have developed in the process of this work could be used for a variety of other tasks, such as model-based recognition, tool manipulation, and assembly.
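One way an exploratory procedure of the kind listed above can estimate hardness is to squeeze the object and fit stiffness as the slope of measured force against fingertip displacement. This is a hedged sketch of that idea with synthetic data; the paper's actual procedure and sensors may differ:

```python
def estimate_stiffness(displacements, forces):
    """Least-squares slope of force vs. displacement through the origin:
    k = sum(d * f) / sum(d * d). Harder objects yield a larger k."""
    num = sum(d * f for d, f in zip(displacements, forces))
    den = sum(d * d for d in displacements)
    return num / den

# Synthetic squeeze of a spring-like object with stiffness 200 N/m:
disp = [0.001, 0.002, 0.003, 0.004]    # fingertip displacement, metres
force = [200.0 * d for d in disp]      # force/torque sensor readings, newtons
k = estimate_stiffness(disp, force)    # → 200.0 N/m
```

In a real system the same readings come from the finger force/torque sensors during a controlled closing motion, and the fitted stiffness is compared across objects rather than read as an absolute material constant.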