    Observing Action Sequences Elicits Sequence-Specific Neural Representations in Frontoparietal Brain Regions.

    Learning new skills by watching others is important for social and motor development throughout the lifespan. Prior research has suggested that observational learning shares common substrates with physical practice at both cognitive and brain levels. In addition, neuroimaging studies have used multivariate analysis techniques to understand neural representations in a variety of domains, including vision, audition, memory, and action, but few studies have investigated neural plasticity in representational space. Therefore, although movement sequences can be learned by observing other people's actions, a largely unanswered question in neuroscience is how experience shapes the representational space of neural systems. Here, across a sample of male and female participants, we combined pretraining and posttraining fMRI sessions with six days of observational practice to determine whether the observation of action sequences elicits sequence-specific representations in human frontoparietal brain regions and the extent to which these representations become more distinct with observational practice. Our results showed that observed action sequences are modeled by distinct patterns of activity in frontoparietal cortex and that such representations largely generalize to very similar, but untrained, sequences. These findings advance our understanding of what is modeled during observational learning (sequence-specific information), as well as how it is modeled (reorganization of frontoparietal cortex is similar to that previously shown following physical practice). Therefore, on a more fine-grained neural level than demonstrated previously, our findings reveal how the representational structure of frontoparietal cortex maps visual information onto motor circuits in order to enhance motor performance.

    SIGNIFICANCE STATEMENT: Learning by watching others is a cornerstone in the development of expertise and skilled behavior. However, it remains unclear how visual signals are mapped onto motor circuits for such learning to occur. Here, we show that observed action sequences are modeled by distinct patterns of activity in frontoparietal cortex and that such representations largely generalize to very similar, but untrained, sequences. These findings advance our understanding of what is modeled during observational learning (sequence-specific information), as well as how it is modeled (reorganization of frontoparietal cortex is similar to that previously shown following physical practice). More generally, these findings demonstrate how motor circuit involvement in the perception of action sequences shows high fidelity to prior work, which focused on physical performance of action sequences.
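    The multivariate logic of such a study can be illustrated with a minimal decoding sketch: if observed sequences evoke sequence-specific activity patterns, a classifier should distinguish them above chance from voxel responses. Everything below (simulated data, ROI size, a scikit-learn linear classifier) is an illustrative assumption, not the authors' analysis pipeline.

```python
# Minimal sketch of sequence decoding from voxel patterns, in the spirit of
# the multivariate analysis described above. All data are simulated; this is
# NOT the authors' pipeline, only an illustration of the idea that distinct
# sequences evoke separable activity patterns in a region of interest.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 200          # trials per sequence, voxels in the ROI

# Each sequence has its own mean pattern; trials are noisy samples around it.
pattern_a = rng.normal(0, 1, n_voxels)
pattern_b = rng.normal(0, 1, n_voxels)
X = np.vstack([pattern_a + rng.normal(0, 3, (n_trials, n_voxels)),
               pattern_b + rng.normal(0, 3, (n_trials, n_voxels))])
y = np.array([0] * n_trials + [1] * n_trials)     # sequence labels

# Cross-validated accuracy above chance (0.5) indicates sequence-specific
# information in the patterns.
acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")
```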

    The Complementary Brain: From Brain Dynamics To Conscious Experiences

    How do our brains so effectively achieve adaptive behavior in a changing world? Evidence is reviewed that brains are organized into parallel processing streams with complementary properties. Hierarchical interactions within each stream and parallel interactions between streams create coherent behavioral representations that overcome the complementary deficiencies of each stream and support unitary conscious experiences. This perspective suggests how brain design reflects the organization of the physical world with which brains interact, and offers an alternative to the computer metaphor, according to which brains are organized into independent modules. Examples from perception, learning, cognition, and action are described, and theoretical concepts and mechanisms by which complementarity is accomplished are summarized.

    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (ITI-97-20333); Office of Naval Research (N00014-95-1-0657).

    The Mechanics of Embodiment: A Dialogue on Embodiment and Computational Modeling

    Embodied theories are increasingly challenging traditional views of cognition by arguing that the conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamouring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensory-motor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialogue between two fictional characters: Ernest, the 'experimenter', and Mary, the 'computational modeller'. The dialogue consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches on the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modelling.

    Visual pathways from the perspective of cost functions and multi-task deep neural networks

    Vision research has been shaped by the seminal insight that we can understand the higher-tier visual cortex from the perspective of multiple functional pathways with different goals. In this paper, we try to give a computational account of the functional organization of this system by reasoning from the perspective of multi-task deep neural networks. Machine learning has shown that tasks become easier to solve when they are decomposed into subtasks with their own cost functions. We hypothesize that the visual system optimizes multiple cost functions of unrelated tasks and that this causes the emergence of a ventral pathway dedicated to vision for perception, and a dorsal pathway dedicated to vision for action. To evaluate the functional organization in multi-task deep neural networks, we propose a method that measures the contribution of a unit towards each task, applying it to two networks that have been trained on either two related or two unrelated tasks, using an identical stimulus set. Results show that the network trained on the unrelated tasks shows a decreasing degree of feature representation sharing towards higher-tier layers, while the network trained on related tasks uniformly shows a high degree of sharing. We conjecture that the method we propose can be used to analyze the anatomical and functional organization of the visual system and beyond. We predict that the degree to which tasks are related is a good descriptor of the degree to which they share downstream cortical units.
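    One simple way to operationalize a per-unit task contribution is by ablation: zero out a hidden unit and record how much each task's loss increases. The sketch below does this for a toy two-head network; the tiny architecture, synthetic data, and the ablation-based measure are assumptions for illustration, not necessarily the paper's exact method.

```python
# Toy illustration of measuring a hidden unit's contribution to each task in
# a multi-task network by ablating (zeroing) the unit and recording how much
# each task's loss increases. Architecture and data are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 10)                      # shared input for both tasks
w1, w2 = torch.randn(10, 1), torch.randn(10, 1)
y1, y2 = X @ w1, X @ w2                       # two unrelated linear targets

shared = nn.Sequential(nn.Linear(10, 16), nn.ReLU())   # shared trunk
head1, head2 = nn.Linear(16, 1), nn.Linear(16, 1)      # task-specific heads
loss_fn = nn.MSELoss()

# Jointly train on the sum of both task losses.
params = list(shared.parameters()) + list(head1.parameters()) + list(head2.parameters())
opt = torch.optim.Adam(params, lr=0.01)
for _ in range(500):
    opt.zero_grad()
    h = shared(X)
    (loss_fn(head1(h), y1) + loss_fn(head2(h), y2)).backward()
    opt.step()

def task_losses(mask):
    """Losses of both tasks with a 0/1 mask over the 16 shared hidden units."""
    h = shared(X) * mask
    return loss_fn(head1(h), y1).item(), loss_fn(head2(h), y2).item()

with torch.no_grad():
    base1, base2 = task_losses(torch.ones(16))
    for unit in range(16):
        mask = torch.ones(16)
        mask[unit] = 0.0                      # ablate a single hidden unit
        l1, l2 = task_losses(mask)
        # A unit whose ablation hurts only one task is task-specific; one
        # whose ablation hurts both is shared between the tasks.
        print(f"unit {unit:2d}: task1 +{l1 - base1:.4f}, task2 +{l2 - base2:.4f}")
```

    On the paper's conjecture, a network trained on two unrelated targets should develop units whose ablation affects mostly one task, i.e., less feature sharing, whereas related targets should yield units shared by both.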

    Perceptual Pluralism

    Perceptual systems respond to proximal stimuli by forming mental representations of distal stimuli. A central goal for the philosophy of perception is to characterize the representations delivered by perceptual systems. It may be that all perceptual representations are in some way proprietarily perceptual and differ from the representational format of thought (Dretske 1981; Carey 2009; Burge 2010; Block ms.). Or it may instead be that perception and cognition always trade in the same code (Prinz 2002; Pylyshyn 2003). This paper rejects both approaches in favor of perceptual pluralism, the thesis that perception delivers a multiplicity of representational formats, some proprietary and some shared with cognition. The argument for perceptual pluralism marshals a wide array of empirical evidence in favor of iconic (i.e., image-like, analog) representations in perception as well as discursive (i.e., language-like, digital) perceptual object representations

    The Complementary Brain: A Unifying View of Brain Specialization and Modularity

    Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (ITI-97-20333); Office of Naval Research (N00014-95-1-0657).

    Perception, cognition, and action in hyperspaces: implications on brain plasticity, learning, and cognition

    We live in a three-dimensional (3D) spatial world; however, our retinas receive a pair of 2D projections of the 3D environment. By using multiple cues, such as disparity, motion parallax, and perspective, our brains can construct 3D representations of the world from the 2D projections on our retinas. These 3D representations underlie our 3D perceptions of the world and are mapped into our motor systems to generate accurate sensorimotor behaviors. Three-dimensional perceptual and sensorimotor capabilities emerge during development: the physiology of the growing baby changes, necessitating an ongoing re-adaptation of the mapping between 3D sensory representations and motor coordinates. This adaptation continues in adulthood and is general enough to deal successfully with joint-space changes (longer arms due to growth), changes in skull and eye size (while still maintaining accurate eye movements), and so on. A fundamental question is whether our brains are inherently limited to 3D representations of the environment because we live in a 3D world, or whether they have the inherent capability and plasticity to represent arbitrary dimensions, with 3D representations emerging simply because our development and learning take place in a 3D world. Here, we review research on the inherent capabilities and limitations of brain plasticity in terms of spatial representations and discuss whether, with appropriate training, humans can build perceptual and sensorimotor representations of 4D spatial environments, and how the presence or absence of a solid and direct 4D representation can reveal the underlying neural representations of space.
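    As a concrete illustration of one such cue, the sketch below recovers depth from binocular disparity using the standard pinhole-stereo relation depth = focal_length * baseline / disparity. The focal length, interocular baseline, and disparity values are made-up numbers for illustration, not measurements from the paper.

```python
# Minimal sketch of recovering depth (a 3D quantity) from binocular
# disparity (a 2D retinal cue), using the standard pinhole stereo relation
#     depth = focal_length * baseline / disparity.
# All numbers are illustrative assumptions.
import numpy as np

focal_length_px = 800.0        # focal length in pixels (assumed)
baseline_m = 0.06              # interocular distance in meters (~6 cm)

# Disparities (in pixels) of a few points between the two retinal images:
# nearer points shift more between the eyes than farther points.
disparity_px = np.array([40.0, 20.0, 8.0, 2.0])

depth_m = focal_length_px * baseline_m / disparity_px
for d, z in zip(disparity_px, depth_m):
    print(f"disparity {d:5.1f} px  ->  depth {z:5.2f} m")
```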