    Perception meets action: fMRI and behavioural investigations of human tool use

    Tool use is essential and culturally universal to human life, common to hunter-gatherer and modern advanced societies alike. Although the neuroscience of simpler visuomotor behaviors like reaching and grasping has been studied extensively, relatively little is known about the brain mechanisms underlying learned tool use. With learned tool use, stored knowledge of object function and use supersedes requirements for action programming based on physical object properties. Contemporary models of tool use, based primarily on evidence from the study of brain-damaged individuals, implicate a set of specialized brain areas underlying the planning and control of learned actions with objects, distinct from areas devoted to more basic aspects of visuomotor control. The findings from the current thesis build on these existing theoretical models and provide new insights into the neural and behavioural mechanisms of learned tool use. In Project 1, I used fMRI to visualize brain activity in response to viewing tool use grasping. Grasping actions typical of how tools are normally grasped during use were found to preferentially activate occipitotemporal areas, including areas specialized for visual object recognition. The findings revealed sensitivity within this network to learned contextual associations tied to stored knowledge of tool-specific actions. The effects arose implicitly, in the absence of concurrent effects in visuomotor areas of parietofrontal cortex. These findings were taken to reflect the tuning of higher-order visual areas of occipitotemporal cortex to learned statistical regularities of the visual world, including the way in which tools are typically seen to be grasped and used. These areas are likely to represent an important source of input to visuomotor areas, conveying learned conceptual knowledge of tool use. In Project 2, behavioural priming and the kinematics of real tool use grasping were explored. Behavioural priming provides an index of the planning stages of actions. Participants grasped tools either to move them, grasp-to-move (GTM), or to demonstrate their common use, grasp-to-use (GTU), and grasping actions were preceded by a visual preview (prime) of either the same (congruent) or a different (incongruent) tool from the one then acted with. Behavioural priming was revealed as a reaction time advantage for congruent trial types, thought to reflect the triggering of learned use-based motor plans by the viewing of tools at prime events. The findings from two separate experiments revealed differential sensitivity to priming according to task and task setting. When GTU and GTM tasks were presented separately, priming was specific to the GTU task. In contrast, when GTU and GTM tasks were presented in the same block of trials, in a mixed task setting, priming was evident for both tasks. Together the findings indicate the importance of both task and task setting in shaping effects of action priming, likely driven by differences in the allocation of attentional resources. Differences in attention to particular object features, in this case tool identity, modulate the affordances driven by those features, which in turn determine priming. Beyond the physical properties of objects, knowledge and intention of use provide a mechanism by which affordances and the priming of actions may operate. Project 3 comprised a neuroimaging variant of the behavioural priming paradigm used in Project 2, with tools and tool use actions specially tailored for the fMRI environment. Preceding tool use with a visual preview of the tool to be used gave rise to reliable neural priming, measured as reduced BOLD activity. Neural priming of tool use was taken to reflect increased metabolic efficiency in the retrieval and implementation of stored tool use plans. To demonstrate the specificity of priming for familiar tool use, a control task was used whereby actions with tools were determined not by tool identity but by arbitrarily learned associations with handle color. The findings revealed specificity for familiar tool-use priming in four distinct parietofrontal areas, including left inferior parietal cortex, previously implicated in the storage of learned tool use plans. Specificity of priming for tool-action and not color-action associations provides compelling evidence for tool-use-experience-dependent plasticity within parietofrontal areas.
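
    As a concrete illustration of how the Project 2 priming effect can be quantified, the Python sketch below computes the congruency advantage (incongruent minus congruent reaction time) per task and task setting. The column names and reaction times are invented for illustration, chosen only to mirror the qualitative pattern reported above; this is a sketch, not the thesis's analysis code.

```python
# Hypothetical trial-level data; values are invented to mirror the reported
# pattern (blocked: priming in GTU only; mixed: priming in both tasks).
import pandas as pd

trials = pd.DataFrame({
    "task":      ["GTU", "GTU", "GTM", "GTM"] * 2,
    "setting":   ["blocked"] * 4 + ["mixed"] * 4,
    "congruent": [True, False, True, False] * 2,
    "rt_ms":     [612, 655, 598, 601, 620, 660, 605, 648],
})

mean_rt = trials.groupby(["setting", "task", "congruent"])["rt_ms"].mean().unstack("congruent")
# Priming = incongruent minus congruent RT; positive values indicate a
# congruency advantage (faster grasping after a matching tool preview).
priming_ms = mean_rt[False] - mean_rt[True]
print(priming_ms)
```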

    Semantic radical consistency and character transparency effects in Chinese: an ERP study

    BACKGROUND: This event-related potential (ERP) study aims to investigate the representation and temporal dynamics of Chinese orthography-to-semantics mappings by simultaneously manipulating character transparency and semantic radical consistency. Character components, referred to as radicals, make up the building blocks used dur...

    Testing a dynamic account of neural processing: Behavioral and electrophysiological studies of semantic satiation

    In everyday perception, we easily and automatically identify objects. However, there is evidence that this ability results from complicated interactions between levels of perception. An example of hierarchical perception is accessing the meaning of visually presented words through the identification of line segments, letters, lexical entries, and meaning. Studies of word reading demonstrate a dynamic course to identification, producing benefits following brief presentations (excitation) but deficits following longer presentations (habituation). This dissertation investigates hierarchical perception and the role of transient excitatory and habituation dynamics through behavioral and neural studies of word reading. More specifically, the effect of interest is 'semantic satiation', which refers to the gradual loss of meaning when repeating a word. The reported studies test the hypothesis that habituation occurs in the associations between levels. As applied to semantic satiation, this theory supposes that there is not a loss of meaning but, rather, an inability to access meaning from a repeated word. This application was tested in three behavioral experiments using a speeded matching task, demonstrating that meaning is lost when accessing the meaning of a repeated category label, but not when accessing the category through new exemplars, or when the matching task is changed to simple word matching. To model these results, it is assumed that speeded matching results from detection of novel meaning in the target word after presentation of the cue word. This model was tested by examining neural dynamics with MEG recordings. As predicted by semantic satiation through loss of association, repeated cue words produced smaller M170 responses. M400 responses to the cue also diminished, as expected by a hierarchy in which lower levels drive higher levels. If the M400 corresponds to the post-lexical detection of new meaning, this model predicted that the M400 to targets following repeated cues would increase. This unique prediction was confirmed. These results were tested using a new method of analyzing MEG data that can differentiate between changes in response magnitude and differences in activity patterns. By considering hierarchical perception and processing dynamics, this work presents a new understanding of transient habituation and a new interpretation of electrophysiological data.
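
    The distinction drawn above between response magnitude and activity patterns can be made concrete with a short numpy sketch: two conditions may differ in overall amplitude while sharing the same spatial topography, or differ in topography at matched amplitude. The array shapes, latency window, and simulated signals are assumptions for illustration, not the dissertation's data or method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_times = 157, 300
novel = rng.normal(size=(n_sensors, n_times))
# Weaker response with the same spatial pattern, as habituation would predict.
repeated = 0.7 * novel + 0.1 * rng.normal(size=(n_sensors, n_times))

window = slice(150, 200)                      # an M170-like latency window (assumed)
novel_topo = novel[:, window].mean(axis=1)    # spatial topography per condition
repeated_topo = repeated[:, window].mean(axis=1)

magnitude_ratio = np.linalg.norm(repeated_topo) / np.linalg.norm(novel_topo)
pattern_corr = np.corrcoef(novel_topo, repeated_topo)[0, 1]
# A reduced magnitude with a preserved pattern suggests habituation of the
# same underlying response rather than recruitment of a different source.
print(f"magnitude ratio: {magnitude_ratio:.2f}, pattern correlation: {pattern_corr:.2f}")
```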

    Oscillatory neuronal dynamics during lexical-semantic retrieval and integration

    Current models of language processing advocate that word meaning is partially stored in distributed modality-specific cortical networks. However, while much has been done to investigate where information is represented in the brain, the neuronal dynamics underlying how these networks communicate, internally and with each other, are still poorly understood. For example, it is not clear how spatially distributed semantic content is integrated into a coherent conceptual representation. The current thesis investigates how perceptual semantic features are selected and integrated, using oscillatory neuronal dynamics. Cortical oscillations reflect synchronized activity in large neuronal populations and are associated with specific classes of network interactions. The first part of the present thesis addresses how perceptual semantic features are selected in long-term memory. Using electroencephalographic (EEG) recordings, it is demonstrated that retrieving perceptually more complex information is associated with a reduction in oscillatory power, in line with the information-via-desynchronization hypothesis, a recent neurophysiological model of long-term memory retrieval. The second and third parts address how distributed semantic content is integrated and coordinated in the brain. Behavioral evidence suggests that integrating two features of a target word (e.g., whistle) during a dual property verification task incurs an additional processing cost if the features are from different modalities (visual: tiny; auditory: loud) rather than the same modality (visual: tiny, silver). Furthermore, EEG recordings reveal that integrating cross-modal feature pairs is associated with a more sustained low-frequency theta power increase in the left anterior temporal lobe (ATL). The ATL is thought to integrate semantic content from different modalities. In line with this notion, the ATL is shown to communicate with a widely distributed cortical network at the theta frequency. The fourth part of the thesis uses magnetoencephalographic (MEG) recordings to show that, while low-frequency theta oscillations in left ATL are more sensitive to integrating features from different modalities, integrating two features from the same modality induces an early increase in high-frequency gamma power in left ATL and modality-specific regions. These results are in line with a recent framework suggesting that local and long-range network dynamics are reflected in different oscillatory frequencies. The fifth part demonstrates that the connection weights between left ATL and modality-specific regions at the theta frequency are modulated consistently with the content of the word (e.g., visual features enhance connectivity between left ATL and left inferior occipital cortex). The thesis concludes by embedding these results in the context of current neurocognitive models of semantic processing.
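
    The basic quantity behind these analyses, band-limited oscillatory power, can be sketched without any particular EEG toolbox: band-pass filter the signal into the theta range and take the squared Hilbert envelope. The sampling rate, band edges, and simulated channel below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
# Simulated channel: a 6 Hz (theta) oscillation buried in noise.
x = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.default_rng(1).normal(size=t.size)

b, a = butter(4, [4.0, 8.0], btype="bandpass", fs=fs)   # theta band, 4-8 Hz
theta = filtfilt(b, a, x)                    # zero-phase band-pass filtering
power = np.abs(hilbert(theta)) ** 2          # instantaneous theta power envelope
print(f"mean theta power: {power.mean():.3f}")
```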

    Action inhibition and affordances associated with a non-target object: An integrative review

    This article reviews evidence for the special inhibitory mechanisms required to keep response activation related to affordances of a non-target object from evoking responses. This evidence shows that response activation triggered by the affordances of a non-target is automatically inhibited, resulting, for example, in decelerated response speed when the response is compatible with the affordance. The article also highlights the neural processes that differentiate these non-target-related affordance effects from other non-target-related effects, such as the Eriksen flanker effect, which, contrary to these affordance effects, presents decelerated response speed when there is incompatibility between the non-target and the response. The article discusses the role of frontal executive mechanisms in controlling action planning processes in these non-target-related affordance effects. It is also proposed that overlapping inhibition mechanisms prevent the execution of impulsive actions relative to affordances of a target and exaggerate inhibition of response activation triggered by affordances of a non-target.
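
    The diagnostic the review turns on, the sign of the compatibility effect, can be made concrete with a small sketch; the reaction times below are illustrative assumptions, not data from the reviewed studies.

```python
def compatibility_effect(rt_compatible_ms: float, rt_incompatible_ms: float) -> str:
    """Classify the direction of a non-target compatibility effect."""
    effect = rt_incompatible_ms - rt_compatible_ms
    # Positive effect: incompatible slower (flanker-like interference).
    # Negative effect: compatible slower (inhibited non-target affordance).
    kind = ("flanker-like interference" if effect > 0
            else "inhibited non-target affordance effect")
    return f"{effect:+.0f} ms -> {kind}"

print(compatibility_effect(540, 565))  # hypothetical flanker pattern
print(compatibility_effect(560, 542))  # hypothetical affordance pattern
```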

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
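
    For readers unfamiliar with the reported statistic, the sketch below shows (with clearly labeled placeholder scores, not the study's data) that a two-level repeated-measures comparison across five participants reduces to a paired t test, with F(1,4) equal to the squared t(4).

```python
import numpy as np
from scipy.stats import ttest_rel

# Placeholder per-participant accuracies (n = 5), NOT the study's data.
standard = np.array([0.78, 0.81, 0.74, 0.80, 0.77])
shifted = np.array([0.75, 0.79, 0.73, 0.76, 0.78])   # rectangles shifted by ±1 deg

t, p = ttest_rel(standard, shifted)
# For one within-subject factor with two levels, F(1, n-1) = t(n-1)**2.
print(f"t(4) = {t:.3f}, F(1,4) = {t**2:.3f}, p = {p:.3f}")
```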

    Neurolinguistics Research Advancing Development of a Direct-Speech Brain-Computer Interface

    A direct-speech brain-computer interface (DS-BCI) acquires neural signals corresponding to imagined speech, then processes and decodes these signals to produce a linguistic output in the form of phonemes, words, or sentences. Recent research has shown the potential of neurolinguistics to enhance decoding approaches to imagined speech through the inclusion of semantics and phonology in experimental procedures. As neurolinguistics research findings begin to be incorporated within the scope of DS-BCI research, it is our view that a thorough understanding of imagined speech, and its relationship with overt speech, must be considered an integral feature of research in this field. With a focus on imagined speech, we provide a review of the most important neurolinguistics research informing the field of DS-BCI and suggest how this research may be utilized to improve current experimental protocols and decoding techniques. Our review of the literature supports a cross-disciplinary approach to DS-BCI research, in which neurolinguistics concepts and methods are utilized to aid development of a naturalistic mode of communication.
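
    A minimal, generic sketch of the decoding stage described above follows; the feature dimensionality, phoneme labels, and classifier are illustrative assumptions rather than any reviewed system's pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 64))                    # 120 trials x 64 neural features (assumed)
y = rng.choice(["/a/", "/i/", "/u/"], size=120)   # imagined-phoneme labels (assumed)

# Standardize features, then map them to phoneme labels with a linear model.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoder.fit(X[:90], y[:90])
print("held-out decoding accuracy:", decoder.score(X[90:], y[90:]))
```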

    Neural Basis for Priming of Pop-Out during Visual Search Revealed with fMRI

    Maljkovic and Nakayama first showed that visual search efficiency can be influenced by priming effects. Even "pop-out" targets (defined by unique color) are judged more quickly if they appear at the same location and/or in the same color as on the preceding trial, in an unpredictable sequence. Here, we studied the potential neural correlates of such priming in human visual search using functional magnetic resonance imaging (fMRI). We found that repeating either the location or the color of a singleton target led to repetition suppression of blood oxygen level-dependent (BOLD) activity in brain regions traditionally linked with attentional control, including the bilateral intraparietal sulci. This indicates that the attention system of the human brain can be "primed", in apparent analogy to repetition-suppression effects on activity in other neural systems. For repetition of target color but not location, we also found repetition suppression in inferior temporal areas that may be associated with color processing, whereas repetition of target location led to greater reduction of activation in contralateral inferior parietal and frontal areas, relative to color repetition. The frontal eye fields were also implicated, notably when both target properties (color and location) were repeated together, which also led to further BOLD decreases in anterior fusiform cortex not seen when either property was repeated alone. These findings reveal the neural correlates of priming of pop-out search, including commonalities, differences, and interactions between location and color repetition. fMRI repetition-suppression effects may arise in components of the attention network because these settle into a stable "attractor state" more readily when the same target property is repeated than when a different attentional state is required.
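
    The factorial repetition structure underlying these contrasts can be sketched in a few lines: each trial is coded by whether the target's color and/or location repeats from the immediately preceding trial. The trial sequence below is an assumption for illustration.

```python
# Hypothetical singleton-target sequence (color, location index).
trial_sequence = [
    {"color": "red",   "location": 3},
    {"color": "red",   "location": 5},   # color repeat only
    {"color": "green", "location": 5},   # location repeat only
    {"color": "green", "location": 5},   # both repeat
]

for prev, curr in zip(trial_sequence, trial_sequence[1:]):
    color_rep = prev["color"] == curr["color"]
    loc_rep = prev["location"] == curr["location"]
    condition = ("both repeat" if color_rep and loc_rep
                 else "color repeat" if color_rep
                 else "location repeat" if loc_rep
                 else "no repeat")
    print(curr, "->", condition)
```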

    The neuro-cognitive representation of word meaning resolved in space and time.

    One of the core human abilities is that of interpreting symbols. Prompted with a perceptual stimulus devoid of any intrinsic meaning, such as a written word, our brain can access a complex multidimensional representation, called semantic representation, which corresponds to its meaning. Notwithstanding decades of neuropsychological and neuroimaging work on the cognitive and neural substrate of semantic representations, many questions are left unanswered. The research in this dissertation attempts to unravel one of them: are the neural substrates of different components of concrete word meaning dissociated? In the first part, I review the different theoretical positions and empirical findings on the cognitive and neural correlates of semantic representations. I highlight how recent methodological advances, namely the introduction of multivariate methods for the analysis of distributed patterns of brain activity, broaden the set of hypotheses that can be empirically tested. In particular, they allow the exploration of the representational geometries of different brain areas, which is instrumental to the understanding of where and when the various dimensions of the semantic space are activated in the brain. Crucially, I propose an operational distinction between motor-perceptual dimensions (i.e., those attributes of the objects referred to by the words that are perceived through the senses) and conceptual ones (i.e., the information that is built via a complex integration of multiple perceptual features). In the second part, I present the results of the studies I conducted in order to investigate the automaticity of retrieval, topographical organization, and temporal dynamics of motor-perceptual and conceptual dimensions of word meaning. First, I show how the representational spaces retrieved with different behavioral and corpora-based methods (i.e., Semantic Distance Judgment, Semantic Feature Listing, WordNet) appear to be highly correlated and overall consistent within and across subjects. Second, I present the results of four priming experiments suggesting that perceptual dimensions of word meaning (such as implied real world size and sound) are recovered in an automatic but task-dependent way during reading. Third, thanks to a functional magnetic resonance imaging experiment, I show a representational shift along the ventral visual path: from perceptual features, preferentially encoded in primary visual areas, to conceptual ones, preferentially encoded in mid and anterior temporal areas. This result indicates that complementary dimensions of the semantic space are encoded in a distributed yet partially dissociated way across the cortex. Fourth, by means of a study conducted with magnetoencephalography, I present evidence of an early (around 200 ms after stimulus onset) simultaneous access to both motor-perceptual and conceptual dimensions of the semantic space thanks to different aspects of the signal: inter-trial phase coherence appears to be key for the encoding of perceptual dimensions, while spectral power changes appear to support the encoding of conceptual ones. These observations suggest that the neural substrates of different components of symbol meaning can be dissociated in terms of localization and of the feature of the signal encoding them, while sharing a similar temporal evolution.
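
    As an illustration of the representational-geometry comparisons described above, the sketch below builds representational dissimilarity matrices (RDMs) for two feature spaces over the same word set and correlates them with Spearman's rho, a conventional choice for comparing RDMs; the feature matrices are simulated assumptions, not the dissertation's data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
space_a = rng.normal(size=(20, 50))                  # e.g., feature-listing space (simulated)
space_b = space_a + 0.5 * rng.normal(size=(20, 50))  # a noisy, related space (simulated)

rdm_a = pdist(space_a, metric="correlation")         # condensed (upper-triangle) RDM
rdm_b = pdist(space_b, metric="correlation")
rho, p = spearmanr(rdm_a, rdm_b)
print(f"RDM agreement: Spearman rho = {rho:.2f} (p = {p:.3g})")
```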