53 research outputs found

    Imagery or meaning? Evidence for a semantic origin of category-specific brain activity in metabolic imaging

    Category-specific brain activation distinguishing between semantic word types has posed challenges to theories of semantic representations and processes. However, existing metabolic imaging data remain ambiguous about whether these category-specific activations reflect processes involved in accessing the semantic representation of the stimuli, or secondary processes such as deliberate mental imagery. Further information about the response characteristics of category-specific activation is therefore required. Our study investigated, for the first time, the differential impact of word frequency on functional magnetic resonance imaging (fMRI) responses to action-related words and visually related words, respectively. First, we corroborated previous results showing that action-relatedness modulates neural responses in action-related areas, while word imageability modulates activation in object-processing areas. Second, we provide novel results showing that activation negatively correlated with word frequency in the left fusiform gyrus was specific to visually related words, while in the left middle temporal gyrus word frequency effects emerged only for action-related words. Following the dominant view in the literature that effects of word frequency mainly reflect access to lexico-semantic information, we suggest that category-specific brain activation reflects distributed neuronal ensembles, which ground language and concepts in perception-action systems of the human brain. Our approach can be applied to any event-related data using single-stimulus presentation, and it allows a detailed characterization of the functional role of category-specific activation patterns.

    Spatial updating in narratives.

    Across two experiments, we investigated spatial updating in environments encoded through narratives. In Experiment 1, in which participants were given visualization instructions to imagine the protagonist’s movement, they formed an initial representation during learning but did not update it during subsequent described movement. In Experiment 2, in which participants were instructed to physically move in space towards the directions of the described objects prior to testing, there was evidence for spatial updating. Overall, the findings indicate that physical movement can cause participants to link a spatial representation of a remote environment to a sensorimotor framework and to update the locations of remote objects as they move.

    Putting Words in Perspective.

    This article explores the nature of the conceptual knowledge retrieved when people use words to think about objects. Suppose that conceptual knowledge is used to simulate and guide action in the world. If so, then how one can interact with an object should be reflected in the speed of retrieval and the content that is retrieved. This prediction was tested in three experiments that used a part verification procedure. Experiments 1 and 2 demonstrated that speed of part verification varied with the perspective imposed on the object by the language used to name the object (e.g., "You are driving a car" or "You are fueling a car"). In Experiment 3, parts were chosen so that actions directed toward them (on the real object) require movement upward (e.g., the roof of a car) or downward (e.g., the wheels of a car). Orthogonally, responding "yes" required either an upward or a downward movement to a response button. Responding in a direction incompatible with the part location (e.g., responding downward to verify that a car has a roof) was slow relative to responding in a direction compatible with the part location. These results provide a strong link between conceptual knowledge and situated action.

    Language-induced motor activity in bimanual object lifting.

    Language comprehension requires a simulation process that taps perception and action systems. How specific is this simulation? To address this question, participants listened to sentences referring to the lifting of light or heavy objects (e.g., pillow or chest, respectively). Then they lifted one of two boxes that were visually identical, but one was light and the other heavy. We focused on the kinematics of the initial lift (rather than reaching) because it is mostly shaped by proprioceptive features derived from weight that cannot be visually determined. Participants were slower when the weight suggested by the sentence and the weight of the box corresponded. This effect indicates that language can activate a simulation which is sensitive to intrinsic properties such as weight.

    Not Propositions

    Current computational accounts of meaning in the cognitive sciences are based on abstract, amodal symbols (e.g., nodes, links, propositions) that are arbitrarily related to their referents. We argue that such accounts lack convincing empirical support and that they do not provide a satisfactory account of linguistic meaning. One historic set of results supporting the abstract symbol view has come from investigation into comprehension of negated sentences, such as “The buttons are not black.” These sentences are presumed to be understood as two propositions composed of abstract symbols. One proposition corresponds to “the buttons are black,” and it is embedded in another proposition corresponding to “it is not true.” Thus, the propositional account predicts (a) that comprehension of negated sentences should take longer than comprehension of the corresponding positive sentences (because of the time needed to construct the embedding), but (b) that the resulting embedded propositions are informationally equivalent (though of opposite valence) to the simple propositions underlying the positive sentences. Contrary to these predictions, Experiment 1 demonstrates that negated sentences out of context are interpreted as situationally ambiguous, that is, as conveying less specific information than positive sentences. Furthermore, Experiment 2 demonstrates that when negated sentences are used in an appropriate context, readers do not take longer to understand them. Thus, the difficulty with negation is demonstrated to be an artifact of presentation out of context. After discussing other serious problems with the use of abstract symbols, we describe the Indexical Hypothesis. This embodied account of meaning does not depend on abstract symbols, and hence it provides a more satisfactory account of meaning.

    Crossmodal Rhythm Perception


    Emotion simulation during language comprehension

    We report a novel finding on the relation between emotion and language. Covert manipulation of emotional facial posture interacts with sentence valence when measuring the time taken to judge the valence (Experiment 1) and sensibility (Experiment 2) of a sentence. In each case, an emotion-sentence compatibility effect is found: judgment times are faster when facial posture and sentence valence match than when they mismatch. We interpret the finding using a simulation account; that is, emotional systems contribute to language comprehension much as they do in social interaction. Because the effect was not observed in a lexical decision task using emotion-laden words (Experiment 3), we suggest that emotion simulation affects comprehension processes beyond initial lexical access.

    Grounding language in bodily states: The case for emotion
