
    Insensitivity of visual short-term memory to irrelevant visual information

    Several authors have hypothesised that visuo-spatial working memory is functionally analogous to verbal working memory. Irrelevant background speech impairs verbal short-term memory. We investigated whether irrelevant visual information has an analogous effect on visual short-term memory, using a dynamic visual noise (DVN) technique known to disrupt visual imagery (Quinn & McConnell, 1996a). Experiment 1 replicated the effect of DVN on pegword imagery. Experiments 2 and 3 showed no effect of DVN on recall of static matrix patterns, despite a significant effect of a concurrent spatial tapping task. Experiment 4 showed no effect of DVN on encoding or maintenance of arrays of matrix patterns, despite testing memory by a recognition procedure to encourage visual rather than spatial processing. Serial position curves showed a one-item recency effect typical of visual short-term memory. Experiment 5 showed no effect of DVN on short-term recognition of Chinese characters, despite effects of visual similarity and a concurrent colour memory task that confirmed visual processing of the characters. We conclude that irrelevant visual noise does not impair visual short-term memory. Visual working memory may not be functionally analogous to verbal working memory, and different cognitive processes may underlie visual short-term memory and visual imagery.

    Three symbol ungrounding problems: Abstract concepts and the future of embodied cognition

    A great deal of research has focused on the question of whether or not concepts are embodied as a rule. Supporters of embodiment have pointed to studies that implicate affective and sensorimotor systems in cognitive tasks, while critics of embodiment have offered nonembodied explanations of these results and pointed to studies that implicate amodal systems. Abstract concepts have tended to be viewed as an important test case in this polemical debate. This essay argues that we need to move beyond a pretheoretical notion of abstraction. Against the background of current research and theory, abstract concepts do not pose a single, unified problem for embodied cognition but, instead, three distinct problems: the problem of generalization, the problem of flexibility, and the problem of disembodiment. Identifying these problems provides a conceptual framework for critically evaluating, and perhaps improving upon, recent theoretical proposals.

    Is there a semantic system for abstract words?

    Two views on the semantics of concrete words are that their core mental representations are feature-based or are reconstructions of sensory experience. We argue that neither of these approaches is capable of representing the semantics of abstract words, which involve the representation of possibly hypothetical physical and mental states, the binding of entities within a structure, and the possible use of embedding (or recursion) in such structures. Brain-based evidence in the form of dissociations between deficits related to concrete and abstract semantics corroborates the hypothesis. Neuroimaging evidence suggests that left lateral inferior frontal cortex supports those processes responsible for the representation of abstract words.

    Combining Language and Vision with a Multimodal Skip-gram Model

    We extend the SKIP-GRAM model of Mikolov et al. (2013a) by taking visual information into account. Like SKIP-GRAM, our multimodal models (MMSKIP-GRAM) build vector-based word representations by learning to predict linguistic contexts in text corpora. However, for a restricted set of words, the models are also exposed to visual representations of the objects they denote (extracted from natural images), and must predict linguistic and visual features jointly. The MMSKIP-GRAM models achieve good performance on a variety of semantic benchmarks. Moreover, since they propagate visual information to all words, we use them to improve image labeling and retrieval in the zero-shot setup, where the test concepts are never seen during model training. Finally, the MMSKIP-GRAM models discover intriguing visual properties of abstract words, paving the way to realistic implementations of embodied theories of meaning. (Comment: accepted at NAACL 2015, camera-ready version, 11 pages.)
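
    To make the joint linguistic-visual objective concrete, the following is a minimal NumPy sketch of the visual term in an MMSKIP-GRAM-style model: a learned linear map projects a word vector into visual space, and a max-margin ranking loss pulls the projection toward the image feature of the denoted object and away from a random image. The toy dimensions, hyperparameters, and variable names are illustrative assumptions, not the paper's settings (dot-product similarities are used here for simplicity), and the standard skip-gram linguistic term is omitted.

```python
# Sketch of the visual max-margin term of a multimodal skip-gram model.
# All sizes, data, and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

dim_word, dim_vis = 50, 100   # embedding sizes (assumed)
margin, lr = 0.5, 0.01

# Toy parameters: word embeddings and the word->vision projection matrix.
word_vecs = rng.normal(scale=0.1, size=(1000, dim_word))
M = rng.normal(scale=0.1, size=(dim_word, dim_vis))

def visual_loss_and_grads(w_idx, v_pos, v_neg):
    """Hinge loss pushing the projected word vector closer to the image
    feature of its object (v_pos) than to a random image (v_neg)."""
    u = word_vecs[w_idx]
    p = u @ M                       # project word into visual space
    loss = max(0.0, margin - p @ v_pos + p @ v_neg)
    if loss == 0.0:
        return 0.0, None, None
    # Gradients of the active hinge w.r.t. the word vector and the map M.
    grad_u = (v_neg - v_pos) @ M.T
    grad_M = np.outer(u, v_neg - v_pos)
    return loss, grad_u, grad_M

# One illustrative SGD update for a word paired with an image feature.
v_pos = rng.normal(size=dim_vis)    # stand-in for a real CNN image feature
v_neg = rng.normal(size=dim_vis)    # feature of a randomly sampled image
loss, g_u, g_M = visual_loss_and_grads(w_idx=42, v_pos=v_pos, v_neg=v_neg)
if g_u is not None:
    word_vecs[42] -= lr * g_u
    M -= lr * g_M
print(f"visual hinge loss: {loss:.3f}")
```

    Because only the word vector and the shared projection are updated, words never paired with an image still benefit: visual information propagates through the jointly trained embedding space, which is what enables the zero-shot labeling and retrieval described above.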

    Neural overlap of L1 and L2 semantic representations across visual and auditory modalities: a decoding approach

    This study investigated whether brain activity in Dutch-French bilinguals during semantic access to concepts from one language could be used to predict neural activation during access to the same concepts from another language, in different language modalities/tasks. This was tested using multi-voxel pattern analysis (MVPA), within and across language comprehension (word listening and word reading) and production (picture naming). It was possible to identify the picture or word named, read or heard in one language (e.g. maan, meaning moon) based on the brain activity in a distributed bilateral brain network while, respectively, naming, reading or listening to the picture or word in the other language (e.g. lune). The brain regions identified differed across tasks. During picture naming, brain activation in the occipital and temporal regions allowed concepts to be predicted across languages. During word listening and word reading, across-language predictions were observed in the rolandic operculum and several motor-related areas (pre- and postcentral gyri, and the cerebellum). In addition, across-language predictions during reading were identified in regions typically associated with semantic processing (left inferior frontal, middle temporal cortex, right cerebellum and precuneus) and visual processing (inferior and middle occipital regions and calcarine sulcus). Furthermore, across modalities and languages, the left lingual gyrus showed semantic overlap across production and word reading. These findings support the idea of at least partially language- and modality-independent semantic neural representations.
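
    The core cross-language decoding logic can be illustrated with a short, hypothetical scikit-learn sketch: a classifier trained on voxel patterns evoked by concepts in one language is tested on patterns for the same concepts in the other language. The synthetic data, array shapes, and classifier choice below are assumptions standing in for real fMRI beta maps and the study's actual MVPA pipeline.

```python
# Hedged sketch of cross-language MVPA decoding with synthetic "voxel" data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_concepts, n_trials, n_voxels = 5, 20, 200

# Simulate concept-specific voxel patterns shared across languages,
# plus trial-level noise (the "partially shared code" scenario).
prototypes = rng.normal(size=(n_concepts, n_voxels))

def simulate(noise_scale):
    X = np.vstack([p + noise_scale * rng.normal(size=(n_trials, n_voxels))
                   for p in prototypes])
    y = np.repeat(np.arange(n_concepts), n_trials)
    return X, y

X_dutch, y_dutch = simulate(1.0)    # e.g. patterns while hearing "maan"
X_french, y_french = simulate(1.0)  # e.g. patterns while naming "lune"

# Train on one language, test on the other.
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_dutch, y_dutch)
acc = clf.score(X_french, y_french)
print(f"across-language decoding accuracy: {acc:.2f} "
      f"(chance = {1 / n_concepts:.2f})")
```

    Above-chance accuracy in the held-out language is the signature of a shared semantic code; restricting the voxels to one region at a time (searchlight-style) is how region-specific conclusions like those above are typically reached.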

    Verbal paired associates and the hippocampus: The role of scenes

    It is widely agreed that patients with bilateral hippocampal damage are impaired at binding pairs of words together. Consequently, the verbal paired associates (VPA) task has become emblematic of hippocampal function. This VPA deficit is not well understood and is particularly difficult for hippocampal theories with a visuospatial bias to explain (e.g., cognitive map and scene construction theories). Resolving the tension among hippocampal theories concerning the VPA could be important for leveraging a fuller understanding of hippocampal function. Notably, VPA tasks typically use high imagery concrete words and so conflate imagery and binding. To determine why VPA engages the hippocampus, we devised an fMRI encoding task involving closely matched pairs of scene words, pairs of object words, and pairs of very low imagery abstract words. We found that the anterior hippocampus was engaged during processing of both scene and object word pairs in comparison to abstract word pairs, despite binding occurring in all conditions. This was also the case when just subsequently remembered stimuli were considered. Moreover, for object word pairs, fMRI activity patterns in anterior hippocampus were more similar to those for scene imagery than object imagery. This was especially evident in participants who were high imagery users and not in mid and low imagery users. Overall, our results show that hippocampal engagement during VPA, even when object word pairs are involved, seems to be evoked by scene imagery rather than binding. This may help to resolve the issue that visuospatial hippocampal theories have in accounting for verbal memory.
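
    The pattern-similarity comparison reported here can be sketched, under stated assumptions, as a simple correlation analysis: the voxel pattern evoked by object word pairs is correlated with template patterns for scene versus object imagery. All data below are synthetic placeholders; the study's analysis used real anterior-hippocampal fMRI patterns.

```python
# Rough sketch of comparing a condition's voxel pattern against two
# imagery templates via Pearson correlation. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 300

scene_template = rng.normal(size=n_voxels)
object_template = rng.normal(size=n_voxels)

# Simulate an object-word-pair pattern that leans toward scene imagery.
object_pair_pattern = (0.6 * scene_template + 0.2 * object_template
                       + rng.normal(scale=0.5, size=n_voxels))

r_scene = np.corrcoef(object_pair_pattern, scene_template)[0, 1]
r_object = np.corrcoef(object_pair_pattern, object_template)[0, 1]
print(f"similarity to scene imagery:  r = {r_scene:.2f}")
print(f"similarity to object imagery: r = {r_object:.2f}")
```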

    The self-reference effect on memory in early childhood

    The self-reference effect in memory is the advantage for information encoded about self, relative to other people. The early development of this effect was explored here using a concrete encoding paradigm. Trials comprised presentation of a self- or other-image paired with a concrete object. In Study 1, 4- to 6-year-old children (N = 53) were asked in each trial whether the child pictured would like the object. Recognition memory showed an advantage for self-paired objects. Study 2 (N = 55) replicated this finding in source memory. In Study 3 (N = 56), participants simply indicated object location. Again, recognition and source memory showed an advantage for self-paired items. These findings are discussed with reference to mechanisms that ensure information of potential self-relevance is reliably encoded.

    Visual object imagery and autobiographical memory: object imagers are better at remembering their personal past

    In the present study we examined whether higher levels of object imagery, a stable characteristic reflecting the ability and preference for generating pictorial mental images of objects, facilitate involuntary and voluntary retrieval of autobiographical memories (ABMs). Individuals with high (High-OI) and low (Low-OI) levels of object imagery were asked to perform an involuntary and a voluntary ABM task in the laboratory. Results showed that High-OI participants generated more involuntary and voluntary ABMs than Low-OI participants, with faster retrieval times. High-OI participants also reported more detailed memories than Low-OI participants and retrieved their memories as visual images. Theoretical implications of these findings for research on voluntary and involuntary ABMs are discussed.