
    Language and memory for object location

    In three experiments, we investigated the influence of two types of language on memory for object location: demonstratives (this, that) and possessives (my, your). Participants first read instructions containing demonstratives/possessives to place objects at different locations, and then had to recall those object locations (following object removal). Experiments 1 and 2 tested contrasting predictions of two possible accounts of the effect of language on object location memory: the Expectation Model (Coventry, Griffiths, & Hamilton, 2014) and the congruence account (Bonfiglioli, Finocchiaro, Gesierich, Rositani, & Vescovi, 2009). In Experiment 3, the role of attention allocation as a possible mechanism was investigated. Results across all three experiments show striking effects of language on object location memory, with the pattern of data supporting the Expectation Model. In this model, the expected location cued by language and the actual location are concatenated, leading to (mis)memory for object location, consistent with models of predictive coding (Bar, 2009; Friston, 2003).
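The abstract does not give a computational implementation, but the Expectation Model's core claim, that remembered location combines the actual location with the language-cued expected location, can be sketched as a simple weighted blend. The function name and the weight `w` below are hypothetical illustrations, not the authors' model:

```python
def remembered_location(actual, expected, w=0.2):
    """Blend the actual object location with the expected location
    cued by language; w is a hypothetical weight on expectation."""
    return (1 - w) * actual + w * expected

# An object placed at 100 cm, with "that" cueing a farther expected
# location of 150 cm, is misremembered as shifted toward it:
shifted = remembered_location(100.0, 150.0, w=0.2)
```

With these illustrative numbers the recalled position drifts from 100 cm toward the expected 150 cm, which is the kind of systematic (mis)memory the experiments report.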

    The influence of language on spatial memory and visual attention

    This thesis examines the relationship between language and non-linguistic processes. The experimental work presented focuses on the influence of language on two non-linguistic processes: spatial memory and visual attention. In the first series of experiments, the influence of spatial demonstratives (this/that) and possessives (my/your) on memory for object location was examined in four experiments, using an adapted version of the memory game procedure (Coventry et al., 2008, 2014). The experiments were designed to test between different models of how language affects memory: the Expectation model, the Congruence model, and the Attention-allocation model. Across this series of experiments, our data support the Expectation model, which suggests, consistent with models of predictive coding (cf. Lupyan & Clark, 2015), that memory for object location is a concatenation of the actual location and the expected location. The expectation of a location can be elicited by language use (e.g., demonstrative or possessive pronouns). The second series of experiments examined demonstratives and memory in English and Japanese. We chose Japanese because it purportedly employs a three-demonstrative system, compared to the binary system of English (this, that). Three-way systems can be used to explicitly encode parameters that are not encoded in English, for example the position of a conspecific. In four experiments, we tested whether a system as different from English as the Japanese demonstrative system has a similar influence on non-linguistic cognition. To this aim, we first had to establish experimentally which parameters are encoded in the Japanese demonstrative system. Second, we tested how this three-term demonstrative system behaved in light of the Expectation model. The idea that Japanese demonstratives encode the position of a conspecific, which we confirmed in this study, poses an interesting problem for the Expectation model.
The Expectation model works via the idea of an expected location, but the expected location calculated from the speaker's perspective contradicts the expected location calculated from the hearer's perspective. Our memory data did not completely support any of the current models. However, interestingly, the position effect found in Japanese was also apparent in English. This might suggest that demonstrative pronoun systems, despite appearing different, could be based on universal mechanisms. However, the effects we found were stronger in Japanese, suggesting that the weight of a parameter (such as position) might be influenced by whether or not a language explicitly encodes that parameter. In the last experiment, we considered the influence of language on visual attention. Specifically, we examined whether language expressing different spatial frames of reference affects how people look at visual scenes. The results showed different eye-movement patterns for different frames of reference (i.e., intrinsic vs. relative). These eye-movement signatures were consistent with participants' verbal descriptions and persisted throughout the trials. We show for the first time that different reference frames, expressed in language, elicit distinguishable eye-movement patterns. The work presented in this thesis shows effects of language on memory for object location and on visual attention. Effects of language on memory for object location were consistent with models of predictive coding. Furthermore, despite the fact that English and Japanese employ different demonstrative systems, results for both languages were remarkably similar. These results could indicate universal parameters underlying demonstrative systems, with parameters perhaps differentially weighted as a function of whether or not they are explicitly encoded in a language. Finally, we showed that spatial language (prepositions) guides visual attention.
To our knowledge, this is the first time frames of reference have been associated with identifiable eye-movement patterns. The results are discussed and situated in the current literature, with theoretical implications and directions for future research highlighted.
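The intrinsic/relative distinction the thesis tests can be made concrete with a toy computation. The sketch below is purely illustrative (the function, point coordinates, and dot-product criterion are assumptions, not the thesis's method): under an intrinsic frame, "in front of" is judged from the ground object's own facing direction; under a relative frame, from the viewer's line of sight.

```python
def in_front_of(figure, ground, ground_facing, viewer, frame):
    """Decide whether `figure` is 'in front of' `ground` under an
    intrinsic frame (ground's own facing direction) or a relative
    frame (the region between viewer and ground). 2-D points."""
    fx, fy = figure[0] - ground[0], figure[1] - ground[1]
    if frame == "intrinsic":
        dx, dy = ground_facing                       # ground's front side
    else:                                            # relative frame
        dx, dy = viewer[0] - ground[0], viewer[1] - ground[1]
    return fx * dx + fy * dy > 0                     # positive projection

# A chair at the origin facing east; a viewer standing to the south.
# A ball east of the chair is "in front" intrinsically, but not
# relative to the viewer:
ball, chair, viewer = (1, 0), (0, 0), (0, -5)
intrinsic = in_front_of(ball, chair, (1, 0), viewer, "intrinsic")
relative = in_front_of(ball, chair, (1, 0), viewer, "relative")
```

The same scene thus yields different "in front of" judgements under the two frames, which is why the frame a speaker adopts can plausibly drive different eye-movement patterns.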

    Geometric and Extra-Geometric Spatial Conceptualisation: A cross-linguistic and non-verbal perspective

    Almost all past empirical work exploring the Functional Geometric Framework (FGF) proposed by Coventry and Garrod (2004) for spatial language use has been based on a single language - English. Therefore the extent to which the framework applies across languages has not been established. The current thesis investigated whether geometric and extra-geometric factors affect production and comprehension of spatial language across three languages: English, Finnish, and Spanish. Eight cross-linguistic appropriateness rating studies identified similarities and differences in the factors that underlie our verbal conceptualisation of space across three classes of spatial relations/terms: 1) topological relations (e.g., in/on), 2) vertical-axis projective terms (e.g., above/below), and 3) horizontal-axis projective terms (e.g., in front of/behind), and their Finnish and Spanish counterparts. There was support for the FGF cross-linguistically, and many of the results were in line with what has previously been found in research on English, although extra-geometric factors, such as conceptual knowledge and dynamic-kinematic routines, were revealed to often have different weightings in different languages. Given the importance of extra-geometric factors across languages, the second part of the thesis asks whether extra-geometric factors also influence (non-linguistic) memory for spatial object relations. This question was addressed by two non-verbal spatial memory experiments, which revealed that this was the case in some circumstances. Horizontal shifts in position by a potentially horizontally mobile object were more accurately remembered in specific conditions, i.e. when the located object was positioned along the diagonal axes of the reference object rather than the cardinal axes, and when the movement was against the direction of expected movement of the located object.
However, location memory for vertical shifts in position was not affected in this way by potentially vertically mobile objects in any circumstances. In the closing chapter of the thesis, the generalisability of the FGF for cross-linguistic and non-linguistic relations is discussed.
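The FGF's central idea, that acceptability of a spatial term reflects both geometric and extra-geometric (functional) factors, can be caricatured as a weighted combination. The function, example values, and weights below are hypothetical illustrations; the thesis's finding that weightings differ across languages would correspond to different weights per language:

```python
def acceptability(geometric_fit, functional_fit, w_geom=0.5, w_func=0.5):
    """Hypothetical FGF-style rating: combine geometric fit (how well
    the figure sits in the term's spatial region) with extra-geometric
    functional fit (e.g., whether an umbrella 'over' a man actually
    blocks the rain)."""
    return w_geom * geometric_fit + w_func * functional_fit

# Perfect geometry but poor function (rain reaches the man) lowers
# the rating of "the umbrella is over the man":
rating = acceptability(geometric_fit=1.0, functional_fit=0.3,
                       w_geom=0.6, w_func=0.4)
```

Cross-linguistic differences in how much function matters would then show up as, say, a larger `w_func` for one language's counterpart term than another's.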

    Neural blackboard architectures of combinatorial structures in cognition

    Human cognition is unique in the way in which it relies on combinatorial (or compositional) structures. Language provides ample evidence for the existence of combinatorial structures, but they can also be found in visual cognition. To understand the neural basis of human cognition, it is therefore essential to understand how combinatorial structures can be instantiated in neural terms. In his recent book on the foundations of language, Jackendoff described four fundamental problems for a neural instantiation of combinatorial structures: the massiveness of the binding problem, the problem of 2, the problem of variables, and the transformation of combinatorial structures from working memory to long-term memory. This paper aims to show that these problems can be solved by means of neural ‘blackboard’ architectures. For this purpose, a neural blackboard architecture for sentence structure is presented. In this architecture, neural structures that encode for words are temporarily bound in a manner that preserves the structure of the sentence. It is shown that the architecture solves the four problems presented by Jackendoff. The ability of the architecture to instantiate sentence structures is illustrated with examples of sentence complexity observed in human language performance. Similarities exist between the architecture for sentence structure and blackboard architectures for combinatorial structures in visual cognition, derived from the structure of the visual cortex. These architectures are briefly discussed, together with an example of a combinatorial structure in which the blackboard architectures for language and vision are combined. In this way, the architecture for language is grounded in perception.
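The key mechanism, temporary binding that preserves sentence structure, can be illustrated with a toy (non-neural) blackboard. This sketch is an assumption-laden caricature of the idea, not the paper's neural architecture: giving each word *occurrence* its own structure node is what lets the same word fill two roles, Jackendoff's "problem of 2":

```python
class Blackboard:
    """Toy blackboard: each occurrence of a word is temporarily bound
    to a fresh structure node labelled with a syntactic role, so the
    sentence's structure is preserved even when a word repeats."""
    def __init__(self):
        self.bindings = []          # (node_id, role, word) triples
        self._next_node = 0

    def bind(self, role, word):
        node = self._next_node      # fresh node per word occurrence
        self._next_node += 1
        self.bindings.append((node, role, word))
        return node

    def words_in_role(self, role):
        return [w for (_, r, w) in self.bindings if r == role]

# "The dog chased the dog": the two tokens of 'dog' get distinct
# nodes, so subject and object bindings do not collide.
bb = Blackboard()
bb.bind("subject", "dog")
bb.bind("verb", "chased")
bb.bind("object", "dog")
```

In the paper's terms the nodes would be neural assemblies and the bindings transient connection states, but the structural point is the same: roles are kept apart even when fillers repeat.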

    Semantic indeterminacy in object relative clauses

    This article examined whether semantic indeterminacy plays a role in comprehension of complex structures such as object relative clauses. Study 1 used a gated sentence completion task to assess which alternative interpretations are dominant as the relative clause unfolds; Study 2 compared reading times in object relative clauses containing different animacy configurations to unambiguous passive controls; and Study 3 related the completion data and the reading data. The results showed that comprehension difficulty was modulated by animacy configuration and voice (active vs. passive). These differences were well correlated with the availability of alternative interpretations as the relative clause unfolds, as revealed by the completion data. In contrast to approaches arguing that comprehension difficulty stems from syntactic complexity, these results suggest that semantic indeterminacy is a major source of comprehension difficulty in object relative clauses. Results are consistent with constraint-based approaches to ambiguity resolution and bring new insights into previously identified sources of difficulty.

    The Meaning of Memory Safety

    We give a rigorous characterization of what it means for a programming language to be memory safe, capturing the intuition that memory safety supports local reasoning about state. We formalize this principle in two ways. First, we show how a small memory-safe language validates a noninterference property: a program can neither affect nor be affected by unreachable parts of the state. Second, we extend separation logic, a proof system for heap-manipulating programs, with a memory-safe variant of its frame rule. The new rule is stronger because it applies even when parts of the program are buggy or malicious, but also weaker because it demands a stricter form of separation between parts of the program state. We also consider a number of pragmatically motivated variations on memory safety and the reasoning principles they support. As an application of our characterization, we evaluate the security of a previously proposed dynamic monitor for memory safety of heap-allocated data. (POST'18 final version)
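The noninterference reading of memory safety, that a program can neither affect nor be affected by unreachable state, can be simulated in a few lines. The heap encoding, function names, and "program" below are illustrative assumptions (the paper works with a formal language and separation logic, not Python): a program that only ever writes through pointers reachable from its roots necessarily leaves unreachable cells untouched.

```python
def reachable(heap, roots):
    """Addresses reachable from `roots` by following pointer fields.
    heap: {addr: [values]}; a value that is an int and a heap key
    is treated as a pointer."""
    seen, stack = set(), list(roots)
    while stack:
        addr = stack.pop()
        if addr in seen or addr not in heap:
            continue
        seen.add(addr)
        stack.extend(v for v in heap[addr] if isinstance(v, int))
    return seen

def run_safe(heap, roots, new_value):
    """A toy 'memory-safe program': it may write only to cells it can
    reach from its roots, so unreachable state is never disturbed."""
    for addr in reachable(heap, roots):
        heap[addr] = [new_value]

# Cell 2 is unreachable from root 0, so it survives unchanged:
heap = {0: [1], 1: ["x"], 2: ["secret"]}
run_safe(heap, [0], "y")
```

This is exactly the local-reasoning payoff: to verify `run_safe`, one never needs to mention cell 2, which is what the paper's strengthened frame rule makes precise.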

    Perceptual Pluralism

    Perceptual systems respond to proximal stimuli by forming mental representations of distal stimuli. A central goal for the philosophy of perception is to characterize the representations delivered by perceptual systems. It may be that all perceptual representations are in some way proprietarily perceptual and differ from the representational format of thought (Dretske 1981; Carey 2009; Burge 2010; Block ms.). Or it may instead be that perception and cognition always trade in the same code (Prinz 2002; Pylyshyn 2003). This paper rejects both approaches in favor of perceptual pluralism, the thesis that perception delivers a multiplicity of representational formats, some proprietary and some shared with cognition. The argument for perceptual pluralism marshals a wide array of empirical evidence in favor of iconic (i.e., image-like, analog) representations in perception as well as discursive (i.e., language-like, digital) perceptual object representations.