
    Anticipation, event plausibility and scene constraints: Evidence from eye movements

    We often use language to refer to items within our immediate proximity, whereby the constraints of the visual context serve to restrict the number of possible referents, making it easier to anticipate which item will most likely be referred to next. However, we also use language to refer to past, future, or even imagined events. In such cases, anticipation is no longer restricted by the visual context and may instead be influenced by real-world knowledge. In a set of eye-tracking experiments we explored the mapping of language onto internal representations of visually available scenes, as well as previously viewed scenes. We were interested, first, in how event plausibility influences our internal representations of described events and, second, in how these representations might be modulated by the nature of the visual context (present or absent). Our findings showed that when events were described in the context of a concurrent scene, eye movement patterns during the unfolding language indicated that participants anticipated both plausible and implausible items. However, when the visual scene was removed immediately before the onset of spoken language, participants anticipated plausible items but not implausible items – only by providing a more constraining linguistic context did we find anticipatory looks to the implausible items. This suggests that in the absence of a visual context we require a more constraining linguistic context to achieve the same degree of constraint provided by a concurrent visual scene. We conclude that the conceptual representations activated during language processing in a concurrent visual context are quantitatively different from those activated when the visual context to which that language applies is absent.

    Effects of word-evoked object size on covert numerosity estimations.

    We investigated whether the size and number of objects mentioned in digit-word expressions influenced participants' performance in covert numerosity estimations (i.e., property probability ratings). Participants read descriptions of big or small animals standing in short, medium, and long rows (e.g., There are 8 elephants/ants in a row) and subsequently estimated the probability that a health statement about them was true (e.g., All elephants/ants are healthy). Statements about large animals scored lower than statements about small animals, confirming classical findings that humans perceive groups of large objects as more numerous than groups of small objects (Binet, 1890) and suggesting that object size effects in covert numerosity estimations are particularly robust. Statements about longer rows also scored lower than statements about shorter rows (cf. Sears, 1983), but no interaction between the factors obtained, suggesting that quantity information is not fully retrieved in digit-word expressions or that their values are processed separately.

    Gestalt Reasoning with Conjunctions and Disjunctions.

    Reasoning, solving mathematical equations, or planning written and spoken sentences must all factor in the perceptual properties of stimuli. Indeed, thinking processes are inspired by and subsequently fitted to concrete objects and situations. It is therefore reasonable to expect that the mental representations evoked when people solve these seemingly abstract tasks should interact with the properties of the manipulated stimuli. Here, we investigated the mental representations evoked by conjunction and disjunction expressions in language-picture matching tasks. We hypothesised that, if these representations are derived using key Gestalt principles, reasoners should use perceptual compatibility to gauge the goodness of fit between conjunction/disjunction descriptions (e.g., "the purple and/or the green") and corresponding binary visual displays. Indeed, the results of three experimental studies demonstrate that reasoners associate conjunction descriptions with perceptually dependent stimuli and disjunction descriptions with perceptually independent stimuli, where visual dependency status follows the key Gestalt principles of common fate, proximity, and similarity.

    Logical Connectives Modulate Attention to Simulations Evoked by the Constituents They Link Together

    In previous studies investigating logical-connective simulations, participants focused their attention on verifying truth-condition satisfaction for connective expressions describing visual stimuli (e.g., Dumitru, 2014; Dumitru and Joergensen, 2016). Here, we sought to replicate and extend the findings that conjunction and disjunction simulations are structured as one and two Gestalts, respectively, by using language-picture matching tasks in which participants focused their attention exclusively on the visuospatial properties of the stimuli. Three studies evaluated perceptual compatibility effects between visual displays varying stimulus direction, size, and orientation, and basic sentences featuring the logical connectives AND, OR, BUT, IF, ALTHOUGH, BECAUSE, and THEREFORE (e.g., "There is blue AND there is red"). Response times highlight correlations between the Gestalt arity of connective simulations and visual attention patterns, such that words referring to constituents in the same Gestalt were matched faster to visual stimuli displayed sequentially rather than alternately, having the same size rather than different sizes, and being oriented along axes other than horizontal. The results also highlight attentional patterns orthogonal to Gestalt arity: visual stimuli corresponding to simulation constituents were processed faster when they appeared onscreen from left to right than from right to left, when they were emphasized or de-emphasized together (i.e., faster processing of all-small or all-large stimulus pairs), and when they formed a downward-oriented diagonal, which signals a simulation boundary. More generally, our findings suggest that logical connectives rapidly evoke simulations that trigger top-down attention patterns over the grouping and properties of the visual stimuli corresponding to the constituents they link together.

    Average response times and average 'yes' responses across conditions in Experiment 1 (A & B), in Experiment 2 (C & D), and in Experiment 3 (E & F).

    Error bars represent 95% confidence intervals. Response times were lowest and accuracy scores highest for conjunctions in one-Gestalt conditions. Conversely, response times were lowest and accuracy scores highest for disjunctions in two-Gestalt conditions.

    Example of a sequence of events on a typical trial for each condition in each experiment.

    Experiment 1 (A) investigated the Gestalt principle of common fate: Disks were either removed alternately (left side panels) or simultaneously (right side panels). Experiment 2 (B) investigated the Gestalt principle of proximity: Disks were either placed far apart (left side panel) or close together (right side panel). Experiment 3 (C) investigated the Gestalt principle of similarity: Figures were either of the same shape (left panel) or of different shapes (right panel). All panels featuring visual stimuli were accompanied by a matching description written in the upper quarter of the screen, which contained a conjunction word ("the purple AND the green") or a disjunction word ("the purple OR the green").

    The influence of state change on object representations in language comprehension

    We are interested in how linguistic cues, such as tense and aspect, influence representations of objects during language comprehension.

    Making It Harder to "See" Meaning: The More You See Something, the More Its Conceptual Representation Is Susceptible to Visual Interference

    First Published April 27, 2020. Does the perceptual system for looking at the world overlap with the conceptual system for thinking about it? We conducted two experiments (N = 403) to investigate this question. Experiment 1 showed that when people make simple semantic judgments on words, interference from a concurrent visual task scales in proportion to how much visual experience they have with the things the words refer to. Experiment 2 showed that when people make the same judgments on the very same words, interference from a concurrent manual task scales in proportion to how much manual (but critically, not visual) experience people have with those same things. These results suggest that the meanings of frequently visually experienced things are represented (in part) in the visual system used for actually seeing them, that this visually represented information is a functional part of conceptual knowledge, and that the extent of these visual representations is influenced by visual experience.

    The influence of state change on object representations in language comprehension

    To understand language, people form mental representations of described situations, and linguistic cues are known to influence these representations. In the present study, participants were asked to verify whether the object presented in a picture was mentioned in the preceding words. Crucially, the picture showed either the intact original state or a modified state of an object. Our results showed that the end state of the target object influenced verification responses. When no linguistic context was provided, participants responded faster to the original state of the object than to the changed state (Experiment 1). However, when linguistic context was provided, participants responded faster to the modified state when it matched, rather than mismatched, the expected outcome of the described event (Experiments 2 and 3). Interestingly, as with the original state, the match/mismatch effects were revealed only after reading past-tense sentences (Experiment 2) but not future-tense sentences (Experiment 3). Our findings highlight the need for an account of the dynamics of event representation in language comprehension that captures the interplay between general semantic knowledge about objects and the episodic knowledge introduced by the sentential context.