1,494 research outputs found

    Encoding actions and verbs: Tracking the timecourse of relational encoding during message and sentence formulation

    Get PDF
    Many thanks to Annelies van Wijngaarden and student assistants from the Psychology of Language Department (in particular Esther Kroese, Marloes Graauwmans, and Ilse Wagemakers) for help with data collection and processing, and Tess Forest and Antje Meyer for helpful discussions. Peer reviewed. Postprint.

    The Importance of Semantics: Visual World Studies on Drawing Inferences and Resolving Anaphors

    Get PDF
    The present thesis investigated the importance of semantics in generating inferences during discourse processing. Three aspects of semantics, gender stereotypes, implicit causality information, and proto-role properties, were used to investigate whether semantics is activated elaboratively during discourse comprehension and what its relative importance is in backward inferencing compared to discourse/structural cues. Visual world eye-tracking studies revealed that semantics plays an important role in both backward and forward inferencing: gender stereotypes and implicit causality information are activated elaboratively during online discourse comprehension. Moreover, gender stereotypes, implicit causality, and proto-role properties of verbs are all used in backward inferencing. Importantly, the studies demonstrated that semantic cues are weighed against discourse/structural cues. When the structural cues consist of a combination of cues that have been independently shown to be important in backward inferencing, semantic effects may be masked, whereas when the structural cues consist of a combination of fewer prominent cues, semantics can have an earlier effect than structural factors in pronoun resolution. In addition, the type of inference matters, too: during anaphoric inferencing semantics has a prominent role, while discourse/structural salience attains more prominence during non-anaphoric inferencing. Finally, semantics plays a strong role in inviting new inferences that revise inferences made earlier, even when the additional inference is not needed to establish coherence in the discourse. The findings are generally in line with Mental Model approaches. Two extended model versions are presented that incorporate the current findings into the earlier literature. These models allow both forward and backward inferencing to occur at any given moment during the course of processing; they also allow semantic and discourse/structural cues to contribute to both of these processes. However, while Mental Model 1 does not assume interactions between semantic and discourse/structural factors in forward inferencing, Mental Model 2 does assume such a link. Transferred from Doria.

    Priming sentence planning

    No full text
    Sentence production requires mapping preverbal messages onto linguistic structures. Because sentences are normally built incrementally, the information encoded in a sentence-initial increment is critical for explaining how the mapping process starts and for predicting its timecourse. Two experiments tested whether and when speakers prioritize encoding of different types of information at the outset of formulation by comparing production of descriptions of transitive events (e.g., A dog is chasing the mailman) that differed on two dimensions: the ease of naming individual characters and the ease of apprehending the event gist (i.e., encoding the relational structure of the event). To additionally manipulate ease of encoding, speakers described the target events after receiving lexical primes (facilitating naming; Experiment 1) or structural primes (facilitating generation of a linguistic structure; Experiment 2). Both properties of the pictured events and both types of primes influenced the form of target descriptions and the timecourse of formulation: character-specific variables increased the probability of speakers encoding one character with priority at the outset of formulation, while the ease of encoding event gist and of generating a syntactic structure increased the likelihood of early encoding of information about both characters. The results show that formulation is flexible and highlight some of the conditions under which speakers might employ different planning strategies.

    Agentivity drives real-time pronoun resolution: Evidence from German er and der

    Get PDF
    We report two experiments on the referential resolution of the German subject pronoun er and the demonstrative der (‘he’). Using the visual world eye-tracking paradigm, we examined the effects of grammatical role, thematic role, and the information status of potential referents in the antecedent clause, operationalized by word order (canonical/non-canonical), in the context of active accusative verbs (Exp. 1) and dative-experiencer verbs (Exp. 2). In information-structurally neutral contexts, er prefers the proto-agent and der the proto-patient. This suggests that agentivity is a better predictor for pronoun resolution than subjecthood or sentence topic as previously proposed. It further supports the claim that agentivity is a core property of language processing and, more generally, substantiates the proposal from the cognitive sciences that agentivity represents core knowledge of the human attentional system. With non-canonical antecedent clauses, which lack alignment of prominence features, interpretive preferences become less stable, indicating that multiple cues are involved in pronoun resolution. The data further suggest that the demonstrative pronoun elicits more reliable interpretive biases than the personal pronoun.

    Preliminary measurements of the edge magnetic field pitch from 2-D Doppler backscattering in MAST and NSTX-U

    Get PDF
    The Synthetic Aperture Microwave Imaging (SAMI) system is a novel diagnostic consisting of an array of 8 independently phased antennas. At any one time, SAMI operates at one of 16 frequencies in the range 10–34.5 GHz. The imaging beam is steered in software post-shot to create a picture of the entire emission surface. In SAMI’s active probing mode of operation, the plasma edge is illuminated with a monochromatic source and SAMI reconstructs an image of the Doppler back-scattered (DBS) signal. By assuming that density fluctuations are extended along magnetic field lines, and knowing that the strongest back-scattered signals are directed perpendicular to the density fluctuations, SAMI’s 2-D DBS imaging capability can be used to measure the pitch of the edge magnetic field. In this paper we present preliminary pitch angle measurements obtained by SAMI on the Mega-Amp Spherical Tokamak (MAST) at Culham Centre for Fusion Energy and on the National Spherical Torus Experiment Upgrade (NSTX-U) at Princeton Plasma Physics Laboratory. The results demonstrate encouraging agreement between SAMI and other independent measurements.
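    As a rough illustration of the pitch-angle principle described above (a much-simplified toy sketch, not the SAMI analysis pipeline): if field-aligned density fluctuations scatter most strongly perpendicular to themselves, the bright ridge in a 2-D map of backscattered power over viewing angles lies perpendicular to the field line, and its orientation can be estimated from intensity-weighted image moments. The array names, angle grids, and moment-based ridge fit below are illustrative assumptions.

    import numpy as np

    def pitch_angle_from_dbs_image(power, tor_deg, pol_deg):
        # Toy estimate of edge magnetic-field pitch from a 2-D DBS power map.
        # power:   2-D array of backscattered power, shape (n_pol, n_tor)
        # tor_deg: toroidal viewing angles (deg), length n_tor
        # pol_deg: poloidal viewing angles (deg), length n_pol
        T, P = np.meshgrid(tor_deg, pol_deg)      # angle grids
        w = power / power.sum()                   # normalized intensity weights
        t0, p0 = (w * T).sum(), (w * P).sum()     # intensity centroid
        # Second central moments of the power distribution
        mtt = (w * (T - t0) ** 2).sum()
        mpp = (w * (P - p0) ** 2).sum()
        mtp = (w * (T - t0) * (P - p0)).sum()
        # Orientation of the major axis (bright ridge) of the backscatter image
        ridge = 0.5 * np.degrees(np.arctan2(2 * mtp, mtt - mpp))
        # Field-aligned fluctuations lie perpendicular to the ridge
        return ridge + 90.0 if ridge < 0 else ridge - 90.0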

    Active Estimation of Distance in a Robotic Vision System that Replicates Human Eye Movement

    Full text link
    Many visual cues, both binocular and monocular, provide 3D information. When an agent moves with respect to a scene, an important cue is the differential motion of objects located at various distances. While motion parallax is evident for large translations of the agent, in most head/eye systems a small parallax also occurs during rotations of the cameras. A similar parallax is present in the human eye: during a relocation of gaze, the shift in the retinal projection of an object depends not only on the amplitude of the movement but also on the distance of the object from the observer. This study proposes a method for estimating distance on the basis of the parallax that emerges from rotations of a camera. A pan/tilt system specifically designed to reproduce the oculomotor parallax present in the human eye was used to replicate the oculomotor strategy by which humans scan visual scenes. We show that the oculomotor parallax provides accurate estimation of distance during sequences of eye movements. In a system that actively scans a visual scene, challenging tasks such as image segmentation and figure/ground segregation greatly benefit from this cue. National Science Foundation (BIC-0432104, CCF-0130851).
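    A toy small-angle model of the cue described above (illustrative only, not the estimation method used in the study): because the centre of rotation of the eye, or of the replicated pan/tilt system, sits behind the optical nodal point, a gaze rotation also translates the nodal point slightly, and the extra image shift this induces scales inversely with target distance. The offset value and function below are hypothetical.

    import numpy as np

    def distance_from_oculomotor_parallax(theta_rad, shift_rad, offset_m=0.006):
        # Toy estimate of target distance from rotation-induced parallax.
        # theta_rad: gaze rotation amplitude (rad)
        # shift_rad: measured angular shift of the target's projection (rad),
        #            signed so that nearer targets yield shift_rad > theta_rad
        # offset_m:  rotation-centre-to-nodal-point offset (m); ~6 mm is an
        #            assumed, ballpark value for a human-like eye
        parallax = shift_rad - theta_rad          # residual beyond pure rotation
        if abs(parallax) < 1e-9:
            return float("inf")                   # no parallax: effectively at infinity
        # Rotation by theta translates the nodal point by ~offset_m*sin(theta);
        # a target at distance d then gains ~offset_m*sin(theta)/d of extra shift.
        return offset_m * np.sin(theta_rad) / parallax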

    Animated virtual agents to cue user attention: comparison of static and dynamic deictic cues on gaze and touch responses

    Get PDF
    This paper describes an experiment developed to study the performance of animated virtual agent cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues on human-computer interfaces. Both experiments measured the efficiency of agent cues by analyzing participant responses, either by gaze or by touch. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant’s eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface: when user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped 2-image agent cues and 42% faster than with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the fastest responses, although the differences between cue types were smaller. Responses to the fully animated agent were 17% and 20% faster than to the 2-image and 1-image cues, respectively. These results inform techniques aimed at engaging users’ attention in complex scenes, such as computer games and digital transactions within public or social interaction contexts, by demonstrating the benefits of dynamic gaze and head cueing directly on users’ eye movements and touch responses.

    One Object at a Time: Accurate and Robust Structure From Motion for Robots

    Full text link
    A gaze-fixating robot perceives the distance to the fixated object and the relative positions of surrounding objects immediately, accurately, and robustly. We show how fixation, the act of looking at one object while moving, exploits regularities in the geometry of 3D space to obtain this information. These regularities introduce rotation-translation couplings that are not commonly used in structure from motion. To validate the approach, we use a Franka Emika Robot with an RGB camera. We (a) find that the error in the distance estimate is less than 5 mm at a distance of 15 cm, and (b) show how relative position can be used to find obstacles in challenging scenarios. We combine accurate distance estimates and obstacle information into a reactive robot behavior that is able to pick up objects of unknown size while impeded by unforeseen obstacles. Project page: https://oxidification.com/p/one-object-at-a-time/. Accepted at the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
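    A hedged sketch of the rotation-translation coupling that fixation introduces (not the paper's estimator): if the camera translates while rotating just enough to keep the fixated object centred in the image, the translation component perpendicular to the line of sight and the compensating rotation together determine the fixated object's distance. The function name and the numbers in the usage example are illustrative assumptions.

    import numpy as np

    def fixation_distance(translation_perp_m, rotation_rad):
        # Distance to a fixated point from the fixation-induced coupling:
        # the point stays centred only if d ~= translation_perp / tan(rotation).
        # Valid for small inter-frame motions and an already-converged fixation.
        if abs(rotation_rad) < 1e-9:
            return float("inf")                   # no rotation: point at infinity
        return translation_perp_m / np.tan(rotation_rad)

    # Usage: 10 mm of sideways motion compensated by a 3.8-degree rotation
    # puts the fixated object at roughly 0.15 m.
    print(fixation_distance(0.010, np.radians(3.8)))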

    Look Who's Talking: Pre-Verbal Infants’ Perception of Face-to-Face and Back-to-Back Social Interactions

    Get PDF
    Four-, 6-, and 11-month-old infants were presented with movies in which two adult actors conversed about everyday events, either facing each other or looking in opposite directions. Infants from 6 months of age made more gaze shifts between the actors, in accordance with the flow of conversation, when the actors were facing each other. A second experiment demonstrated that gaze following alone did not cause this difference. Instead, the results are consistent with a social-cognitive interpretation, suggesting that infants perceive the difference between face-to-face and back-to-back conversations and that they prefer to attend to a typical pattern of social interaction from 6 months of age.