
    There is More to Gesture Than Meets the Eye: Visual Attention to Gesture’s Referents Cannot Account for Its Facilitative Effects During Math Instruction

    Teaching a new concept with gestures – hand movements that accompany speech – facilitates learning above and beyond instruction through speech alone (e.g., Singer & Goldin-Meadow, 2005). However, the mechanisms underlying this phenomenon are still being explored. Here, we use eye tracking to examine one mechanism: gesture's ability to direct visual attention. We examine how children allocate their visual attention during a mathematical equivalence lesson that either contains gesture or does not. We show that gesture instruction improves posttest performance, and that gesture changes how children visually attend to instruction: children look more at the problem being explained, and less at the instructor. However, looking patterns alone cannot explain gesture's effect, as posttest performance is not predicted by any of our looking-time measures. These findings suggest that gesture does guide visual attention, but that attention alone cannot account for its facilitative learning effects.

    Distortions of Subjective Time Perception Within and Across Senses

    Background: The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings: We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perceived duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information, and visual events were never perceived as shorter than their actual durations. Conclusions/Significance: These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.

    Visual onset expands subjective time

    We report a distortion of subjective time perception in which the duration of a first interval is perceived to be longer than a succeeding interval of the same duration. The amount of time expansion depends on the onset type defining the first interval. When a stimulus appears abruptly, its duration is perceived to be longer than when it appears following a stationary array. The difference in processing time between stimulus onset and motion onset, measured as reaction times, agrees with the difference in time expansion. Our results suggest that initial transient responses to a visual onset serve as a temporal marker for time estimation, and that a systematic change in the processing time for onsets affects perceived time.

    Ambient light modulation of exogenous attention to threat

    Planet Earth's motion yields a 50% day–50% night yearly balance at every latitude and longitude, so survival must be guaranteed under very different light conditions in many species, including humans. Cone- and rod-dominant vision, specialized in light and darkness respectively, present several processing differences, which are—at least partially—reflected in event-related potentials (ERPs). The present experiment aimed at characterizing exogenous attention to threatening (spiders) and neutral (wheels) distractors in two environmental light conditions, low mesopic (L, 0.03 lx) and high mesopic (H, 6.5 lx), yielding a differential photoreceptor activity balance: rod > cone and rod < cone, respectively. These distractors were presented in the lower visual hemifield while the 40 participants were engaged in a digit categorization task. Stimuli, both targets (digits) and distractors, were exactly the same in L and H. Both ERPs and behavioral performance in the task were recorded. Enhanced attentional capture by salient distractors was observed regardless of ambient light level. However, ERPs showed a differential pattern as a function of ambient light. Thus, significantly enhanced amplitude to salient distractors was observed in posterior P1 and early anterior P2 (P2a) only during the H context, in late P2a during the L context, and in occipital P3 during both H and L contexts. In other words, while exogenous attention to threat was equally efficient in light and darkness, cone-dominant exogenous attention was faster than rod-dominant attention, in line with previous data indicating slower processing times for rod- than for cone-dominant vision. This research was supported by Grants PSI2014-54853-P and PSI2012-37090 from the Ministerio de Economía y Competitividad of Spain (MINECO).

    Gaze following in multiagent contexts: Evidence for a quorum-like principle

    Research shows that humans spontaneously follow another individual's gaze. However, little is known about how they respond when multiple gaze cues diverge across members of a social group. To address this question, we presented participants with displays depicting three (Experiment 1) or five (Experiment 2) agents showing diverging social cues. In a three-person group, one individual looking at the target (33% of the group) was sufficient to elicit gaze-facilitated target responses. With a five-person group, however, three individuals looking at the target (60% of the group) were necessary to produce the same effect. Gaze following in small groups therefore appears to be based on a quorum-like principle, whereby the critical level of social information needed for gaze following is determined by a proportion of consistent social cues scaled as a function of group size. As group size grows, greater agreement is needed to evoke joint attention.
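The quorum pattern described in this abstract can be expressed as a small lookup. The sketch below is purely illustrative and not the authors' analysis: the function name and structure are ours, and it simply encodes the two thresholds reported above (1 of 3 agents, 3 of 5 agents).

```python
# Thresholds reported in the abstract: 1 of 3 agents (33%) sufficed to
# trigger gaze-facilitated responses, but 3 of 5 agents (60%) were needed.
OBSERVED_QUORUM = {3: 1, 5: 3}

def gaze_following_expected(n_looking: int, group_size: int) -> bool:
    """Toy model: predict whether gaze following is triggered, given how
    many agents in the group look at the target."""
    return n_looking >= OBSERVED_QUORUM[group_size]
```

Note that the required proportion rises with group size rather than staying fixed, which is the core of the quorum-like claim.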

    Searching for the Majority: Algorithms of Voluntary Control

    Voluntary control of information processing is crucial for allocating resources and prioritizing the processes that are most important in a given situation; the algorithms underlying such control, however, are often unclear. We investigated possible algorithms of control for the performance of the majority function, in which participants searched for and identified one of two alternative categories (left- or right-pointing arrows) as composing the majority in each stimulus set. We manipulated the amount (set size of 1, 3, and 5) and content (ratio of left- and right-pointing arrows within a set) of the inputs to test competing hypotheses regarding mental operations for information processing. Using a novel measure based on computational load, we found that reaction time was best predicted by a grouping search algorithm as compared to alternative algorithms (i.e., exhaustive or self-terminating search). The grouping search algorithm involves sampling and resampling of the inputs before a decision is reached. These findings highlight the importance of investigating the implications of voluntary control via algorithms of mental operations.
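The three candidate algorithms named in this abstract can be contrasted with short sketches. These are our own minimal interpretations of the general strategies (exhaustive, self-terminating, and grouping search), not the authors' formal models, and the arrow encoding ("L"/"R") is an assumption for illustration.

```python
import random

def exhaustive_majority(arrows):
    """Exhaustive search: count every item before deciding."""
    left = sum(a == "L" for a in arrows)
    return "L" if left > len(arrows) / 2 else "R"

def self_terminating_majority(arrows):
    """Self-terminating search: stop as soon as one category
    reaches a strict majority of the set."""
    need = len(arrows) // 2 + 1
    counts = {"L": 0, "R": 0}
    for a in arrows:
        counts[a] += 1
        if counts[a] >= need:
            return a
    return max(counts, key=counts.get)  # tie-break for even set sizes

def grouping_majority(arrows, group_size=2, rng=None):
    """Grouping search: sample a small subset and resample until
    the subset is unanimous, then take that as the decision."""
    rng = rng or random.Random(0)
    while True:
        sample = rng.sample(list(arrows), group_size)
        if len(set(sample)) == 1:
            return sample[0]
```

The grouping variant's expected number of samples grows with how mixed the set is, which is one way a resampling strategy could produce the ratio-dependent reaction times the study measured.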

    EEG Correlates of Attentional Load during Multiple Object Tracking

    While human subjects tracked a subset of ten identical, randomly moving objects, event-related potentials (ERPs) were evoked at parieto-occipital sites by task-irrelevant flashes that were superimposed on either tracked (Target) or non-tracked (Distractor) objects. With ERPs as markers of attention, we investigated how the allocation of attention varied with tracking load, that is, with the number of objects that were tracked. Flashes on Target discs elicited stronger ERPs than did flashes on Distractor discs; ERP amplitude (0–250 ms) decreased monotonically as load increased from two to three to four (of ten) discs. Amplitude decreased more rapidly for Target discs than for Distractor discs. As a result, with increasing tracking loads, the difference between ERPs to Targets and Distractors diminished. This change in ERP amplitudes with load accords well with behavioral performance, suggesting that successful tracking depends upon the relationship between the neural signals associated with attended and non-attended objects.

    Numerosity Estimation in Visual Stimuli in the Absence of Luminance-Based Cues

    Numerosity estimation is a basic preverbal ability that humans share with many animal species and that is believed to be foundational to numeracy skills. It is notoriously difficult, however, to establish whether numerosity estimation is based on numerosity itself, or on one or more non-numerical cues such as, in visual stimuli, spatial extent and density. Frequently, different non-numerical cues are held constant on different trials. This strategy, however, still allows numerosity estimation to be based on a combination of non-numerical cues rather than on any particular one by itself. Here we introduce a novel method, based on second-order (contrast-based) visual motion, to create stimuli that exclude all first-order (luminance-based) cues to numerosity. We show that numerosities can be estimated almost as well in second-order motion as in first-order motion. The results show that numerosity estimation need not be based on first-order spatial filtering, first-order density perception, or any other processing of luminance-based cues to numerosity. Our method can be used as an effective tool to control non-numerical variables in studies of numerosity estimation.

    Towards a framework for attention cueing in instructional animations: Guidelines for research and design

    This paper examines the transferability of successful cueing approaches from text and static visualization research to animations. Theories of visual attention and learning, as well as empirical evidence for the instructional effectiveness of attention cueing, are reviewed and, based on Mayer's theory of multimedia learning, a framework is developed for classifying three functions of cueing: (1) selection—cues guide attention to specific locations; (2) organization—cues emphasize structure; and (3) integration—cues explicate relations between and within elements. The framework is used to structure the discussion of studies on cueing in animations. It is concluded that attentional cues may facilitate the selection of information in animations and sometimes improve learning, whereas organizational and relational cueing requires more consideration of how to enhance understanding. Consequently, it is suggested to develop cues that work in animations rather than borrowing effective cues from static representations. Guidelines for future research on attention cueing in animations are presented.