
    Motion Extrapolation in the Central Fovea

    Neural transmission latency would introduce a spatial lag when an object moves across the visual field if the latency were not compensated. A visual predictive mechanism has been proposed that overcomes such spatial lag by extrapolating the position of the moving object forward. However, a forward position shift is often absent if the object abruptly stops moving (motion-termination). A recent “correction-for-extrapolation” hypothesis suggests that the absence of forward shifts is caused by sensory signals representing ‘failed’ predictions. Thus far, this hypothesis has been tested only at extra-foveal retinal locations. We tested it using two foveal scotomas: the scotoma to dim light and the scotoma to blue light. We found that the perceived position of a dim dot is extrapolated into the fovea during motion-termination. Next, we compared the perceived position shifts of a blue versus a green moving dot. As predicted, the extrapolation at motion-termination was found only with the blue moving dot. The results provide new evidence for the correction-for-extrapolation hypothesis in the region with the highest spatial acuity, the fovea.

    Phrase Depicting Immoral Behavior Dilates Its Subjective Time Judgment

    Intuitive moral emotions play a major role in forming our opinions and moral decisions. However, it is not yet known how we perceive the subjective time of morality-related information. In this study, we compared the subjective durations of phrases depicting immoral, disgusting, or neutral behaviors in a duration bisection task and found that phrases depicting immoral behavior were perceived as lasting longer than the neutral and disgusting phrases. By contrast, the subjective duration of the disgusting phrase, unlike that of the immoral phrase, was comparable to that of the neutral phrase. Moreover, the lengthening effect of the immoral phrase relative to the neutral phrase was significantly correlated with the observer's anonymous prosocial tendency. Our findings suggest that immoral phrases induce an embodied moral reaction, which alters the emotional state and subsequently lengthens subjective time.

    Variation in the “coefficient of variation”

    The coefficient of variation (CV), also known as relative standard deviation, has been used to measure the constancy of the Weber fraction, a key signature of efficient neural coding in time perception. It has long been debated whether duration judgments follow Weber's law, with arguments based on examinations of the CV. However, what has been largely ignored in this debate is that the observed CVs may be modulated by temporal context and decision uncertainty, which calls into question conclusions based on this measure. Here, we used a temporal reproduction paradigm to examine the variation of the CV under two types of temporal context: full-range mixed vs. sub-range blocked intervals, separately for intervals presented in the visual and auditory modalities. We found a strong contextual modulation of both the interval-duration reproductions and the observed CVs. We then applied a two-stage Bayesian model to predict those variations. Without assuming a violation of the constancy of the Weber fraction, our model successfully predicted the central-tendency effect and the variation in the CV. Our findings and modeling results indicate that both the accuracy and the precision of our timing behavior depend strongly on the temporal context and decision uncertainty. Critically, they caution against using variations of the CV to reject the constancy of the Weber fraction in duration estimation.
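    A minimal simulation sketch of this logic (an illustrative Bayesian observer, not the authors' fitted two-stage model): reproductions are pulled toward the mean of a Gaussian prior estimated from the experienced context, so the observed CV varies with the width of the sampled interval range even though the generative Weber fraction is held constant. The interval ranges, Weber fraction, and Gaussian noise/prior assumptions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_reproductions(context_intervals, wf=0.15, n_trials=2000):
    """Illustrative Bayesian observer with a constant Weber fraction `wf`."""
    durations = rng.choice(context_intervals, size=n_trials)
    # Scalar (Weber-like) sensory noise: SD proportional to the duration.
    measurements = durations + rng.normal(0.0, wf * durations)
    # Context prior: Gaussian with the sample statistics of the block.
    prior_mu, prior_sd = np.mean(context_intervals), np.std(context_intervals)
    sensory_var = (wf * durations) ** 2
    w_prior = sensory_var / (sensory_var + prior_sd ** 2)
    estimates = w_prior * prior_mu + (1.0 - w_prior) * measurements
    return durations, estimates

def observed_cv(durations, estimates):
    """Mean CV of reproductions, computed separately per physical duration."""
    return np.mean([np.std(estimates[durations == d]) / np.mean(estimates[durations == d])
                    for d in np.unique(durations)])

# Full-range mixed vs. sub-range blocked contexts (hypothetical values, in seconds).
full_range = np.linspace(0.4, 1.6, 7)
sub_range = np.linspace(0.4, 0.8, 7)

for name, ctx in [("full-range", full_range), ("sub-range", sub_range)]:
    d, e = simulate_reproductions(ctx)
    print(f"{name:10s} observed CV = {observed_cv(d, e):.3f}")
```

    The observed CVs differ between the two contexts even though the generative Weber fraction is identical, which is the sense in which context and decision uncertainty can masquerade as a violation of Weber's law.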

    Temporal bisection is influenced by ensemble statistics of the stimulus set

    Although humans are quite capable of precise time measurement, their duration judgments are nevertheless susceptible to temporal context. Previous research on temporal bisection has shown that duration comparisons are influenced by both stimulus spacing and ensemble statistics. However, theories proposed to account for bisection performance lack a plausible justification of how the effects of stimulus spacing and ensemble statistics are actually combined in temporal judgments. To explain the various contextual effects in temporal bisection, we develop a unified ensemble-distribution account (EDA), which assumes that the mean and variance of the duration set, rather than the short and long standards, serve as the reference in duration comparison. To validate this account, we conducted three experiments that varied the stimulus spacing (Experiment 1), the frequency of the probed durations (Experiment 2), and the variability of the probed durations (Experiment 3). The results revealed significant shifts of the bisection point in Experiments 1 and 2, and a change in the sensitivity of temporal judgments in Experiment 3, all of which were well predicted by EDA. In fact, comparison of EDA with extant prior accounts showed that using ensemble statistics can parsimoniously explain the various stimulus-set-related factors (e.g., spacing, frequency, variance) that influence temporal judgments.
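    A toy decision rule conveying the core of EDA (a sketch assuming Gaussian internal noise, not the fitted model): a probe is judged 'long' when its noisy estimate exceeds a reference sampled from the mean and variance of the experienced duration set, so the bisection point tracks the set mean (spacing, frequency) and the psychometric slope tracks the set variance. All durations and noise parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_long(probe, duration_set, weights=None, noise_sd=0.08, n=5000):
    """Toy ensemble-distribution rule: compare the probe's noisy estimate with a
    reference drawn from the mean/variance of the experienced duration set."""
    ens_mean = np.average(duration_set, weights=weights)
    ens_sd = np.sqrt(np.average((np.asarray(duration_set) - ens_mean) ** 2, weights=weights))
    probe_est = probe + rng.normal(0.0, noise_sd, n)
    reference = rng.normal(ens_mean, ens_sd, n)
    return np.mean(probe_est > reference)  # proportion of "long" responses

probes = np.linspace(0.4, 1.6, 7)      # hypothetical probe durations (seconds)
linear_set = np.linspace(0.4, 1.6, 7)  # evenly spaced duration set
log_set = np.geomspace(0.4, 1.6, 7)    # log-spaced duration set (lower set mean)

for name, s in [("linear spacing", linear_set), ("log spacing", log_set)]:
    print(name, [f"{p_long(p, s):.2f}" for p in probes])
```

    With log spacing the set mean is lower, so the proportion of 'long' responses rises and the bisection point shifts toward shorter durations; increasing the set variance instead flattens the curve, mirroring the sensitivity change in Experiment 3.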

    Multisensory visuo-tactile context learning enhances the guidance of unisensory visual search

    Does multisensory distractor-target context learning enhance visual search over and above unisensory learning? To address this question, we had participants perform a visual search task under both uni- and multisensory conditions. Search arrays consisted of one Gabor target that differed from three homogeneous distractors in orientation; participants had to discriminate the target's orientation. In the multisensory session, additional tactile (vibration-pattern) stimulation was delivered to two fingers of each hand, with the odd-one-out tactile target and the distractors co-located with the corresponding visual items in half the trials; the other half presented the visual array only. In both sessions, the visual target was embedded within identical (repeated) spatial arrangements of distractors in half of the trials. The results revealed faster response times to targets in repeated versus non-repeated arrays, evidencing ‘contextual cueing’. This effect was enhanced in the multisensory session, notably even when the visual arrays were presented without concurrent tactile stimulation. Drift-diffusion modeling confirmed that contextual cueing increased the rate at which task-relevant information was accumulated, as well as decreasing the amount of evidence required for a response decision. Importantly, multisensory learning selectively enhanced the evidence-accumulation rate, expediting target detection even when the context memories were triggered by visual stimuli alone.
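    As a rough illustration of the drift-diffusion result (the parameter values are hypothetical, not the fitted estimates), a higher drift rate combined with a lower decision boundary for repeated contexts yields the faster mean response times described above:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_rt(drift, boundary, noise=1.0, dt=0.002, t0=0.3, n_trials=2000, max_t=5.0):
    """Mean first-passage time of a one-boundary diffusion process plus a fixed
    non-decision time t0 (a coarse stand-in for the fitted drift-diffusion model)."""
    steps = int(max_t / dt)
    increments = drift * dt + noise * np.sqrt(dt) * rng.normal(size=(n_trials, steps))
    paths = np.cumsum(increments, axis=1)
    crossed = paths >= boundary
    hit = crossed.any(axis=1)                 # trials that reached the boundary
    first_idx = np.argmax(crossed, axis=1)    # index of the first crossing
    return np.mean(first_idx[hit] * dt + t0)

# Hypothetical parameters: repeated contexts get a higher drift rate and a lower boundary.
print("non-repeated:", round(mean_rt(drift=1.0, boundary=1.2), 3), "s")
print("repeated:    ", round(mean_rt(drift=1.3, boundary=1.0), 3), "s")
```

    The selective multisensory advantage reported above would then map onto a further increase in the drift rate, with the boundary left unchanged.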

    Learning to suppress likely distractor locations in visual search is driven by the local distractor frequency

    Salient but task-irrelevant distractors interfere less with visual search when they appear in a display region where distractors have appeared more frequently in the past (‘distractor-location probability cueing’). This effect could reflect the (re-)distribution of a global, limited attentional ‘inhibition resource’. Accordingly, changing the frequency of distractor appearance in one display region should also affect the magnitude of interference generated by distractors in a different region. Alternatively, distractor-location learning may reflect a local response (e.g., ‘habituation’) to distractors occurring at a particular location. In this case, the local distractor frequency in one display region should not affect distractor interference in a different region. To decide between these alternatives, we conducted three experiments in which participants searched for an orientation-defined target while ignoring a more salient orientation distractor that occurred more often in one versus another display region. Experiment 1 varied the ratio of distractors appearing in the frequent versus rare regions (from 60/40 to 90/10), with a fixed global distractor frequency. The results revealed that the cueing effect increased with increasing probability ratio. In Experiments 2 and 3, one (‘test’) region was assigned the same local distractor frequency as in one of the conditions of Experiment 1, but a different frequency in the other region, thereby dissociating local from global distractor frequency. Together, the three experiments showed that distractor interference in the test region was not significantly influenced by the frequency in the other region, consistent with purely local learning. We discuss the implications for theories of statistical distractor-location learning.
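    A small worked example of the local/global dissociation (the trial counts below are made up for illustration and are not the study's values): the 'test' region keeps the same local distractor rate while the rate in the other region, and hence the global distractor frequency, changes.

```python
def distractor_frequencies(freq_region_trials, rare_region_trials, total_trials):
    """Per-region (local) and overall (global) distractor frequencies."""
    local_frequent = freq_region_trials / total_trials
    local_rare = rare_region_trials / total_trials
    global_frequency = (freq_region_trials + rare_region_trials) / total_trials
    return local_frequent, local_rare, global_frequency

# Experiment-1-like condition with a 90/10 split of distractor trials across regions.
print(distractor_frequencies(270, 30, 600))   # -> (0.45, 0.05, 0.50)
# Test condition: identical local rate in the test region, different rate elsewhere,
# and therefore a different global distractor frequency.
print(distractor_frequencies(270, 90, 600))   # -> (0.45, 0.15, 0.60)
```

    Purely local learning predicts unchanged interference in the test region across these two cases, which is the pattern the experiments observed.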

    Cross-modal contextual memory guides selective attention in visual-search tasks

    Visual search is speeded when a target item is positioned consistently within an invariant (repeatedly encountered) configuration of distractor items (contextual cueing). Contextual cueing is also observed in cross-modal search, when the location of the (visual) target is predicted by distractors from another (tactile) sensory modality. Previous studies examining lateralized waveforms of the event-related potential (ERP) with millisecond precision have shown that learned visual contexts improve a whole cascade of search-processing stages. Drawing on ERPs, the present study tested alternative accounts of contextual cueing in tasks in which distractor-target contextual associations are established across, as compared to within, sensory modalities. To this end, we devised a novel cross-modal search task: search for a visual feature singleton, with repeated (and non-repeated) distractor configurations presented either within the same (visual) or a different (tactile) modality. We found reaction times (RTs) to be faster for repeated versus non-repeated configurations, with comparable facilitation effects between visual (unimodal) and tactile (cross-modal) context cues. Further, for repeated configurations, there were enhanced amplitudes (and reduced latencies) of the ERP components indexing attentional allocation (PCN) and post-selective analysis of the target (CDA), respectively; both components correlated positively with the RT facilitation. These effects were again comparable between uni- and cross-modal cueing conditions. In contrast, motor-related processes indexed by the response-locked LRP contributed little to the RT effects. These results indicate that both uni- and cross-modal context cues benefit the same visual processing stages related to the selection and subsequent analysis of the search target.

    Influences of luminance contrast and ambient lighting on visual context learning and retrieval

    Invariant spatial context can guide attention and facilitate visual search, an effect referred to as “contextual cueing.” Most previous studies on contextual cueing were conducted under conditions of photopic vision and high search-item-to-background luminance contrast, leaving open the question of whether the learning and/or retrieval of context cues depends on luminance contrast and ambient lighting. Given this, we conducted three experiments (each comprising two sub-experiments) to compare contextual cueing under different combinations of luminance contrast (high/low) and ambient lighting (photopic/mesopic). With high-contrast displays, we found robust contextual cueing in both photopic and mesopic environments, but the acquired contextual cueing did not transfer when the display contrast changed from high to low in the photopic environment. By contrast, with low-contrast displays, contextual facilitation manifested only in mesopic vision, and the acquired cues remained effective following a switch to high-contrast displays. This pattern suggests that, with low display contrast, contextual cueing benefited from a more global search mode, aided by the activation of the peripheral rod system in mesopic vision, but was impeded by a more local, fovea-centered search mode in photopic vision.

    Little engagement of attention by salient distractors defined in a different dimension or modality to the visual search target

    Singleton distractors may inadvertently capture attention, interfering with the task at hand. The neural mechanisms underlying how we prevent or handle distractor interference remain elusive. Here, we varied the type of salient distractor introduced in a visual search task: the distractor could be defined in the same (shape) dimension as the target, in a different (color) dimension, or in a different (tactile) modality (intra-dimensional, cross-dimensional, and cross-modal distractors, respectively, all matched for physical salience). Besides behavioral interference, we measured lateralized electrophysiological indicators of attentional selectivity (the N2pc, Ppc, PD, CCN/CCP, CDA, and cCDA). The results revealed that the intra-dimensional distractor produced the strongest reaction-time interference, associated with the smallest target-elicited N2pc. In contrast, the cross-dimensional and cross-modal distractors did not engender any significant interference, and the target-elicited N2pc was comparable to the condition in which the search display contained only the target singleton, thus ruling out early attentional capture. Moreover, the cross-modal distractor elicited a significant early CCN/CCP but did not influence the target-elicited N2pc, suggesting that the tactile distractor is registered by the somatosensory system (rather than being proactively suppressed), without, however, engaging attention. Together, our findings indicate that, in contrast to distractors defined in the same dimension as the target, distractors singled out in a different dimension or modality can be effectively prevented from engaging attention, consistent with dimension- or modality-weighting accounts of attentional-priority computation.