
    Does changing distractor environments eliminate spatiomotor biases?

This research explored how sensitive spatiomotor biases, or location-response integration effects, are to differences between visual environments. According to feature integration and episodic retrieval theories, a target's location and response are integrated to form an event representation in memory. A repetition of the prior location or target response retrieves the previously associated response or location, respectively, leading to interference, or slower responding, when the retrieved event information mismatches the current event. In the four experiments reported here, participants generated these spatiomotor biases by discriminating serially presented target stimuli that randomly repeated or changed location. Crucially, the visual environment of the target changed from moment to moment through the addition or removal of distractors and placeholders. Spatiomotor biases were strong and robust across all environmental changes, with minimal to no effect of the environment on them. Thus, spatiomotor biases generalize well beyond the environments in which they are generated, showing that the representation of a target location and response event is not necessarily integrated with the representation of the global visual environment.

    Embodied Seeing: The Space Near the Hands

Recent research has revealed a special role of the hands in guiding vision. We process elements in the space around our hands, and objects near the target of a hand movement, differently. In this chapter we review several different but interrelated domains of research that have approached questions about the role of the hands from somewhat distinct perspectives. We organize our discussion by considering changes in vision during three different phases of a hand movement: (1) when a hand movement is not being contemplated yet is possible in the future, (2) when a hand movement is being planned or produced, and (3) after a hand movement has been completed. Consideration of these phases together reveals important connections between the different areas of research and may lead to enhanced understanding of the underlying processes. © 2015 Elsevier Inc.

    Action history influences eye movements

Recent research has revealed that simple actions can have a profound effect on subsequent perception: people are faster to find a target that shares features with a previously acted-on object, even when those features are irrelevant to their task (the action effect). However, the majority of the evidence for this interaction between action and perception has come from manual response data. It is therefore unknown whether action affects early visual search processes, modulates post-attentional-selection processes, or both. To investigate this, we tracked participants' spontaneous eye movements as they performed an action effect task. In two experiments, we found that participants looked more quickly at the colour of an object they had previously acted on than at the colour of an object they had viewed but not acted on, showing that action influenced early visual search processes. There was also evidence for post-selection effects. The results suggest that prior action affects both pre-selection and post-selection processes, spontaneously guiding attention to, and maintaining it on, objects that were previously important to the observer.

    Does feature-based attention play a role in the episodic retrieval of event files?

In stimulus identification tasks, stimulus-response and location-response information is thought to become integrated into a common event representation following a response. Evidence for this feature integration comes from paradigms requiring keypress responses to pairs of sequentially presented stimuli. In such paradigms, there is a robust cost when a target event only partially matches the preceding event representation, known as the partial repetition cost. Notably, however, these experiments rely on discrimination responses, and recent evidence has suggested that changing the responses to localization or detection responses eliminates partial repetition costs. If changing the response type can eliminate partial repetition costs, it becomes necessary to question whether partial repetition costs reflect feature integration or some other mechanism. In the current study, we sought to answer this question using a design that matched typical partial repetition cost experiments as closely as possible in overall stimulus processing and response requirements. Unlike typical experiments, in which participants make a cued response to a first stimulus before making a discrimination response to a second stimulus, here we reversed that sequence so that participants made a discrimination response to the first stimulus before making a cued response to the second. In Experiment 1, this small change eliminated or substantially reduced the typically large partial repetition costs. In Experiment 2, we returned to the typical sequence and restored the large partial repetition costs. Experiment 3 confirmed these findings, which have implications for interpreting partial repetition costs and for feature integration theories in general. The final article is available at https://www.doi.org/10.1037/xhp0000709. This project was supported by the Natural Sciences and Engineering Research Council of Canada through a Discovery Grant (480593) awarded to Jay Pratt and a postdoctoral scholarship awarded to Matthew D. Hilchey.