34 research outputs found

    Visual search without central vision – no single pseudofovea location is best

    We typically fixate targets such that they are projected onto the fovea, where spatial resolution is best. Macular degeneration patients often develop fixation strategies that project targets onto an intact eccentric part of the retina, called the pseudofovea. A longstanding debate concerns which pseudofovea location is optimal for non-foveal vision. We examined how pseudofovea position and eccentricity affect performance in visual search when vision is restricted to an off-foveal retinal region by a gaze-contingent display that dynamically blurs the stimulus except within a small viewing window (forced field location). Trained, normally sighted participants were more accurate when the forced field location was congruent with the required scan path direction; this contradicts the view that a single pseudofovea location is generally best. Rather, performance depends on the congruence between pseudofovea location and scan path direction.
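    A minimal sketch of the display logic described above, assuming a grayscale stimulus held in a numpy array; forced_field_frame and all parameter values (window radius, blur sigma, pixel offsets) are illustrative choices, not the study's specifications.

        # One frame of a gaze-contingent "forced field" display: the whole
        # stimulus is blurred except for a small sharp window displaced from
        # the current gaze position toward the simulated pseudofovea.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def forced_field_frame(stimulus, gaze_xy, offset_xy, radius=40, sigma=8):
            blurred = gaussian_filter(stimulus, sigma=sigma)
            h, w = stimulus.shape
            ys, xs = np.mgrid[0:h, 0:w]
            cx = gaze_xy[0] + offset_xy[0]    # window center = gaze position
            cy = gaze_xy[1] + offset_xy[1]    # plus the eccentric offset
            window = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
            return np.where(window, stimulus, blurred)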

    Modality shift effects mimic multisensory interactions: an event-related potential study

    A frequent approach to studying interactions of the auditory and visual systems is to measure event-related potentials (ERPs) to auditory, visual, and auditory-visual stimuli (A, V, AV). A nonzero result of the comparison AV − (A + V) indicates that the sensory systems interact at a specific processing stage. Two possible biases weaken the conclusions drawn by this approach. First, subtracting two ERPs from a third requires that A, V, and AV do not share any common activity. We have shown before (Gondan and Röder in Brain Res 1073–1074:389–397, 2006) that the problem of common activity can be avoided by using an additional tactile stimulus (T) and evaluating the ERP difference (T + TAV) − (TA + TV). A second possible confound is the modality shift effect (MSE): for example, the auditory N1 is increased if an auditory stimulus follows a visual stimulus, whereas it is smaller if the modality is unchanged (ipsimodal stimulus). Bimodal stimuli might be affected less by MSEs because at least one component always matches the preceding trial; consequently, an apparent amplitude modulation of the N1 would be observed in AV. We tested the influence of MSEs on auditory-visual interactions by comparing the results of AV − (A + V) using (a) all stimuli and (b) only ipsimodal stimuli. (a) and (b) differed around 150 ms; this indicates that AV − (A + V) is indeed affected by the MSE. We then formally and empirically demonstrate that (T + TAV) − (TA + TV) is robust against possible biases due to the MSE.
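    A worked decomposition makes the logic of the two contrasts concrete. As a simplifying assumption (an illustration of the argument, not the paper's formal derivation), let each ERP be the sum of modality-specific activity (hatted terms), one component C common to all stimuli, and interaction terms I:

        \begin{align*}
        A &= \hat{A} + C, & V &= \hat{V} + C, & AV &= \hat{A} + \hat{V} + C + I_{AV}, \\
        AV - (A + V) &= I_{AV} - C \quad \text{(biased by the common activity $C$)}.
        \end{align*}

        \begin{align*}
        T &= \hat{T} + C, & TA &= \hat{T} + \hat{A} + C + I_{TA}, & TV &= \hat{T} + \hat{V} + C + I_{TV}, \\
        TAV &= \hat{T} + \hat{A} + \hat{V} + C + I_{TAV}, \\
        (T + TAV) - (TA + TV) &= I_{TAV} - I_{TA} - I_{TV}.
        \end{align*}

    The common activity C cancels exactly in the second contrast; if the trimodal interaction decomposes additively as I_{TAV} = I_{TA} + I_{TV} + I_{AV} plus higher-order terms, the contrast reduces to the auditory-visual interaction of interest.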

    Normal and deviant lexical processing: Reply to Dell and O'Seaghdha (1991).


    The fast and the slow of skilled bimanual rhythm production: Parallel versus integrated timing

    Professional pianists performed two bimanual rhythms at a wide range of tempos. The polyrhythmic task required combining two isochronous sequences (3 against 4) between the hands; in the syncopated rhythm task, successive keystrokes formed intervals of identical (isochronous) duration. At slower tempos, pianists relied on integrated timing control, merging successive intervals between the hands into a common reference frame. A timer-motor model, based on the concepts of rate fluctuation and the distinction between target-specification and timekeeper-execution processes, is proposed as a quantitative account of performance at slow tempos. At rapid rates, expert pianists used hand-independent, parallel timing control. As an alternative to a model based on a single central clock, the findings support flexible control structures with multiple timekeepers that can work in parallel to accommodate specific task constraints.
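    A small sketch of what the integrated (merged) reference frame of a 3-against-4 polyrhythm looks like; the cycle length of 12 time units is an arbitrary choice for exact arithmetic, not a value from the study.

        # Merged keystroke stream of a 3-against-4 polyrhythm: under
        # integrated timing, both hands' onsets form a single sequence
        # of target intervals within a common cycle.
        def merged_intervals(cycle=12):
            assert cycle % 12 == 0                       # 12 divides exactly by 3 and 4
            right = [cycle * k // 4 for k in range(4)]   # 4 isochronous strokes
            left  = [cycle * k // 3 for k in range(3)]   # 3 isochronous strokes
            onsets = sorted(set(right + left))
            return onsets, [b - a for a, b in zip(onsets, onsets[1:])]

        onsets, intervals = merged_intervals()
        print(onsets)     # [0, 3, 4, 6, 8, 9]
        print(intervals)  # [3, 1, 2, 2, 1]: the integrated interval pattern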

    Timing of Two-Handed Rhythmic Performance


    Editorial


    Disentangling semantic and response learning effects in color-word contingency learning.

    It is easier to indicate the ink color of a color-neutral noun when it is presented in the color in which it has frequently been shown before, relative to print colors in which it has been shown less often. This phenomenon is known as color-word contingency learning. It remains unclear whether participants actually learn semantic (word-color) associations, response (word-button) associations, or both. We present a novel variant of the paradigm that can disentangle semantic and response learning, because word-color and word-button associations are manipulated independently. In four experiments, each involving four daily sessions, pseudowords (such as enas, fatu, or imot) were probabilistically associated with either a particular color, a particular response-button position, or both. Neutral trials without color-pseudoword association were also included, and participants' awareness of the contingencies was manipulated. The data showed no influence of explicit contingency awareness, but clear evidence for both response learning and semantic learning, with effects emerging swiftly. Deeper processing of color information, with color words presented in black instead of color patches to indicate response-button positions, resulted in stronger effects for both semantic and response learning. Our data add a crucial piece of evidence lacking so far in color-word contingency learning studies: semantic learning effectively takes place even when associations are learned in an incidental way.
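    A minimal trial generator illustrating how the two contingencies can be decoupled; the contingency strength p = 0.8 and the neutral pseudoword "norp" are assumptions made for the sketch, while enas, fatu, and imot come from the abstract.

        # The displayed color and the required button are drawn independently,
        # so a pseudoword can predict its color (semantic association), its
        # button (response association), both, or neither (neutral).
        import random

        COLORS = ["red", "green", "blue", "yellow"]
        BUTTONS = [0, 1, 2, 3]  # response-button positions

        def make_trial(word, frequent_color=None, frequent_button=None, p=0.8):
            color = (frequent_color
                     if frequent_color is not None and random.random() < p
                     else random.choice(COLORS))
            button = (frequent_button
                      if frequent_button is not None and random.random() < p
                      else random.choice(BUTTONS))
            return {"word": word, "color": color, "button": button}

        trials = [make_trial("enas", frequent_color="red"),    # semantic only
                  make_trial("fatu", frequent_button=2),       # response only
                  make_trial("imot", frequent_color="blue",
                             frequent_button=0),               # both
                  make_trial("norp")]                          # neutral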