405 research outputs found

    No effect of auditory–visual spatial disparity on temporal recalibration

    It is known that the brain adaptively recalibrates itself to small (∼100 ms) auditory–visual (AV) temporal asynchronies so as to maintain intersensory temporal coherence. Here we explored whether spatial disparity between a sound and a light affects AV temporal recalibration. Participants were exposed to a train of asynchronous AV stimulus pairs (sound-first or light-first) with the sounds and lights emanating from either the same or a different location. Following a short exposure phase, participants were tested on an AV temporal order judgement (TOJ) task. Temporal recalibration manifested itself as a shift of subjective simultaneity in the direction of the adapted audiovisual lag. The shift was equally large whether exposure and test stimuli were presented from the same or from different locations. These results provide strong evidence for the idea that spatial co-localisation is not a necessary constraint for intersensory pairing to occur.
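
    The shift of subjective simultaneity reported here is conventionally estimated by fitting a psychometric function to TOJ responses across a range of stimulus onset asynchronies (SOAs); the fitted midpoint is the point of subjective simultaneity (PSS). Below is a minimal sketch of that analysis step; the SOA values, response proportions, and parameter names are hypothetical and are not taken from the study.

        # Minimal sketch: estimate the PSS by fitting a cumulative Gaussian
        # to TOJ data. All numbers below are hypothetical.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def psychometric(soa, pss, sigma):
            """P('light first' response) as a function of AV SOA in ms."""
            return norm.cdf(soa, loc=pss, scale=sigma)

        soas = np.array([-240.0, -120.0, -60.0, 0.0, 60.0, 120.0, 240.0])
        p_after_sound_first = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.93, 0.99])
        p_after_light_first = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.97, 1.00])

        for label, p in [("sound-first exposure", p_after_sound_first),
                         ("light-first exposure", p_after_light_first)]:
            (pss, sigma), _ = curve_fit(psychometric, soas, p, p0=[0.0, 80.0])
            print(f"{label}: PSS = {pss:+.1f} ms, slope sigma = {sigma:.1f} ms")
        # Temporal recalibration appears as the PSS shifting toward the adapted lag.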

    Identification of Berezin-Toeplitz deformation quantization

    We give a complete identification of the deformation quantization which was obtained from the Berezin-Toeplitz quantization on an arbitrary compact Kähler manifold. The deformation quantization with the opposite star-product proves to be a differential deformation quantization with separation of variables whose classifying form is explicitly calculated. Its characteristic class (which classifies star-products up to equivalence) is obtained. The proof is based on the microlocal description of the Szegő kernel of a strictly pseudoconvex domain given by Boutet de Monvel and Sjöstrand. Comment: 26 pages.
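
    As standard background (not a statement taken from the paper, and subject to sign and factor conventions): a star product on a Kähler manifold is a formal associative deformation of the pointwise product of smooth functions,

        f \star g \;=\; \sum_{k \ge 0} \nu^{k}\, C_k(f,g), \qquad C_0(f,g) = fg, \qquad C_1(f,g) - C_1(g,f) = \mathrm{i}\,\{f,g\},

    with bidifferential operators C_k. "Separation of variables" (in Karabegov's sense) means each C_k differentiates its first argument only in antiholomorphic directions and its second only in holomorphic directions (or vice versa, depending on convention); such star products are classified by a formal closed (1,1)-form, the classifying form mentioned in the abstract.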

    Classification of Invariant Star Products up to Equivariant Morita Equivalence on Symplectic Manifolds

    In this paper we investigate equivariant Morita theory for algebras with momentum maps and explicitly compute the equivariant Picard groupoid in terms of the Picard groupoid. We consider three types of Morita theory: ring-theoretic equivalence, *-equivalence, and strong equivalence. We then apply these general considerations to star product algebras over symplectic manifolds with a Lie algebra symmetry, obtaining the full classification up to equivariant Morita equivalence. Comment: 28 pages. Minor update, fixed typos.
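
    For orientation (standard background, not specific to this paper): two unital algebras A and B are Morita equivalent when there is an invertible (B, A)-bimodule E, i.e. one admitting a bimodule E' with

        E' \otimes_{B} E \;\cong\; A \qquad \text{and} \qquad E \otimes_{A} E' \;\cong\; B.

    The Picard groupoid has such algebras as objects and isomorphism classes of invertible bimodules as arrows, so its isotropy group at A is the Picard group Pic(A); the equivariant version referred to in the abstract additionally keeps track of the symmetry data (here, momentum maps) carried by the bimodules.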

    Morita Equivalence, Picard Groupoids and Noncommutative Field Theories

    In this article we review recent developments on Morita equivalence of star products and their Picard groups. We point out the relations between noncommutative field theories and deformed vector bundles, which provide the Morita equivalence bimodules. Comment: LaTeX2e, 10 pages. Conference proceedings for the Sendai Meeting 2002. Some typos fixed.

    Cognitive factors and adaptation to auditory-visual discordance


    Codimension one symplectic foliations and regular Poisson structures

    Original manuscript: June 21, 2011. In this short note we give a complete characterization of a certain class of compact corank one Poisson manifolds: those equipped with a closed one-form defining the symplectic foliation and a closed two-form extending the symplectic form on each leaf. If such a manifold has a compact leaf, then all the leaves are compact, and furthermore the manifold is a mapping torus of a compact leaf. These manifolds and their regular Poisson structures admit an extension as the critical hypersurface of a b-Poisson manifold, as we will see in [9].
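
    In more standard terminology (an added gloss, not the authors' wording): such a pair of closed forms (\alpha, \omega) with \alpha \wedge \omega^{n} a volume form is a cosymplectic structure, and the mapping-torus statement says that if some leaf L is compact then

        M \;\cong\; L \times [0,1] \,/\, (x,0) \sim (\phi(x),1)

    for a symplectomorphism \phi of the leaf (L, \omega|_{L}), with the symplectic foliation given by the fibres L \times \{t\}.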

    Auditory grouping occurs prior to intersensory pairing: evidence from temporal ventriloquism

    The authors examined how principles of auditory grouping relate to intersensory pairing. Two sounds that normally enhance sensitivity on a visual temporal order judgement task (i.e. temporal ventriloquism) were embedded in a sequence of flanker sounds which had either the same or a different frequency (Exp. 1), rhythm (Exp. 2), or location (Exp. 3). In all experiments, we found that temporal ventriloquism only occurred when the two capture sounds differed from the flankers, demonstrating that grouping of the sounds in the auditory stream took priority over intersensory pairing. By combining principles of auditory grouping with intersensory pairing, we also demonstrate that the capture sounds were, counter-intuitively, more effective when their locations differed from those of the lights than when they came from the same positions as the lights.

    Interaction of perceptual grouping and crossmodal temporal capture in tactile apparent-motion

    Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can "capture" visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and of the event structure, which modulates uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive left- and right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were presented for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from -75 ms to 75 ms. The perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs, one short (75 ms) and one long (325 ms), were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to those in Experiment 1), but both beeps now occurred temporally close to the taps on one side (the even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only the odd-numbered taps, but not the even-numbered taps, were paired with auditory beeps. This abolished the temporal-capture effect and, instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed, independently of the SOA variation. These findings suggest that asymmetric crossmodal grouping leads to an attentional modulation of apparent motion, which inhibits crossmodal temporal-capture effects.
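
    A minimal sketch of the temporal-capture logic described above (illustrative only; neither the authors' model nor their data): if each beep pulls the perceived time of its paired tap toward itself by some fraction, the nominally equal 400 ms inter-tap intervals become unequal, and that asymmetry is what would bias the reported direction of apparent motion. The capture weight and tap count below are hypothetical.

        # Illustrative sketch of audiotactile temporal capture in a two-tap
        # apparent-motion stream; parameter values are hypothetical.
        import numpy as np

        SOA_TAPS = 400.0      # ms between successive left/right taps (from the abstract)
        CAPTURE_WEIGHT = 0.3  # hypothetical fraction by which a beep shifts a tap's perceived time

        def perceived_intervals(audiotactile_soa, n_taps=8):
            """Mean of the two alternating perceived inter-tap intervals when the
            odd-numbered taps carry a beep offset by `audiotactile_soa` ms."""
            physical = np.arange(n_taps) * SOA_TAPS
            perceived = physical.copy()
            perceived[::2] += CAPTURE_WEIGHT * audiotactile_soa  # taps 1, 3, ... (0-based indices 0, 2, ...)
            steps = np.diff(perceived)
            return steps[::2].mean(), steps[1::2].mean()

        for soa in (-75, 0, 75):
            a, b = perceived_intervals(soa)
            print(f"beep offset {soa:+4d} ms -> perceived intervals {a:.1f} ms / {b:.1f} ms")
        # Unequal perceived intervals would bias which tap pair groups as a motion step.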