
    Visual working memory contents bias ambiguous structure from motion perception

    The way we perceive the visual world depends crucially on the state of the observer. In the present study we show that what we are holding in working memory (WM) can bias the way we perceive ambiguous structure from motion stimuli. Holding in memory the percept of an unambiguously rotating sphere influenced the perceived direction of motion of an ambiguously rotating sphere presented shortly thereafter. In particular, we found a systematic difference between congruent dominance periods, where the perceived direction of the ambiguous stimulus corresponded to the direction of the unambiguous one, and incongruent dominance periods. Congruent dominance periods were more frequent when participants memorized the speed of the unambiguous sphere for delayed discrimination than when they performed an immediate judgment on a change in its speed. The analysis of the dominance time-course showed that a sustained tendency to perceive the same direction of motion as the prior stimulus emerged only in the WM condition, whereas in the attention condition perceptual dominance dropped to chance levels at the end of the trial. The results are explained in terms of a direct involvement of early visual areas in the active representation of visual motion in WM.

    Multiple M-wave interaction with fluxes

    We present the equations of motion for the multiple M0-brane (multiple M-wave, or mM0) system in a general eleven-dimensional supergravity background. These are obtained in the frame of the superembedding approach, but have a rigid structure: they can be restored from the SO(1,1) x SO(9) symmetry characteristic of M0. The BPS conditions for the 1/2 supersymmetric solutions of these equations admit a fuzzy 2-sphere solution describing the M2-brane. Comment: 4 pages, no figures, RevTeX4. V2: the discussion of BPS conditions and some supersymmetric solutions is added; the explicit values of the coefficients of the interacting terms are presented; also a couple of minor changes. V3: a small misprint corrected. Published: Phys.Rev.Lett. 105 (2010) 07160
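    For context, the fuzzy 2-sphere appearing in the BPS solution is the standard noncommutative sphere built from an N-dimensional irreducible representation of su(2) (a textbook construction, not taken from the paper itself):

```latex
\[
X^i \;=\; \frac{2R}{\sqrt{N^2-1}}\,J^i,
\qquad
[\,J^i, J^j\,] \;=\; i\,\epsilon^{ijk} J^k,
\qquad
\sum_{i=1}^{3} (J^i)^2 \;=\; \frac{N^2-1}{4}\,\mathbf{1}_N,
\]
\[
\text{so that}\qquad
[\,X^i, X^j\,] \;=\; \frac{2iR}{\sqrt{N^2-1}}\,\epsilon^{ijk}\,X^k,
\qquad
\sum_{i=1}^{3} (X^i)^2 \;=\; R^2\,\mathbf{1}_N .
\]
```

    In the large-N limit the commutators vanish and the algebra reduces to the ordinary sphere of radius R, which is why such configurations describe a (fuzzy) M2-brane geometry built from M0 constituents.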

    Inertial-sensor bias estimation from brightness/depth images and based on SO(3)-invariant integro/partial-differential equations on the unit sphere

    Constant biases associated with the measured linear and angular velocities of a moving object can be estimated from measurements of a static scene by embedded brightness and depth sensors. We propose here a Lyapunov-based observer that takes advantage of the SO(3)-invariance of the partial differential equations satisfied by the measured brightness and depth fields. The resulting asymptotic observer is governed by a nonlinear integro/partial differential system in which the two independent scalar variables indexing the pixels live on the unit sphere of 3D Euclidean space. The observer design and analysis are strongly simplified by coordinate-free differential calculus on the unit sphere equipped with its natural Riemannian structure. The observer convergence is investigated under C^1 regularity assumptions on the object motion and its scene. It relies on the Ascoli-Arzela theorem and pre-compactness of the observer trajectories. It is proved that the estimated biases converge towards the true ones if and only if the scene admits no cylindrical symmetry. The observer design can be adapted to realistic sensors where brightness and depth data are only available on a subset of the unit sphere. Preliminary simulations with synthetic brightness and depth images (corrupted by around 10% noise) indicate that such Lyapunov-based observers should be robust and convergent under much weaker regularity assumptions. Comment: 30 pages, 6 figures, submitted
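    The Lyapunov-based design described above follows a familiar adaptive-observer pattern: copy the measured field's transport dynamics using the bias estimates, inject the output error, and adapt the estimates so that a Lyapunov functional decreases. A schematic form (not the paper's exact equations; F denotes the invariant transport operator, v_m and omega_m the measured velocities, and b the bias vector) is:

```latex
\[
\partial_t \hat y \;=\; F\!\big(\hat y;\; v_m - \hat b_v,\;\omega_m - \hat b_\omega\big)\;-\;k\,(\hat y - y),
\qquad k>0,
\]
\[
\dot{\hat b} \;=\; -\,\gamma \int_{S^2} (\hat y - y)\,
\partial_b F\big(\hat y;\; v_m - \hat b_v,\;\omega_m - \hat b_\omega\big)\,d\sigma,
\qquad \gamma>0,
\]
\[
V \;=\; \tfrac12 \int_{S^2} (\hat y - y)^2\,d\sigma
\;+\;\tfrac{1}{2\gamma}\,\big|\hat b - b\big|^2 .
\]
```

    The "no cylindrical symmetry" condition plays the role of a persistency-of-excitation assumption: if the scene were rotationally symmetric about some axis, a bias error aligned with that axis would produce no output error and could never be corrected.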

    A Variational Framework for Structure from Motion in Omnidirectional Image Sequences

    We address the problem of depth and ego-motion estimation from omnidirectional images. We formulate a correspondence-free structure-from-motion problem for sequences of images mapped on the 2-sphere. A novel graph-based variational framework is first proposed for depth estimation between pairs of images. The estimation is cast as a TV-L1 optimization problem that is solved by a fast graph-based algorithm. The ego-motion is then estimated directly from the depth information, without explicit computation of the optical flow. Both problems are finally addressed together in an iterative algorithm that alternates between depth and ego-motion estimation for fast computation of 3D information from motion in image sequences. Experimental results demonstrate the effective performance of the proposed algorithm for 3D reconstruction from synthetic and natural omnidirectional image sequences.
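    The TV-L1 model at the heart of the depth-estimation step can be illustrated on a flat image grid with a standard primal-dual (Chambolle-Pock) scheme. This is a generic planar sketch, not the paper's graph-based spherical solver; all names (`tvl1`, `lam`, the step sizes) are illustrative choices:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Divergence, defined as the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[:, :-1] += px[:, :-1]
    d[:, 1:] -= px[:, :-1]
    d[:-1, :] += py[:-1, :]
    d[1:, :] -= py[:-1, :]
    return d

def tvl1(f, lam=1.0, n_iter=200, tau=0.25, sigma=0.25):
    """Minimize  TV(u) + lam * ||u - f||_1  by primal-dual iteration.

    tau * sigma * L^2 <= 1 with L^2 = 8 guarantees convergence.
    """
    u = f.copy()
    u_bar = f.copy()
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        # Dual ascent on p, then projection onto the unit ball |p| <= 1.
        gx, gy = grad(u_bar)
        px += sigma * gx
        py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px /= norm
        py /= norm
        # Primal descent: proximal map of the L1 data term
        # is soft-thresholding of (v - f) at level tau*lam.
        u_old = u
        v = u + tau * div(px, py)
        diff = v - f
        u = np.where(diff > tau * lam, v - tau * lam,
                     np.where(diff < -tau * lam, v + tau * lam, f))
        # Over-relaxation step.
        u_bar = 2 * u - u_old
    return u
```

    The L1 data term makes the model contrast-invariant: isolated outlier pixels (scale smaller than roughly 2/lam) are removed entirely, while large structures such as depth discontinuities pass through unchanged, which is why TV-L1 suits depth maps better than a quadratic data term.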