
    Cenozoic paleoceanography 1986: An introduction

    New developments in Cenozoic paleoceanography include the application of climate models and atmospheric general circulation models to questions of climate reconstruction; the refinement of conceptual models for interpreting the carbon isotope record in terms of carbon mass balance, paleocirculation, and paleoproductivity; and the regional mapping of paleoceanographic events by acoustic stratigraphy. Sea level change emerges as a master variable to which changes in the ocean environment must in many cases be traced, and tests of the onlap-offlap paradigm are therefore of crucial importance.

    Does A Flatter General Gradient of Visual Attention Explain Peripheral Advantages and Central Deficits in Deaf Adults?

    Individuals deaf from an early age often outperform hearing individuals in the visual periphery on attention-dependent dorsal stream tasks (e.g., spatial localization or movement detection), but sometimes show central visual attention deficits, usually on ventral stream object identification tasks. It has been proposed that early deafness adaptively redirects attentional resources from central to peripheral vision to monitor extrapersonal space in the absence of auditory cues, producing a more evenly distributed attention gradient across visual space. However, little direct evidence exists that peripheral advantages are functionally tied to central deficits, rather than determined by independent mechanisms, and previous studies using several attention tasks typically report peripheral advantages or central deficits, not both. To test the general altered attentional gradient proposal, we employed a novel divided attention paradigm that measured target localization performance along a gradient from parafoveal to peripheral locations, independent of concurrent central object identification performance, in prelingually deaf and hearing groups who differed in access to auditory input. Deaf participants without cochlear implants (No-CI), deaf participants with cochlear implants (CI), and hearing participants identified vehicles presented centrally, and concurrently reported the location of parafoveal (1.4°) and peripheral (13.3°) targets among distractors. No-CI participants, but not CI participants, showed a central identification accuracy deficit. However, all groups displayed equivalent target localization accuracy at peripheral and parafoveal locations and nearly parallel parafoveal-peripheral gradients. Furthermore, the No-CI group's central identification deficit remained after statistically controlling for peripheral performance; conversely, the parafoveal and peripheral group performance equivalencies remained after controlling for central identification accuracy.
These results suggest that, in the absence of auditory input, reduced central attentional capacity is not necessarily associated with enhanced peripheral attentional capacity or with flattening of a general attention gradient. Our findings converge with earlier studies suggesting that a general graded trade-off of attentional resources across the visual field does not adequately explain the complex task-dependent spatial distribution of deaf-hearing performance differences reported in the literature. Rather, growing evidence suggests that the spatial distribution of attention-mediated performance in deaf people is determined by sophisticated cross-modal plasticity mechanisms that recruit specific sensory and polymodal cortex to achieve specific compensatory processing goals.

    Camera Relocalization with Ellipsoidal Abstraction of Objects

    We are interested in AR applications that take place in man-made, GPS-denied environments such as industrial or indoor scenes. In such environments, relocalization may fail because of repeated patterns and large changes in appearance that occur even for small changes in viewpoint. In this paper we investigate a new relocalization method that operates at the level of objects and takes advantage of the impressive progress realized in object detection. Recent works have opened the way towards object-oriented reconstruction from elliptic approximations of objects detected in images. We go one step further and propose a new method for pose computation based on ellipse/ellipsoid correspondences. We consider the practically common case where an initial guess of the rotation matrix of the pose is known, for instance from an inertial sensor or from the estimation of orthogonal vanishing points. Our contributions are twofold. First, we prove that a closed-form estimate of the translation can be computed from one ellipse-ellipsoid correspondence; the accuracy of the method is assessed on the LINEMOD database using only one correspondence. Second, we demonstrate the effectiveness of the method on real scenes, using a set of object detections generated by YOLO together with a robust framework that chooses the best set of hypotheses based on an appropriate estimate of the reprojection error of ellipsoids. Globally, considering pose at the level of objects allows us to avoid common failures due to repeated structures. In addition, because object correspondences induce only a small combinatorial search space, our method is well suited to fast rough localization even in large environments.
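The geometric machinery behind ellipse/ellipsoid correspondences rests on a standard projective identity: an ellipsoid represented as a 4x4 dual quadric Q* projects to a 3x3 dual conic C* ~ P Q* P^T under a 3x4 camera matrix P. A minimal numpy sketch of that identity (function names are illustrative, not taken from the paper):

```python
import numpy as np

def ellipsoid_dual_quadric(center, semi_axes):
    """Dual quadric of an axis-aligned ellipsoid:
    Q* = T diag(a^2, b^2, c^2, -1) T^T, with T the homogeneous translation
    moving the ellipsoid to `center`."""
    T = np.eye(4)
    T[:3, 3] = center
    a, b, c = semi_axes
    return T @ np.diag([a**2, b**2, c**2, -1.0]) @ T.T

def project_to_dual_conic(P, Q_dual):
    """Project a dual quadric to a dual conic: C* ~ P Q* P^T (up to scale).
    Normalised so the bottom-right entry is -1."""
    C = P @ Q_dual @ P.T
    return C / -C[2, 2]

# Unit sphere at depth 5 seen by a canonical camera P = [I | 0]: its outline
# is a circle of radius 1/sqrt(5^2 - 1^2) in normalised image coordinates.
P = np.hstack([np.eye(3), np.zeros((3, 1))])
C = project_to_dual_conic(P, ellipsoid_dual_quadric([0.0, 0.0, 5.0],
                                                    [1.0, 1.0, 1.0]))
radius = np.sqrt(C[0, 0])
```

The dual (rather than primal) representation is the standard choice here precisely because it projects by a single congruence with P, with no need to intersect viewing rays with the quadric.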

    Camera Pose Estimation with Semantic 3D Model

    In computer vision, estimating camera pose from correspondences between 3D geometric entities and their projections into the image is a widely investigated problem. Although most state-of-the-art methods exploit simple primitives such as points or lines, and thus require dense scene models, the emergence of very effective CNN-based object detectors in recent years has paved the way to the use of much lighter 3D models composed solely of a few semantically relevant features. In that context, we propose a novel model-based camera pose estimation method in which the scene is modeled by a set of virtual ellipsoids. We show that the 6-DoF camera pose can be determined by optimizing only the three orientation parameters, and that at least two correspondences between 3D ellipsoids and their 2D projections are necessary in practice. We validate the approach in both simulated and real environments.
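The reduction from six pose parameters to three works because, once the rotation is fixed, the translation enters the projection equations linearly and can be recovered in closed form. The paper's own closed form is specific to its ellipse/ellipsoid parametrisation; the sketch below illustrates the general principle with a simpler centre-alignment approximation (not the paper's method; all names are illustrative), where each 3D-centre/2D-centre correspondence contributes two linear equations in the translation t:

```python
import numpy as np

def translation_from_centers(R, points_3d, centers_2d):
    """Least-squares camera translation with rotation R (3x3) known.
    points_3d: 3D ellipsoid centres; centers_2d: their observed projections
    in normalised image coordinates (intrinsics already removed).
    Each correspondence (X, (u, v)) gives two equations linear in t:
        t_x - u * t_z = u * (r3 . X) - (r1 . X)
        t_y - v * t_z = v * (r3 . X) - (r2 . X)
    """
    A, b = [], []
    for X, (u, v) in zip(points_3d, centers_2d):
        r = R @ X
        A.append([1.0, 0.0, -u]); b.append(u * r[2] - r[0])
        A.append([0.0, 1.0, -v]); b.append(v * r[2] - r[1])
    t, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return t

# Synthetic check: recover a known translation from two noise-free
# correspondences (two is also the practical minimum quoted in the abstract).
R = np.eye(3)
t_true = np.array([0.1, -0.2, 0.5])
points = [np.array([0.0, 0.0, 4.0]), np.array([1.0, 1.0, 6.0])]
obs = [((R @ X + t_true)[0] / (R @ X + t_true)[2],
        (R @ X + t_true)[1] / (R @ X + t_true)[2]) for X in points]
t_est = translation_from_centers(R, points, obs)
```

With two correspondences the system has four equations in three unknowns, which is why two ellipsoid matches suffice in practice once orientation is handled separately.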

    Evidence for an oscillatory singularity in generic U(1) symmetric cosmologies on $T^3 \times R$

    A longstanding conjecture by Belinskii, Lifshitz, and Khalatnikov, that the singularity in generic gravitational collapse is locally oscillatory, is tested numerically in vacuum, U(1) symmetric cosmological spacetimes on $T^3 \times R$. If the velocity term dominated (VTD) solution to Einstein's equations is substituted into the Hamiltonian for the full Einstein evolution equations, one term is found to grow exponentially. This generates a prediction that oscillatory behavior involving this term and another (which the VTD solution causes to decay exponentially) should be observed in the approach to the singularity. Numerical simulations strongly support this prediction.

    Comment: 15 pages, RevTeX, includes 12 figures (psfig). High-resolution versions of figures 7, 8, 9, and 11 may be obtained by anonymous ftp from ftp://vela.acs.oakland.edu/pub/berger/u1genfig