
    Localization of shapes: eye movements and perception compared

    The localization of spatially extended objects is thought to be based on the computation of a default reference position, such as the center of gravity. This position can serve as the goal point for a saccade, a locus for fixation, or the reference for perceptual localization. We compared perceptual and saccadic localization for non-convex shapes where the center of gravity (COG) was located outside the boundary of the shape and did not coincide with any prominent perceptual features. The landing positions of single saccades made to the shape, as well as the preferred loci for fixation, were near the center of gravity, although local features such as part boundaries were influential. Perceptual alignment positions were also close to the center of gravity, but showed configural effects that did not influence either saccades or fixation. Saccades made in a more naturalistic sequential scanning task landed near the center of gravity with a considerably higher degree of accuracy (mean error <4% of saccade size) and showed no effects of local features, constituent parts, or stimulus configuration. We conclude that perceptual and oculomotor localization is based on the computation of a precise central reference position, which coincides with the center of gravity in sequential scanning. The saliency of the center of gravity, relative to other prominent visual features, can depend on the specific localization task or the relative configuration of elements. Sequential scanning, the more natural of the saccadic tasks, may provide a better way to evaluate the “default” reference position for localization. The fact that the reference position used in both oculomotor and perceptual tasks fell outside the boundary of the shapes supports the importance of spatial pooling, in contrast to local features, in object localization.
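The key geometric fact in this abstract — that a non-convex shape's center of gravity can fall outside its own boundary — is easy to demonstrate. The sketch below computes a polygon centroid with the standard shoelace formula and checks, by ray casting, that the centroid of a C-shaped figure lands in the notch, outside the shape. The vertex list is an illustrative example, not a stimulus from the study.

```python
# Centroid ("center of gravity") of a polygon via the shoelace formula,
# plus an even-odd ray-casting test for point-in-polygon membership.

def centroid(poly):
    a = cx = cy = 0.0
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        cross = x0 * y1 - x1 * y0      # signed parallelogram area of edge i
        a  += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

def point_in_polygon(p, poly):
    # standard even-odd ray casting: cast a ray to the right, count crossings
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        if (y0 > y) != (y1 > y) and x < x0 + (y - y0) * (x1 - x0) / (y1 - y0):
            inside = not inside
    return inside

# C-shaped (non-convex) polygon: a 3x3 square with a 1x2 notch cut out of the top
poly = [(0, 0), (3, 0), (3, 3), (2, 3), (2, 1), (1, 1), (1, 3), (0, 3)]
cx, cy = centroid(poly)
print((cx, cy), point_in_polygon((cx, cy), poly))
# the centroid (1.5, ~1.357) lies in the notch, i.e. outside the boundary
```

A saccade aimed at this COG would therefore target empty space between the shape's arms, which is what makes such stimuli diagnostic for spatial pooling versus local-feature accounts.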

    Neuroscience discipline science plan

    Over the past two decades, NASA's efforts in the neurosciences have developed into a program of research directed at understanding the acute changes that occur in the neurovestibular and sensorimotor systems during short-duration space missions. However, the proposed extended-duration flights of up to 28 days on the Shuttle orbiter and 6 months on Space Station Freedom, a lunar outpost, and Mars missions of perhaps 1-3 years in space, make it imperative that NASA's Life Sciences Division begin to concentrate research in the neurosciences on the chronic effects of exposure to microgravity on the nervous system. Major areas of research will be directed at understanding (1) central processing, (2) motor systems, (3) cognitive/spatial orientation, and (4) sensory receptors. The purpose of the Discipline Science Plan is to provide a conceptual strategy for NASA's Life Sciences Division research and development activities in the comprehensive area of neurosciences. It covers the significant research areas critical to NASA's programmatic requirements for the Extended-Duration Orbiter, Space Station Freedom, and exploration mission science activities. These science activities include ground-based and flight; basic, applied, and operational; and animal and human research and development. This document summarizes the current status of the program, outlines available knowledge, establishes goals and objectives, identifies science priorities, and defines critical questions in the subdiscipline areas of nervous system function. It contains a general plan that will be used by NASA Headquarters Program Offices and the field centers to review and plan basic, applied, and operational intramural and extramural research and development activities in this area.

    Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems

    Predicting the future location of vehicles is essential for safety-critical applications such as advanced driver assistance systems (ADAS) and autonomous driving. This paper introduces a novel approach to simultaneously predict both the location and scale of target vehicles in the first-person (egocentric) view of an ego-vehicle. We present a multi-stream recurrent neural network (RNN) encoder-decoder model that separately captures object location and scale and pixel-level observations for future vehicle localization. We show that incorporating dense optical flow improves prediction results significantly, since it captures information about motion as well as appearance change. We also find that explicitly modeling future motion of the ego-vehicle improves the prediction accuracy, which could be especially beneficial in intelligent and automated vehicles that have motion planning capability. To evaluate the performance of our approach, we present a new dataset of first-person videos collected from a variety of scenarios at road intersections, which are particularly challenging moments for prediction because vehicle trajectories are diverse and dynamic.
    Comment: To appear on ICRA 201
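The multi-stream encoder-decoder idea can be sketched minimally in numpy: one encoder stream consumes past bounding boxes, another consumes a per-frame motion feature standing in for pooled optical flow, and a decoder unrolls future box predictions from the fused state. Everything here is an illustrative assumption — the vanilla tanh cells, the layer sizes, the average fusion, and the random inputs are not the paper's architecture, which uses GRU-based RNNs with learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(x, h, Wx, Wh, b):
    # one vanilla tanh RNN cell update (a stand-in for the GRUs in the paper)
    return np.tanh(x @ Wx + h @ Wh + b)

H = 16
# two encoder streams: past bounding boxes (cx, cy, w, h) and an
# 8-dim per-frame feature standing in for pooled dense optical flow
Wx_box,  Wh_box  = rng.normal(0, .1, (4, H)), rng.normal(0, .1, (H, H))
Wx_flow, Wh_flow = rng.normal(0, .1, (8, H)), rng.normal(0, .1, (H, H))
b = np.zeros(H)

past_boxes = rng.uniform(0, 1, (10, 4))   # 10 observed frames
past_flow  = rng.normal(0, 1, (10, 8))

h_box = h_flow = np.zeros(H)
for t in range(10):
    h_box  = rnn_step(past_boxes[t], h_box,  Wx_box,  Wh_box,  b)
    h_flow = rnn_step(past_flow[t],  h_flow, Wx_flow, Wh_flow, b)

# fuse the stream states and unroll the decoder over the horizon
Wh_dec = rng.normal(0, .1, (H, H))
W_out  = rng.normal(0, .1, (H, 4))
h_dec = 0.5 * (h_box + h_flow)            # simple average fusion
future = []
for _ in range(5):                        # predict 5 future frames
    h_dec = np.tanh(h_dec @ Wh_dec + b)
    future.append(h_dec @ W_out)          # predicted (cx, cy, w, h) offsets
future = np.stack(future)
print(future.shape)  # (5, 4)
```

The point of the separation is that each stream can specialize — box dynamics versus appearance/motion cues — before fusion, rather than forcing one recurrent state to encode both.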

    Connecting the Retina to the Brain

    The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Work in the laboratory of LE is funded by the BBSRC [BB/J00815X/1] and the R.S. Macdonald Charitable Trust. Research in the laboratory of EH is funded by grants from the Regional Government [Prometeo2012-005], the Spanish Ministry of Economy and Competitiveness [BFU2010-16563] and the European Research Council [ERC2011-StG20101109]. Peer reviewed.

    Perceived Vertical and Lateropulsion: Clinical Syndromes, Localization, and Prognosis

    We present a clinical classification of central vestibular syndromes according to the three major planes of action of the vestibulo-ocular reflex: yaw, roll, and pitch. The plane-specific syndromes are determined by ocular motor, postural, and perceptual signs. Yaw plane signs are horizontal nystagmus, past pointing, rotational and lateral body falls, and deviation of perceived straight-ahead to the left or right. Roll plane signs are torsional nystagmus, skew deviation, ocular torsion, and tilts of head, body, and perceived vertical in a clockwise or counterclockwise direction. Pitch plane signs are upbeat/downbeat nystagmus, forward/backward tilts and falls, and deviations of the perceived horizon. The syndromes thus defined allow a precise topographic analysis of brainstem lesions according to their level and side. Special emphasis is placed on the vestibular roll plane syndromes of ocular tilt reaction, lateropulsion in Wallenberg's syndrome, and thalamic and cortical astasia, and their association with roll plane tilt of perceived vertical. Recovery is based on a functionally significant central compensation of a vestibular tone imbalance, the mechanism of which is largely unknown. Physical therapy may facilitate this central compensation, but this has not yet been proven in prospective studies.

    YOLO-BEV: Generating Bird's-Eye View in the Same Way as 2D Object Detection

    Vehicle perception systems strive to achieve comprehensive and rapid visual interpretation of their surroundings for improved safety and navigation. We introduce YOLO-BEV, an efficient framework that harnesses a unique surrounding-camera setup to generate a 2D bird's-eye view of the vehicular environment. By strategically positioning eight cameras at 45-degree intervals, our system captures and integrates imagery into a coherent 3x3 grid format with the center cell left blank, providing an enriched spatial representation that facilitates efficient processing. In our approach, we employ YOLO's detection mechanism, favoring its inherent advantages of swift response and compact model structure. Instead of the conventional YOLO detection head, we use a custom-designed detection head that translates the panoramically captured data into a unified bird's-eye-view map of the ego car. Preliminary results validate the feasibility of YOLO-BEV in real-time vehicular perception tasks. With its streamlined architecture and potential for rapid deployment due to minimized parameters, YOLO-BEV is a promising tool that may reshape future perspectives in autonomous driving systems.
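The 3x3 grid assembly described above can be sketched directly: eight frames, one per 45-degree heading, tiled around a blank center cell that represents the ego vehicle. The clockwise camera ordering, the tile sizes, and the synthetic single-valued frames are assumptions for illustration; the abstract does not specify them.

```python
import numpy as np

# eight synthetic camera frames, one per 45-degree heading,
# ordered clockwise starting from the front-left of the vehicle
h, w = 64, 64
cams = [np.full((h, w), i, dtype=np.uint8) for i in range(8)]

# 3x3 layout with the center cell left blank (the ego vehicle's position)
layout = [[0,    1,    2],
          [7, None,    3],
          [6,    5,    4]]

rows = []
for row in layout:
    tiles = [np.zeros((h, w), np.uint8) if c is None else cams[c] for c in row]
    rows.append(np.hstack(tiles))
grid = np.vstack(rows)
print(grid.shape)  # (192, 192)
```

A composite like this gives a downstream detector one spatially coherent input instead of eight unrelated frames, which is what lets a YOLO-style head regress positions in a shared bird's-eye coordinate frame.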