
    Wide-Range Feature Point Tracking with Corresponding Point Search and Accurate Feature Point Tracking with Mean-Shift


    Fast and Accurate Algorithm for Eye Localization for Gaze Tracking in Low Resolution Images

    Iris centre localization in low-resolution visible images is a challenging problem in the computer vision community due to noise, shadows, occlusions, pose variations, eye blinks, etc. This paper proposes an efficient method for determining the iris centre in low-resolution images in the visible spectrum, so that even low-cost consumer-grade webcams can be used for gaze tracking without any additional hardware. A two-stage algorithm is proposed for iris centre localization that exploits the geometrical characteristics of the eye. In the first stage, a fast convolution-based approach obtains the coarse location of the iris centre (IC); the IC location is then refined in the second stage using boundary tracing and ellipse fitting. The algorithm has been evaluated on public databases such as BioID and Gi4E and is found to outperform state-of-the-art methods. Comment: 12 pages, 10 figures, IET Computer Vision, 201
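    The abstract does not give the paper's exact kernels or thresholds, but a minimal sketch of the two-stage idea, assuming OpenCV, might look as follows; the disc template, window size, and Otsu segmentation are illustrative assumptions rather than the authors' exact design.

```python
# Hypothetical sketch: coarse iris centre (IC) by convolution with a
# dark-disc template, refined by boundary tracing and ellipse fitting.
import cv2
import numpy as np

def coarse_iris_centre(gray, radius=12):
    """Stage 1: convolution-based coarse IC (assumed disc template)."""
    k = 2 * radius + 1
    kernel = np.zeros((k, k), np.float32)
    cv2.circle(kernel, (radius, radius), radius, 1.0, -1)
    kernel /= kernel.sum()
    # The iris is dark, so correlate the inverted image with the disc.
    response = cv2.filter2D(255.0 - gray.astype(np.float32), -1, kernel)
    y, x = np.unravel_index(np.argmax(response), response.shape)
    return x, y

def refine_iris_centre(gray, cx, cy, win=25):
    """Stage 2: trace the iris boundary near the coarse IC, fit an ellipse."""
    x0, y0 = max(cx - win, 0), max(cy - win, 0)
    roi = gray[y0:y0 + 2 * win, x0:x0 + 2 * win]
    _, mask = cv2.threshold(roi, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea) if contours else None
    if boundary is None or len(boundary) < 5:   # fitEllipse needs >= 5 points
        return float(cx), float(cy)             # fall back to coarse estimate
    (ex, ey), _, _ = cv2.fitEllipse(boundary)
    return x0 + ex, y0 + ey
```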

    A parallel implementation of a multisensor feature-based range-estimation method

    Many vision-based methods have been proposed for obstacle detection and avoidance in autonomous or semi-autonomous vehicles. All of them, however, require very high processing rates to achieve real-time performance. A system capable of supporting autonomous helicopter navigation needs to extract obstacle information from imagery at rates varying from ten to thirty or more frames per second, depending on the vehicle speed, and must therefore sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted at helicopter flight and realized on both a distributed-memory and a shared-memory parallel computer.
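    As a rough illustration of how such a method parallelizes, the sketch below splits tracked features across worker processes, each converting a feature's image motion into a range estimate. The motion model and the focal-length and speed constants are assumptions for illustration, not the paper's formulation or its distributed-/shared-memory implementation.

```python
# Data-parallel sketch of feature-based range estimation: features are
# distributed over a process pool and ranged independently.
import numpy as np
from multiprocessing import Pool

FOCAL_PX = 800.0        # assumed focal length in pixels
CAMERA_SPEED = 5.0      # assumed forward speed, m/s
FRAME_DT = 1.0 / 30.0   # thirty frames per second

def range_from_flow(feature):
    """Range of one feature from its frame-to-frame image motion.

    For a translating camera, a point's image motion scales roughly
    with 1/Z, so Z ~ f * (speed * dt) / flow (a simplified model).
    """
    (x0, y0), (x1, y1) = feature
    flow = np.hypot(x1 - x0, y1 - y0)
    if flow < 1e-3:
        return np.inf                  # distant or ambiguous point
    return FOCAL_PX * CAMERA_SPEED * FRAME_DT / flow

def estimate_ranges(features, workers=8):
    """Map features over a pool; chunking amortizes inter-process overhead."""
    with Pool(workers) as pool:
        return pool.map(range_from_flow, features, chunksize=64)
```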

    Cortical Learning of Recognition Categories: A Resolution of the Exemplar Vs. Prototype Debate

    Do humans and animals learn exemplars or prototypes when they categorize objects and events in the world? How are different degrees of abstraction realized through learning by neurons in inferotemporal and prefrontal cortex? How do top-down expectations influence the course of learning? Thirty related human cognitive experiments (the 5-4 category structure) have been used to test competing views in the prototype-exemplar debate. In these experiments, during the test phase, subjects unlearn in a characteristic way items that they had learned to categorize perfectly in the training phase. Many cognitive models do not describe how an individual learns or forgets such categories through time. Adaptive Resonance Theory (ART) neural models provide such a description, and also clarify both psychological and neurobiological data. Matching of bottom-up signals with learned top-down expectations plays a key role in ART model learning. Here, an ART model is used to learn incrementally in response to 5-4 category structure stimuli. Simulation results agree with experimental data, achieving perfect categorization in training and a good match to the pattern of errors exhibited by human subjects in the testing phase. These results show how the model learns both prototypes and certain exemplars in the training phase. ART prototypes are, however, unlike the ones posited in the traditional prototype-exemplar debate. Rather, they are critical patterns of features to which a subject learns to pay attention based on past predictive success and the order in which exemplars are experienced. Perturbations of old memories by newly arriving test items generate a performance curve that closely matches the performance pattern of human subjects. The model also clarifies exemplar-based accounts of data concerning amnesia. Defense Advanced Research Projects Agency SyNaPSE program (Hewlett-Packard Company, DARPA HR0011-09-3-0001; HRL Laboratories LLC #801881-BS under HR0011-09-C-0011); Science of Learning Centers program of the National Science Foundation (NSF SBE-0354378)
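    To make the matching mechanism concrete, here is a compact fuzzy ART sketch, a simplified relative of the model used in the paper rather than its implementation: bottom-up input is compared against learned top-down weight templates, and a category resonates only if the match exceeds the vigilance parameter rho; otherwise search continues and a new category may be committed.

```python
# Minimal fuzzy ART sketch: complement coding, choice-function search,
# vigilance test, and fast learning on the resonating category.
import numpy as np

class FuzzyART:
    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.w = []                     # one weight template per category

    def _code(self, x):
        x = np.asarray(x, float)        # features scaled to [0, 1]
        return np.concatenate([x, 1.0 - x])   # complement coding

    def train(self, x):
        i = self._code(x)
        # Choice function ranks committed categories bottom-up.
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(i, self.w[j]).sum()
                                      / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:       # resonance: top-down template matches
                self.w[j] = (self.beta * np.minimum(i, self.w[j])
                             + (1.0 - self.beta) * self.w[j])
                return j
        self.w.append(i.copy())         # no resonance: commit a new category
        return len(self.w) - 1
```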

    The PRIMA fringe sensor unit

    The Fringe Sensor Unit (FSU) is the central element of the Phase Referenced Imaging and Micro-arcsecond Astrometry (PRIMA) dual-feed facility and provides fringe sensing for all observation modes, comprising off-axis fringe tracking, phase-referenced imaging, and high-accuracy narrow-angle astrometry. It is installed at the Very Large Telescope Interferometer (VLTI) and successfully servoed the fringe tracking loop during the initial commissioning phase. Unique among interferometric beam combiners, the FSU uses spatial phase modulation in bulk optics to retrieve real-time estimates of fringe phase after spatial filtering. An R=20 spectrometer across the K-band makes retrieval of the group-delay signal possible. The FSU was integrated and aligned at the VLTI in summer 2008. It yields phase and group-delay measurements at sampling rates of up to 2 kHz, which are used to drive the fringe tracking control loop. During the first commissioning runs, the FSU was used to track the fringes of stars with K-band magnitudes as faint as m_K=9.0, using two VLTI Auxiliary Telescopes (AT) and baselines of up to 96 m. Fringe tracking using two Very Large Telescope (VLT) Unit Telescopes (UT) was also demonstrated. During initial commissioning, combining stellar light with two ATs, the FSU showed its ability to improve the VLTI sensitivity in the K-band by more than one magnitude towards fainter objects, which is of fundamental importance for achieving the scientific objectives of PRIMA. Comment: 19 pages, 23 figures. Minor changes and language editing; this version equals the published article.
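    Spatial phase modulation with quadrature outputs is commonly read out with an ABCD-style estimator; the sketch below shows that idea plus group-delay retrieval from the phase slope across spectral channels. The sign convention and per-channel layout are assumptions for illustration, not the FSU's documented pipeline.

```python
# ABCD-style fringe phase and group-delay estimation (illustrative).
import numpy as np

def phase_from_abcd(a, b, c, d):
    """Fringe phase from four outputs sampled 90 degrees apart.
    The (d - b, a - c) sign convention depends on the combiner."""
    return np.arctan2(d - b, a - c)

def group_delay(phases, wavenumbers):
    """Group delay as the slope of unwrapped phase vs. wavenumber:
    phi = 2*pi*sigma*OPD, so the group OPD is (dphi/dsigma) / (2*pi)."""
    phi = np.unwrap(np.asarray(phases))
    slope = np.polyfit(np.asarray(wavenumbers), phi, 1)[0]
    return slope / (2.0 * np.pi)
```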