12 research outputs found

    Gaze shift reflex in a humanoid active vision system

    Get PDF
    Full awareness of the sensory surroundings requires active attentional and behavioural exploration. In visual animals, visual, auditory and tactile stimuli elicit gaze shifts (head and eye movements) aimed at optimising visual perception of the stimuli. Such gaze shifts can either be driven by top-down attention (e.g. visual search) or be reflex movements triggered by unexpected changes in the surroundings. Here we present a model active vision system with a focus on multi-sensory integration and the generation of desired gaze shift commands. Our model is based on recent data from studies of the primate superior colliculus and is developed as part of the sensory-motor control of the humanoid robot CB.
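
    The fusion-and-readout idea in this abstract (per-modality saliency maps combined into one motor map, from which a single gaze target is read out) can be sketched compactly. The snippet below is a minimal illustration assuming a shared head-centred grid, hypothetical function and map names, and arbitrary modality weights; it is not the paper's actual superior colliculus model.

```python
import numpy as np

def gaze_shift_target(visual_map, auditory_map, tactile_map,
                      weights=(1.0, 0.7, 0.5)):
    """Fuse per-modality saliency maps (same head-centred grid) and
    return the grid coordinates of the most salient location."""
    combined = (weights[0] * visual_map
                + weights[1] * auditory_map
                + weights[2] * tactile_map)
    # Winner-take-all readout: the peak of the fused map is taken as
    # the desired gaze target, loosely analogous to the activity peak
    # on the superior colliculus motor map.
    return np.unravel_index(np.argmax(combined), combined.shape)

# Example: a strong sound off to the side outcompetes a weak visual cue,
# producing a reflexive gaze shift toward the auditory stimulus.
vis = np.zeros((64, 64)); vis[32, 20] = 0.3
aud = np.zeros((64, 64)); aud[30, 55] = 1.0
tac = np.zeros((64, 64))
row, col = gaze_shift_target(vis, aud, tac)
print(row, col)  # 30 55
```

    A winner-take-all readout is the simplest possible selection rule here; the point of the sketch is only that a single motor command can be read out of a fused multi-sensory map.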

    Feature-specific interactions in salience from combined feature contrasts: Evidence for a bottom-up saliency map in V1

    No full text
    Items that stand out from their surroundings, that is, those that attract attention, are considered to be salient. Salience is generated by input features in many stimulus dimensions, such as motion (M), color (C), orientation (O), and others. We focus on bottom-up salience generated by contrast between the feature properties of an item and its surroundings. We compare the singleton search reaction times (RTs) of items that differ from their surroundings in more than one feature (e.g., C + O, denoted CO) against the RTs of items that differ from their surroundings in only a single feature (e.g., O or C). The measured RTs for the double-feature singletons are compared against “race model” predictions to evaluate whether salience in the double-feature conditions is greater than the salience of either of its feature components. Affirmative answers were found for MO and CO but not for CM. These results are consistent with some V1 neurons being conjunctively selective to MO, others to CO, but almost none to CM. They provide support for the V1 hypothesis of bottom-up salience (Z. Li, 2002) but are contrary to the expectation from the “feature summation” hypothesis, in which different stimulus features are initially analyzed independently and subsequently summed to form a single salience map (L. Itti & C. Koch, 2001; C. Koch & S. Ullman, 1985).
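
    The “race model” comparison mentioned above has a common formulation, Miller's race model inequality: the cumulative RT distribution for the double-feature singleton is compared against the bound implied by a race between two independent single-feature detections. Below is a minimal sketch on synthetic reaction times; the study's exact test statistics and data are not reproduced, and all numbers are made up.

```python
import numpy as np

def ecdf(samples, t):
    """Empirical CDF of reaction-time samples, evaluated at times t."""
    samples = np.sort(samples)
    return np.searchsorted(samples, t, side="right") / len(samples)

def race_model_violation(rt_double, rt_a, rt_b, t_grid):
    """Difference between the double-feature RT CDF and the race-model
    bound min(1, F_a + F_b) (Miller's inequality). Positive values mean
    RTs faster than an independent race of the two features predicts."""
    bound = np.minimum(1.0, ecdf(rt_a, t_grid) + ecdf(rt_b, t_grid))
    return ecdf(rt_double, t_grid) - bound

# Made-up reaction times (ms) for illustration only.
rng = np.random.default_rng(0)
rt_o = rng.normal(480, 60, 500)    # orientation-only singleton
rt_c = rng.normal(470, 60, 500)    # colour-only singleton
rt_co = rng.normal(400, 50, 500)   # colour + orientation singleton
t = np.linspace(250, 450, 50)
print(race_model_violation(rt_co, rt_o, rt_c, t).max())
```

    A positive maximum of the returned difference indicates RTs faster than any independent race allows, consistent with the double-feature singleton being more salient than either of its components.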

    Cause of kinematic differences during centrifugal and centripetal saccades

    Get PDF
    Measurements of eye movements have shown that centrifugal movements (i.e. away from the primary position) have a lower maximum velocity and a longer duration than centripetal movements (i.e. toward the primary position) of the same size. In 1988, Pelisson proposed that these kinematic differences might be caused by differences in the neural command signals, by oculomotor mechanics, or by a combination of the two. Using the results of muscle force measurements made in recent years (Orbit™ 1.8 Gaze mechanics simulation, Eidactics, San Francisco, 1999), we simulated the muscle forces during centrifugal and centripetal saccades. Based on these simulations, we show that the cause of the kinematic differences between centrifugal and centripetal saccades is the non-linear force–velocity relationship (i.e. muscle viscosity) of the muscles.
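
    The style of simulation described here can be illustrated with a toy one-degree-of-freedom eye plant in which viscosity grows with speed. The sketch below uses arbitrary units and parameters, and it includes both passive elasticity and a speed-dependent viscosity, so it shows the direction of the centrifugal/centripetal asymmetry and the general modelling approach rather than the Orbit 1.8 muscle model used in the study.

```python
import numpy as np

def simulate_saccade(theta0, theta_target, pulse=60.0, pulse_dur=0.04,
                     k=1.0, b0=0.15, c=0.01, dt=1e-4, t_end=0.12):
    """Toy first-order eye plant (arbitrary units). Viscosity
    b(v) = b0 + c*|v| grows with speed, a crude stand-in for the
    non-linear force-velocity relation; k*theta is passive elasticity
    pulling the eye toward the primary position (0 deg). The drive is a
    pulse followed by a step that holds the target. Returns peak speed."""
    theta = theta0
    direction = np.sign(theta_target - theta0)
    peak = 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        drive = direction * pulse if t < pulse_dur else k * theta_target
        f = drive - k * theta              # net force on the plant
        # Solve (b0 + c*|v|) * v = f for v (quadratic in |v|):
        v = np.sign(f) * (np.sqrt(b0**2 + 4 * c * abs(f)) - b0) / (2 * c)
        theta += v * dt
        peak = max(peak, abs(v))
    return peak

print("centrifugal 0 -> 20 deg, peak speed:", simulate_saccade(0.0, 20.0))
print("centripetal 20 -> 0 deg, peak speed:", simulate_saccade(20.0, 0.0))
```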

    Relative information content of gestural features of non-verbal communication related to object-transfer interactions

    Get PDF
    In order to implement reliable, safe and smooth human-robot object handover, it will be necessary for service robots to identify non-verbal communication gestures in real time. This study presents an analysis of the relative information content of the gestural features that together constitute a communication gesture. Based on this information-theoretic analysis, we propose that the computational complexity of gesture classification for object handover can be greatly reduced by applying attention filters focused on static hand shape and orientation.
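
    Ranking gestural features by how much information they carry about the intended gesture is the kind of analysis the abstract describes; high-ranking features (e.g. static hand shape) would then drive the proposed attention filters. Below is a minimal sketch on synthetic, integer-coded data; the feature names and data are hypothetical, not the study's corpus.

```python
import numpy as np

def mutual_information(x, y):
    """I(X;Y) in bits for two discrete, integer-coded sequences."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Made-up observations: one informative feature, one uninformative.
rng = np.random.default_rng(1)
gesture = rng.integers(0, 3, 1000)                       # class label
hand_shape = (gesture + rng.integers(0, 2, 1000)) % 3    # tracks the class
gaze_dir = rng.integers(0, 3, 1000)                      # independent noise
for name, feat in [("hand_shape", hand_shape), ("gaze_dir", gaze_dir)]:
    print(name, round(mutual_information(feat, gesture), 3))
```

    On data like this, the informative feature scores well above zero bits while the independent one scores near zero, which is the pattern that would justify focusing a classifier's attention on hand shape and orientation.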