
    On Real-Time Synthetic Primate Vision

    The primate vision system exhibits numerous capabilities. Some important basic visual competencies include: 1) a consistent representation of visual space across eye movements; 2) egocentric spatial perception; 3) coordinated stereo fixation upon and pursuit of dynamic objects; and 4) attentional gaze deployment. We present a synthetic vision system that incorporates these competencies. We hypothesize that similarities between the underlying synthetic system model and that of the primate vision system elicit correspondingly similar gaze behaviors. Psychophysical trials were conducted to record human gaze behavior during free viewing of a reproducible, dynamic, 3D scene. Identical trials were conducted with the synthetic system. A statistical comparison of synthetic and human gaze behavior shows that the two are remarkably similar.

    Minimalistic vision-based cognitive SLAM

    Interest in cognitive robotics keeps increasing, a major goal being to create a system which can adapt to dynamic environments and learn from its own experiences. We present a new cognitive SLAM architecture, but one which is minimalistic in terms of sensors and memory. It employs only one camera with pan and tilt control and three memories, with no additional sensors and no odometry. Short-term memory is an egocentric map which holds close-range information at the robot's current position. Long-term memory is used for mapping the environment and registering encountered objects. Object memory holds features of learned objects, which are used as navigation landmarks and task targets. Saliency maps are used to sequentially focus important areas for object and obstacle detection, but also for selecting directions of movement. Reinforcement learning is used to consolidate or weaken environmental information in long-term memory. The system is able to achieve complex tasks by executing sequences of visuomotor actions, with decisions taken by goal-detection and goal-completion processes. Experimental results show that the system is capable of executing tasks like localizing specific objects while building a map, after which it manages to return to the start position even when new obstacles have appeared.
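    The abstract does not detail how reinforcement learning consolidates or weakens long-term memory, but the general idea can be sketched with a simple confidence-update rule. Everything below (function name, update rule, constants) is an illustrative assumption, not the paper's implementation:

    ```python
    # Hypothetical sketch: reinforcement-style consolidation of long-term
    # memory entries. Confirmed observations strengthen an entry's weight;
    # expected-but-missing ones weaken it, and weights that decay below a
    # threshold are pruned (forgotten). Constants are arbitrary choices.

    ALPHA = 0.3        # learning rate for consolidation / weakening
    PRUNE_BELOW = 0.1  # entries weaker than this are forgotten

    def update_memory(long_term, observations):
        """long_term: dict mapping landmark id -> confidence weight in [0, 1].
        observations: dict mapping landmark id -> 1 (seen) or 0 (expected
        at this position but not seen). Returns the updated memory."""
        for key, seen in observations.items():
            old = long_term.get(key, 0.5)
            # Move confidence toward 1 when confirmed, toward 0 otherwise.
            long_term[key] = old + ALPHA * (seen - old)
        # Forget entries whose confidence has decayed too far.
        return {k: w for k, w in long_term.items() if w >= PRUNE_BELOW}

    mem = {"door": 0.5, "chair": 0.12}
    mem = update_memory(mem, {"door": 1, "chair": 0})
    ```

    With these numbers the confirmed "door" is strengthened to 0.65, while the unseen "chair" decays below the pruning threshold and is dropped.
    
    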

    Active vision in robot cognition

    Doctoral thesis, Informatics Engineering, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2016. As technology and our understanding of the human brain evolve, the idea of creating robots that behave and learn like humans attracts more and more attention. However, although our knowledge and computational power are constantly growing, we still have much to learn before we can create such machines. Nonetheless, we can try to validate our knowledge by creating biologically inspired models that mimic some of our brain processes and using them in robotics applications. In this thesis several biologically inspired models for vision are presented: a keypoint descriptor based on cortical cell responses that allows the creation of binary codes representing specific image regions; a stereo vision model based on cortical cell responses; and visual saliency based on color, disparity and motion. Active vision is achieved by combining these vision modules with an attractor dynamics approach for head pan control. Although biologically inspired models are usually very demanding in terms of processing power, these models were designed to be lightweight so that they can be tested for real-time robot navigation, object recognition and vision steering. The developed vision modules were tested on a child-sized robot which uses only visual information to navigate, to detect obstacles and to recognize objects in real time. The biologically inspired visual system is integrated with a cognitive architecture, which combines vision with short- and long-term memory for simultaneous localization and mapping (SLAM). Motor control for navigation is also done using attractor dynamics.

    Real-time synthetic primate vision


    Perceptual modelling for 2D and 3D

    Deliverable D1.1 of the ANR PERSEE project. This report was produced within the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D1.1 of the project.

    Perceptual abstraction and attention

    This is a report on the preliminary achievements of WP4 of the IM-CleVeR project on abstraction for cumulative learning, directed in particular at: (1) producing algorithms that develop abstraction features under top-down action influence; (2) algorithms supporting the detection of change in motion pictures; (3) developing attention and vergence control on the basis of locally computed rewards; (4) searching for abstract representations suitable for the LCAS framework; and (5) developing predictors based on information theory to support novelty detection. The report is organized around these five tasks of WP4. For each task we provide a concise description of the work done by the partners.
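    Task (5), information-theoretic novelty detection, can be illustrated with a toy predictor in which the "surprise" of an observation is its self-information, -log2 p, under a running frequency model; highly surprising inputs are flagged as novel. The class name, Laplace smoothing and threshold below are assumptions for illustration only, not the project's algorithms:

    ```python
    import math
    from collections import Counter

    class NoveltyDetector:
        """Toy sketch: an observation's surprise is -log2 of its
        (Laplace-smoothed) probability under counts seen so far.
        Surprise above a fixed threshold flags the input as novel."""

        def __init__(self, threshold=3.0):
            self.counts = Counter()
            self.total = 0
            self.threshold = threshold

        def observe(self, symbol):
            # Smoothed probability of the symbol before updating the model.
            p = (self.counts[symbol] + 1) / (self.total + len(self.counts) + 1)
            surprise = -math.log2(p)
            self.counts[symbol] += 1
            self.total += 1
            return surprise > self.threshold

    detector = NoveltyDetector()
    flags = [detector.observe("a") for _ in range(20)]
    novel = detector.observe("b")  # an unseen symbol after 20 "a"s
    ```

    After twenty identical observations, a repeated "a" carries almost no surprise, while a first "b" has probability 1/22 and surprise log2(22) ≈ 4.46 bits, above the threshold.
    
    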

    The SmartVision local navigation aid for blind and visually impaired persons

    The SmartVision prototype is a small, cheap and easily wearable navigation aid for blind and visually impaired persons. Its functionality addresses global navigation, for guiding the user to some destination, and local navigation, for negotiating paths, sidewalks and corridors, with avoidance of both static and moving obstacles. Local navigation applies to indoor as well as outdoor situations. In this article we focus on local navigation: the detection of path borders and of obstacles in front of the user and just beyond the reach of the white cane, such that the user can be assisted in centering on the path and alerted to looming hazards. Using a stereo camera worn at chest height, a portable computer in a shoulder-strapped pouch or pocket, and only one earphone or a small speaker, the system is inconspicuous, is no hindrance while walking with the cane, and does not block normal surround sounds. The vision algorithms are optimised such that the system can work at a few frames per second.
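    The abstract does not say how the centering assistance is computed, but one plausible reduction is from detected border positions to a left/right correction cue. The function name, dead zone and cue encoding below are assumptions, not the SmartVision implementation:

    ```python
    def centering_cue(left_x, right_x, frame_width, dead_zone=0.1):
        """Map detected path borders to a correction cue for the user.
        left_x, right_x: image-column positions of the left and right
        path borders; frame_width: image width in pixels. Returns
        'left', 'right' or 'centered'. The dead zone suppresses
        constant small corrections while walking."""
        path_centre = (left_x + right_x) / 2.0
        # Normalized offset of the path centre from the image centre.
        offset = (path_centre - frame_width / 2.0) / frame_width
        if offset > dead_zone:
            return "right"   # path centre is to the user's right: step right
        if offset < -dead_zone:
            return "left"
        return "centered"
    ```

    A cue like this could then be rendered as a spoken word or a lateralized beep on the single earphone the abstract mentions.
    
    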

    A vision system for detecting paths and moving obstacles for the blind

    In this paper we present a monocular vision system for a navigation aid. The system assists blind persons in following paths and sidewalks, and it alerts the user to moving obstacles which may be on a collision course. Path borders and the vanishing point are detected by edges and an adapted Hough transform. Optical flow is detected by using a hierarchical, multi-scale tree structure with annotated keypoints. The tree structure also allows moving objects to be segregated, indicating where on the path the objects are. Moreover, the centre of an object relative to the vanishing point indicates whether or not the object is approaching.
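    In the normal parameterization commonly used by Hough transforms, a line satisfies x·cosθ + y·sinθ = ρ, so once the two path borders are detected, the vanishing point can be estimated as their intersection. A minimal sketch of that last step (not the paper's code; the adapted Hough transform itself is not reproduced here):

    ```python
    import math

    def line_intersection(rho1, theta1, rho2, theta2):
        """Intersect two lines given in Hough normal form
        x*cos(theta) + y*sin(theta) = rho. Returns (x, y), or None
        for (near-)parallel lines. With the two detected path borders
        as input, the intersection estimates the vanishing point."""
        a1, b1 = math.cos(theta1), math.sin(theta1)
        a2, b2 = math.cos(theta2), math.sin(theta2)
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-9:
            return None  # parallel borders: no finite vanishing point
        x = (rho1 * b2 - rho2 * b1) / det
        y = (a1 * rho2 - a2 * rho1) / det
        return x, y
    ```

    For example, the vertical line x = 2 (θ = 0, ρ = 2) and the horizontal line y = 3 (θ = π/2, ρ = 3) intersect at (2, 3).
    
    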