
    Sensory processing and world modeling for an active ranging device

    In this project, we studied world modeling and sensory processing for laser range data. World model data representations and operations were defined. Sensory processing algorithms for point processing and linear feature detection were designed and implemented. The interface between world modeling and sensory processing at the Servo and Primitive levels was investigated and implemented. At the Primitive level, linear feature detectors for edges were also implemented, analyzed and compared. The existing world model representations are surveyed. Also presented is the design and implementation of the Y-frame model, a hierarchical world model. The interfaces between the world model module and the sensory processing module are discussed, as well as the linear feature detectors that were designed and implemented.
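
    The linear feature detection step described above can be illustrated with a classical split-and-merge line extractor over 2-D laser range points; this is a generic sketch of the technique, not the project's actual detectors, and the distance tolerance is an assumption.

```python
import math

def fit_line(points):
    """Total-least-squares line fit in (theta, rho) normal form.

    Returns (theta, rho, worst) where x*cos(theta) + y*sin(theta) = rho
    and worst is the largest orthogonal residual over the points.
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    theta = 0.5 * math.atan2(-2.0 * sxy, syy - sxx)
    rho = mx * math.cos(theta) + my * math.sin(theta)
    worst = max(abs(x * math.cos(theta) + y * math.sin(theta) - rho)
                for x, y in points)
    return theta, rho, worst

def split_and_merge(points, tol=0.05):
    """Recursively split an ordered scan into line segments until every
    point lies within tol (metres) of its segment's fitted line."""
    theta, rho, worst = fit_line(points)
    if worst <= tol or len(points) <= 2:
        return [(theta, rho, points)]
    # Split at the point farthest from the fitted line (typically a corner).
    idx = max(range(len(points)),
              key=lambda i: abs(points[i][0] * math.cos(theta)
                                + points[i][1] * math.sin(theta) - rho))
    idx = max(1, min(idx, len(points) - 2))  # keep both halves >= 2 points
    return split_and_merge(points[:idx + 1], tol) + split_and_merge(points[idx:], tol)
```

    For instance, an L-shaped scan (points along a wall, then along a perpendicular wall) splits at the corner into two line segments.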

    State of the art review on walking support system for visually impaired people

    The technology for terrain detection and walking support systems for blind people has improved rapidly over the last couple of decades, although efforts to assist visually impaired people started long ago. Currently, a variety of portable or wearable navigation systems are available on the market to help blind people navigate in local or remote areas. The systems considered in this work can be subgrouped as electronic travel aids (ETAs), electronic orientation aids (EOAs) and position locator devices (PLDs); however, we focus mainly on electronic travel aids (ETAs). This paper presents a comparative survey of the various portable or wearable walking support systems (a subcategory of ETAs, or early stages of ETAs), with an informative description of each system's working principle, advantages and disadvantages, so that researchers can easily assess the current state of assistive technology for the blind along with the requirements for optimising the design of walking support systems for their users.

    An annotated bibliography of multisensor integration

    Technical report. In this paper we give an annotated bibliography of the multisensor integration literature.

    GUARDIANS final report

    Emergencies in industrial warehouses are a major concern for firefighters. The large dimensions, together with the development of dense smoke that drastically reduces visibility, represent major challenges. The GUARDIANS robot swarm is designed to assist firefighters in searching a large warehouse. In this report we discuss the technology developed for a swarm of robots searching for and assisting firefighters. We explain the swarming algorithms which provide the functionality by which the robots react to and follow humans while no communication is required. Next we discuss the wireless communication system, a so-called mobile ad-hoc network. The communication network also provides one of the means to locate the robots and humans; thus the robot swarm is able to locate itself and provide guidance information to the humans. Together with the firefighters we explored how the robot swarm should feed information back to the human firefighter. We have designed and experimented with interfaces for presenting swarm-based information to human beings.

    Robot Mapping and Navigation by Fusing Sensory Information


    Reliable non-prehensile door opening through the combination of vision, tactile and force feedback

    Whereas vision and force feedback (either at the wrist or at the joint level) for robotic manipulation purposes have received considerable attention in the literature, the benefits that tactile sensors can provide when combined with vision and force have rarely been explored. In fact, there are some situations in which vision and force feedback cannot guarantee robust manipulation. Vision is frequently subject to calibration errors, occlusions and outliers, whereas force feedback can only provide useful information along those directions that are constrained by the environment. In tasks where the visual feedback contains errors and the contact configuration does not constrain all the Cartesian degrees of freedom, vision and force sensors are not sufficient to guarantee a successful execution. Many of the tasks performed in our daily life that do not require a firm grasp belong to this category. Therefore, it is important to develop strategies for robustly dealing with these situations. In this article, a new framework for combining tactile information with vision and force feedback is proposed and validated with the task of opening a sliding door. Results show how the vision-tactile-force approach outperforms vision-force and force-alone, in the sense that it allows correcting the vision errors while a suitable contact configuration is guaranteed. This research was partly supported by the Korea Science and Engineering Foundation under the WCU (World Class University) program funded by the Ministry of Education, Science and Technology, S. Korea (Grant No. R31-2008-000-10062-0), by the European Commission's Seventh Framework Programme FP7/2007-2013 under grant agreements 217077 (EYESHOTS project) and 248497 (TRIDENT Project), by Ministerio de Ciencia e Innovación (DPI-2008-06636 and DPI2008-06548-C03-01), by Fundació Caixa Castelló-Bancaixa (P1-1B2008-51 and P1-1B2009-50) and by Universitat Jaume I.
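
    The observation that force feedback is only informative along environment-constrained directions motivates the classical hybrid split between control modalities. As a generic illustration of that split (not the article's tactile framework), a per-axis selection between a visual-servoing velocity command and a force-error term might look like this; the gain `kf` and the argument layout are assumptions:

```python
def hybrid_command(v_vision, f_measured, f_desired, constrained, kf=0.001):
    """One step of hybrid vision/force control with a per-axis selection mask.

    v_vision:    velocity command from visual servoing, one entry per axis.
    constrained: True on axes where the environment constrains motion; there
                 the command tracks the desired contact force instead of vision.
    Returns the blended per-axis velocity command.
    """
    return [kf * (fd - fm) if c else v
            for v, fm, fd, c in zip(v_vision, f_measured, f_desired, constrained)]
```

    On a free axis the vision command passes through unchanged; on a constrained axis the command is proportional to the force error, pushing until the desired contact force is reached.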

    Navigation system for a mobile robot incorporating trinocular vision for range imaging

    This research focuses on the development of software for the navigation of a mobile robot. The software developed to control the robot uses sensory data obtained from ultrasound, infrared and tactile sensors, along with depth maps from trinocular vision. Robot navigation programs were written and tested in a simulated environment as well as the real world. Data from the various sensors were read and successfully utilized in the control of the robot's motion. Software was developed to obtain the range and bearing of the closest obstacle in sight using the trinocular vision system. An operator-supervised navigation system was also developed that enabled navigation of the robot based on inference from the camera images.
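
    The step of extracting the range and bearing of the closest obstacle from a depth map can be sketched as follows; the pinhole-camera parameters (`fx`, `cx`) and the row-major depth-map layout are assumptions, not details from this work:

```python
import math

def closest_obstacle(depth_map, fx, cx):
    """Return (range, bearing) of the nearest valid pixel in a depth map.

    depth_map: list of rows of depths in metres (0 or None marks invalid pixels).
    fx: horizontal focal length in pixels; cx: principal point column.
    Bearing is the horizontal angle from the optical axis, positive to the right.
    """
    best = None  # (depth, column) of nearest valid pixel seen so far
    for row in depth_map:
        for col, z in enumerate(row):
            if z and (best is None or z < best[0]):
                best = (z, col)
    if best is None:
        return None  # no valid depth anywhere
    z, col = best
    bearing = math.atan2(col - cx, fx)  # angle of the pixel's viewing ray
    rng = z / math.cos(bearing)         # slant range along that ray
    return rng, bearing
```

    A pixel on the optical axis (col == cx) yields bearing 0 and range equal to its depth; pixels off-axis get a proportionally longer slant range.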

    Positional estimation techniques for an autonomous mobile robot

    Techniques for positional estimation of a mobile robot navigating in an indoor environment are described. A comprehensive review of the various positional estimation techniques studied in the literature is first presented. The techniques are divided into four different types, and each of them is discussed briefly. Two different kinds of environments are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and a 3-D description (world model) of the environment is given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.
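
    As a minimal illustration of camera-plus-world-model position fixing (a generic sketch, not one of the paper's four technique types specifically), the robot's position can be triangulated from absolute bearings to two landmarks whose world coordinates are known from the 3-D model; the two-landmark setup and the bearing convention are assumptions:

```python
import math

def fix_position(landmarks, bearings):
    """Estimate the robot's (x, y) from absolute bearings to two landmarks.

    landmarks: [(x1, y1), (x2, y2)] world coordinates from the 3-D model.
    bearings:  absolute bearing (radians, from the world +x axis) of each
               landmark as seen from the robot; the robot's heading is assumed
               known so camera-relative bearings were converted to absolute.
    Returns (x, y), or None if the two bearing rays are (nearly) parallel.
    """
    (x1, y1), (x2, y2) = landmarks
    b1, b2 = bearings
    det = math.sin(b1 - b2)  # determinant of the 2x2 ray-intersection system
    if abs(det) < 1e-9:
        return None          # parallel rays give no unique fix
    # Distance from the robot to landmark 1 along the first bearing ray.
    t1 = (-(x1 - x2) * math.sin(b2) + (y1 - y2) * math.cos(b2)) / det
    return x1 - t1 * math.cos(b1), y1 - t1 * math.sin(b1)
```

    With a third landmark the same idea becomes an overdetermined system, which is typically solved in a least-squares sense to reduce the effect of bearing noise.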