
    Cognitive robotics: a new approach to simultaneous localisation and mapping

    Most simultaneous localisation and mapping (SLAM) solutions were developed for the navigation of non-cognitive robots. Using a variety of sensors, the distances to walls and other objects are determined, then used to generate a map of the environment and to update the robot's position. Such a solution is not appropriate for a cognitive robot: it requires accurate sensors and precise odometry, and it lacks fundamental features of cognition such as time and memory. In this paper we present a SLAM solution in which these features are taken into account and integrated. Moreover, the method requires neither precise odometry nor accurate ranging sensors.
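
    The abstract does not detail the paper's algorithm. As a loose illustration of the core idea that map evidence can carry time and memory, here is a minimal sketch of an occupancy-evidence grid whose cells decay toward "unknown" unless refreshed by possibly coarse range observations; the grid size, decay rate, and observation weight are all invented for the example.

```python
import numpy as np

# Illustrative constants, not taken from the paper.
GRID = (50, 50)
DECAY = 0.99       # memory: older evidence fades a little each time step
OBS_WEIGHT = 0.3   # coarse sensing: each reading only nudges the belief

belief = np.zeros(GRID)  # occupancy evidence; 0 means "unknown"

def integrate_observation(belief, cells, occupied):
    """Blend a possibly noisy observation into the evidence grid."""
    target = 1.0 if occupied else -1.0
    for i, j in cells:
        belief[i, j] += OBS_WEIGHT * (target - belief[i, j])
    return belief

def step(belief, observed_cells, occupied):
    belief *= DECAY  # time: unrefreshed evidence decays toward "unknown"
    return integrate_observation(belief, observed_cells, occupied)

# Repeatedly observing a wall segment strengthens it, while unvisited
# regions slowly fade, so the map reflects both time and memory.
for _ in range(20):
    belief = step(belief, [(10, j) for j in range(10, 20)], occupied=True)
print(belief[10, 10:20].round(2))
```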

    Egospace Motion Planning Representations for Micro Air Vehicles

    Navigation of micro air vehicles (MAVs) in unknown environments is a complex sensing and trajectory generation task, particularly at high velocities. In this work, we introduce an efficient sense-and-avoid pipeline that compactly represents range measurements from multiple sensors, trajectory generation, and motion planning in a 2.5-dimensional projective data structure called an egospace representation. Egospace coordinates generalize depth image obstacle representations and are a particularly convenient choice for configuration flat mobile robots, which are differentially flat in their configuration variables and include a number of commonly used MAV plant models. After characterizing egospace obstacle avoidance for robots with trivial dynamics and establishing limits on applicability and performance, we generalize to motion planning over full configuration flat dynamics using motion primitives expressed directly in egospace coordinates. In comparison to approaches based on world coordinates, egospace uses the natural sensor geometry to combine the benefits of a multi-resolution and multi-sensor representation architecture into a single simple and efficient layer. We also present an experimental implementation, based on perception with stereo vision and an egocylinder obstacle representation, that demonstrates the specialization of our theoretical results to particular mission scenarios. The natural pixel parameterization of the egocylinder is used to quickly identify dynamically feasible maneuvers onto radial paths, expressed directly in egocylinder coordinates, that enable finely detailed planning at extreme ranges within milliseconds. We have implemented our obstacle avoidance pipeline on an Asctec Pelican quadcopter, and demonstrate the efficiency of our approach experimentally with a set of challenging field scenarios. The scalability potential of our system is discussed in terms of sensor horizon, actuation, and computational limitations and the speed limits that each imposes, and its generality to more challenging environments with multiple moving obstacles is developed as an immediate extension to the static framework.
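
    As a hedged sketch of what an egocylinder-style representation involves, the snippet below bins a body-frame 3D point into cylindrical pixel coordinates with an inverse-depth value, the way a depth-image-style obstacle representation might. The resolution, field of view, and axis conventions are assumptions for illustration, not the authors' parameters.

```python
import math

# Hypothetical egocylinder: 360 azimuth columns, 120 elevation rows
# spanning +/- 30 degrees; stored values are inverse radial depth.
N_AZ, N_EL = 360, 120
EL_MIN, EL_MAX = math.radians(-30), math.radians(30)

def to_egocylinder(x, y, z):
    """Project a body-frame point (x forward, y left, z up) to (u, v, 1/rho)."""
    azimuth = math.atan2(y, x)          # angle around the vertical axis
    rho = math.hypot(x, y)              # radial (cylindrical) distance
    elevation = math.atan2(z, rho)      # angle above the horizon
    u = int((azimuth + math.pi) / (2 * math.pi) * N_AZ) % N_AZ
    v_frac = (elevation - EL_MIN) / (EL_MAX - EL_MIN)
    if not 0.0 <= v_frac < 1.0:
        return None                     # outside the vertical field of view
    v = int(v_frac * N_EL)
    return u, v, 1.0 / rho              # inverse depth, as in depth images

# A point 5 m ahead, 1 m to the left, 0.5 m up:
print(to_egocylinder(5.0, 1.0, 0.5))
```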

    A neural network-based exploratory learning and motor planning system for co-robots

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degree-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
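
    The abstract does not give the network equations, but the motor-babbling idea can be sketched in a few lines: issue random commands, record the sensed outcomes, and answer goals by nearest-neighbour lookup over that experience. The one-dimensional plant below is a stand-in for the Calliope's kinematics, and every name and constant is invented for the example.

```python
import random

def plant(command):
    """Unknown to the robot; it only observes the outputs it produces."""
    return 2.0 * command + 0.5

experience = []                  # (observed outcome, command) pairs
for _ in range(200):             # motor babbling phase: random exploration
    cmd = random.uniform(-1.0, 1.0)
    experience.append((plant(cmd), cmd))

def inverse_model(desired_outcome):
    """Pick the babbled command whose outcome was closest to the goal."""
    return min(experience, key=lambda e: abs(e[0] - desired_outcome))[1]

goal = 1.8
cmd = inverse_model(goal)
print(f"command {cmd:.3f} -> outcome {plant(cmd):.3f} (goal {goal})")
```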

    Robot manipulation in human environments

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 211-228). By Aaron Ladd Edsinger.
    Human environments present special challenges for robot manipulation. They are often dynamic, difficult to predict, and beyond the control of a robot engineer. Fortunately, many characteristics of these settings can be used to a robot's advantage. Human environments are typically populated by people, and a robot can rely on the guidance and assistance of a human collaborator. Everyday objects exhibit common, task-relevant features that reduce the cognitive load required for the object's use. Many tasks can be achieved through the detection and control of these sparse perceptual features. And finally, a robot is more than a passive observer of the world. It can use its body to reduce its perceptual uncertainty about the world. In this thesis we present advances in robot manipulation that address the unique challenges of human environments. We describe the design of a humanoid robot named Domo, develop methods that allow Domo to assist a person in everyday tasks, and discuss general strategies for building robots that work alongside people in their homes and workplaces.
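
    As a loose illustration of "detection and control of sparse perceptual features," the sketch below drives one detected feature point toward a desired image location with a proportional law. The gain, the feature values, and the assumption that the commanded motion shifts the feature directly are all placeholders, not Domo's actual controllers.

```python
# Hypothetical proportional gain on the image-space error.
GAIN = 0.5

def servo_step(feature_xy, target_xy):
    """One visual-servoing step: image-space error -> motion command."""
    ex = target_xy[0] - feature_xy[0]
    ey = target_xy[1] - feature_xy[1]
    return GAIN * ex, GAIN * ey

feature = (120.0, 90.0)   # e.g. the detected tip of a tool, in pixels
target = (160.0, 120.0)   # where the task needs the feature to be
for _ in range(10):
    vx, vy = servo_step(feature, target)
    # Simplifying assumption: the command moves the feature directly.
    feature = (feature[0] + vx, feature[1] + vy)
print(tuple(round(c, 1) for c in feature))  # converges near the target
```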

    Migration from Teleoperation to Autonomy via Modular Sensor and Mobility Bricks

    In this thesis, the teleoperated communications of a Remotec ANDROS robot have been reverse engineered. This research has used the information acquired through the reverse engineering process to enhance the teleoperation and add intelligence to the initially non-autonomous robot. The main contribution of this thesis is the implementation of the mobility brick paradigm, which enables autonomous operations using the commercial teleoperated ANDROS platform. The brick paradigm is a generalized architecture for a modular approach to robotics. This architecture and the contribution of this thesis are a paradigm shift from the proprietary commercial models that exist today. The modular system of sensor bricks integrates the transformed mobility platform and defines it as a mobility brick. In the wall-following application implemented in this work, the mobile robotic system acquires intelligence using the range sensor brick. This application illustrates a way to alleviate the burden on the human operator and delegate certain tasks to the robot. Wall following is one among several examples of giving a degree of autonomy to an essentially teleoperated robot through the Sensor Brick System. Indeed, once the proprietary robot has been altered into a mobility brick, the possibilities for autonomy are numerous and vary with different sensor bricks. The autonomous system implemented is not a fixed-application robot but rather a non-specific, autonomy-capable platform. Meanwhile, the native controller and the computer-interfaced teleoperation remain available when necessary. Rather than trading off by switching from teleoperation to autonomy, this system provides the flexibility to switch between the two at the operator's command. The contributions of this thesis reside in the reverse engineering of the original robot, its upgrade to a computer-interfaced teleoperated system, the mobility brick paradigm, and the addition of autonomy capabilities. The application of a robot autonomously following a wall is subsequently implemented, tested, and analyzed in this work. The analysis provides the programmer with information on controlling the robot and launching the autonomous function. The results are conclusive and open up the possibilities for a variety of autonomous applications for mobility platforms using modular sensor bricks.
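
    A minimal sketch of the kind of wall-following behaviour described above: a proportional controller holds a set distance from a wall on the robot's left, using two side-facing range readings. All gains, names, and the sensor layout are illustrative; the thesis' controller is not reproduced here.

```python
DESIRED_DIST = 0.5    # desired distance to the wall (m)
K_DIST = 1.2          # gain on the distance error
K_ANGLE = 0.8         # gain on the wall-alignment error
FORWARD_SPEED = 0.3   # constant forward speed (m/s)

def wall_follow_step(side_front, side_rear):
    """Two left-side range readings (m) -> (forward speed, turn rate).

    A positive turn rate means turning left, toward the wall.
    """
    distance = (side_front + side_rear) / 2.0
    angle_error = side_front - side_rear   # > 0: heading away from the wall
    dist_error = distance - DESIRED_DIST   # > 0: too far from the wall
    turn_rate = K_DIST * dist_error + K_ANGLE * angle_error
    return FORWARD_SPEED, turn_rate

# Too far from the wall and angled away: the command steers back toward it.
print(wall_follow_step(0.62, 0.55))
```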

    Active vision in robot cognition

    Doctoral thesis, Informatics Engineering, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2016.
    As technology and our understanding of the human brain evolve, the idea of creating robots that behave and learn like humans attracts more and more attention. However, although knowledge and computational power are constantly growing, we still have much to learn before we can create such machines. Nonetheless, that does not mean we cannot try to validate our knowledge by creating biologically inspired models that mimic some of our brain processes and use them for robotics applications. In this thesis several biologically inspired models for vision are presented: a keypoint descriptor based on cortical cell responses that creates binary codes representing specific image regions; and a stereo vision model based on cortical cell responses and visual saliency based on color, disparity, and motion. Active vision is achieved by combining these vision modules with an attractor dynamics approach for head pan control. Although biologically inspired models are usually very heavy in terms of processing power, these models were designed to be lightweight so that they can be tested for real-time robot navigation, object recognition, and vision steering. The developed vision modules were tested on a child-sized robot, which uses only visual information to navigate, detect obstacles, and recognize objects in real time. The biologically inspired visual system is integrated with a cognitive architecture, which combines vision with short- and long-term memory for simultaneous localization and mapping (SLAM). Motor control for navigation is also done using attractor dynamics.
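
    The thesis' models are not reproduced in the abstract, but attractor-dynamics heading control has a standard minimal form, sketched below: the pan angle relaxes toward the target direction under dphi/dt = -lambda * sin(phi - psi). The rate constants and the target direction are invented for the example.

```python
import math

LAMBDA = 2.0   # attraction strength (1/s); illustrative value
DT = 0.05      # Euler integration step (s)

def pan_step(phi, psi_target):
    """One Euler step of the attractor dynamics for the head pan angle."""
    return phi + DT * (-LAMBDA * math.sin(phi - psi_target))

phi = math.radians(60.0)    # current pan angle
psi = math.radians(-20.0)   # direction of the visual target
for _ in range(100):
    phi = pan_step(phi, psi)
print(round(math.degrees(phi), 2))  # relaxes toward -20 degrees
```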