    Comparison of Selection Methods in On-line Distributed Evolutionary Robotics

    In this paper, we study the impact of selection methods in the context of on-line on-board distributed evolutionary algorithms. We propose a variant of the mEDEA algorithm in which we add a selection operator, and we apply it in a task-driven scenario. We evaluate four selection methods that induce different intensities of selection pressure on a multi-robot navigation-with-obstacle-avoidance task and a collective foraging task. Experiments show that a small intensity of selection pressure is sufficient to rapidly obtain good performance on the tasks at hand. We introduce several measures to compare the selection methods, and show that the higher the selection pressure, the better the performance obtained, especially on the more challenging food foraging task.
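    As a rough illustration of how a selection operator's intensity can be tuned, the Python sketch below uses k-way tournament selection, where the tournament size k sets the pressure (k = 1 is uniform random choice, larger k favors fitter genomes more strongly). This is a generic sketch under assumed genome and fitness representations, not the specific operators evaluated in the paper.

        import random

        def tournament_select(population, fitness, k):
            # k-way tournament: k = 1 degenerates to uniform random
            # choice (no selection pressure); larger k increasingly
            # favors genomes with higher fitness.
            contenders = random.sample(population, k)
            return max(contenders, key=fitness)

        # Hypothetical usage: genomes are weight vectors, scored by a
        # stand-in random fitness instead of a real task evaluation.
        population = [tuple(random.gauss(0, 1) for _ in range(8))
                      for _ in range(20)]
        scores = {g: random.random() for g in population}
        parent = tournament_select(population, scores.get, k=3)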

    Characterizing Input Methods for Human-to-robot Demonstrations

    Human demonstrations are important in a range of robotics applications, and are created with a variety of input methods. However, the design space for these input methods has not been extensively studied. In this paper, focusing on demonstrations of hand-scale object manipulation tasks to robot arms with two-finger grippers, we identify distinct usage paradigms in robotics that utilize human-to-robot demonstrations, extract abstract features that form a design space for input methods, and characterize existing input methods as well as a novel input method that we introduce, the instrumented tongs. We detail the design specifications for our method and present a user study that compares it against three common input methods: free-hand manipulation, kinesthetic guidance, and teleoperation. Study results show that instrumented tongs provide high-quality demonstrations and a positive experience for the demonstrator while offering good correspondence to the target robot. Comment: 2019 ACM/IEEE International Conference on Human-Robot Interaction (HRI).

    OmniDRL: Robust Pedestrian Detection using Deep Reinforcement Learning on Omnidirectional Cameras

    Pedestrian detection is one of the most explored topics in computer vision and robotics. The use of deep learning methods has allowed the development of new and highly competitive algorithms. Deep reinforcement learning has proved to be among the state of the art in terms of both detection in perspective cameras and robotics applications. However, for detection in omnidirectional cameras, the literature is still scarce, mostly because of their high levels of distortion. This paper presents a novel and efficient technique for robust pedestrian detection in omnidirectional images. The proposed method uses deep reinforcement learning and takes advantage of the distortion in the image. By considering the 3D bounding boxes and their distorted projections into the image, our method is able to provide the pedestrian's position in the world, in contrast to the image positions provided by most state-of-the-art methods for perspective cameras. Our method avoids the need for pre-processing steps to remove the distortion, which are computationally expensive. Beyond the novel solution, our method compares favorably with state-of-the-art methodologies that do not consider the underlying distortion in the detection task. Comment: Accepted in the 2019 IEEE International Conference on Robotics and Automation (ICRA).
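    For intuition about the distortion involved, the sketch below projects 3D points into an equirectangular panorama, one common omnidirectional image model; the paper's actual camera model and bounding-box handling are assumptions not taken from the abstract.

        import numpy as np

        def project_equirectangular(point_3d, width, height):
            # Map a 3D point in the camera frame onto an equirectangular
            # panorama: the viewing ray's longitude/latitude become pixel
            # coordinates. Straight 3D edges curve in the image, which is
            # the distortion omnidirectional detectors must cope with.
            x, y, z = point_3d
            lon = np.arctan2(x, z)                         # [-pi, pi]
            lat = np.arcsin(y / np.linalg.norm(point_3d))  # [-pi/2, pi/2]
            u = (lon / (2 * np.pi) + 0.5) * width
            v = (lat / np.pi + 0.5) * height
            return u, v

        # Project the 8 corners of a hypothetical 3D pedestrian box; its
        # distorted 2D footprint is the hull of the projected corners.
        corners = np.array([[sx * 0.3, sy * 0.9, 2.0 + sz * 0.3]
                            for sx in (-1, 1) for sy in (-1, 1)
                            for sz in (-1, 1)])
        pixels = [project_equirectangular(c, 1920, 960) for c in corners]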

    Space Station robotics planning tools

    The concepts are described for the set of advanced Space Station Freedom (SSF) robotics planning tools for use in the Space Station Control Center (SSCC). It is also shown how planning for SSF robotics operations is an international process, and baseline concepts are indicated for that process. Current Shuttle Remote Manipulator System (SRMS) methods provide the backdrop for this SSF theater of multiple robots, long operating time-space, advanced tools, and international cooperation.

    How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change

    Direct visual localization has recently enjoyed a resurgence in popularity with the increasing availability of cheap mobile computing power. The competitive accuracy and robustness of these algorithms compared to state-of-the-art feature-based methods, as well as their natural ability to yield dense maps, make them an appealing choice for a variety of mobile robotics applications. However, direct methods remain brittle in the face of appearance change due to their underlying assumption of photometric consistency, which is commonly violated in practice. In this paper, we propose to mitigate this problem by training deep convolutional encoder-decoder models to transform images of a scene such that they correspond to a previously seen canonical appearance. We validate our method in multiple environments and illumination conditions using high-fidelity synthetic RGB-D datasets, and integrate the trained models into a direct visual localization pipeline, yielding improvements in visual odometry (VO) accuracy through time-varying illumination conditions, as well as improved metric relocalization performance under illumination change, where conventional methods normally fail. We further provide a preliminary investigation of transfer learning from synthetic to real environments in a localization context. An open-source implementation of our method using PyTorch is available at https://github.com/utiasSTARS/cat-net. Comment: In IEEE Robotics and Automation Letters (RA-L) and presented at the IEEE International Conference on Robotics and Automation (ICRA'18), Brisbane, Australia, May 21-25, 2018.
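    A minimal sketch of the image-to-canonical-appearance idea follows, assuming a toy convolutional encoder-decoder trained with a per-pixel L1 loss; the actual CAT model in the linked repository is more elaborate, and the layer sizes and training pairs here are illustrative only.

        import torch
        import torch.nn as nn

        class TinyCAT(nn.Module):
            # Toy encoder-decoder: compress the input image, then
            # reconstruct it under the canonical appearance. A stand-in
            # for the paper's model, not its actual architecture.
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
                    nn.ReLU(),
                    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
                    nn.Sigmoid(),
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        model = TinyCAT()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        # One hypothetical training step on (input, canonical) pairs:
        # renders of the same scene under different illumination.
        inp, canonical = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
        opt.zero_grad()
        loss = nn.functional.l1_loss(model(inp), canonical)
        loss.backward()
        opt.step()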

    Distributed intelligent robotics : research & development in fault-tolerant control and size/position identification : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Computer Systems Engineering at Massey University

    This thesis presents research conducted on aspects of intelligent robotic systems. In the past two decades, robotics has become one of the most rapidly expanding and developing fields of science. Robotics can be considered the science of applying artificial intelligence in the physical world. Many areas of study exist in robotics. Among these, two fields of paramount importance in real-world applications are fault tolerance and sensory systems. Fault tolerance is necessary because a robot in the real world may encounter internal faults and may have to continue functioning under adverse conditions. Sensory mechanisms are essential because a robot possesses little intelligence if it has no means of acquiring information about its environment. Both of these fields are researched in this thesis. In particular, emphasis is placed on distributed intelligent autonomous systems. Experiments and simulations were conducted to investigate design for fault tolerance. A suitable platform was also chosen for an implementation of a visual system, as an example of a working sensory mechanism.