
    Introduction: The Third International Conference on Epigenetic Robotics

    This paper summarizes the paper and poster contributions to the Third International Workshop on Epigenetic Robotics. The focus of this workshop is the cross-disciplinary interaction of developmental psychology and robotics: the general goal in this area is to create robotic models of the psychological development of various behaviors. The term "epigenetic" is used in much the same sense as the term "developmental"; while we could call our topic "developmental robotics", developmental robotics can be seen as having a broader interdisciplinary emphasis. Our focus in this workshop is on the interaction of developmental psychology and robotics, and we use the phrase "epigenetic robotics" to capture this focus.

    Multiparty motion coordination: from choreographies to robotics programs

    We present a programming model and typing discipline for complex multi-robot coordination programming. Our model encompasses both synchronisation through message passing and continuous-time dynamic motion primitives in physical space. We specify continuous-time motion primitives in an assume-guarantee logic that ensures compatibility of motion primitives as well as collision freedom. We specify global behaviour of programs in a choreographic type system that extends multiparty session types with jointly executed motion primitives, predicated refinements, as well as a separating conjunction that allows reasoning about subsets of interacting robots. We describe a notion of well-formedness for global types that ensures motion and communication can be correctly synchronised and provide algorithms for checking well-formedness, projecting a type, and local type checking. A well-typed program is communication safe, motion compatible, and collision free. Our type system provides a compositional approach to ensuring these properties. We have implemented our model on top of the ROS framework. This allows us to program multi-robot coordination scenarios on top of commercial and custom robotics hardware platforms. We show through case studies that we can model and statically verify quite complex manoeuvres involving multiple manipulators and mobile robots---such examples are beyond the scope of previous approaches.
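    The abstract above describes the approach only at a high level. As a rough, hedged illustration of the idea (not the paper's implementation; the names Msg, Motion, well_formed and project are invented), the sketch below shows a toy global choreography that interleaves message passing with jointly executed, timed motion primitives, a naive well-formedness check in the spirit of motion compatibility, and a projection onto each robot's local program.

```python
# Illustrative sketch only (not the paper's implementation): a toy
# "choreography" mixing message passing with jointly executed, timed
# motion primitives, plus a naive well-formedness check and a
# projection onto each robot's local program.
from dataclasses import dataclass

@dataclass
class Msg:            # point-to-point communication step
    sender: str
    receiver: str
    label: str

@dataclass
class Motion:         # motion primitives executed jointly, one per robot
    primitives: dict  # robot -> (primitive name, duration in seconds)

choreography = [
    Msg("arm", "cart", "ready"),
    Motion({"arm": ("pick", 2.0), "cart": ("hold_still", 2.0)}),
    Msg("cart", "arm", "go"),
    Motion({"arm": ("place", 1.5), "cart": ("move_away", 1.5)}),
]

def well_formed(chor):
    """Toy analogue of motion compatibility: every jointly executed
    motion step must involve all robots for the same duration."""
    robots = {r for step in chor if isinstance(step, Motion) for r in step.primitives}
    for step in chor:
        if isinstance(step, Motion):
            if set(step.primitives) != robots:
                return False
            if len({d for _, d in step.primitives.values()}) != 1:
                return False
    return True

def project(chor, robot):
    """Project the global choreography onto one robot's local program."""
    local = []
    for step in chor:
        if isinstance(step, Msg):
            if step.sender == robot:
                local.append(("send", step.receiver, step.label))
            elif step.receiver == robot:
                local.append(("recv", step.sender, step.label))
        else:
            local.append(("motion",) + step.primitives[robot])
    return local

assert well_formed(choreography)
print(project(choreography, "arm"))
```

    The real system replaces the equal-duration check with assume-guarantee reasoning over continuous-time trajectories and additionally verifies collision freedom, which this toy sketch deliberately omits.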

    Learning to reach and reaching to learn: a unified approach to path planning and reactive control through reinforcement learning

    The next generation of intelligent robots will need to be able to plan reaches: not just ballistic point-to-point reaches, but reaches around things such as the edge of a table, a nearby human, or any other known object in the robot’s workspace. Planning reaches may seem easy to us humans, because we do it so intuitively, but it has proven to be a challenging problem, which continues to limit the versatility of what robots can do today. In this document, I propose a novel intrinsically motivated RL system that draws on both Path/Motion Planning and Reactive Control. Through Reinforcement Learning, it tightly integrates these two previously disparate approaches to robotics. The RL system is evaluated on a task that is as yet unsolved by roboticists in practice: putting the palm of the iCub humanoid robot on arbitrary target objects in its workspace, starting from arbitrary initial configurations. Such motions can be generated by planning, or searching the configuration space, but this typically results in some kind of trajectory, which must then be tracked by a separate controller, and such an approach offers a brittle runtime solution because it is inflexible. Purely reactive systems are robust to many problems that render a planned trajectory infeasible, but lacking the capacity to search, they tend to get stuck behind constraints, and therefore do not replace motion planners. The planner/controller proposed here is novel in that it deliberately plans reaches without the need to track trajectories. Instead, reaches are composed of sequences of reactive motion primitives, implemented by my Modular Behavioral Environment (MoBeE), which provides (fictitious) force control with reactive collision avoidance by way of a realtime kinematic/geometric model of the robot and its workspace. Thus, to the best of my knowledge, mine is the first reach planning approach to simultaneously offer the best of both the Path/Motion Planning and Reactive Control approaches. By controlling the real, physical robot directly, and feeling the influence of the constraints imposed by MoBeE, the proposed system learns a stochastic model of the iCub’s configuration space. Then, the model is exploited as a multiple query path planner to find sensible pre-reach poses from which to initiate reaching actions. Experiments show that the system can autonomously find practical reaches to target objects in the workspace and offers excellent robustness to changes in the workspace configuration as well as noise in the robot’s sensory-motor apparatus.
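    As a hedged illustration of one ingredient described above, a learned stochastic model of configuration space used as a multiple-query planner, the following sketch (not MoBeE or the thesis code; all names are invented) keeps empirical success counts for reactive transitions between configurations and plans the most reliable sequence of primitives via a shortest-path search over -log(success probability).

```python
# Hedged sketch (not the thesis implementation): a stochastic model of
# configuration space as a multiple-query planner. Nodes are named robot
# configurations; an edge's weight is the empirically estimated probability
# that the reactive primitive "move toward the neighbouring configuration"
# succeeds. Planning maximizes success probability, i.e. Dijkstra on -log(p).
import math, heapq
from collections import defaultdict

class ConfigSpaceModel:
    def __init__(self):
        self.tries = defaultdict(int)      # (q_from, q_to) -> attempts
        self.wins = defaultdict(int)       # (q_from, q_to) -> successes

    def record(self, q_from, q_to, succeeded):
        self.tries[(q_from, q_to)] += 1
        self.wins[(q_from, q_to)] += int(succeeded)

    def success_prob(self, q_from, q_to):
        t = self.tries[(q_from, q_to)]
        return (self.wins[(q_from, q_to)] + 1) / (t + 2)   # Laplace smoothing

    def most_reliable_path(self, start, goal):
        """Dijkstra over -log(success probability); naive edge scan."""
        dist, prev = {start: 0.0}, {}
        frontier = [(0.0, start)]
        while frontier:
            d, q = heapq.heappop(frontier)
            if q == goal:
                break
            if d > dist.get(q, math.inf):
                continue
            for (a, b) in self.tries:
                if a != q:
                    continue
                nd = d - math.log(self.success_prob(a, b))
                if nd < dist.get(b, math.inf):
                    dist[b], prev[b] = nd, a
                    heapq.heappush(frontier, (nd, b))
        path, q = [], goal
        while q in prev:
            path.append(q)
            q = prev[q]
        return [start] + path[::-1] if path or start == goal else None

if __name__ == "__main__":
    model = ConfigSpaceModel()
    model.record("home", "pre_reach_A", True)       # hypothetical poses
    model.record("pre_reach_A", "palm_on_cup", True)
    print(model.most_reliable_path("home", "palm_on_cup"))
```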

    Safety Intelligence and Legal Machine Language: Do We Need the Three Laws of Robotics?

    In this chapter we will describe a legal framework for Next Generation Robots (NGRs) that has safety as its central focus. The framework is offered in response to the current lack of clarity regarding robot safety guidelines, despite the development and impending release of tens of thousands of robots into workplaces and homes around the world. We also describe…

    Recovering Heading for Visually-Guided Navigation

    We present a model for recovering the direction of heading of an observer who is moving relative to a scene that may contain self-moving objects. The model builds upon an algorithm proposed by Rieger and Lawton (1985), which is based on earlier work by Longuet-Higgins and Prazdny (1981). The algorithm uses velocity differences computed in regions of high depth variation to estimate the location of the focus of expansion, which indicates the observer's heading direction. We relate the behavior of the proposed model to psychophysical observations regarding the ability of human observers to judge their heading direction, and show how the model can cope with self-moving objects in the environment. We also discuss this model in the broader context of a navigational system that performs tasks requiring rapid sensing and response through the interaction of simple task-specific routines.
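    For readers who want the flavour of the velocity-difference idea, the sketch below (illustrative only, not the authors' code) assumes that flow-difference vectors taken across strong depth variation point along lines through the focus of expansion, and recovers the focus of expansion as the least-squares intersection of those lines.

```python
# Illustrative sketch: estimating the focus of expansion (FOE) from
# velocity differences, in the spirit of Rieger & Lawton (1985).
# Input: image positions and the flow-difference vectors computed there;
# for a translating observer these differences point along lines through
# the FOE, so the FOE is the least-squares intersection of those lines.
import numpy as np

def estimate_foe(points, diffs):
    """points: (N,2) image positions; diffs: (N,2) flow-difference vectors.
    Returns the point minimizing the summed squared perpendicular
    distance to the lines {p_i + t * d_i}."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, diffs):
        u = d / (np.linalg.norm(d) + 1e-12)      # unit direction of the line
        P = np.eye(2) - np.outer(u, u)           # projector onto the line normal
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_foe = np.array([40.0, -25.0])           # hypothetical heading point
    pts = rng.uniform(-100, 100, size=(50, 2))
    dirs = pts - true_foe                        # radial pattern about the FOE
    noisy = dirs + rng.normal(scale=2.0, size=dirs.shape)
    print(estimate_foe(pts, noisy))              # should be close to (40, -25)
```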

    An intelligent, free-flying robot

    The ground based demonstration of the extensive extravehicular activity (EVA) Retriever, a voice-supervised, intelligent, free flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out; (2) searches for and acquires the target; (3) plans and executes a rendezvous while continuously tracking the target; (4) avoids stationary and moving obstacles; (5) reaches for and grapples the target; (6) returns to transfer the object; and (7) returns to base.

    Proceedings of the NASA Conference on Space Telerobotics, volume 2

    These proceedings contain papers presented at the NASA Conference on Space Telerobotics held in Pasadena, January 31 to February 2, 1989. The theme of the Conference was man-machine collaboration in space. The Conference provided a forum for researchers and engineers to exchange ideas on the research and development required for application of telerobotics technology to the space systems planned for the 1990s and beyond. The Conference: (1) provided a view of current NASA telerobotic research and development; (2) stimulated technical exchange on man-machine systems, manipulator control, machine sensing, machine intelligence, concurrent computation, and system architectures; and (3) identified important unsolved problems of current interest which can be dealt with by future research.

    Basic set of behaviours for programming assembly robots

    We know from the well established Church-Turing thesis that any computer programming language needs just a limited set of commands in order to perform any computable process. However, programming in these terms is so inconvenient that a larger set of machine codes has to be introduced, and on top of these, higher programming languages are erected. In Assembly Robotics we could theoretically formulate any assembly task in terms of moves. Nevertheless, it is as tedious and error prone to program assemblies at this low level as it would be to program a computer by using just Turing Machine commands. An interesting survey carried out at the beginning of the nineties showed that the most common assembly operations in manufacturing industry cluster in just seven classes. Since the research conducted in this thesis is developed within the behaviour-based assembly paradigm, which views every assembly task as the external manifestation of the execution of a behavioural module, we wonder whether there exists a limited and ergonomic set of elementary modules with which to program at least 80% of the most common operations. In order to investigate this problem, we set up a project in which, taking into account the statistics of the aforementioned survey, we analyze the experimental behavioural decomposition of three significant assembly tasks (two similar benchmarks, the STRASS assembly, and a family of torches). From these three we establish a basic set of such modules. The three test assemblies with which we ran the experiments cannot possibly exhaust all the manufacturing assembly tasks occurring in industry, nor can the results gathered or the speculations made represent a theoretical proof of the existence of the basic set. They simply show that it is possible to formulate different assembly tasks in terms of a small set of about 10 modules, which may be regarded as an embryo of a basic set of elementary modules. Comparing this set with Kondoleon’s tasks and with Balch’s general-purpose robot routines, we observed that ours was general enough to represent 80% of the most common manufacturing assembly tasks and ergonomic enough to be easily used by human operators or automatic planners. A final discussion shows that it would be possible to base an assembly programming language on this kind of set of basic behavioural modules.
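    To make the notion of programming assemblies from a small behavioural catalogue concrete, here is a hypothetical sketch (the module names are invented and are not the thesis' actual basic set): an assembly task is written as a sequence of calls into a registry of elementary behaviour modules.

```python
# Hypothetical sketch: an assembly task expressed as a sequence drawn
# from a small catalogue of behavioural modules. The module names are
# invented for illustration; the thesis derives its own basic set of
# about ten modules from three experimental assemblies.
from typing import Callable, Dict, List, Tuple

# A behaviour module is a named routine taking task parameters.
BEHAVIOUR_CATALOGUE: Dict[str, Callable[..., None]] = {}

def behaviour(name: str):
    """Register a routine as an elementary behavioural module."""
    def register(fn):
        BEHAVIOUR_CATALOGUE[name] = fn
        return fn
    return register

@behaviour("move_above")
def move_above(part: str) -> None:
    print(f"moving gripper above {part}")

@behaviour("grasp")
def grasp(part: str) -> None:
    print(f"closing gripper on {part}")

@behaviour("insert")
def insert(part: str, hole: str) -> None:
    print(f"peg-in-hole insertion of {part} into {hole}")

@behaviour("release")
def release(part: str) -> None:
    print(f"releasing {part}")

# An assembly program is just a sequence of (module, arguments) pairs,
# here a made-up fragment of a torch-style assembly.
torch_cap_assembly: List[Tuple[str, tuple]] = [
    ("move_above", ("cap",)),
    ("grasp", ("cap",)),
    ("move_above", ("torch_body",)),
    ("insert", ("cap", "torch_body")),
    ("release", ("cap",)),
]

def run(program):
    for name, args in program:
        BEHAVIOUR_CATALOGUE[name](*args)

run(torch_cap_assembly)
```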