
    Development of the huggable social robot Probo: on the conceptual design and software architecture

    This dissertation presents the development of a huggable social robot named Probo. Probo embodies a stuffed imaginary animal, providing a soft touch and a huggable appearance. Probo's purpose is to serve as a multidisciplinary research platform for human-robot interaction focused on children. As a social robot, Probo is classified as a social interface supporting non-verbal communication. Probo's social skills are thereby limited to a reactive level. To close the gap with higher levels of interaction, an innovative system for shared control with a human operator is introduced. The software architecture defines a modular structure to incorporate all systems into a single control center. This control center is accompanied by a 3D virtual model of Probo, simulating all motions of the robot and providing visual feedback to the operator. Additionally, the model allows us to advance user-testing and evaluation of newly designed systems. The robot reacts to basic input stimuli that it perceives during interaction. These input stimuli, which can be referred to as low-level perceptions, are derived from vision analysis, audio analysis, touch analysis and object identification. The stimuli influence the attention and homeostatic system, used to define the robot's point of attention, current emotional state and corresponding facial expression. The recognition of these facial expressions has been evaluated in various user studies. To evaluate the collaboration of the software components, a social interactive game for children, Probogotchi, has been developed. To facilitate interaction with children, Probo has an identity and corresponding history. Safety is ensured through Probo's soft embodiment and intrinsically safe actuation systems. To convey the illusion of life in a robotic creature, tools for the creation and management of motion sequences are put into the hands of the operator. All motions generated from operator-triggered systems are combined with the motions originating from the autonomous reactive systems. The resulting motion is subsequently smoothed and transmitted to the actuation systems. With future applications in mind, Probo is an ideal platform for creating a friendly companion for hospitalised children.
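    The abstract describes combining operator-triggered motions with autonomous reactive motions and smoothing the result before actuation. The sketch below illustrates one plausible way to do this; the weighted blend, the exponential-moving-average smoother, and all names (blend_motions, MotionSmoother, alpha) are illustrative assumptions, not Probo's actual implementation.

```python
# Hypothetical sketch: blend operator-triggered and autonomous reactive
# motions, then smooth the combined command before sending it to actuators.

def blend_motions(operator_pose, reactive_pose, operator_weight=0.7):
    """Weighted combination of two joint-space poses (dicts of joint -> angle)."""
    return {
        joint: operator_weight * operator_pose[joint]
               + (1.0 - operator_weight) * reactive_pose[joint]
        for joint in operator_pose
    }

class MotionSmoother:
    """Per-joint exponential moving average to avoid abrupt actuator commands."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha      # smoothing factor, 0 < alpha <= 1
        self.state = {}         # last smoothed angle per joint

    def smooth(self, target_pose):
        for joint, angle in target_pose.items():
            prev = self.state.get(joint, angle)
            self.state[joint] = prev + self.alpha * (angle - prev)
        return dict(self.state)

# Example: one control tick with hypothetical eyebrow joints.
smoother = MotionSmoother(alpha=0.2)
operator = {"eyebrow_left": 10.0, "eyebrow_right": 12.0}
reactive = {"eyebrow_left": 4.0, "eyebrow_right": 6.0}
command = smoother.smooth(blend_motions(operator, reactive))
print(command)
```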

    A Review of Platforms for the Development of Agent Systems

    Agent-based computing is an active field of research with the goal of building autonomous software or hardware entities. This task is often facilitated by the use of dedicated, specialized frameworks. For almost thirty years, many such agent platforms have been developed. Meanwhile, some of them have been abandoned, others continue their development, and new platforms are released. This paper presents an up-to-date review of the existing agent platforms and also a historical perspective of this domain. It aims to serve as a reference point for people interested in developing agent systems. This work details the main characteristics of the included agent platforms, together with links to specific projects where they have been used. It distinguishes between the active platforms and those no longer under development or with unclear status. It also classifies the agent platforms as general-purpose ones, free or commercial, and specialized ones, which can be used for particular types of applications. Comment: 40 pages, 2 figures, 9 tables, 83 references

    On the Implementation of Behavior Trees in Robotics

    There is a growing interest in Behavior Trees (BTs) as a tool to describe and implement robot behaviors. BTs were devised in the video game industry, and their adoption in robotics resulted in the development of ad-hoc libraries to design and execute BTs that fit complex robotics software architectures. While there is broad consensus on how BTs work, some characteristics rely on the implementation choices made in the specific software library used. In this letter, we outline practical aspects of the adoption of BTs and the solutions devised by the robotics community to fully exploit the advantages of BTs in real robots. We also review the solutions proposed in open-source libraries used in robotics, show how BTs fit into robotic software architectures, and present a use-case example.
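    To make the BT execution model concrete, here is a minimal sketch of the core idea (Sequence and Fallback composites ticking leaf actions that return SUCCESS, FAILURE, or RUNNING). It is not tied to any specific library such as BehaviorTree.CPP or py_trees, and the example task at the end is purely illustrative.

```python
# Minimal Behavior Tree sketch: composites tick their children in order and
# propagate a three-valued status.

from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Action:
    """Leaf node wrapping a callable that returns a Status."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()

class Sequence:
    """Succeeds only if every child succeeds; stops at the first non-success."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Fallback:
    """Fails only if every child fails; stops at the first non-failure."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

# Example: "grasp the object; if that fails, ask for help" (hypothetical task).
tree = Fallback([
    Sequence([
        Action("object_visible", lambda: Status.SUCCESS),
        Action("grasp_object",   lambda: Status.RUNNING),
    ]),
    Action("ask_for_help", lambda: Status.SUCCESS),
])
print(tree.tick())  # Status.RUNNING while the grasp is still in progress
```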

    Robot learning from demonstration of force-based manipulation tasks

    One of the main challenges in robotics is to develop robots that can interact with humans in a natural way, sharing the same dynamic and unstructured environments. Such an interaction may be aimed at assisting, helping or collaborating with a human user. To achieve this, the robot must be endowed with a cognitive system that allows it not only to learn new skills from its human partner, but also to refine or improve those already learned. In this context, learning from demonstration appears as a natural and user-friendly way to transfer knowledge from humans to robots. This dissertation addresses such a topic and its application to an unexplored field, namely force-based manipulation task learning. In this kind of scenario, force signals can convey data about the stiffness of a given object, the inertial components acting on a tool, a desired force profile to be reached, etc. Therefore, if the user wants the robot to learn a manipulation skill successfully, it is essential that its cognitive system is able to deal with force perceptions. The first issue this thesis tackles is to extract the input information that is relevant for learning the task at hand, which is also known as the "what to imitate?" problem. Here, the proposed solution takes into consideration that the robot actions are a function of sensory signals; in other words, the importance of each perception is assessed through its correlation with the robot movements. A mutual information analysis is used for selecting the most relevant inputs according to their influence on the output space. In this way, the robot can gather all the information coming from its sensory system, and the perception selection module proposed here automatically chooses the data the robot needs to learn a given task. Having selected the relevant input information for the task, it is necessary to represent the human demonstrations in a compact way, encoding the relevant characteristics of the data, for instance, sequential information, uncertainty, constraints, etc. This issue is the next problem addressed in this thesis. Here, a probabilistic learning framework based on hidden Markov models and Gaussian mixture regression is proposed for learning force-based manipulation skills. The outstanding features of such a framework are: (i) it is able to deal with the noise and uncertainty of force signals because of its probabilistic formulation, (ii) it exploits the sequential information embedded in the model for managing perceptual aliasing and time discrepancies, and (iii) it takes advantage of task variables to encode those force-based skills where the robot actions are modulated by an external parameter. Therefore, the resulting learning structure is able to robustly encode and reproduce different manipulation tasks. This thesis then goes a step further by proposing a novel framework for learning impedance-based behaviors from demonstrations. The key aspects here are that this new structure merges vision and force information for encoding the data compactly, and it allows the robot to exhibit different behaviors by shaping its compliance level over the course of the task. This is achieved by a parametric probabilistic model, whose Gaussian components are the basis of a statistical dynamical system that governs the robot motion. From the force perceptions, the stiffness of the springs composing such a system is estimated, allowing the robot to shape its compliance. This approach makes it possible to extend the learning paradigm beyond the common trajectory-following setting. The proposed frameworks are tested in three scenarios, namely, (a) the ball-in-box task, (b) drink pouring, and (c) a collaborative assembly, where the experimental results show the importance of using force perceptions as well as the usefulness and strengths of the proposed methods.
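    The abstract's "what to imitate?" step selects inputs by their mutual information with the robot's output. The sketch below illustrates that idea on synthetic data using scikit-learn's mutual_info_regression estimator; the sensor names, the synthetic relationship, and the selection threshold are assumptions for illustration, not the thesis's actual pipeline.

```python
# Illustrative sketch: select the sensory inputs most relevant to a
# demonstrated output dimension via mutual information.

import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 500

# Synthetic demonstration data: candidate sensory inputs (hypothetical names).
force_z   = rng.normal(size=n)   # relevant: drives the demonstrated action
vision_x  = rng.normal(size=n)   # relevant
audio_rms = rng.normal(size=n)   # irrelevant noise
names = ["force_z", "vision_x", "audio_rms"]
X = np.column_stack([force_z, vision_x, audio_rms])

# One output dimension of the demonstrated robot motion (synthetic).
y = 0.8 * force_z + 0.5 * vision_x + 0.05 * rng.normal(size=n)

mi = mutual_info_regression(X, y, random_state=0)
for name, score in zip(names, mi):
    print(f"{name}: MI = {score:.3f}")

# Keep only the perceptions whose MI with the output exceeds a threshold
# (the threshold value here is an arbitrary illustrative choice).
selected = [name for name, score in zip(names, mi) if score > 0.1]
print("selected inputs:", selected)
```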

    Programming Robots for Activities of Everyday Life

    Text-based programming remains a challenge to novice programmers in all programming domains, including robotics. The use of robots is gaining considerable traction in several domains since robots are capable of assisting humans in repetitive and hazardous tasks. In the near future, robots will be used in tasks of everyday life in homes, hotels, airports, museums, etc. However, robotic missions have been either predefined or programmed using low-level APIs, making mission specification task-specific and error-prone. To harness the full potential of robots, it must be possible to define missions for specific application domains as needed. The specification of missions for robotic applications should be performed in easy-to-use, accessible ways and, at the same time, be accurate and unambiguous. Simplicity and flexibility in programming such robots are important, since end-users come from diverse domains, not necessarily with sufficient programming knowledge. The main objective of this licentiate thesis is to empirically understand the state of the art in languages and tools used for specifying robot missions by end-users. The findings will form the basis for interventions in developing future languages for end-user robot programming. During the empirical study, DSLs for robot mission specification were analyzed through published literature, their websites, user manuals, sample missions, and by using the languages to specify missions for supported robots. After extracting data from 30 environments, 133 features were identified. A feature matrix mapping the features to the environments was developed, together with a feature model for robotic mission specification DSLs. Our results show that most end-user facing environments exist in the education domain for teaching novice programmers and STEM subjects. Most of the visual languages are developed using the Blockly and Scratch libraries. The end-user domain abstraction needs more work, since most of the visual environments abstract robotic and programming-language concepts but not end-user concepts. In future work, it is important to focus on the development of reusable libraries for end-user concepts, and to further explore how end-user facing environments can be adapted for novice programmers to learn general programming skills and robot programming in low-resource settings in developing countries, like Uganda.
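    The study's central artifact is a feature matrix mapping mission-specification environments to the features they support. The sketch below shows one simple way such a matrix can be represented and rendered; the environment and feature names are placeholders, not the study's actual data (30 environments, 133 features).

```python
# Hypothetical illustration of a feature matrix: environments mapped to the
# set of mission-specification features they support.

feature_matrix = {
    "EnvA": {"visual_blocks", "simulation", "multi_robot"},
    "EnvB": {"visual_blocks", "debugging"},
    "EnvC": {"textual_dsl", "simulation"},
}

all_features = sorted(set().union(*feature_matrix.values()))

# Print a simple presence/absence table, one row per environment.
print("environment".ljust(12) + "".join(f.ljust(15) for f in all_features))
for env, feats in feature_matrix.items():
    row = "".join(("yes" if f in feats else "no").ljust(15) for f in all_features)
    print(env.ljust(12) + row)
```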