323 research outputs found

    A Framework of Hybrid Force/Motion Skills Learning for Robots

    Get PDF
    Human factors and a human-centred design philosophy are highly desired in today's robotics applications such as human-robot interaction (HRI). Several studies have shown that endowing robots with human-like interaction skills can not only make them more likeable but also improve their performance. In particular, skill transfer by imitation learning can increase the usability and acceptability of robots for users without computer programming skills. Besides positional information, the muscle stiffness of the human arm and the contact force with the environment also play important roles in understanding and generating human-like manipulation behaviours for robots, e.g., in physical HRI and tele-operation. To this end, we present a novel robot learning framework based on Dynamic Movement Primitives (DMPs) that takes both the positional and the contact-force profiles into consideration for human-to-robot skill transfer. Unlike conventional methods that involve only motion information, the proposed framework combines two sets of DMPs, built to model the motion trajectory and the force variation of the robot manipulator, respectively. A hybrid force/motion control approach is then taken to ensure accurate tracking and reproduction of the desired positional and force motor skills. Meanwhile, to simplify the control system, a momentum-based force observer is applied to estimate the contact force instead of employing force sensors. To deploy the learned motion-force manipulation skills in a broader variety of tasks, the generalization of these DMP models in actual situations is also considered. Comparative experiments have been conducted with a Baxter robot to verify the effectiveness of the proposed learning framework in real-world scenarios such as cleaning a table.
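
    The abstract does not give the exact DMP formulation or parameters used, so the following is only a minimal sketch of a single one-dimensional discrete DMP, fitted to a demonstrated profile (a position or a contact-force trace) and rolled out again; the function names, basis-function layout and gains (alpha_z, beta_z, alpha_s) are illustrative assumptions rather than the paper's implementation. In the described framework, one such model would be learned per motion dimension and per force dimension, with the two sets then feeding a hybrid force/motion controller.

        import numpy as np

        def learn_dmp_weights(demo, dt, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_s=4.0):
            """Fit the forcing-term weights of a 1-D discrete DMP to one demonstrated profile."""
            T = len(demo)
            tau = T * dt                      # movement duration
            x0, g = demo[0], demo[-1]
            xd = np.gradient(demo, dt)        # demonstrated velocity
            xdd = np.gradient(xd, dt)         # demonstrated acceleration
            s = np.exp(-alpha_s * np.linspace(0.0, 1.0, T))   # canonical phase over the demo
            # Forcing term that would make the transformation system reproduce the demo exactly
            f_target = tau**2 * xdd - alpha_z * (beta_z * (g - demo) - tau * xd)
            # Gaussian basis functions spread along the phase variable
            centers = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))
            widths = n_basis**1.5 / centers
            psi = np.exp(-widths * (s[:, None] - centers[None, :]) ** 2)
            xi = s * (g - x0)                 # phase- and goal-dependent scaling of the forcing term
            w = np.array([np.sum(xi * psi[:, i] * f_target) /
                          (np.sum(xi**2 * psi[:, i]) + 1e-10) for i in range(n_basis)])
            return w, (x0, g, tau, centers, widths)

        def rollout_dmp(w, params, dt, n_steps, alpha_z=25.0, beta_z=6.25, alpha_s=4.0):
            """Integrate the DMP forward to reproduce (or generalize) the learned profile."""
            x0, g, tau, centers, widths = params
            x, xd, s = x0, 0.0, 1.0
            out = []
            for _ in range(n_steps):
                psi = np.exp(-widths * (s - centers) ** 2)
                f = (psi @ w) / (np.sum(psi) + 1e-10) * s * (g - x0)
                xdd = (alpha_z * (beta_z * (g - x) - tau * xd) + f) / tau**2
                xd += xdd * dt
                x += xd * dt
                s += (-alpha_s * s / tau) * dt
                out.append(x)
            return np.array(out)

        # Example: learn a demonstrated contact-force ramp (0 N to 5 N over 2 s) and reproduce it
        demo = 5.0 * (1.0 - np.cos(np.linspace(0.0, np.pi, 200))) / 2.0
        w, params = learn_dmp_weights(demo, dt=0.01)
        reproduced = rollout_dmp(w, params, dt=0.01, n_steps=200)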

    A Framework for Interactive Teaching of Virtual Borders to Mobile Robots

    Full text link
    The increasing number of robots in home environments leads to an emerging coexistence between humans and robots. Robots undertake common tasks and support residents in their everyday life. People appreciate the presence of robots in their environment as long as they retain control over them. One important aspect is the control of a robot's workspace. Therefore, we introduce virtual borders to precisely and flexibly define the workspace of mobile robots. First, we propose a novel framework that allows a person to interactively restrict a mobile robot's workspace. To show the validity of this framework, a concrete implementation based on visual markers is presented. Afterwards, the mobile robot is capable of performing its tasks while respecting the new virtual borders. The approach is accurate, flexible and less time-consuming than explicit robot programming. Hence, even non-experts are able to teach virtual borders to their robots, which is especially interesting in domains like vacuuming or service robots in home environments. Comment: 7 pages, 6 figures.
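
    The framework itself concerns interactively teaching borders with visual markers; purely as a toy illustration of how a taught border might later be respected during navigation, the sketch below keeps only those waypoints that lie inside a 2-D polygonal border, using a standard ray-casting point-in-polygon test. The polygon representation and function names are assumptions for illustration and not taken from the paper.

        from typing import List, Tuple

        Point = Tuple[float, float]

        def inside_border(p: Point, border: List[Point]) -> bool:
            """Ray-casting point-in-polygon test: True if p lies inside the taught border."""
            x, y = p
            inside = False
            n = len(border)
            for i in range(n):
                x1, y1 = border[i]
                x2, y2 = border[(i + 1) % n]
                # Does a horizontal ray starting at p cross the edge (x1, y1)-(x2, y2)?
                if (y1 > y) != (y2 > y):
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < x_cross:
                        inside = not inside
            return inside

        # Example: a rectangular virtual border; waypoints outside it are rejected
        border = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
        waypoints = [(1.0, 1.0), (5.0, 1.0)]
        allowed = [wp for wp in waypoints if inside_border(wp, border)]
        print(allowed)  # [(1.0, 1.0)]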

    Sound source localization through shape reconfiguration in a snake robot

    Get PDF
    This paper describes a snake robot system that uses sound source localization. We show how a sound source can be localized in 3D, and how the classic forward-backward (front-back) ambiguity in sound source localization can be resolved with a minimal number of audio sensors, by exploiting the snake robot's multiple degrees of freedom. We describe the hardware and software architecture of the robot and present the results of several sound-tracking experiments conducted with our snake robot. We also present biologically inspired sound-tracking behavior in different postures of the snake robot, demonstrated as "Digital Snake Charming".
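
    The forward-backward ambiguity arises because a single time difference of arrival (TDOA) between two microphones is consistent with two mirrored bearings; reorienting the microphone baseline, which a snake robot can do by reconfiguring its body, lets the two candidates be disambiguated. The sketch below illustrates that idea in 2-D with invented geometry and function names; it is not the paper's algorithm.

        import numpy as np

        C = 343.0  # speed of sound in air [m/s]

        def candidate_bearings(tdoa, mic_distance, baseline_angle):
            """Two world-frame bearings consistent with one TDOA measurement (front-back ambiguity)."""
            local = np.arccos(np.clip(C * tdoa / mic_distance, -1.0, 1.0))
            return ((baseline_angle + local) % (2 * np.pi),
                    (baseline_angle - local) % (2 * np.pi))

        def disambiguate(meas_a, meas_b, tol=np.deg2rad(5.0)):
            """Return the bearing on which two measurements with different baseline angles agree."""
            for a in candidate_bearings(*meas_a):
                for b in candidate_bearings(*meas_b):
                    diff = np.abs(np.angle(np.exp(1j * (a - b))))  # wrapped angular difference
                    if diff < tol:
                        return float(np.angle(np.exp(1j * a) + np.exp(1j * b)) % (2 * np.pi))
            return None

        # Simulated source at 50 degrees; two body "poses" rotate the two-microphone baseline
        true_bearing, d = np.deg2rad(50.0), 0.3
        poses = (0.0, np.deg2rad(40.0))
        meas = [(d * np.cos(true_bearing - pose) / C, d, pose) for pose in poses]
        print(np.rad2deg(disambiguate(meas[0], meas[1])))  # ~50.0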

    The effect of non-humanoid size cues for the perception of physics plausibility in virtual reality

    Get PDF
    Abstract. This thesis studies the relationship between inhabited scale and the perception of physics in virtual reality. The work builds upon the findings of earlier studies by Pouke on the perception of physics when a user is virtually scaled down. One of these studies involved having users evaluate the movement of soda tabs dropped and thrown by a doll-sized humanoid robot while the user was either normally scaled or scaled down. This thesis aimed to replicate that study, with the alteration of using a cat as a more natural, non-humanoid actor to throw the soda tabs. As in the previous study, it was hypothesized that participants would prefer realistic physics at normal scale and unrealistic physics when virtually scaled down. For this, a photo-realistic virtual environment and a realistically animated cat were created. Participants observed the cat dropping soda tabs from an elevated platform; they experienced the event with both realistic physics (dubbed true physics) and unrealistic physics (dubbed movie physics) and were asked to choose the one they perceived as most expected. This procedure was repeated with participants at normal scale and virtually scaled down. The study recruited 40 participants; the results could not confirm either hypothesis and found no preference for either physics condition. This result differs from Pouke's study, which found a preference for movie physics when participants were virtually scaled down. The thesis discusses the findings and uses supplementary gathered data to offer possible explanations and insights into the obtained result.

    The effect of non-humanoid size cues on the perception of physics plausibility in virtual reality. Abstract. This Master's thesis studies the relationship between the user's scale and the perception of physics in virtual reality. The work is based on Pouke's findings on the perception of physics when a user is virtually shrunk in virtual reality. In one of those studies, users were asked to evaluate the movement of can tabs thrown by a doll-sized humanoid robot while the user was either of normal size or virtually shrunk. The aim of this work was to repeat that study, but to replace the humanoid robot with a cat acting as a more natural agent. As in the earlier study, the hypothesis was that users would prefer realistic physics at normal scale and unrealistic physics when shrunk. To investigate this, a photorealistic virtual environment and a realistically animated cat were created. In the study procedure, participants observed the cat dropping can tabs from an elevated platform. Participants experienced the event with both realistic and unrealistic physics and were asked to choose the one they considered most expected. This procedure was repeated with participants at normal scale and when shrunk. Forty participants were recruited, and the results could neither confirm either hypothesis nor establish a preference for either kind of physics. The result differs from the earlier study, which found a preference for unrealistic physics when participants were shrunk. This thesis discusses this finding and offers possible rationalizations and further observations based on the supplementary results obtained.

    Towards Connecting Control to Perception: High-Performance Whole-Body Collision Avoidance Using Control-Compatible Obstacles

    Full text link
    One of the most important aspects of autonomous systems is safety. This includes ensuring safe human-robot and robot-environment interaction when autonomously performing complex tasks or working in collaborative scenarios. Although several methods have been introduced to tackle this, most are unsuitable for real-time applications and require carefully hand-crafted obstacle descriptions. In this work, we propose a method that combines high-frequency, real-time self- and environment-collision avoidance of a robotic manipulator with low-frequency, multimodal, high-resolution environmental perception accumulated in a digital twin system. Our method is based on geometric primitives, so-called primitive skeletons. These are information-compressed, real-time-compatible digital representations of the robot's body and environment, automatically generated from ultra-realistic virtual replicas of the real world provided by the digital twin. Our approach is a key enabler for closing the loop between environment perception and robot control: it provides the millisecond real-time control stage with a current and accurate world description, empowering it to react to environmental changes. We evaluate our whole-body collision avoidance on a 9-DOF robot system through five experiments, demonstrating the functionality and efficiency of our framework. Comment: Accepted for publication at the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023).
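
    The primitive skeletons are only described at a high level, so the sketch below merely illustrates the underlying idea: robot links and obstacles are reduced to simple primitives (here capsules and spheres), pairwise clearances are computed every control cycle, and a repulsive velocity is generated whenever a safety margin is violated. The shapes, thresholds, gains and function names are assumptions, not the paper's implementation.

        import numpy as np

        def closest_point_on_segment(a, b, p):
            """Closest point to p on the segment a-b (the axis of a capsule-shaped link)."""
            ab = b - a
            t = np.clip(np.dot(p - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
            return a + t * ab

        def capsule_sphere_clearance(a, b, r_link, center, r_obs):
            """Clearance between a capsule (robot link) and a sphere (obstacle primitive)."""
            q = closest_point_on_segment(a, b, center)
            return np.linalg.norm(center - q) - r_link - r_obs, q

        def repulsive_velocity(links, obstacles, d_safe=0.10, gain=0.5):
            """Sum simple repulsive velocities over all link/obstacle pairs closer than d_safe."""
            v = np.zeros(3)
            for a, b, r_link in links:
                for center, r_obs in obstacles:
                    clearance, q = capsule_sphere_clearance(a, b, r_link, center, r_obs)
                    if clearance < d_safe:
                        away = (q - center) / (np.linalg.norm(q - center) + 1e-12)
                        v += gain * (d_safe - clearance) * away  # push the link away from the obstacle
            return v

        # One link from (0, 0, 0) to (0, 0, 0.5) with 5 cm radius, and a nearby obstacle sphere
        links = [(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.5]), 0.05)]
        obstacles = [(np.array([0.12, 0.0, 0.25]), 0.05)]
        print(repulsive_velocity(links, obstacles))  # a small velocity pushing the link along -x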

    Deep robot sketching: an application of deep Q-learning networks for human-like sketching

    Get PDF
    © 2023 The Authors. Published by Elsevier B.V. This research has been financed by ALMA, "Human Centric Algebraic Machine Learning", H2020 RIA under EU grant agreement 952091; ROBOASSET, "Sistemas robóticos inteligentes de diagnóstico y rehabilitación de terapias de miembro superior", PID2020-113508RBI00, financed by AEI/10.13039/501100011033; "RoboCity2030-DIHCM, Madrid Robotics Digital Innovation Hub", S2018/NMT-4331, financed by "Programas de Actividades I+D en la Comunidad de Madrid"; "iREHAB: AI-powered Robotic Personalized Rehabilitation", ISCIIIAES-2022/003041, financed by ISCIII and the EU; and EU structural funds.

    The recent success of Reinforcement Learning algorithms in complex environments has inspired many theoretical approaches to cognitive science. Artistic environments are studied within the cognitive science community as rich, natural, multi-sensory, multi-cultural environments. In this work, we propose introducing Reinforcement Learning to improve the control of artistic robot applications. Deep Q-learning Networks (DQN) are among the most successful algorithms for implementing Reinforcement Learning in robotics: DQN methods generate complex control policies for the execution of complex robot applications in a wide set of environments. Current art-painting robot applications use simple control laws that limit the adaptability of these frameworks to a set of simple environments. In this work, the introduction of DQN within an art-painting robot application is proposed. The goal is to study how the introduction of a complex control policy impacts the performance of a basic art-painting robot application. The main expected contribution of this work is to serve as a first baseline for future work introducing DQN methods into complex art-painting robot frameworks. Experiments consist of real-world executions of human-drawn sketches using the DQN-generated policy and TEO, the humanoid robot. Results are compared in terms of similarity and obtained reward with respect to the reference inputs.
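
    The abstract does not detail the network architecture or the state/action encoding, so the following is only a generic DQN training-step sketch in PyTorch, with an invented state size and a small discrete action set standing in for pen movements on the canvas; it is not the paper's architecture.

        import random
        from collections import deque

        import torch
        import torch.nn as nn

        STATE_DIM, N_ACTIONS, GAMMA = 64, 8, 0.99   # assumed canvas-feature size and pen-move actions

        class QNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(STATE_DIM, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, N_ACTIONS),
                )

            def forward(self, x):
                return self.net(x)

        q, q_target = QNet(), QNet()
        q_target.load_state_dict(q.state_dict())    # target net, re-synced periodically during training
        opt = torch.optim.Adam(q.parameters(), lr=1e-3)
        replay = deque(maxlen=50_000)               # stores (state, action, reward, next_state, done) tensors

        def act(state, epsilon):
            """Epsilon-greedy selection over the discrete pen-movement actions."""
            if random.random() < epsilon:
                return random.randrange(N_ACTIONS)
            with torch.no_grad():
                return q(state.unsqueeze(0)).argmax(dim=1).item()

        def train_step(batch_size=64):
            """One temporal-difference update on a minibatch sampled from the replay buffer."""
            if len(replay) < batch_size:
                return
            s, a, r, s2, done = map(torch.stack, zip(*random.sample(replay, batch_size)))
            q_sa = q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                target = r + GAMMA * (1 - done) * q_target(s2).max(dim=1).values
            loss = nn.functional.smooth_l1_loss(q_sa, target)
            opt.zero_grad()
            loss.backward()
            opt.step()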

    A survey of real-time crowd rendering

    Get PDF
    In this survey we review, classify and compare existing approaches to real-time crowd rendering. We first give an overview of character animation techniques, as they are closely tied to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
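
    As a compact illustration of two of the acceleration schemes discussed (view-frustum culling and distance-based LoD selection), the sketch below culls crowd agents against a set of frustum planes and assigns a discrete LoD to each visible agent. Real crowd renderers implement this on the GPU; the Python code, plane representation and distance thresholds here are arbitrary assumptions for illustration only.

        import numpy as np

        # Each frustum plane is (normal, offset); a point p is inside when dot(normal, p) + offset >= 0
        def sphere_in_frustum(center, radius, planes):
            """Conservative sphere-vs-frustum test used to cull whole characters."""
            return all(np.dot(n, center) + d >= -radius for n, d in planes)

        def select_lod(distance, thresholds=(10.0, 30.0, 80.0)):
            """0 = full mesh, 1 = reduced mesh, 2 = impostor/image-based, 3 = skip."""
            for lod, t in enumerate(thresholds):
                if distance < t:
                    return lod
            return len(thresholds)

        def visible_agents(agents, camera_pos, planes, bounding_radius=1.0):
            """Return (agent_index, lod) for every agent that survives frustum culling."""
            out = []
            for i, pos in enumerate(agents):
                if sphere_in_frustum(pos, bounding_radius, planes):
                    out.append((i, select_lod(np.linalg.norm(pos - camera_pos))))
            return out

        # Toy frustum keeping agents with roughly 1 < z < 50 (two planes only, for brevity)
        planes = [(np.array([0.0, 0.0, 1.0]), -1.0), (np.array([0.0, 0.0, -1.0]), 50.0)]
        agents = [np.array([0.0, 0.0, z]) for z in (5.0, 25.0, 120.0)]
        print(visible_agents(agents, camera_pos=np.zeros(3), planes=planes))
        # [(0, 0), (1, 1)] -- the far agent is culled, the middle one uses a reduced mesh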

    A truly human interface: interacting face-to-face with someone whose words are determined by a computer program

    Get PDF
    We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (“echoborgs”) capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot’s chance of passing but did increase interrogators’ ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg did not sense a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human–computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence.