
    Investigating the Effects of Robot Engagement Communication on Learning from Demonstration

    Robot Learning from Demonstration (RLfD) is a technique for robots to derive policies from instructors' examples. Although the reciprocal effects of student engagement on teacher behavior are widely recognized in the educational community, it is unclear whether the same phenomenon holds for RLfD. To fill this gap, we first design three types of robot engagement behavior (attention, imitation, and a hybrid of the two) based on the learning literature. We then conduct, in a simulation environment, a within-subject user study to investigate the impact of different robot engagement cues on humans compared to a "without-engagement" condition. Results suggest that engagement communication significantly changes the human's estimation of the robot's capability and significantly raises their expectations of the learning outcomes, even though we do not run actual learning algorithms in the experiments. Moreover, imitation behavior affects humans more than attention does on all metrics, while their combination has the most profound influence on humans. We also find that communicating engagement via imitation or the combined behavior significantly improves humans' perception of the quality of demonstrations, even if all demonstrations are of the same quality. Comment: Under review
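
    The abstract does not state the study's statistical analysis; purely as a hedged illustration, a common way to compare per-participant ratings across within-subject conditions (e.g. the "without-engagement" versus imitation conditions) is a paired non-parametric test. The participant count and ratings below are invented.

        # Hedged sketch with invented data: paired comparison of per-participant ratings
        # between two within-subject conditions of a user study.
        import numpy as np
        from scipy.stats import wilcoxon

        rng = np.random.default_rng(0)
        n = 24                                          # hypothetical number of participants
        none = rng.integers(2, 5, size=n)               # toy capability ratings, no-engagement condition
        imitation = none + rng.integers(0, 3, size=n)   # toy ratings, imitation condition

        stat, p = wilcoxon(none, imitation)             # paired test across participants
        print(f"Wilcoxon W={stat:.1f}, p={p:.3f}")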

    Cognitive Robots for Social Interactions

    One of my goals is to work towards developing Cognitive Robots, especially with regard to improving the functionalities that facilitate interaction with human beings and the objects surrounding them. Any cognitive system designed to serve human beings must be capable of processing social signals and, ultimately, of efficiently predicting and planning appropriate responses. My main focus during my PhD study is to bridge the gap between the motoric space and the visual space. The discovery of mirror neurons ([RC04]) shows that the visual perception of human motion (visual space) is directly associated with the motor control of the human body (motor space). This discovery poses a large number of challenges in different fields such as computer vision, robotics, and neuroscience. One of the fundamental challenges is understanding the mapping between 2D visual space and 3D motoric control, and further developing building blocks (primitives) of human motion in the visual space as well as in the motor space. First, I present my study on the visual-motoric mapping of human actions, which aims at mapping human actions in 2D videos to a 3D skeletal representation. Second, I present an automatic algorithm to decompose motion capture (MoCap) sequences into synergies, along with the times at which they are executed (or "activated") for each joint. Third, I propose the use of Granger causality as a tool to study coordinated actions performed by at least two units; recent scientific studies suggest that the above "action mirroring circuit" might be tuned to action coordination rather than single-action mirroring. Fourth, I present the extraction of key poses in visual space, which facilitates further study of the "action mirroring circuit". I conclude the dissertation by describing future directions for cognitive robotics research.
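
    As a rough sketch of the third contribution, the snippet below applies a pairwise Granger-causality test to two joint-motion time series using grangercausalitytests from statsmodels; the signals, lag structure, and coupling strength are invented, and this is not the dissertation's actual pipeline.

        # Hedged sketch: does the motion of unit A help predict the motion of unit B?
        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(0)
        joint_a = rng.standard_normal(500)                                    # e.g. a wrist-velocity series of unit A
        joint_b = 0.6 * np.roll(joint_a, 3) + 0.4 * rng.standard_normal(500)  # unit B follows with a short lag

        # The routine expects an (n_samples, 2) array and tests whether column 1
        # helps predict column 0 beyond column 0's own past values.
        data = np.column_stack([joint_b, joint_a])
        results = grangercausalitytests(data, maxlag=5)
        for lag, (tests, _) in results.items():
            print(lag, round(tests["ssr_ftest"][1], 4))                       # F-test p-value at each lag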

    Machine Body Language: Expressing a Smart Speaker’s Activity with Intelligible Physical Motion

    People’s physical movement and body language implicitly convey what they think and feel, what they are doing, or what they are about to do. In contrast, current smart speakers miss out on this richness of body language, relying primarily on voice commands. We present QUBI, a dynamic smart speaker that leverages expressive physical motion – stretching, nodding, turning, shrugging, wiggling, pointing and leaning forwards/backwards – to convey cues about its underlying behaviour and activities. We conducted a qualitative Wizard of Oz lab study in which 12 participants interacted with QUBI in four scripted scenarios. From our study, we distilled six themes: (1) mirroring and mimicking motions; (2) body language to supplement voice instructions; (3) anthropomorphism and personality; (4) audio can trump motion; (5) reaffirming uncertain interpretations to support mutual understanding; and (6) emotional reactions to QUBI’s behaviour. From this, we discuss design implications for future smart speakers.

    Industrial human-robot collaboration: maximizing performance while maintaining safety

    The goal of this thesis is to maximize performance in collaborative applications while maintaining safety. To this end, assembly workplaces are analyzed, typical tasks are identified, and the potential of collaborative robots is elaborated. Current safety regulations are analyzed in order to identify the challenges in safe human-robot collaboration. Several methods are proposed to address inefficiency in collaborative applications, in particular intuitive programming of collaborative robots, efficient control under human-in-the-loop constraints, and a hardware solution, the Robotic Airbag.
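
    As a generic illustration only (not the controller developed in the thesis), one way to trade performance against safety is to scale the robot's commanded speed with the measured human-robot distance, in the spirit of speed-and-separation monitoring; the thresholds below are invented.

        # Minimal sketch: distance-based velocity scaling with made-up thresholds.
        def scale_velocity(v_nominal: float, distance: float,
                           d_stop: float = 0.3, d_full: float = 1.5) -> float:
            """Return the allowed speed given the current human-robot distance in metres."""
            if distance <= d_stop:       # human very close: protective stop
                return 0.0
            if distance >= d_full:       # human far away: full nominal speed
                return v_nominal
            # Linear ramp between the protective-stop and full-speed distances.
            return v_nominal * (distance - d_stop) / (d_full - d_stop)

        if __name__ == "__main__":
            for d in (0.2, 0.6, 1.0, 2.0):
                print(d, round(scale_velocity(1.0, d), 2))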

    Representation recovers information

    Early agreement within cognitive science on the topic of representation has now given way to a combination of positions. Some question the significance of representation in cognition. Others continue to argue in favor, but the case has not been demonstrated in any formal way. The present paper sets out a framework in which the value of representation-use can be measured mathematically, albeit in a broadly sensory context rather than a specifically cognitive one. Key to the approach is the use of Bayesian networks for modeling the distal dimension of sensory processes. More relevant to cognitive science is the theoretical result obtained, which is that a certain type of representational architecture is *necessary* for the achievement of sensory efficiency. While exhibiting few of the characteristics of traditional, symbolic encoding, this architecture corresponds quite closely to the forms of embedded representation now being explored in some embedded/embodied approaches. It becomes meaningful to view that type of representation-use as a form of information recovery. A formal basis then exists for viewing representation not so much as the substrate of reasoning and thought, but rather as a general medium for efficient, interpretive processing.
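
    As a toy illustration only (not the paper's model), the sketch below uses a two-node Bayesian network, distal state -> sensory reading, and recovers information about the distal state from a proximal observation via Bayes' rule; all probabilities are invented.

        # Toy two-node Bayesian network: infer the distal state from a sensory reading.
        prior = {"object_present": 0.2, "object_absent": 0.8}            # P(distal state)
        likelihood = {                                                   # P(reading | state)
            ("bright", "object_present"): 0.7, ("dim", "object_present"): 0.3,
            ("bright", "object_absent"): 0.1,  ("dim", "object_absent"): 0.9,
        }

        def posterior(reading: str) -> dict:
            """P(distal state | sensory reading), computed by enumeration."""
            unnorm = {s: prior[s] * likelihood[(reading, s)] for s in prior}
            z = sum(unnorm.values())
            return {s: p / z for s, p in unnorm.items()}

        print(posterior("bright"))   # the 'recovered' distal information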

    Discovery and recognition of motion primitives in human activities

    We present a novel framework for the automatic discovery and recognition of motion primitives in videos of human activities. Given the 3D pose of a human in a video, human motion primitives are discovered by optimizing the "motion flux", a quantity which captures the motion variation of a group of skeletal joints. A normalization of the primitives is proposed in order to make them invariant with respect to a subject's anatomical variations and the data sampling rate. The discovered primitives are unknown and unlabeled and are collected into classes, without supervision, via a hierarchical non-parametric Bayesian mixture model. Once classes are determined and labeled, they are further analyzed to establish models for recognizing the discovered primitives. Each primitive model is defined by a set of learned parameters. Given new video data and the estimated pose of the subject appearing in the video, the motion is segmented into primitives, which are recognized with a probability given by the parameters of the learned models. Using our framework we build a publicly available dataset of human motion primitives, using sequences taken from well-known motion capture datasets. We expect that our framework, by providing an objective way of discovering and categorizing human motion, will be a useful tool in numerous research fields including video analysis, human-inspired motion generation, learning by demonstration, intuitive human-robot interaction, and human behavior analysis.
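
    The abstract does not give the exact definition of the motion flux; as a hedged approximation, the sketch below aggregates per-joint speeds over a group of skeletal joints and segments a pose sequence where that signal drops below an invented threshold.

        # Hypothetical motion-flux-style signal: summed joint speeds over time.
        import numpy as np

        def motion_flux(joints: np.ndarray, dt: float = 1 / 30) -> np.ndarray:
            """joints: (T, J, 3) array of 3D joint positions; returns a (T-1,) motion signal."""
            velocities = np.diff(joints, axis=0) / dt      # (T-1, J, 3) per-joint velocities
            speeds = np.linalg.norm(velocities, axis=2)    # (T-1, J) per-joint speeds
            return speeds.sum(axis=1)                      # aggregate over the joint group

        def segment(flux: np.ndarray, threshold: float = 0.05) -> np.ndarray:
            """Frames where the flux is low, i.e. candidate primitive boundaries."""
            return np.flatnonzero(flux < threshold)

        if __name__ == "__main__":
            demo = np.cumsum(np.random.default_rng(0).normal(size=(120, 15, 3)), axis=0) * 0.01
            flux = motion_flux(demo)
            print(flux.shape, segment(flux)[:5])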

    Upper limb soft robotic wearable devices: a systematic review

    Introduction: Soft robotic wearable devices, referred to as exosuits, can be a valid alternative to rigid exoskeletons when it comes to daily upper limb support. Indeed, their inherent flexibility improves comfort, usability, and portability while not constraining the user’s natural degrees of freedom. This review is meant to guide the reader in understanding the current approaches across all design and production steps that might be exploited when developing an upper limb robotic exosuit. Methods: The literature search for such devices was conducted in PubMed, Scopus, and Web of Science. The investigated features are the intended scenario, type of actuation, supported degrees of freedom, low-level control, high-level control with a focus on intention detection, technology readiness level, and the type of experiments conducted to evaluate the device. Results: A total of 105 articles were collected, describing 69 different devices. Devices were grouped according to their actuation type. More than 80% of devices are meant either for rehabilitation, assistance, or both. The most exploited actuation types are pneumatic (52%) and DC motors with cable transmission (29%). Most devices actuate 1 (56%) or 2 (28%) degrees of freedom, and the most targeted joints are the elbow and the shoulder. Intention detection strategies are implemented in 33% of the suits and include the use of switches and buttons, IMUs, stretch and bending sensors, and EMG and EEG measurements. Most devices (75%) score a technology readiness level of 4 or 5. Conclusion: Although few devices can be considered ready to reach the market, exosuits show very high potential for the assistance of daily activities. Clinical trials exploiting shared evaluation metrics are needed to assess the effectiveness of upper limb exosuits on target users.

    Behavioural attentiveness patterns analysis – detecting distraction behaviours

    The capacity to remain focused on a task can be crucial in some circumstances. In general, this ability is intrinsic to human social interaction and is used naturally in any social context. Nevertheless, some individuals have difficulty remaining concentrated on an activity, resulting in a short attention span. Children with Autism Spectrum Disorder (ASD) are a particular example of such individuals. ASD is a group of complex developmental disorders of the brain. Individuals affected by this disorder are characterized by repetitive patterns of behaviour, restricted activities or interests, and impairments in social communication. The use of robots has already been shown to encourage the development of the social interaction skills lacking in children with ASD. However, most of these systems are controlled remotely and cannot adapt automatically to the situation, and even those that are more autonomous still cannot perceive whether or not the user is paying attention to the instructions and actions of the robot. Following this trend, this dissertation is part of a research project that has been under development for some years. In this project, the robot ZECA (Zeno Engaging Children with Autism) from Hanson Robotics is used to promote interaction with children with ASD, helping them to recognize emotions and to acquire new knowledge in order to foster social interaction and communication with others. The main purpose of this dissertation is to determine whether the user is distracted during an activity. In the future, the objective is to interface this system with ZECA so that it can adapt its behaviour according to the user's affective state during an emotion-imitation activity. In order to recognize human distraction behaviours and capture the user's attention, several patterns of distraction, as well as systems to detect them automatically, have been developed. One of the most widely used methods for detecting distraction patterns is based on measuring head pose and eye gaze. The present dissertation proposes a system based on a Red-Green-Blue (RGB) camera, capable of detecting distraction patterns (head pose, eye gaze, blink frequency, and the user's position towards the camera) during an activity, and then classifying the user's state using a machine learning algorithm. Finally, the proposed system is evaluated in a controlled laboratory environment in order to verify whether it is capable of detecting the patterns of distraction. The results of these preliminary tests revealed some system constraints and validated the system's adequacy for later use in an intervention setting.
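
    As an illustrative sketch only (not the dissertation's pipeline), per-frame features such as head-pose angles, gaze angles, and blink frequency could feed a standard classifier to label the user's state; the feature layout, data, and labels below are invented.

        # Hedged sketch: classify "attentive" vs. "distracted" from invented per-frame features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(1)
        # Columns: head yaw, head pitch, gaze yaw, gaze pitch (degrees), blinks per minute.
        X = rng.normal(size=(200, 5)) * [20, 15, 15, 10, 5] + [0, 0, 0, 0, 15]
        y = (np.abs(X[:, 0]) > 25).astype(int)          # toy label: large head yaw = distracted

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        frame = [[30.0, -5.0, 12.0, 3.0, 18.0]]         # features of one new frame
        print("distracted" if clf.predict(frame)[0] else "attentive")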