
    The ITALK project: A developmental robotics approach to the study of individual, social, and linguistic learning

    This is the peer-reviewed version of the following article: Frank Broz et al., “The ITALK Project: A Developmental Robotics Approach to the Study of Individual, Social, and Linguistic Learning”, Topics in Cognitive Science, Vol. 6(3): 534-544, June 2014, which has been published in final form at doi: http://dx.doi.org/10.1111/tops.12099. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving. Copyright © 2014 Cognitive Science Society, Inc.
    This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one's own embodiment and the environment, social learning (learning from others), and learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other's development in a continuous feedback cycle, as their interactions yield increasingly sophisticated competencies in the agent's capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development and show that the mutual scaffolding of social learning, individual learning, and linguistic capabilities creates the context, conditions, and requisites for learning in each domain. Challenges and insights identified as a result of this research program are discussed with regard to possible and actual contributions to cognitive science and language ontogeny. In conclusion, directions for future work are suggested that continue to develop this approach toward an integrated framework for understanding these mutually scaffolding processes as a basis for language development in humans and robots.

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence, and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through cues such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Conceptual Design of COD-E Humanoid Robots

    The conceptualizing process plays an important role in supporting designers’ creativity in form and styling development. It contributes to representing cultural elements before product transformation, an area that has received limited investigation. This research aims to identify the metaphorical form elements that convey brain impairment as factors for selecting and defining the form development of a humanoid robot embodiment. Design Protocol Analysis is used to obtain design linguistic interpretations and to synthesize designs based on perceptual product experience. The findings outline a theory of metaphorical form element selection and identification that could represent products for the brain impaired and support the acceptance of humanoid robots among children with autism. Keywords: Design-Inspire; Humanoid Robot; Children with Autism; Design Protocol Analysis. eISSN: 2398-4287 © 2020. The Authors. Published for AMER ABRA cE-Bs by e-International Publishing House, Ltd., UK. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer review under responsibility of AMER (Association of Malaysian Environment-Behaviour Researchers), ABRA (Association of Behavioural Researchers on Asians) and cE-Bs (Centre for Environment-Behaviour Studies), Faculty of Architecture, Planning & Surveying, Universiti Teknologi MARA, Malaysia. DOI: https://doi.org/10.21834/ebpj.v5iSI3.253

    What should a robot learn from an infant? Mechanisms of action interpretation and observational learning in infancy

    The paper provides a summary of our recent research on preverbal infants (using violation-of-expectation and observational learning paradigms) demonstrating that one-year-olds interpret and draw systematic inferences about others’ goal-directed actions, and can rely on such inferences when imitating others’ actions or emulating their goals. To account for these findings, it is proposed that one-year-olds apply a non-mentalistic action interpretation system, the ’teleological stance’, which represents actions by relating relevant aspects of reality (action, goal-state, and situational constraints) through the principle of rational action, which assumes that actions function to realize goal-states by the most efficient means available in the actor’s situation. The relevance of these research findings and the proposed theoretical model for realizing the goal of epigenetic robotics of building a ’socially relevant’ humanoid robot is discussed.

    A Review of Evaluation Practices of Gesture Generation in Embodied Conversational Agents

    Embodied Conversational Agents (ECAs) take on different forms, including virtual avatars and physical agents such as humanoid robots. ECAs are often designed to produce nonverbal behaviour to complement or enhance their verbal communication. One form of nonverbal behaviour is co-speech gesturing, which involves movements that the agent makes with its arms and hands paired with verbal communication. Co-speech gestures for ECAs can be created using different generation methods, such as rule-based and data-driven processes. However, reports on gesture generation methods use a variety of evaluation measures, which hinders comparison. To address this, we conducted a systematic review of co-speech gesture generation methods for iconic, metaphoric, deictic, or beat gestures, including their evaluation methods. We reviewed 22 studies that had an ECA with a human-like upper body that used co-speech gesturing in a social human-agent interaction, including a user study to evaluate its performance. We found most studies used a within-subject design and relied on a form of subjective evaluation, but lacked a systematic approach. Overall, methodological quality was low to moderate, and few systematic conclusions could be drawn. We argue that the field requires rigorous and uniform tools for the evaluation of co-speech gesture systems. We propose recommendations for future empirical evaluation, including standardised phrases and test scenarios to test generative models, and a research checklist that can be used both to report relevant information for the evaluation of generative models and to evaluate co-speech gesture use.

    The effects of a robot's head movements and their timing on human-robot interaction

    Master's thesis -- Seoul National University Graduate School, Interdisciplinary Program in Cognitive Science, February 2023. Advisor: Sowon Hahn. In recent years, robots with artificial intelligence capabilities have become ubiquitous in our daily lives. As intelligent robots interact closely with humans, the social abilities of robots are increasingly important. In particular, nonverbal communication can enhance efficient social interaction between human users and robots, but robots are limited in the behaviours they can express. In this study, we investigated how minimal head movements of a robot influence human-robot interaction. We designed a new robot with a simply shaped body and a minimal head-movement mechanism. We conducted an experiment to examine participants' perception of the robot's different head movements and their timing. Participants were randomly assigned to one of three movement conditions: head nodding (A), head shaking (B), and head tilting (C). Each movement condition included two timing variables: head movement prior to utterance and head movement simultaneous with utterance. For all head movement conditions, participants' ratings of anthropomorphism, animacy, likeability, and perceived intelligence were higher than in the non-movement (utterance-only) condition. In terms of timing, when the robot performed the head movement prior to the utterance, perceived naturalness was rated higher than for head movement simultaneous with the utterance. The findings demonstrate that head movements of the robot positively affect user perception of the robot, and that head movement prior to utterance can make human-robot conversation more natural. By implementing head movement and movement timing, simply shaped robots can achieve better social interaction with humans.