444 research outputs found

    Conjunctive Visual and Auditory Development via Real-Time Dialogue

    Get PDF
Human developmental learning is capable of dealing with the dynamic visual world, speech-based dialogue, and their complex real-time association. However, an architecture that realizes this for robotic cognitive development has not previously been reported. This paper takes up that challenge. The proposed architecture does not require a strict coupling between visual and auditory stimuli. Two major operations contribute to the "abstraction" process: multiscale temporal priming and high-dimensional numeric abstraction through internal responses with reduced variance. As a basic principle of developmental learning, the programmer does not know the nature of the world events at the time of programming and, thus, a hand-designed task-specific representation is not possible. We successfully tested the architecture on the SAIL robot under an unprecedented, challenging multimodal interaction mode: using real-time speech dialogue as a teaching source for simultaneous and incremental visual learning and language acquisition, while the robot is viewing a dynamic world that contains a rotating object to which the dialogue refers.

    Symbol Emergence in Robotics: A Survey

    Full text link
Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans can form a symbol system and obtain semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding the dynamics of symbol systems is crucially important both for understanding human social interactions and for developing robots that can communicate smoothly with human users in the long term. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system. The emergent symbol system is socially self-organized through both semiotic communications and physical interactions with autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, which enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual information, haptic information, auditory information, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER. Comment: submitted to Advanced Robotics.

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    Get PDF
In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

Sensorimotor Representation Learning for an "Active Self" in Robots: A Model Survey

    Get PDF
Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided a rigid set of rules for operation. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sensing the location of our limbs during movement, being aware of other objects and agents, and controlling our body parts to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper, we first review the developmental processes of the underlying mechanisms of these abilities: the sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotics models of these sensory representations and robotics models of the self, and we compare these models with their human counterparts. Finally, we analyze what is missing from these robotics models and propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents by developing sensory representations through self-exploration. Deutsche Forschungsgemeinschaft http://dx.doi.org/10.13039/501100001659. Projekt DEAL. Peer reviewed.

    ๋กœ๋ด‡์˜ ๊ณ ๊ฐœ๋ฅผ ์›€์ง์ด๋Š” ๋™์ž‘๊ณผ ํƒ€์ด๋ฐ์ด ์ธ๊ฐ„๊ณผ ๋กœ๋ด‡์˜ ์ƒํ˜ธ์ž‘์šฉ์— ๋ฏธ์น˜๋Š” ํšจ๊ณผ

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ์ธ๋ฌธ๋Œ€ํ•™ ํ˜‘๋™๊ณผ์ • ์ธ์ง€๊ณผํ•™์ „๊ณต, 2023. 2. Sowon Hahn.In recent years, robots with artificial intelligence capabilities have become ubiquitous in our daily lives. As intelligent robots are interacting closely with humans, social abilities of robots are increasingly more important. In particular, nonverbal communication can enhance the efficient social interaction between human users and robots, but there are limitations of behavior expression. In this study, we investigated how minimal head movements of the robot influence human-robot interaction. We newly designed a robot which has a simple shaped body and minimal head movement mechanism. We conducted an experiment to examine participants' perception of robots different head movements and timing. Participants were randomly assigned to one of three movement conditions, head nodding (A), head shaking (B) and head tilting (C). Each movement condition included two timing variables, prior head movement of utterance and simultaneous head movement with utterance. For all head movement conditions, participants' perception of anthropomorphism, animacy, likeability and intelligence were higher compared to non-movement (utterance only) condition. In terms of timing, when the robot performed head movement prior to utterance, perceived naturalness was rated higher than simultaneous head movement with utterance. The findings demonstrated that head movements of the robot positively affects user perception of the robot, and head movement prior to utterance can make human-robot conversation more natural. By implementation of head movement and movement timing, simple shaped robots can have better social interaction with humans.์ตœ๊ทผ ์ธ๊ณต์ง€๋Šฅ ๋กœ๋ด‡์€ ์ผ์ƒ์—์„œ ํ”ํ•˜๊ฒŒ ์ ‘ํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒƒ์ด ๋˜์—ˆ๋‹ค. ์ธ๊ฐ„๊ณผ์˜ ๊ต๋ฅ˜๊ฐ€ ๋Š˜์–ด๋‚จ์— ๋”ฐ๋ผ ๋กœ๋ด‡์˜ ์‚ฌํšŒ์  ๋Šฅ๋ ฅ์€ ๋” ์ค‘์š”ํ•ด์ง€๊ณ  ์žˆ๋‹ค. 
์ธ๊ฐ„๊ณผ ๋กœ๋ด‡์˜ ์‚ฌํšŒ์  ์ƒํ˜ธ์ž‘์šฉ์€ ๋น„์–ธ์–ด์  ์ปค๋ฎค๋‹ˆ์ผ€์ด์…˜์„ ํ†ตํ•ด ๊ฐ•ํ™”๋  ์ˆ˜ ์žˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋กœ๋ด‡์€ ๋น„์–ธ์–ด์  ์ œ์Šค์ฒ˜์˜ ํ‘œํ˜„์— ์ œ์•ฝ์„ ๊ฐ–๋Š”๋‹ค. ๋˜ํ•œ ๋กœ๋ด‡์˜ ์‘๋‹ต ์ง€์—ฐ ๋ฌธ์ œ๋Š” ์ธ๊ฐ„์ด ๋ถˆํŽธํ•œ ์นจ๋ฌต์˜ ์ˆœ๊ฐ„์„ ๊ฒฝํ—˜ํ•˜๊ฒŒ ํ•œ๋‹ค. ๋ณธ ์—ฐ๊ตฌ๋ฅผ ํ†ตํ•ด ๋กœ๋ด‡์˜ ๊ณ ๊ฐœ ์›€์ง์ž„์ด ์ธ๊ฐ„๊ณผ ๋กœ๋ด‡์˜ ์ƒํ˜ธ์ž‘์šฉ์— ์–ด๋–ค ์˜ํ–ฅ์„ ๋ฏธ์น˜๋Š”์ง€ ์•Œ์•„๋ณด์•˜๋‹ค. ๋กœ๋ด‡์˜ ๊ณ ๊ฐœ ์›€์ง์ž„์„ ํƒ๊ตฌํ•˜๊ธฐ ์œ„ํ•ด ๋‹จ์ˆœํ•œ ํ˜•์ƒ๊ณผ ๊ณ ๊ฐœ๋ฅผ ์›€์ง์ด๋Š” ๊ตฌ์กฐ๋ฅผ ๊ฐ€์ง„ ๋กœ๋ด‡์„ ์ƒˆ๋กญ๊ฒŒ ๋””์ž์ธํ•˜์˜€๋‹ค. ์ด ๋กœ๋ด‡์„ ํ™œ์šฉํ•˜์—ฌ ๋กœ๋ด‡์˜ ๋จธ๋ฆฌ ์›€์ง์ž„๊ณผ ํƒ€์ด๋ฐ์ด ์ฐธ์—ฌ์ž์—๊ฒŒ ์–ด๋–ป๊ฒŒ ์ง€๊ฐ๋˜๋Š”์ง€ ์‹คํ—˜ํ•˜์˜€๋‹ค. ์ฐธ์—ฌ์ž๋“ค์€ 3๊ฐ€์ง€ ์›€์ง์ž„ ์กฐ๊ฑด์ธ, ๋„๋•์ž„ (A), ์ขŒ์šฐ๋กœ ์ €์Œ (B), ๊ธฐ์šธ์ž„ (C) ์ค‘ ํ•œ ๊ฐ€์ง€ ์กฐ๊ฑด์— ๋ฌด์ž‘์œ„๋กœ ์„ ์ •๋˜์—ˆ๋‹ค. ๊ฐ๊ฐ์˜ ๊ณ ๊ฐœ ์›€์ง์ž„ ์กฐ๊ฑด์€ ๋‘ ๊ฐ€์ง€ ํƒ€์ด๋ฐ(์Œ์„ฑ๋ณด๋‹ค ์•ž์„  ๊ณ ๊ฐœ ์›€์ง์ž„, ์Œ์„ฑ๊ณผ ๋™์‹œ์— ์ผ์–ด๋‚˜๋Š” ๊ณ ๊ฐœ ์›€์ง์ž„)์˜ ๋ณ€์ˆ˜๋ฅผ ๊ฐ–๋Š”๋‹ค. ๋ชจ๋“  ํƒ€์ž…์˜ ๊ณ ๊ฐœ ์›€์ง์ž„์—์„œ ์›€์ง์ž„์ด ์—†๋Š” ์กฐ๊ฑด๊ณผ ๋น„๊ตํ•˜์—ฌ ๋กœ๋ด‡์˜ ์ธ๊ฒฉํ™”, ํ™œ๋™์„ฑ, ํ˜ธ๊ฐ๋„, ๊ฐ์ง€๋œ ์ง€๋Šฅ์ด ํ–ฅ์ƒ๋œ ๊ฒƒ์„ ๊ด€์ฐฐํ•˜์˜€๋‹ค. ํƒ€์ด๋ฐ์€ ๋กœ๋ด‡์˜ ์Œ์„ฑ๋ณด๋‹ค ๊ณ ๊ฐœ ์›€์ง์ž„์ด ์•ž์„ค ๋•Œ ์ž์—ฐ์Šค๋Ÿฌ์›€์ด ๋†’๊ฒŒ ์ง€๊ฐ๋˜๋Š” ๊ฒƒ์œผ๋กœ ๊ด€์ฐฐ๋˜์—ˆ๋‹ค. ๊ฒฐ๊ณผ์ ์œผ๋กœ, ๋กœ๋ด‡์˜ ๊ณ ๊ฐœ ์›€์ง์ž„์€ ์‚ฌ์šฉ์ž์˜ ์ง€๊ฐ์— ๊ธ์ •์ ์ธ ์˜ํ–ฅ์„ ์ฃผ๋ฉฐ, ์•ž์„  ํƒ€์ด๋ฐ์˜ ๊ณ ๊ฐœ ์›€์ง์ž„์ด ์ž์—ฐ์Šค๋Ÿฌ์›€์„ ํ–ฅ์ƒ์‹œํ‚ค๋Š” ๊ฒƒ์„ ํ™•์ธํ•˜์˜€๋‹ค. ๊ณ ๊ฐœ๋ฅผ ์›€์ง์ด๋Š” ๋™์ž‘๊ณผ ํƒ€์ด๋ฐ์„ ํ†ตํ•ด ๋‹จ์ˆœํ•œ ํ˜•์ƒ์˜ ๋กœ๋ด‡๊ณผ ์ธ๊ฐ„์˜ ์ƒํ˜ธ์ž‘์šฉ์ด ํ–ฅ์ƒ๋  ์ˆ˜ ์žˆ์Œ์„ ๋ณธ ์—ฐ๊ตฌ๋ฅผ ํ†ตํ•ด ํ™•์ธํ•˜์˜€๋‹ค.Chapter 1. Introduction 1 1.1. Motivation 1 1.2. Literature Review and Hypotheses 3 1.3. Purpose of Study 11 Chapter 2. Experiment 13 2.1. Methods 13 2.2. Results 22 2.3. Discussion 33 Chapter 3. Conclusion 35 Chapter 4. General Discussion 37 4.1. Theoretical Implications 37 4.2. 
Practical Implications 38 4.3. Limitations and Future work 39 References 41 Appendix 53 Abstract in Korean 55์„

    The Future of Humanoid Robots

    Get PDF
This book provides state-of-the-art scientific and engineering research findings and developments in the field of humanoid robotics and its applications. It is expected that humanoids will change the way we interact with machines and will have the ability to blend perfectly into an environment already designed for humans. The book contains chapters that aim to discover the future abilities of humanoid robots by presenting a variety of integrated research in various scientific and engineering fields, such as locomotion, perception, adaptive behavior, human-robot interaction, neuroscience, and machine learning. The book is designed to be accessible and practical, with an emphasis on useful information for those working in the fields of robotics, cognitive science, artificial intelligence, computational methods, and other fields of science directly or indirectly related to the development and usage of future humanoid robots. The editor of the book has extensive R&D experience, patents, and publications in the area of humanoid robotics, and this experience is reflected in the editing of the book's content.

    Toward Context-Aware, Affective, and Impactful Social Robots

    Get PDF