121 research outputs found

    What do Collaborations with the Arts Have to Say About Human-Robot Interaction?

    Get PDF
    This is a collection of papers presented at the workshop What Do Collaborations with the Arts Have to Say About HRI?, held at the 2010 Human-Robot Interaction Conference in Osaka, Japan.

    Application-driven visual computing towards Industry 4.0 (2018)

    Get PDF
    245 p. This thesis gathers contributions in three fields: 1. Interactive Virtual Agents (IVAs): autonomous, modular, scalable, ubiquitous, and engaging for the user; these IVAs can interact with users in a natural way. 2. Immersive VR/AR Environments: VR applied to production planning, product design, process simulation, testing, and verification; the Virtual Operator shows how VR and co-bots can work together in a safe environment, while in the Augmented Operator AR presents relevant information to the worker in a non-intrusive way. 3. Interactive Management of 3D Models: online management and visualization of multimedia CAD models through automatic conversion of CAD models to the Web; Web3D technology enables visualization of and interaction with these models on low-power mobile devices. These contributions have also made it possible to analyse the challenges posed by Industry 4.0, and the thesis provides a proof of concept for several of them: human factors, simulation, visualization, and model integration.
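    As a concrete illustration of the third contribution, the hedged sketch below shows one way the "CAD to Web" conversion step could look: an already-tessellated CAD part (e.g. an STL export) is converted to binary glTF (GLB), a format that Web3D viewers such as three.js can display on low-power mobile devices. The file names and the use of the trimesh library are assumptions made for illustration, not the thesis implementation.

```python
# Minimal sketch, not the thesis pipeline: convert an already-tessellated CAD part
# (e.g. an STL export) into binary glTF (GLB) for Web3D viewing on mobile devices.
import trimesh

def cad_mesh_to_web(src_path: str, dst_path: str) -> None:
    """Load a tessellated CAD mesh and re-export it as a web-friendly GLB file."""
    mesh = trimesh.load(src_path, force="mesh")  # reads STL/OBJ/PLY and similar mesh formats
    # A production pipeline would also decimate and compress the mesh here
    # before shipping it to low-power mobile clients.
    mesh.export(dst_path)                        # the ".glb" extension selects binary glTF export

if __name__ == "__main__":
    cad_mesh_to_web("bracket.stl", "bracket.glb")  # placeholder file names
```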

    Affective Computing

    Get PDF
    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section of the book (Chapters 17 to 22) presents applications related to affective computing.

    A Comprehensive Review of Data-Driven Co-Speech Gesture Generation

    Full text link
    Gestures that accompany speech are an essential part of natural and efficient embodied human communication. The automatic generation of such co-speech gestures is a long-standing problem in computer animation and is considered an enabling technology in film, games, virtual social spaces, and for interaction with social robots. The problem is made challenging by the idiosyncratic and non-periodic nature of human co-speech gesture motion, and by the great diversity of communicative functions that gestures encompass. Gesture generation has seen surging interest recently, owing to the emergence of more and larger datasets of human gesture motion, combined with strides in deep-learning-based generative models that benefit from the growing availability of data. This review article summarizes co-speech gesture generation research, with a particular focus on deep generative models. First, we articulate the theory describing human gesticulation and how it complements speech. Next, we briefly discuss rule-based and classical statistical gesture synthesis, before delving into deep learning approaches. We employ the choice of input modalities as an organizing principle, examining systems that generate gestures from audio, text, and non-linguistic input. We also chronicle the evolution of the related training datasets in terms of size, diversity, motion quality, and collection method. Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech, in interaction with other speakers, and in the environment; performing gesture evaluation; and integration of gesture synthesis into applications. We highlight recent approaches to tackling the various key challenges, as well as the limitations of these approaches, and point toward areas of future development.
    Comment: Accepted for EUROGRAPHICS 202
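    As a minimal illustration of the "gestures from audio" setting surveyed in the review, the sketch below shows a deliberately simple deterministic baseline in PyTorch: per-frame speech features (e.g. mel-spectrogram frames) are mapped to per-frame pose vectors such as joint rotations. The feature and pose dimensions are arbitrary placeholders, and the deep generative models the article focuses on (e.g. VAEs, normalising flows, diffusion models) add a stochastic sampling component on top of this kind of backbone.

```python
# Illustrative baseline only: a small recurrent network that maps per-frame audio
# features to per-frame body poses. Dimensions are placeholders, not tied to any dataset.
import torch
import torch.nn as nn

class AudioToGesture(nn.Module):
    def __init__(self, n_audio_feats: int = 80, pose_dim: int = 45, hidden: int = 256):
        super().__init__()
        self.encoder = nn.GRU(n_audio_feats, hidden, num_layers=2, batch_first=True)
        self.decoder = nn.Linear(hidden, pose_dim)         # one pose vector per audio frame

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, frames, n_audio_feats) -> poses: (batch, frames, pose_dim)
        hidden_states, _ = self.encoder(audio_feats)
        return self.decoder(hidden_states)

if __name__ == "__main__":
    model = AudioToGesture()
    speech = torch.randn(1, 200, 80)    # ~200 frames of mel-spectrogram features
    poses = model(speech)               # (1, 200, 45) predicted joint rotations
    print(poses.shape)
```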

    Socially assistive robots : the specific case of the NAO

    Get PDF
    Numerous studies have examined the development of robotics, especially socially assistive robots (SAR), including the NAO robot. This small humanoid robot has great potential in social assistance. The NAO robot’s features and capabilities, such as motricity, functionality, and affective capacities, have been studied in various contexts. The principal aim of this study is to gather all research conducted with this robot in order to see how the NAO can be used and what its potential as a SAR could be. Articles using the NAO in any situation were found by searching the PsycINFO, Computer and Applied Sciences Complete, and ACM Digital Library databases. The main inclusion criterion was that studies had to use the NAO robot. Studies comparing it with other robots or intervention programs were also included. Articles about technical improvements were excluded, since they did not involve concrete utilisation of the NAO. Duplicates and articles with substantial missing information about their samples were also excluded. A total of 51 publications (1,895 participants) were included in the review. Six categories were defined: social interactions, affectivity, intervention, assisted teaching, mild cognitive impairment/dementia, and autism/intellectual disability. The great majority of the findings concerning the NAO robot are positive. Its multimodality makes it a SAR with potential.
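    For readers unfamiliar with the platform, the snippet below is a hypothetical minimal example of commanding the NAO through its NAOqi Python SDK; the IP address is a placeholder and a reachable robot (or simulator) is assumed.

```python
# Hypothetical minimal NAOqi example: move the NAO to a standing posture and make it speak.
# Assumes the (Python 2) NAOqi SDK is installed and a robot or simulator is reachable at ROBOT_IP.
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559            # placeholder address and default NAOqi port

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)  # proxy to the text-to-speech service
posture = ALProxy("ALRobotPosture", ROBOT_IP, PORT)

posture.goToPosture("Stand", 0.5)                # go to a standing posture at half speed
tts.say("Hello, I am NAO.")                      # speak through the robot's loudspeakers
```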

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    Get PDF
    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. However, this remains a significant challenge, given that many WIMP-style tools require at least superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has nevertheless garnered momentum in recent years; robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, because autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six-degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries; (2) task specification with an unordered list of goal predicates; and (3) guidance of task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction. Empirical results from four user studies show that our interface was much preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
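    To make the "unordered list of goal predicates" idea concrete, here is a hypothetical Python sketch for the Blocks World pick-and-place setting; the predicate names and helper function are illustrative assumptions rather than the interface used in the work.

```python
# Hypothetical sketch of task specification as an unordered set of goal predicates
# for a Blocks World pick-and-place task. Predicate names ("on", "at") and object
# labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Predicate:
    name: str
    args: tuple   # semantic 'Object' / 'Location' groundings, referenced by label

# Desired end state: the red block stacked on the blue block, the green block in the bin.
goal = {
    Predicate("on", ("red_block", "blue_block")),
    Predicate("at", ("green_block", "bin")),
}

def unsatisfied(goal_predicates: set, world_state: set) -> set:
    """Return the predicates still to be achieved; ordering is irrelevant by design."""
    return goal_predicates - world_state

current_state = {Predicate("at", ("green_block", "bin"))}
print(unsatisfied(goal, current_state))
# -> {Predicate(name='on', args=('red_block', 'blue_block'))}
```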

    Designing Sound for Social Robots: Advancing Professional Practice through Design Principles

    Full text link
    Sound is one of the core modalities social robots can use to communicate with the humans around them in rich, engaging, and effective ways. While a robot's auditory communication happens predominantly through speech, a growing body of work demonstrates the various ways non-verbal robot sound can affect humans, and researchers have begun to formulate design recommendations that encourage using the medium to its full potential. However, formal strategies for successful robot sound design have so far not emerged: current frameworks and principles are largely untested, and no effort has been made to survey creative robot sound design practice. In this dissertation, I combine creative practice, expert interviews, and human-robot interaction studies to advance our understanding of how designers can best ideate, create, and implement robot sound. In a first step, I map out a design space that combines established sound design frameworks with insights from interviews with robot sound design experts. I then systematically traverse this space across three robot sound design explorations, investigating (i) the effect of artificial movement sound on how robots are perceived, (ii) the benefits of applying compositional theory to robot sound design, and (iii) the role and potential of spatially distributed robot sound. Finally, I implement the designs from the prior chapters in the humanoid robot Diamandini and deploy it as a case study. Based on a synthesis of the data collection and design practice conducted across the thesis, I argue that the creation of robot sound is best guided by four design perspectives: fiction (sound as a means to convey a narrative), composition (sound as its own separate listening experience), plasticity (sound as something that can vary and adapt over time), and space (spatial distribution of sound as a separate communication channel). The conclusion of the thesis presents these four perspectives and proposes eleven design principles across them, supported by detailed examples. This work contributes an extensive body of design principles, process models, and techniques, providing researchers and designers with new tools to enrich the way robots communicate with humans.

    Machine Performers: Agents in a Multiple Ontological State

    Get PDF
    In this thesis, the author explores and develops new attributes for machine performers and merges the trans-disciplinary fields of the performing arts and artificial intelligence. The main aim is to redefine the term “embodiment” for robots on the stage and to demonstrate that this term requires broadening in various fields of research. This redefinition has required a multifaceted theoretical analysis of embodiment in the field of artificial intelligence (e.g. the uncanny valley), as well as the construction of new robots for the stage by the author. It is hoped that these practical experimental examples will generate more research by others in similar fields. Even though the historical lineage of robotics is engraved with theatrical strategies and dramaturgy, further application of constructive principles from the performing arts and evidence from psychology and neurology can shift the perception of robotic agents both on stage and in other cultural environments. In this light, the relation between representation, movement and behaviour of bodies has been further explored to establish links between constructed bodies (as in artificial intelligence) and perceived bodies (as performers on the theatrical stage). In the course of this research, several practical works have been designed and built, and subsequently presented to live audiences and research communities. Audience reactions have been analysed with surveys and discussions. Interviews have also been conducted with choreographers, curators and scientists about the value of machine performers. The main conclusions from this study are that fakery and mystification can be used as persuasive elements to enhance agency. Morphologies can also be applied that tightly couple brain and sensorimotor actions and lead to a stronger stage presence. In fact, when such presence is absent from human replicants, the result is an “uncanny” lack of agency. Furthermore, the addition of stage presence leads to stronger identification from audiences, even for bodies dissimilar to their own. The author demonstrates that audience reactions are enhanced by building these effects into machine body structures: rather than producing identification through mimicry, this gives the machines more unambiguously biological associations. Alongside these traits, atmospheres such as those created by a cast of machine performers tend to cause even more intensely visceral responses. In this thesis, “embodiment” has emerged both as a paradigm shift and as a concept within that shift, and morphological computing has been explored as a method to deepen this visceral immersion. Therefore, this dissertation considers and builds machine performers as “true” performers for the stage, rather than mere objects with an aura. Their singular and customized embodiment can enable the development of non-anthropocentric performances that encompass abstract and conceptual patterns in motion and generate, as human performers do, empathy, identification and experiential reactions in live audiences.

    Abstraction of representation in live theater

    Get PDF
    Thesis (S.M.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2009. Author: Peter Alexander Torpey. Includes bibliographical references (p. 151-158).
    Early in Tod Machover's opera Death and the Powers, the main character, Simon Powers, is subsumed into a technological environment of his own creation. The theatrical set comes alive in the form of robotic, visual, and sonic elements that allow the actor to extend his range and influence across the stage in unique and dynamic ways. The environment must compellingly assume the behavior and expression of the absent Simon. This thesis presents a new approach called Disembodied Performance that adapts ideas from affective psychology, cognitive science, and the theatrical tradition to create a framework for thinking about the translation of stage presence. An implementation of a system informed by this methodology is demonstrated. In order to distill the essence of this character, we recover performance parameters in real time from physiological sensors, voice, and vision systems. This system allows the offstage actor to express emotion and interact with others onstage. The Disembodied Performance approach takes a new direction in augmented performance by employing a nonrepresentational abstraction of a human presence that fully translates a character into an environment. The technique and theory presented also have broad-reaching applications outside of theater for personal expression, telepresence, and storytelling.
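    As a rough, hypothetical sketch of the pipeline described above (not the thesis implementation), the loop below maps performance parameters recovered from the offstage actor onto abstract, non-representational stage outputs; the parameter names, ranges, and mappings are invented for illustration.

```python
# Hypothetical sketch: emotional parameters recovered from the offstage performer
# (stubbed here with random values in place of physiological, voice, and vision sensing)
# drive non-representational stage elements instead of a literal on-stage avatar.
import random
import time

def read_performance_parameters() -> dict:
    """Stand-in for real-time sensor fusion of physiology, voice, and vision features."""
    return {"arousal": random.random(), "valence": random.random()}

def map_to_stage(params: dict) -> dict:
    """Translate emotional parameters into abstract stage outputs."""
    return {
        "light_intensity": 0.2 + 0.8 * params["arousal"],    # brighter for higher arousal
        "sound_level_db": -30.0 + 24.0 * params["valence"],  # louder for more positive valence
    }

if __name__ == "__main__":
    for _ in range(3):                                       # stand-in for a real-time control loop
        print(map_to_stage(read_performance_parameters()))
        time.sleep(0.1)
```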