
    Issues in the Development of Conversation Dialog for Humanoid Nursing Partner Robots in Long-Term Care

    The purpose of this chapter is to explore the issues in developing conversational dialogue for nursing robots, especially in long-term care, and to anticipate the introduction of humanoid nursing partner robots (HNRs) into clinical practice. To meet the performance required of HNRs, anthropomorphic robots must offer high-quality conversational dialogue functions. On the hardware side, an independent range of action and sufficient degrees of freedom reduce the communicative burden placed on humans, thereby unburdening nurses and professional caregivers. Furthermore, it is critical to develop friendlier robots by equipping them with non-verbal emotive expressions that older people can perceive. If these functions are combined, anthropomorphic intelligent robots could serve as instructors, particularly for rehabilitation and recreational activities for older people. In this way, more than ever before, HNRs will play an active role in healthcare and welfare.

    Modelling Multimodal Dialogues for Social Robots Using Communicative Acts

    Social robots need to communicate in a way that feels natural to humans if they are to bond effectively with users and provide an engaging interaction. In line with this natural, effective communication, robots need to perceive and manage multimodal information, both as input and output, and respond accordingly. Consequently, dialogue design is a key factor in creating an engaging multimodal interaction. These dialogues need to be flexible enough to adapt to unforeseen circumstances that arise during the conversation, but they should also be easy to create, so that developing new applications becomes simpler. In this work, we present our approach to dialogue modelling based on basic atomic interaction units called Communicative Acts. They manage basic interactions by considering who has the initiative (the robot or the user) and what that party's intention is. The two possible intentions are to ask for information or to give information, and because we focus on one-to-one interactions, the initiative can only be taken by the robot or the user. Communicative Acts can be parametrised and combined hierarchically to fulfil the needs of the robot's applications, and they come with built-in functionality for low-level communication tasks such as communication error handling, turn-taking, and user disengagement. This system has been integrated in Mini, a social robot created to assist older adults with cognitive impairment. In a use case, we demonstrate the operation of our system as well as its performance in real human–robot interactions. The research leading to these results has received funding from the projects Development of social robots to help seniors with cognitive impairment (ROBSEN), funded by the Ministerio de Economia y Competitividad; RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub, S2018/NMT-4331, funded by "Programas de Actividades I+D en la Comunidad de Madrid" and co-funded by Structural Funds of the EU; and Robots sociales para estimulación física, cognitiva y afectiva de mayores (ROSES), RTI2018-096338-B-I00, funded by the Agencia Estatal de Investigación (AEI), Ministerio de Ciencia, Innovación y Universidades.
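    Read literally, the abstract suggests a small compositional structure: an atomic unit parametrised by initiative (robot or user) and intention (ask or give information), combinable hierarchically and carrying built-in low-level behaviour. The Python sketch below is one possible interpretation of that description, not the authors' implementation; all class and method names (CommunicativeAct, Initiative, Intention, run) are hypothetical.

```python
from enum import Enum
from typing import List, Optional


class Initiative(Enum):
    """Who drives the exchange in a one-to-one interaction."""
    ROBOT = "robot"
    USER = "user"


class Intention(Enum):
    """The two intentions the abstract describes."""
    ASK_INFORMATION = "ask"
    GIVE_INFORMATION = "give"


class CommunicativeAct:
    """An atomic interaction unit, parametrised by initiative and intention."""

    def __init__(self, initiative: Initiative, intention: Intention,
                 content: str,
                 children: Optional[List["CommunicativeAct"]] = None):
        self.initiative = initiative
        self.intention = intention
        self.content = content
        # Acts combine hierarchically to build a full dialogue.
        self.children = children or []

    def run(self) -> None:
        # Placeholder for the built-in low-level tasks the paper mentions:
        # turn-taking, communication-error handling, disengagement detection.
        print(f"[{self.initiative.value}/{self.intention.value}] {self.content}")
        for child in self.children:
            child.run()


# Hypothetical two-step dialogue: the robot asks, then gives information.
greeting = CommunicativeAct(
    Initiative.ROBOT, Intention.ASK_INFORMATION,
    "How are you feeling today?",
    children=[CommunicativeAct(Initiative.ROBOT, Intention.GIVE_INFORMATION,
                               "Let's start today's memory exercise.")],
)
greeting.run()
```

    The appeal of such a design, as the abstract argues, is that only four initiative/intention combinations need robust low-level handling, while arbitrarily complex dialogues are assembled by composition.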

    Spoken Language Interaction with Robots: Recommendations for Future Research

    With robotics rapidly advancing, more effective human–robot interaction is increasingly needed to realize the full potential of robots for society. While spoken language must be part of the solution, our ability to provide spoken language interaction capabilities is still very limited. In this article, based on the report of an interdisciplinary workshop convened by the National Science Foundation, we identify key scientific and engineering advances needed to enable effective spoken language interaction with robots. We make 25 recommendations spanning eight general themes: putting human needs first; better modeling the social and interactive aspects of language; improving robustness; creating new methods for rapid adaptation; better integrating speech and language with other communication modalities; giving speech and language components access to rich representations of the robot's current knowledge and state; making all components operate in real time; and improving research infrastructure and resources. Research and development that prioritizes these topics will, we believe, provide a solid foundation for the creation of speech-capable robots that are easy and effective for humans to work with.

    Modelling User Preference for Embodied Artificial Intelligence and Appearance in Realistic Humanoid Robots

    Realistic humanoid robots (RHRs) with embodied artificial intelligence (EAI) have numerous applications in society, as the human face is the most natural interface for communication and the human body the most effective form for traversing the man-made areas of the planet. Developing RHRs with a high degree of human-likeness thus provides a lifelike vessel through which humans can physically and naturally interact with technology in a manner unmatched by any other form of non-biological human emulation. This study outlines a human–robot interaction (HRI) experiment employing two automated RHRs with contrasting appearances and personalities. The selective sample group comprised 20 individuals, categorised by age and gender for a diverse statistical analysis. Galvanic skin response, facial expression analysis, and AI analytics permitted cross-analysis of biometric and AI data with participant testimonies to substantiate the results. The study concludes that younger test subjects preferred HRI with a younger-looking RHR and the more senior age group with an older-looking RHR. Moreover, the female test group preferred HRI with a younger-looking RHR and male subjects with an older-looking one. This research is useful for modelling the appearance and personality of RHRs with EAI for specific roles such as care for the elderly and social companionship for the young, isolated, and vulnerable.

    Speech-Driven Gesture Generation of Social Robot and Embodied Agents

    With the development of artificial intelligence in the field of human-computer interaction, embodied agents in the form of virtual agents or social robots are rapidly becoming more widespread. In human-to-human interaction, people use nonverbal behaviors to express their attitudes, emotions, and intentions; embodied agents therefore need the same capability to improve the quality and effectiveness of their speech. How to generate interaction gestures for social robots and virtual agents is a crucial and challenging problem. Data-driven approaches, especially those based on machine learning and deep learning, can generate a greater variety of more natural gestures than traditional rule-based approaches. However, unlike highly developed application areas such as natural language processing and computer vision, deep learning applied to gesture generation is still at a relatively early stage, and the evaluation criteria and validity of its practical applications remain to be verified, along with many related issues still to be explored. The aim of this thesis is to use deep learning techniques to solve the problem of speech-driven gesture generation and to implement and evaluate it on a robot. The thesis begins with an introduction to gestures in human-robot interaction and their background and applications. It then presents a literature review covering rule-based as well as data-driven gesture-generation systems. It also presents feature extraction methods, a gesture-generation framework, and a mapping function for 3D gesture re-targeting. Furthermore, the thesis presents a novel neural network system for data-driven gesture generation: the system extracts semantic and acoustic features of speech to automatically generate corresponding gestures that can be used in social robots and virtual agents. Both subjective and objective evaluations of the gesture-generation system are conducted, and compared to state-of-the-art systems it shows improved performance. In addition, the gestures generated by the system can be easily deployed on virtual agents; for implementation on robots, a mapping function converts the generated gestures into gesture sequence data usable in robot space, and this mapping function is also evaluated. Finally, the thesis discusses future work and potential directions for improvement in speech-driven gesture generation.
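    Taking the abstract's pipeline at face value (semantic and acoustic speech features in, per-frame gestures out, then a mapping into robot space), the overall shape could look roughly like the PyTorch sketch below. This is an illustrative interpretation, not the thesis's actual model; every dimension, layer choice, and name (SpeechToGesture, retarget_to_robot) is an assumption, and the retargeting step here is a toy stand-in for a real mapping function.

```python
import torch
import torch.nn as nn


class SpeechToGesture(nn.Module):
    """Illustrative encoder-decoder: acoustic + semantic features -> gesture sequence.

    All dimensions and layer choices are assumptions for the sketch; the
    thesis's actual architecture is not reproduced here.
    """

    def __init__(self, acoustic_dim=64, semantic_dim=300,
                 hidden_dim=256, joint_dim=45):
        super().__init__()
        # Fuse per-frame acoustic features (e.g. MFCCs) with word embeddings.
        self.encoder = nn.GRU(acoustic_dim + semantic_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Decode one pose per frame (e.g. 15 upper-body joints x 3D coordinates).
        self.decoder = nn.Linear(2 * hidden_dim, joint_dim)

    def forward(self, acoustic, semantic):
        # acoustic: (batch, frames, acoustic_dim)
        # semantic: (batch, frames, semantic_dim), frame-aligned with the audio
        fused = torch.cat([acoustic, semantic], dim=-1)
        hidden, _ = self.encoder(fused)
        return self.decoder(hidden)  # (batch, frames, joint_dim)


def retarget_to_robot(poses: torch.Tensor, scale: float = 0.8) -> torch.Tensor:
    """Toy stand-in for the thesis's mapping function: scale and clamp the
    generated motion into a robot's reachable workspace. A real re-targeting
    step would solve inverse kinematics per joint chain."""
    return torch.clamp(poses * scale, min=-1.0, max=1.0)


model = SpeechToGesture()
acoustic = torch.randn(1, 120, 64)   # ~4 s of audio features at 30 fps
semantic = torch.randn(1, 120, 300)  # frame-aligned word embeddings
robot_gestures = retarget_to_robot(model(acoustic, semantic))
print(robot_gestures.shape)  # torch.Size([1, 120, 45])
```

    Separating generation from re-targeting, as the thesis describes, lets the same generated gesture sequence drive both a virtual agent directly and a physical robot via the mapping function.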