
    People Interpret Robotic Non-linguistic Utterances Categorically

    We present results of an experiment probing whether adults exhibit categorical perception when affectively rating robot-like sounds (Non-linguistic Utterances). The experimental design followed the traditional methodology from the psychology domain for measuring categorical perception: stimulus continua for robot sounds were presented to subjects, who were asked to complete a discrimination and an identification task. In the former, subjects were asked to rate whether stimulus pairs were affectively different, while in the latter they were asked to rate single stimuli affectively. The experiment confirms that Non-linguistic Utterances can convey affect and that they are drawn towards prototypical emotions, confirming that people show categorical perception at the level of inferred affective meaning when hearing robot-like sounds. We speculate on how these insights can be used to automatically design and generate affect-laden robot-like utterances.
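    As a rough illustration of the analysis this methodology leads to (a minimal Python sketch, not the authors' code), the snippet below fits a logistic identification curve over a stimulus continuum and checks whether discrimination ratings peak near the inferred category boundary, the classic signature of categorical perception. All data values and variable names are hypothetical.

        # Minimal sketch of a categorical-perception analysis (hypothetical data).
        import numpy as np
        from scipy.optimize import curve_fit

        # Identification task: proportion of "happy" (vs "sad") affect judgements
        # for each of 9 stimuli along a synthesized sound continuum.
        steps = np.arange(1, 10)
        p_happy = np.array([0.05, 0.08, 0.12, 0.30, 0.55, 0.80, 0.90, 0.95, 0.97])

        def logistic(x, x0, k):
            """Two-parameter logistic identification curve."""
            return 1.0 / (1.0 + np.exp(-k * (x - x0)))

        (x0, k), _ = curve_fit(logistic, steps, p_happy, p0=[5.0, 1.0])
        print(f"Estimated category boundary at continuum step {x0:.2f}")

        # Discrimination task: mean "affectively different" rating for adjacent
        # stimulus pairs (pair i covers stimuli i and i+1).
        pair_centres = steps[:-1] + 0.5
        discrim = np.array([0.10, 0.12, 0.20, 0.55, 0.60, 0.25, 0.15, 0.10])

        # Categorical perception predicts that pairs straddling the boundary are
        # rated more different than within-category pairs.
        near = np.abs(pair_centres - x0) < 1.0
        print("Mean discrimination near boundary:", discrim[near].mean())
        print("Mean discrimination elsewhere:   ", discrim[~near].mean())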

    Personal Robot Technologies to Support Older People Living Independently

    The world’s population is ageing, and the number of younger people available to care for the older population is decreasing. Digital technologies, particularly robotic technologies, are considered an important part of the solution to this looming problem. This chapter reviews some of the research over the last decade (2013–2023) on the development and evaluation of personal robots to assist older people living independently. The research is divided into three areas: older people’s needs and desires in relation to personal robots and their attitudes towards robots; their reactions to personal robots after a brief experience with them; and the evaluation of older people’s longer-term use of personal robots. Strengths and weaknesses of the research are discussed, as well as areas in need of further research.

    Advancements in AI-driven multilingual comprehension for social robot interactions: An extensive review

    In the digital era, human-robot interaction is rapidly expanding, emphasizing the need for social robots to fluently understand and communicate in multiple languages. It is not merely about decoding words but about establishing connections and building trust. However, many current social robots are limited to popular languages, serving in fields like language teaching, healthcare and companionship. This review examines the AI-driven language abilities of social robots, providing a detailed overview of their applications and the challenges faced, from nuanced linguistic understanding to data quality and cultural adaptability. Lastly, we discuss the future of integrating advanced language models in robots to move beyond basic interactions and towards deeper emotional connections. Through this endeavor, we hope to provide a beacon for researchers, steering them towards a path where linguistic adeptness in robots is seamlessly melded with their capacity for genuine emotional engagement.

    A Study of Non-Linguistic Utterances for Social Human-Robot Interaction

    The world of animation has painted an inspiring image of what the robots of the future could be. Taking the robots R2D2 and C3PO from the Star Wars films as representative examples, these robots are portrayed as being more than just machines; rather, they are presented as intelligent and capable social peers, exhibiting many of the traits that people themselves have. These robots have the ability to interact with people, understand us, and even relate to us in very personal ways through a wide repertoire of social cues. As robotic technologies continue to make their way into society at large, there is a growing trend toward making social robots. The field of Human-Robot Interaction concerns itself with studying, developing and realising these socially capable machines, equipping them with a very rich variety of capabilities that allow them to interact with people in natural and intuitive ways, ranging from the use of natural language, body language and facial gestures, to more unique ways such as expression through colours and abstract sounds. This thesis studies the use of abstract, expressive sounds, like those used iconically by the robot R2D2. These are termed Non-Linguistic Utterances (NLUs) and are a means of communication with a rich history in film and animation. However, very little is understood about how such expressive sounds may be utilised by social robots, and how people respond to them. This work presents a series of experiments aimed at understanding how NLUs can be utilised by a social robot to convey affective meaning to people both young and old, and what factors impact the production and perception of NLUs. Firstly, it is shown that not all robots should use NLUs: the morphology of the robot matters. People perceive NLUs differently across different robots, and not always in a desired manner. Next, it is shown that people readily project affective meaning onto NLUs, though not in a coherent manner. Furthermore, people's affective inferences are not subtle; rather, they are drawn to well-established, basic affect prototypes. Moreover, it is shown that the valence of the situation in which an NLU is made overrides the initial valence of the NLU itself: situational context biases how people perceive utterances made by a robot, and through this, coherence between people in their affective inferences is found to increase. Finally, it is uncovered that NLUs are best not used as a replacement for natural language (as they are by R2D2); rather, people show a preference for them being used alongside natural language, where they can play a supportive role by providing essential social cues.

    Robot's Gendering Trouble: A Scoping Review of Gendering Humanoid Robots and its Effects on HRI

    The discussion around the problematic practice of gendering humanoid robots has risen to the foreground in the last few years. To lay the basis for a thorough understanding of how robot's "gender" has been understood within the Human-Robot Interaction (HRI) community (i.e., how it has been manipulated, in which contexts, and which effects it has yielded on people's perceptions of and interactions with robots), we performed a scoping review of the literature. We identified 553 papers relevant to our review, retrieved from 5 different databases. The final sample of reviewed papers included 35 papers written between 2005 and 2021, which involved a total of 3902 participants. In this article, we thoroughly summarize these papers by reporting information about their objectives and assumptions on gender (i.e., definitions and reasons to manipulate gender), their manipulation of robot's "gender" (i.e., gender cues and manipulation checks), their experimental designs (e.g., demographics of participants, employed robots), and their results (i.e., main and interaction effects). The review reveals that robot's "gender" does not affect crucial constructs for HRI, such as likability and acceptance, but rather bears its strongest effect on stereotyping. We leverage our different epistemological backgrounds in Social Robotics and Gender Studies to provide a comprehensive interdisciplinary perspective on the results of the review and suggest ways to move forward in the field of HRI. (Comment: 29 pages, 1 figure, 3 long tables. The paper has been submitted for publication to the International Journal of Social Robotics and is currently under review.)

    The impact of voice on trust attributions

    Trust and speech are both essential aspects of human interaction. On the one hand, trust is necessary for vocal communication to be meaningful. On the other hand, humans have developed a way to infer someone's trustworthiness from their voice, as well as to signal their own. Yet research on trustworthiness attributions to speakers is scarce and contradictory, and very often uses explicit measures, which do not predict actual trusting behaviour. Measuring behaviour, however, is essential for obtaining an accurate picture of trust. This thesis contains 5 experiments examining the influence of various voice characteristics (accent, prosody, emotional expression and naturalness) on trusting behaviours towards virtual players and robots. The experiments use the "investment game", a method derived from game theory that makes it possible to measure implicit trustworthiness attributions over time, as their main methodology. Results show that standard accents, high pitch, slow articulation rate and smiling voice generally increase trusting behaviours towards a virtual agent, and that a synthetic voice generally elicits higher trustworthiness judgments towards a robot. The findings also suggest that different voice characteristics influence trusting behaviours with different temporal dynamics. Furthermore, the actual behaviour of the various speaking agents was modified to be more or less trustworthy, and results show that people's trusting behaviours develop over time accordingly. People also reinforce their trust towards speakers they deem particularly trustworthy when these speakers are indeed trustworthy, but punish them when they are not. This suggests that people's trusting behaviours might also be influenced by the congruency of their first impressions with the actual experience of the speaker's trustworthiness (a "congruency effect"). This has important implications in the context of Human–Machine Interaction, for example for assessing users' reactions to speaking machines which might not always function properly. Taken together, the results suggest that voice influences trusting behaviour, that first impressions of a speaker's trustworthiness based on vocal cues might not be indicative of future trusting behaviours, and that trust should therefore be measured dynamically.
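    The investment game mentioned above is a standard paradigm from behavioural game theory. The Python sketch below is a minimal, hypothetical implementation of repeated rounds (the endowment, multiplier and the agent's return policy are illustrative assumptions, not parameters from the thesis); the amount invested per round serves as the implicit, behavioural measure of trust in the speaking agent.

        # Minimal sketch of a repeated investment (trust) game; all parameters
        # are illustrative assumptions rather than the thesis's actual settings.
        ENDOWMENT = 10   # credits given to the investor each round
        MULTIPLIER = 3   # invested credits are multiplied before reaching the trustee

        def play_round(invested: int, return_fraction: float) -> tuple[int, int]:
            """Return (investor payoff, trustee payoff) for a single round."""
            transferred = invested * MULTIPLIER
            returned = round(transferred * return_fraction)
            return ENDOWMENT - invested + returned, transferred - returned

        def simulate(rounds: int = 10, trustworthy: bool = True) -> list[int]:
            """Track how much is invested per round as a behavioural trust measure."""
            investments, invested = [], ENDOWMENT // 2   # neutral first impression
            for _ in range(rounds):
                return_fraction = 0.5 if trustworthy else 0.1
                payoff, _ = play_round(invested, return_fraction)
                investments.append(invested)
                # Naive adaptation: invest more after a profitable round, less otherwise.
                invested = min(ENDOWMENT, invested + 1) if payoff > ENDOWMENT else max(0, invested - 1)
            return investments

        print(simulate(trustworthy=True))    # investments should ramp up
        print(simulate(trustworthy=False))   # investments should decay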

    Speech dereverberation and speaker separation using microphone arrays in realistic environments

    This thesis concentrates on comparing novel and existing dereverberation and speaker separation techniques using multiple corpora, including a new corpus collected using a microphone array. Many corpora currently used for these techniques are recorded using head-mounted microphones in anechoic chambers; the novel corpus instead contains recordings with noise and reverberation made in office and workshop environments. The novel algorithms approximate the reverberation in a different way, producing results that are competitive with existing algorithms. Dereverberation is evaluated using seven correlation-based algorithms, applied to two different corpora. Three of these are novel algorithms (Hs NTF, Cauchy WPE and Cauchy MIMO WPE). Both non-learning and learning algorithms are tested, with the learning algorithms performing better. For single- and multi-channel speaker separation, unsupervised non-negative matrix factorization (NMF) algorithms are compared using three cost functions combined with sparsity, convolution and direction of arrival. The results show that the choice of cost function is important for improving the separation result. Furthermore, six different supervised deep learning algorithms are applied to single-channel speaker separation, where including historic information improves the results. When NMF is compared to deep learning, NMF converges to a solution faster and provides better results for the corpora used in this thesis.
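    As a rough, self-contained illustration of the unsupervised NMF separation idea summarised above (a Python sketch under simplifying assumptions, not the thesis code), the snippet below factorises a magnitude spectrogram with multiplicative updates under the generalised Kullback-Leibler cost and reconstructs two sources with Wiener-style masks. The spectrogram, the number of basis vectors and the speaker-to-component assignment are all hypothetical.

        # Minimal sketch of single-channel speaker separation with KL-NMF.
        import numpy as np

        rng = np.random.default_rng(0)
        V = np.abs(rng.standard_normal((257, 200)))  # placeholder |STFT| of a two-speaker mixture
        K = 20                                       # basis vectors per speaker (assumption)
        eps = 1e-10

        W = np.abs(rng.standard_normal((V.shape[0], 2 * K))) + eps
        H = np.abs(rng.standard_normal((2 * K, V.shape[1]))) + eps

        # Multiplicative updates minimising the generalised KL divergence D(V || WH).
        for _ in range(200):
            WH = W @ H + eps
            H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
            WH = W @ H + eps
            W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + eps)

        # Assume the first K components model speaker 1 and the rest speaker 2
        # (in practice the bases would be learned from, or clustered by, speaker).
        V1 = W[:, :K] @ H[:K, :]
        V2 = W[:, K:] @ H[K:, :]

        # Wiener-style soft masks recover each speaker's magnitude spectrogram.
        mask1 = V1 / (V1 + V2 + eps)
        S1_hat = mask1 * V
        S2_hat = (1.0 - mask1) * V
        print(S1_hat.shape, S2_hat.shape)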